azure-ai-projects-dotnet

Pass

Audited by Gen Agent Trust Hub on Feb 13, 2026

Risk Level: LOW
Findings: EXTERNAL_DOWNLOADS
Full Analysis

The skill's SKILL.md and its accompanying references/acceptance-criteria.md provide documentation and code examples for the Azure.AI.Projects .NET SDK. The skill itself does not execute any code on the agent's behalf; it provides instructions and examples for a user to implement.

  1. Obfuscation: No obfuscation techniques (Base64, zero-width characters, homoglyphs, URL/hex/HTML encoding) were detected in either file. The content is presented in clear, readable markdown and C# code snippets.

  2. Prompt Injection: No patterns indicative of prompt injection (e.g., "IMPORTANT: Ignore" directives, role-play instructions, or system-prompt extraction attempts) were found in the skill's description, name, or content. The skill's purpose is to instruct on SDK usage, not to manipulate the AI agent's behavior.

  3. Data Exfiltration:

    • The skill instructs users to set environment variables (PROJECT_ENDPOINT, MODEL_DEPLOYMENT_NAME, etc.) and retrieve them using Environment.GetEnvironmentVariable(). This is a secure practice for handling configuration.
    • Authentication uses DefaultAzureCredential(), which is a recommended and secure method for authenticating to Azure services; both patterns are sketched together after this list.
    • While the SDK functions like UploadFile, UploadFolder, and GetConnection(..., includeCredentials: true) handle potentially sensitive data (local files, connection credentials), these are legitimate functions of an SDK designed for managing AI projects within Azure. The skill itself does not contain any commands to exfiltrate this data to untrusted external destinations. The "Best Practices" section explicitly advises caution with includeCredentials: true.
    • No curl, wget, or similar commands targeting non-whitelisted domains for data exfiltration were found.
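
  To illustrate the two practices above, here is a minimal sketch of the configuration-and-authentication flow the skill documents. The environment-variable names come from the skill itself; the AIProjectClient constructor shape is an assumption, since it has varied across Azure.AI.Projects preview releases.

      using System;
      using Azure.AI.Projects;
      using Azure.Identity;

      // Configuration is read from environment variables, never hard-coded,
      // matching the practice the skill documents.
      string endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT")
          ?? throw new InvalidOperationException("PROJECT_ENDPOINT is not set.");
      string deployment = Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME")
          ?? throw new InvalidOperationException("MODEL_DEPLOYMENT_NAME is not set.");

      // DefaultAzureCredential resolves managed identity, Azure CLI login, or
      // other chained credentials at runtime, so no secret appears in source.
      // NOTE: this constructor signature is assumed, not confirmed against a
      // specific SDK version.
      var client = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());
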
  4. Unverifiable Dependencies:

    • The SKILL.md file includes dotnet add package commands for Azure.AI.Projects, Azure.Identity, Azure.AI.Projects.OpenAI, and Azure.AI.Agents.Persistent (reproduced after this list).
    • The references/acceptance-criteria.md file also refers to these packages, as well as Azure.AI.OpenAI and OpenAI.Chat.
    • These packages belong to the Azure and OpenAI namespaces. The GitHub source link provided (https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Projects) confirms that these are official Microsoft Azure SDK components.
    • As microsoft and openai are listed as trusted GitHub organizations, these dependencies are considered low risk. This finding is downgraded to LOW severity.
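
  For reference, the installation commands named in SKILL.md take the following form (package names exactly as reported in this finding):

      dotnet add package Azure.AI.Projects
      dotnet add package Azure.Identity
      dotnet add package Azure.AI.Projects.OpenAI
      dotnet add package Azure.AI.Agents.Persistent
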
  5. Privilege Escalation: No commands like sudo, doas, chmod +x, chmod 777, or attempts to install services or modify system files were found.

  6. Persistence Mechanisms: No attempts to establish persistence (e.g., modifying shell profiles, creating cron jobs, LaunchAgents, systemd services, or SSH authorized_keys) were detected.

  7. Metadata Poisoning: The name and description fields in SKILL.md are benign and accurately reflect the skill's purpose. No malicious instructions were found in metadata.

  8. Indirect Prompt Injection: The skill describes an SDK for interacting with AI models and agents. Applications built using this SDK will process user inputs and model outputs. While the skill itself is not directly vulnerable, applications that integrate this SDK must implement robust input validation and sanitization to prevent indirect prompt injection attacks where malicious instructions could be embedded in data processed by the AI models. This is an INFO level warning regarding the general risk for downstream applications.
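
  As a hedged illustration of the downstream mitigation this warning points to, the sketch below shows one way an application built on the SDK might screen untrusted text before embedding it in a model request. Everything here (the PromptGuard class, the marker list) is hypothetical and illustrative; it is not part of the SDK, and a deny-list alone is not a complete defense.

      using System;
      using System.Linq;

      static class PromptGuard
      {
          // Hypothetical deny-list; real deployments should combine lexical
          // checks with structural defenses (role separation, output filtering).
          private static readonly string[] InjectionMarkers =
          {
              "ignore previous instructions",
              "disregard the system prompt",
              "you are now",
          };

          public static bool ContainsInjectionMarkers(string input) =>
              InjectionMarkers.Any(m =>
                  input.Contains(m, StringComparison.OrdinalIgnoreCase));
      }

      // Usage: screen retrieved or user-supplied text before it reaches the model.
      // if (PromptGuard.ContainsInjectionMarkers(documentText))
      //     documentText = "[content withheld: possible prompt injection]";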

  9. Time-Delayed / Conditional Attacks: No conditional logic or time-based triggers for malicious behavior were identified.

Adversarial Reasoning: The skill is essentially documentation for a well-known, legitimate SDK. The instructions are clear, and the code examples demonstrate standard, secure practices. There are no hidden elements or suspicious behaviors that would suggest malicious intent. The references to external packages are to official, trusted sources.

Audit Metadata
Risk Level: LOW
Analyzed: Feb 13, 2026, 10:24 AM