azure-ai-projects-dotnet
Audited by Gen Agent Trust Hub on Feb 13, 2026
The skill's `SKILL.md` and its accompanying `references/acceptance-criteria.md` provide documentation and code examples for the Azure.AI.Projects .NET SDK. The skill itself does not execute any code on the agent; it provides instructions and examples for a user to implement.
- Obfuscation: No obfuscation techniques (Base64, zero-width characters, homoglyphs, URL/hex/HTML encoding) were detected in either file. The content is presented in clear, readable markdown and C# code snippets.
- Prompt Injection: No patterns indicative of prompt injection (e.g., "IMPORTANT: Ignore", role-play instructions, system prompt extraction attempts) were found in the skill's description, name, or content. The skill's purpose is to instruct on SDK usage, not to manipulate the AI agent's behavior.
- Data Exfiltration:
  - The skill instructs users to set environment variables (`PROJECT_ENDPOINT`, `MODEL_DEPLOYMENT_NAME`, etc.) and retrieve them using `Environment.GetEnvironmentVariable()`. This is a secure practice for handling configuration.
  - Authentication uses `DefaultAzureCredential()`, which is a recommended and secure method for Azure services.
  - While SDK functions like `UploadFile`, `UploadFolder`, and `GetConnection(..., includeCredentials: true)` handle potentially sensitive data (local files, connection credentials), these are legitimate functions of an SDK designed for managing AI projects within Azure. The skill itself contains no commands to exfiltrate this data to untrusted external destinations. The "Best Practices" section explicitly advises caution with `includeCredentials: true`.
  - No `curl`, `wget`, or similar commands targeting non-whitelisted domains for data exfiltration were found.
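The configuration pattern the audit describes can be sketched as follows. This is a minimal illustration, not code taken verbatim from the skill: it assumes the `Azure.AI.Projects` and `Azure.Identity` packages, and the `AIProjectClient` construction shown follows the SDK's documented pattern.

```csharp
using System;
using Azure.AI.Projects;
using Azure.Identity;

class Program
{
    static void Main()
    {
        // Configuration is read from environment variables, never hard-coded.
        string endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT")
            ?? throw new InvalidOperationException("PROJECT_ENDPOINT is not set.");
        string deployment = Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME")
            ?? throw new InvalidOperationException("MODEL_DEPLOYMENT_NAME is not set.");

        // DefaultAzureCredential chains managed identity, environment, and
        // Azure CLI credentials, so no secret ever appears in source code.
        var client = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());

        Console.WriteLine($"Connected to {endpoint}, using deployment {deployment}.");
    }
}
```

Failing fast on a missing variable, as above, keeps misconfiguration visible without ever logging a credential value.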
- Unverifiable Dependencies:
  - The `SKILL.md` file includes `dotnet add package` commands for `Azure.AI.Projects`, `Azure.Identity`, `Azure.AI.Projects.OpenAI`, and `Azure.AI.Agents.Persistent`.
  - The `references/acceptance-criteria.md` file also refers to these plus `Azure.AI.OpenAI` and `OpenAI.Chat`.
  - These packages belong to the `Azure` and `OpenAI` namespaces. The GitHub source link provided (https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Projects) confirms that these are official Microsoft Azure SDK components.
  - As `microsoft` and `openai` are listed as trusted GitHub organizations, these dependencies are considered low risk. This finding is downgraded to LOW severity.
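For reference, the package additions named above correspond to the following CLI commands (reconstructed from the package names the skill lists; `SKILL.md` may pin specific versions, which are omitted here):

```shell
# Official Microsoft Azure SDK packages referenced by SKILL.md
dotnet add package Azure.AI.Projects
dotnet add package Azure.Identity
dotnet add package Azure.AI.Projects.OpenAI
dotnet add package Azure.AI.Agents.Persistent
```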
- Privilege Escalation: No commands like `sudo`, `doas`, `chmod +x`, or `chmod 777`, and no attempts to install services or modify system files, were found.
- Persistence Mechanisms: No attempts to establish persistence (e.g., modifying shell profiles, creating cron jobs, LaunchAgents, systemd services, or SSH authorized_keys) were detected.
- Metadata Poisoning: The `name` and `description` fields in `SKILL.md` are benign and accurately reflect the skill's purpose. No malicious instructions were found in metadata.
- Indirect Prompt Injection: The skill describes an SDK for interacting with AI models and agents. Applications built using this SDK will process user inputs and model outputs. While the skill itself is not directly vulnerable, applications that integrate this SDK must implement robust input validation and sanitization to prevent indirect prompt injection attacks, where malicious instructions could be embedded in data processed by the AI models. This is an INFO-level warning regarding the general risk for downstream applications.
- Time-Delayed / Conditional Attacks: No conditional logic or time-based triggers for malicious behavior were identified.
Adversarial Reasoning: The skill is essentially documentation for a well-known, legitimate SDK. The instructions are clear, and the code examples demonstrate standard, secure practices. There are no hidden elements or suspicious behaviors that would suggest malicious intent. The references to external packages are to official, trusted sources.