azure-ai-agents-persistent-dotnet

Pass

Audited by Gen Agent Trust Hub on Feb 13, 2026

Risk Level: LOW (NO_CODE)
Full Analysis

The analysis of 'SKILL.md' and 'references/acceptance-criteria.md' shows that both files are primarily documentation and code examples for the Azure AI Agents Persistent SDK for .NET. The skill's instructions contain no directly executable scripts or malicious commands that the AI agent would run.

  1. Prompt Injection: No patterns indicative of prompt injection were found. The content focuses on SDK usage rather than manipulating an LLM's behavior.
  2. Data Exfiltration: The skill demonstrates how to use the SDK to upload files (client.Files.UploadFileAsync) and interact with Azure services. This is part of the SDK's legitimate functionality, and the destination for data is a specified Azure service, not an arbitrary external server. Environment variables are read, which is standard practice and not exfiltration. No sensitive local file paths are accessed for exfiltration.
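For context, the upload call referenced in this finding follows the SDK's documented usage pattern. The sketch below is illustrative, not taken from the audited skill; the environment-variable name and file path are assumptions.

```csharp
using System;
using Azure.AI.Agents.Persistent;
using Azure.Identity;

// Endpoint read from an environment variable -- standard practice,
// as noted in the finding; the variable name here is illustrative.
string projectEndpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");

// DefaultAzureCredential resolves credentials from the local Azure environment.
PersistentAgentsClient client = new(projectEndpoint, new DefaultAzureCredential());

// Uploads a local file to the specified Azure service (not an arbitrary
// external server), making it available to agents.
PersistentAgentFileInfo uploaded = await client.Files.UploadFileAsync(
    filePath: "sample_file.txt",
    purpose: PersistentAgentFilePurpose.Agents);
```

The destination of the upload is fixed by the client's configured endpoint, which is why this pattern is legitimate SDK functionality rather than exfiltration.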
  3. Obfuscation: No obfuscation techniques (Base64, zero-width characters, homoglyphs, URL/hex/HTML encoding) were detected in either file.
  4. Unverifiable Dependencies: The skill instructs the user to install NuGet packages (dotnet add package Azure.AI.Agents.Persistent, dotnet add package Azure.Identity, dotnet add package Azure.AI.Projects). These packages and their associated GitHub repositories (https://github.com/Azure/azure-sdk-for-net) are official Microsoft products and fall under the 'Trusted GitHub Organizations' list (specifically 'Azure'). Therefore, while these are external dependencies, their source is trusted, and this finding is downgraded to INFO.
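The three install commands quoted in this finding, as they would appear in a project setup script (these require the .NET SDK's dotnet CLI):

```shell
# Official Microsoft packages, published from the trusted 'Azure'
# GitHub organization (https://github.com/Azure/azure-sdk-for-net)
dotnet add package Azure.AI.Agents.Persistent
dotnet add package Azure.Identity
dotnet add package Azure.AI.Projects
```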
  5. Privilege Escalation: No commands like sudo, chmod, or attempts to install services were found.
  6. Persistence Mechanisms: No attempts to establish persistence (e.g., modifying .bashrc, crontab, authorized_keys) were detected.
  7. Metadata Poisoning: The metadata fields (name, description) are benign and accurately reflect the skill's purpose.
  8. Indirect Prompt Injection: Because the skill describes an SDK for building AI agents, agents built with it could be susceptible to indirect prompt injection if they process untrusted user input. This is a general risk of deployed AI agents, however, not a vulnerability in the skill's instructions, which contain no patterns that would facilitate such an attack.
  9. Time-Delayed / Conditional Attacks: No conditional logic for malicious purposes based on time, usage, or environment was found.

Conclusion: The skill is a documentation and example set for a trusted SDK. It contains no direct security vulnerabilities or malicious patterns, and its external dependencies come from trusted sources. The skill is classified 'NO_CODE' because it is not an executable script for the agent, but rather instructions for the user.

Audit Metadata
Risk Level: LOW
Analyzed: Feb 13, 2026, 10:24 AM