agent-framework-azure-ai-py

Pass

Audited by Gen Agent Trust Hub on Mar 1, 2026

Risk Level: SAFE
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill references official Microsoft Python packages (agent-framework, agent-framework-azure-ai) and standard libraries such as azure-identity. These are well-known, trusted dependencies for building Azure-integrated applications.
  • [COMMAND_EXECUTION]: The skill describes the use of the HostedCodeInterpreterTool, which allows agents to execute Python code. This execution occurs within a managed, sandboxed environment provided by Azure AI Foundry, which is designed to isolate the execution and prevent unauthorized access to the host system.
  • [CREDENTIALS_UNSAFE]: The documentation correctly encourages the use of DefaultAzureCredential and AzureCliCredential from the azure-identity library. This approach avoids the risk of hardcoding sensitive API keys or connection strings by relying on environment-based or identity-based authentication.
  • [INDIRECT_PROMPT_INJECTION]: The skill facilitates the creation of agents that can ingest external data via web search (HostedWebSearchTool), document search (HostedFileSearchTool), and MCP servers (MCPStreamableHTTPTool). This creates an inherent attack surface where untrusted content could potentially contain malicious instructions.
  • Ingestion points: Web search results, uploaded files (Azure AI Files), and responses from external MCP endpoints (e.g., learn.microsoft.com).
  • Boundary markers: The provided prompt templates do not explicitly define boundary markers or 'ignore' instructions for external data, though developers can implement them.
  • Capability inventory: Agents have access to a Python Code Interpreter and external API tools.
  • Sanitization: Relies on the underlying LLM's safety alignment and Azure AI Service's built-in content filters for tool outputs.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 1, 2026, 12:34 AM