azure-ai-agent-deploy

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGHCREDENTIALS_UNSAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [CREDENTIALS_UNSAFE] (HIGH): The skill programmatically retrieves and displays Azure AI Services API keys to the console. Specifically, scripts/setup-azure-ai.sh and the get_api_key method in scripts/agents/azure_discovery.py execute az cognitiveservices account keys list and print the resulting secret key.
  • [COMMAND_EXECUTION] (MEDIUM): Multiple scripts utilize shell interpolation to execute Azure CLI commands. scripts/setup-azure-ai.sh interpolates user-provided variables like PREFIX directly into shell command strings without validation, creating a potential for command injection.
  • [PROMPT_INJECTION] (MEDIUM): The skill exhibits an Indirect Prompt Injection surface (Category 8). The interactive test tool scripts/test-agent.py accepts untrusted user input and transmits it to the AI agent without implementation of boundary markers or sanitization.
  • Ingestion points: User input in scripts/test-agent.py and YAML configuration parsing in scripts/agents/yaml_parser.py.
  • Boundary markers: Not present in the messaging pipeline.
  • Capability inventory: Agent creation and interaction via the azure-ai-projects SDK.
  • Sanitization: No escaping or validation is performed on external content before model interpolation.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 12:51 PM