azure-ai-agent-deploy
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH
Findings: CREDENTIALS_UNSAFE, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE] (HIGH): The skill programmatically retrieves and displays Azure AI Services API keys on the console. Specifically, `scripts/setup-azure-ai.sh` and the `get_api_key` method in `scripts/agents/azure_discovery.py` execute `az cognitiveservices account keys list` and print the resulting secret key.
- [COMMAND_EXECUTION] (MEDIUM): Multiple scripts use shell interpolation to execute Azure CLI commands. `scripts/setup-azure-ai.sh` interpolates user-provided variables such as `PREFIX` directly into shell command strings without validation, creating a potential for command injection.
- [PROMPT_INJECTION] (MEDIUM): The skill exhibits an indirect prompt-injection surface (Category 8). The interactive test tool `scripts/test-agent.py` accepts untrusted user input and transmits it to the AI agent without boundary markers or sanitization.
- Ingestion points: user input in `scripts/test-agent.py` and YAML configuration parsing in `scripts/agents/yaml_parser.py`.
- Boundary markers: not present in the messaging pipeline.
- Capability inventory: agent creation and interaction via the `azure-ai-projects` SDK.
- Sanitization: no escaping or validation is performed on external content before model interpolation.
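The missing boundary markers described in the analysis could be retrofitted with a small wrapper. The following is a hedged sketch only; the marker strings and the `wrap_untrusted` helper are illustrative assumptions, not code from the skill:

```python
# Hypothetical sketch: wrap untrusted text in boundary markers before it is
# interpolated into an agent prompt, and strip any forged markers so the
# input cannot escape its own boundary.
BOUNDARY_OPEN = "<<<BEGIN_UNTRUSTED_INPUT>>>"
BOUNDARY_CLOSE = "<<<END_UNTRUSTED_INPUT>>>"

def wrap_untrusted(text: str) -> str:
    # Remove any embedded marker strings so an attacker cannot close the
    # boundary early and smuggle instructions to the model.
    cleaned = text.replace(BOUNDARY_OPEN, "").replace(BOUNDARY_CLOSE, "")
    return f"{BOUNDARY_OPEN}\n{cleaned}\n{BOUNDARY_CLOSE}"
```

The system prompt would then instruct the agent to treat everything between the markers as data, never as instructions.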
Recommendations
- Do not print API keys to the console; retrieve them into an environment variable or secret store and redact them from output.
- Validate or allowlist user-supplied values such as `PREFIX` before they reach shell command strings, or invoke the Azure CLI without a shell.
- Add boundary markers and sanitization for untrusted input in `scripts/test-agent.py` and for YAML parsed by `scripts/agents/yaml_parser.py` before it reaches the model.
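One way to address the shell-interpolation finding is to validate user-supplied values and build the CLI invocation as an argv list so no shell ever parses them. This is a minimal sketch under stated assumptions: the `validated_prefix` helper and its allowlist pattern are illustrative, not code from the skill.

```python
import re
import subprocess

# Illustrative allowlist: letters, digits, hyphens, max 20 chars.
PREFIX_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9-]{0,19}$")

def validated_prefix(prefix: str) -> str:
    # Reject anything outside the allowlist before it nears a command line.
    if not PREFIX_RE.match(prefix):
        raise ValueError(f"invalid prefix: {prefix!r}")
    return prefix

def keys_list_cmd(account: str, resource_group: str) -> list:
    # Building argv as a list means the values are passed as literal
    # arguments; there is no shell string to inject into.
    return [
        "az", "cognitiveservices", "account", "keys", "list",
        "--name", account,
        "--resource-group", resource_group,
    ]

# subprocess.run(keys_list_cmd(...), check=True) executes without shell=True,
# eliminating interpolation; capture stdout instead of printing the secret.
```

The same argv-list pattern applies to every `az` call the scripts make, which also removes the need to escape values at all.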