azure-ai-contentsafety-py
Audited by Gen Agent Trust Hub on Feb 13, 2026
The skill azure-ai-contentsafety-py primarily serves as documentation and provides code examples for interacting with the Azure AI Content Safety SDK. The analysis covered both SKILL.md and references/acceptance-criteria.md.
1. Prompt Injection: No patterns indicative of prompt injection (e.g., 'IMPORTANT: Ignore', role-play instructions) were found in either file.
2. Data Exfiltration:
- The skill instructs pip install azure-ai-contentsafety. This is an external dependency download. However, azure-ai-contentsafety is an official SDK from Microsoft Azure, a trusted organization. This finding is downgraded to INFO/LOW severity.
- Code examples read os.environ["CONTENT_SAFETY_ENDPOINT"] and os.environ["CONTENT_SAFETY_KEY"]. This is a secure practice for handling credentials and does not expose sensitive data.
- An example uses with open("image.jpg", "rb") as f:, which reads a local file. This is expected behavior for an image analysis skill and does not target sensitive system files.
- Network calls are directed to https://<resource>.cognitiveservices.azure.com (a trusted Azure domain) or https://example.com/image.jpg (a placeholder domain). No exfiltration to untrusted domains was detected.
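The credential pattern described above can be sketched as follows. This is an illustrative sketch, not the skill's actual code; the placeholder endpoint and key values are assumptions for this example, and the SDK client construction is shown only in comments since it requires the azure-ai-contentsafety package.

```python
import os

# Placeholder values for this sketch only; in the audited skill these
# environment variables are expected to already be set by the user.
os.environ.setdefault("CONTENT_SAFETY_ENDPOINT", "https://my-resource.cognitiveservices.azure.com")
os.environ.setdefault("CONTENT_SAFETY_KEY", "placeholder-key")

# The secure pattern noted by the audit: credentials are read from the
# environment rather than hardcoded in source.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

# The skill's examples then build the SDK client from these values,
# roughly along these lines (requires pip install azure-ai-contentsafety):
#   from azure.ai.contentsafety import ContentSafetyClient
#   from azure.core.credentials import AzureKeyCredential
#   client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
```

Because the key never appears as a literal in the code, it cannot leak through the skill files themselves, which is why the audit treats this pattern as benign.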
3. Obfuscation: No obfuscation techniques (Base64, zero-width characters, homoglyphs, etc.) were found in the skill files.
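The kind of obfuscation scan referred to above can be sketched with the standard library alone. The function names and the Base64 length threshold are hypothetical choices for this sketch, not part of the audit tooling.

```python
import base64
import re

# Common zero-width / invisible characters used to hide instructions.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def find_zero_width(text):
    """Return (index, codepoint) pairs for zero-width characters in text."""
    return [(i, f"U+{ord(c):04X}") for i, c in enumerate(text) if c in ZERO_WIDTH]

def looks_like_base64(token, min_len=24):
    """Heuristic: long tokens drawn from the Base64 alphabet that decode cleanly."""
    if len(token) < min_len or not re.fullmatch(r"[A-Za-z0-9+/]+={0,2}", token):
        return False
    try:
        base64.b64decode(token, validate=True)
        return True
    except Exception:
        return False

# A clean line from the skill triggers neither check.
clean = "pip install azure-ai-contentsafety"
assert find_zero_width(clean) == []
assert not any(looks_like_base64(tok) for tok in clean.split())
```

A scan like this flags candidates for manual review rather than proving malice: short identifiers can match the Base64 alphabet by chance, which is why the length threshold and the decode check are both applied.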
4. Unverifiable Dependencies: The skill instructs pip install azure-ai-contentsafety. As noted above, this package is from a trusted source (Microsoft Azure, confirmed by https://github.com/Azure/azure-sdk-for-python in references/acceptance-criteria.md). This is noted as a LOW/INFO finding due to the trusted source.
5. Privilege Escalation: No commands like sudo, chmod +x, or modifications to system files were found.
6. Persistence Mechanisms: No attempts to establish persistence (e.g., modifying .bashrc, creating cron jobs) were detected.
7. Metadata Poisoning: The skill's name and description are benign and accurately reflect its purpose. No malicious instructions were found in metadata fields.
8. Indirect Prompt Injection: The skill processes user-provided text and images. While any skill processing external content carries an inherent, general risk of indirect prompt injection, this skill is designed to detect harmful content, not facilitate injection. This is an informational note about a general risk, not a specific vulnerability in the skill's code.
9. Time-Delayed / Conditional Attacks: No conditional logic for time-delayed or environment-specific attacks was found.
Conclusion: The only finding is the instruction to install an external dependency (azure-ai-contentsafety). Given that this dependency is from a trusted source (Microsoft Azure), this finding is downgraded to LOW/INFO. No other security concerns were identified. The skill itself is primarily descriptive and provides code examples for using a trusted SDK, rather than containing directly executable malicious code.