azure-ai-contentsafety-py

Pass

Audited by Gen Agent Trust Hub on Apr 28, 2026

Risk Level: SAFE
Full Analysis
  • Official SDK Integration: The skill facilitates use of the azure-ai-contentsafety package, an official library from a well-known service provider. It supports detection of harm categories such as hate, sexual content, violence, and self-harm.
  • Secure Credential Handling: The documentation and code examples prioritize security by using environment variables for authentication. Acceptance criteria that explicitly flag hardcoded secrets as an error reinforce secure development practices.
  • Resource Targeting: All network operations are directed at official cloud service endpoints. The functionality for analyzing images via URLs is a standard feature of the integrated service, used for content moderation purposes.
  • Data Processing Context: While the skill processes user-generated text and images, it does so through a specialized moderation API designed to categorize and filter content, rather than executing it as code or passing it to unsafe internal functions.
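The credential-handling pattern the audit describes can be sketched as follows. This is a minimal illustration, not the skill's actual code: the environment variable names (CONTENT_SAFETY_ENDPOINT, CONTENT_SAFETY_KEY) and the helper functions are assumptions for the example; only the azure-ai-contentsafety client calls come from the SDK itself.

```python
import os


def load_credentials():
    """Read the Content Safety endpoint and key from the environment.

    The variable names here are illustrative; the point is that secrets
    are never hardcoded and a missing value fails fast with a clear error.
    """
    endpoint = os.environ.get("CONTENT_SAFETY_ENDPOINT")
    key = os.environ.get("CONTENT_SAFETY_KEY")
    if not endpoint or not key:
        raise RuntimeError(
            "Set CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY -- "
            "hardcoded secrets are flagged as an error."
        )
    return endpoint, key


def analyze_text(text: str):
    """Send text to the moderation API (requires azure-ai-contentsafety)."""
    # Imported lazily so the credential helper above stays stdlib-only.
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions

    endpoint, key = load_credentials()
    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
    # The response reports a severity per harm category
    # (hate, sexual content, violence, self-harm).
    return client.analyze_text(AnalyzeTextOptions(text=text))
```

Note that the text is only ever passed to the moderation endpoint for categorization; it is never evaluated or executed locally, which matches the audit's data-processing observation.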
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 28, 2026, 03:16 PM