openai-agents-sdk

Pass

Audited by Gen Agent Trust Hub on Mar 7, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill is primarily educational, providing code snippets for using the openai-agents library. All instructions and examples align with standard agentic development patterns.
  • [SAFE]: The documentation emphasizes security by demonstrating how to implement guardrails (input_guardrail, output_guardrail, tool_guardrail). These examples explicitly show how to mitigate risks such as path traversal, PII leakage, and inappropriate content handling.
  • [SAFE]: Data protection is addressed through EncryptedSession examples, which illustrate how to protect conversation history with encryption.
  • [SAFE]: All credentials in the documentation use non-functional placeholders (e.g., sk-...), following best practice for avoiding secret exposure.
  • [SAFE]: External references and package recommendations point to official or trusted GitHub-hosted documentation associated with OpenAI.
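As an illustration of the path-traversal mitigation the audit refers to, a guardrail around file-access tools typically resolves each requested path against a sandbox root and rejects anything that escapes it. The sketch below is a minimal, standalone check (function name and directory layout are illustrative assumptions, not code from the audited skill):

```python
import os

def is_safe_path(base_dir: str, requested: str) -> bool:
    """Return True only if `requested` stays inside `base_dir` after
    resolving symlinks and '../' segments (path-traversal guard)."""
    resolved = os.path.realpath(os.path.join(base_dir, requested))
    base = os.path.realpath(base_dir)
    return resolved == base or resolved.startswith(base + os.sep)

# A tool guardrail could call this before any file tool runs:
is_safe_path("/srv/data", "notes/todo.txt")    # inside the sandbox -> True
is_safe_path("/srv/data", "../../etc/passwd")  # escapes the sandbox -> False
```

In the openai-agents SDK, a check like this would sit inside a guardrail function that raises or trips its tripwire when the check fails, so the tool call is blocked before it touches the filesystem.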
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Mar 7, 2026, 12:39 AM