openai-agents

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [Prompt Injection] (SAFE): The skill includes defensive templates (e.g., agent-guardrails-input.ts) specifically designed to detect and block prompt injection attempts. No malicious instructions or bypass markers were found in the templates or documentation.
  • [Data Exposure & Exfiltration] (SAFE): The skill provides explicit guidance and code templates for securing API keys. It demonstrates how to generate short-lived ephemeral tokens on the server (Next.js) to avoid exposing the primary OPENAI_API_KEY to the client side. No hardcoded credentials or unauthorized data access patterns were detected.
  • [Unverifiable Dependencies & Remote Code Execution] (SAFE): All dependencies in templates/shared/package.json are legitimate libraries (@openai/agents, zod). The version check script scripts/check-versions.sh runs standard npm view queries to check for newer published versions and does not download or execute untrusted remote content.
  • [Indirect Prompt Injection] (LOW): Agents do process untrusted user data in templates such as agent-structured-output.ts and agent-streaming.ts, but the skill consistently applies Zod for strict schema validation and instruction-based guardrails to sanitize inputs and outputs. These controls mitigate the risk without fully eliminating it, hence the LOW rating rather than SAFE.
  • [Dynamic Execution] (SAFE): The templates use standard JavaScript/TypeScript patterns for agent orchestration. No use of eval(), exec(), or other dangerous dynamic code execution methods was found.
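The ephemeral-token pattern cited in the Data Exposure finding could look roughly like the sketch below. This is an illustration, not the skill's actual template code: the endpoint URL and client_secret response shape follow OpenAI's Realtime session API, and the mintEphemeralToken and FetchLike names are assumptions introduced here. The key point is that the long-lived OPENAI_API_KEY is read only on the server, and only a short-lived value is returned to the browser.

```typescript
// Sketch of a server-side token minter (assumed names, not the template's code).
// The fetch implementation is injected so the function can be exercised without
// network access; in a Next.js route handler you would pass the global fetch.

type FetchLike = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string }
) => Promise<{ ok: boolean; json(): Promise<any> }>;

// Mints a short-lived client token; the primary OPENAI_API_KEY never leaves the server.
async function mintEphemeralToken(apiKey: string, doFetch: FetchLike): Promise<string> {
  const res = await doFetch("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // server-only secret, never shipped to the client
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-4o-realtime-preview" }),
  });
  if (!res.ok) throw new Error("ephemeral token mint failed");
  const session = await res.json();
  // Only this short-lived value is handed to the client side.
  return session.client_secret.value;
}
```

A route handler would call mintEphemeralToken with the global fetch and return only the ephemeral value in its JSON response.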
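The maintenance check described in the dependencies finding amounts to querying the npm registry for the latest published version of each dependency. A minimal sketch of that kind of check follows; the loop and package list are assumptions for illustration, not the contents of scripts/check-versions.sh:

```shell
# Hypothetical reconstruction of an npm-view-based version check.
# Queries registry metadata only; nothing is downloaded or executed.
for pkg in "@openai/agents" "zod"; do
  latest=$(npm view "$pkg" version 2>/dev/null) || latest="(lookup failed)"
  [ -n "$latest" ] || latest="(lookup failed)"
  echo "$pkg -> $latest"
done
```

npm view prints package metadata from the registry, which is why the audit treats this pattern as maintenance tooling rather than remote code execution.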
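The Zod guardrail pattern referenced in the indirect-injection finding can be sketched as follows. The TicketSchema fields and the validateAgentOutput helper are illustrative assumptions, not the actual schema from agent-structured-output.ts; what matters is that untrusted model output is parsed against a strict schema before anything downstream trusts it.

```typescript
// Sketch of a strict-schema guardrail (assumed schema, not the template's).
import { z } from "zod";

const TicketSchema = z
  .object({
    summary: z.string().max(200),
    priority: z.enum(["low", "medium", "high"]),
  })
  .strict(); // reject any extra keys the model (or an injected prompt) sneaks in

// Returns the validated value, or null instead of passing unvalidated data downstream.
function validateAgentOutput(raw: unknown) {
  const result = TicketSchema.safeParse(raw);
  return result.success ? result.data : null;
}
```

safeParse never throws, so a malformed or tampered output degrades to a handled null rather than an exception or silently trusted data.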
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 17, 2026, 04:41 PM