NYC

openai-agents

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION] (SAFE): The skill includes specific templates (agent-guardrails-input.ts) that demonstrate how to implement guardrail agents to detect and block prompt injection and jailbreak attempts.
  • [DATA_EXPOSURE] (SAFE): No hardcoded credentials were found. Templates (e.g., api-realtime-route.ts) explicitly warn against exposing the OPENAI_API_KEY to the client-side and provide patterns for secure server-side proxying and ephemeral key generation.
  • [EXTERNAL_DOWNLOADS] (SAFE): The scripts/check-versions.sh script uses npm view to check for SDK updates from the official npm registry. This is a standard, non-malicious development utility.
  • [COMMAND_EXECUTION] (SAFE): The skill implements tools using safe patterns. High-stakes actions like account deletion or payments are implemented with a 'Human-in-the-loop' (HITL) requirement in agent-human-approval.ts, requiring explicit manual confirmation before execution.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 05:56 PM