openai-responses

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [Data Exposure & Exfiltration] (SAFE): The templates correctly use environment variables (process.env.OPENAI_API_KEY, env.STRIPE_OAUTH_TOKEN) for sensitive credentials. No hardcoded API keys or secrets were found across the 18 files.
  • [Unverifiable Dependencies & Remote Code Execution] (SAFE): The package.json file uses official and trusted packages such as openai, typescript, and Cloudflare's wrangler. No suspicious or unversioned dependencies are present.
  • [Command Execution] (SAFE): The provided shell script scripts/check-versions.sh is a benign version utility that uses standard commands (npm, node -p, sed, cut) to verify package compatibility. It does not perform any dangerous operations or acquire elevated privileges.
  • [Indirect Prompt Injection] (SAFE): While the templates provide ingestion points for user data (e.g., in the Cloudflare Worker example), they are intended as architectural demonstrations. Standard practices for this specific API are followed. Users implementing these templates should follow the remediation guidance for production environments.
  • [Obfuscation] (SAFE): All code is provided in clear text. No Base64, zero-width characters, or other obfuscation techniques were detected.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:38 PM