openai-responses
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [Data Exposure & Exfiltration] (SAFE): The templates correctly use environment variables (`process.env.OPENAI_API_KEY`, `env.STRIPE_OAUTH_TOKEN`) for sensitive credentials. No hardcoded API keys or secrets were found across the 18 files.
- [Unverifiable Dependencies & Remote Code Execution] (SAFE): The `package.json` file uses official and trusted packages such as `openai`, `typescript`, and Cloudflare's `wrangler`. No suspicious or unversioned dependencies are present.
- [Command Execution] (SAFE): The provided shell script `scripts/check-versions.sh` is a benign version utility that uses standard commands (`npm`, `node -p`, `sed`, `cut`) to verify package compatibility. It does not perform any dangerous operations or acquire elevated privileges.
- [Indirect Prompt Injection] (SAFE): While the templates provide ingestion points for user data (e.g., in the Cloudflare Worker example), they are intended as architectural demonstrations. Standard practices for this specific API are followed. Users implementing these templates should follow the remediation guidance for production environments.
- [Obfuscation] (SAFE): All code is provided in clear text. No Base64, zero-width characters, or other obfuscation techniques were detected.
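The environment-variable pattern praised in the first finding can be sketched as follows. This is an illustrative snippet, not code from the audited templates; the `requireEnv` helper is hypothetical, while `OPENAI_API_KEY` is the variable name cited in the report:

```typescript
// Hypothetical helper showing the pattern the audit found: credentials
// are read from the environment at runtime, never hardcoded in source.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Failing fast keeps a missing secret from silently becoming an empty string.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: the key exists only in the deployment environment, not in the repo.
const apiKey = requireEnv("OPENAI_API_KEY");
```

The design point is that the secret never appears in the 18 audited files; only the variable's name does.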
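The obfuscation finding above checks for markers such as zero-width characters. A minimal sketch of that kind of scan is shown below; the regex and function name are illustrative of the technique, not the auditor's actual tooling:

```typescript
// Illustrative scan for zero-width characters, which can hide
// instructions or payloads in otherwise clear-looking source text.
const ZERO_WIDTH = /[\u200B\u200C\u200D\u2060\uFEFF]/;

function containsZeroWidth(text: string): boolean {
  // Matches zero-width space/joiners, word joiner, and BOM used mid-text.
  return ZERO_WIDTH.test(text);
}
```

A clean-text result from such a scan, together with the absence of Base64-encoded blobs, is what supports the SAFE rating for this category.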
Audit Metadata