openai-assistants

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE] (SAFE): The skill contains well-structured documentation and boilerplate code for building with OpenAI's API. No malicious patterns or attack vectors were identified.
  • [DATA_EXPOSURE] (SAFE): The skill follows security best practices by using environment variables (process.env.OPENAI_API_KEY) and does not hardcode sensitive credentials.
  • [COMMAND_EXECUTION] (SAFE): Helper scripts and templates perform standard development operations, such as package version checks and local file processing for data analysis, and expose no command-injection surface.
  • [REMOTE_CODE_EXECUTION] (SAFE): No unauthorized remote code execution patterns, such as piping remote content to a shell, were found.
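The credential-handling pattern flagged as safe in the DATA_EXPOSURE finding — reading the key from `process.env.OPENAI_API_KEY` rather than hardcoding it — looks roughly like this. This is a minimal Node.js sketch, not code taken from the audited skill; the `getApiKey` helper name and the fail-fast behavior are illustrative assumptions:

```javascript
// Read the API key from the environment instead of hardcoding it.
// Accepting the env object as a parameter keeps the helper testable;
// it defaults to process.env in real use.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    // Fail fast with a clear message rather than sending requests
    // with a missing or placeholder credential.
    throw new Error("OPENAI_API_KEY is not set; export it before running.");
  }
  return key;
}

// The key would then be passed to whatever client the skill constructs,
// e.g. (hypothetical usage, assuming the official openai package):
//   const client = new OpenAI({ apiKey: getApiKey() });

module.exports = { getApiKey };
```

Because the key never appears in source, it cannot leak through version control, and rotating it requires no code change — which is the property the audit is checking for.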
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 04:43 PM