swagger-gen

Pass

Audited by Gen Agent Trust Hub on Mar 17, 2026

Risk Level: SAFEPROMPT_INJECTIONDATA_EXFILTRATION
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it reads untrusted source code and includes it in an LLM prompt.
  • Ingestion points: All .js, .ts, and .mjs files located in the path provided to the generateSwagger function in src/index.ts.
  • Boundary markers: Files are separated by simple comment headers (e.g., // === filename ===) which do not reliably prevent an LLM from following instructions embedded within the code.
  • Capability inventory: The skill has the ability to read arbitrary files (filtered by extension) and write the resulting documentation to a local file. It does not execute the generated content.
  • Sanitization: No sanitization or escaping of the file content is performed before interpolation into the system prompt.
  • [DATA_EXFILTRATION]: The skill reads the full content of source code files and transmits them to OpenAI's API to generate the specification. While this is the tool's intended purpose, users should be aware that their source code is shared with a third-party AI provider.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 17, 2026, 07:05 AM