project-diagrams
Warn
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: MEDIUMPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION] (MEDIUM): The skill accepts user-provided text to drive an AI-based diagram generation and refinement process. The lack of sanitization allows for potential prompt injection attacks where a user could manipulate the underlying LLM behavior. Evidence Chain: 1. Ingestion points: The prompt argument in scripts/generate_schematic.py. 2. Boundary markers: None used during prompt construction. 3. Capability inventory: Uses requests for network communication with OpenRouter/Gemini and subprocess.run to execute scripts. 4. Sanitization: No input validation or filtering of the natural language prompt.
- [COMMAND_EXECUTION] (LOW): The script executes an internal Python script using subprocess.run with a list of arguments. This is a secure implementation that prevents shell injection.
- [CREDENTIALS_UNSAFE] (LOW): Documentation encourages users to store sensitive API keys in environment variables or .env files. No hardcoded keys were found, but the practice requires careful environment management.
- [EXTERNAL_DOWNLOADS] (LOW): The skill requires the requests library to interact with external AI APIs at openrouter.ai, which is consistent with its stated purpose.
Audit Metadata