code-roast
Pass
Audited by Gen Agent Trust Hub on Mar 16, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The analysis is performed by a local Python script (scripts/analyze.py) that uses standard libraries to inspect file contents. It does not make any network requests.
- [SAFE]: File access is limited to a predefined list of code extensions and avoids common sensitive or dependency directories like .git and node_modules.
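The extension allowlist and directory exclusions described above might look like the following sketch. CODE_EXTENSIONS and SKIP_DIRS are assumed names for illustration; the actual constants in scripts/analyze.py may differ.

```python
import os

# Assumed names and values; the real constants in scripts/analyze.py may differ.
CODE_EXTENSIONS = {".py", ".js", ".ts", ".go", ".rb"}
SKIP_DIRS = {".git", "node_modules", "__pycache__", "venv"}

def collect_code_files(root):
    """Yield paths to code files, skipping VCS and dependency directories."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            if os.path.splitext(name)[1] in CODE_EXTENSIONS:
                yield os.path.join(dirpath, name)
```

Pruning dirnames in place (rather than filtering results afterward) keeps the walk from ever entering excluded trees, which matters for large node_modules directories.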
- [COMMAND_EXECUTION]: The script executes git log via subprocess.run to analyze commit messages. It uses the safe argument-list form (no shell interpretation) with the repository path as the working directory.
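An invocation of the kind described, using a direct argument list and the repository path as the working directory, would look roughly like this sketch (the function name and --pretty format are assumptions, not taken from the script):

```python
import subprocess

def recent_commit_messages(repo_path, limit=50):
    """Return recent commit subjects via an argument list (no shell involved)."""
    result = subprocess.run(
        ["git", "log", f"--max-count={limit}", "--pretty=format:%s"],
        cwd=repo_path,          # run inside the repository being analyzed
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.splitlines()
```

Because the command is a list rather than a string passed through a shell, repository paths or crafted file names cannot inject extra shell commands; the injection risk here lies in the returned commit text, not in the execution itself.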
- [PROMPT_INJECTION]: The skill has an indirect prompt injection surface (Category 8) because it processes untrusted data from the analyzed codebase (code comments and commit messages) and interpolates it into the agent's output.
  - Ingestion points: scripts/analyze.py extracts comments matching SHAME_COMMENTS and git log messages into a JSON report.
  - Boundary markers: Absent. The skill instructions do not include specific delimiters or 'ignore' instructions for content found within the code.
  - Capability inventory: The agent writes a new markdown file to the local filesystem and displays the generated roast in the chat interface.
  - Sanitization: Absent. Content from the codebase (such as TODO comments) is displayed directly in the final report.
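The ingestion path described above could be sketched as follows. SHAME_COMMENTS is the pattern name the report mentions, but its exact contents here are an assumption; note that the matched text is stored verbatim, which is precisely the unsanitized surface flagged in this finding.

```python
import json
import re

# Assumed pattern; the real SHAME_COMMENTS in scripts/analyze.py may differ.
SHAME_COMMENTS = re.compile(r"#\s*(TODO|FIXME|HACK|XXX)\b[:\s]*(.*)")

def extract_shame_comments(source, path):
    """Collect matching comments verbatim -- no sanitization is applied."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = SHAME_COMMENTS.search(line)
        if match:
            findings.append({
                "file": path,
                "line": lineno,
                "tag": match.group(1),
                "text": match.group(2),  # untrusted content, passed through as-is
            })
    return findings

def write_report(findings, out_path="roast_report.json"):
    """Serialize findings to the JSON report consumed by the agent."""
    with open(out_path, "w") as fh:
        json.dump({"comments": findings}, fh, indent=2)
```

Any instruction-like text inside a TODO or commit message flows through this pipeline unchanged, which is why the absence of boundary markers and sanitization constitutes the indirect prompt injection surface.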
Audit Metadata