detecting-memory-leaks
Warn
Audited by Gen Agent Trust Hub on Mar 24, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The script
scripts/generate_report.pycontains agenerate_scriptfunction that writes a string template to a file and explicitly grants it executable permissions usingchmod(0o755). This capability allows the agent to dynamically create and potentially execute arbitrary shell scripts. - [COMMAND_EXECUTION]: The file
scripts/setup_environment.shis a Python script disguised with a shell script extension. While it currently performs directory and configuration file creation, this naming mismatch can be a technique used to evade simple file-type security policies or human review. - [PROMPT_INJECTION]: The skill is designed to ingest and analyze external source code for memory leak patterns, which exposes the agent to indirect prompt injection. Maliciously crafted code or comments in the analyzed files could contain instructions that the agent might follow.
- Ingestion points: Target source code files being analyzed (referenced in
SKILL.mdinstructions). - Boundary markers: No explicit boundary markers or instructions to ignore embedded commands are present in the processing logic.
- Capability inventory: The skill has access to powerful tools including
Bash,Write,Edit, andGrep. - Sanitization: There is no evidence of sanitization or validation of the content read from external source files.
Audit Metadata