debugging-agent

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • PROMPT_INJECTION (LOW): The skill is susceptible to indirect prompt injection because it analyzes external data sources (system logs) that could contain malicious instructions designed to influence the agent's output.
  • Ingestion points: System execution and error logs located at backend/ai/skills/logs/*/*/execution-*.jsonl.
  • Boundary markers: No explicit delimiters or instructions to ignore embedded commands are documented for the log parsing process.
  • Capability inventory: The skill utilizes view_file, grep_search, and run_command to inspect the system and execute its internal analysis scripts.
  • Sanitization: While no technical sanitization of the log data is described, the skill's design mandates human approval ('User Approval Required') for all generated proposals, serving as a critical verification layer.
  • COMMAND_EXECUTION (SAFE): The skill uses run_command and subprocess.run to execute its own internal Python scripts (log_reader.py, pattern_detector.py, improvement_proposer.py). These executions are restricted to static paths within the skill's directory and are fundamental to its operation.
  • DATA_EXPOSURE (SAFE): The agent reads application source code and execution logs to perform root cause analysis. Given its role as a 'Debugging Agent,' this access is considered legitimate and necessary for its primary functionality.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:25 PM