ai-pentesting
Fail
Audited by Gen Agent Trust Hub on Apr 20, 2026
Risk Level: HIGHEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: Fetches external code from an unverified remote repository (github.com/KeygraphHQ/shannon.git) to automate penetration testing.
- [REMOTE_CODE_EXECUTION]: Instructs the user to clone an external repository and execute a local script (./shannon start) which performs complex operations on the host environment.
- [COMMAND_EXECUTION]: Uses the subprocess.run module in Python to execute various system-level security tools including nmap, subfinder, and nuclei with parameters derived from external or AI-generated inputs.
- [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface. It ingests data from external security tools (reconnaissance results) and interpolates it directly into LLM prompts to guide exploitation and reporting without using delimiters or sanitization.
- Ingestion points: recon_data used in _analyze_attack_surface and _generate_report methods within ai_pentester.py.
- Boundary markers: None present in the prompt templates.
- Capability inventory: subprocess.run calls in _recon and _exploit methods.
- Sanitization: No evidence of escaping or validating tool outputs before processing.
Recommendations
- AI detected serious security threats
Audit Metadata