cairn-ai-pentest

Fail

Audited by Gen Agent Trust Hub on Apr 23, 2026

Risk Level: HIGH
Tags: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION
Full Analysis
  • [COMMAND_EXECUTION]: The documentation provides Python code examples for custom tool creation that use subprocess.run(shell=True) with string interpolation of variables like target and payload. This pattern is highly susceptible to shell command injection.
  • [EXTERNAL_DOWNLOADS]: The skill's installation instructions direct users to clone an external GitHub repository (github.com/oritera/Cairn) and install third-party dependencies from unverified sources using pip.
  • [REMOTE_CODE_EXECUTION]: The core agent loop is designed to autonomously select and execute shell-based security tools (such as nmap, sqlmap, and curl) on the host machine based on AI reasoning, creating a vector for autonomous code execution.
  • [COMMAND_EXECUTION]: Ingestion points: Tool observations and network response data entered into the agent context (SKILL.md). Boundary markers: None identified in the provided architecture examples. Capability inventory: subprocess.run, shell.execute, and multiple shell-based tool wrappers (SKILL.md). Sanitization: No input sanitization or command escaping is demonstrated in the documentation examples.
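The injection risk flagged in the first finding can be sketched as follows. This is illustrative only: the function names and the `echo` stand-in for a real scanning tool are assumptions, not code taken from the skill.

```python
import subprocess

def run_scan_vulnerable(target: str) -> str:
    # Anti-pattern flagged in the audit: shell=True plus string
    # interpolation lets shell metacharacters in `target` execute.
    out = subprocess.run(f"echo scanning {target}", shell=True,
                         capture_output=True, text=True)
    return out.stdout

def run_scan_safe(target: str) -> str:
    # Argument-list form: no shell is involved, so `target` is passed
    # as one literal argv entry and cannot inject a second command.
    out = subprocess.run(["echo", "scanning", target],
                         capture_output=True, text=True)
    return out.stdout

injected = "example.com; echo PWNED"
print(run_scan_vulnerable(injected))  # the injected second command runs
print(run_scan_safe(injected))        # metacharacters stay literal
```

With the vulnerable variant, the `;` splits the interpolated string into two shell commands; with the argument-list variant, the same input is printed back verbatim as data.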
Recommendations
  • Automated analysis detected serious security threats. Before installing, review the findings above: sandbox or gate all shell-based tool execution, pin and verify third-party dependencies rather than installing from unverified sources, and add input validation and command escaping to every tool wrapper.
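One concrete mitigation for the missing sanitization noted in the findings is to validate inputs against an allowlist and build commands as argument lists rather than shell strings. A minimal sketch, in which the tool allowlist and the hostname pattern are assumptions rather than details from the skill:

```python
import re

# Hypothetical allowlist; a real deployment would enumerate its own tools.
ALLOWED_TOOLS = {"nmap", "sqlmap", "curl"}
# Conservative hostname pattern: alphanumerics, dots, and hyphens only,
# with alphanumeric first and last characters.
HOSTNAME_RE = re.compile(r"[A-Za-z0-9](?:[A-Za-z0-9.-]{0,251}[A-Za-z0-9])?")

def build_command(tool: str, target: str) -> list[str]:
    """Return an argv list suitable for subprocess.run without shell=True."""
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool not allowlisted: {tool}")
    if not HOSTNAME_RE.fullmatch(target):
        raise ValueError(f"target failed validation: {target}")
    return [tool, target]

print(build_command("nmap", "example.com"))  # ['nmap', 'example.com']
```

Rejecting anything outside a strict character class, and never re-joining the argv list into a shell string, closes the injection vector even when the AI loop chooses targets autonomously. Where a shell string is truly unavoidable, `shlex.quote` is the standard-library escape hatch.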
Audit Metadata
Risk Level
HIGH
Analyzed
Apr 23, 2026, 04:35 AM