onboard
Fail
Audited by Gen Agent Trust Hub on Feb 19, 2026
Risk Level: HIGHCREDENTIALS_UNSAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- CREDENTIALS_UNSAFE (HIGH): The skill workflow in Step 2 explicitly instructs the agent to 'Look for and read' configuration files including '.env*'. Accessing sensitive files known to contain secrets like DATABASE_URL or API_KEY (which are even used as examples in the skill's template) poses a high risk of credential exposure if the agent includes actual values in the output guide.
- PROMPT_INJECTION (LOW): The skill is vulnerable to indirect prompt injection as it ingests and analyzes untrusted data from a codebase to generate documentation. ● Ingestion points: Reads local source code files (.py, .js, .ts, etc.) and project manifests (package.json, pyproject.toml). ● Boundary markers: Absent; there are no instructions to delimit codebase content or ignore instructions found within the files. ● Capability inventory: The skill can execute shell commands (find, cat, ls) and generate markdown output. ● Sanitization: None; the content is processed directly to identify patterns and generate descriptions.
- COMMAND_EXECUTION (SAFE): The skill uses standard shell commands like 'find', 'cat', and 'ls' for project analysis. These are used in a limited, non-arbitrary way and are consistent with the skill's stated purpose of codebase onboarding.
Recommendations
- AI detected serious security threats
Audit Metadata