doc
Fail
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: HIGHCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION] (HIGH): The file
scripts/validate.shimplements acheckfunction that usesevalon its second argument. While the current usage in the script involves static strings, this is a dangerous coding pattern that facilitates arbitrary command execution if arguments are ever influenced by untrusted data. - [PROMPT_INJECTION] (HIGH): The skill is highly susceptible to Indirect Prompt Injection (Category 8).
- Ingestion points: The skill reads arbitrary source code (
.py,.go,.js, etc.) and markdown files viagrepandcatinSKILL.mdandreferences/validation-rules.md. - Boundary markers: There are no delimiters or instructions to ignore embedded commands within the processed data.
- Capability inventory: The skill can write files to the repository, create issues via
ghorbdCLI, and query cluster state viaoccommands. - Sanitization: No sanitization or validation of the ingested content is performed before it is used to generate reports or influence downstream actions. Malicious instructions hidden in code comments could lead to unauthorized file modifications or command execution.
- [COMMAND_EXECUTION] (MEDIUM): The skill invokes several external binaries including
oc(OpenShift),gh(GitHub),bd(Beads), and a specific local script~/.claude/scripts/doc-validate.py. The security of these dependencies is unverifiable, and the use of computed paths for the Python script introduces risk. - [DATA_EXFILTRATION] (LOW): While no direct network exfiltration was detected, the skill has broad read access to the local filesystem and cluster metadata. It aggregates this information into documentation files which could potentially expose sensitive architectural details if the documentation directory is public-facing.
Recommendations
- AI detected serious security threats
Audit Metadata