lore-creation-starting-skill
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH (PROMPT_INJECTION, COMMAND_EXECUTION)
Full Analysis
- [Indirect Prompt Injection] (HIGH): The skill facilitates the ingestion and processing of untrusted data from technical work (e.g., git commits) into an LLM context.
  - Ingestion points: Untrusted technical content is ingested via `git diff HEAD` and manual inputs in `lore-flow.sh` (SKILL.md).
  - Boundary markers: Absent. The workflow does not specify delimiters or instructions to ignore embedded commands within the technical content.
  - Capability inventory: The skill requests `Bash`, `Read`, `Write`, and `Edit` permissions (SKILL.md frontmatter), granting the agent the power to modify the system or execute code if the LLM is manipulated.
  - Sanitization: Absent. There is no evidence of filtering or validation of the technical data before it is sent to the LLM.
- [Command Execution] (MEDIUM): The skill relies on a suite of local shell scripts (`manage-lore.sh`, `lore-flow.sh`, `create-persona.sh`, `quick-lore.sh`) to perform its functions. Reliance on the `Bash` tool to execute these scripts provides a path for potential abuse if input strings are not correctly escaped or if the LLM is coerced into generating malicious shell commands.
- [Data Exposure] (LOW): The skill sends technical details to an external LLM provider (`LLM_PROVIDER=claude`). If a user accidentally commits secrets (API keys, credentials) to the repository, these may be exfiltrated to the LLM provider during the lore generation process.
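The escaping gap noted under Command Execution can be closed with single-quote wrapping before any model- or user-supplied string is interpolated into a shell command. A minimal POSIX-sh sketch, assuming strings are passed as function arguments (the helper name is hypothetical):

```shell
#!/bin/sh
# Hypothetical sketch: make an arbitrary string safe to splice into a
# shell command by single-quoting it and escaping embedded single quotes
# as '\'' so metacharacters like ';' or '$' stay inert.
shell_quote() {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
}
```

For example, `eval "touch $(shell_quote "$name")"` creates one file with the literal name even if `$name` contains `; rm -rf /`. Using argument arrays instead of string interpolation is the more robust design where the scripts allow it.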
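The Data Exposure finding suggests a pre-flight check before the diff leaves the machine. This sketch is an assumption about one possible mitigation, not part of the skill; the credential patterns are illustrative and deliberately incomplete:

```shell
#!/bin/sh
# Hypothetical pre-flight check: return success (0) if the given text
# appears to contain common credential patterns, so the caller can refuse
# to forward it to the external LLM provider.
diff_has_secrets() {
  printf '%s' "$1" | grep -E -q \
    -e 'AKIA[0-9A-Z]{16}' \
    -e '-----BEGIN (RSA |EC )?PRIVATE KEY-----' \
    -e '(api[_-]?key|secret|token)[[:space:]]*[:=][[:space:]]*[A-Za-z0-9_-]{16,}'
}
```

A caller in `lore-flow.sh` could gate the provider call with `if diff_has_secrets "$diff"; then echo "refusing: possible secret in diff" >&2; exit 1; fi`. Pattern matching only reduces accidental leakage; a dedicated secret scanner is the stronger option.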
Recommendations
- The automated analysis detected serious security threats in this skill; review the findings above before installing or running it.