llm-npc-dialogue
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [Prompt Injection] (SAFE): No malicious injection patterns detected. The skill itself advises on implementing defensive guardrails that prevent NPCs from breaking character or being manipulated by adversarial players.
- [Credentials Unsafe] (SAFE): The file references/validations.md contains regex patterns designed to identify hardcoded API keys in developer code during code review. It contains no actual secrets and no exfiltration logic.
- [Indirect Prompt Injection] (SAFE): The skill is designed to analyze external source code. It includes specific validation rules (Category 8 surface) that detect whether LLM responses are used without sanitization or whether conversation histories are unbounded, helping to secure the application being built.
- [Remote Code Execution] (SAFE): No external dependencies, script downloads, or dynamic execution patterns were found. The skill operates entirely as a set of static instructions and reference rules.
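The validations.md patterns themselves are not reproduced in this report. As a rough illustration of the kind of regex-based credential scan the Credentials finding describes (the pattern set and the helper `find_hardcoded_keys` are hypothetical, not taken from the skill):

```python
import re

# Illustrative patterns for likely hardcoded credentials; they flag
# secrets in reviewed code without containing any real secrets themselves.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),  # generic assignment
]

def find_hardcoded_keys(source: str) -> list[str]:
    """Return every substring in `source` matching a credential pattern."""
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits
```

A scan that returns no hits for clean code and flags literal key material is exactly the review-time behavior the audit judged safe: detection only, no secrets stored.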
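The Category 8 checks noted above concern two failure modes: LLM output consumed without sanitization, and conversation histories that grow without bound. A minimal sketch of what a compliant application might do (`MAX_TURNS`, `sanitize_reply`, and `append_turn` are illustrative names, not part of the skill):

```python
import html
import re

MAX_TURNS = 20  # illustrative cap; the skill does not specify a number

def sanitize_reply(reply: str) -> str:
    """Strip markup so an LLM reply can't smuggle tags into the app layer."""
    without_tags = re.sub(r"<[^>]+>", "", reply)  # drop HTML-like tags
    return html.escape(without_tags.strip())      # escape what remains

def append_turn(history: list[dict], role: str, text: str) -> list[dict]:
    """Record a turn, then truncate to the newest MAX_TURNS entries."""
    history.append({"role": role, "content": text})
    return history[-MAX_TURNS:]
```

Bounding the history also caps prompt size, which limits how much adversarial player text can accumulate in context over a long session.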