feedback
Pass
Audited by Gen Agent Trust Hub on Feb 20, 2026
Risk Level: SAFEDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [Indirect Prompt Injection] (LOW): The skill implements a learning mechanism that stores patterns from user interactions in 'learned-patterns.json'. This creates a surface for indirect prompt injection where malicious input could influence the 'learned' behavior and lead to unsafe auto-approvals. 1. Ingestion points: Data is ingested into 'learned-patterns.json' and 'metrics.json' (references/file-locations.md). 2. Boundary markers: No specific boundary markers or 'ignore' instructions are mentioned for the data being learned. 3. Capability inventory: The skill is designed to influence auto-approval of commands and tracks agent performance across various tasks (rules/consent-and-security.md). 4. Sanitization: While PII scanning is mentioned for outgoing analytics, there is no documented sanitization for the patterns being learned locally.
- [Data Exposure & Exfiltration] (LOW): The skill includes an optional network transmission component ('analytics-sender.sh') to send aggregated data to a remote endpoint. Although it requires explicit opt-in and claims to anonymize data, it establishes a functional path for data exfiltration. It also interacts with a global configuration file ('~/.claude/global-patterns.json') outside the project scope.
Audit Metadata