prompt-caching
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
- [Indirect Prompt Injection] (HIGH): The skill handles untrusted user data via variables like
userQueryinpatterns.mdandCAGSystem.query. There are no boundary markers or sanitization steps before this data is interpolated into prompts. - Ingestion points:
userQueryinpatterns.mdandCAGSystem.query. - Boundary markers: Absent in all code examples.
- Capability inventory: Network operations to Anthropic API and Redis databases.
- Sanitization: None detected in the provided patterns.
- [External Downloads] (LOW): Implementation patterns reference external Node.js dependencies including
@anthropic-ai/sdkandioredis. While these are standard, they represent external code that must be verified. - [Credentials Unsafe] (INFO): The code snippets use
process.env.REDIS_URL. While this is a recommended practice over hardcoding, it reminds users to protect their environment configurations from exposure.
Recommendations
- AI detected serious security threats
Audit Metadata