langchain4j-ai-services-patterns

Pass

Audited by Gen Agent Trust Hub on Feb 23, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill facilitates indirect prompt injection by demonstrating architectural patterns that interpolate untrusted data directly into LLM prompts. * Ingestion points: The skill defines interfaces in SKILL.md and references/examples.md that accept user-controlled strings (e.g., 'customerMessage', 'feedback', 'userMessage'). * Boundary markers: The provided templates (e.g., '{{it}}', '{{text}}') lack delimiters or instructions to ignore embedded commands within the variables. * Capability inventory: The skill requests broad permissions in SKILL.md metadata, including 'Bash', 'Write', and 'Edit', which significantly increases the potential impact of an injection. * Sanitization: There is no evidence of input validation or sanitization within the provided patterns or code examples.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 23, 2026, 11:33 PM