langchain4j-ai-services-patterns

Pass

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: No security issues detected. The skill provides architectural patterns for integrating LLMs into Java applications using the LangChain4j library. All code samples follow standard development practices.
  • [INDIRECT_PROMPT_INJECTION]: The skill defines patterns for processing untrusted user input and connecting LLMs to external tools, which creates an attack surface for indirect prompt injection.
  • Ingestion points: Java methods like chat(String userMessage) and handleInquiry(String customerMessage) in SKILL.md and references/examples.md ingest external data into the prompt context.
  • Boundary markers: Prompts are structured using @SystemMessage and @UserMessage annotations with template variables (e.g., {{text}}) to delimit instructions from data.
  • Capability inventory: The skill documents tool integration allowing LLMs to execute Java methods (e.g., Calculator, WeatherService, DateService) and lists Bash, Write, and Edit as allowed tools in the manifest.
  • Sanitization: The skill includes explicit safety warnings in the 'Constraints and Warnings' section, advising developers to 'Implement validation for user inputs' and 'validate AI-generated outputs before use in production systems'.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 1, 2026, 07:09 AM