langchain4j-ai-services-patterns
Pass
Audited by Gen Agent Trust Hub on Apr 1, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: No security issues detected. The skill provides architectural patterns for integrating LLMs into Java applications using the LangChain4j library. All code samples follow standard development practices.
- [INDIRECT_PROMPT_INJECTION]: The skill defines patterns for processing untrusted user input and connecting LLMs to external tools, which creates an attack surface for indirect prompt injection.
- Ingestion points: Java methods like `chat(String userMessage)` and `handleInquiry(String customerMessage)` in `SKILL.md` and `references/examples.md` ingest external data into the prompt context.
- Boundary markers: Prompts are structured using `@SystemMessage` and `@UserMessage` annotations with template variables (e.g., `{{text}}`) to delimit instructions from data.
- Capability inventory: The skill documents tool integration that allows LLMs to execute Java methods (e.g., `Calculator`, `WeatherService`, `DateService`) and lists `Bash`, `Write`, and `Edit` as allowed tools in the manifest.
- Sanitization: The skill includes explicit safety warnings in the 'Constraints and Warnings' section, advising developers to 'Implement validation for user inputs' and 'validate AI-generated outputs before use in production systems'.
Audit Metadata