langchain4j-spring-boot-integration

Pass

Audited by Gen Agent Trust Hub on Mar 25, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [INDIRECT_PROMPT_INJECTION]: The skill defines several AI service interfaces (e.g., CustomerSupportAssistant and RagAssistant in SKILL.md) that interpolate untrusted user input directly into prompt templates using placeholders like {{customerMessage}} or {{question}}. Without explicit boundary markers (like XML tags or triple quotes) or input sanitization, malicious user input could potentially attempt to override system instructions. This is particularly relevant as the agent environment is granted powerful tools.
  • Ingestion points: Method parameters in interfaces annotated with @AiService found in SKILL.md and references/examples.md (e.g., handleInquiry(String customerMessage)).
  • Boundary markers: The provided examples do not demonstrate the use of delimiters or specific instructions to ignore embedded commands within the variables.
  • Capability inventory: According to the SKILL.md frontmatter, the agent has access to Bash, Write, Edit, Read, Glob, and Grep tools.
  • Sanitization: No explicit sanitization or validation logic is included in the tutorial code snippets.
  • [COMMAND_EXECUTION]: The skill's configuration explicitly allows the use of the Bash tool. This is expected for a development-focused skill that handles Spring Boot integration, build processes, and testing, but it increases the impact if a prompt injection occurs.
  • [EXTERNAL_DOWNLOADS]: The instructions guide the user to include external dependencies such as dev.langchain4j:langchain4j-spring-boot-starter and dev.langchain4j:langchain4j-open-ai-spring-boot-starter. These are legitimate, well-known libraries from the LangChain4j project, used according to standard Java development practices.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 25, 2026, 03:38 PM