langchain4j-tool-function-calling-patterns

Pass

Audited by Gen Agent Trust Hub on Feb 23, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill documentation and metadata (SKILL.md) describe tools with high-privilege capabilities, including a 'Bash' tool and database write operations. These tools allow an AI agent to execute system-level commands and modify persistent data based on model-generated input.
  • [PROMPT_INJECTION]: The skill architecture is susceptible to indirect prompt injection because it demonstrates patterns where untrusted user input is interpolated into tool arguments.
  • Ingestion points: User messages are ingested via assistant.chat(query) and assistant.help(query) across various implementation examples in references/examples.md and references/implementation-patterns.md.
  • Boundary markers: The provided code snippets do not implement explicit boundary markers (e.g., XML tags or specific delimiters) to separate user data from system instructions within the tool-calling logic.
  • Capability inventory: The skill enables significant capabilities including file system modification (Write, Edit), system command execution (Bash), and database manipulation (databaseService.executeQuery, repository.save).
  • Sanitization: While the 'Security Considerations' section in SKILL.md recommends input sanitization, the majority of the code examples (e.g., the ApiTools in SKILL.md) pass user-influenced parameters directly to external systems without validation logic.
  • [EXTERNAL_DOWNLOADS]: The skill references well-known and trusted technology services including OpenAI, LangChain4j, and Spring Boot for its implementation examples. All external references target official documentation or established library repositories.
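The two gaps flagged above (no boundary markers, no input validation before tool arguments reach external systems) can both be addressed with small guard helpers. The sketch below is a hypothetical, self-contained illustration, not LangChain4j API: the names `wrapUntrusted` and `requireSafeIdentifier` are invented for this example, and a real implementation would apply them inside the skill's tool methods before calling anything like `databaseService.executeQuery`.

```java
// Minimal sketch of two mitigations the audit notes are missing:
// (1) explicit boundary markers separating untrusted user text from instructions,
// (2) allow-list validation before a model-chosen value reaches a query.
// All names here are hypothetical, not part of LangChain4j.
public class ToolInputGuards {

    // Wrap untrusted input in explicit delimiters, escaping angle brackets
    // so the text cannot fake a closing boundary tag of its own.
    public static String wrapUntrusted(String userText) {
        String neutralized = userText.replace("<", "&lt;");
        return "<user_input>\n" + neutralized + "\n</user_input>";
    }

    // Allow-list check: only identifiers matching a strict pattern pass,
    // rejecting anything a model might interpolate into SQL or a shell command.
    public static String requireSafeIdentifier(String candidate) {
        if (!candidate.matches("[A-Za-z_][A-Za-z0-9_]{0,63}")) {
            throw new IllegalArgumentException("rejected identifier: " + candidate);
        }
        return candidate;
    }
}
```

A tool method would then accept only `requireSafeIdentifier(tableName)` rather than the raw model output, and the system prompt would instruct the model that text inside `<user_input>` tags is data, never instructions.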
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Feb 23, 2026, 11:39 PM