skills/giuseppe-trisciuoglio/developer-kit-claude-code/langchain4j-tool-function-calling-patterns/Gen Agent Trust Hub
langchain4j-tool-function-calling-patterns
Pass
Audited by Gen Agent Trust Hub on Feb 23, 2026
Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [COMMAND_EXECUTION]: The skill documentation and metadata (
SKILL.md) describe the implementation of tools with high-privilege capabilities, including a 'Bash' tool and database write operations. These tools allow an AI agent to execute system-level commands and modify persistent data based on model-generated input. - [PROMPT_INJECTION]: The skill architecture is susceptible to indirect prompt injection because it demonstrates patterns where untrusted user input is interpolated into tool arguments.
- Ingestion points: User messages are ingested via
assistant.chat(query)andassistant.help(query)across various implementation examples inreferences/examples.mdandreferences/implementation-patterns.md. - Boundary markers: The provided code snippets do not implement explicit boundary markers (e.g., XML tags or specific delimiters) to separate user data from system instructions within the tool-calling logic.
- Capability inventory: The skill enables significant capabilities including file system modification (
Write,Edit), system command execution (Bash), and database manipulation (databaseService.executeQuery,repository.save). - Sanitization: While the 'Security Considerations' section in
SKILL.mdrecommends input sanitization, the majority of the code examples (e.g., theApiToolsinSKILL.md) pass user-influenced parameters directly to external systems without validation logic. - [EXTERNAL_DOWNLOADS]: The skill references well-known and trusted technology services including OpenAI, LangChain4j, and Spring Boot for its implementation examples. All external references target official documentation or established library repositories.
Audit Metadata