spring-ai

Pass

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill provides code patterns for passing user-supplied input directly into LLM prompts (e.g., ChatService.chat and PromptTemplate examples in SKILL.md), representing a standard surface for indirect prompt injection common to AI applications.
  • Ingestion points: Java methods such as chat(String message) and generatePrompt(String style, String question) receive untrusted input from method arguments.
  • Boundary markers: The provided examples do not use specific delimiters or instructions to ignore embedded commands in the input data.
  • Capability inventory: The skill documents integration with external AI service APIs (OpenAI, Anthropic, Azure).
  • Sanitization: No explicit input sanitization or verification logic is shown in the code snippets.
  • [EXTERNAL_DOWNLOADS]: Mentions official Spring AI artifacts from the org.springframework.ai group intended for download from standard Maven/Gradle repositories.
  • [CREDENTIALS_UNSAFE]: Recommended configuration patterns use environment variable placeholders (e.g., ${OPENAI_API_KEY}) rather than hardcoded secrets, adhering to security best practices.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 1, 2026, 07:32 AM