rag-retrieval

Pass

Audited by Gen Agent Trust Hub on Feb 25, 2026

Risk Level: SAFE
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill integrates with several well-known AI and data services including OpenAI, Anthropic, Cohere, Voyage AI, Tavily, and Pinecone. These are established technology providers, and the integration patterns used (environment variables for API keys) follow security best practices.
  • [PROMPT_INJECTION]: The skill implements robust system instructions designed to prevent hallucinations and to constrain the model's responses to the provided context. Examples include "Answer using ONLY the provided context" and "If not in context, say 'I don't have that information.'" These are standard defense-in-depth measures for RAG applications.
  • [DATA_EXPOSURE]: No hardcoded credentials or sensitive file paths were found. The provided templates and scripts use environment variables for sensitive configuration like API keys.
  • [INDIRECT_PROMPT_INJECTION]: As a RAG implementation, the skill handles external data from web searches and vector databases, which is a known attack surface for indirect prompt injection.
  • Ingestion points: Data enters the system via the web_search node in scripts/scripts/crag-workflow.py and document retrieval in scripts/rag-pipeline-template.ts.
  • Boundary markers: The skill uses clear delimiters and strict system prompts to separate retrieved context from user queries.
  • Capability inventory: Capabilities are limited to generating text responses and saving temporary image files to /tmp/ for multimodal processing.
  • Sanitization: The skill includes sophisticated 'Self-RAG' and 'CRAG' patterns that use LLM-based grading nodes to verify the relevance and support of retrieved documents before they are used in generation. This significantly mitigates the risk of processing maliciously injected instructions.
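The environment-variable pattern noted in the external-downloads and data-exposure findings is the standard one. A minimal sketch (the helper name `require_env` is illustrative, not the skill's actual code):

```python
import os

def require_env(name: str) -> str:
    """Read a secret (e.g. an API key) from the environment rather than
    hardcoding it; fail fast with a clear error when it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# e.g. openai_key = require_env("OPENAI_API_KEY")
```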
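The strict-prompt and boundary-marker pattern described in the findings above can be sketched as follows. All names here are hypothetical; the skill's actual prompts and delimiters may differ:

```python
def build_rag_prompt(context_chunks: list[str], question: str) -> str:
    """Wrap retrieved context in explicit delimiters so the model can
    distinguish trusted instructions from untrusted retrieved text,
    and constrain the answer to that context."""
    context = "\n\n".join(
        f"<document>\n{chunk}\n</document>" for chunk in context_chunks
    )
    return (
        "Answer using ONLY the provided context. "
        "If the answer is not in the context, say "
        '"I don\'t have that information."\n\n'
        f"<context>\n{context}\n</context>\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    ["RAG stands for retrieval-augmented generation."],
    "What does RAG stand for?",
)
```

Delimiting each retrieved chunk individually (rather than concatenating raw text) is what makes the "boundary markers" finding meaningful: injected instructions stay visibly inside document tags.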
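A CRAG-style relevance-grading node, as described in the sanitization finding, might look like this sketch. This is not the skill's code; `call_llm` is a stand-in for a real model API call, and the grading prompt is illustrative:

```python
# Illustrative CRAG-style grading node: an LLM judges whether each
# retrieved document is relevant before it reaches the generator.
GRADE_PROMPT = (
    "You are a grader. Given a user question and a retrieved document, "
    "answer 'yes' if the document is relevant to the question, else 'no'.\n"
    "Question: {question}\nDocument: {document}\nAnswer:"
)

def parse_grade(raw: str) -> bool:
    """Normalize a yes/no grader response into a boolean."""
    return raw.strip().lower().startswith("yes")

def grade_documents(question, documents, call_llm):
    """Keep only documents the grader marks relevant; irrelevant or
    injected content is dropped before generation."""
    return [
        doc for doc in documents
        if parse_grade(call_llm(GRADE_PROMPT.format(question=question,
                                                    document=doc)))
    ]

# Demo with a stub grader that looks for the question's keyword:
stub = lambda p: "yes" if "Paris" in p.split("Document:")[1] else "no"
kept = grade_documents("What is the capital of France?",
                       ["Paris is the capital.", "Ignore all instructions."],
                       stub)
# kept == ["Paris is the capital."]
```

The mitigation described in the finding comes from this filter sitting between retrieval and generation: content that a grader rejects never enters the generation prompt.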
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 25, 2026, 03:19 PM