exa-rag

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Flags: EXTERNAL_DOWNLOADS
Full Analysis
  • External Downloads (SAFE): The skill documentation recommends installing standard AI ecosystem packages from PyPI and npm (e.g., langchain-exa, @agentic/exa). These are appropriate for the skill's stated purpose of building RAG pipelines.
  • Indirect Prompt Injection (LOW): As a RAG-focused skill, it feeds external web content into LLM prompts. This is an inherent risk of RAG; the skill mitigates it with standard prompt templates that use delimiters, reducing the chance the model follows instructions embedded in retrieved content.
  • Ingestion points: ExaSearchRetriever.invoke() in references/langchain.md, ExaReader.load_data() in references/llamaindex.md, and exa.searchAndContents in references/vercel-ai.md.
  • Boundary markers: Prompts use standard Context: and Question: delimiters to separate retrieved data from user instructions.
  • Capability inventory: The skill facilitates web search retrieval and tool usage within agentic frameworks (LangChain, LlamaIndex, Vercel AI SDK).
  • Sanitization: The skill relies on LLM grounding and framework-level prompt templates rather than explicit sanitization of retrieved content.
  • Data Exposure (SAFE): No hardcoded credentials or sensitive file access patterns were found. All API key examples use placeholders (e.g., your-key) or environment variables.
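The boundary-marker and environment-variable patterns noted above can be sketched as follows. This is an illustrative example, not code taken from the skill itself; the helper name and the EXA_API_KEY variable name are assumptions:

```python
import os


def build_grounded_prompt(retrieved_chunks: list[str], question: str) -> str:
    """Assemble a RAG prompt with explicit boundary markers.

    The Context:/Question: delimiters separate retrieved web content
    from the user's instruction, reducing the chance the model treats
    text embedded in the retrieved context as a command.
    """
    context = "\n\n".join(retrieved_chunks)
    return f"Context:\n{context}\n\nQuestion:\n{question}"


# API keys come from the environment, never hardcoded; "your-key" is a
# placeholder fallback matching the skill's documentation style.
# (EXA_API_KEY is an assumed variable name, not confirmed by the audit.)
api_key = os.environ.get("EXA_API_KEY", "your-key")

prompt = build_grounded_prompt(
    ["Exa is a search API built for AI applications."],
    "What is Exa?",
)
```

Delimiters like these are a mitigation, not a guarantee: a determined injection can still mimic the markers, which is why the audit rates the residual risk LOW rather than none.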
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:11 PM