building-rag-systems

Pass

Audited by Gen Agent Trust Hub on Mar 6, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill describes an architecture with a clear surface for indirect prompt injection, which is a standard risk for RAG systems.
  • Ingestion points: Data is ingested from various untrusted sources including Word documents, PDFs, CSVs, and video/audio transcriptions as detailed in references/DATA-LOADING.md.
  • Boundary markers: The prompt templates provided in the code examples (e.g., for table summarization in references/DATA-LOADING.md and virtual question generation in references/DATA-PREPARATION.md) do not use delimiters or explicit instructions to prevent the LLM from obeying commands embedded within the retrieved text.
  • Capability inventory: The documented systems have the capability to perform network operations (OpenAI API calls) and file system reads/writes.
  • Sanitization: No logic for sanitizing, escaping, or validating the content of external files is provided before the data is included in the prompt context.
  • [EXTERNAL_DOWNLOADS]: The skill uses hub.pull to fetch a prompt template from the well-known LangChain Hub service in references/DATA-PREPARATION.md. This is a standard operation for the mentioned framework.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 6, 2026, 02:12 PM