llamaindex

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION, DATA_EXFILTRATION, EXTERNAL_DOWNLOADS
Full Analysis
• PROMPT_INJECTION (LOW): The documentation defines patterns for creating agents that ingest untrusted data from external sources, creating a surface for indirect prompt injection.
  • Ingestion points: The skill illustrates the use of SimpleWebPageReader, BeautifulSoupWebReader, and GithubRepositoryReader (references/data_connectors.md) to load data from arbitrary URLs and repositories into the LLM context.
  • Boundary markers: Examples in references/query_engines.md show the use of delimiters (e.g., ---------------------) in prompt templates to separate context from instructions, which is a recommended mitigation.
  • Capability inventory: FunctionAgent (references/agents.md) is capable of executing Python functions as tools, which could be triggered by instructions found within ingested data.
  • Sanitization: No explicit sanitization or filtering logic is demonstrated for external content before its use in RAG pipelines.
• DATA_EXFILTRATION (LOW): The skill documentation includes examples of network operations targeting external domains for data retrieval.
  • Evidence: BeautifulSoupWebReader is shown accessing non-whitelisted domains such as https://docs.python.org and https://numpy.org in references/data_connectors.md.
• EXTERNAL_DOWNLOADS (LOW): The documentation guides users to install additional LlamaIndex reader packages from PyPI.
  • Evidence: The file references/data_connectors.md contains the command pip install llama-index-readers-notion.
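The boundary-marker mitigation noted above can be sketched in plain Python. The delimiter string and prompt wording below mirror the pattern cited from references/query_engines.md; the function name and surrounding text are illustrative, not LlamaIndex API:

```python
# Sketch of the boundary-marker pattern: untrusted retrieved context is fenced
# between dashed delimiters so the model can distinguish it from the trusted
# instruction text. Names here are illustrative.
CONTEXT_DELIM = "-" * 21  # i.e. "---------------------"

def build_qa_prompt(context_str: str, query_str: str) -> str:
    """Assemble a RAG prompt that separates retrieved context from instructions."""
    return (
        "Context information is below.\n"
        f"{CONTEXT_DELIM}\n"
        f"{context_str}\n"
        f"{CONTEXT_DELIM}\n"
        "Given the context information and not prior knowledge, answer the query.\n"
        f"Query: {query_str}\n"
        "Answer: "
    )

prompt = build_qa_prompt("Doc text from an external URL.", "What does the doc say?")
```

Delimiters reduce but do not eliminate injection risk: a malicious document can still contain text that imitates the markers or instructions.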
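The sanitization gap flagged above could be narrowed with a pre-ingestion heuristic. The following is a hypothetical filter, not part of LlamaIndex; the phrase list is a small illustrative sample, not a complete defense:

```python
import re

# Hypothetical pre-ingestion filter (not part of LlamaIndex): flag documents
# whose text contains instruction-like phrases that commonly signal indirect
# prompt injection. A real deployment would need a broader rule set.
_SUSPICIOUS = re.compile(
    r"ignore (all |any )?(previous|prior) instructions"
    r"|disregard (the )?(system|above) prompt",
    re.IGNORECASE,
)

def looks_injected(text: str) -> bool:
    """Return True if the text matches a known injection phrase."""
    return bool(_SUSPICIOUS.search(text))
```

Such a check would run on reader output before documents enter the index, quarantining matches for review rather than silently dropping them.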
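For the non-whitelisted-domain finding, a host allowlist is one stdlib-only mitigation sketch. The allowlist contents and function name are assumptions for illustration; LlamaIndex readers do not enforce this themselves:

```python
from urllib.parse import urlparse

# Hypothetical allowlist check (not part of LlamaIndex): restrict which hosts
# a web reader may fetch, since the documented examples access arbitrary
# external domains. The set below is illustrative only.
ALLOWED_HOSTS = {"docs.python.org", "numpy.org"}

def url_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist."""
    host = urlparse(url).hostname
    return host is not None and host in ALLOWED_HOSTS
```

Filtering URLs through such a gate before passing them to a reader's load_data call limits both ingestion of untrusted pages and outbound requests to unexpected domains.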
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 17, 2026, 04:51 PM