llamaindex
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH · Tags: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
- Indirect Prompt Injection (HIGH): The skill defines patterns for ingesting untrusted external data that could contain malicious instructions.
  - Ingestion points: Data is loaded from arbitrary URLs via `SimpleWebPageReader` and from GitHub repositories via `GithubRepositoryReader` in `references/data_connectors.md` (see the sketch after this list).
  - Boundary markers: Minimal boundary markers are used (simple delimiters in `references/query_engines.md`), which are insufficient to prevent an LLM from following instructions embedded in the retrieved text.
  - Capability inventory: The `FunctionAgent` in `references/agents.md` can execute tools/functions. If an attacker injects instructions into a web page the agent reads, they could trick the agent into calling functions with malicious arguments.
  - Sanitization: The provided examples do not demonstrate any sanitization or validation of the content retrieved from external sources.
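The ingestion pattern flagged above can be illustrated with a minimal sketch. This is not code from the skill's reference files; it assumes the publicly documented `SimpleWebPageReader` and `VectorStoreIndex` APIs with their default LLM/embedding settings, and the URL is a placeholder standing in for attacker-controlled content:

```python
from llama_index.core import VectorStoreIndex
from llama_index.readers.web import SimpleWebPageReader  # pip install llama-index-readers-web

# Content fetched from an arbitrary URL is treated as trusted context:
# any instructions embedded in the page flow straight into the LLM prompt.
documents = SimpleWebPageReader(html_to_text=True).load_data(
    urls=["https://example.com/untrusted-page"]  # placeholder for untrusted input
)

index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# No sanitization of the retrieved text, and only weak delimiters separate
# it from the user's question inside the final prompt.
print(query_engine.query("Summarize this page."))
```

Nothing in this flow distinguishes page content from instructions, which is why the audit treats weak delimiters as an insufficient boundary.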
- Command Execution (MEDIUM): The skill promotes the use of `FunctionAgent`, which delegates execution authority to the LLM. While the examples use a benign `multiply` function, this pattern allows for arbitrary tool execution, which is dangerous when combined with the ingestion of untrusted data described above (a sketch of the pattern follows).
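A minimal sketch of that delegation pattern is shown below, assuming the `llama_index.core.agent.workflow.FunctionAgent` API; the OpenAI model name is chosen for illustration, and the benign `multiply` tool mirrors the example the audit describes, but the same wiring hands the LLM control over any tool placed in the list:

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI  # model name below is illustrative


def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b


agent = FunctionAgent(
    tools=[multiply],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant that can multiply numbers.",
)


async def main() -> None:
    # The LLM decides which tool to call and with which arguments; injected
    # instructions in retrieved context could steer those decisions.
    response = await agent.run("What is 1234 * 4567?")
    print(str(response))


asyncio.run(main())
```

Swapping `multiply` for a tool with side effects (file writes, HTTP requests, shell commands) turns the injection risk above into command execution, which is the combination the audit warns about.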
Recommendations
- The AI audit detected serious security threats in this skill.
Audit Metadata