moai-ml-rag

Warn

Audited by Gen Agent Trust Hub on Mar 2, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONREMOTE_CODE_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill requests permissions for the 'Bash' tool in the YAML frontmatter. Combined with the numerous Python and shell-style code blocks throughout the document, this presents a risk where an agent might attempt to execute these snippets directly in the host environment.
  • [REMOTE_CODE_EXECUTION]: The 'Performance Optimization' section provides a Python example using 'pickle.loads()' to retrieve data from a Redis cache. This constitutes an unsafe deserialization pattern; if an attacker gains control over the Redis instance or the key-value store, they could inject malicious serialized objects leading to arbitrary code execution.
  • [PROMPT_INJECTION]: The skill defines an architecture for processing untrusted external content (documents and user queries).
  • Ingestion points: Untrusted data enters the context through 'vectorstore.from_documents' and 'qa.run()' calls in 'SKILL.md'.
  • Boundary markers: The provided code examples lack explicit delimiters (e.g., XML tags or clear separators) to prevent the LLM from confusing document content with instructions.
  • Capability inventory: The skill allows the use of 'Bash', 'WebSearch', and 'WebFetch' tools.
  • Sanitization: There is no evidence of input validation or content filtering in the provided examples, creating a surface for indirect prompt injection.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 2, 2026, 05:14 PM