faion-ml-engineer

Warn

Audited by Gen Agent Trust Hub on Mar 1, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [COMMAND_EXECUTION]: Several code examples implement tools that use the Python eval() function to process strings generated by the LLM. In autonomous-agents/examples.md and tool-use-function-calling/examples.md, the calculate tool executes eval() on raw input without any character filtering or sanitization. This allows for arbitrary Python code execution if the input string is manipulated by an attacker.
  • [PROMPT_INJECTION]: The skill has a broad attack surface for indirect prompt injection due to its heavy focus on ingesting data from external, untrusted sources such as web pages, PDFs, and code repositories (e.g., in llamaindex/examples.md). Many provided prompt templates in llm-prompts.md do not include strict boundary markers or instructions to ignore embedded commands within the retrieved context.
  • [EXTERNAL_DOWNLOADS]: The local-llm-ollama/checklist.md file includes a recommendation to install the Ollama service using a piped shell script (curl -fsSL https://ollama.com/install.sh | sh). While this targets a well-known service, executing remote scripts directly in the shell is a high-risk installation pattern.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 1, 2026, 04:33 PM