faion-ml-engineer
Warn
Audited by Gen Agent Trust Hub on Mar 1, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
- [COMMAND_EXECUTION]: Several code examples implement tools that use the Python
eval()function to process strings generated by the LLM. Inautonomous-agents/examples.mdandtool-use-function-calling/examples.md, thecalculatetool executeseval()on raw input without any character filtering or sanitization. This allows for arbitrary Python code execution if the input string is manipulated by an attacker. - [PROMPT_INJECTION]: The skill has a broad attack surface for indirect prompt injection due to its heavy focus on ingesting data from external, untrusted sources such as web pages, PDFs, and code repositories (e.g., in
llamaindex/examples.md). Many provided prompt templates inllm-prompts.mddo not include strict boundary markers or instructions to ignore embedded commands within the retrieved context. - [EXTERNAL_DOWNLOADS]: The
local-llm-ollama/checklist.mdfile includes a recommendation to install the Ollama service using a piped shell script (curl -fsSL https://ollama.com/install.sh | sh). While this targets a well-known service, executing remote scripts directly in the shell is a high-risk installation pattern.
Audit Metadata