faion-ai-agents
Fail
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: HIGH
Tags: COMMAND_EXECUTION, REMOTE_CODE_EXECUTION, DATA_EXFILTRATION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION] (HIGH): The skill metadata in SKILL.md and langchain-agents-architectures/README.md requests access to the Bash tool, allowing arbitrary command execution on the host system. This is a high-privilege capability that increases the impact of any agent subversion.
- [REMOTE_CODE_EXECUTION] (HIGH): Multiple documentation files and code examples (e.g., in langchain-agents-architectures/README.md and SKILL.md) demonstrate implementing a calculator tool using the eval() function. This pattern allows arbitrary code execution if untrusted or LLM-generated strings are passed to the function, posing a critical security risk.
- [DATA_EXFILTRATION] (LOW): The skill includes instructions for connecting to various external services such as GitHub, Slack, Notion, and general web URLs (documented in llamaindex-basics/README.md). These ingestion capabilities, while standard for RAG, create a potential surface for data exfiltration if output filters or egress controls are not implemented.
- [PROMPT_INJECTION] (LOW): Indirect Prompt Injection Surface Analysis:
  - Ingestion points: llamaindex-basics/README.md (SimpleWebPageReader, GithubRepositoryReader, SlackReader, DatabaseReader).
  - Boundary markers: Absent in provided templates and examples.
  - Capability inventory: Bash tool access, eval() in tool examples, Write/Edit file operations.
  - Sanitization: No explicit sanitization or validation of data from external connectors is demonstrated in the provided examples.
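The REMOTE_CODE_EXECUTION finding concerns calculator tools built on eval(). A minimal sketch of a safer alternative is shown below: instead of eval(), the expression is parsed with Python's ast module and evaluated by walking the tree, rejecting anything that is not plain arithmetic. The function name safe_calculate is illustrative, not taken from the audited skill.

```python
import ast
import operator

# Allowed arithmetic operators; any other AST node type is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a pure-arithmetic expression without eval().

    Raises ValueError for names, calls, attribute access, or any
    construct outside basic arithmetic, so LLM-generated strings
    cannot trigger code execution.
    """
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Disallowed expression element: {type(node).__name__}")

    return _eval(ast.parse(expression, mode="eval"))
```

Unlike eval(), this rejects payloads such as `__import__('os')` with a ValueError while still handling ordinary arithmetic like `(1 + 2) * 3`.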
Recommendations
- Replace the eval()-based calculator examples with a restricted expression evaluator to remove the remote code execution pattern.
- Limit or sandbox the Bash tool access requested in SKILL.md to reduce the impact of agent subversion.
- Wrap content ingested from external connectors (GitHub, Slack, Notion, web) in explicit boundary markers and sanitize it before prompt insertion.
- Add output filtering or egress controls to the external-service integrations to mitigate the data exfiltration surface.
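The boundary-marker recommendation can be sketched as a small helper that delimits untrusted retrieved content before it reaches an LLM prompt, so downstream instructions can tell the model to treat the delimited region as data. The function and marker strings below are illustrative assumptions, not part of the audited skill.

```python
def wrap_untrusted(content: str, source: str) -> str:
    """Delimit untrusted connector output with explicit boundary markers.

    The surrounding system prompt should instruct the model never to
    follow instructions found between these markers.
    """
    return (
        f"<<<UNTRUSTED_CONTENT source={source}>>>\n"
        f"{content}\n"
        f"<<<END_UNTRUSTED_CONTENT>>>"
    )
```

For example, text returned by a SlackReader would be passed through `wrap_untrusted(text, "slack")` before being concatenated into the prompt.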