provider-integration-templates

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGHREMOTE_CODE_EXECUTIONCOMMAND_EXECUTION
Full Analysis
  • Dynamic Execution (HIGH): Multiple templates use the eval() function to implement mathematical calculation tools, creating a significant risk of arbitrary code execution. \n
  • In templates/openai-functions.ts, the calculate implementation uses eval(args.expression) on raw input without any validation or sanitization. \n
  • In templates/vercel-tools-config.ts, the calculatorTool uses eval(expression) on raw input without any validation. \n
  • In templates/langchain-agent.py, the calculate tool uses eval(operation) with a basic character whitelist, which is still a discouraged and potentially bypassable practice. \n- Indirect Prompt Injection (HIGH): The skill facilitates the ingestion of untrusted data through RAG and Agent templates while declaring broad permissions. \n
  • Ingestion points: templates/langchain-rag.py uses TextLoader to read documents; templates/langchain-agent.py processes user input via tool-calling agents. \n
  • Boundary markers: Templates lack delimiters or instructions to ignore embedded commands in the processed data. \n
  • Capability inventory: The skill requests Bash and Write permissions in SKILL.md. \n
  • Sanitization: No sanitization is implemented for data retrieved in RAG or processed by agents. \n- External Downloads (LOW): Setup scripts install dependencies using standard tools. \n
  • Scripts scripts/setup-langchain-integration.sh and scripts/setup-vercel-integration.sh use pip and npm/pnpm/yarn to install well-known libraries from official registries.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 06:00 AM