skills/cpave3/skills/evalite/Gen Agent Trust Hub

evalite

Pass

Audited by Gen Agent Trust Hub on Feb 28, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection due to the pattern of interpolating user-provided test data into scoring prompts. \n- Ingestion points: Test data enters the context via the data field in evalite() calls within .eval.ts files, as seen in SKILL.md and references/full-example.md. \n- Boundary markers: The LLM-as-judge example in references/llm-judge-example.md employs delimiters such as [BEGIN DATA] and string separators (************), which provide some isolation but are not foolproof. \n- Capability inventory: The skill utilizes a Vitest-based runner for code execution and makes network requests to LLM providers for task execution and scoring. \n- Sanitization: There is no evidence of automated sanitization or escaping of input data before it is embedded in the evaluation prompts. \n- [COMMAND_EXECUTION]: The skill documentation describes the use of CLI tools for setup and operation. \n- Evidence: Instructions include running pnpm add for installation and evalite or evalite watch for executing the evaluation suite. \n- [EXTERNAL_DOWNLOADS]: The skill relies on external packages from the npm registry for its core functionality. \n- Evidence: Usage of common libraries such as evalite, vitest, autoevals, and @ai-sdk/openai is required for the skill to operate.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 28, 2026, 04:00 AM