llm-usage-researcher
Pass
Audited by Gen Agent Trust Hub on Mar 29, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it processes untrusted 'test inputs' (like external code snippets or documentation) to evaluate the performance of different LLM models.\n
- Ingestion points: Test inputs are gathered from the user and processed in the evaluation loop (as defined in
SKILL.md).\n - Boundary markers: The instructions do not specify the use of delimiters or 'ignore embedded instructions' warnings for the data being evaluated.\n
- Capability inventory: The skill performs network operations via LLM API calls and writes multiple files to the local file system (
dashboard.html,comparison.tsv,results.json).\n - Sanitization: There is no evidence of sanitization or escaping of the external content before it is interpolated into the prompts for the target models.\n- [SAFE]: The generated
dashboard.htmlincludes references toChart.jsfrom a public CDN. This is a well-known and safe practice for providing interactive data visualization in generated reports.
Audit Metadata