testing-llm
Pass
Audited by Gen Agent Trust Hub on Apr 17, 2026
Risk Level: SAFEPROMPT_INJECTIONDATA_EXFILTRATION
Full Analysis
- [PROMPT_INJECTION]: The skill contains strings typical of prompt injection, such as "Ignore previous instructions", in
examples/llm-test-patterns.md. These are part of atest_injection_attemptfunction used to verify the security of target applications, and do not represent a threat to the agent itself. - [DATA_EXFILTRATION]: Documentation in
rules/llm-mocking.mdandchecklists/llm-test-checklist.mdinstructs users to filter sensitive headers (e.g., "authorization", "x-api-key") from recorded test data, demonstrating proper security hygiene. - [PROMPT_INJECTION]: An indirect prompt injection surface is present in the multi-agent workflow where code is generated from external Markdown specs and application DOM content.
- Ingestion points: Markdown files in
specs/and the DOM of the application under test. - Boundary markers: No explicit delimiters are used to separate input data from instructions in the generation process.
- Capability inventory: File system write access for test generation (
tests/*.spec.ts), web interaction via Playwright, and network access viaWebFetch. - Sanitization: No evidence of input sanitization for external specifications before their use in code generation.
Audit Metadata