llm-debugger

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • PROMPT_INJECTION (LOW): The skill is susceptible to indirect prompt injection (Category 8) because it ingests untrusted data and interpolates it into new prompts without sanitization.
  • Ingestion points: Functions like diagnose_failure, generate_test_cases, and suggest_prompt_fix accept raw string inputs from potentially untrusted LLM outputs.
  • Boundary markers: There are no delimiters (like XML tags or markdown blocks) or specific 'ignore embedded instructions' warnings used when processing the data.
  • Capability inventory: The skill code consists entirely of logic and string manipulation. No dangerous capabilities such as os.system, subprocess, file writing, or network requests are present.
  • Sanitization: The skill does not perform any escaping or validation on the input or output strings before concatenating them into the final prompt or test case objects.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 05:56 PM