skills/jeremylongshore/claude-code-plugins-plus-skills/model-explainability-tool/Gen Agent Trust Hub
model-explainability-tool
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHCOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
- COMMAND_EXECUTION (HIGH): The skill requests
Bash(python:*)permissions, which allows for the execution of arbitrary Python scripts. This creates a high-risk execution environment if the logic is influenced by untrusted data. - EXTERNAL_DOWNLOADS (HIGH): The skill requests
Bash(pip:*)permissions, enabling the download and installation of third-party packages from external registries. This introduces supply chain vulnerabilities and the potential for executing malicious dependencies. - PROMPT_INJECTION (HIGH): As a tool for model explainability, this skill is designed to ingest external models and datasets (Category 8). The lack of specified boundary markers or input sanitization, combined with the requested high-privilege tool access, creates a critical vulnerability to indirect prompt injection attacks where data content can control agent behavior.
- METADATA_POISONING (MEDIUM): The trigger section contains redundant keywords ('model explainability tool' repeated), which is a deceptive metadata pattern likely intended to manipulate agent activation or selection priority.
Recommendations
- AI detected serious security threats
Audit Metadata