dependency-management
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH
Threat categories: REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [Indirect Prompt Injection] (HIGH): The skill is highly vulnerable to indirect prompt injection because it ingests untrusted data from project dependency files (package.json, requirements.txt, pyproject.toml) and uses that data to perform high-privilege actions such as software installation and updates.
  - Ingestion points: The helper script scripts/parse_dependencies.py reads local files provided as arguments.
  - Boundary markers: No explicit boundary markers or instructions to ignore embedded malicious content are present in the scripts or the prompt instructions.
  - Capability inventory: The skill is authorized to execute arbitrary commands via several package managers (npm, pip, mvn, gradle, cargo, poetry) and to modify files.
  - Sanitization: While the Python script uses safe parsing methods (json.loads), it does not validate or sanitize package names, versions, or registry URLs before they are used in system commands, allowing a malicious file to influence the agent's execution flow.
- [Remote Code Execution] (MEDIUM): The skill's core functionality involves downloading and executing code from external registries. Although these registries are generally considered standard, automating the installation process via an AI agent significantly increases the risk of successful supply chain attacks or of executing malicious install scripts contained within packages.
- [Command Execution] (MEDIUM): The skill invokes multiple system-level tools (npm, pip, jq, etc.) based on the contents of the files it parses. Without strict validation or a human-in-the-loop for every command, there is a risk of unauthorized system modification.
Recommendations
- AI detected serious security threats
Audit Metadata