skills/jeremylongshore/claude-code-plugins-plus-skills/training-machine-learning-models/Gen Agent Trust Hub
training-machine-learning-models
Pass
Audited by Gen Agent Trust Hub on Mar 12, 2026
Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The skill configuration in
SKILL.mdrequests theBashtool withcmd:*permissions. This allows for the execution of arbitrary shell commands. While this permission is used to run the bundled ML scripts, it represents a high-privilege capability that could be misused if the agent is misled. - [PROMPT_INJECTION]: The skill processes external datasets, creating a surface for indirect prompt injection attacks.
- Ingestion points: Untrusted data is ingested by
scripts/train_model.pyandscripts/preprocess_data.pyas part of the automated training workflow. - Boundary markers: The prompt instructions do not include delimiters or specific guidance to ignore instructions embedded within the datasets.
- Capability inventory: The agent has access to
Bash(cmd:*),Read, andWritetools, which provide powerful execution capabilities if a dataset contains malicious instructions. - Sanitization: The provided boilerplate scripts lack logic to sanitize or validate dataset contents against potential embedded instructions.
Audit Metadata