training-machine-learning-models
Pass
Audited by Gen Agent Trust Hub on Apr 7, 2026
Risk Level: SAFE. Flagged categories: PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection when analyzing external datasets.
  - Ingestion points: data files and datasets provided for model training (e.g., assets/example_dataset.csv).
  - Boundary markers: the instructions lack delimiters or directives to ignore instructions embedded in data.
  - Capability inventory: broad shell access via the Bash tool, along with Read, Write, and Edit permissions.
  - Sanitization: there is no evidence of input validation or sanitization routines to filter malicious content from datasets.
- [COMMAND_EXECUTION]: The workflow executes Python scripts (e.g., train_model.py and evaluate_model.py) via the Bash tool to perform model training and evaluation in the shell environment.
- [EXTERNAL_DOWNLOADS]: The skill references well-known Python libraries (scikit-learn, pandas, numpy, tensorflow, torch) in assets/requirements.txt for installation via pip. These are established packages distributed through the default package index (PyPI).
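The missing sanitization noted under PROMPT_INJECTION could be partially addressed by screening dataset cells for instruction-like text before they reach a prompt. The sketch below is a minimal, illustrative filter: the pattern list and the `[REDACTED]` marker are assumptions, not part of the audited skill, and a production filter would need a broader, maintained rule set.

```python
import csv
import io
import re

# Hypothetical patterns that often signal embedded instructions in data.
# This list is illustrative only; real coverage requires ongoing curation.
SUSPICIOUS = re.compile(
    r"(ignore (all|previous|above) instructions|system prompt|"
    r"you are now|execute the following)",
    re.IGNORECASE,
)

def sanitize_cell(cell: str) -> str:
    """Redact instruction-like content from a single dataset cell."""
    return SUSPICIOUS.sub("[REDACTED]", cell)

def sanitize_csv(text: str) -> str:
    """Return a copy of the CSV text with suspicious cell content redacted."""
    reader = csv.reader(io.StringIO(text))
    out = io.StringIO()
    writer = csv.writer(out)
    for row in reader:
        writer.writerow([sanitize_cell(c) for c in row])
    return out.getvalue()
```

Pattern matching of this kind reduces, but does not eliminate, indirect injection risk; it is best paired with explicit boundary markers around untrusted data in the prompt.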
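For the COMMAND_EXECUTION finding, one common mitigation is to run training scripts as child processes with a timeout and captured output rather than through an unrestricted shell. The helper below is a sketch under that assumption; the function name and the 600-second default are hypothetical and not drawn from the audited skill.

```python
import subprocess
import sys

def run_script(path: str, timeout: int = 600) -> subprocess.CompletedProcess:
    """Run a Python script in a child process with a timeout.

    Invokes the current interpreter directly (no shell), captures
    stdout/stderr as text, and leaves return-code handling to the caller.
    """
    return subprocess.run(
        [sys.executable, path],
        capture_output=True,
        text=True,
        timeout=timeout,
        check=False,
    )
```

Passing an argument list instead of a shell string avoids shell interpolation of attacker-influenced filenames, and the timeout bounds runaway training jobs.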
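Since the EXTERNAL_DOWNLOADS finding hinges on requirements.txt, pinning exact versions and verifying them after install limits drift toward unexpected releases. The check below uses the standard-library `importlib.metadata`; the version numbers in `PINNED` are placeholders, not the versions the audited skill actually pins.

```python
from importlib import metadata

# Hypothetical pins mirroring the shape of assets/requirements.txt.
# The exact versions here are illustrative placeholders.
PINNED = {"pandas": "2.2.2", "numpy": "1.26.4"}

def check_pins(pins: dict) -> list:
    """Return mismatch messages for packages whose installed version
    differs from the pin; missing packages are reported as well."""
    problems = []
    for name, wanted in pins.items():
        try:
            got = metadata.version(name)
        except metadata.PackageNotFoundError:
            problems.append(f"{name}: not installed (want {wanted})")
            continue
        if got != wanted:
            problems.append(f"{name}: installed {got}, want {wanted}")
    return problems
```

For stronger supply-chain guarantees, pip's hash-checking mode (`--require-hashes`) additionally verifies that each downloaded artifact matches a recorded digest.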
Audit Metadata