scikit-learn
Pass
Audited by Gen Agent Trust Hub on Apr 30, 2026
Risk Level: SAFE
Findings: REMOTE_CODE_EXECUTION, PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The documentation in 'references/model_evaluation.md' provides examples that use 'pickle' and 'joblib' to save and load models. Both libraries can execute arbitrary code during deserialization, so loading a model file supplied by an attacker is equivalent to running attacker-controlled code. This usage is common in machine learning, but the provenance of model files must be handled carefully.
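A minimal sketch of why deserialization is dangerous: pickle invokes whatever callable `__reduce__` returns, so a crafted blob runs code the moment it is loaded. The `Payload` class below is hypothetical, not taken from the skill's files.

```python
import pickle

class Payload:
    """A class whose __reduce__ runs arbitrary code on deserialization."""
    def __reduce__(self):
        # On pickle.loads, Python calls eval("40 + 2") instead of
        # reconstructing a harmless object; eval here stands in for any
        # attacker-chosen callable.
        return (eval, ("40 + 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # arbitrary code executes here
print(result)  # → 42
```

The same mechanism applies to 'joblib.load', which uses pickle under the hood, which is why untrusted model files should never be loaded.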
- [PROMPT_INJECTION]: The skill exposes an attack surface for indirect prompt injection through data ingestion.
  1. Ingestion points: 'SKILL.md' (Common Workflows) uses 'pd.read_csv' to load external data, and 'references/model_evaluation.md' uses 'joblib.load'.
  2. Boundary markers: absent from the provided examples.
  3. Capability inventory: 'scripts/clustering_analysis.py' performs file writes ('plt.savefig'), and 'references/model_evaluation.md' demonstrates model saving ('joblib.dump').
  4. Sanitization: no explicit validation or sanitization is shown for ingested data.
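One mitigation for the missing sanitization is schema validation at the ingestion point. The sketch below uses the standard-library csv module with an assumed schema; the column names are hypothetical, not taken from the skill's files.

```python
import csv
import io

# Assumed allow-list schema for ingested CSV data (hypothetical names).
EXPECTED_COLUMNS = ["feature_a", "feature_b", "label"]

def load_validated_rows(source):
    """Read CSV rows, rejecting unexpected columns and non-numeric features."""
    reader = csv.DictReader(source)
    if reader.fieldnames != EXPECTED_COLUMNS:
        raise ValueError(f"unexpected columns: {reader.fieldnames}")
    rows = []
    for row in reader:
        # Coerce numeric fields; free text in a numeric column (a common
        # carrier for injected instructions) fails loudly here.
        row["feature_a"] = float(row["feature_a"])
        row["feature_b"] = float(row["feature_b"])
        rows.append(row)
    return rows

rows = load_validated_rows(
    io.StringIO("feature_a,feature_b,label\n1.0,2.0,spam\n")
)
```

The same check can wrap 'pd.read_csv' calls; the key design choice is failing closed on any deviation from the expected schema rather than passing raw cells downstream.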
- [COMMAND_EXECUTION]: The skill includes executable Python scripts ('scripts/classification_pipeline.py' and 'scripts/clustering_analysis.py') that demonstrate machine learning tasks. The scripts are transparent and perform the expected data processing and visualization functions.
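Since the capability inventory above includes file writes (e.g. 'plt.savefig'), one hardening option is to confine output paths to a single directory. A minimal sketch, assuming a hypothetical `WORK_DIR` output root not taken from the skill:

```python
from pathlib import Path

# Assumed output root for generated figures (hypothetical).
WORK_DIR = Path("output").resolve()

def safe_output_path(name: str) -> Path:
    """Resolve name under WORK_DIR and reject paths that escape it."""
    candidate = (WORK_DIR / name).resolve()
    if WORK_DIR not in candidate.parents:
        raise ValueError(f"refusing to write outside {WORK_DIR}")
    return candidate

path = safe_output_path("clusters.png")
# safe_output_path("../escape.png") raises ValueError, blocking traversal
```

A call like 'plt.savefig(safe_output_path(user_supplied_name))' then cannot be steered outside the work directory by a crafted filename.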
Audit Metadata