ml-model-explainer
Fail
Audited by Gen Agent Trust Hub on Feb 15, 2026
Risk Level: CRITICAL (REMOTE_CODE_EXECUTION)
Full Analysis
- Dynamic Execution / Remote Code Execution (CRITICAL): The skill performs unsafe deserialization of untrusted data using the Python `pickle` module.
- Ingestion Point: In `scripts/ml_model_explainer.py`, the `--model` command-line argument allows a user to specify a file path to be loaded.
- Evidence: Line 106: `model = pickle.load(f)`. The script opens the user-provided file in read-binary mode and passes it directly to `pickle.load()`.
- Risk: The `pickle` module is not secure against erroneous or maliciously constructed data. A crafted pickle file can execute arbitrary system code during the unpickling process. This is a classic high-impact RCE vector, especially in machine learning workflows, where model files are often shared or downloaded from untrusted sources.
- Sanitization: No validation, integrity checking (e.g., HMAC), or sandboxing is applied to the model file before it is deserialized.
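To make the RCE mechanism concrete, here is a minimal illustrative proof of concept (not taken from the audited skill; `MaliciousModel` is a hypothetical class). During unpickling, `pickle` invokes whatever callable the file's `__reduce__` data names, so a hostile "model" file controls what runs. A harmless callable is used here to show the mechanism:

```python
import pickle

class MaliciousModel:
    """Illustrative stand-in for a hostile 'model' file (hypothetical)."""
    def __reduce__(self):
        # pickle records this (callable, args) pair and calls it on load.
        # A real attacker would name os.system, subprocess.call, etc.;
        # we use a harmless callable to demonstrate the mechanism.
        return (str.upper, ("code ran during unpickling",))

payload = pickle.dumps(MaliciousModel())

# pickle.loads never returns a MaliciousModel; it executes the recorded
# callable instead -- this is the RCE vector the audit flags.
result = pickle.loads(payload)
print(result)
```

The same behavior triggers at line 106's `pickle.load(f)` if an attacker supplies the path via `--model`.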
Recommendations
- The automated audit detected serious security threats; the skill should not be run on untrusted model files until the unsafe `pickle` deserialization is remediated.
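One remediation direction noted in the analysis is integrity checking via HMAC before deserialization. The sketch below is a minimal illustration of that pattern, not the skill's actual code; the key-loading strategy and 32-byte SHA-256 tag layout are assumptions, and it only protects files your own trusted pipeline signed:

```python
import hashlib
import hmac
import pickle

# Hypothetical key source; in practice, load from a secrets manager,
# never hard-code it.
SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"

SIG_LEN = hashlib.sha256().digest_size  # 32-byte tag prefixed to the file

def dump_signed(obj, path):
    """Serialize obj and prefix an HMAC-SHA256 tag over the pickle bytes."""
    data = pickle.dumps(obj)
    sig = hmac.new(SECRET_KEY, data, hashlib.sha256).digest()
    with open(path, "wb") as f:
        f.write(sig + data)

def load_signed(path):
    """Verify the HMAC tag before unpickling; reject tampered files."""
    with open(path, "rb") as f:
        blob = f.read()
    sig, data = blob[:SIG_LEN], blob[SIG_LEN:]
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("model file failed integrity check; refusing to unpickle")
    return pickle.loads(data)
```

A stronger fix is to avoid `pickle` for untrusted model files entirely in favor of a data-only serialization format; the HMAC wrapper is a mitigation for files produced by a trusted pipeline.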
Audit Metadata