mlflow
Verdict: Fail
Audited by Gen Agent Trust Hub on Mar 13, 2026
Risk Level: HIGH
Findings: CREDENTIALS_UNSAFE, COMMAND_EXECUTION, REMOTE_CODE_EXECUTION, PROMPT_INJECTION
Full Analysis
- [CREDENTIALS_UNSAFE]: The skill provides an example MLflow tracking-server configuration that includes a hardcoded PostgreSQL connection string with plaintext credentials (`postgresql://user:pass@localhost:5432/mlflow`).
- [COMMAND_EXECUTION]: The skill documentation includes several shell commands that interact with the system and network infrastructure, specifically `pip install mlflow`, `mlflow ui`, and `mlflow models serve`.
- [REMOTE_CODE_EXECUTION]: The Python examples use `mlflow.pyfunc.load_model` and `mlflow.sklearn.log_model`, which load serialized Python objects. Because the underlying serialization relies on libraries such as `pickle`, loading a model from a compromised or untrusted source can execute arbitrary code.
- [PROMPT_INJECTION]: The skill's model registry and serving capabilities create an attack surface for indirect prompt injection.
  - Ingestion points: Loading models from the registry via `mlflow.pyfunc.load_model` in `model_registry.py`.
  - Boundary markers: None; the agent is given no instructions to verify the integrity or origin of loaded models.
  - Capability inventory: The skill enables hosting local REST API services via `mlflow models serve` and file writes via `mlflow.log_artifact`.
  - Sanitization: None; the skill performs no validation or security scanning of model content.
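A common mitigation for the hardcoded-credentials finding is to assemble the backend-store URI from environment variables at process start, so no secret ever appears in code or documentation. A minimal sketch; the `MLFLOW_DB_*` variable names are hypothetical and not part of MLflow itself:

```python
import os


def tracking_uri_from_env() -> str:
    """Build a PostgreSQL backend-store URI from environment variables.

    The MLFLOW_DB_* names below are illustrative, not an MLflow
    convention; credentials are required, connection details fall
    back to common defaults.
    """
    user = os.environ["MLFLOW_DB_USER"]          # fail fast if unset
    password = os.environ["MLFLOW_DB_PASSWORD"]  # never hardcode this
    host = os.environ.get("MLFLOW_DB_HOST", "localhost")
    port = os.environ.get("MLFLOW_DB_PORT", "5432")
    dbname = os.environ.get("MLFLOW_DB_NAME", "mlflow")
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"
```

The resulting string would then be passed to `mlflow server --backend-store-uri ...` by the launch script, keeping the secret in the environment (or a secrets manager) rather than in the skill's example files.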
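For the remote-code-execution finding, one hedge is to refuse to deserialize any artifact whose digest does not match a trusted value obtained out of band. A minimal sketch of such a pre-load integrity check; the helper names are illustrative, and only after it passes would the caller hand the path to `mlflow.pyfunc.load_model`:

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_before_load(path: str, expected_sha256: str) -> bool:
    """Return True only if the artifact matches the trusted digest.

    The trusted digest must come from a source independent of the
    registry itself (e.g. a signed release manifest); otherwise a
    compromised registry could supply both the model and its hash.
    """
    return sha256_of_file(path) == expected_sha256
```

This does not make `pickle`-based loading safe in general; it only ensures the bytes being deserialized are the bytes that were vetted.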
Recommendations
- The automated review detected serious security threats in this skill; remediate the findings above before use.