ltv-predictor

Warn

Audited by Gen Agent Trust Hub on Mar 12, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONREMOTE_CODE_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [REMOTE_CODE_EXECUTION]: The skill uses joblib.load in scripts/regression_models.py to deserialize machine learning models. This method is inherently insecure as it relies on Python's pickle format, which can execute arbitrary code during the loading process. The skill provides pre-trained models in the examples/ directory that are loaded during normal operation.
  • [REMOTE_CODE_EXECUTION]: In scripts/deployment_manager.py, the generated API server is configured with debug=True and binds to 0.0.0.0. This exposes the Flask interactive debugger to the network, allowing any remote user with access to the port to execute arbitrary Python code on the host.
  • [COMMAND_EXECUTION]: The DeploymentManager class in scripts/deployment_manager.py utilizes subprocess.run to install system-level Python dependencies and os.system to start the API server process.
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because scripts/data_processor.py ingests user-provided CSV data without sanitization or boundary markers. Malicious instructions embedded in fields such as product descriptions could be interpreted as commands by the AI agent when reading the generated analysis reports.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 12, 2026, 05:33 AM