model-routing-patterns
Pass
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: LOW
Finding categories: PROMPT_INJECTION, COMMAND_EXECUTION, REMOTE_CODE_EXECUTION
Full Analysis
- [PROMPT_INJECTION] (LOW): The routing logic relies on simple keyword matching (e.g., 'legal', 'financial') against untrusted user prompts to select a model. An attacker could inject these keywords into low-priority messages to force the system onto expensive premium models, potentially exhausting API credits.
  - Evidence: classifyTask function implementation in examples/cost-routing-example.md.
- [COMMAND_EXECUTION] (INFO): Bash scripts perform arithmetic expansion on variables derived from user-supplied command-line arguments. While not directly exploitable for RCE in standard bash environments, this is a best-practice violation when handling numeric input from external sources.
  - Evidence: MONTHLY_REQUESTS variable expansion in scripts/analyze-cost-savings.sh.
- [REMOTE_CODE_EXECUTION] (LOW): The balanced-routing.json template references a machine learning model stored as a Pickle (.pkl) file. Unsafe deserialization of Pickle files can lead to arbitrary code execution, though the code that would load this file is currently absent from the skill.
  - Evidence: complexity_detection.ml_model path in templates/balanced-routing.json.
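To make the prompt-injection finding concrete, here is a minimal Python sketch of keyword-based routing in the style the audit describes. The function name, keyword set, and model names are illustrative, not the skill's actual classifyTask implementation:

```python
# Hypothetical reconstruction of keyword-based model routing.
PREMIUM_KEYWORDS = {"legal", "financial", "compliance"}

def classify_task(prompt: str) -> str:
    """Route to a premium model when any trigger keyword appears."""
    words = set(prompt.lower().split())
    return "premium-model" if words & PREMIUM_KEYWORDS else "budget-model"

# A benign request routes cheaply:
print(classify_task("summarize this blog post"))          # budget-model
# An injected keyword in an otherwise trivial message escalates the route:
print(classify_task("please summarize my legal shopping list"))  # premium-model
```

Because the classifier trusts raw prompt text, any user can steer every request to the most expensive tier; mitigations include rate-limiting premium routes or classifying with a signal the user does not control.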
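The arithmetic-expansion concern can also be demonstrated: bash evaluates variable contents recursively inside $(( )), so a "numeric" argument that actually holds an array subscript with a command substitution will execute that substitution. The sketch below drives bash from Python with a harmless echo standing in for a malicious command; the variable name mirrors the finding but the script itself is illustrative:

```python
import subprocess

# A value that looks like data but smuggles a command substitution into
# an array subscript, which bash evaluates during arithmetic expansion.
payload = 'x[$(echo INJECTED >&2)]'
script = 'MONTHLY_REQUESTS="$1"; echo $((MONTHLY_REQUESTS * 2))'
out = subprocess.run(
    ["bash", "-c", script, "bash", payload],
    capture_output=True, text=True,
)
print(out.stderr.strip())  # the injected command ran inside $(( ))
```

Validating the argument first (e.g. rejecting anything not matching ^[0-9]+$ before the arithmetic) closes this hole, which is why the audit flags it as a best-practice violation even though it is rarely exploitable in practice.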
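The Pickle risk is inherent to the format: pickle.loads reconstructs objects by invoking whatever callable an object's __reduce__ returns, so loading an untrusted .pkl file executes attacker-chosen code. A harmless demonstration, unrelated to the skill's actual model file:

```python
import pickle

class Payload:
    def __reduce__(self):
        # pickle will call this callable with these args at load time;
        # a real attacker would return something like (os.system, ("cmd",)).
        return (sorted, ([3, 1, 2],))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the callable runs during deserialization
print(result)  # [1, 2, 3] — proof that loading invoked an arbitrary call
```

This is why the finding is flagged despite the loading code being absent: the moment the skill gains code that unpickles complexity_detection.ml_model from an untrusted source, the vulnerability becomes live. Safer alternatives include weight-only formats such as ONNX or safetensors.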
Audit Metadata