awq-quantization

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGH
Threat Tags: PROMPT_INJECTION, REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, COMMAND_EXECUTION
Full Analysis
  • [Indirect Prompt Injection] (HIGH): The skill presents a high-risk surface for indirect prompt injection. Ingestion points: The skill loads model architectures and calibration datasets from external sources like HuggingFace (SKILL.md). Boundary markers: No delimiters or instructions to ignore embedded commands are present in the prompts handling this data. Capability inventory: The skill can execute code via model inference (generate), modify the file system (save_quantized), and run terminal commands (pip). Sanitization: No validation or escaping is applied to untrusted external inputs.
  • [Remote Code Execution] (HIGH): The troubleshooting guide (troubleshooting.md) recommends setting safetensors=False to resolve loading errors. This practice allows the deserialization of pickle-based files, which can contain and execute arbitrary malicious code upon being loaded into the environment.
  • [External Downloads] (MEDIUM): The skill instructs users to install the autoawq library without specifying versions or verifying hashes. This creates a supply chain risk where a compromised or malicious version of the package could be installed and executed.
  • [Dynamic Execution] (MEDIUM): The skill utilizes runtime-loaded kernels (Marlin, ExLlama, Triton) and supports on-the-fly compilation of CUDA code when the autoawq[kernels] extension is used. This behavior can be exploited if an attacker can influence the environment or inputs leading to kernel loading.
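The Remote Code Execution finding above hinges on a property of Python's pickle format: deserializing a file is itself code execution. A minimal stdlib-only sketch (no AWQ or HuggingFace libraries involved; `eval` here is a stand-in for a malicious call such as `os.system`) of why a pickle-based checkpoint loaded with `safetensors=False` can run attacker code:

```python
import pickle

# Pickle-based model files execute code at load time: unpickling invokes
# whatever callable __reduce__ names, with attacker-chosen arguments.
class Payload:
    def __reduce__(self):
        # Stand-in for a malicious call such as os.system("...").
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # what a poisoned .bin checkpoint contains
result = pickle.loads(blob)      # merely loading it runs eval("6 * 7")
print(result)                    # → 42
```

Safetensors files, by contrast, are a flat tensor format with no embedded callables, which is why the audit flags disabling them as high risk.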
Recommendations
  • Pin the autoawq package to a known release and verify artifact hashes before installation.
  • Keep safetensors=True when loading models; avoid the troubleshooting guide's safetensors=False workaround, which enables pickle deserialization.
  • Treat externally sourced model architectures and calibration datasets as untrusted input: add boundary markers and sanitize them before they reach prompts or code paths.
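The External Downloads finding can be mitigated with pip's hash-checking mode. A sketch of a pinned, hash-verified install; the version and hash below are placeholders, not real values, and should be replaced with the published ones:

```shell
# requirements.txt -- pin the exact version and its published sha256:
#   autoawq==<pinned-version> \
#       --hash=sha256:<paste-the-published-sha256-here>
#
# --require-hashes makes pip refuse any artifact whose hash does not match,
# blocking a swapped or tampered package.
pip install --require-hashes -r requirements.txt
```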
Audit Metadata
Risk Level: HIGH
Analyzed: Feb 16, 2026, 02:05 AM