gptq

Verdict: Warn

Audited by Gen Agent Trust Hub on Mar 28, 2026

Risk Level: MEDIUM
Findings: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill instructs the agent or user to execute shell commands to manage the Python environment and system dependencies.
      • Evidence: multiple `pip install` invocations for packages such as `auto-gptq`, `transformers`, and `accelerate` in SKILL.md and references/troubleshooting.md.
      • Evidence: an instruction in references/troubleshooting.md to run `sudo apt-get install python3-dev` to install system development headers.
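For reference, the commands the audit cites amount to a setup fragment like the following (reproduced from the cited evidence, not intended for unattended execution; `sudo` requires elevated privileges):

```shell
# Python packages cited in SKILL.md and references/troubleshooting.md
pip install auto-gptq transformers accelerate

# System development headers cited in references/troubleshooting.md
sudo apt-get install python3-dev
```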
  • [EXTERNAL_DOWNLOADS]: The skill retrieves software, models, and data from external repositories and services.
      • Evidence: fetches pre-quantized models from Hugging Face (e.g., TheBloke/Llama-2-7B-Chat-GPTQ) and calibration datasets (e.g., c4, bigcode/the-stack, ShareGPT).
      • Evidence: references a custom Python wheel index hosted at huggingface.github.io.
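One standard mitigation for this class of risk is pinning and verifying downloaded artifacts. A minimal sketch, assuming a known-good digest is available out of band (the `verify_digest` helper is illustrative and not part of the skill):

```python
import hashlib
from pathlib import Path


def verify_digest(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded artifact against a pinned SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

A wheel or model file that fails this check should be discarded rather than installed or loaded.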
  • [PROMPT_INJECTION]: The skill loads and processes text from external, untrusted datasets, which could contain content crafted to influence the agent's behavior.
      • Ingestion points: `datasets.load_dataset` is used in SKILL.md and references/calibration.md to pull text from public datasets for model calibration.
      • Boundary markers: the code examples do not use delimiters or instruct the agent to treat the calibration text as non-executable data.
      • Capability inventory: `model.generate` and `model.quantize` both process the external data directly.
      • Sanitization: the documented examples perform no sanitization or validation of the ingested text.
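The missing boundary markers and sanitization noted above could look like the following. This is a minimal sketch of one possible policy (both helper names and the specific rules are hypothetical, not taken from the skill):

```python
def sanitize_calibration_sample(text: str, max_len: int = 4096) -> str:
    """Strip control characters and truncate overly long samples.

    Hypothetical policy: keep printable characters plus newlines/tabs,
    and cap sample length before it reaches the quantization pipeline.
    """
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    return cleaned[:max_len]


def wrap_as_data(text: str) -> str:
    """Add boundary markers signalling that calibration text is inert data,
    never instructions for the agent to follow."""
    return f"<calibration-data>\n{text}\n</calibration-data>"
```

Each sample pulled via `datasets.load_dataset` would pass through both helpers before being handed to `model.quantize`.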
Audit Metadata
  • Risk Level: MEDIUM
  • Analyzed: Mar 28, 2026, 06:07 PM