gptq
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE — Flags: EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- Unverifiable Dependencies (LOW): The guides suggest loading datasets from third-party Hugging Face accounts (e.g., 'anon8231489123/ShareGPT_Vicuna_unfiltered'). Loading data from unverified sources is a potential security concern as the content is not controlled by the model or tool developer.
- Privilege Escalation (LOW): Troubleshooting documentation includes instructions for using 'sudo' to install system dependencies. While typical for environment setup, this constitutes a request for elevated permissions within the user's instructions and is flagged for awareness.
- Indirect Prompt Injection (LOW): The skill presents a potential surface for indirect prompt injection: it demonstrates how to ingest external datasets for model calibration without implementing sanitization or boundary markers.
  - Ingestion points: 'load_dataset' function calls throughout 'references/calibration.md'.
  - Boundary markers: None identified in the provided implementation examples.
  - Capability inventory: The skill documents capabilities including shell command execution via Docker ('integration.md') and file system writes ('calibration.md').
  - Sanitization: No sanitization or validation of dataset content is demonstrated before it is used to calibrate model weights.
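One way to mitigate the Unverifiable Dependencies finding is to pin the downloaded calibration data to a known digest before use. The sketch below is a minimal, hedged example using only the standard library; the function name and the idea of keeping a locally maintained digest allowlist are assumptions, not part of the audited skill. (When using 'load_dataset' directly, pinning a specific 'revision' commit hash is a complementary measure.)

```python
import hashlib
from pathlib import Path

def verify_calibration_file(path: str, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value.

    Hypothetical helper: run this on any third-party dataset file
    (e.g. one fetched from an unverified Hugging Face account) before
    feeding it into the quantization calibration step.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

A failed check should abort calibration rather than fall back to the unverified content.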
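The missing boundary markers called out above could be addressed by wrapping every ingested dataset sample in explicit delimiters so downstream tooling treats it as data, never as instructions. This is a minimal sketch under stated assumptions: the marker strings and function name are invented for illustration and do not appear in the audited skill.

```python
# Hypothetical boundary markers; any distinctive, hard-to-guess
# delimiters would serve the same purpose.
BOUNDARY_OPEN = "<<<UNTRUSTED_DATASET_TEXT>>>"
BOUNDARY_CLOSE = "<<<END_UNTRUSTED_DATASET_TEXT>>>"

def wrap_untrusted(sample: str) -> str:
    """Wrap one dataset sample in boundary markers.

    Strips any text that spoofs the markers themselves, so a
    malicious sample cannot "close" the untrusted region early
    and smuggle instructions outside it.
    """
    cleaned = sample.replace(BOUNDARY_OPEN, "").replace(BOUNDARY_CLOSE, "")
    return f"{BOUNDARY_OPEN}\n{cleaned}\n{BOUNDARY_CLOSE}"
```

Marker stripping is the key detail: without it, a sample containing the closing marker would defeat the boundary entirely.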
Audit Metadata