detecting-deepfake-audio-in-vishing-attacks
Skill path: skills/mukul975/anthropic-cybersecurity-skills/detecting-deepfake-audio-in-vishing-attacks
Verdict: Warn
Audited by Gen Agent Trust Hub on Apr 10, 2026
Risk Level: MEDIUM (REMOTE_CODE_EXECUTION, PROMPT_INJECTION)
Full Analysis
- [REMOTE_CODE_EXECUTION]: The `scripts/agent.py` script uses `joblib.load()` to load machine learning models from a file path provided via the command-line argument `--model`. Because `joblib` uses pickle-based deserialization, loading a malicious model file can lead to arbitrary code execution on the host system.
- [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface by processing untrusted audio files.
- Ingestion points: Audio files are loaded into the process using `librosa.load()` in `scripts/agent.py`.
- Boundary markers: None; raw audio samples are processed directly without safety delimiters or validation.
- Capability inventory: The skill possesses capabilities for file system writes (`json.dump`) and code execution via model deserialization (`joblib.load`).
- Sanitization: No sanitization or validation is performed on the audio content or its embedded metadata before processing.
- [SAFE]: No hardcoded credentials, malicious prompt injections, or unauthorized network operations were detected in the instructions or scripts. All external references in the documentation point to reputable academic research datasets and official library documentation.
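The remote-code-execution finding follows directly from how pickle works: unpickling can invoke arbitrary callables, so a crafted "model" file runs code the moment it is loaded. A minimal sketch of both the risk and one mitigation; the `Payload` class and the `load_model_pinned` helper are illustrative assumptions, not code from the audited skill:

```python
import hashlib
import io
import pickle

# Why pickle-based loading is dangerous: unpickling calls __reduce__, so a
# crafted "model" file can execute an attacker-chosen callable at load time.
class Payload:
    def __reduce__(self):
        # Harmless stand-in for an attacker's command (e.g. os.system(...)).
        return (print, ("code ran during deserialization",))

malicious_blob = pickle.dumps(Payload())
pickle.loads(malicious_blob)  # prints the message: code executed on load

# One mitigation sketch (hypothetical helper): refuse to deserialize unless
# the file matches a SHA-256 digest recorded when the trusted model was built.
def load_model_pinned(path: str, expected_sha256: str):
    with open(path, "rb") as f:
        blob = f.read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"model digest mismatch: {digest}")
    import joblib  # only deserialize after the digest check passes
    return joblib.load(io.BytesIO(blob))
```

Digest pinning does not make pickle safe in general; it only ensures the bytes being deserialized are the exact bytes the operator vetted.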
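The missing-sanitization findings suggest an obvious hardening step: validate untrusted audio before it ever reaches `librosa.load()`. A sketch of simple pre-load checks; the extension allow-list and size cap are illustrative assumptions, not values taken from the skill:

```python
import os

# Illustrative limits; the audited skill performs no such checks.
ALLOWED_EXTENSIONS = {".wav", ".flac", ".mp3"}
MAX_BYTES = 50 * 1024 * 1024  # 50 MiB cap on untrusted input files

def validate_audio_path(path: str) -> None:
    """Reject obviously unsafe inputs before any decoder touches them."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"disallowed extension: {ext!r}")
    size = os.path.getsize(path)
    if size == 0 or size > MAX_BYTES:
        raise ValueError(f"suspicious file size: {size} bytes")

# After validation, loading would proceed as the script does today, e.g.:
# validate_audio_path(path)
# samples, sr = librosa.load(path, sr=16000, mono=True)
```

Checks like these do not stop decoder-level exploits, but they narrow the attack surface and give the skill a single choke point where stricter validation (content sniffing, metadata stripping) could be added.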
Audit Metadata