notebooklm
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION] (HIGH): The skill implements a behavioral override through a 'Follow-Up Mechanism' defined in SKILL.md and ask_question.py that appends mandatory instructions ('EXTREMELY IMPORTANT') to tool outputs to force a multi-turn logic loop.
- [PROMPT_INJECTION] (HIGH): The skill is vulnerable to Indirect Prompt Injection as it ingests untrusted data from the NotebookLM web UI via ask_question.py without sanitization or boundary markers, allowing malicious notebook content to influence subsequent agent behavior.
- [EXTERNAL_DOWNLOADS] (HIGH): The setup_environment.py and run.py scripts perform automated runtime installation of Python packages and download browser binaries ('patchright install chrome') from untrusted sources at runtime.
- [COMMAND_EXECUTION] (MEDIUM): The skill relies on subprocess.run for managing virtual environments and executing child processes, which is a high-privilege execution pattern for environment management.
Recommendations
- AI detected serious security threats
Audit Metadata