notebooklm-cli
Pass
Audited by Gen Agent Trust Hub on Mar 15, 2026
Risk Level: SAFE
Findings: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The skill uses `subprocess.run` to execute the `nlm` binary with argument lists, which prevents shell injection. It includes a `raw` command for direct interaction with the tool but implements safety checks that block interactive sessions such as `nlm chat start` and enforce confirmation for deletions.
- [EXTERNAL_DOWNLOADS]: Through the `source add_url` operation, the skill allows ingestion of content from arbitrary external URLs, which is subsequently processed by the NotebookLM tool.
- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it processes untrusted data from URLs, text, and local files that could contain hidden instructions intended to manipulate the agent.
- [INGESTION_POINTS]: Untrusted data enters the skill's logic via the `add_url`, `add_text`, and `add_file` commands in `run.py`.
- [BOUNDARY_MARKERS]: The skill does not use delimiters or warn the agent to disregard instructions potentially embedded within ingested content.
- [CAPABILITY_INVENTORY]: The skill can execute system commands via the `nlm` binary and read local files.
- [SANITIZATION]: No sanitization of ingested natural-language content is performed to remove or neutralize embedded instructions.
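The execution pattern described under [COMMAND_EXECUTION] can be sketched in Python. This is a minimal illustration, not the skill's actual code: the helper name, the blocklist contents, and the `assume_yes` flag are assumptions, but the key properties match the audit findings, namely argument-list invocation (no shell), a guard against interactive sessions, and a confirmation step before deletions.

```python
import subprocess

# Assumptions for illustration: the audit mentions blocking `nlm chat start`
# and confirming deletions; the exact command sets are hypothetical.
BLOCKED_INTERACTIVE = {("chat", "start")}   # interactive sessions to refuse
DESTRUCTIVE_VERBS = {"delete"}              # subcommands requiring confirmation

def run_nlm(args, assume_yes=False):
    """Run the nlm binary with an argument list (never shell=True)."""
    if tuple(args[:2]) in BLOCKED_INTERACTIVE:
        raise ValueError(f"interactive session blocked: nlm {' '.join(args)}")
    if args and args[0] in DESTRUCTIVE_VERBS and not assume_yes:
        reply = input(f"Really run 'nlm {' '.join(args)}'? [y/N] ")
        if reply.strip().lower() != "y":
            return None
    # Argument-list form: each element is passed verbatim to the OS exec call,
    # so shell metacharacters in user input are never interpreted by a shell.
    return subprocess.run(["nlm", *args], capture_output=True, text=True)
```

Because the arguments are passed as a list, an input like `"; rm -rf ~"` reaches `nlm` as an opaque string rather than being evaluated by a shell, which is why the audit rates the execution path as safe from shell injection.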
Audit Metadata