notebooklm-cli

Pass

Audited by Gen Agent Trust Hub on Mar 15, 2026

Risk Level: SAFE
Findings: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill uses subprocess.run to execute the nlm binary with argument lists, which prevents shell injection. It also exposes a raw command for direct interaction with the tool, but implements safety checks that block interactive sessions such as nlm chat start and require confirmation before deletions.
  • [EXTERNAL_DOWNLOADS]: Through the source add_url operation, the skill ingests content from arbitrary external URLs, which is then processed by the NotebookLM tool.
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it processes untrusted data from URLs, text, and local files that could contain hidden instructions intended to manipulate the agent.
  • [INGESTION_POINTS]: Untrusted data enters the skill's logic via add_url, add_text, and add_file commands in run.py.
  • [BOUNDARY_MARKERS]: The skill does not use delimiters or provide warnings to the agent to disregard instructions potentially embedded within ingested content.
  • [CAPABILITY_INVENTORY]: The skill can execute system commands via the nlm binary and read local files.
  • [SANITIZATION]: No sanitization of ingested natural language content is performed to remove or neutralize embedded instructions.
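The guard pattern described in the COMMAND_EXECUTION finding can be sketched as follows. This is a minimal illustration, not the actual run.py: the function names (check_args, run_nlm) and the specific block/confirm lists are assumptions; only the general technique (argument-list invocation without a shell, plus a pre-execution allowlist check) reflects what the audit reports.

```python
import subprocess

# Hypothetical policy tables; the real skill's lists may differ.
BLOCKED_INTERACTIVE = {("chat", "start")}   # interactive sessions are refused
DESTRUCTIVE_VERBS = {"delete", "remove"}    # require explicit confirmation

def check_args(args, confirmed=False):
    """Return None if the command is allowed, else a reason string."""
    if tuple(args[:2]) in BLOCKED_INTERACTIVE:
        return "interactive sessions are blocked"
    if any(a in DESTRUCTIVE_VERBS for a in args) and not confirmed:
        return "destructive command requires confirmation"
    return None

def run_nlm(args, confirmed=False):
    reason = check_args(args, confirmed)
    if reason:
        raise PermissionError(reason)
    # Argument-list invocation (no shell=True): metacharacters in `args`
    # are passed verbatim to the binary, so shell injection is not possible.
    return subprocess.run(["nlm", *args], capture_output=True, text=True)
```

Because the arguments are passed as a list, a malicious value like "; rm -rf /" reaches the nlm binary as a literal string rather than being interpreted by a shell; the safety check then screens the subcommand before anything executes.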
Audit Metadata
Risk Level: SAFE
Analyzed: Mar 15, 2026, 02:06 AM