notebooklm

Status: Warn

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: MEDIUM
Flags: EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • EXTERNAL_DOWNLOADS (MEDIUM): The scripts/run.py wrapper automatically creates a virtual environment and installs dependencies at runtime. Because the provided skill file does not pin specific packages or versions, this behavior risks installing and executing unverified third-party code.
  • COMMAND_EXECUTION (LOW): The skill frequently executes local Python scripts via the scripts/run.py wrapper. While the scripts are local, this pattern of dynamic execution widens the attack surface if the wrapper logic or the arguments passed to it can be manipulated by an attacker.
  • PROMPT_INJECTION (LOW): As a document-querying tool, the skill ingests untrusted data from external sources (NotebookLM notebooks). It lacks explicit instructions for delimiting or sanitizing this external content, making it vulnerable to indirect prompt injection (Category 8). Evidence Chain:
    1. Ingestion points: Document content fetched via ask_question.py.
    2. Boundary markers: None identified; untrusted content is likely interpolated directly into the prompt.
    3. Capability inventory: Shell command execution via the run.py wrapper for various managers (auth, notebook, cleanup).
    4. Sanitization: None described.
  • DATA_EXFILTRATION (SAFE): The skill manages sensitive authentication data (cookies and tokens) in a local data/ directory. There is no evidence of these secrets being transmitted to unauthorized external domains.
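The EXTERNAL_DOWNLOADS finding above concerns unpinned installs performed at runtime. A minimal sketch of one common mitigation, assuming the wrapper installs from a requirements file (the file name, venv path, and the existence of a lock file are illustrative assumptions, not the skill's actual behavior):

```python
def pip_install_cmd(venv_python, requirements="requirements.txt",
                    require_hashes=True):
    """Build a pip command that only installs pinned, hash-verified packages.

    Shipping a lock file and passing --require-hashes makes pip refuse any
    package whose version or hash is not explicitly listed, which removes
    the "unverifiable third-party code" risk flagged in this audit.
    """
    cmd = [venv_python, "-m", "pip", "install", "-r", requirements]
    if require_hashes:
        cmd.append("--require-hashes")
    return cmd

# Example: the hardened install command the wrapper could run in its venv.
print(pip_install_cmd("/path/to/venv/bin/python"))
```

Hash-checking mode fails closed: a single unpinned or mismatched dependency aborts the whole install rather than silently fetching arbitrary code.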
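The COMMAND_EXECUTION finding notes that manipulated arguments could steer the run.py wrapper. A hedged sketch of an allowlist check: the manager names come from this audit's capability inventory, but the script-path naming scheme is hypothetical.

```python
ALLOWED_MANAGERS = {"auth", "notebook", "cleanup"}  # managers named in this audit

def resolve_script(manager: str) -> str:
    """Map a manager name to a local script path.

    Rejecting anything outside a fixed allowlist prevents attacker-controlled
    arguments (e.g. path traversal like "../") from selecting arbitrary
    files for execution.
    """
    if manager not in ALLOWED_MANAGERS:
        raise ValueError(f"unknown manager: {manager!r}")
    return f"scripts/{manager}_manager.py"  # hypothetical naming scheme

print(resolve_script("auth"))  # → scripts/auth_manager.py
```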
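The PROMPT_INJECTION finding reports that no boundary markers were identified. A minimal sketch of how fetched notebook content could be delimited before interpolation into a prompt; the marker format is an assumption for illustration, not the skill's actual behavior:

```python
import secrets

def wrap_untrusted(text: str) -> str:
    """Wrap fetched document content in randomized boundary markers.

    A random per-call tag means untrusted text cannot predict and forge its
    own closing marker; the surrounding prompt can then instruct the model
    to treat everything between the markers as data, not instructions.
    """
    tag = secrets.token_hex(8)
    return f"<<UNTRUSTED {tag}>>\n{text}\n<<END UNTRUSTED {tag}>>"
```

Delimiting is a hardening measure, not a complete defense: the model must also be instructed never to follow directives found inside the markers.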
Audit Metadata
Risk Level: MEDIUM
Analyzed: Feb 17, 2026, 06:46 PM