notebooklm

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE
Categories reviewed: PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
  • Prompt Injection (SAFE): The skill appends instructions (via the FOLLOW_UP_REMINDER constant in scripts/ask_question.py) to the end of tool outputs. This 'injection' is used to control the agent's internal reasoning process to ensure it asks for more details if an answer is incomplete. Because it is a functional part of the skill's logic and does not bypass safety constraints, it is deemed safe.
  • Indirect Prompt Injection (SAFE): The skill reads external content from notebooklm.google.com. While this content could theoretically contain instructions that mislead the agent, the risk is inherent to any tool that processes external documentation.
    1. Ingestion points: scripts/ask_question.py reads text from a browser page.
    2. Boundary markers: Absent; the skill returns raw text with an appended reminder.
    3. Capability inventory: Subprocess execution (run.py), browser automation (patchright), and local file access (the data/ directory).
    4. Sanitization: Absent; the skill strips whitespace but performs no instruction filtering.
  • Command Execution (SAFE): The skill uses a modular execution model where scripts/run.py launches other internal Python scripts via subprocess.run. This is used exclusively for environment management and local script execution within the skill's own directory.
  • External Downloads (SAFE): During initial setup, scripts/setup_environment.py downloads Python packages from PyPI and the Chrome browser binary. These downloads are required for the skill's core functionality and come from trusted sources (PyPI and the browser distribution fetched by the automation framework).
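The reminder-append mechanism described under Prompt Injection can be sketched roughly as follows. The reminder's wording and the helper function are assumptions for illustration; the audit only names the FOLLOW_UP_REMINDER constant in scripts/ask_question.py:

```python
# Hypothetical reconstruction of the pattern the audit describes:
# a fixed reminder appended to the tool output before it is
# returned to the agent.
FOLLOW_UP_REMINDER = (
    "If this answer is incomplete, ask a follow-up question "
    "with more specific details."
)


def format_answer(raw_answer: str) -> str:
    """Strip surrounding whitespace and append the follow-up reminder."""
    return raw_answer.strip() + "\n\n" + FOLLOW_UP_REMINDER
```

Because the appended text only steers the agent toward asking follow-up questions, it shapes the skill's own workflow rather than bypassing any safety constraint, which is why the audit rates it safe.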
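The Indirect Prompt Injection finding notes that boundary markers are absent. A minimal sketch of what such wrapping could look like, were it added; the marker strings and function name are hypothetical and not part of the audited skill:

```python
def wrap_external_content(text: str) -> str:
    """Mark untrusted page text so the agent can distinguish it from
    trusted instructions (hypothetical; absent from the audited skill)."""
    return (
        "<<<BEGIN UNTRUSTED CONTENT>>>\n"
        + text.strip()
        + "\n<<<END UNTRUSTED CONTENT>>>"
    )
```

Markers like these do not neutralize injected instructions by themselves, but they give the agent a clear signal about which text originated from the external page.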
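The modular execution model noted under Command Execution, where run.py launches internal scripts via subprocess.run, might look roughly like this; the function signature and the containment check are assumptions for illustration:

```python
import subprocess
import sys
from pathlib import Path


def run_script(skill_dir: Path, name: str, *args: str) -> int:
    """Launch an internal Python script confined to the skill's own
    scripts/ directory (hypothetical reconstruction of run.py)."""
    scripts_dir = (skill_dir / "scripts").resolve()
    target = (scripts_dir / name).resolve()
    if scripts_dir not in target.parents:
        # Refuse anything that escapes the skill's own directory.
        raise ValueError(f"refusing to run script outside {scripts_dir}")
    result = subprocess.run([sys.executable, str(target), *args], check=False)
    return result.returncode
```

Keeping execution confined to the skill's own directory is what makes this pattern local environment management rather than arbitrary command execution.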
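The setup flow described under External Downloads could be sketched as below. The package name and installer commands are assumptions (the audit only mentions PyPI packages and a Chrome browser binary); patchright's CLI is assumed to mirror Playwright's `install` subcommand:

```python
import subprocess
import sys


def setup_environment() -> None:
    """Install required packages from PyPI, then fetch the browser
    binary via the automation framework's installer (assumed commands)."""
    # Python dependencies from PyPI (package list assumed).
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "patchright"],
        check=True,
    )
    # Browser binary download (CLI shape assumed from Playwright's).
    subprocess.run(
        [sys.executable, "-m", "patchright", "install", "chrome"],
        check=True,
    )
```

Both steps fetch from well-known distribution channels during one-time setup, which is why the audit classifies the downloads as safe rather than as arbitrary remote code retrieval.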
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 17, 2026, 06:27 PM