notebooklm

Pass

Audited by Gen Agent Trust Hub on Mar 21, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection as it ingests and processes responses from Google NotebookLM (external data) and returns them to the agent context.
  • Ingestion points: scripts/ask_question.py and scripts/browser_session.py read synthesized response content directly from the browser DOM after querying NotebookLM.
  • Boundary markers: The skill appends a follow-up reminder string (FOLLOW_UP_REMINDER) to the ingested content but does not utilize formal data delimiters or instructions to ignore embedded commands.
  • Capability inventory: The skill possesses significant capabilities including command execution via subprocess.run (in run.py and setup_environment.py), local file system access for managing authentication state and notebook metadata, and full network access via an automated browser context.
  • Sanitization: The external content is retrieved and returned as raw text without explicit sanitization or validation of its contents.
  • [EXTERNAL_DOWNLOADS]: The skill downloads and installs Python packages (patchright, python-dotenv) from official registries and utilizes the patchright library to download Google Chrome or Chromium binaries to facilitate browser automation.
  • [COMMAND_EXECUTION]: The skill uses subprocess.run within scripts/run.py and scripts/setup_environment.py to orchestrate the creation of a local virtual environment, install required dependencies, and execute internal Python scripts for task automation.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 21, 2026, 07:37 AM