notebooklm

Pass

Audited by Gen Agent Trust Hub on Mar 16, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill uses instructional prompts with high-emphasis markers intended to override or strongly influence the agent's conversational behavior.
  • Evidence: In scripts/ask_question.py, the constant FOLLOW_UP_REMINDER is appended to answers and starts with "EXTREMELY IMPORTANT:", instructing the AI to check for clarity and ask follow-up questions before replying.
  • [COMMAND_EXECUTION]: The skill utilizes automated browser interactions to perform tasks on behalf of the user.
  • Evidence: The scripts scripts/ask_question.py and scripts/remote_manager.py use the Playwright library to navigate the Google NotebookLM interface, fill forms, and simulate user input to automate complex workflows.
  • [DATA_EXPOSURE]: The skill manages sensitive session data to provide persistent authentication.
  • Evidence: scripts/auth_manager.py extracts and validates critical Google session cookies (such as SID, HSID, and SSID) and stores them in persistent browser profiles within the user's home directory at ~/.config/claude/notebooklm-skill/.
  • [INDIRECT_PROMPT_INJECTION]: The skill retrieves data from a remote web service, which could contain instructions that influence the agent's logic.
  • Ingestion points: scripts/ask_question.py extracts text directly from the NotebookLM web UI via CSS selectors in _collect_response_texts.
  • Boundary markers: No delimiters or boundary markers are used to separate the fetched content from the agent's internal instructions.
  • Capability inventory: The skill can read local files, write files, and perform network requests via Playwright.
  • Sanitization: External content is not sanitized or validated before being returned to the AI agent.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 16, 2026, 03:00 PM