notebooklm
Pass
Audited by Gen Agent Trust Hub on Mar 4, 2026
Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTIONDATA_EXFILTRATION
Full Analysis
- [PROMPT_INJECTION]: Scripted instructions are used to override the agent's default behavior.
- Evidence: The FOLLOW_UP_REMINDER in "scripts/ask_question.py" uses high-urgency language ("EXTREMELY IMPORTANT") to instruct the agent to ask follow-up questions, bypassing the agent's standard conversational judgment.
- [PROMPT_INJECTION]: The skill's architecture is vulnerable to indirect prompt injection from processed data.
- Ingestion points: "scripts/ask_question.py" (reading NotebookLM web responses) and "scripts/remote_manager.py" (reading local source documents).
- Boundary markers: Absent. The skill does not use delimiters or warnings to separate external data from system instructions.
- Capability inventory: Extensive capabilities including file system access (read/write), management of authenticated Google session cookies, and network communication via browser automation.
- Sanitization: No sanitization or filtering logic is applied to text retrieved from NotebookLM before it is delivered to the agent.
- [COMMAND_EXECUTION]: Automated browser interactions are performed within a persistent, authenticated context.
- Evidence: Multiple scripts utilize Playwright to automate actions on the "notebooklm.google.com" platform, leveraging stored user session data.
- [DATA_EXFILTRATION]: Local authentication tokens and files are accessed for use in remote operations.
- Evidence: "scripts/auth_manager.py" targets critical session cookies (e.g., SID, HSID) stored in the local Chrome profile. "scripts/remote_manager.py" reads user files from the local system for upload to Google servers.
Audit Metadata