NotebookLM
NotebookLM = external RAG engine that offloads reading from your context window. Uses the notebooklm-mcp-cli MCP server (PyPI, v0.6.1+) to create notebooks, manage sources, generate content, label sources thematically, and query with grounded AI responses. Supports batch operations across notebooks, pipelines, and multilingual content generation.
Disclaimer: Uses internal undocumented Google APIs via browser authentication. Sessions last ~20 minutes. API may change without notice.
What's New (April 2026 — v0.6.1)
- Source labels (NEW in v0.6.0) — Organize sources into thematic categories. New `label` MCP tool + `nlm label` CLI: `auto` (AI generates from sources, requires 5+), `list`, `create`, `rename`, `delete`, `assign`, `unassign`. Surfaces in the NotebookLM web UI's source filter.
- Label reorganize (NEW in v0.6.1) — `nlm label reorganize` (or the `label` MCP tool with `action="reorganize"`) forces AI re-categorization when labels already exist. Two modes: all-sources (replace, requires `--confirm`) or unlabeled-only (preserve existing).
- Video formats — 3 formats (explainer, brief, cinematic) + 9 visual styles
- Audio length — `audio_length` param: short, default, long
- PPTX export — `download_artifact(slide_deck_format="pptx")` alongside PDF
- Bulk source ops — `source_add(urls=[...])`, `source_delete(source_ids=[...])`, `source_add(wait=True)`
- Studio artifact rename — `studio_status(action="rename", artifact_id="...", new_title="...")`
- Slide/report/quiz params — `slide_format`, `report_format`, `difficulty` + `question_count`
- Infographic options — `orientation`, `detail_level`, 11 `infographic_style` options
- Audio sources — Upload m4a, wav, mp3, aac, ogg, opus
- Async large queries — `notebook_query_start` / `notebook_query_status` for notebooks with 50+ sources
- Multi-browser auth — Arc, Brave, Edge, Chromium, Vivaldi, Opera + WSL2 (`nlm login --wsl`)
- Enterprise — `NOTEBOOKLM_BASE_URL` env var for Google Workspace deployments
- Security — Download path traversal protection, `0o600` auth files, Chrome origins locked to localhost
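The new label workflow can be sketched in tool-call form. This is a hedged sketch in the same call style as the Example section below; the `notebook_id`, `label_id`, and `source_ids` parameter names are assumptions based on the tool descriptions above:

```
# Auto-generate thematic labels from existing sources (requires 5+ sources)
label(notebook_id="...", action="auto")

# Inspect the generated labels, then assign one to specific sources
label(notebook_id="...", action="list")
label(notebook_id="...", action="assign", label_id="...", source_ids=["..."])

# Later, force AI re-categorization of only the unlabeled sources
label(notebook_id="...", action="reorganize")
```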
Prerequisites
- Install: `uv tool install notebooklm-mcp-cli` (or `pip install notebooklm-mcp-cli`)
- Authenticate: `nlm login` (opens browser, session ~20 min)
- Configure MCP: `nlm setup add claude-code` (auto-configures `.mcp.json`) or `nlm setup add all` for multi-tool setup
- Verify: `nlm login --check` to confirm active session
- Upgrade: `uv tool upgrade notebooklm-mcp-cli`, then restart the MCP server
CRITICAL: Task Management is MANDATORY (CC 2.1.16)
BEFORE doing ANYTHING else, create tasks to track progress:
```
# 1. Create main task IMMEDIATELY
TaskCreate(
    subject="NotebookLM: {operation}",
    description="Managing notebooks, sources, and content generation",
    activeForm="Managing NotebookLM resources"
)

# 2. Create subtasks for the notebook workflow
TaskCreate(subject="Notebook setup", activeForm="Creating/configuring notebook")
TaskCreate(subject="Source management", activeForm="Adding sources to notebook")
TaskCreate(subject="Content generation", activeForm="Generating studio content")

# 3. Set dependencies for sequential steps
TaskUpdate(taskId="3", addBlockedBy=["2"])
TaskUpdate(taskId="4", addBlockedBy=["3"])

# 4. Before starting each task, verify it's unblocked
task = TaskGet(taskId="2")  # Verify blockedBy is empty

# 5. Update status as you progress
TaskUpdate(taskId="2", status="in_progress")  # When starting
TaskUpdate(taskId="2", status="completed")    # When done
```
Decision Tree — Which Rule to Read
What are you trying to do?
│
├── Create / manage notebooks
│ ├── List / get / rename ──────► notebook_list, notebook_get, notebook_rename
│ ├── Create new notebook ──────► notebook_create
│ └── Delete notebook ──────────► notebook_delete (irreversible!)
│
├── Add sources to a notebook
│ ├── URL / YouTube ────────────► source_add(type=url)
│ ├── Plain text ───────────────► source_add(type=text)
│ ├── Local file ───────────────► source_add(type=file)
│ ├── Google Drive ─────────────► source_add(type=drive)
│ ├── Rename a source ──────────► source_rename
│ └── Manage sources ──────────► rules/setup-quickstart.md
│
├── Query a notebook (AI chat)
│ ├── Ask questions ────────────► notebook_query
│ └── Configure chat style ────► chat_configure
│
├── Generate studio content
│ ├── 10 artifact types ───────► rules/workflow-studio-content.md
│ ├── Revise slides ───────────► studio_revise (creates new deck)
│ └── Export to Docs/Sheets ──► export_artifact
│
├── Research & discovery
│ └── Web/Drive research ──────► rules/workflow-research-discovery.md
│
├── Notes (capture insights)
│ └── Create/list/update/delete ► note (unified tool)
│
├── Sharing & collaboration
│ └── Public links / invites / batch ► rules/workflow-sharing-collaboration.md
│
├── Batch & cross-notebook
│ ├── Query across notebooks ────► cross_notebook_query
│ ├── Bulk operations ───────────► batch (query, add-source, create, studio)
│ └── Multi-step pipelines ──────► rules/workflow-batch-pipelines.md
│
├── Organization
│ └── Tag notebooks ─────────────► tag
│
└── Workflow patterns
├── Second brain ─────────────► rules/workflow-second-brain.md
├── Research offload ─────────► rules/workflow-research-offload.md
└── Knowledge base ──────────► rules/workflow-knowledge-base.md
Quick Reference
| Category | Rule | Impact | Key Pattern |
|---|---|---|---|
| Setup | setup-quickstart.md | HIGH | Auth, MCP config, source management, session refresh |
| Workflows | workflow-second-brain.md | HIGH | Decision docs, project hub, agent interop |
| Workflows | workflow-research-offload.md | HIGH | Synthesis, onboarding, token savings |
| Workflows | workflow-knowledge-base.md | HIGH | Debugging KB, security handbook, team knowledge |
| Workflows | workflow-studio-content.md | HIGH | 10 artifact types (audio, cinematic video, slides, infographics, mind maps...) |
| Research | workflow-research-discovery.md | HIGH | Web/Drive research async flow |
| Collaboration | workflow-sharing-collaboration.md | MEDIUM | Public links, collaborator invites, batch sharing |
| Batch | workflow-batch-pipelines.md | HIGH | Cross-notebook queries, batch ops, pipelines |
| Release | workflow-versioned-notebooks.md | HIGH | Per-release notebooks with changelog + diffs |
Total: 9 rules across 6 categories
MCP Tools by API Group
| Group | Tools | Count |
|---|---|---|
| Notebooks | notebook_list, notebook_create, notebook_get, notebook_describe, notebook_rename, notebook_delete | 6 |
| Sources | source_add, source_rename, source_list_drive, source_sync_drive, source_delete, source_describe, source_get_content | 7 |
| Querying | notebook_query, chat_configure | 2 |
| Studio | studio_create, studio_status (also: list_types, rename), studio_revise, studio_delete | 4 |
| Research | research_start, research_status, research_import | 3 |
| Sharing | notebook_share_status, notebook_share_public, notebook_share_invite, notebook_share_batch | 4 |
| Notes | note (unified: list/create/update/delete) | 1 (4 actions) |
| Downloads | download_artifact | 1 |
| Export | export_artifact (Google Docs/Sheets) | 1 |
| Batch | batch (multi-notebook ops), cross_notebook_query | 2 |
| Pipelines | pipeline (action: run|list; ingest-and-podcast, research-and-report, multi-format) | 1 |
| Tags | tag (action: add|remove|list|select) | 1 |
| Auth | save_auth_tokens, refresh_auth, server_info | 3 |
Total: 36 tools across 13 groups (v0.6.1+)
Key Decisions
| Decision | Recommendation |
|---|---|
| New notebook vs existing | One notebook per project/topic; add sources to existing |
| Source type | URL for web, text for inline, file for local docs, drive for Google Docs |
| Large sources | Split >50K chars into multiple sources for better retrieval |
| Auth expired? | nlm login --check; sessions last ~20 min, re-auth with nlm login |
| Studio content | Use studio_create, poll with studio_status (generation takes 2-5 min) |
| Cinematic video | studio_create(artifact_type="video", video_format="cinematic") — requires Plus/Ultra, English only, 20/day |
| Audio format | Choose brief/critique/debate/deep_dive via audio_format + short/default/long via audio_length |
| Research discovery | research_start for web/Drive discovery, then research_import (timeout=300s default) |
| Deep research | research_start(mode="deep") for multi-source synthesis (v0.5.1+, auto-retries) |
| Release notebooks | One notebook per minor version; upload CHANGELOG + key skill diffs as sources |
| Query vs search | notebook_query for AI-grounded answers; source_get_content for raw text |
| Notes vs sources | Notes for your insights/annotations; sources for external documents |
| Infographic style | 11 visual styles via infographic_style param on studio_create |
| Slide revision | Use studio_revise to edit individual slides (creates a new deck) |
| Export artifacts | export_artifact sends reports → Google Docs, data tables → Sheets |
| Language | language param on studio_create accepts BCP-47 codes (e.g., he for Hebrew, en, es, ja) |
| Bulk source add | source_add(urls=["url1","url2"], wait=True) for multi-URL ingestion with processing wait |
| Batch operations | Use batch for multi-notebook ops; cross_notebook_query for aggregated answers |
| Pipelines | pipeline(action="run", pipeline_name="ingest-and-podcast") for multi-step workflows |
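The ">50K chars" guidance above implies a pre-processing step before `source_add`. The helper below is an illustrative sketch, not part of the CLI; the `split_source` name and its paragraph-boundary strategy are assumptions (a single paragraph longer than the limit would still exceed it):

```python
def split_source(text: str, max_chars: int = 50_000) -> list[str]:
    """Split a large document into <= max_chars chunks on paragraph boundaries."""
    paragraphs = text.split("\n\n")
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk when appending this paragraph would exceed the limit
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk can then be added as its own `source_add(type="text")` source, which keeps individual sources under the retrieval-friendly size.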
Example
```
# 1. Create a notebook for your project
notebook_create(title="Auth Refactor Research")

# 2. Add sources (docs, articles, existing code analysis)
source_add(notebook_id="...", type="url", url="https://oauth.net/2.1/")
source_add(notebook_id="...", type="text", content="Our current auth uses...")
source_add(notebook_id="...", type="file", path="/docs/auth-design.md")

# 3. Query with grounded AI responses
notebook_query(notebook_id="...", query="What are the key differences between OAuth 2.0 and 2.1?")

# 4. Generate a deep dive audio overview (supports language param)
studio_create(notebook_id="...", artifact_type="audio", audio_format="deep_dive", language="he", confirm=True)
studio_status(notebook_id="...")  # Poll until complete

# 5. Generate a cinematic video overview (Plus/Ultra, English)
studio_create(notebook_id="...", artifact_type="video", video_format="cinematic", visual_style="classic", confirm=True)
studio_status(notebook_id="...")  # Poll — takes 3-8 minutes

# 6. Capture insights as notes
note(notebook_id="...", action="create", content="Key takeaway: PKCE is mandatory in 2.1")
```
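Because studio generation in steps 4-5 is asynchronous, the repeated status checks can be wrapped in a generic polling helper. A minimal sketch, assuming `check_fn` is whatever wraps your `studio_status` call and returns a dict whose `status` key ends up as `"completed"` or `"failed"` (both names here are assumptions, not the tool's documented schema):

```python
import time

def poll_until_ready(check_fn, timeout_s: int = 600, interval_s: int = 15) -> dict:
    """Poll check_fn() until it reports a terminal status or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = check_fn()
        # Stop on any terminal state; the caller inspects success vs failure
        if result["status"] in ("completed", "failed"):
            return result
        time.sleep(interval_s)
    raise TimeoutError(f"artifact not ready after {timeout_s}s")
```

A 15-second interval is a reasonable default given the 2-5 minute generation times quoted above; tighter polling just burns requests.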
Common Mistakes
- Forgetting auth expiry — Sessions last ~20 min. Always check with `nlm login --check` before long workflows. Re-auth with `nlm login`.
- One giant notebook — Split by project/topic. One notebook with 50 sources degrades retrieval quality.
- Huge single sources — Split documents >50K characters into logical sections for better chunking and retrieval.
- Not polling studio_status — Studio content generation takes 2-5 minutes. Poll `studio_status` instead of assuming instant results.
- Ignoring source types — Use `type=url` for web pages (auto-extracts), `type=file` for local files. Using `type=text` for a URL gives you the URL string, not the page content.
- Deleting notebooks without checking — `notebook_delete` is irreversible. List contents with `source_list_drive` and `note(action=list)` first.
- Skipping research_import — `research_start` discovers content but does not add it. Use `research_import` to actually add findings as sources.
- Raw queries on empty notebooks — `notebook_query` returns poor results with no sources. Add sources before querying.
- Ignoring language param — `studio_create` supports BCP-47 `language` codes (e.g., `he`, `ar`, `ja`). Defaults to English if omitted.
- Batch without purpose — `batch` and `cross_notebook_query` are powerful but add latency. Use for multi-project synthesis, not single-notebook tasks.
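Several of these mistakes can be caught with a quick pre-flight check before querying. A hedged sketch in the same tool-call style as the Example section; the `source_count` field on the `notebook_describe` response is an assumption, not a documented return shape:

```
# Pre-flight: confirm the notebook actually has sources before querying
nb = notebook_describe(notebook_id="...")
if nb["source_count"] == 0:
    # An empty notebook yields poor grounded answers; add sources first
    source_add(notebook_id="...", type="url", url="https://example.com/spec")

notebook_query(notebook_id="...", query="...")
```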
Related Skills
- `ork:mcp-patterns` — MCP server building, security, and composition patterns
- `ork:web-research-workflow` — Web research strategies and source evaluation
- `ork:memory` — Memory fabric for cross-session knowledge persistence
- `ork:security-patterns` — Input sanitization and layered security