# Shot List Generator

Parse screenplays, collaboratively determine shots, and generate production-ready PDF shot lists.
## Workflow Overview
- Parse Script → Extract scenes, locations, characters, action
- Collaborate → Discuss shot choices scene-by-scene with user
- Generate PDF → Create professional, printable shot list
## Step 1: Parse the Script
Supported formats: `.fountain`, `.fdx`, `.txt`, `.pdf`, `.docx`
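One way to route a file to the right parser is a simple extension lookup. This is an illustrative sketch; the parser-kind names are placeholders, not part of the skill:

```python
from pathlib import Path

# Map supported extensions to a parser kind (placeholder names)
FORMAT_BY_EXT = {
    '.fountain': 'plain-text',
    '.txt': 'plain-text',
    '.fdx': 'final-draft-xml',
    '.pdf': 'pdf-extract',
    '.docx': 'docx-extract',
}

def detect_format(path):
    """Return the parser kind for a script file, or None if unsupported."""
    return FORMAT_BY_EXT.get(Path(path).suffix.lower())

print(detect_format('my_script.Fountain'))  # plain-text
```

`.fountain` and `.txt` can share the text parser below; `.fdx`, `.pdf`, and `.docx` need their own extraction step before scene parsing.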
### Scene Extraction Pattern
Extract from script:
- Scene number (auto-generate if missing)
- Scene heading (INT./EXT., location, time)
- Characters in scene
- Key action beats (story moments needing coverage)
- Page/timing estimate
### Fountain/Text Parsing
```python
import re

def parse_screenplay(text):
    """Extract scenes from screenplay text."""
    scenes = []
    # Scene headings: INT. / EXT. / INT./EXT. / I/E. followed by a location
    scene_pattern = r'^((?:INT\.|EXT\.|INT\./EXT\.|I/E\.)\s+.+)$'
    lines = text.split('\n')
    current_scene = None
    scene_num = 0
    for line in lines:
        line = line.strip()
        if re.match(scene_pattern, line, re.IGNORECASE):
            if current_scene:
                scenes.append(current_scene)
            scene_num += 1
            current_scene = {
                'number': scene_num,
                'heading': line,
                'characters': set(),
                'action_beats': [],
                'content': [],
            }
        elif current_scene:
            current_scene['content'].append(line)
            # Short all-caps lines are usually character cues; skip transitions
            if line.isupper() and 1 < len(line) < 40:
                if not any(t in line for t in ('CUT TO', 'FADE', 'DISSOLVE')):
                    # Drop parenthetical extensions like (V.O.) or (CONT'D)
                    current_scene['characters'].add(line.split('(')[0].strip())
    if current_scene:
        scenes.append(current_scene)
    for s in scenes:
        s['characters'] = sorted(s['characters'])
    return scenes
```
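The heading regex can be checked in isolation. This compiles the same pattern used in `parse_screenplay` and probes it with a few candidate lines:

```python
import re

# Same scene-heading pattern used in parse_screenplay
SCENE_RE = re.compile(r'^(?:INT\.|EXT\.|INT\./EXT\.|I/E\.)\s+.+$', re.IGNORECASE)

for candidate in ('INT. KITCHEN - NIGHT', 'ext. street - day',
                  'I/E. CAR - MOVING', 'SARAH'):
    print(candidate, '->', bool(SCENE_RE.match(candidate)))
```

The first three match (case-insensitively); a bare character cue like `SARAH` does not, so it falls through to the character-detection branch.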
## Step 2: Collaborative Shot Planning
After parsing, present the scenes and discuss coverage. For each scene, ask:
- What's the emotional arc? (Drives framing choices)
- Who has focus? (Determines coverage priority)
- Key moments? (Beats requiring specific shots)
- Practical constraints? (Location, equipment, time)
- Visual style reference? (Film/show inspiration)
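The answers can be captured as a per-scene brief that later drives shot choices. The field names here are illustrative, not prescribed by the skill:

```python
# Hypothetical per-scene brief recording the discussion's answers
scene_brief = {
    'scene': 1,
    'emotional_arc': 'calm -> dread',            # drives framing choices
    'focus': ['SARAH'],                          # coverage priority
    'key_moments': ['sees the letter'],          # beats needing specific shots
    'constraints': 'one window, natural light only',
    'style_ref': 'slow push-ins, static frames',
}

print(sorted(scene_brief))
```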
### Shot Type Reference
| Type | Code | Use For |
|---|---|---|
| Wide/Establishing | WS | Location, groups |
| Full Shot | FS | Full body, action |
| Medium Shot | MS | Dialogue, interaction |
| Medium Close-Up | MCU | Emotional dialogue |
| Close-Up | CU | Reaction, emotion |
| Extreme Close-Up | ECU | Critical detail |
| Over-the-Shoulder | OTS | Dialogue coverage |
| Two-Shot | 2S | Paired characters |
| Insert | INS | Props, details |
| POV | POV | Character perspective |
### Camera Movement Reference
| Movement | Code | Effect |
|---|---|---|
| Static | STATIC | Stability |
| Pan | PAN | Follow horizontally |
| Tilt | TILT | Reveal height |
| Dolly | DOLLY | Approach/retreat |
| Tracking | TRACK | Follow movement |
| Crane | CRANE | Epic scale |
| Handheld | HH | Tension, energy |
| Steadicam | STEDI | Fluid following |
### Angle Reference
| Angle | Effect |
|---|---|
| Eye Level | Neutral |
| Low Angle | Power |
| High Angle | Vulnerability |
| Dutch | Unease |
## Step 3: Building Shot Entries

```python
shot_entry = {
    'scene': 1,
    'shot': 'A',
    'setup': 1,
    'shot_type': 'MS',
    'framing': 'Medium on Sarah',
    'angle': 'Eye Level',
    'movement': 'STATIC',
    'lens': '50mm',
    'description': 'Sarah enters, sees the letter',
    'characters': ['SARAH'],
    'notes': 'Practical window light'
}
```
### Coverage Pattern
Master → Medium → Close-ups → Inserts
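The Master → Medium → Close-ups → Inserts pattern can be sketched as a default per-scene shot plan, a starting point to revise in discussion rather than a final list:

```python
from string import ascii_uppercase

def default_coverage(scene_num, characters, inserts=()):
    """Baseline shot plan: master, medium, one CU per character, then inserts."""
    shots = [('WS', 'Master - full scene'), ('MS', 'Medium coverage')]
    shots += [('CU', f'Close-up on {c}') for c in characters]
    shots += [('INS', f'Insert - {item}') for item in inserts]
    return [
        {'scene': scene_num, 'shot': ascii_uppercase[i],
         'shot_type': stype, 'framing': framing}
        for i, (stype, framing) in enumerate(shots)
    ]

plan = default_coverage(1, ['SARAH'], inserts=['the letter'])
for s in plan:
    print(f"{s['scene']}{s['shot']}  {s['shot_type']:4} {s['framing']}")
```

This yields shots 1A (WS) through 1D (INS) for a one-character scene with a single insert; shot letters simply continue through the alphabet as coverage is added.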
## Step 4: Generate PDF

Use `scripts/generate_shot_list_pdf.py` for professional output.
### PDF Columns
| Column | Content |
|---|---|
| Shot # | Scene.Shot ID |
| Setup | Camera setup |
| Type | Shot type code |
| Framing | Description |
| Move | Camera movement |
| Action | What happens |
| Notes | Technical notes |
Output to `/mnt/user-data/outputs/shot_list_{project}.pdf`
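The script handles layout; building the table rows it renders from shot entries is straightforward. A hedged sketch, assuming the column order above (the actual script's interface may differ):

```python
def shot_list_rows(shots):
    """Flatten shot entries into rows matching the PDF columns."""
    header = ['Shot #', 'Setup', 'Type', 'Framing', 'Move', 'Action', 'Notes']
    rows = [header]
    for s in shots:
        rows.append([
            f"{s['scene']}{s['shot']}",   # Scene.Shot ID, e.g. 1A
            str(s.get('setup', '')),
            s.get('shot_type', ''),
            s.get('framing', ''),
            s.get('movement', ''),
            s.get('description', ''),
            s.get('notes', ''),
        ])
    return rows

rows = shot_list_rows([{'scene': 1, 'shot': 'A', 'setup': 1, 'shot_type': 'MS',
                        'framing': 'Medium on Sarah', 'movement': 'STATIC',
                        'description': 'Sarah enters, sees the letter',
                        'notes': 'Practical window light'}])
print(rows[1][0])  # 1A
```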
## References
- `references/shot_terminology.md` - Complete glossary
- `references/coverage_patterns.md` - Common coverage strategies