d1-migration
# D1 Migration Workflow

Guided workflow for Cloudflare D1 database migrations using Drizzle ORM.
## Standard Migration Flow
### 1. Generate Migration

```sh
pnpm db:generate
```

This creates a new `.sql` file in `drizzle/` (or your configured migrations directory).
### 2. Inspect the SQL (CRITICAL)

Always read the generated SQL before applying. Drizzle sometimes generates destructive migrations for simple schema changes.
#### Red Flag: Table Recreation

If you see this pattern, the migration will likely fail:

```sql
CREATE TABLE `my_table_new` (...);
INSERT INTO `my_table_new` SELECT ..., `new_column`, ... FROM `my_table`;
-- ^^^ This column doesn't exist in the old table!
DROP TABLE `my_table`;
ALTER TABLE `my_table_new` RENAME TO `my_table`;
```
**Cause:** Changing a column's default value in the Drizzle schema triggers a full table recreation, and the generated `INSERT ... SELECT` references the new column in the old table, where it doesn't exist yet.
**Fix:** If you're only adding new columns (no type/constraint changes on existing columns), simplify to:

```sql
ALTER TABLE `my_table` ADD COLUMN `new_column` TEXT DEFAULT 'value';
```

Edit the `.sql` file directly before applying.
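To catch this pattern before applying, the generated SQL can be scanned mechanically. The helper below is a hypothetical sketch (not part of Drizzle or wrangler) that flags the recreate pattern shown above:

```typescript
// Hypothetical helper: flag a generated migration that recreates a table.
// Matches the CREATE TABLE / DROP TABLE / RENAME TO pattern shown above.
export function isDestructiveRecreate(sql: string): boolean {
  const s = sql.toUpperCase();
  return (
    s.includes("CREATE TABLE") &&
    s.includes("DROP TABLE") &&
    s.includes("RENAME TO")
  );
}
```

Run it over each new file in the migrations directory; a `true` result means the `INSERT ... SELECT` deserves a careful read before Step 3.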
### 3. Apply to Local

```sh
pnpm db:migrate:local
# or: npx wrangler d1 migrations apply DB_NAME --local
```
### 4. Apply to Remote

```sh
pnpm db:migrate:remote
# or: npx wrangler d1 migrations apply DB_NAME --remote
```

Always apply to BOTH local and remote before testing. Local-only migrations cause confusing "works locally, breaks in production" issues.
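The `pnpm` scripts used above aren't defined in this doc; a typical mapping (assumed script names, substitute your own `DB_NAME`) would look like:

```json
{
  "scripts": {
    "db:generate": "drizzle-kit generate",
    "db:migrate:local": "wrangler d1 migrations apply DB_NAME --local",
    "db:migrate:remote": "wrangler d1 migrations apply DB_NAME --remote"
  }
}
```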
### 5. Verify

```sh
# Check local
npx wrangler d1 execute DB_NAME --local --command "PRAGMA table_info(my_table)"

# Check remote
npx wrangler d1 execute DB_NAME --remote --command "PRAGMA table_info(my_table)"
```
## Fixing Stuck Migrations
When a migration partially applies (e.g. the column was added but the migration wasn't recorded), wrangler retries it and fails on the duplicate column.

**Symptoms:** `pnpm db:migrate` errors on a migration that looks like it should already be done, and `PRAGMA table_info` shows the column exists.
### Diagnosis

```sh
# 1. Verify the column/table exists
npx wrangler d1 execute DB_NAME --remote \
  --command "PRAGMA table_info(my_table)"

# 2. Check which migrations are recorded
npx wrangler d1 execute DB_NAME --remote \
  --command "SELECT * FROM d1_migrations ORDER BY id"
```
### Fix

```sh
# 3. Manually record the stuck migration
npx wrangler d1 execute DB_NAME --remote \
  --command "INSERT INTO d1_migrations (name, applied_at) VALUES ('0013_my_migration.sql', datetime('now'))"

# 4. Run remaining migrations normally
pnpm db:migrate
```
### Prevention

- `CREATE TABLE IF NOT EXISTS` — safe to re-run
- `ALTER TABLE ADD COLUMN` — SQLite has no `IF NOT EXISTS` variant; check column existence first or use try/catch in application code
- Always inspect generated SQL before applying (Step 2 above)
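The "check column existence first" approach can be sketched against the rows that `PRAGMA table_info` returns. `hasColumn` is a hypothetical helper, and the commented usage assumes a Workers environment with a D1 binding named `DB`:

```typescript
// Shape of one row returned by PRAGMA table_info (only `name` is needed here)
type ColumnInfo = { name: string };

// Hypothetical helper: does the table already have this column?
export function hasColumn(columns: ColumnInfo[], name: string): boolean {
  return columns.some((c) => c.name === name);
}

// Usage sketch against a D1 binding (left as comments to stay self-contained):
// const { results } = await env.DB.prepare("PRAGMA table_info(my_table)").all();
// if (!hasColumn(results as ColumnInfo[], "new_column")) {
//   await env.DB.exec("ALTER TABLE my_table ADD COLUMN new_column TEXT");
// }
```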
## Bulk Insert Batching

D1's bound-parameter limit causes silent failures with large multi-row INSERTs. Batch into chunks:
```typescript
const BATCH_SIZE = 10;

for (let i = 0; i < allRows.length; i += BATCH_SIZE) {
  const batch = allRows.slice(i, i + BATCH_SIZE);
  await db.insert(myTable).values(batch);
}
```

**Why:** D1 fails when rows × columns exceeds ~100–150 parameters.
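Given that limit, the batch size can be derived from the column count instead of hard-coded. A sketch assuming a conservative 100-parameter budget:

```typescript
// Conservative budget under the observed ~100-150 parameter ceiling (assumption)
const PARAM_BUDGET = 100;

// Largest batch whose rows * columns stays within the budget
export function safeBatchSize(columnCount: number): number {
  return Math.max(1, Math.floor(PARAM_BUDGET / columnCount));
}
```

For a 10-column table this yields batches of 10 rows, matching the hard-coded value above.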
## Column Naming

| Context | Convention | Example |
|---|---|---|
| Drizzle schema | camelCase | `caseNumber: text('case_number')` |
| Raw SQL queries | snake_case | `UPDATE cases SET case_number = ?` |
| API responses | Match SQL aliases | `SELECT case_number FROM cases` |
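The camelCase-to-snake_case mapping in the table is mechanical. A small hypothetical helper makes it explicit (note Drizzle itself doesn't derive the SQL name; you pass it explicitly, as in `text('case_number')`):

```typescript
// Hypothetical helper: derive the snake_case SQL name from a camelCase
// schema property, matching the convention in the table above.
export function camelToSnake(name: string): string {
  return name.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toLowerCase();
}
```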
## New Project Setup

When creating a D1 database for a new project, follow this order:

1. Deploy Worker first — `npm run build && npx wrangler deploy`
2. Create D1 database — `npx wrangler d1 create project-name-db`
3. Copy `database_id` to the `d1_databases` binding in `wrangler.jsonc`
4. Redeploy — `npx wrangler deploy`
5. Run migrations — apply to both local and remote
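The binding from step 3 looks roughly like this in `wrangler.jsonc` (illustrative names; the `database_id` comes from the `wrangler d1 create` output):

```jsonc
{
  "name": "project-name",
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "project-name-db",
      "database_id": "<id-from-create-output>"
    }
  ]
}
```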