# Setting up a data warehouse source
Use this skill when the user wants to connect an external data source to PostHog's data warehouse for the first time. The setup has a specific three-step flow (wizard → db-schema → create) — skipping steps leads to failed sources and confused users.
## When to use this skill
- The user wants to connect a new source: "connect Stripe", "import my Postgres orders table", "sync Hubspot contacts"
- The user isn't sure what source types PostHog supports
- The user has credentials but doesn't know how to structure the `schemas` payload
- The user wants guidance on which sync method to pick per table
## Available tools
| Tool | Purpose |
|---|---|
| `external-data-sources-wizard` | Discover which source types exist and what fields each needs |
| `external-data-sources-db-schema` | Validate credentials and list tables with available sync methods per table |
| `external-data-sources-create` | Create the source — requires a `schemas` array built from the db-schema response |
| `external-data-sources-check-cdc-prerequisites-create` | Postgres CDC pre-flight check (optional, only for Postgres CDC) |
| `external-data-sources-webhook-info-retrieve` | Check if a source supports webhooks and whether one has been registered |
| `external-data-sources-create-webhook-create` | Register a webhook with the external service after source creation |
| `external-data-sources-update-webhook-inputs-create` | Supply the signing secret manually when auto-registration failed |
| `external-data-sources-list` | After creation, confirm the source is listed and see its initial status |
| `external-data-schemas-list` | See per-table sync status once the source is created |
## The three-step flow

Every source setup follows the same shape. Don't try to shortcut to external-data-sources-create — you need the db-schema response to build a valid `schemas` payload.
```
┌────────────────────┐
│ 1. wizard          │  What source types exist? What fields does each need?
└────────┬───────────┘
         ▼
┌────────────────────┐
│ 2. db-schema       │  Validate creds. List tables + available sync methods per table.
└────────┬───────────┘
         ▼
┌────────────────────┐
│ 3. create          │  Send source_type + credentials + schemas[] to actually create.
└────────────────────┘
```
## Workflow

### Step 1 — Discover the source type
Call external-data-sources-wizard (no params). The response is a dict keyed by source type. Each entry describes:

- `name` — the canonical source_type string you'll pass to later calls (e.g. "Postgres", "Stripe", "Hubspot").
- `label`/`caption` — human-readable.
- `fields` — the config fields needed (host, port, database, api_key, client_id/secret, ...). Each has `name`, `type` (input, password, switch, select, file-upload), and `required`.
- `featured`, `unreleasedSource` — use to gauge readiness. Skip sources marked `unreleasedSource: true` unless the user explicitly asked for a preview.
Match the user's request to a source. If they said "Postgres", look up Postgres. If they said something ambiguous
like "database", present the top relevant matches (Postgres, MySQL, MongoDB, BigQuery, Snowflake, Redshift) and let
them pick.
For OAuth-based sources (Hubspot, Salesforce, Google Ads), the wizard entry hints at an OAuth flow. These typically need the user to authorize in the PostHog UI rather than pasting credentials — explain this and direct them to the source setup page rather than trying to collect tokens in chat. OAuth is about authentication, not about how data flows; OAuth sources still use polling bulk sync, not webhooks.
Gather the required credentials from the user. Never ask for more fields than the wizard entry says are required —
asking for an unnecessary port when the source doesn't need one confuses users.
### Step 2 — Validate credentials and discover tables
Call external-data-sources-db-schema with source_type plus all credential fields. This does two things at once:
- Validates the credentials against the live source. Returns 400 with a `message` if anything is wrong (bad host, wrong password, permission denied). Show the error verbatim — it's often actionable ("password authentication failed for user 'x'").
- If valid, returns an array of table entries. Each entry:
```
{
  "table": "orders",
  "should_sync": false,
  "rows": 1250000,
  "incremental_available": true,   # can do sync_type=incremental
  "append_available": true,        # can do sync_type=append
  "cdc_available": true,           # can do sync_type=cdc (null = not enabled for team)
  "supports_webhooks": false,      # can do sync_type=webhook for real-time push
  "incremental_fields": [          # candidates: usually updated_at, created_at, id
    {"field": "updated_at", "type": "datetime", "label": "updated_at", ...},
    {"field": "created_at", "type": "datetime", ...},
    {"field": "id", "type": "integer", ...}
  ],
  "detected_primary_keys": ["id"],
  "available_columns": [{"field": "id", "type": "integer", "nullable": false}, ...],
  "description": "..."
}
```
Present this to the user. Don't dump the raw JSON — summarize: which tables were found, row counts, and the default sync method recommendation per table (see sync-type decision guide).
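The summarization step can be sketched in Python. This is illustrative only — `summarize_tables` is not part of any PostHog API; it just shows one way to condense the db-schema response (using the field names from the example above) into the per-table summary you'd present:

```python
def summarize_tables(tables: list[dict]) -> str:
    """Render a db-schema response as a short per-table summary (sketch)."""
    lines = []
    # Largest tables first — they matter most for billing and sync-method choice.
    for t in sorted(tables, key=lambda t: t.get("rows") or 0, reverse=True):
        methods = [name for name, available in (
            ("incremental", t.get("incremental_available")),
            ("append", t.get("append_available")),
            ("cdc", t.get("cdc_available")),
            ("webhook", t.get("supports_webhooks")),
        ) if available]
        methods.append("full_refresh")  # always an option
        rows = t.get("rows")
        row_str = f"~{rows:,} rows" if isinstance(rows, int) else "row count unknown"
        lines.append(f"- {t['table']}: {row_str}; sync options: {', '.join(methods)}")
    return "\n".join(lines)
```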
### Step 3 — Confirm per-table sync configuration
For each table the user wants to sync, pick a sync_type. See the sync-type decision guide for detailed rules, but the short version is:
- Small / dimension tables (<50k rows, no natural ordering column): `full_refresh` — simple and always correct.
- Large tables with an `updated_at`/`modified_at`: `incremental` — much cheaper per sync.
- Append-only immutable tables (logs, events): `append` if available — preserves history.
- Postgres with CDC enabled and you need near-real-time: `cdc` — requires primary keys and Postgres prerequisites.
- Sources that support webhooks (currently Stripe): for near-real-time ingestion set `sync_type: "webhook"` on the tables where `supports_webhooks: true`, then register the webhook as a post-create step (see step 6 below). Tables that don't support webhooks on the same source still need a bulk sync_type.
For each schema that will use incremental/append/cdc, you also need:

- `incremental_field` — which column to track for high-water-mark ordering. Pick from the `incremental_fields` list returned by db-schema. Prefer `updated_at` over `created_at` (updated_at catches late-arriving updates; created_at misses them). For integer-only tables, use the monotonically increasing primary key.
- `incremental_field_type` — must match the chosen field's type (datetime, timestamp, date, integer, numeric, objectid).
- `primary_key_columns` — required for CDC. Use `detected_primary_keys` from db-schema.
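The default-recommendation logic above can be sketched as a small Python helper. This is an assumption-laden sketch, not PostHog's own logic: CDC and webhook are opt-in choices the user must confirm, so this default only ever picks `incremental` or `full_refresh`, and the 50k-row threshold mirrors the rule of thumb above:

```python
def recommend_sync(table: dict, small_table_rows: int = 50_000) -> dict:
    """Suggest a default sync config for one db-schema table entry (sketch)."""
    by_name = {f["field"]: f for f in table.get("incremental_fields", [])}
    # Prefer updated_at (catches late-arriving updates) over created_at,
    # falling back to a monotonically increasing integer primary key.
    field = next((n for n in ("updated_at", "created_at", "id") if n in by_name), None)

    if (table.get("rows") or 0) >= small_table_rows and table.get("incremental_available") and field:
        return {
            "name": table["table"],
            "should_sync": True,
            "sync_type": "incremental",
            "incremental_field": field,
            "incremental_field_type": by_name[field]["type"],
        }
    # Small/dimension tables, or nothing usable to order by: full refresh is always correct.
    return {"name": table["table"], "should_sync": True, "sync_type": "full_refresh"}
```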
### Step 4 — Pick a good prefix

The source's prefix is prepended to table names in HogQL. Tables end up as `{prefix}_{table_name}`.
- Default to the source type lowercased if there's only one source of that type: `stripe`, `postgres`.
- If the user already has a Postgres source, pick something distinguishing: `postgres_prod`, `postgres_analytics`.
- Use lowercase, underscore-separated. The prefix becomes part of every HogQL query the user writes.
Confirm the prefix with the user before creating — changing it later is possible but renames every table.
### Step 5 — Create the source
Call external-data-sources-create with:
```json
{
  "source_type": "Postgres",
  "prefix": "postgres_prod",
  "payload": {
    "host": "...",
    "port": "5432",
    "dbname": "...",
    "user": "...",
    "password": "...",
    "schema": "public",
    "schemas": [
      {
        "name": "orders",
        "should_sync": true,
        "sync_type": "incremental",
        "incremental_field": "updated_at",
        "incremental_field_type": "datetime",
        "primary_key_columns": ["id"]
      },
      {
        "name": "users",
        "should_sync": true,
        "sync_type": "full_refresh"
      },
      {
        "name": "audit_log",
        "should_sync": false
      }
    ]
  }
}
```
Rules for the schemas array:

- Every table returned by db-schema should be included, even ones the user doesn't want (set `should_sync: false`). Tables the user didn't mention default to `should_sync: false`.
- `sync_type` is required only when `should_sync: true`.
- `incremental_field`/`incremental_field_type` must be present when `sync_type` is `incremental` or `append`.
- `primary_key_columns` must be present when `sync_type` is `cdc`.
On success you'll get back a source with a new id. The first sync is triggered automatically.
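The schemas-array rules lend themselves to a client-side pre-flight check before calling create. A sketch only — the server does its own validation; `validate_schemas` is a hypothetical helper that catches the common mistakes early:

```python
def validate_schemas(schemas: list[dict], discovered: set[str]) -> list[str]:
    """Check a schemas array against the rules above; returns error strings."""
    errors = []
    for s in schemas:
        name = s.get("name", "<missing name>")
        if name not in discovered:
            errors.append(f"{name}: not returned by db-schema (don't invent tables)")
        if not s.get("should_sync"):
            continue  # disabled tables need no sync_type
        sync_type = s.get("sync_type")
        if sync_type is None:
            errors.append(f"{name}: should_sync=true requires a sync_type")
        elif sync_type in ("incremental", "append") and not (
            s.get("incremental_field") and s.get("incremental_field_type")
        ):
            errors.append(f"{name}: {sync_type} needs incremental_field and incremental_field_type")
        elif sync_type == "cdc" and not s.get("primary_key_columns"):
            errors.append(f"{name}: cdc needs primary_key_columns")
    # Every discovered table should appear, even as should_sync: false.
    for missing in sorted(discovered - {s.get("name") for s in schemas}):
        errors.append(f"{missing}: discovered by db-schema but missing from schemas")
    return errors
```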
### Step 6 — Register a webhook (only when any schema is `sync_type: "webhook"`)
Webhook-type schemas don't start receiving data just by existing — the external service needs to know where to POST
events, and PostHog needs to know how to verify them. This is a second call after source creation, not part of the
external-data-sources-create payload. Do this before telling the user the setup is complete, otherwise they
hear "syncs are running" while the push channel is still unregistered.
Only needed when at least one schema on the source has sync_type: "webhook" and should_sync: true. Currently only
Stripe implements this flow; for everything else skip this step.
Before calling create-webhook, check external-data-sources-webhook-info-retrieve({id}). If it already returns
exists: true, do NOT call create-webhook again — each successful call registers a new external endpoint and would
result in duplicate deliveries.
- Call `external-data-sources-create-webhook-create({id})`. PostHog:
  - creates the HogFunction that will receive webhook POSTs,
  - builds a schema_mapping from external event types to PostHog schema ids,
  - calls the source's API (e.g. Stripe) to register the webhook URL and subscribe to the relevant events,
  - on Stripe, auto-captures the `signing_secret` and stores it securely.

  Returns `{success, webhook_url, error}`. On success, report the `webhook_url` to the user for their records — but they don't need to paste it anywhere; registration is already done.
- If `success: false` with a permissions error like "API key doesn't have permission to create webhooks":
  - The HogFunction is still created, just disabled.
  - Ask the user to create the webhook manually in the source's dashboard using the returned `webhook_url`.
  - Have them copy the signing secret from the source's webhook settings.
  - Call `external-data-sources-update-webhook-inputs-create({id}, {inputs: {signing_secret: "whsec_..."}})` to store it. The HogFunction picks it up and verifies incoming payloads.
- Verify with `external-data-sources-webhook-info-retrieve({id})`. A healthy webhook has `exists: true`, `external_status.status: "enabled"`, and no `error`.
Webhooks are supplementary to bulk sync. The first load of a webhook-enabled schema is still done via polling (`initial_sync_complete` flips to true when done); after that, the webhook becomes the primary ingestion path. A webhook schema will still have a `sync_frequency` that schedules a periodic bulk refresh as a safety net. This is expected — not something to "fix".
### Step 7 — Confirm and explain what happens next
After creation (and, for webhook schemas, after Step 6):
- Call `external-data-schemas-list` to show the user the initial state.
- Explain: every enabled schema enters `Running`, then moves to `Completed` when the first sync finishes. First syncs can take anywhere from seconds to hours depending on row count — a multi-million-row table is fine, just slow.
- Tell them how to query: `SELECT * FROM {prefix}_{table_name} LIMIT 10` in HogQL.
- Offer to check back in a few minutes to confirm the initial syncs succeeded.
## CDC setup for Postgres (optional, when requested)
If the user wants near-real-time replication from Postgres:
- Before calling db-schema, run `external-data-sources-check-cdc-prerequisites-create` with their Postgres creds. It returns `{valid, errors[]}` listing anything missing (wal_level, replication slot, publication, permissions).
- If `valid: false`, present the errors and ask the user to fix them on the Postgres side. Don't try to create a CDC source that will immediately fail.
- Once prerequisites pass, proceed to db-schema and create. Set `sync_type: "cdc"` on the tables that need it, and include `primary_key_columns` for each (CDC requires them).
## Important notes
- Always validate creds with db-schema before create. The create endpoint will accept invalid creds and then fail asynchronously — the source appears in the list with status `Error` and no tables. Skipping the validation step just pushes the failure into the background.
- Present the table list before creating. Large databases may have hundreds of tables. Don't auto-select them all — row counts and relevance matter for billing. Let the user opt in explicitly.
- Don't invent schemas. Every entry in the `schemas` array must correspond to a real table from the db-schema response. You can't "also add an orders table" unless db-schema found one.
- Prefix is load-bearing. It's part of every HogQL query the user will ever write against these tables. Pick something short, descriptive, and not already taken.
- OAuth sources are different. Hubspot, Salesforce, Google Ads etc. need the user to authorize via the PostHog UI. Direct them there — don't try to collect OAuth tokens in chat.
- Webhooks are a separate step after create. Setting `sync_type: "webhook"` on a schema doesn't register the webhook — the `create-webhook` call does. Always follow create → create-webhook → webhook-info for webhook-type schemas, and never leave a webhook schema dangling without registration (it just won't receive events).
- Webhook support is source-specific and sparse. Currently only Stripe implements `WebhookSource`. Don't promise webhooks for Hubspot, Salesforce, or Postgres — they'll use polling sync.
- Row counts drive billing. Warehouse syncing is metered by rows synced. A chatty 500M-row events table synced hourly is very different from a 10k-row dimension table synced daily. Flag large tables and offer longer sync frequencies (`sync_frequency: "24hour"`) as the default.
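The billing flag can be sketched as a one-liner. The 1M-row threshold is an assumption for illustration, not a PostHog limit:

```python
def flag_large_tables(tables: list[dict], threshold: int = 1_000_000) -> list[str]:
    """Names of db-schema tables worth an explicit billing warning and a
    "24hour" default sync_frequency (threshold is an assumption)."""
    return [t["table"] for t in tables if (t.get("rows") or 0) >= threshold]
```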