# tdx Connector Config

Configure `connector_config` for activations by discovering fields with `tdx connection schema`.
## Key Commands

```bash
# List connections (shows type and name)
tdx connection list

# Discover connector_config fields (ALWAYS run this first)
tdx connection schema <connector_type>   # Pass the TYPE (not the connection name)

# List all connector types
tdx connection types
```

**Schema vs. settings:** `schema` shows the `connector_config` fields used in activations; `settings` shows the credential fields used when creating connections.
## Workflow

```bash
tdx connection list                                  # 1. Find the connection type
tdx connection schema salesforce_marketing_cloud_v2  # 2. Get the schema fields
# 3. Write connector_config, then 4. validate: tdx sg push --dry-run
```
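As an illustrative sketch of step 3, the `connector_config` block sits inside an activation definition. Only the `connector_config` keys below come from this doc; the surrounding fields (`name`, `connection`, `schedule`) are assumptions here — see the activation skill for the authoritative structure.

```yaml
# Hypothetical activation file (surrounding field names are illustrative)
name: weekly_s3_export
connection: my-s3-connection       # must match a name from `tdx connection list`
schedule: weekly
connector_config:                  # fields discovered via `tdx connection schema s3_v2`
  bucket: my-bucket
  path: exports/segments/data.csv
  format: csv
```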
## Common Connector Types

### Salesforce Marketing Cloud (`salesforce_marketing_cloud_v2`)

```yaml
connector_config:
  de_name: CustomerSegment        # Data Extension name (requires a primary key)
  shared_data_extension: false
  data_operation: upsert          # upsert | replace
  # For creating a new Data Extension:
  create_new_de: true
  folder_path: Segments/Marketing
  primary_column: email
  is_sendable: true
  sendable_rule: Email Address    # "Subscriber Key" | "Email Address"
  sendable_column: email
```
### Salesforce CRM (`sfdc_v2`)

```yaml
connector_config:
  object: Contact
  mode: update      # append | truncate | update
  unique: email     # Key field (when mode=update)
  upsert: true
```
### AWS S3 (`s3_v2`)

```yaml
connector_config:
  bucket: my-bucket
  path: exports/segments/data.csv
  format: csv       # csv | tsv | jsonl
  compression: gz   # none | gz
```
### BigQuery (`bigquery_v2`)

```yaml
connector_config:
  project: my-gcp-project
  dataset: marketing
  table: segments
  mode: APPEND      # APPEND | REPLACE | REPLACE_BACKUP | TRUNCATE
  auto_create_table: true
```
### Treasure Data (`treasure_data`)

```yaml
connector_config:
  database_name: marketing_db
  table_name: exported_segments
  mode: append      # append | replace
```
## Conditional Fields

Schema output indicates when a field applies:

```
unique: Key [text]
  Show when: mode=["update"]
```

Only include `unique` when `mode` is `update`.
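For example, with the `sfdc_v2` connector above, the two modes would look like this (a sketch using only fields shown in this doc):

```yaml
# mode=update: `unique` is required as the key field
connector_config:
  object: Contact
  mode: update
  unique: email

# mode=append: omit `unique` entirely
connector_config:
  object: Contact
  mode: append
```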
## Related Skills

- `activation` - Activation structure (connection, schedule, columns, notifications)
- `segment` - Segment rule syntax
- `journey` - Journey structure and activation steps