---
name: gtm-meta-skill
description: Use this skill for prospecting, account research, contact enrichment, verification, lead scoring, personalization, and campaign activation.
---

# GTM Meta Skill
## 1) What this skill governs

- Route GTM decisions, safety gates, and provider/quality defaults before execution.
- Keep long command chains and tooling nuance in sub-docs; provider-specific implementation detail in `provider-playbooks/*.md`.
- Provide clear entry points for both paid and non-paid workflows, including `--rows 0:1` pilots.
### Process/goal

The customer is generally trying to go from "I have an ICP" to "here's a list of prospects with email/LinkedIn and highly personalized content or signals." They may be anywhere in this process — meet them where they are and guide them along.
### When ICP context matters (and when it doesn't)

ICP context is required for:
- Prospecting from scratch / choosing who to target
- Persona selection and qualification
- Custom signal discovery and personalized messaging (`call_ai` columns)
- LinkedIn lookup when you don't have enough identifying info (title, company, geo) — ICP persona titles become your search filter

For these: if there is an ICP.md somewhere in this repository (`context/`), read it; otherwise guide the user to create one, or at least get base context on who they are (e.g. the customer's domain is super high value — you can scrape their site and understand them).
ICP is NOT required for mechanical tasks — do not ask for it, do not raise it as an objection:
- Enriching an existing CSV with a specific field (email, phone, LinkedIn when identifiers are strong)
- Validating email addresses
- Scraping profiles from known URLs
- Running a waterfall on a known column
- Any task where the user already picked their targets and is asking for a specific enrichment type
Heuristic: if the user hands you a CSV and asks for a concrete field, just execute. ICP becomes required when the agent has to choose who to target, craft what to say, or disambiguate a weak lookup.
### Documentation hierarchy

- Level 1 (`SKILL.md`): decision model, guardrails, approval gates, links to sub-docs.
- Level 2 (phase docs): `finding-companies-and-contacts.md`, `enriching-and-researching.md`, `writing-outreach.md`, `prompts.json`.
- Level 2.5 (`recipes/*.md`): step-by-step playbooks for specific tasks (email lookup, LinkedIn resolution, waterfall patterns, contact finding, actor contracts). Search them like code with Grep.
- Level 3 (`provider-playbooks/*.md`): provider-specific quirks, cost/quality notes, and fallback behavior.

No-loss rule: moved guidance remains fully documented at its canonical level and is linked from here.
## 2) Read behavior — MANDATORY before any execution

STOP. Do not call any provider, run `deepline tools execute`, or issue any search command until you have opened the correct sub-doc for your task.
These skill docs and sub-docs are not generic documentation — they are distilled from hundreds of real runs and encode exactly what works, what fails, and why. They contain validated parameter schemas, correct filter syntax, parallel execution patterns, tested sample payloads, and known pitfalls that took many iterations to discover. Think of them as shortcuts: reading a doc for 5 seconds saves you from 10 failed tool calls, wasted credits, and garbage output. Every time an agent skips reading the docs and tries to "figure it out" from first principles, it re-discovers the same failure modes that are already documented and solved.
SKILL.md is the routing layer — it tells you WHERE to go, not HOW to execute. The sub-docs and task-specific skills contain the HOW. Without them you will guess parameters, pick wrong providers, run searches sequentially instead of in parallel, and produce garbage results. This has happened repeatedly.
### Open the right doc BEFORE executing
This is not optional. Read the matching doc. Do not skip this step. Do not "just try Apollo real quick" or "just run one search to see." These docs exist because the correct approach was non-obvious and had to be learned through trial and error — they are shortcuts that let you skip straight to what works.
!important READING MULTIPLE DOCS IS A GREAT IDEA AND OFTEN SUPER ESSENTIAL. JUST READ MORE.
Routing rules — match your task to a doc and READ IT:
| When the task involves... | You MUST read this doc first | What it gives you (that SKILL.md doesn't) |
|---|---|---|
| Finding companies, finding people, building lead lists, prospecting, portfolio/VC sourcing, contact finding at known companies, coverage completion at scale | `finding-companies-and-contacts.md` | Provider filter schemas, parallel execution patterns, provider mix tables, role-based search rules, subagent orchestration, at-scale coverage completion, portfolio/VC shortcuts, contact finding patterns. |
| Researching companies or people, understanding what they build, figuring out use cases, personalizing based on mission/product/industry, enriching a CSV, adding data columns, waterfall enrichment, finding emails/phones/LinkedIn, coalescing data, custom signals, `call_ai` prompts, Apify actors — any task that adds or transforms row-level data | `enriching-and-researching.md` | `deepline enrich` syntax and all flags. Waterfall patterns with fallback chains. `call_ai` / `run_javascript` types. Multi-pass pipeline patterns (research pass → generation pass). Coalescing patterns. Email/phone/LinkedIn waterfall orders. Custom signal buckets. Apify actor selection. GTM definitions and defaults. |
| Writing cold emails, personalizing outreach, lead scoring, qualification, sequence design, campaign copy, inspecting CSVs in Playground. If the task also requires researching companies/people to inform the writing, read `enriching-and-researching.md` too — it has the multi-pass pipeline pattern. | `writing-outreach.md` | Prompt templates from `prompts.json`. Scoring rubrics. Email length/tone/structure rules. Personalization patterns. Qualification frameworks. Playground inspection commands. |
### Recipes: step-by-step playbooks for specific tasks (check before executing)

The `recipes/` directory contains battle-tested playbooks. Before you start executing, scan this list and read any recipe that matches your task.
When a recipe matches: follow it step-by-step as your execution plan. Recipes encode hard-won sequencing and provider choices — trust them over generic guidance or your own intuition. If the user's request doesn't perfectly fit, adapt the recipe using the phase docs above, but keep the recipe's structure and ordering as your baseline.
| Recipe | Use when... |
|---|---|
| `build-tam.md` | Building a total addressable market list or large company list from ICP criteria |
| `enriching-and-researching.md` | Finding contacts/people at known companies via the persona lookup play |
| `enriching-and-researching.md` | Contact-to-email routing and native email recovery plays |
| `actor-contracts.md` | Apify actor selection, known actor IDs, input schemas |

If none match, grep for more specific keywords:

```
Grep pattern="<keyword>" path="<directory containing this SKILL.md>/recipes/" glob="*.md" output_mode="files_with_matches"
```
### Data

- When the user hands you a CSV, run `deepline csv show --csv <path> --summary` first to understand its shape (row count, columns, sample values) before deciding how to process it.
- NEVER read a large CSV into context with the Read tool. Reading CSV rows into the conversation window exhausts context and produces zero output. This is the single most common failure mode.
- Use `deepline enrich` for any row-by-row processing (enrichment, rewriting, research, scoring).
- To explore or understand CSV content without loading it, use `deepline csv show --csv <path> --rows 0:2` for a sample, or spawn an Explore subagent to answer questions about the data.
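When the `deepline` CLI isn't available, or you just want a quick sanity check from the shell, the same "sample, don't slurp" idea can be sketched with standard tools. The file path and columns below are hypothetical, created just for the demonstration:

```shell
# Create a tiny hypothetical CSV to demonstrate against.
cat > /tmp/leads.csv <<'EOF'
name,domain,title
Ada,acme.com,CTO
Grace,globex.com,VP Eng
Linus,initech.com,CEO
EOF

# Header + first two data rows only -- never the whole file.
head -n 3 /tmp/leads.csv

# Column count from the header (assumes no quoted commas in the header).
head -n 1 /tmp/leads.csv | awk -F, '{print NF " columns"}'

# Data row count without printing any rows into context.
tail -n +2 /tmp/leads.csv | wc -l
```

This keeps at most three lines of the file in the conversation while still answering the shape questions (columns, row count) that decide how to process it.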
### Tools

For signal-driven discovery (investor, funding, hiring, headcount, industry, geo, tech stack, compliance), start with `deepline tools search`. Do not guess fields.

Search 2-4 synonyms, execute in parallel:

```
deepline tools search investor
deepline tools search investor --prefix crustdata
deepline tools search --categories company_search --search_terms "structured filters,icp"
deepline tools search --categories people_search --search_terms "title filters,linkedin"
```
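"Execute in parallel" here is plain shell job control: background each search, then wait for all of them before reading results. A minimal, runnable sketch of the fan-out pattern — `echo` stands in for the real `deepline tools search` call, and the file names are illustrative:

```shell
# Launch each synonym search in the background, writing one result file per term,
# then wait for all jobs before reading anything.
for term in investor funding "venture capital"; do
  # In a real run: deepline tools search "$term" > "/tmp/dl_search_${term// /_}.txt" &
  echo "searching: $term" > "/tmp/dl_search_${term// /_}.txt" &
done
wait
cat /tmp/dl_search_*.txt
```

Writing per-term output files means the parallel jobs never interleave their output, and `wait` guarantees nothing is read half-written.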
Use category filters when tool type matters more than provider breadth. Common categories:

- `company_search`: account/company discovery tools
- `people_search`: people/contact discovery tools
- `company_enrich`: company enrichment on known companies
- `people_enrich`: person/contact enrichment on known people
- `email_verify`: email verification / deliverability
- `email_finder`: email lookup / discovery
- `phone_finder`: phone lookup / discovery
- `research`: company research, ad intel, job search, technographics, web research
- `automation`: workflow-style tools, browser/actor runs, batch automation
- `outbound_tools`: all Lemlist/Smartlead/Instantly/HeyReach style actions
- `autocomplete`: canonical filter value discovery before search
- `admin`: credits, monitoring, logs, schemas, local/dev utilities

Use `--search_terms` for extra ranking hints like "structured filters", "title filters", "api native", "autocomplete", or "bulk".
Good:

```
deepline tools search --categories company_search --search_terms "investors,funding"
deepline tools search --categories research --search_terms "ads,technographics"
```

Avoid:

```
deepline tools search stuff
deepline tools search search across filters
```
## 3) Core policy defaults

### 3.1 Definitions and defaults

GTM time windows, thresholds, and interpretation rules are defined in the Definitions section of `enriching-and-researching.md`.
### Provider Playbooks

- **adyntel playbook** — Use channel-native ad endpoints first, then synthesize cross-channel insights. Keep domains normalized and remember Adyntel bills per request except for free polling endpoints. Last reviewed: 2026-02-27.
- **apify playbook** — Prefer sync run (`apify_run_actor_sync`) for actor execution. Use async run plus polling only when you need non-blocking execution. Reach for Apify before `call_ai`/WebSearch when the source is already known and a source-specific actor exists. Last reviewed: 2026-02-11.
- **apollo playbook** — Cheap but mediocre-quality people/company search; use `include_similar_titles=true` unless strict mode is explicitly requested. Last reviewed: 2026-02-11.
- **cloudflare playbook** — Use `cloudflare_crawl` to crawl websites and extract content as markdown, HTML, or JSON. Returns partial results on timeout — check the `timedOut` field. Browser rendering is enabled by default. Last reviewed: 2026-03-11.
- **crustdata playbook** — Start with free autocomplete and default to fuzzy contains operators (`(.)`) for higher recall. Use ISO-3 country codes, prefer `crunchbase_categories` over `linkedin_industries` for niche verticals, and use `employee_count_range` for filtering instead of `employee_metrics.latest_count`. Last reviewed: 2026-02-11.
- **deepline_native playbook** — Launcher actions wait for completion and return final payloads with `job_id`; finder actions remain available for explicit polling. Last reviewed: 2026-02-23.
- **dropleads playbook** — Use Prime-DB search/count first to scope segments efficiently, then run finder/verifier steps only on shortlisted records. Prefer `companyDomains` over `companyNames`, split multi-word keywords into separate tokens, and use broad `jobTitles` plus seniority instead of exact-title matching. Last reviewed: 2026-02-26.
- **exa playbook** — Use search/contents before answer for auditable retrieval, then synthesize with explicit citations. Write natural-language queries, expect discard/noise, and avoid mixing category searches with includeDomains-style source scoping. Last reviewed: 2026-02-11.
- **firecrawl playbook** — Web scraping, crawling, search, and AI extraction. Use `firecrawl_scrape` for single pages, `firecrawl_search` for web search + scraping, `firecrawl_map` for URL discovery, `firecrawl_crawl` for multi-page crawls, and `firecrawl_extract` for structured extraction. Last reviewed: 2026-03-11.
- **forager playbook** — Use totals endpoints first (free) to estimate volume, then search/lookup with reveal flags for contacts. Strong for verified mobiles. Last reviewed: 2026-02-28.
- **google_search playbook** — Use Google Search for broad web recall, then follow up with extraction/enrichment tools for structured workflows. Last reviewed: 2026-02-12.
- **heyreach playbook** — Resolve campaign IDs first, then batch inserts and confirm campaign stats after writes. Last reviewed: 2026-02-11.
- **hunter playbook** — Use discover for free ICP shaping first, then domain/email finder for credit-efficient contact discovery, and verifier as the final send gate. Last reviewed: 2026-02-24.
- **icypeas playbook** — Use email-search for individual email discovery, bulk-search for volume. Scrape LinkedIn profiles for enrichment. Find-people for prospecting with 16 filters. Count endpoints are free. Last reviewed: 2026-02-28.
- **instantly playbook** — List campaigns first, then add contacts in controlled batches and verify downstream stats. Last reviewed: 2026-02-11.
- **leadmagic playbook** — Treat validation as the gatekeeper and run email-pattern waterfalls before escalating to deeper enrichment. Last reviewed: 2026-02-11.
- **lemlist playbook** — List campaign inventory first and push contacts in small batches with post-write stat checks. Last reviewed: 2026-03-01.
- **parallel playbook** — Prefer run-task/search/extract primitives and avoid monitor/stream complexity for agent workflows. Last reviewed: 2026-02-11.
- **peopledatalabs playbook** — Use clean/autocomplete helpers to normalize input before costly person/company search and enrich calls. Treat company search as a last-resort structured path, and prefer payload files or heredocs for non-trivial SQL-style queries. Last reviewed: 2026-02-11.
- **prospeo playbook** — Use enrich-person for individual contacts, search-person for prospecting with 30+ filters, and search-company for account-level lists. Last reviewed: 2026-02-28.
- **smartlead playbook** — List campaigns first, then push leads with Smartlead field names and confirm campaign stats afterward. Last reviewed: 2026-03-05.
- **zerobounce playbook** — Use as the final email validation gate before outbound sends. Check `sub_status` for granular failure reasons. Use batch for 5+ emails. Last reviewed: 2026-02-28.
- Apply defaults when user input is absent.
- User-specified values always override defaults.
- In approval messages, list active defaults as assumptions.
### 3.2 Output policy and user interaction pattern

- Always use `deepline enrich` for list enrichment or discovery at scale (>5 rows). It auto-opens a visual Playground sheet so the user can inspect rows, re-run blocks, and iterate.
- Even for company → ICP person flows, enrich works: search and filter as part of the process, with providers like Apify to guide it.
- Even when you don't have a CSV, create one and use `deepline enrich`.
- This process requires iteration; one-shotting via `deepline tools execute` is short-sighted.
- If a command created a CSV outside enrich, run `deepline csv --render-as-playground start --csv <csv_path> --open`.
- When execution work is complete, stop the backend explicitly with `deepline backend stop --just-backend` unless the user asked to keep it running.
- In chat, send the file path + Playground status, not pasted CSV rows, unless explicitly requested.
- Preserve lineage columns (especially `_metadata`) end-to-end. When rebuilding intermediate CSVs with shell tools, carry the `_metadata` columns forward.
- Never enrich a user-provided or source CSV in-place. Use `--output` to write to your working directory on the first pass, then `--in-place` on that output for subsequent passes. `--in-place` is for iterating on your own prior outputs — never on source files.
- For reruns, keep successful existing cells by default; use `--with-force <alias>` only for targeted recompute.

See `enriching-and-researching.md` for `deepline csv` commands, pre-flight/post-run script templates, and inspection details.
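The `_metadata` rule matters most when rebuilding CSVs with shell tools, where it is easy to silently drop a column. A minimal sketch of filtering rows while keeping `_metadata` intact — the file, columns, and threshold are hypothetical, and it assumes simple CSVs with no quoted commas:

```shell
# Hypothetical intermediate CSV with a lineage column.
cat > /tmp/source.csv <<'EOF'
domain,score,_metadata
acme.com,9,run-1
globex.com,3,run-1
initech.com,8,run-2
EOF

# Keep high-scoring rows; filtering whole rows (never projecting columns)
# means every surviving row carries its _metadata forward.
awk -F, 'NR==1 || $2 >= 8' /tmp/source.csv > /tmp/filtered.csv
cat /tmp/filtered.csv
```

The design point: filter rows rather than selecting columns (`cut -f` style), so lineage columns survive every rebuild without being listed explicitly.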
### 3.3 Final file + playground check (light)

- Keep one intended final CSV path: `FINAL_CSV="${OUTPUT_DIR:-/tmp}/<requested_filename>.csv"`
- Before finishing: use the post-run inspection script pattern from `enriching-and-researching.md`. Run it once instead of separate checks.
- In the final message, always report: the exact `FINAL_CSV` and the exact Playground URL.
- Before closing the session, follow the Section 7 consent step for session sharing.
## 4) Credit and approval gate (paid actions)

### 4.1 Required run order

1. Pilot on a narrow scope (for example `--rows 0:1`).
2. Request explicit approval.
3. Run the full scope only after approval.
### 4.2 Execution sizing

- Use smaller sequential commands first.
- Keep limits low and windows bounded before scaling.
- For TAM sizing, keep `limit` at 1: most providers still return the total number of possible matches, but you are only charged for one result.
- Do not depend on monthly caps as a hard risk control.
### 4.2.1 Over-provision, then filter — never chase missing rows
When the user asks for N rows, start with ~1.4×N (e.g., 35 for 25). Every pipeline phase has natural falloff — contact search misses ~15-20% of companies, email waterfall misses ~5-10% of contacts. Fighting to complete the hard rows is almost always a waste: the companies that providers can't find contacts for are the same ones that won't have email coverage either.
Do this:
- Pull more candidates than needed at the top of funnel.
- Run the full pipeline (contacts → emails → outbound).
- At the end, filter to the best N complete rows and deliver those.
- Drop incomplete rows — don't retry or manually patch them.
Do NOT do this:

- Trim results to exactly N before running the pipeline.
- Spend turns retrying failed lookups with fallback providers, `call_ai` + WebSearch, or manual patching.
- Run enrichment on all rows just to fill gaps in a few (especially with expensive tools like `call_ai` with WebSearch).
Provider coverage is a property of the company, not something you can overcome with more effort. Tiny startups with 5 people will have zero coverage across all providers — no amount of retrying changes that. Over-provision at the top and let incomplete rows fall off naturally.
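The ~1.4×N sizing is simple arithmetic; a sketch in integer shell math, rounding up so a fractional result never leaves you under-provisioned:

```shell
requested=25
# ceil(1.4 * N) using integer math: (14*N + 9) / 10
target=$(( (14 * requested + 9) / 10 ))
echo "pull $target candidates to deliver $requested"   # pull 35 candidates to deliver 25
```

The `+ 9` before dividing by 10 is the standard integer-ceiling trick, so e.g. a requested 7 yields ceil(9.8) = 10 candidates rather than 9.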
### 4.3 Approval message content

Include all of:

- Provider(s)
- Pilot summary and observed behavior
- Intent-level assumptions (3–5 one-line bullets)
- CSV preview from a real `deepline enrich --rows 0:1` pilot
- Credits estimate / range
- Full-run scope size
- Max spend cap
- Approval question: "Approve full run?"

Note: `deepline enrich` already prints the ASCII preview by default, so use that output directly.
Strict format contract (blocking):

- Use the exact four section headers: Assumptions, CSV Preview (ASCII), Credits + Scope + Cap, Approval Question.
- If any required section is missing, remain in `AWAIT_APPROVAL` and do not run paid or cost-unknown actions.
- Only transition to `FULL_RUN` after an explicit user confirmation to the approval question.
- Tools like `run_javascript` and `call_ai` don't cost any deepline credits.
Approval template:

```
Assumptions
- <intent assumption 1>
- <intent assumption 2>

CSV Preview (ASCII)
<paste verbatim output from deepline enrich --rows 0:1>

Credits + Scope + Cap
- Provider: <name>
- Estimated credits: <value or range>
- Full-run scope: <rows/items>
- Spend cap: <cap>
- Pilot summary: <one short paragraph>

Approval Question
Approve full run?
```
### 4.4 Mandatory checkpoint

- Must run a real pilot on the exact CSV intended for the full run (`--rows 0:1`).
- Must include the ASCII preview verbatim in the approval message.
- If the pilot fails, fix and re-run until it succeeds before asking for approval.
### 4.5 Billing commands

```
deepline billing balance
deepline billing limit
```

When credits are at zero, link to https://code.deepline.com/dashboard/billing to top up. 10 credits == $1.
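Since 10 credits == $1, converting a balance to dollars for an approval message is one integer division; a sketch (the balance value is hypothetical):

```shell
credits=250
dollars=$(( credits / 10 ))
cents=$(( (credits % 10) * 10 ))   # each leftover credit is 10 cents
printf 'balance: %d credits = $%d.%02d\n' "$credits" "$dollars" "$cents"
```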
## 5) Provider routing (high level)

Reminder: you should have already read the relevant sub-doc from Section 2 before reaching this point. If you haven't, go back and read it now. This section is a quick-reference summary, NOT a substitute for the sub-docs.

- Search / discovery → You MUST have `finding-companies-and-contacts.md` open. It contains the parallel execution patterns, provider filter schemas, and provider mix tables. Start with `deepline tools search <intent>` and execute field-matched provider calls in parallel; when the `deepline-list-builder` subagent is available, use subagent-based parallel search orchestration as the preferred pattern. Use `call_ai` for synthesis/fallback, not as the only first step.
- Enrich / waterfall / coalesce → You MUST have `enriching-and-researching.md` open. It contains `deepline enrich` syntax, waterfall column patterns, and coalescing logic. Default waterfall order: dropleads → hunter → leadmagic → deepline_native → crustdata → peopledatalabs.
- Custom signals / messaging → Read `enriching-and-researching.md` (custom signals section). Use `call_ai*`; start from `prompts.json`.
- Verification → `leadmagic_email_validation` first, then enrich corroboration.
- LinkedIn scraping → Apify actors, by far the best. Search `recipes/actor-contracts.md` for known actor IDs.
- For phone recovery, read `enriching-and-researching.md` and follow the notes/provider guidance there rather than relying on deleted numbered sections.

Provider path heuristics:

- Broad first pass: direct tool calls for high-volume discovery.
- Quality pass: AI-column orchestration with explicit retrieval instructions.
- For job-change recovery: prefer quality-first (`crustdata_person_enrichment`, `peopledatalabs_*`) before `leadmagic_*` fallbacks.
- Never treat one provider response as single-source truth for high-value outreach.
## 6) Additional notes

Critical: keep `writing-outreach.md` workflow context active when running any sequence task. It is not optional for ICP-driven messaging.

### Apify actor flow (short canonical policy)

Sites requiring auth: don't use Apify. Tell the user to use Claude in Chrome, or guide them through Inspect Element to get a curl command with headers (the user is non-technical).

- If the user provides an actor ID/name/URL: use it directly.
- If not, search `recipes/actor-contracts.md` for the actor ID, or try `deepline tools search`.
- If not present, run a discovery search.
- Avoid rental-priced actors.
- Pick on value and quality fit; when tied, choose the best evidence-quality/price balance.
- Honor `operatorNotes` over public ratings when they conflict.

```
deepline tools execute apify_list_store_actors --payload '{"search":"linkedin company employees scraper","sortBy":"relevance","limit":20}'
deepline tools execute apify_get_actor_input_schema --payload '{"actorId":"bebity/linkedin-jobs-scraper"}'
```
## 7) Feedback & session sharing

### 7.1 Proactive issue reporting (mandatory)

Do not wait for the user to ask. If there is a meaningful failure, send feedback proactively using `deepline provide-feedback`.
Trigger when any of these happen:
- A provider/tool call fails repeatedly.
- Output is clearly wrong for the requested task.
- A CLI/runtime bug blocks completion.
- You needed a significant workaround to finish.
Run once per issue cluster (avoid spam), and include:
- workflow goal
- tool/provider/model used
- failure point and exact error details
- reproduction steps attempted
```
deepline provide-feedback "Goal: <goal>. Tool/provider/model: <details>. Failure: <what broke>. Error: <exact message>. Repro attempted: <steps>."
```
### 7.2 End-of-session consent gate (mandatory)

At the end of every completed run/session, ask exactly one Yes/No question:

> Would you like me to send this session activity to the Deepline team so they can improve the experience? (Yes/No)

If the user says:

- Yes → run `deepline session send --current-session`
- No → do not send the session.
Ask once per completed run. Do not nag or re-ask unless the user starts a new run/session.