Customer Research

Turn raw customer language and evidence into usable PMM signal. Accepts interview transcripts, sales call recordings, survey responses, support ticket themes, review-site feedback, community discussions, and win-loss notes — and produces structured output: pains, triggers, desired outcomes, vocabulary, objections, and themes. This is the upstream research layer that feeds brand-voice-writer, page-cro, content-reviewer, and competitive-intel-brief.

When to use this skill

  • "Analyze this transcript" or "pull insights from this interview"
  • "What are customers saying about [topic]?"
  • "Voice of customer analysis" or "VOC synthesis"
  • "Jobs-to-be-done research" or "JTBD analysis"
  • "Mine these reviews" or "what are G2/Reddit users saying?"
  • "Build a quote bank from these sources"
  • "Persona research" or "segment these findings"
  • "Why are customers churning?" or "what's driving conversions?"
  • "Synthesize across these [N] transcripts/surveys/sources"

What this skill does NOT do

  • Run live interviews or surveys (it analyzes the outputs)
  • Broad market sizing or TAM analysis
  • Live competitive news tracking (use competitive-intel-brief)
  • Write copy from findings (hand off to brand-voice-writer)

Step 0: Pre-flight check

Read REFERENCES.md from the plugin root and run the pre-flight check described there. Call list_marketing_references() to verify Tiger Den is reachable. If it fails or the tool is not found, STOP — do not continue. Follow the error handling in REFERENCES.md.

Once Tiger Den is confirmed, fetch reference docs:

get_marketing_context(slugs: ["product-marketing-context", "research-synthesis-rubric"])

From product-marketing-context, extract:

  • ICP profiles and persona definitions (used to align segments in Step 4)
  • Current positioning and proof points (used to assess messaging implications in Step 5)
  • Competitor list (used to flag competitive signals during extraction)

From research-synthesis-rubric, extract synthesis quality criteria and confidence labeling standards.

If research-synthesis-rubric is not found: Warn the user: "The research-synthesis-rubric doc isn't in Tiger Den yet — synthesis quality checks will be limited. Consider creating this doc for better-calibrated output." Continue with product-marketing-context alone. Use the local reference files for quality guidance.
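The pre-flight flow above can be sketched as plain logic. The tool names (`list_marketing_references`, `get_marketing_context`) come from this skill, but the wrapper function, the callable parameters, and the return shape are illustrative assumptions, not part of the skill contract:

```python
def preflight(list_refs, get_context):
    """Step 0 sketch: verify Tiger Den is reachable, then fetch reference docs.

    list_refs and get_context stand in for the MCP tools
    list_marketing_references() and get_marketing_context(); their real
    signatures may differ -- this is an illustrative sketch only.
    """
    try:
        list_refs()  # reachability check; any failure means STOP
    except Exception:
        return {"status": "stop"}  # follow the error handling in REFERENCES.md

    docs = get_context(["product-marketing-context", "research-synthesis-rubric"])
    warnings = []
    if "research-synthesis-rubric" not in docs:
        # Soft dependency: continue, but flag reduced quality checks
        warnings.append("research-synthesis-rubric missing: synthesis "
                        "quality checks will be limited")
    return {"status": "ok", "docs": docs, "warnings": warnings}
```

The key property is that a reachability failure short-circuits before any docs are fetched, matching the hard STOP in the text.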

Step 1: Accept and classify input

Determine the research mode based on what the user provides:

Mode A — Analyze: Single source (one transcript, one survey export, one set of reviews). Extract signal from the material as provided.

Mode B — Synthesize: Multiple sources (several transcripts, a transcript plus survey data, etc.). Extract signal from each, then cross-reference and merge into a unified synthesis.

Mode C — Mine: User provides review-site or community content (G2 reviews, Reddit threads, Stack Overflow discussions, Hacker News comments, community Slack exports). Default to analyzing only the content provided. If web_search is available, offer: "I can also search the web for additional reviews and community discussions about [topic]. Want me to expand the search?" Only search if the user opts in.

If the mode is ambiguous, ask. If the user provides a single source but mentions others exist, ask whether they want to add more sources before starting.

For each source, identify the type:

  • Interview transcript (1:1 customer or prospect conversation)
  • Sales call recording/transcript
  • Survey responses (open-ended or structured)
  • Support ticket themes or aggregated support data
  • Win/loss interview notes
  • Review-site feedback (G2, Capterra, TrustRadius, etc.)
  • Community discussion (Reddit, HN, Stack Overflow, Discord, Slack)
  • NPS or CSAT responses
  • Churn interview or cancellation feedback

Source type affects quality weighting in Step 3.
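The mode selection above can be sketched as a small decision function. The `COMMUNITY` set and the type strings are illustrative labels for the Step 1 source types, not names the skill prescribes — and the skill still asks you to confirm with the user when the mode is ambiguous:

```python
def classify_mode(source_types):
    """Sketch of the Step 1 mode decision from a list of source-type labels.

    Type strings are illustrative; the skill's actual rule is prose, and
    ambiguous cases should be resolved by asking the user.
    """
    COMMUNITY = {"review-site", "community-discussion"}
    if source_types and all(t in COMMUNITY for t in source_types):
        return "mine"        # Mode C: review-site / community content only
    if len(source_types) > 1:
        return "synthesize"  # Mode B: extract per source, then cross-reference
    return "analyze"         # Mode A: single source
```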

Step 2: Extract signal

For each source, extract the following six signal categories. Preserve exact customer language — do not paraphrase quotes.

Pains: What problems, frustrations, or blockers does the customer describe? Capture the specific language they use, not a sanitized summary. Note emotional intensity (mild annoyance vs. acute frustration vs. deal-breaker).

Triggers: What events or moments prompted the customer to seek a solution? These are the "before" moments — what changed in their world that made the status quo unacceptable.

Desired outcomes: What does success look like to the customer? Capture functional outcomes ("queries return in under 100ms"), emotional outcomes ("I stop worrying about the database at 3am"), and social outcomes ("my team sees me as the one who fixed our data stack").

Language and vocabulary: Exact phrases the customer uses to describe their problem, their current tools, and what they want. These are high-value for copywriting and messaging. Flag phrases that appear across multiple sources.

Alternatives considered: What other solutions did the customer evaluate, try, or use before? Include "do nothing" and manual workarounds. Note what they liked and disliked about each alternative.

Objections: What concerns, hesitations, or pushback did the customer raise about Tiger Data (or the category in general)? Separate pre-purchase objections from post-purchase friction.

For each extracted item, tag:

  • Source type (from Step 1 classification)
  • Source identifier (e.g., "Transcript #3", "G2 review, March 2026")
  • Direct quote where available (verbatim, in quotation marks)

Step 3: Cluster and weight

Read references/theme-clustering-guide.md from this skill's directory.

Group extracted signals into themes. A theme is a recurring pattern that appears across multiple data points — not a one-off mention.

For each theme:

  1. Name it with a short, descriptive label (e.g., "Scaling anxiety at ingestion spike" not "Performance concerns")
  2. Count frequency — how many independent sources mention this theme?
  3. Rate intensity — how emotionally charged or urgent is this theme when it appears? (High / Medium / Low)
  4. Assign confidence — based on evidence quality:
    • High: 5+ independent sources, multiple source types, recent (last 6 months)
    • Medium: 3-4 sources, or single source type, or mixed recency
    • Low: 1-2 sources, single source type, or older than 12 months

Read references/source-quality-guidelines.md for source reliability hierarchy and bias adjustments.

Themes with high frequency but low intensity may indicate background friction. Themes with low frequency but high intensity may indicate deal-breakers for a specific segment. Both matter — do not discard low-frequency themes if intensity is high.
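The confidence criteria above can be sketched as a function. The 183- and 365-day cutoffs approximate "last 6 months" and "12 months", and where the prose criteria overlap this sketch resolves them as high, then low, then medium — both choices are assumptions, not part of the rubric:

```python
def confidence(n_sources, n_source_types, age_days):
    """Sketch of the Step 3 confidence label.

    age_days is the age of the newest supporting source; 183/365-day
    cutoffs approximate "last 6 months" / "12 months" (assumption).
    """
    if n_sources >= 5 and n_source_types >= 2 and age_days <= 183:
        return "high"    # 5+ independent sources, multiple types, recent
    if n_sources <= 2 or age_days > 365:
        return "low"     # 1-2 sources, or evidence older than 12 months
    return "medium"      # 3-4 sources, single type, or mixed recency
```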

Step 4: Segment

If the source material contains signals from identifiably different audiences, split findings by segment. Use the ICP profiles from Step 0 to align segments to known Tiger Data personas where possible.

Segmentation dimensions to consider:

  • Persona or role (developer, DBA, data engineer, engineering manager)
  • Company stage or size (startup, growth, enterprise)
  • Use case (IoT/telemetry, financial analytics, observability, general time-series)
  • Lifecycle stage (evaluating, onboarding, scaling, considering churn)

If the data doesn't support meaningful segmentation (e.g., all sources are from the same persona), skip this step and note: "Source material is concentrated in [segment]. Broader segmentation requires additional research."

Do not invent segments that the data doesn't support.

Step 5: Synthesize and format output

Read references/research-output-templates.md from this skill's directory. Produce the deliverables using those templates.

Apply quality checks from references/source-quality-guidelines.md:

  • Every insight must trace back to at least one source
  • Quotes must be verbatim (flagged if paraphrased)
  • Confidence labels must match the criteria from Step 3
  • Channel biases must be acknowledged where relevant

If research-synthesis-rubric was loaded in Step 0, cross-check the synthesis against its quality criteria before presenting output.

Step 6: Identify gaps and recommend next steps

Review the synthesis for:

  • Under-evidenced themes: Themes with medium or low confidence that would change messaging strategy if confirmed. Flag these as requiring more research.
  • Missing perspectives: Personas or segments not represented in the source material. If ICP profiles from Step 0 identify key personas with no data, call this out.
  • Stale signals: Themes based primarily on sources older than 12 months that may no longer reflect current customer sentiment.
  • Contradictions: Themes where different sources give conflicting signals. Present both sides rather than resolving arbitrarily.

After presenting the synthesis, offer handoffs:

"Want me to take action on these findings? I can:

  • Draft messaging or copy based on these themes (via brand-voice-writer)
  • Recommend CRO improvements for pages targeting these personas (via page-cro)
  • Check if competitor patterns dominate the findings (via competitive-intel-brief)
  • Evaluate whether existing content addresses these themes (via content-reviewer)
  • Search Tiger Den for existing content on these topics"

Output format

Structure the final output in this order. See references/research-output-templates.md for detailed templates.

Research summary

  • Research mode (analyze / synthesize / mine)
  • Sources analyzed (count and types)
  • Date range of source material
  • Key finding in one sentence

Top themes

| Theme | Frequency | Intensity | Confidence | Signal type | Representative quote |
| ----- | --------- | --------- | ---------- | ----------- | -------------------- |
| ...   | ...       | ...       | ...        | ...         | ...                  |

List themes in descending order of (frequency × intensity). Cap at 10 themes — move additional themes to an appendix if needed.
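The ordering and cap rule can be sketched as a sort. The numeric weights for High/Medium/Low intensity are an assumption — the skill only specifies the labels, not a scale:

```python
# Intensity weights are an assumption; the skill only defines the labels.
INTENSITY = {"low": 1, "medium": 2, "high": 3}

def order_themes(themes, cap=10):
    """Rank themes by frequency x intensity (descending); the top `cap`
    go in the main table, the remainder in an appendix."""
    ranked = sorted(themes,
                    key=lambda t: t["frequency"] * INTENSITY[t["intensity"]],
                    reverse=True)
    return ranked[:cap], ranked[cap:]
```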

Quote bank

| Quote | Source type | Segment | Theme tags | Usability |
| ----- | ----------- | ------- | ---------- | --------- |
| ...   | ...         | ...     | ...        | ...       |

Usability categories: messaging, proof point, objection handling, case study lead, ad copy.

Persona and segment notes

Per segment (if segmentation was applied in Step 4):

  • Defining characteristics
  • Top pains and triggers specific to this segment
  • Language patterns unique to this segment
  • Recommended messaging angle

Messaging implications

| Theme | Messaging angle | Evidence strength | Current coverage |
| ----- | --------------- | ----------------- | ---------------- |
| ...   | ...             | ...               | ...              |

Current coverage: check against positioning from product-marketing-context. Mark as "aligned" (existing messaging addresses this), "gap" (no current messaging), or "misaligned" (current messaging contradicts customer language).

Gaps requiring more research

Bulleted list of under-evidenced areas, missing perspectives, and recommended follow-up methods (more interviews, survey on specific topic, review mining for a segment, etc.).

Dependencies

  • Required: Tiger Den MCP (for product-marketing-context and content search)
  • Soft dependency: research-synthesis-rubric (Tiger Den doc — skill functions without it but with reduced quality checks)
  • Optional: web_search and web_fetch (for mine mode web expansion)