Competitor Alternatives

Plan and draft competitor comparison and alternative pages that rank for competitive search terms, provide genuine value to evaluators, and position Tiger Data honestly. Covers four formats: singular alternative, alternatives roundup, direct comparison (Tiger Data vs X), and third-party comparison (X vs Y).

This is a content creation skill for public-facing comparison pages. For monitoring what competitors are doing, use competitive-intel-brief. For general SEO-led longform where the competitive frame is secondary, use seo-article-writer.

When to use this skill

  • User asks for a "[competitor] alternative" or "alternatives to [competitor]" page
  • User wants a "Tiger Data vs [competitor]" or "[competitor A] vs [competitor B]" page
  • User asks for competitive comparison content, comparison landing pages, or migration pages
  • User says "how do we compare to X?" and wants a publishable page (not an internal brief)
  • User mentions "competitor teardown" in the context of public-facing content

When NOT to use this skill

  • Monitoring competitor activity, news, or product launches — use competitive-intel-brief
  • Internal sales battlecards or deal-level competitive intel — not covered by this skill
  • Generic product pages without a competitive frame — use brand-voice-writer
  • SEO-led longform where the competitive comparison is one section, not the whole page — use seo-article-writer

Scope

Included: [competitor] alternative pages, [product] vs [competitor] pages, alternatives roundup pages, migration framing, comparison tables

Excluded: Internal-only battlecards, live competitor news scans, generic product pages without competitive positioning

Step 0: Pre-flight check

Read REFERENCES.md from the plugin root and run the pre-flight check described there. Call list_marketing_references() to verify Tiger Den is reachable. If it fails or the tool is not found, STOP — do not continue. Follow the error handling in REFERENCES.md.

Once Tiger Den is confirmed, fetch the primary reference docs:

get_marketing_context(slugs: ["product-marketing-context", "brand-voice-guide"])

Then attempt the competitive guardrails doc separately:

get_marketing_reference(slug: "competitive-messaging-guardrails")

If competitive-messaging-guardrails is not found, inform the user:

"The competitive-messaging-guardrails reference doc is not yet available in Tiger Den. I'll proceed using the competitive framing guidance in product-marketing-context and this skill's editorial neutrality checklist. If your team has specific guardrails for how to discuss competitors publicly, share them now and I'll apply them."

Then continue. This is a soft dependency — the skill is functional without it.

Extract from product-marketing-context:

  • Competitor list (primary and secondary)
  • Tiger Data differentiators and proof points
  • ICP context (who are the buyers evaluating alternatives?)
  • Terminology glossary (product names, feature names — this is the authority for entity naming)

Extract from brand-voice-guide:

  • Web page writing rules
  • Absolute rules (no em dashes, active voice, etc.)
  • AI slop self-check patterns

Tooling check: This skill benefits from web_search and web_fetch for competitor research (Step 3) and verification (Step 7). If web_search is unavailable, inform the user: "Web search isn't available — competitor research will rely on Tiger Den content and any info you provide. I'll skip web-based verification." Then continue.

Also read the three local reference files in this skill's references/ directory:

  • references/page-templates.md
  • references/comparison-table-patterns.md
  • references/editorial-neutrality-checklist.md

Step 1: Identify comparison type and target query

Gather through conversational Q&A, one question at a time:

1. Competitor(s): Which competitor(s) are we comparing against? Confirm against the competitor list in product-marketing-context. If the user names a competitor not in the list, note it and proceed.

2. Page format: Auto-detect from the user's phrasing:

| User says | Format |
| --- | --- |
| "[Competitor] alternative" or "alternative to [competitor]" | Format 1: Singular alternative |
| "[Competitor] alternatives" or "best alternatives to [competitor]" | Format 2: Alternatives roundup |
| "Tiger Data vs [competitor]" or "[competitor] vs Tiger Data" | Format 3: Direct comparison |
| "[Competitor A] vs [Competitor B]" (neither is Tiger Data) | Format 4: Third-party comparison |

If ambiguous, present the four options and ask the user to pick.

3. Target search query: Propose the primary keyword based on format and competitor. Confirm with the user.

4. Primary audience: Which ICP segment is most likely searching this? Map to personas from product-marketing-context.

Step 2: Verify comparison entity layer

This is a critical guardrail. Before proceeding, classify the comparison:

| Layer | Example |
| --- | --- |
| Company vs company | Tiger Data vs InfluxData |
| Product vs product | TimescaleDB vs InfluxDB |
| Category vs category | Time-series database vs general-purpose OLAP |

Rules:

If the comparison mixes layers (e.g., "Tiger Data vs InfluxDB" — company vs product), STOP and ask the user to clarify:

"This comparison mixes entity layers: 'Tiger Data' is a company and 'InfluxDB' is a product. Should this be 'TimescaleDB vs InfluxDB' (product vs product) or 'Tiger Data vs InfluxData' (company vs company)? Getting this right affects terminology, SEO targeting, and positioning throughout the page."

If the comparison compares a product to a category (e.g., "InfluxDB vs time-series databases"), ask if the intent is a "best [category] tools" roundup instead.

Once the entity layer is confirmed, use consistent terminology throughout the draft. Do not mix "Tiger Data" (the company) with "TimescaleDB" (the product) interchangeably. The glossary in product-marketing-context is the authority.
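The mixed-layer guardrail reduces to a lookup against the glossary. A minimal sketch, assuming a stand-in entity-to-layer mapping; in practice the glossary in product-marketing-context is the authority:

```python
# Stand-in glossary; the real source is product-marketing-context.
ENTITY_LAYER = {
    "Tiger Data": "company",
    "InfluxData": "company",
    "TimescaleDB": "product",
    "InfluxDB": "product",
    "Time-series database": "category",
}

def check_entity_layers(entity_a: str, entity_b: str) -> str:
    """Return 'ok' when both entities sit on the same layer, else a STOP message."""
    layer_a = ENTITY_LAYER.get(entity_a, "unknown")
    layer_b = ENTITY_LAYER.get(entity_b, "unknown")
    if "unknown" in (layer_a, layer_b):
        return "unknown entity: confirm against the glossary"
    if layer_a != layer_b:
        return (f"STOP: mixed layers ({entity_a} is a {layer_a}, "
                f"{entity_b} is a {layer_b}); ask the user to clarify")
    return "ok"
```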

Step 3: Research and gather differentiators

Gather raw material for the comparison from these sources, in priority order:

1. product-marketing-context — Extract Tiger Data's differentiators, proof points, and positioning against this specific competitor (if covered).

2. Tiger Den content search — Find existing Tiger Data content about this competitor:

search_content(query: "{competitor name}", limit: 10)
search_content(query: "{competitor name} comparison", limit: 10)
search_content(query: "{competitor name} alternative", limit: 10)

Deduplicate by URL. Flag existing comparison pages that overlap with this one — either update them or ensure the new page adds distinct value.
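The dedup step amounts to merging the three result lists by URL, first hit wins. This sketch assumes each search_content result is a dict with a "url" key, which is an assumption about the payload shape:

```python
def dedupe_by_url(*result_lists: list[dict]) -> list[dict]:
    """Merge search_content result lists, keeping the first hit per URL."""
    seen: set[str] = set()
    merged: list[dict] = []
    for results in result_lists:
        for item in results:
            url = item["url"].rstrip("/")  # treat trailing-slash variants as one page
            if url not in seen:
                seen.add(url)
                merged.append(item)
    return merged
```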

3. Tiger Docs search (if Tiger Docs MCP is available) — Find technical grounding for feature comparisons:

search_docs(source: "tiger", search_type: "keyword", query: "{feature being compared}")

Use this to verify Tiger Data feature claims (performance numbers, architecture details, API capabilities).

4. Web search (if available) — Research the competitor:

  • "{competitor}" features OR capabilities — what do they offer?
  • "{competitor}" pricing — current pricing model
  • "{competitor}" vs — existing comparison content in the market
  • "{competitor}" reviews G2 OR Capterra — common praise and complaint themes

Be selective with web_fetch — only fetch pages where search snippets leave claims ambiguous.

5. User-provided context — Ask: "Do you have specific differentiators, customer quotes from switchers, migration stories, or proof points you want included? Anything not in our docs?"

Compile into a working brief:

  • Tiger Data strengths for this comparison
  • Competitor strengths (be honest)
  • Tiger Data limitations to acknowledge
  • Competitor limitations
  • Audience fit: who should pick Tiger Data, who should pick the competitor
  • Migration angle (what transfers, what needs reconfiguration)
  • Proof points (customer quotes, benchmarks, case studies)

Present the working brief to the user for review before drafting.

Step 4: Select page format and draft structure

Load the corresponding template from references/page-templates.md. Present the proposed page structure to the user:

  • Proposed URL path
  • Proposed title tag and meta description
  • Section-by-section outline with what each section will cover
  • Which comparison table pattern(s) from references/comparison-table-patterns.md will be used
  • Target word count estimate

Get user approval before writing. Adjust the outline based on feedback.

Step 5: Draft the page

Write the full page content following:

  • The approved structure from Step 4
  • The template from references/page-templates.md
  • The table patterns from references/comparison-table-patterns.md
  • The editorial neutrality checklist from references/editorial-neutrality-checklist.md
  • Brand voice guide rules (active voice, no em dashes, WABL principle)
  • competitive-messaging-guardrails if loaded in Step 0

Content rules for all formats:

  1. Lead with the reader's problem, not Tiger Data's features. The reader is evaluating — respect their process.
  2. Acknowledge competitor strengths. Readers are comparing and will verify claims. Dishonest comparisons destroy credibility.
  3. Be specific about who each solution is best for. "Best for teams who need X" is more useful than "better."
  4. Include a migration section where applicable. Switching costs are a top objection.
  5. No external links to competitor websites. Mention competitors by name but keep all links pointing to Tiger Data properties (tigerdata.com, Tiger Docs) or neutral third-party sources.
  6. Include comparison tables using the patterns in references/comparison-table-patterns.md. Tables are scannable and favored by search engines for featured snippets.
  7. Date the comparison. Include a "Last updated: [date]" note so readers know the information is current.

Write the page to a markdown file in the working directory: {url-slug}.md

Step 6: Post-draft checks

Run these checks against the draft. Fix all issues found, then re-run the checks until the draft passes every one.

  1. Em dash check — Search for em dashes (—). Replace all instances. Zero tolerance.
  2. Editorial neutrality check — Walk through every dimension in references/editorial-neutrality-checklist.md. Flag and fix any violations.
  3. Entity layer consistency — Verify the draft uses the confirmed entity layer terminology consistently. No mixing company names with product names.
  4. Keyword placement — Primary target keyword appears in H1, first 100 words, and at least one H2.
  5. Link format — All links are full URLs starting with https://. No relative paths, no links to competitor websites.
  6. Words-to-avoid check — Check against the "words to avoid" section from product-marketing-context.
  7. AI slop self-check — Per brand-voice-guide: check for negative seesaws, forced triples, copula dodging, vocabulary clusters, Hallmark-card endings.
  8. Competitive guardrails check — If competitive-messaging-guardrails was loaded in Step 0, check the draft against those guardrails.
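Checks 1, 4, and 5 are mechanical enough to script. A sketch of that lint pass, assuming a plain markdown draft with `#`/`##` headings and inline `[text](url)` links:

```python
import re

def lint_draft(draft: str, keyword: str) -> list[str]:
    """Run the mechanical post-draft checks; an empty list means all passed."""
    issues = []
    if "\u2014" in draft:  # em dash check, zero tolerance
        issues.append(f"em dash found ({draft.count(chr(0x2014))} instances)")
    h1 = re.search(r"^# (.+)$", draft, re.MULTILINE)
    if not (h1 and keyword.lower() in h1.group(1).lower()):
        issues.append("primary keyword missing from H1")
    first_100 = " ".join(draft.split()[:100]).lower()
    if keyword.lower() not in first_100:
        issues.append("primary keyword missing from first 100 words")
    h2s = re.findall(r"^## (.+)$", draft, re.MULTILINE)
    if not any(keyword.lower() in h.lower() for h in h2s):
        issues.append("primary keyword missing from all H2s")
    for url in re.findall(r"\]\(([^)]+)\)", draft):
        if not url.startswith("https://"):
            issues.append(f"non-https or relative link: {url}")
    return issues
```

The neutrality, entity-layer, words-to-avoid, AI-slop, and guardrails checks remain judgment calls and stay manual.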

Step 7: Competitive verification

If web search is available: Verify competitor claims in the draft are current:

  • Are product features described accurately?
  • Is pricing current?
  • Has the competitor rebranded, deprecated, or launched anything relevant since the research in Step 3?

If issues are found, update the draft and report changes.

If web search is unavailable: Skip this step and add a note to the output:

"Competitive verification skipped — web search was unavailable. Verify competitor claims manually before publishing, particularly pricing and feature descriptions."

Step 8: Offer next steps

Present the completed draft and ask:

"The draft is ready. Want me to:

  • Run content-reviewer for a full quality rubric evaluation
  • Run de-slop for an additional AI pattern cleanup pass
  • Draft additional page formats for this competitor (e.g., also create the 'vs' page alongside the 'alternative' page)
  • Plan the full competitive page set across all competitors (priority matrix)
  • Hand off to seo-article-writer for deeper SEO optimization
  • Hand off to website-content-editor to publish on tigerdata.com
  • Hand off to page-mockup-builder to visualize the layout before design
  • Run competitive-intel-brief for current signals on this competitor"

Output format

The primary output is a markdown file containing:

  • Proposed URL path (as a comment at the top of the file)
  • Title tag and meta description
  • "Last updated" date
  • Full page copy organized by section
  • Comparison tables in markdown table format
  • CTAs with button copy and supporting text
  • Internal links to Tiger Data properties

Dependencies

  • Required: Tiger Den connector (for product-marketing-context, brand-voice-guide, and content search)
  • Soft dependency: competitive-messaging-guardrails Tiger Den reference (enhances guardrails but skill functions without it)
  • Optional: web_search and web_fetch (for competitor research and claim verification), Tiger Docs MCP (for technical feature grounding)