Competitive Intel Brief

Track what competitors are actually doing — product releases, pricing changes, messaging shifts, funding, hiring signals — and frame the findings against Tiger Data's current positioning. This is an external intelligence layer for the Product Marketing team.

This skill is distinct from weekly-intel-digest (internal deal context) and community-signal-digest (community conversations). Competitive Intel Brief answers: "What did our competitors do this week that we need to know about?"

When to use this skill

  • "Run the competitive brief" or "competitor update"
  • "What are our competitors doing?"
  • "What did [competitor] do this week/month?"
  • "Competitive scan" or "competitive landscape"
  • "Check on [competitor]"
  • Before a product launch, pricing change, or quarterly review

Step 0: Pre-flight check

Read REFERENCES.md from the plugin root and run the pre-flight check described there. Call list_marketing_references() to verify Tiger Den is reachable. If it fails or the tool is not found, STOP — do not continue. Follow the error handling in REFERENCES.md.

Once Tiger Den is confirmed, fetch reference docs:

get_marketing_context(slugs: ["product-marketing-context", "brand-voice-guide"])

From product-marketing-context, extract:

  • The current competitor list (primary and secondary)
  • Positioning and proof points (used to frame competitor activity)
  • ICP context (used to assess impact of competitor moves)

From brand-voice-guide, extract tone guidance for the analysis sections.

Tooling check: This skill requires web_search and web_fetch. If web_search is unavailable, inform the user and stop — the skill cannot function without it. If web_fetch fails on a specific URL during execution, log the failure and continue with other sources.

Step 1: Gather inputs

For quick invocation ("run the competitive brief" or "competitor update" with no parameters), use all defaults and skip Q&A.

Otherwise, gather through conversational Q&A:

  1. Competitors — Default: primary list from product-marketing-context. User can add ("also check Grafana Labs") or narrow ("just InfluxDB and ClickHouse this time").
  2. Time range — Default: last 7 days. Options: last 14 days, last 30 days (monthly deep dive).
  3. Depth — Default: standard (quick scan). Options: deep dive (adds blog analysis, benchmark search, content gap analysis).
  4. Internal context — Default: Tiger Den only. Options: include Slack search (requires Slack MCP).

| User input | Search window |
| --- | --- |
| Weekly brief (default) | Last 7 days |
| Bi-weekly | Last 14 days |
| Monthly deep dive | Last 30 days |

Step 1.5: Pre-filter — query intel records

Read references/intel-records-integration.md for the full protocol.

Generate a source_run_id (UUID v4) and record source_run_at (current ISO timestamp). These identify this run for all records created below.

Determine the stale threshold based on time range selected in Step 1:

  • Weekly (7 days): 10 days
  • Bi-weekly (14 days): 14 days
  • Monthly (30 days): 14 days

Compute scan_lookback_start from the time range: 7 days ago for weekly, 14 days ago for bi-weekly, 30 days ago for monthly.
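
A minimal sketch of the run-ID and window arithmetic, in Python. Names are illustrative, not part of the protocol; the authoritative spec is references/intel-records-integration.md. It also pre-computes the Query 2 bounds described below.

```python
import uuid
from datetime import datetime, timedelta, timezone

STALE_DAYS = {7: 10, 14: 14, 30: 14}  # scan range (days) -> stale threshold

def run_windows(range_days: int) -> dict:
    now = datetime.now(timezone.utc)
    lookback_start = now - timedelta(days=range_days)            # Query 1 created_after
    stale_cutoff = now - timedelta(days=STALE_DAYS[range_days])  # unclamped cutoff
    return {
        "source_run_id": str(uuid.uuid4()),
        "source_run_at": now.isoformat(),
        "scan_lookback_start": lookback_start.isoformat(),
        # Query 2 bounds: created_before is clamped to Query 1's start so the
        # two windows never overlap (this matters in monthly mode, see below).
        "q2_created_after": (now - timedelta(days=60)).isoformat(),
        "q2_created_before": min(stale_cutoff, lookback_start).isoformat(),
    }
```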

Run two queries to build the known signals set:

Query 1 — recent records (dedup):

manage_intel_records(
  action: "list",
  source_skill: "competitive-intel-brief",
  created_after: "<scan_lookback_start>",
  limit: 100
)

Query 2 — stale records (resurface candidates):

Compute created_before as source_run_at minus the stale threshold. For monthly mode where the stale threshold (14 days) is shorter than the scan window (30 days), clamp created_before to Query 1's created_after (30 days ago) to prevent overlap.

manage_intel_records(
  action: "list",
  source_skill: "competitive-intel-brief",
  status: "new",
  created_after: "<60d_ago>",
  created_before: "<clamped cutoff>",
  limit: 100
)

If either query returns exactly 100 results, paginate (offset: 100, repeat until a page returns < 100).

Merge results into a known signals map keyed by content_hash. Query 2 results are pre-tagged stale_unacted.
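
A sketch of the pagination and merge logic. Treating manage_intel_records as a Python callable that returns a list of record dicts is an assumption for illustration; the real tool is an MCP call.

```python
def list_all(**filters) -> list[dict]:
    # Drain paginated results: offset 0, 100, 200, ... until a short page.
    records, offset = [], 0
    while True:
        page = manage_intel_records(action="list", limit=100,
                                    offset=offset, **filters)
        records.extend(page)
        if len(page) < 100:
            return records
        offset += 100

def build_known_signals(q1_records: list[dict], q2_records: list[dict]) -> dict:
    # Key by content_hash; Query 2 results carry the stale_unacted pre-tag.
    known = {r["content_hash"]: r for r in q1_records}
    for r in q2_records:
        r["stale_unacted"] = True
        known.setdefault(r["content_hash"], r)
    return known
```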

If manage_intel_records is not available or both queries fail, set INTEL_RECORDS_AVAILABLE = false and continue without dedup. Note the failure in the brief footer. Do not stop.

Step 2: Fetch GitHub release feeds

For each competitor with an open-source repo, fetch the releases Atom feed using web_fetch. The competitor-to-repo mapping for known competitors:

| Competitor | Feed URL |
| --- | --- |
| InfluxDB | https://github.com/influxdata/influxdb/releases.atom |
| ClickHouse | https://github.com/ClickHouse/ClickHouse/releases.atom |
| QuestDB | https://github.com/questdb/questdb/releases.atom |
| CrateDB | https://github.com/crate/crate/releases.atom |
| DuckDB | https://github.com/duckdb/duckdb/releases.atom |

For competitors not in this table (added by the user at runtime), skip Atom feeds and rely on web search in Step 3.

web_fetch(url: "https://github.com/{owner}/{repo}/releases.atom")

Parse the Atom XML to extract releases within the selected time range. For each release, capture:

  • Version tag
  • Release date
  • Key changes from release notes (first 500 characters or the headline features)

If a feed fetch fails (rate limit, 404, timeout), note the failure in the brief and continue with other sources. Do not abort.
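
A minimal sketch of the feed parse, assuming feed_xml holds the text returned by web_fetch for a releases.atom URL. GitHub's feed uses the standard Atom namespace; each entry's content element carries the release notes as HTML.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

ATOM = "{http://www.w3.org/2005/Atom}"

def parse_releases(feed_xml: str, range_days: int) -> list[dict]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=range_days)
    releases = []
    for entry in ET.fromstring(feed_xml).iter(f"{ATOM}entry"):
        updated = datetime.fromisoformat(
            entry.findtext(f"{ATOM}updated").replace("Z", "+00:00"))
        if updated < cutoff:
            continue  # outside the selected time range
        notes = entry.findtext(f"{ATOM}content") or ""
        releases.append({
            "version": entry.findtext(f"{ATOM}title"),
            "date": updated.date().isoformat(),
            "notes": notes[:500],  # first 500 chars of release notes (HTML)
        })
    return releases
```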

Step 3: Web search per competitor

For each competitor, run targeted searches. This is a scan, not deep research — move quickly.

Standard depth (3 searches per competitor):

  1. "{competitor}" announcement OR launch — product/feature announcements
  2. "{competitor}" funding OR acquisition OR partnership — business moves
  3. "{competitor}" pricing OR packaging — pricing changes

Deep dive mode (add 5 more per competitor):

  4. "{competitor}" blog — content strategy signals
  5. "{competitor}" vs TimescaleDB — head-to-head comparison content
  6. "{competitor}" vs "Tiger Data" — brand-name comparisons
  7. "{competitor}" benchmark — performance claims
  8. "{competitor}" hiring engineering — team investment signals

Scope all searches to the selected time range.

For each relevant result, use web_fetch to pull the full article only if the search snippet is insufficient to classify the signal. Do not fetch every result — be selective.
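
For illustration, the search templates above as a small query builder. The helper name is hypothetical; the query strings are verbatim from this step.

```python
STANDARD = [
    '"{c}" announcement OR launch',
    '"{c}" funding OR acquisition OR partnership',
    '"{c}" pricing OR packaging',
]
DEEP_DIVE = [
    '"{c}" blog',
    '"{c}" vs TimescaleDB',
    '"{c}" vs "Tiger Data"',
    '"{c}" benchmark',
    '"{c}" hiring engineering',
]

def build_queries(competitor: str, deep_dive: bool = False) -> list[str]:
    templates = STANDARD + (DEEP_DIVE if deep_dive else [])
    return [t.format(c=competitor) for t in templates]
```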

Step 4: Web search — market context

Run broader searches not tied to a specific competitor:

  1. "time-series database" comparison — market landscape shifts
  2. "time-series database" market OR landscape — analyst coverage, new entrants

These surface market-level trends. Limit to 2 searches regardless of depth mode.

Step 5: Search Tiger Den for existing content

For each competitor, search Tiger Den to identify existing Tiger Data content that addresses them. Use an 18-month freshness window calculated from today's date — older content may reference deprecated features or outdated benchmarks.

search_content(query: "{competitor}", published_after: "{18 months before today}", limit: 10)
search_content(query: "{competitor} comparison", published_after: "{18 months before today}", limit: 10)

Run two searches per competitor: one broad (catches case studies, blog posts, general mentions) and one targeted at comparison content (catches vs-pages and benchmarks that may not rank for the competitor name alone). Deduplicate results by URL across queries.
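
A sketch of the two-query loop with cross-query URL dedup. Treating search_content as a Python callable that returns dicts with a url field is an assumption; the real tool is a Tiger Den MCP call.

```python
from datetime import date, timedelta

def den_coverage(competitor: str) -> list[dict]:
    cutoff = (date.today() - timedelta(days=548)).isoformat()  # ~18 months
    seen, results = set(), []
    for query in (competitor, f"{competitor} comparison"):
        for item in search_content(query=query, published_after=cutoff, limit=10):
            if item["url"] not in seen:  # deduplicate by URL across queries
                seen.add(item["url"])
                results.append(item)
    return results
```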

Relevance filter: Not every mention counts as coverage. Before marking a result as "Covered," evaluate whether the content materially addresses the competitor:

  • Covered — Content's primary purpose addresses this competitor (dedicated comparison page, benchmark case study, analysis of their product launch)
  • Gap — Content only mentions the competitor in passing (listed in a footnote, name-dropped once in a broader overview)

When in doubt, mark it as a gap. A false gap produces a recommendation the team can dismiss. A false "covered" hides a real need.

Step 6: Internal context (if Slack included)

Skip this step unless the user opted into Slack search AND the Slack MCP is connected.

Calculate the after: date from the selected time range. Run one search per competitor:

slack_search_public_and_private(
  query: "{competitor} after:{start_date_YYYY-MM-DD}",
  sort: "timestamp",
  sort_dir: "desc",
  limit: 10
)

Consider narrowing to high-signal channels (in:#feed-competitor-feedback, in:#competitive-intel) if broad search returns too much noise.

Surface the top 3-5 most relevant internal conversations per competitor. For deal-level competitor context (which deals are being lost and why), refer the user to weekly-intel-digest — this skill intentionally does not duplicate that data.

Step 7: Classify and prioritize

For each signal gathered across Steps 2-6, assign a priority level:

  • High — Direct competitive threat: new feature matching a Tiger Data differentiator, pricing undercut, major funding signaling aggressive expansion, benchmark claim against TimescaleDB/Tiger Data
  • Medium — Notable activity: new release with interesting features, messaging shift, partnership announcement, significant blog content
  • Low — Background signal: routine patch release, minor blog post, general market mention

High-priority items become Priority Alerts at the top of the brief.

Intel records dedup

Skip this sub-step if INTEL_RECORDS_AVAILABLE is false.

After classifying each signal, check it against the known signals set from Step 1.5:

  1. Normalize the source URL — this skill gets raw URLs from web_search/web_fetch/Atom feeds, so manual normalization is required:
    • Force https:// scheme
    • Remove www. prefix from hostname
    • Remove trailing slashes
    • Strip all query parameters (UTM tags, tracking params, session IDs)
    • For GitHub release URLs: strip .atom suffix and anchor fragments
  2. Compute the content hash: sha256(normalized_url). (Steps 1 and 2 are sketched in code after this list.)
  3. Look up the hash in the known signals map.
  4. Apply the match logic from references/intel-records-integration.md:
    • Match + handled (reviewed/acted_on/archived): remove from signal list, increment skipped_count.
    • Match + new + recent (age < stale threshold): remove from signal list, increment skipped_count.
    • Match + stale (pre-tagged stale_unacted): move to the stale_items list.
    • No match: keep in the signal list.
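
A minimal sketch of steps 1 and 2, following the normalization rules above:

```python
import hashlib
from urllib.parse import urlparse, urlunparse

def normalize_url(raw: str) -> str:
    p = urlparse(raw.strip())
    host = p.netloc.lower().removeprefix("www.")  # drop www. prefix
    path = p.path.rstrip("/")                     # drop trailing slashes
    if path.endswith(".atom"):                    # GitHub release feed URLs
        path = path[: -len(".atom")]
    # Force https; drop params, query string (UTM etc.), and fragments.
    return urlunparse(("https", host, path, "", "", ""))

def content_hash(raw_url: str) -> str:
    return hashlib.sha256(normalize_url(raw_url).encode()).hexdigest()
```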

Step 7.5: Post-validate and write intel records

Skip this step if INTEL_RECORDS_AVAILABLE is false.

Post-validate (cross-skill dedup)

For each signal remaining after Step 7 that was not in the known signals set from Step 1.5, run:

manage_intel_records(
  action: "find_duplicates",
  content_hash: "<hash>"
)

No record_type parameter — this searches across all record types. Branch on the result (a code sketch follows this list):

  • Same-skill match found: apply skip/stale logic (same as Step 7).
  • Cross-skill match found (different source_skill): skip record creation but keep the item in the brief output. The user hasn't seen it in the competitive intel context.
  • No match: proceed to record creation.
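
The three-way branch as a sketch. As before, manage_intel_records as a Python callable is an assumption; the record shape follows intel-records-integration.md.

```python
def post_validate(signal: dict) -> str:
    matches = manage_intel_records(action="find_duplicates",
                                   content_hash=signal["content_hash"])
    if not matches:
        return "create_record"        # net-new: write a competitor_signal
    if any(m["source_skill"] == "competitive-intel-brief" for m in matches):
        return "apply_step7_logic"    # skip, or route to stale_items
    return "keep_in_brief_no_record"  # another skill logged it; still show it
```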

Write net-new records

For each confirmed net-new signal, create a competitor_signal record:

manage_intel_records(
  action: "create",
  title: "<signal title — first 120 chars>",
  record_type: "competitor_signal",
  summary: "<priority level + positioning impact + 1-2 sentence summary>",
  source_skill: "competitive-intel-brief",
  source_run_id: "<run uuid from Step 1.5>",
  source_run_at: "<run timestamp from Step 1.5>",
  canonical_source_url: "<normalized URL>",
  source_urls: ["<original URL>"],
  content_hash: "<hash>",
  tags: ["<priority: high|medium|low>", "<competitor_name>", "<category: product|business|messaging>"],
  observed_at: "<publication or release date>"
)

Step 8: Compile the brief

Present the brief directly in conversation. Assemble in this order:

Summary

| Metric | Value |
| --- | --- |
| Period covered | [date range] |
| Competitors tracked | [count and names] |
| Total signals found | [count] |
| High-priority alerts | [count] |
| Content gaps identified | [count] |
| New signals | [count] (net-new + new to this skill) |
| Re-surfaced (unacted) | [count] |
| Skipped (already handled) | [count] |

Priority alerts

Items requiring PMM attention this week. Only include high-priority signals. For each:

[Competitor] — [what happened]

  • Why it matters: [impact on Tiger Data's positioning]
  • Recommended action: [update battlecard, create comparison content, brief sales, etc.]
  • Source: [link]

If there are no high-priority items: "No high-priority competitor moves detected this period."

Per-competitor sections

For each competitor tracked, organize findings into three categories:

[Competitor Name]

Product activity:

  • New releases (from Atom feeds) — version, date, key features
  • Feature announcements (from web search)
  • Technical direction — investment patterns based on release cadence and content

Business activity:

  • Funding, acquisitions, partnerships
  • Hiring signals (engineering hiring surges may signal new product investment)
  • Event presence (conferences, sponsorships)

Messaging and positioning:

  • Blog content themes — what they're publishing about
  • Positioning shifts — changes in how they describe themselves
  • Comparison content — any new "X vs Y" content they've published

Omit empty categories rather than showing "(None found)." If a competitor has no activity in the time window, say: "[Competitor]: No significant activity detected in the last [N] days."

Content gap analysis (deep dive mode only)

Cross-reference competitor moves against Tiger Den search results from Step 5:

  • Covered: [Competitor move] → [Existing Tiger Data content with link]
  • Gap: [Competitor move] → No existing content. Recommend: [content type to create]

Internal awareness (if Slack included)

Summary of internal conversations per competitor from Step 6. Note: for deal-level competitor context, refer to weekly-intel-digest.

Previously flagged — needs attention

Omit this section entirely if there are zero stale items.

Present stale unacted items from the stale_items list (populated in Steps 7 and 7.5):

⏳ Previously flagged — needs attention ([N] items)
• [Competitor] — [signal title] (first surfaced [N] days ago, still unacted) → [link]

These are competitor signals first surfaced in a prior run that remain in new status past the stale threshold (10 days for weekly, 14 days for bi-weekly and monthly). They appear here as a nudge — someone should review or act on them.

Step 9: Prompt for follow-up actions

After presenting the brief:

"Want me to help with any of these? I can:

  • Draft competitive positioning copy or a battlecard update (via brand-voice-writer)
  • Search Tiger Docs for technical details on a specific feature comparison
  • Run a deeper dive on any single competitor
  • Search Tiger Den for related content to repurpose"

Rate limit handling

Estimated tool calls per standard run: ~20-30 (3-5 Atom feeds + 9-12 competitor searches + 2 market-context searches + 6-10 Tiger Den searches + optional Slack). Deep dive roughly doubles this. If rate limits are encountered, report how many signals were processed and suggest retrying with fewer competitors or a narrower time range. Do not fail silently.

Dependencies

  • Required: Tiger Den (for product-marketing-context, brand-voice-guide, and content search), web_search, web_fetch
  • Optional: Slack MCP (internal competitor chatter), Tiger Docs MCP (technical comparison grounding)

Planned future integrations: Content Brief Standardizer (P1) and Deal Battlecard Generator (P2) are not yet built. Until those ship, content gap analysis output is used manually, and battlecard updates go through brand-voice-writer.
