Community Signal Digest

Monitor developer communities for brand mentions, competitive signals, and topic-relevant conversations, and produce a prioritized, actionable digest with engagement angles and auto-drafted responses.

Primary data source: search_external_content — queries DEV.to, Hacker News, YouTube, and Hashnode. No paid seats or accounts required (YouTube support depends on server-side API key configuration).

Optional enrichment: Common Room — adds Reddit, Stack Overflow, X, Slack, Discord, Discourse coverage plus contact data (lead scores, company, segments). If Common Room is connected, results are merged and deduplicated.

When to use this skill

  • "Run my daily community scan" or "community digest"
  • "What are developers saying about us?"
  • "Check community mentions" or "brand monitoring"
  • "Find engagement opportunities"
  • "Run the weekly community review"
  • Any mention of social listening, community signals, or brand mentions

Operating modes

Daily scan — default for non-Monday weekdays. Last 24 hours. Surfaces 3-8 actionable items. Zero friction: one command, immediate output.

Weekly review — default on Mondays or when user says "weekly." Last 7 days. 15-40 items with trend analysis and a "previously flagged" section for stale unacted items from prior runs.

If the user just says "community digest" without specifying, use daily on non-Monday weekdays, weekly on Mondays.

Step 0: Pre-flight check

Read REFERENCES.md from the plugin root and run the pre-flight check described there. Call list_marketing_references() to verify Tiger Den is reachable. If it fails or the tool is not found, STOP — do not continue. Follow the error handling in REFERENCES.md.

Once Tiger Den is confirmed, fetch reference docs:

get_marketing_context(slugs: ["product-marketing-context", "brand-voice-guide"])
  • product-marketing-context: ICP topic keywords, competitor names, positioning, engagement angles, proof points.
  • brand-voice-guide: response tone, phrasing guardrails, non-salesy engagement style.

Step 1: Detect available data sources

Probe for available integrations. Set boolean flags — do NOT stop unless all data sources are unavailable.

  1. External content search: Call search_external_content(keyword_presets: ["brand"], published_after: "<24h_ago>", max_results_per_platform: 1) as a connectivity test. If the tool exists and returns data (even empty results with platformStatus), set EXTERNAL_SEARCH_AVAILABLE = true. If the tool is not found or the call fails, set EXTERNAL_SEARCH_AVAILABLE = false. Do NOT stop.
  2. Common Room: Call commonroom_list_objects(objectType: "Provider"). If the tool exists and returns data, set CR_AVAILABLE = true and store the provider list. If the tool is not found or the call fails, set CR_AVAILABLE = false. Do NOT stop.
  3. Tiger Docs: Check if search_docs tool is available. Set TIGERDOCS_AVAILABLE = true/false. Used for response drafting enrichment only.
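
A minimal sketch of this probe sequence, assuming hypothetical call_tool and tool_exists helpers in place of the real tool runner (the point is the control flow: every probe is wrapped, and no single failure aborts the run):

from datetime import datetime, timedelta, timezone

def probe_sources(call_tool, tool_exists):
    # call_tool / tool_exists are hypothetical stand-ins for the agent's
    # tool plumbing; only the try/except-per-probe pattern matters here.
    flags = {"EXTERNAL_SEARCH_AVAILABLE": False, "CR_AVAILABLE": False}
    providers = []
    day_ago = (datetime.now(timezone.utc) - timedelta(hours=24)).isoformat()
    try:
        # Connectivity test: empty results with platformStatus still count.
        call_tool("search_external_content", keyword_presets=["brand"],
                  published_after=day_ago, max_results_per_platform=1)
        flags["EXTERNAL_SEARCH_AVAILABLE"] = True
    except Exception:
        pass  # tool missing or call failed; flag stays False, do NOT stop
    try:
        providers = call_tool("commonroom_list_objects", objectType="Provider")
        flags["CR_AVAILABLE"] = True
    except Exception:
        pass
    flags["TIGERDOCS_AVAILABLE"] = tool_exists("search_docs")
    return flags, providers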

Report to the user:

Data sources detected:
  External content search: [available/unavailable] (DEV.to, Hacker News, Hashnode, YouTube)
  Common Room:             [available/unavailable] ([connected platforms if available])
  Tiger Docs:              [available/unavailable]

Stop condition: If BOTH CR_AVAILABLE and EXTERNAL_SEARCH_AVAILABLE are false, STOP. Tell the user: "Neither external content search nor Common Room is available. Run /doctor to check your Tiger Den connection, or verify Common Room MCP is configured." If only one source is available, proceed with it and note the gap in the digest footer.

If CR_AVAILABLE, compare the provider list against expected platforms in references/commonroom-query-patterns.md. Flag any missing primary platforms.

Step 1.5: Pre-filter — query intel records

Read references/intel-records-integration.md for the full protocol.

Generate a source_run_id (UUID v4) and record source_run_at (current ISO timestamp). These identify this run for all records created below.

Determine the stale threshold based on operating mode:

  • Daily: 3 days
  • Weekly: 10 days

Compute scan_lookback_start from the operating mode: 24h ago for daily, 7 days ago for weekly.
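
As a sketch (stdlib Python only, return keys illustrative), the run identity, stale threshold, and lookback boundary reduce to:

import uuid
from datetime import datetime, timedelta, timezone

def run_setup(mode):  # mode is "daily" or "weekly"
    now = datetime.now(timezone.utc)
    return {
        "source_run_id": str(uuid.uuid4()),    # UUID v4 for this run
        "source_run_at": now.isoformat(),      # current ISO timestamp
        "stale_threshold": timedelta(days=3 if mode == "daily" else 10),
        "scan_lookback_start": (now - (timedelta(hours=24) if mode == "daily"
                                       else timedelta(days=7))).isoformat(),
    }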

Run two queries to build the known signals set:

Query 1 — recent records (dedup):

manage_intel_records(
  action: "list",
  source_skill: "community-signal-digest",
  record_type: "community_signal",
  created_after: "<scan_lookback_start>",
  limit: 100
)

Query 2 — stale records (resurface candidates):

manage_intel_records(
  action: "list",
  source_skill: "community-signal-digest",
  record_type: "community_signal",
  status: "new",
  created_after: "<60d_ago>",
  created_before: "<source_run_at minus stale_threshold>",
  limit: 100
)

If either query returns exactly 100 results, paginate (offset: 100, repeat until a page returns < 100).

Merge results into a known signals map keyed by content_hash. Query 2 results are pre-tagged stale_unacted.
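A sketch of the pagination loop and merge, assuming a hypothetical list_records wrapper over manage_intel_records(action: "list"):

def fetch_all(list_records, **filters):
    # Repeat with a growing offset until a page comes back short.
    records, offset = [], 0
    while True:
        page = list_records(limit=100, offset=offset, **filters)
        records.extend(page)
        if len(page) < 100:
            return records
        offset += 100

def build_known_signals(recent, stale):
    known = {rec["content_hash"]: rec for rec in recent}
    for rec in stale:
        rec["stale_unacted"] = True   # pre-tag resurface candidates
        known[rec["content_hash"]] = rec
    return known
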

If manage_intel_records is not available or both queries fail, set INTEL_RECORDS_AVAILABLE = false and continue without dedup. Report the failure in the digest footer. Do not stop.

Report to user (append to existing data sources status):

Intel records:          [available/unavailable] — [N] recent + [M] stale unacted

Step 2: Build keyword sets

search_external_content resolves keyword presets server-side — brand, competitors, and topics each expand to both keyword search terms (for HN/YouTube) and platform-native tag slugs (for DEV.to/Hashnode). The skill does not need to build or map these manually.

For Common Room queries, use the brand keyword list from references/commonroom-query-patterns.md directly. Common Room has its own keyword matching and is not affected by this step.

Custom keywords: If the user specifies additional search terms beyond the standard presets, pass them via the keywords parameter on search_external_content (routes to HN/YouTube only) and/or the tags parameter (routes to DEV.to/Hashnode only). Note: explicit keywords do NOT reach DEV.to or Hashnode — those platforms only support tag-based filtering. For valid tag slugs, check the platform directly (e.g., dev.to/tags or Hashnode's tag search) — search_external_content will silently ignore unrecognized tags.
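
For example, a topic run carrying user-supplied terms might look like the call below (placeholders only; verify tag slugs before passing them):

search_external_content(
  keyword_presets: ["topics"],
  keywords: ["<custom term (HN/YouTube only)>"],
  tags: ["<verified tag slug (DEV.to/Hashnode only)>"],
  published_after: "<published_after>",
  max_results_per_platform: 20
)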

Step 3: Gather inputs

For daily scans ("run my daily community scan" or similar), skip Q&A entirely and use all defaults.

Otherwise, gather through conversational Q&A:

  1. Time range — Default: 24h (daily) / 7d (weekly). Options: 24h, 3d, 7d, 14d, 30d.
  2. Mode — Default: both. Options: brand mentions only, topic signals only, both.
  3. Source filter — Default: all available. Options: external content only, Common Room only, both.
  4. Platform filter — Default: all platforms. Options: filter to specific platforms.
  5. Priority filter — Default: all. Options: respond only, engage only, monitor only.

Compute time boundaries for the selected range:

  • External content search: Pass published_after as ISO 8601 datetime — search_external_content handles platform-specific date format translation internally
  • Common Room: ISO durations (P1D, P3D, P7D, P14D, P30D)
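
A sketch of the boundary computation (stdlib Python; the range keys mirror the options in Step 3):

from datetime import datetime, timedelta, timezone

# ISO duration for Common Room, timedelta for the external search datetime.
RANGES = {"24h": ("P1D",  timedelta(hours=24)),
          "3d":  ("P3D",  timedelta(days=3)),
          "7d":  ("P7D",  timedelta(days=7)),
          "14d": ("P14D", timedelta(days=14)),
          "30d": ("P30D", timedelta(days=30))}

def time_boundaries(range_key):
    iso_duration, delta = RANGES[range_key]
    published_after = (datetime.now(timezone.utc) - delta).isoformat()
    return published_after, iso_duration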

Step 4: External content search

Skip this step if EXTERNAL_SEARCH_AVAILABLE is false.

Compute published_after as an ISO 8601 datetime based on the selected time range (e.g., 24h ago for daily, 7 days ago for weekly).

4a: Brand mentions

search_external_content(
  keyword_presets: ["brand"],
  published_after: "<published_after>",
  max_results_per_platform: 20
)

4b: Topic and competitor signals

search_external_content(
  keyword_presets: ["competitors", "topics"],
  published_after: "<published_after>",
  max_results_per_platform: 20
)

For weekly mode, increase max_results_per_platform to 25.

4c: Platform-specific runs (optional)

If the user specified a platform filter, pass the platforms parameter:

search_external_content(
  keyword_presets: ["brand"],
  published_after: "<published_after>",
  platforms: ["hackernews", "devto"],
  max_results_per_platform: 20
)

Valid platform values: devto, hackernews, hashnode, youtube.

Handling results

  • Results arrive pre-normalized: each item has title, url, author, date, snippet, tags, engagement_score, and platform
  • Cross-platform URL deduplication is handled server-side
  • Check platformStatus in the response — it reports which platforms were searched vs. skipped and why (e.g., no matching tags for DEV.to on a keyword-only query). Note any skipped platforms in the digest footer
  • Deduplicate across the 4a and 4b result sets by URL before merging
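
The 4a/4b merge reduces to a URL-keyed pass, sketched below with illustrative field names; keeping brand results first means overlapping items retain their brand-mention context:

def merge_result_sets(brand_items, topic_items):
    # Items are the pre-normalized result objects, so URL identity is a
    # plain string comparison.
    seen, merged = set(), []
    for item in brand_items + topic_items:   # brand results win on overlap
        if item["url"] not in seen:
            seen.add(item["url"])
            merged.append(item)
    return merged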

Error handling

If search_external_content returns an error or partial results, report which platforms were affected in the digest footer. Do not abort — proceed with whatever results were returned plus Common Room data (if available).

Timeouts: If a call times out, retry with fewer platforms (e.g., platforms: ["hackernews", "devto"]) or lower max_results_per_platform (e.g., 15 instead of 20/25). Do not skip the search entirely on a timeout — a narrower retry is better than no external content data.

Step 5: Common Room queries (optional)

Skip this step if CR_AVAILABLE is false or user selected "external content only".

Read references/commonroom-query-patterns.md for exact filter templates. Run three sub-queries:

  • 5a: Brand mentions — query with brand keyword filter
  • 5b: Topic signals — per-cluster queries scoped to external platforms (Reddit, SO, X, DEV.to)
  • 5c: Negative sentiment backstop — brand keywords + negative sentiment label (l_683607). Do NOT run an unscoped negative-sentiment query.

Deduplicate across all three sub-queries by activity id. Tag each result with source_type: "commonroom".

Step 6: Deduplicate and merge

Merge external content results (Step 4) and Common Room results (Step 5) into a single signal list.

Dedup rules:

  1. Cross-source (external vs. CR) — Match on source URL (normalize: strip trailing slashes, query params, www. prefix, convert to https://). If the same item appears in both, keep the CR version (richer metadata: contactId, sentiment labels) and tag as source_type: "both".
  2. Within external content — Already handled: server-side by search_external_content, and across 4a/4b result sets in Step 4.
  3. Within CR — Already handled in Step 5 via activity id.

If only one source ran (external content only or CR only), skip cross-source dedup — just pass results through.
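
A sketch of the normalization and cross-source preference (the url field and edge cases such as fragments and ports are simplified here):

from urllib.parse import urlsplit

def normalize_url(url):
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    path = parts.path.rstrip("/")
    return f"https://{host}{path}"        # drops query params and scheme variants

def cross_source_merge(external_items, cr_items):
    by_url = {normalize_url(i["url"]): i for i in external_items}
    merged = []
    for item in cr_items:
        key = normalize_url(item["url"])
        if key in by_url:                 # found in both: keep the CR version
            item["source_type"] = "both"  # richer metadata (contactId, sentiment)
            del by_url[key]
        merged.append(item)
    return merged + list(by_url.values())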

Intel records dedup

Skip this sub-step if INTEL_RECORDS_AVAILABLE is false.

After merging and deduplicating across data sources, check each signal against the known signals set from Step 1.5:

  1. Compute the content hash: sha256(normalize_url(source_url)). URLs from search_external_content are already normalized. For Common Room URLs, apply the normalization rules in references/intel-records-integration.md.
  2. Look up the hash in the known signals map.
  3. Apply the match logic from references/intel-records-integration.md:
    • Match + handled (reviewed/acted_on/archived): remove from signal list, increment skipped_count.
    • Match + new + recent (age < stale threshold): remove from signal list, increment skipped_count.
    • Match + stale (pre-tagged stale_unacted): move to the stale_items list. Do not enrich or classify — use the original record's data.
    • No match: keep in the signal list for enrichment and classification.

Signals removed here skip Steps 7-9 entirely.
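
A sketch of the hash and match step; known is the map built in Step 1.5, and URLs are assumed already normalized per the rules above:

import hashlib

def content_hash(url):
    # url must already be normalized (see the dedup rules in this step)
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

def split_by_known(signals, known):
    fresh, stale_items, skipped = [], [], 0
    for sig in signals:
        rec = known.get(content_hash(sig["url"]))
        if rec is None:
            fresh.append(sig)           # net-new: continue to Steps 7-9
        elif rec.get("stale_unacted"):
            stale_items.append(rec)     # resurface using the original record
        else:
            skipped += 1                # handled, or new but still recent
    return fresh, stale_items, skipped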

Step 7: Post-query noise filtering

Filter OUT from the merged signal list:

All sources:

  • Tiger Data's own account activity — LinkedIn posts from TigerData (sig_139887), tweets from @TigerDatabase, Tiger Data org articles on DEV.to, Tiger Data YouTube channel videos
  • Retweets/reshares of own content with no added commentary
  • Generic listicle mentions (TimescaleDB in a list without meaningful discussion)

Common Room sources:

  • GitHub repo stars (GitHubRepoStar) — already excluded in-query but verify

External content sources:

  • Items with engagement_score of 0 from topic/competitor queries (low-signal noise). Brand-mention items bypass this filter.
  • Hashnode posts from Tiger Data's publication (if one exists — check author/publication fields)

Report noise filter stats at the bottom of the digest: "Filtered out: X own-account posts, Y plain retweets, Z low-quality items"
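
A sketch of the keep/drop predicate with stat counting; every field name here (author, is_reshare, commentary, is_brand_mention) is an assumption about the merged signal shape, and own-account detection in practice needs per-platform handles:

OWN_ACCOUNTS = {"TigerData", "@TigerDatabase"}   # illustrative; extend per platform

def keep_signal(item, dropped):
    if item.get("author") in OWN_ACCOUNTS:
        dropped["own_account"] += 1
        return False
    if item.get("is_reshare") and not item.get("commentary"):
        dropped["plain_retweet"] += 1
        return False
    if (item.get("source_type") == "external"
            and not item.get("is_brand_mention")
            and item.get("engagement_score", 0) == 0):
        dropped["low_quality"] += 1      # topic/competitor noise only
        return False
    return True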

Step 8: Enrich signals

All enrichment is optional — skip any sub-step where the dependency is unavailable.

8a: Contact data (CR only) — For Tier 1 and high-opportunity Tier 2 items with a contactId, fetch contact data using the query in references/commonroom-query-patterns.md.

8b: Stack Overflow view counts (WebFetch) — For SO items (from CR), fetch the page via WebFetch and parse the view count. Skip silently on failure.

8c: External content author profiles (best-effort) — search_external_content returns the author name in its normalized output. For the top 3 high-scoring items where more context would help (e.g., employer or title for prioritization), fetch the author's platform profile page via WebFetch if available. Limit to 3 fetches. Skip entirely if WebFetch is unavailable.

Step 9: Classify and prioritize

Tier assignment — dual path

Path A: CR items (items with source_type: "commonroom" or "both") — use CR activity category labels from references/commonroom-query-patterns.md:

  • Product question / Bug / Complaints / Account support → Respond
  • Feature request → Respond or Monitor
  • Product appreciation → Monitor
  • Negative sentiment + brand mention → escalate to Respond
  • Topic signal (no brand mention) → Engage

Path B: External content items (items with source_type: "external", or ALL items when CR is unavailable) — use keyword heuristics on title/snippet:

  • Brand mention + question markers ("how to", "help", "issue", "error", "won't", "can't", "problem") → Respond
  • Brand mention + negative words ("terrible", "slow", "broken", "disappointed", "switching away") → Respond
  • Brand mention + comparison ("vs", "versus", "compared to", "benchmark", "alternative") → Engage
  • Brand mention + positive ("love", "switched to", "migrated to", "recommend", "amazing") → Monitor
  • Competitor mention + relevant topic → Engage (relevance_score >= 2 with competitor keyword)
  • Topic match only → Engage, low priority (relevance_score == 1)
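
A sketch of these heuristics with illustrative field names (is_brand_mention, is_competitor_mention, relevance_score are assumptions about the merged signal shape); the substring checks are deliberately crude, and word-boundary matching would avoid false hits on short markers like "vs":

QUESTION   = ("how to", "help", "issue", "error", "won't", "can't", "problem")
NEGATIVE   = ("terrible", "slow", "broken", "disappointed", "switching away")
COMPARISON = ("vs", "versus", "compared to", "benchmark", "alternative")
POSITIVE   = ("love", "switched to", "migrated to", "recommend", "amazing")

def classify_external(item):
    text = f"{item['title']} {item['snippet']}".lower()
    has = lambda markers: any(m in text for m in markers)
    if item.get("is_brand_mention"):
        if has(QUESTION) or has(NEGATIVE):
            return "respond"
        if has(COMPARISON):
            return "engage"
        return "monitor"   # positive or unmarked brand mention: awareness only
    if item.get("is_competitor_mention") and item.get("relevance_score", 0) >= 2:
        return "engage"
    if item.get("relevance_score", 0) == 1:
        return "engage-low"
    return None            # no tier: leave out of the tiered sections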

Priority scoring

Compute for all Tier 1 and Tier 2 items:

priority_score = (
  urgency_weight          # 3 = negative/complaint, 2 = product question, 1 = topic signal
  + thread_momentum       # normalize(engagement metrics) on 0-3 scale
  + author_fit            # CR: 3/2/1/0 by lead score percentile. External: 1 (unknown), 2 if profile enrichment found company/title
  + recency_bonus         # 2 = 2-12h ago, 1 = 12-24h, 0 = older
  + response_window       # 2 = unanswered/0 replies, 1 = few replies, 0 = well-answered
)

Take top 3 (daily) or top 5-10 (weekly) by composite score. Break ties by recency.
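
A sketch of the scoring and selection; the weights come from the block above, but the momentum divisor, the "few replies" cutoff, and the field names are illustrative choices:

def priority_score(item):
    urgency = {"negative": 3, "product_question": 2}.get(item.get("kind"), 1)
    momentum = min(3, item.get("engagement_score", 0) // 10)  # 0-3 scale
    author = item.get("author_fit", 1)     # 1 = unknown external author
    hours = item.get("age_hours", 999.0)
    recency = 2 if hours <= 12 else 1 if hours <= 24 else 0
    replies = item.get("reply_count", 0)
    window = 2 if replies == 0 else 1 if replies <= 3 else 0
    return urgency + momentum + author + recency + window

def top_priority(items, mode):
    n = 3 if mode == "daily" else 10       # weekly takes the top 5-10
    ranked = sorted(items,
                    key=lambda i: (priority_score(i), -i.get("age_hours", 999.0)),
                    reverse=True)          # newer items win score ties
    return ranked[:n]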

For weekly reviews, stale unacted items from intel records (Steps 6 and 9.5) replace the previous "older unengaged opportunities" heuristic. These are items first surfaced 10+ days ago that remain in new status — no one reviewed or acted on them.

Step 9.5: Post-validate and write intel records

Skip this step if INTEL_RECORDS_AVAILABLE is false.

Post-validate (cross-skill dedup)

For each signal that passed Steps 7-9 and was not in the known signals set from Step 1.5, run:

manage_intel_records(
  action: "find_duplicates",
  content_hash: "<hash>"
)

No record_type parameter — this searches across all record types.

  • Same-skill match found: apply skip/stale logic (same as Step 6).
  • Cross-skill match found (different source_skill): skip record creation but keep the item in the digest output. The user hasn't seen it in the community signal context.
  • No match: proceed to record creation.

Write net-new records

For each confirmed net-new signal, create a community_signal record:

manage_intel_records(
  action: "create",
  title: "<signal title — first 120 chars of title or snippet>",
  record_type: "community_signal",
  summary: "<tier + engagement angle + 1-2 sentence summary>",
  source_skill: "community-signal-digest",
  source_run_id: "<run uuid from Step 1.5>",
  source_run_at: "<run timestamp from Step 1.5>",
  canonical_source_url: "<normalized URL>",
  source_urls: ["<original URL>"],
  content_hash: "<hash>",
  tags: ["<tier: respond|engage|monitor>", "<platform: devto|hackernews|reddit|etc>"],
  observed_at: "<published date from the signal>"
)

Step 10: Compile and deliver the digest

Summary section (top of digest)

Metric                               Value
Time range                           Last 24 hours / Last 7 days
Sources                              [list sources used, e.g., "External content (DEV.to, HN, Hashnode, YouTube) + Common Room (Reddit, SO, X, Slack, Discord, Discourse)"]
Total signals (after noise filter)   [count]
Respond (action needed)              [count]
Engage (opportunities)               [count]
Monitor (awareness)                  [count]
Top platform                         [platform with most signals]
Sentiment breakdown                  [positive / neutral / negative counts]
Trending topics                      [top 3 topics by signal volume]
Noise filtered out                   [count]
New signals                          [count] (net-new + new to this skill)
Re-surfaced (unacted)                [count]
Skipped (already handled)            [count]

Priority Engagements (daily hero section)

Present the top 3-5 highest-priority items across Tier 1 and Tier 2:

[emoji] [rank]. [Platform] ([source badge]) — [one-line summary]
   Posted [time ago] · [engagement metrics]
   Author: [name, title, company] ([lead score if CR] or [platform profile if external])

   Why this matters: [one sentence]
   Engagement angle: [one sentence]

   → [Direct link to original post]
   → [View in Common Room]  ← only for CR / both items

Source badges: (external) for external content only, (Common Room) for CR only, (external + CR) for items found in both.

Use red circle for Tier 1 (Respond) and yellow circle for Tier 2 (Engage).

Links: External content items show source URL only. CR items show source URL + Common Room deep-link. Items found in both show both links.

X/Twitter items: Show reply count only — do not show like counts (unreliable from Common Room).

Quiet day: If fewer than 3 items qualify, show whatever qualifies and note "Light day." If 0 qualify: "Quiet day — [count] brand mention(s), [count] topic signal(s). No items requiring response or engagement."

Weekly variant: Expand to top 5-10.

Tier 1: Respond (needs action)

For each item: platform + source badge + links, author context (from CR enrichment or platform profile), snippet (2-3 sentences), sentiment, urgency, engagement angle. For SO: include view count. For X/Twitter: reply count only.

Tier 2: Engage (opportunity)

For each item: platform + source badge + links, topic match, snippet, engagement angle, engagement signals, opportunity score.

Tier 3: Monitor (awareness)

For each item: platform + source badge + link, snippet, sentiment, signal type (testimonial, trend, feature request, job market, other), suggested action (amplify, bookmark for case study, none).

Previously flagged — needs attention

Omit this section entirely if there are zero stale items.

Present stale unacted items from the stale_items list (populated in Steps 6 and 9.5):

⏳ Previously flagged — needs attention ([N] items)
• [Platform] — [title] (first surfaced [N] days ago) → [link]
• [Platform] — [title] (first surfaced [N] days ago) → [link]

These are signals first surfaced in a prior run that remain in new status past the stale threshold (3 days for daily, 10 days for weekly). They appear here as a nudge — someone should review or act on them, or they will keep re-appearing.

Digest footer

Report:

  • Which data sources were used and which were unavailable (and why)
  • Platform skip notes from platformStatus (e.g., YouTube skipped if no API key configured server-side)
  • Per-platform signal counts
  • Any platform failures from platformStatus
  • Noise filter stats: "Filtered out: X own-account posts, Y plain retweets, Z low-quality items"

Step 11: Auto-draft responses for Tier 1 items

After delivering the digest, automatically draft a suggested response for each Tier 1 (Respond) item. Do not wait for the user to ask.

For each Tier 1 item, produce a suggested response block immediately after the item detail:

💬 Suggested response:
[Draft response text — 2-5 sentences, adapted for the platform]

Pass to brand-voice-writer internally:

  • Original conversation snippet and platform name
  • Engagement angle from the digest
  • Author context (from CR enrichment or platform profile — note if unavailable)
  • Tiger Docs search results (run search_docs if TIGERDOCS_AVAILABLE — skip if not)
  • Brand voice guardrails (from Step 0)
  • Instruction: "Draft a community response — helpful, technically grounded, non-promotional. Lead with value. Adapt formatting for [platform]. Keep it concise — 2-5 sentences."

After the full digest:

"Want me to revise any of these drafts, or draft a response for a Tier 2 item? Tell me the item number."

Tier 2 and Tier 3: draft on request only.

Step 11.5: Write follow-up recommendation records

Skip this step if INTEL_RECORDS_AVAILABLE is false.

After Step 11 completes, create a follow_up_rec record for each Tier 1 drafted response:

manage_intel_records(
  action: "create",
  title: "Response draft: <parent signal title>",
  record_type: "follow_up_rec",
  summary: "<draft response text, first 500 chars>",
  source_skill: "community-signal-digest",
  source_run_id: "<run uuid from Step 1.5>",
  source_run_at: "<source_run_at from Step 1.5>",
  canonical_source_url: "<same URL as parent signal>",
  content_hash: "<sha256('follow_up:' + parent_content_hash)>",
  tags: ["follow_up", "tier1", "<platform>"],
  observed_at: "<same as parent signal>"
)

Step 12: Rate limit handling

External content search: Rate limits are handled server-side by search_external_content. If the tool returns partial results due to rate limiting, the platformStatus field will indicate which platforms were affected. Report this in the digest footer.

Common Room: If rate limits are encountered, catch gracefully and report how many signals were processed. Suggest retrying with a narrower time range or platform filter. Do not fail silently.

Dependencies

  • Required: Tiger Den (for search_external_content, product-marketing-context, and brand-voice-guide)
  • Optional enrichment: Common Room MCP (for Reddit, SO, X, Slack, Discord, Discourse + contact data)
  • Optional: WebFetch (for author profile enrichment on top-priority items), Tiger Docs MCP + brand-voice-writer skill (for response drafting)