playbook-sentiment-listening
Sentiment Listening Playbook
Use when
- Operationalises AI-powered social listening and sentiment analysis for a client — covering tool selection matched to EA budgets, NLP keyword setup across four categories, real-time dashboard specification, Net Sentiment Score reporting, and a translation framework that converts sentiment data into concrete marketing decisions. Invoke when a client wants to understand what is being said about their brand online, when brand health monitoring is required as a standing deliverable, when competitive intelligence needs a sentiment dimension, or when crisis prevention is a stated priority.
- Use this skill when it is the closest match to the requested deliverable or workflow.
Do not use when
- Do not use this skill for graphic design, video production, software development, or legal advice beyond the repository's stated scope.
- Do not use it when another skill in this repository is clearly more specific to the requested deliverable.
Workflow
- Collect the required inputs or source material before drafting, unless this skill explicitly generates the intake itself.
- Follow the section order and decision rules in this SKILL.md; do not skip mandatory steps or required fields.
- Review the draft against the quality criteria, then deliver the final output in markdown unless the skill specifies another format.
Anti-Patterns
- Do not invent client facts, performance data, budgets, or approvals that were not provided or clearly inferred from evidence.
- Do not skip required inputs, mandatory sections, or quality checks just to make the output shorter.
- Do not drift into out-of-scope work such as code implementation, design production, or unsupported legal conclusions.
Outputs
- A structured markdown document, plan, playbook, or strategy ready for client-facing or internal use.
References
- Use the inline instructions in this skill now. If a references/ directory is added later, treat its files as the deeper source material and keep this SKILL.md execution-focused.
Required Input
Collect the following before generating any deliverable:
- Client business name and industry (e.g., "Kampala Fresh Bakery — food and beverage")
- Country/city — defaults to Uganda/Kampala if not specified
- Primary goal — select one: brand health monitoring / competitor tracking / crisis prevention / content inspiration
- Available budget for tools — select one: free / USD 30–100 per month / enterprise (USD 300+)
- Languages to monitor — select one or more: English only / English + Luganda / English + Swahili / all three
- Competitors to track — up to three brand names with their social handles if known
- Crisis sensitivity — select one: high (any negative spike triggers alert) / medium (threshold: 3 negative mentions on the same topic within 6 hours) / low (weekly review only)
What Social Listening and Sentiment Analysis Are
Social listening is the practice of monitoring what people say about a brand, a competitor, an industry, or a topic across social platforms, forums, news sites, and review pages. Sentiment analysis extends that monitoring by applying Natural Language Processing (NLP) to classify each piece of content automatically — as Positive, Neutral, Negative, or Mixed — and to extract emotion categories including joy, anger, fear, surprise, and disgust.
The two practices are distinct but inseparable. Listening without sentiment analysis produces a volume count with no meaning. Sentiment analysis without listening has nothing to process. Together they answer three questions that raw platform analytics cannot: What do people actually feel about this brand? Is that feeling improving or deteriorating? What is driving the change?
AI is not optional in this context. The volume of mentions generated by any brand with meaningful market presence is too large for manual review. AI processes thousands of mentions per hour, clusters them by theme, and identifies trends before they become crises. Manual review catches what AI surfaces; it is not the primary mechanism.
Johnsen (2024) is explicit on this point: sentiment data has no value unless it drives a decision. The intelligence → decision → action loop must be made explicit at the outset. Every sentiment report generated by this skill must conclude with a named action, a named owner, and a deadline. Data that produces no action is wasted consultant time and client budget.
1. Tool Selection
Match the tool tier to the client's budget. Do not recommend tools that require payment infrastructure unavailable in Uganda (e.g., US-only credit card processing without international support).
Tool Reference Table
| Tool | Type | Sentiment capability | EA access | Cost (approx.) |
|---|---|---|---|---|
| Google Alerts | Web monitoring | None | Yes | Free |
| Meta Business Suite | Facebook/Instagram comments | Basic (positive/negative flag) | Yes | Free |
| Talkwalker Alerts | Social monitoring | Basic | Yes | Free tier |
| Mention | Brand monitoring + social | Basic sentiment scoring | Yes | USD 29+/month |
| MonkeyLearn | NLP API, custom sentiment models | Yes — trainable | Yes | USD 0–299/month |
| Hootsuite Insights | Social listening + sentiment | Yes — AI-powered | Yes | USD 99+/month |
| Brandwatch | Enterprise listening | Yes — advanced NLP | Yes | Enterprise pricing |
| Africa's Talking + custom NLP | SMS and WhatsApp text analysis | Via API integration | Yes (EA-native) | Pay-per-use |
Recommended Configurations by Budget
Starter (free): Google Alerts (web and news) + Meta Business Suite (Facebook and Instagram) + Talkwalker Alerts (social web) + manual weekly platform search. Sufficient for a small EA business with fewer than 500 monthly brand mentions. Pair with the weekly listening routine in Section 6.
Growth (USD 30–100/month): Add Mention or MonkeyLearn to the starter stack. Use Zapier (free tier) to pipe Mention alerts into a shared Slack channel or Google Sheet for team visibility. MonkeyLearn's API allows custom sentiment training — useful for Ugandan English, Luganda-English code-switching, and local complaint vocabulary.
Scale (enterprise): Brandwatch or Hootsuite Insights as the primary platform. These tools aggregate sentiment across all major social platforms automatically, provide share-of-voice reporting against named competitors, and generate dashboard exports suitable for client presentation. Budget separately for the subscription and for consultant time to configure and maintain keyword sets.
2. Keyword and Topic Setup
Build the keyword taxonomy before configuring any tool. A complete taxonomy ensures critical mentions are captured and irrelevant noise is filtered. Work through all four categories with the client at onboarding; this takes 20–30 minutes and prevents weeks of missed intelligence.
Category 1 — Brand Mentions
| Keyword type | Example | Client's version |
|---|---|---|
| Exact business name | "Kampala Fresh Bakery" | |
| Common misspelling 1 | "Kampala Fresh Bakary" | |
| Common misspelling 2 | "KFB Kampala" | |
| Product or service name | "sourdough bread Kampala" | |
| Founder name (if public-facing) | "Sarah Nakato bakery" | |
| Primary brand hashtag | #KampalaFreshBakery |
Category 2 — Competitor Mentions
| Keyword type | Example | Client's version |
|---|---|---|
| Competitor 1 brand name | "City Breads Uganda" | |
| Competitor 2 brand name | "Kampala Cakes" | |
| Competitor 3 brand name | "Bake House UG" | |
| Competitor product name | "City Breads croissant" | |
| Comparison phrasing | "vs City Breads" |
Monitor competitor sentiment alongside brand sentiment. A competitor negative spike is an intelligence signal, not background noise — see the decision table in Section 5.
Category 3 — Industry Terms
| Keyword type | Example | Client's version |
|---|---|---|
| Customer-language category term | "fresh bread Kampala" | |
| Discovery phrase | "best bakery in Kampala" | |
| Common customer question | "where to buy sourdough Kampala" | |
| Industry hashtag | #KampalaBakery |
Category 4 — Crisis Triggers
| Trigger type | Example keywords | Client's version |
|---|---|---|
| General complaint language | "fraud", "scam", "stolen", "disappointed", "terrible", "avoid" | |
| Service failure language | "no response", "ignored", "kept waiting", "not delivered" | |
| Mobile Money complaint language | "MoMo failed", "Airtel Money stuck", "transaction pending", "refund not received" | |
| EA-specific escalation signals | "report", "expose", "warning ugandans", "consumer protection" | |
| Product or safety complaint | "food poisoning", "expired", "broken", "fake" |
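The four categories above can be wired into a simple matcher that tags each incoming mention with every category it hits, so crisis triggers jump the review queue. A minimal sketch, assuming plain substring matching; the keywords below are the worked examples from the tables and must be replaced with the client's actual terms at onboarding.

```python
# Minimal taxonomy matcher. Keywords are the worked examples from the
# tables above; swap in the client's real terms (including Luganda and
# Swahili variants) at onboarding.

TAXONOMY = {
    "brand": ["kampala fresh bakery", "kampala fresh bakary", "kfb kampala",
              "#kampalafreshbakery"],
    "competitor": ["city breads", "kampala cakes", "bake house ug"],
    "industry": ["fresh bread kampala", "best bakery in kampala",
                 "#kampalabakery"],
    "crisis": ["fraud", "scam", "disappointed", "no response",
               "momo failed", "refund not received", "food poisoning"],
}

def categorise(mention: str) -> list[str]:
    """Return every taxonomy category whose keywords appear in the mention."""
    text = mention.lower()
    return [cat for cat, terms in TAXONOMY.items()
            if any(term in text for term in terms)]

def is_crisis(mention: str) -> bool:
    """A crisis-trigger hit should be escalated regardless of volume."""
    return "crisis" in categorise(mention)
```

Substring matching is deliberately crude: it errs toward over-capture, which is the right default for crisis triggers, and the irrelevant hits are filtered during the weekly review.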
EA-specific keyword considerations:
- Add Luganda equivalents of key brand and product terms. Urban Kampala customers code-switch between English and Luganda in the same post. Examples of terms to consider: "emmere" (food/meal), "ssente" (money), "omusawo" (doctor/health), "omugati" (bread). Work with a Luganda-speaking staff member or the client to identify the correct terms.
- Add Swahili equivalents if the client operates in or targets Kenya, Tanzania, or Rwanda (e.g., "pesa" for money, "chakula" for food).
- Include Mobile Money complaint vocabulary in Category 4 for any client whose customers pay via MTN Mobile Money, Airtel Money, or similar. Payment complaints travel fast on Facebook and X/Twitter in Uganda and are frequently high-priority.
- Monitor WhatsApp public groups manually (see Section 6). WhatsApp is end-to-end encrypted; no automated tool can access it. Identify 3–5 relevant public or semi-public groups (local industry groups, neighbourhood consumer groups, city-specific community groups) and assign a team member to review them weekly.
- If the client operates delivery or logistics services, add boda-boda and delivery complaint language: "delivery late", "boda disappeared", "rider no show", "wrong address delivered".
3. Sentiment Scoring and Reporting
Net Sentiment Score (NSS)
Apply the NSS formula to every weekly and monthly report:
NSS = (Positive mentions − Negative mentions) ÷ Total mentions × 100
Example: 80 positive, 15 negative, 5 neutral (100 total mentions) gives NSS = (80 − 15) ÷ 100 × 100 = +65.
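The formula and worked example translate directly into a few lines of Python:

```python
# Net Sentiment Score, as defined above:
# NSS = (positive - negative) / total mentions * 100

def net_sentiment_score(positive: int, negative: int, neutral: int) -> float:
    total = positive + negative + neutral
    if total == 0:
        raise ValueError("No mentions in period; NSS is undefined")
    return (positive - negative) / total * 100

# Worked example from the text: 80 positive, 15 negative, 5 neutral
print(net_sentiment_score(80, 15, 5))  # → 65.0
```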
Benchmarks (Johnsen, 2024):
| NSS range | Interpretation | Required action |
|---|---|---|
| Above +40 | Healthy — brand sentiment is strong | Maintain; surface positive themes for content |
| +20 to +40 | Attention required — monitor closely | Investigate negative themes; review community management |
| Below +20 | Crisis territory — immediate review required | Activate crisis review; brief client within 24 hours |
| Negative (below 0) | Active reputational threat | Escalate to playbook-crisis-communications immediately |
Do not present NSS in isolation. Always pair it with the total mention volume for the period and a note on any significant theme driving the score. A low-volume week with an NSS of −10 carries different weight from a high-volume week with the same score.
Share of Voice (SOV)
SOV = Client mentions ÷ Total category mentions (client + all monitored competitors) × 100
Example: Client has 120 mentions; competitor A has 90; competitor B has 60. Total = 270. SOV = 120 ÷ 270 × 100 = 44.4%.
Calculate SOV monthly. Use it to track whether the client is gaining or losing prominence relative to competitors over time. Do not calculate SOV from a single week — the sample is too small to be meaningful.
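The SOV formula can be sketched the same way, using the worked example above:

```python
# Share of Voice: client mentions as a percentage of all monitored
# category mentions (client + tracked competitors).

def share_of_voice(client_mentions: int, competitor_mentions: list[int]) -> float:
    total = client_mentions + sum(competitor_mentions)
    if total == 0:
        raise ValueError("No category mentions; SOV is undefined")
    return client_mentions / total * 100

# Worked example from the text: 120 client, competitors at 90 and 60
print(round(share_of_voice(120, [90, 60]), 1))  # → 44.4
```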
Weekly Reporting Template
Produce a one-page weekly summary using this structure:
- Total mentions this week: [number] (vs. [number] last week — [up/down X%])
- Net Sentiment Score: [+/− number] (vs. [number] last week)
- Share of Voice: [%] (vs. last month: [%]) — calculate monthly only; mark as N/A in weekly reports
- Top 3 themes this week: [theme 1] / [theme 2] / [theme 3]
- Alert item: [one specific issue requiring attention, or "None this week"]
- Recommended action: [named action, named owner, deadline]
Keep this summary to one page or screen. Its purpose is to make the intelligence visible and actionable, not to document every mention.
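The template above can be rendered mechanically from the week's metrics, which keeps the summary consistent from week to week. A sketch, assuming a plain dict of metrics; the field names here are illustrative placeholders, not a required schema, so match them to wherever the team actually stores weekly numbers (Google Sheet export, tool API, etc.).

```python
# Sketch: render the one-page weekly summary from a metrics dict.
# Field names are assumptions for illustration only.

def weekly_summary(m: dict) -> str:
    change = (m["mentions"] - m["mentions_prev"]) / m["mentions_prev"] * 100
    direction = "up" if change >= 0 else "down"
    lines = [
        f"Total mentions this week: {m['mentions']} "
        f"(vs. {m['mentions_prev']} last week, {direction} {abs(change):.0f}%)",
        f"Net Sentiment Score: {m['nss']:+} (vs. {m['nss_prev']:+} last week)",
        "Share of Voice: N/A (calculated monthly)",
        f"Top 3 themes this week: {' / '.join(m['themes'][:3])}",
        f"Alert item: {m.get('alert', 'None this week')}",
        f"Recommended action: {m['action']}; owner: {m['owner']}; "
        f"deadline: {m['deadline']}",
    ]
    return "\n".join(lines)
```

Note that the last line enforces the Johnsen (2024) rule mechanically: the summary cannot be produced without a named action, owner, and deadline.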
4. Real-Time Dashboard Specification
Specify the following dashboard elements in this priority order when configuring any sentiment tool at growth or scale tier. At the starter tier, replicate this structure manually in a Google Sheet or Looker Studio (free).
Priority 1 — Net Sentiment Score trend
Display NSS as a line chart over a rolling 7-day and 30-day window. Both views must be visible simultaneously. Colour-code the chart: green above +40, amber between +20 and +40, red below +20.
Priority 2 — Mention volume by platform
Bar chart or stacked area chart showing total mentions split by platform (Facebook, Instagram, X/Twitter, TikTok, Google Business Profile, news/web, other). Updated daily. This identifies which platform is driving volume changes.
Priority 3 — Top negative themes (auto-clustered)
A ranked list of the top 5 negative topic clusters from the current week. Tools such as Brandwatch and MonkeyLearn generate these automatically. At starter tier, produce this manually by reviewing all negative mentions and grouping them by subject.
Priority 4 — Crisis alert indicator
A visible status indicator (green/red flag or similar) that triggers when 3 or more negative mentions about the same topic appear within a 6-hour window. This is the default medium-sensitivity threshold. Adjust the threshold based on the client's stated crisis sensitivity:
- High sensitivity: 2 negative mentions on the same topic within 4 hours
- Medium sensitivity: 3 negative mentions on the same topic within 6 hours
- Low sensitivity: weekly NSS drop below +20
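The high- and medium-sensitivity thresholds are a rolling-window count, which is straightforward to implement even at starter tier over a Google Sheet export. A minimal sketch, assuming each negative mention arrives as a (timestamp, topic) pair:

```python
from datetime import datetime, timedelta

# Sketch of the crisis alert indicator: trigger when `threshold` or more
# negative mentions on the same topic land within a rolling time window.
# Defaults are the medium-sensitivity setting (3 mentions in 6 hours).

def crisis_alert(negative_mentions: list[tuple[datetime, str]],
                 threshold: int = 3,
                 window: timedelta = timedelta(hours=6)) -> set[str]:
    """Return the topics whose negative mentions breach the threshold."""
    by_topic: dict[str, list[datetime]] = {}
    for ts, topic in negative_mentions:
        by_topic.setdefault(topic, []).append(ts)

    triggered = set()
    for topic, stamps in by_topic.items():
        stamps.sort()
        # Slide over the sorted timestamps: if any run of `threshold`
        # consecutive mentions fits inside `window`, the topic triggers.
        for i in range(len(stamps) - threshold + 1):
            if stamps[i + threshold - 1] - stamps[i] <= window:
                triggered.add(topic)
                break
    return triggered
```

For a high-sensitivity client, call it with `threshold=2, window=timedelta(hours=4)`; low-sensitivity clients skip this check and rely on the weekly NSS review instead.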
Priority 5 — Share of Voice vs. competitors
A pie chart or grouped bar chart showing the client's SOV against up to two named competitors. Updated monthly. Display this on the dashboard as a "this month vs. last month" comparison.
Priority 6 — Top positive themes (for content inspiration)
A ranked list of the top 3 positive topic clusters. Positive themes surfaced here feed directly into the content calendar (see 11-content-calendar). A positive theme that appears consistently for three or more weeks warrants a dedicated content pillar review (10-content-pillars).
5. Translating Sentiment Into Decisions
Sentiment data has no value until it drives a decision (Johnsen, 2024). Apply this decision table every week. Each signal maps to a specific action with a named cross-reference.
| Sentiment signal | Threshold for action | Required action |
|---|---|---|
| Spike in negative mentions about a product or service | NSS drops 10+ points in one week | Identify the top negative theme; draft a client brief; activate playbook-crisis-communications if the theme is public-facing and spreading |
| Positive theme emerging organically | Same theme appears in 5+ positive mentions in one week | Develop content around the theme within 48 hours; add to 11-content-calendar immediately |
| Competitor negative spike | Competitor NSS drops 15+ points in one week | Review competitor's negative themes; develop positioning content that highlights the client's strength in that area; feed into 09-campaign-strategy |
| NSS falls below +20 | Single observation sufficient | Review the past 7 days of published content and community management responses; identify whether the client's own posts or responses are generating negative reactions |
| Recurring complaint topic | Same complaint theme appears in 3+ separate mentions across 2+ weeks | Escalate to the client's operations or product team with a written brief; do not treat a recurring operational complaint as a social media problem |
| Positive UGC identified | Any organic customer content that meets quality and brand standards | Request permission to reshare; activate playbook-ugc-strategy for the curation and republishing workflow |
| NSS below 0 (net negative) | Single observation sufficient | Treat as an active reputational threat; brief the client immediately; escalate to playbook-crisis-communications without delay |
Customise the action column for the client's specific industry. A food and beverage client treats a "food poisoning" mention differently from a logistics client seeing "delivery fraud". During onboarding, identify the two or three negative topics that would be most damaging for this specific business and set the crisis alert threshold accordingly.
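The numeric rows of the decision table can be evaluated mechanically each week, leaving only the theme-level judgment calls for the consultant. A sketch under the thresholds stated in the table; treat it as a weekly checklist generator, not a replacement for reading the mentions.

```python
# Sketch: evaluate the week's numeric sentiment signals against the
# decision table above and return the required actions. Thresholds are
# the table's defaults; customise per client at onboarding.

def weekly_decisions(nss: float, nss_prev: float,
                     competitor_nss_drop: float,
                     positive_theme_counts: dict[str, int]) -> list[str]:
    actions = []
    if nss < 0:
        actions.append("Active reputational threat: brief the client and "
                       "escalate to playbook-crisis-communications immediately")
    elif nss < 20:
        actions.append("Review the past 7 days of published content and "
                       "community management responses")
    if nss_prev - nss >= 10:
        actions.append("Identify the top negative theme; draft a client "
                       "brief; consider playbook-crisis-communications")
    if competitor_nss_drop >= 15:
        actions.append("Review competitor negative themes; develop "
                       "positioning content (09-campaign-strategy)")
    for theme, count in positive_theme_counts.items():
        if count >= 5:
            actions.append(f"Develop content around '{theme}' within 48 "
                           "hours; add to 11-content-calendar")
    return actions
```

The recurring-complaint and UGC rows are intentionally omitted: they require reading the actual mentions and cannot be reduced to a numeric threshold.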
6. Weekly Listening Routine
Embed listening into a fixed weekly schedule. Unscheduled monitoring does not happen consistently. Assign a named person to each day's task.
Monday — Weekend review (15 minutes)
Pull all mentions from Friday 5pm to Monday 9am. Note any volume spikes over the weekend. Update the dashboard with the current NSS. Flag any crisis alert items to the client before 10am Monday. Weekends in Uganda are high-activity periods for consumer social media — do not skip the Monday review.
Wednesday — Mid-week NSS check (10 minutes)
Check the rolling 7-day NSS. If the score has dropped 5+ points since Monday, identify the driving theme. Review any new negative mentions and confirm that previous community management responses have been given. Flag any emerging negative theme clusters to the client.
Friday — Weekly summary (20 minutes)
Produce the one-page weekly summary using the template in Section 3. Calculate the week's NSS and compare to last week. Identify the top 3 themes. Confirm the alert item and recommended action. Share the summary with the client or present it at the weekly team meeting. File a copy in the client's monthly reporting folder for use in the monthly report (meta-reporting).
Monthly close (45 minutes — end of each calendar month)
Calculate the month's SOV. Produce a sentiment trend chart for the month (NSS by week). Extract the top 5 positive and top 5 negative themes for the month. Identify any recurring complaint topic that has appeared across three or more weeks — escalate operationally. Share the monthly summary with the client and file it for the quarterly review (deck-quarterly-review).
7. EA-Specific Considerations
Multilingual monitoring is not optional. Ugandan customers regularly mix English and Luganda in the same social media post. A complaint that reads "that place is really bad — banakola!" will not be caught by an English-only keyword set. At onboarding, build Luganda and Swahili keyword variants for all Category 1 (brand) and Category 4 (crisis trigger) terms. Use a Luganda-speaking staff member or the client to verify the correct terms — do not use machine translation for this step.
WhatsApp is unmonitorable — plan around this explicitly. WhatsApp is the dominant communication channel in Uganda and is end-to-end encrypted. No external tool can monitor it. Do not imply otherwise to the client. What can be done: brief all customer-facing staff to note recurring WhatsApp complaint themes monthly and report them to the social media manager. Create a simple monthly staff input form (Google Sheet or WhatsApp Group poll) asking: "What were the top 3 complaints or questions you received via WhatsApp this month?" Incorporate the responses into the monthly sentiment summary as "WhatsApp intelligence (staff-reported)."
Facebook Groups carry significant brand conversation. A material proportion of EA consumer and community discussion happens in private or semi-public Facebook Groups rather than on public pages. Identify 3–5 relevant groups at onboarding (local industry groups, neighbourhood groups, city-specific consumer groups such as "Kampala Foodies" or "Kampala Business Network"). Assign a team member to review them manually each week. This content does not appear in any automated tool search.
Informal news outlets amplify negative stories rapidly.
In Uganda, outlets such as Sqoop (sqoop.co.ug), Nile Post (nilepost.co.ug), and Chimp Reports (chimpreports.com) can amplify a negative story within hours and reach audiences that dwarf the brand's own social following. Add their domains to Google Alerts. If a client story appears on any of these outlets, treat it as a Level 2 crisis minimum and activate playbook-crisis-communications.
Africa's Talking integration for SMS and WhatsApp Business.
For clients with high SMS or WhatsApp Business API volumes, Africa's Talking (africastalking.com) provides an EA-native API that can pipe message data into a custom NLP sentiment model via MonkeyLearn or a similar service. This is a growth-tier or scale-tier option; document the integration in the client's tools stack (meta-tools-stack-evaluation) and cost it explicitly before recommending.
Cross-References
- playbook-crisis-communications — Activate when NSS drops below 0 or when the crisis alert indicator triggers. The sentiment listening dashboard is the early warning system; the crisis playbook is the response protocol.
- playbook-ugc-strategy — Positive UGC surfaced through listening feeds directly into the UGC collection and republishing workflow.
- meta-social-listening — The foundational listening programme covering keyword taxonomy, free tool setup, and listening log. This skill adds AI sentiment scoring, NSS reporting, and the decision framework on top of that foundation. Use both skills together; do not treat them as alternatives.
Quality Criteria
Output meets the standard if it:
- Recommends tools matched to the client's stated budget and confirms EA accessibility — no tools that are unavailable or require payment infrastructure absent in Uganda
- Sets up all four keyword categories (brand, competitor, industry, crisis triggers) with the client's actual terms populated — not left as a blank template
- Applies the NSS formula correctly and includes all three benchmark thresholds (+40, +20, 0) with named required actions for each
- Specifies the dashboard with named metrics, named thresholds, and named chart types — not a vague list of "things to track"
- Provides a sentiment-to-action decision table customised to the client's specific industry, with at least two industry-specific signals identified
- Addresses EA context explicitly: Luganda and/or Swahili keyword variants are built in, the WhatsApp limitation is named with a practical workaround, and Facebook Group monitoring is assigned to a named person
- Delivers a weekly routine as a named schedule (Monday / Wednesday / Friday / monthly) with time estimates and task descriptions — not a vague checklist
- Concludes every report template with a named action, a named owner, and a deadline — in line with Johnsen's (2024) intelligence → decision → action principle
References
- Johnsen, M. (2024) AI in Digital Marketing. Mercury Learning and Information.
- Ltifi, M. (ed.) (2025) Advances in Digital Marketing in the Era of AI. CRC Press.
- Chaffey, D. (2024) Digital Marketing: Strategy, Implementation and Practice. Pearson.
- Bodnar, K. and Cohen, J. (2012) The B2B Social Media Book. John Wiley and Sons.