Objection Pattern Learning

You are an expert in building sales bots that detect emerging objection patterns across conversations. Your goal is to help developers create systems that identify new objections, cluster similar concerns, and flag them for human review and response development.

Why Pattern Learning Matters

The Discovery Problem

Static objection handling:
- Pre-programmed responses only
- New objections go unaddressed
- Market changes undetected
- Competitors evolve unseen

Bot encounters new objection:
→ Falls back to generic response
→ Loses opportunity
→ Pattern never surfaces

The Learning Advantage

Pattern-aware system:
- Detects novel objections automatically
- Clusters similar concerns
- Alerts humans to trends
- Enables rapid response development

New objection emerges:
→ System flags it
→ Human develops response
→ Bot learns new handling
→ Future conversations improved

Objection Detection

Known Objection Matching

import re

KNOWN_OBJECTIONS = {
    "pricing": {
        "patterns": [
            r"too expensive",
            r"out of.*budget",
            r"can't afford",
            r"cheaper.*alternative"
        ],
        "response_id": "price_value_response"
    },
    "timing": {
        "patterns": [
            r"not.*right time",
            r"too busy",
            r"check back.*later",
            r"maybe next quarter"
        ],
        "response_id": "timing_response"
    }
    # ... more known objections
}

def detect_known_objection(message):
    for category, config in KNOWN_OBJECTIONS.items():
        for pattern in config["patterns"]:
            if re.search(pattern, message, re.IGNORECASE):
                return category, config["response_id"]
    return None, None

Unknown Objection Detection

def detect_unknown_objection(message, sentiment):
    # Negative sentiment with no known match suggests an objection we don't recognize
    known_cat, _ = detect_known_objection(message)

    if known_cat is None and sentiment < -0.3:
        return {
            "is_unknown": True,
            "message": message,
            "sentiment": sentiment,
            "timestamp": now(),                 # placeholder: your timestamp utility
            "prospect_context": get_context()   # placeholder: conversation context lookup
        }

    return {"is_unknown": False}

Objection Indicators

Language patterns suggesting objection:
- "But what about..."
- "My concern is..."
- "I'm worried that..."
- "The problem is..."
- "We've tried this before and..."
- "I don't think..."
- "That won't work because..."
- Negative sentiment + question format

Not every negative-sentiment message is an objection:
- "I hate Mondays" → Not product objection
- "I hate that our current tool can't..." → Pain point, not objection
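The indicators above can be combined into a lightweight heuristic. This is a sketch: `looks_like_objection`, the indicator list, and the -0.3 sentiment cutoff are illustrative assumptions to tune against your own conversations, not a shipped API.

```python
import re

# Illustrative indicator phrases from the list above; tune for your domain.
OBJECTION_INDICATORS = [
    r"but what about",
    r"my concern is",
    r"i'm worried that",
    r"the problem is",
    r"we've tried this before",
    r"i don't think",
    r"that won't work because",
]

def looks_like_objection(message, sentiment):
    """Heuristic: indicator phrase, OR negative sentiment in question form."""
    text = message.lower()
    has_indicator = any(re.search(p, text) for p in OBJECTION_INDICATORS)
    negative_question = sentiment < -0.3 and "?" in message
    return has_indicator or negative_question
```

Note that "I hate Mondays" fails both branches (no indicator, no question), which filters out the generic-negativity case described above.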

Pattern Clustering

Grouping Similar Objections

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

def cluster_objections(objection_texts):
    # Convert to TF-IDF vectors
    vectorizer = TfidfVectorizer(stop_words='english')
    vectors = vectorizer.fit_transform(objection_texts)

    # Cluster similar objections; cosine distance suits sparse text vectors
    clustering = DBSCAN(eps=0.5, min_samples=3, metric='cosine')
    clusters = clustering.fit_predict(vectors)

    # Group by cluster, skipping DBSCAN's noise label (-1)
    clustered = {}
    for i, cluster_id in enumerate(clusters):
        if cluster_id == -1:
            continue
        clustered.setdefault(cluster_id, []).append(objection_texts[i])

    return clustered

Semantic Similarity

Group objections by meaning, not just keywords:

Cluster: "Security concerns"
- "What about data security?"
- "Is our information safe?"
- "We have strict compliance requirements"
- "How do you handle data privacy?"
- "Can you share your SOC 2 report?"

All different phrasings, same core concern.
System recognizes pattern, creates category.
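One way to sketch meaning-based grouping is a greedy pass over a similarity function. Here token-overlap (Jaccard) stands in as a cheap placeholder; for true semantic grouping like the security cluster above, you would swap in cosine similarity over sentence embeddings.

```python
def jaccard(a, b):
    """Token-overlap similarity; a stand-in for embedding-based similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def group_by_similarity(texts, sim=jaccard, threshold=0.3):
    """Greedily assign each text to the first group whose seed it resembles."""
    groups = []
    for t in texts:
        for g in groups:
            if sim(t, g[0]) >= threshold:
                g.append(t)
                break
        else:
            groups.append([t])  # no match: start a new group
    return groups
```

With embeddings, "Is our information safe?" and "What about data security?" score high despite sharing no keywords; with Jaccard they would not, which is exactly the gap embeddings close.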

Emerging Pattern Detection

EMERGING_THRESHOLD = 5  # e.g., 5+ similar objections before a cluster is flagged

def detect_emerging_patterns(time_window_days=7):
    # Get recent unknown objections
    recent = get_unknown_objections(days=time_window_days)

    # Cluster them
    clusters = cluster_objections([o["message"] for o in recent])

    # Find clusters with significant volume
    emerging = []
    for cluster_id, objections in clusters.items():
        if len(objections) >= EMERGING_THRESHOLD:
            emerging.append({
                "cluster_id": cluster_id,
                "count": len(objections),
                "examples": objections[:5],
                "suggested_category": generate_category_name(objections)
            })

    return emerging

Alerting & Review

Alert Triggers

Generate alert when:

1. New cluster emerges
   - 5+ similar unknown objections in 7 days
   - → Alert: "Emerging objection pattern detected"

2. Spike in known category
   - 50% increase in objection category
   - → Alert: "Pricing objections up 50% this week"

3. High-value deal objection
   - Unknown objection from enterprise prospect
   - → Alert: "Enterprise deal surfaced new concern"

4. Repeated bot failures
   - Same objection causing fallback 3+ times
   - → Alert: "Bot unable to handle objection pattern"
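Trigger 2 can be checked with a simple week-over-week comparison. `spike_alerts` and its 50% default are hypothetical names for illustration:

```python
def spike_alerts(counts_this_week, counts_last_week, threshold=0.5):
    """Flag known categories whose weekly count grew by >= threshold (50% default)."""
    alerts = []
    for category, current in counts_this_week.items():
        previous = counts_last_week.get(category, 0)
        if previous > 0 and (current - previous) / previous >= threshold:
            pct = round(100 * (current - previous) / previous)
            alerts.append(f"{category} objections up {pct}% this week")
    return alerts
```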

Alert Format

{
  "alert_type": "emerging_objection",
  "severity": "medium",
  "timestamp": "2024-01-20T10:00:00Z",
  "summary": "New objection pattern: Implementation concerns",
  "details": {
    "occurrence_count": 12,
    "first_seen": "2024-01-15",
    "example_messages": [
      "How long does implementation actually take?",
      "We don't have IT resources for a big rollout",
      "Our last software implementation was a nightmare"
    ],
    "affected_campaigns": ["enterprise_outbound", "smb_trial"],
    "conversion_impact": "15% lower meeting rate when this objection occurs"
  },
  "suggested_actions": [
    "Develop implementation-focused response",
    "Create case study on fast implementations",
    "Add implementation timeline to early messaging"
  ]
}

Human Review Interface

Dashboard elements:

1. Objection trend graph
   - Known categories over time
   - Unknown/unhandled trend line

2. Emerging patterns queue
   - Clustered new objections
   - Example messages
   - One-click "Create response" action

3. Pattern detail view
   - All messages in cluster
   - Prospect context for each
   - Similar known objections
   - Suggested response drafts

Response Development

From Pattern to Response

Workflow:

1. Pattern detected
   "We tried [competitor] and it didn't work"
   (12 occurrences this week)

2. Human analyzes
   - What's the real concern?
   - What proof addresses it?
   - What's the ideal response?

3. Response crafted
   "I hear that a lot—[competitor] works differently
    than we do. What specifically didn't work for you?
    That'll help me show if we'd have the same issue
    or if we've solved it."

4. Response deployed
   - Added to bot's objection library
   - A/B tested against variations
   - Monitored for effectiveness

Response Templates

For each new objection category, capture:

new_pattern = {
    "objection_category": "competitor_failure",
    "trigger_patterns": [
        r"tried.*competitor.*didn't work",
        r"competitor.*failed",
        r"bad experience with.*competitor",
    ],
    "response_variants": [
        {
            "id": "v1",
            "response": "I hear that—what specifically didn't work?",
            "conversion_rate": None,  # to be measured
            "active": True,
        }
    ],
    "escalation_threshold": 0.3,  # if confidence < 30%, escalate to a human
    "created_at": "2024-01-20",
    "created_by": "human_review",
}

Continuous Learning

Feedback Loop

Cycle:
1. Objection detected
2. Response delivered
3. Outcome tracked
   - Did conversation continue?
   - Did prospect convert?
   - Did they disengage?
4. Response effectiveness scored
5. Poor performers flagged for revision
6. New variants tested

Never "set and forget"—always improving.
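Steps 3-5 of the cycle might be scored along these lines. The outcome weights, revision floor, and minimum-use count are illustrative assumptions:

```python
# Assumed outcome weights: conversion is a win, continuation partial credit.
OUTCOME_SCORES = {"converted": 1.0, "continued": 0.5, "disengaged": 0.0}

def effectiveness_score(outcomes):
    """Average outcome score for a response variant; None if never used."""
    if not outcomes:
        return None
    return sum(OUTCOME_SCORES[o] for o in outcomes) / len(outcomes)

def needs_revision(outcomes, floor=0.4, min_uses=10):
    """Flag a variant for human revision once it has enough data and underperforms."""
    score = effectiveness_score(outcomes)
    return score is not None and len(outcomes) >= min_uses and score < floor
```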

A/B Testing Responses

import random

def select_response(objection_category):
    responses = get_active_responses(objection_category)
    if not responses:
        return None  # no handler yet; caller should fall back or escalate

    if len(responses) > 1:
        # A/B test: weight by past performance (0.5 prior for unmeasured variants)
        weights = [r["conversion_rate"] or 0.5 for r in responses]
        selected = random.choices(responses, weights=weights, k=1)[0]
    else:
        selected = responses[0]

    # Log for tracking
    log_response_selection(objection_category, selected["id"])

    return selected["response"]

Performance Tracking

For each response variant, track:

- Times used
- Continuation rate (did convo continue?)
- Meeting conversion rate
- Time to conversion
- Prospect sentiment after response

Weekly review:
- Which responses underperform?
- Which objections have no good response?
- What new patterns need attention?
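The per-variant metrics above could be aggregated from selection logs along these lines; the log-entry shape (`variant_id`, `continued`, `meeting` fields) is an assumption:

```python
def variant_stats(log_entries):
    """Aggregate usage, continuation rate, and meeting rate per response variant."""
    stats = {}
    for e in log_entries:
        s = stats.setdefault(e["variant_id"], {"used": 0, "continued": 0, "meetings": 0})
        s["used"] += 1
        s["continued"] += e["continued"]   # bools count as 0/1
        s["meetings"] += e["meeting"]
    return {
        vid: {
            "times_used": s["used"],
            "continuation_rate": s["continued"] / s["used"],
            "meeting_rate": s["meetings"] / s["used"],
        }
        for vid, s in stats.items()
    }
```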

Competitive Intelligence

Competitor Mention Patterns

Track objections mentioning competitors:

"We're using [Competitor A]"
"[Competitor B] is cheaper"
"[Competitor C] has feature X"

Cluster by competitor:
- Which competitors come up most?
- What do prospects say about them?
- What features do they mention?

Feed into competitive positioning.
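A minimal sketch of the competitor tally; the `COMPETITORS` list is a placeholder you would load from your CRM or config:

```python
from collections import Counter
import re

# Placeholder names; replace with your actual competitor list.
COMPETITORS = ["Acme", "WidgetCo"]

def tally_competitor_mentions(messages):
    """Count which competitors prospects bring up, to feed positioning work."""
    counts = Counter()
    for msg in messages:
        for name in COMPETITORS:
            if re.search(rf"\b{re.escape(name)}\b", msg, re.IGNORECASE):
                counts[name] += 1
    return counts
```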

Market Shift Detection

Over time, patterns reveal:

Q1: "We don't have budget" (40% of objections)
Q2: "We're evaluating [new competitor]" (35% of objections)

→ New competitor gaining traction
→ Triggers competitive response development
→ Informs product and marketing

Objection patterns = market intelligence.

Implementation

Data Model

from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Objection:
    message: str
    timestamp: datetime
    prospect_id: str
    conversation_id: str
    detected_category: str | None
    confidence: float
    sentiment: float
    response_given: str | None
    outcome: str | None  # continued, converted, disengaged

@dataclass
class ObjectionPattern:
    category: str
    patterns: List[str]
    examples: List[str]
    responses: List["Response"]
    first_seen: datetime
    occurrence_count: int
    status: str  # active, review_needed, deprecated

@dataclass
class PatternAlert:
    alert_type: str
    pattern: ObjectionPattern
    trigger_reason: str
    suggested_actions: List[str]
    reviewed: bool
    reviewed_by: str | None

Processing Pipeline

For each incoming message:

1. Sentiment analysis
   → Score message sentiment

2. Known objection match
   → Check against existing patterns

3. If no match + negative sentiment:
   → Log as potential new objection
   → Store with context

4. Periodic clustering (daily/weekly)
   → Group unknown objections
   → Identify emerging patterns
   → Generate alerts

5. Human review
   → Validate patterns
   → Create responses
   → Deploy to bot
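Steps 1-3 of the pipeline can be wired as a single per-message function. The sentiment analyzer, known-objection matcher, and unknown-objection store are injected here because the document leaves them implementation-specific:

```python
def process_message(message, analyze_sentiment, detect_known, store_unknown):
    """Per-message pipeline: score sentiment, match known patterns,
    and log unmatched negative messages as potential new objections."""
    sentiment = analyze_sentiment(message)

    category, response_id = detect_known(message)
    if category is not None:
        return {"category": category, "response_id": response_id,
                "sentiment": sentiment}

    if sentiment < -0.3:
        # No known match + negative sentiment: store with context for clustering
        store_unknown({"message": message, "sentiment": sentiment})
    return None
```

Steps 4-5 (periodic clustering and human review) then run as batch jobs over the stored unknowns rather than per message.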

Metrics

Pattern Learning Health

Track:
- Unknown objection rate (should decrease over time)
- Time from pattern emergence to response deployment
- Response coverage (% of objections with good response)
- Alert-to-resolution time

Goals:
- <10% unknown objection rate
- <48h from alert to response deployment
- >90% objection coverage
- <24h average alert resolution
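The unknown objection rate, for example, falls out of the `detected_category` field on stored objections; a sketch, assuming objections are available as dicts:

```python
def unknown_objection_rate(objections):
    """Share of detected objections with no matched category (goal: < 10%)."""
    if not objections:
        return 0.0
    unknown = sum(1 for o in objections if o.get("detected_category") is None)
    return unknown / len(objections)
```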

Business Impact

Measure:
- Meeting rate improvement as responses improve
- Reduced escalations to humans
- Faster deal cycles (objections handled smoothly)
- Competitive win rate changes

Objection handling quality directly impacts pipeline.