# MEDDPICC Post-Call CRM Automation

Analyze a sales call transcript against the MEDDPICC framework, score each dimension, auto-fill the CRM (Attio), and create follow-up tasks — so the rep never has to manually log call notes again.
## MEDDPICC Framework Reference
Each dimension is scored 0–4:
| Score | Meaning |
|---|---|
| 0 | Not addressed at all |
| 1 | Barely addressed — vague or passing mention |
| 2 | Somewhat addressed — surface-level discussion |
| 3 | Well addressed — clear specifics uncovered |
| 4 | Thoroughly addressed — deep, actionable intel captured |
### The 8 Dimensions
| Letter | Dimension | What to extract |
|---|---|---|
| M | Metrics | Quantifiable outcomes the prospect wants to improve (revenue, time saved, cost reduction, headcount efficiency). Look for specific numbers, KPIs, or targets. |
| E | Economic Buyer | The person with final budget authority. Look for mentions of who approves spend, signs contracts, or has veto power. May differ from the person on the call. |
| D | Decision Criteria | Factors that will drive the purchase decision — technical requirements, integration needs, ROI thresholds, compliance, ease of use. |
| D | Decision Process | The formal steps the prospect follows to evaluate and approve a purchase — timeline, stakeholders involved, evaluation stages, legal/procurement review. |
| P | Paper Process | Internal paperwork and approval workflows — procurement rules, legal review, security questionnaires, vendor onboarding, contract redlining. |
| I | Identify Pain | The core problems or challenges the prospect faces. Look for emotional language, frustration signals, business impact of the status quo. |
| C | Champion | An internal advocate for your solution. Someone who has influence, access to the economic buyer, and a personal win tied to your success. |
| C | Competition | Competitors mentioned, alternative solutions being evaluated, or build-vs-buy considerations. Also "do nothing" as a competitor. |
### Overall Score Calculation
Overall MEDDPICC Score = (sum of 8 dimension scores / 32) × 100, rounded to nearest integer.
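As a sanity check, the formula can be sketched in Python (a minimal sketch — the per-dimension scoring itself is done by the model, not by code):

```python
def overall_meddpicc_score(dimension_scores: dict) -> int:
    """Combine the 8 dimension scores (0-4 each) into a 0-100 overall score."""
    if len(dimension_scores) != 8:
        raise ValueError("expected exactly 8 MEDDPICC dimensions")
    if not all(0 <= s <= 4 for s in dimension_scores.values()):
        raise ValueError("each dimension score must be 0-4")
    # (sum of scores / 32) x 100, rounded to the nearest integer
    return round(sum(dimension_scores.values()) / 32 * 100)
```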
## Workflow

### Step 0: Gather Inputs
Ask the user for:
- Which call? — Company name, contact name, or meeting date. If the user provides a Granola meeting link or ID, use it directly.
- Sales stage — What stage is this deal at? (Discovery, Demo, POC, Negotiation, etc.) This provides context for how to weight the MEDDPICC gaps — early-stage calls naturally score lower on Paper Process, for instance.
- Any context the user wants to add — Things they noticed but may not be in the transcript (e.g., "the CTO seemed skeptical but didn't say it outright").
If the user already provided some of this context in their message, don't re-ask. Extract what you can and only ask about what's missing.
### Step 1: Gather Deal Context (Multi-Source)
Attempt data sources in priority order. Each source that returns data adds to the picture — do NOT stop after the first hit. Merge everything before scoring.
#### 1a. Granola (primary — meeting transcripts)

- If a meeting ID or link is provided: call `get_meeting_transcript` with the meeting ID.
  - If transcript access fails (paid tier required), fall back to `query_granola_meetings` with specific questions about the meeting.
- If a company/contact name is provided: call `query_granola_meetings` with "meeting with {name}" to find the most recent match.
- If no match: call `list_meetings` and scan for likely candidates by date or attendee.
Once you have the meeting ID, gather full context by calling `query_granola_meetings` with targeted queries:
- "What metrics or KPIs were discussed in meeting {id}?"
- "Who is the decision maker or budget owner mentioned in meeting {id}?"
- "What competitors or alternatives were mentioned in meeting {id}?"
- "What are the next steps from meeting {id}?"
- "What pain points or challenges did the prospect describe in meeting {id}?"
- "What is the budget or pricing discussed in meeting {id}?"
- "What is the decision process or timeline discussed in meeting {id}?"
If `get_meeting_transcript` succeeds, read the full transcript yourself — it's more reliable than querying for fragments.

Important: `list_meetings` only returns the last 30 days. For older deals, Granola may return nothing even though meetings happened months ago. Note this limitation when reporting gaps.
#### 1b. Attio Notes & Comments (fallback #1)

If Granola returns no meetings, or to supplement Granola data:

- Call `semantic-search-notes` with "{company name} deal pain points metrics decision criteria" to find relevant notes.
- For each note found, call `get-note-body` to read the full content.
- Call `list-comments` with `parent_object: "deals"` and `parent_record_id: {deal_id}` to check for inline deal comments.
#### 1c. Gmail Threads (fallback #2)

If both Granola and Attio notes are empty:

- Call `gmail_search_messages` with the company name or contact email to find relevant threads.
- Call `gmail_read_thread` on the most relevant threads (limit to the 3-5 most recent).
- Extract any MEDDPICC-relevant signals: pricing discussions, objections, next steps, competitor mentions.
#### 1d. Insufficient Data Protocol

If ALL sources return empty, do NOT skip the deal. Instead:

- Set all dimension scores to 0.
- Write field values as: `0/4 — No data available from Granola, Attio notes, or Gmail. Requires qualification call.`
- Flag the deal prominently in the results summary as NEEDS RE-QUALIFICATION.
- Suggest 3 specific discovery questions to ask on the next call, one each for Pain, Metrics, and Decision Criteria.
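The source-priority logic in 1a-1d can be sketched as a loop over fetchers. The fetcher callables below are hypothetical stand-ins for the Granola, Attio, and Gmail MCP calls — this only illustrates the "merge everything, don't stop at the first hit" rule:

```python
def gather_deal_context(company: str, sources: list) -> dict:
    """Try every source in priority order and merge all non-empty results.

    `sources` is a list of (name, fetch) pairs, where fetch(company)
    returns a list of findings (empty when the source has nothing).
    """
    evidence = {}
    for name, fetch in sources:
        findings = fetch(company)
        if findings:  # keep going even after a hit — each source adds context
            evidence[name] = findings
    # Insufficient Data Protocol: no source returned anything.
    return {"needs_requalification": not evidence, "evidence": evidence}
```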
### Step 2: Find the Company and Deal in Attio

In parallel:

- Call `search-records` with `object = "companies"` and the company name as the query.
- Call `search-records` with `object = "deals"` and the company/deal name as the query.

If the deal doesn't exist yet:

- Use `upsert-record` to ensure the company exists (match on `domains`).
- Use `create-record` to create the deal linked to the company.
- Set the deal owner to the current user (from `whoami`).

Note the `record_id` for the deal — you'll need it for notes and tasks.

Also call `list-attribute-definitions` with `object = "deals"` and `query = "meddpicc"` to check whether MEDDPICC custom fields have been created. If they exist, you'll populate them in Step 4.
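The custom-field check can be sketched as a set intersection, assuming `list-attribute-definitions` returns objects carrying an `api_slug` key (the response shape here is an assumption):

```python
# The suggested slugs from the Attio Field Setup Guide below.
MEDDPICC_SLUGS = {"meddpicc_score", "meddpicc_metrics",
                  "meddpicc_decision_criteria", "meddpicc_competition",
                  "meddpicc_budget_total"}

def meddpicc_fields_present(attribute_definitions: list) -> set:
    """Return which of the suggested MEDDPICC slugs already exist on deals."""
    present = {attr["api_slug"] for attr in attribute_definitions}
    return MEDDPICC_SLUGS & present
```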
### Step 3: Analyze the Transcript — MEDDPICC Scoring
Read through the entire transcript (or Granola query results) and evaluate each dimension.
For each of the 8 MEDDPICC components, produce:
- Score (0–4)
- Evidence — Direct quotes or specific references from the transcript that justify the score.
- What was uncovered — A 1-2 sentence summary of what the rep learned about this dimension.
- Gap / recommendation — What's still missing and what the rep should do next to fill the gap.
#### Automatic Extraction Targets
Beyond scoring, actively extract and structure these specific data points:
Competitor mentions:
- Named competitors (companies, products)
- Alternative approaches ("we're also looking at building in-house")
- Incumbent solutions ("we currently use X")
- "Do nothing" signals ("we might just keep doing it manually")
Budget discussions:
- Specific dollar amounts or ranges mentioned
- Budget cycle timing ("our fiscal year starts in January")
- Budget ownership ("marketing owns this budget" vs "IT")
- Pricing reactions (positive, negative, sticker shock)
- Comparison anchors ("we pay $X for our current tool")
Decision criteria:
- Must-have vs nice-to-have requirements
- Technical requirements (integrations, security, compliance)
- Business requirements (ROI, payback period)
- Political requirements (stakeholder alignment, board approval)
Champion signals:
- Who on the call showed enthusiasm or advocacy?
- Who volunteered to do internal selling?
- Who offered to make introductions to other stakeholders?
- Who shared internal context that wasn't asked for?
Next steps mentioned:
- Explicit commitments ("I'll send you the API docs")
- Implicit next steps ("we should probably loop in our CTO")
- Timeline markers ("we need to decide by Q3")
- Follow-up meeting requests
### Step 4: Update Attio (Cumulative Merge)
MEDDPICC is a living qualification — each call adds to the picture. Never blindly overwrite existing scores. Always read existing data first and merge upward.
#### 4-pre. Read Existing MEDDPICC Data

Before writing anything, call `get-records-by-ids` with the deal record ID and read the current values of `meddpicc_score`, `meddpicc_metrics`, `meddpicc_decision_criteria`, `meddpicc_competition`, and `meddpicc_budget_total`.
#### 4a. Merge Rules
For each MEDDPICC dimension:
- Score goes UP, never down — if the existing score is 3 and the new call only reveals a 2, keep 3. A previous call already uncovered deeper intel. Only increase if the new call adds genuinely new evidence.
- Text fields APPEND, don't replace — prepend the new call's findings with a date header, then keep the prior content below. Format:
  `[2026-04-06] 3/4 — New finding from this call.`
  `[2026-03-27] 2/4 — Previous finding from earlier call.`
- Budget updates — only overwrite if the new amount is explicitly discussed and differs. A missing budget mention on a follow-up call does NOT zero out a previously captured number.
- Overall score — recalculate from the merged (highest-per-dimension) scores, not from the latest call alone.
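The per-dimension merge rules above can be sketched as follows (a minimal sketch; the date-header format follows this document's conventions):

```python
from datetime import date

def merge_dimension(existing_score: int, new_score: int,
                    existing_text: str, new_text: str,
                    call_date: date):
    """Scores only ratchet upward; text fields prepend the new
    date-stamped finding above the prior history."""
    merged_score = max(existing_score, new_score)  # never decrease
    entry = f"[{call_date.isoformat()}] {new_score}/4 — {new_text}"
    merged_text = f"{entry}\n{existing_text}" if existing_text else entry
    return merged_score, merged_text
```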
#### 4b. Update the Deal Record

Update these existing deal fields based on the merged analysis:

| Field | When to update |
|---|---|
| `value` | If a specific deal value, budget, or pricing was discussed and differs from the current value |
| `deal_confidence` | Recalculate 1-5 from the overall MEDDPICC score: 0-20 → 1, 21-40 → 2, 41-60 → 3, 61-80 → 4, 81-100 → 5 |
| `stage` | Only if the call clearly signals a stage change (e.g., they agreed to a POC) — confirm with the user before changing |
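The `deal_confidence` banding can be sketched as:

```python
def deal_confidence(meddpicc_score: int) -> int:
    """Map the 0-100 overall MEDDPICC score onto a 1-5 confidence band."""
    if not 0 <= meddpicc_score <= 100:
        raise ValueError("score must be 0-100")
    # (inclusive upper bound, confidence) pairs, checked in order
    for upper_bound, confidence in [(20, 1), (40, 2), (60, 3), (80, 4), (100, 5)]:
        if meddpicc_score <= upper_bound:
            return confidence
```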
If MEDDPICC custom fields exist on the deals object (checked in Step 2), also update:

- `meddpicc_score` — overall score (0-100), calculated from the merged dimension scores
- `meddpicc_metrics` — cumulative score + summary (date-stamped entries)
- `meddpicc_decision_criteria` — cumulative key criteria extracted
- `meddpicc_competition` — cumulative competitors and positioning
- `meddpicc_budget_total` — latest confirmed budget amount
#### 4c. Create a MEDDPICC Analysis Note

Use `create-note` on the deal record with this structure:
# MEDDPICC Call Analysis — {Company Name}
**Date:** {call date}
**Attendees:** {names from transcript}
**Sales Stage:** {current stage}
**Overall MEDDPICC Score:** {score}/100
---
## Scorecard
| Dimension | Score | Summary |
|-----------|-------|---------|
| M — Metrics | {0-4} | {one-line summary} |
| E — Economic Buyer | {0-4} | {one-line summary} |
| D — Decision Criteria | {0-4} | {one-line summary} |
| D — Decision Process | {0-4} | {one-line summary} |
| P — Paper Process | {0-4} | {one-line summary} |
| I — Identified Pain | {0-4} | {one-line summary} |
| C — Champion | {0-4} | {one-line summary} |
| C — Competition | {0-4} | {one-line summary} |
---
## Key Findings
### Competitor Intelligence
{Structured competitor mentions with context}
### Budget & Pricing
{Budget discussions, reactions, anchoring points}
### Decision Criteria & Process
{What drives the decision, who's involved, timeline}
### Pain Points
{Core challenges with evidence/quotes}
### Champion Assessment
{Who is the champion, strength of advocacy, risks}
---
## Critical Gaps
{Top 2-3 MEDDPICC dimensions that scored lowest, with specific recommendations for what to ask or do next}
---
## Next Steps
{Numbered list of action items with owners and deadlines where discussed}
#### 4d. Create Follow-Up Tasks

For each concrete next step identified, create an Attio task using `create-task`:

- Content: a clear, actionable description (e.g., "Send API documentation to {contact} at {company}")
- Deadline: if a date was mentioned, use it. If "next week" or similar, calculate the date. If no date, set the deadline 3 business days from now.
- Linked record: link to the deal record (`linked_record_object: "deals"`, `linked_record_id: {deal_id}`)
- Assignee: the current user's `workspace_member_id` (from `whoami`)
Always create at least these tasks:
- One task per explicit commitment made on the call
- A "Fill MEDDPICC gaps" task for any dimension scored 0-1, describing what to ask/do
- A "Send follow-up email" task if no follow-up was discussed
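The default-deadline rule ("3 business days from now") can be sketched as:

```python
from datetime import date, timedelta

def default_deadline(today: date, business_days: int = 3) -> date:
    """Advance by N business days, skipping Saturdays and Sundays."""
    current = today
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current
```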
#### 4e. Update the Sales List Entry (if applicable)

If the company has an entry in the "Sales" list (`api_slug: sales`), update it:

- `notes` — append a one-line summary: "MEDDPICC {date}: Score {X}/100. Key gaps: {gaps}. Next: {action}."
- `stage` — only if confirmed by the user
- `estimated_contract_value` — if budget/pricing was discussed
- `close_confidence` — based on the MEDDPICC score (same mapping as `deal_confidence`)
### Step 5: Present Results to User
Show the user a concise summary:
- MEDDPICC Scorecard — the 8 dimensions with scores and plain-text status indicators (0-1 = gap, 2 = partial, 3-4 = solid)
- Overall Score with context ("65/100 — typical for a Discovery call, but you're missing Economic Buyer entirely")
- Top 3 gaps with specific coaching tips (what to ask on the next call)
- What was updated in Attio — List the deal fields changed, note created, tasks created
- Competitor landscape — Quick summary of competitive positioning
- Recommended next actions — Prioritized by impact on closing
Ask if they want to:
- Edit any of the scores or extracted data before finalizing
- Create additional tasks
- Draft a follow-up email (defer to the `meeting-followup` skill)
- Schedule a next meeting based on the gaps identified
## Attio Field Setup Guide
For the skill to populate MEDDPICC-specific fields, create these custom attributes on the Deals object in Attio (Settings → Objects → Deals → Add attribute):
| Attribute Name | API Slug (suggested) | Type | Description |
|---|---|---|---|
| MEDDPICC Score | `meddpicc_score` | Number | Overall MEDDPICC qualification score (0-100), calculated from 8 dimensions scored 0-4 each. Below 40 = early/weak qualification, 40-60 = developing, 60-80 = strong, 80+ = fully qualified. |
| Metrics | `meddpicc_metrics` | Text | Quantifiable business outcomes the prospect wants to improve — revenue targets, time savings, cost reduction, headcount efficiency. Include specific numbers and KPIs when available. |
| Decision Criteria | `meddpicc_decision_criteria` | Text | Factors driving the purchase decision — technical requirements (integrations, security, compliance), business requirements (ROI threshold, payback period), and political requirements (stakeholder alignment). |
| Competition | `meddpicc_competition` | Text | Competitors being evaluated, incumbent solutions, build-vs-buy considerations, and "do nothing" risk. Include how we differentiate against each. |
| Budget Total | `meddpicc_budget_total` | Currency (USD) | Total budget amount discussed or estimated for the deal — the prospect's stated budget ceiling, approved spend, or pricing anchor. |
The skill works without these fields — it always creates a detailed note and tasks. With the fields, it also populates structured data for filtering, reporting, and pipeline views.
### Optional Fields (require Attio paid plan for record-reference type)
If your Attio plan supports record-reference custom attributes, you can also add:
| Attribute Name | API Slug (suggested) | Type | Description |
|---|---|---|---|
| Champion | `meddpicc_champion` | Record Reference → People (or Text on free plans) | Internal advocate for our solution — name, role, advocacy strength, and their personal win tied to our success. |
| Economic Buyer | `meddpicc_economic_buyer` | Record Reference → People (or Text on free plans) | Person with final budget authority and veto power — name, title, and whether we have direct access or need to go through someone. |
These are nice-to-haves. Champion and Economic Buyer info is always captured in the MEDDPICC note regardless.
## Attio MCP Tool Reference

| Task | Tool | Key parameters |
|---|---|---|
| Find company | `search-records` | `object: "companies"`, `query` |
| Find deal | `search-records` | `object: "deals"`, `query` |
| Check MEDDPICC fields exist | `list-attribute-definitions` | `object: "deals"`, `query: "meddpicc"` |
| Update deal | `update-record` | `object: "deals"`, `record_id`, `attributes` |
| Create note on deal | `create-note` | `parent_object: "deals"`, `parent_record_id`, `title`, `content` |
| Create follow-up task | `create-task` | `content`, `deadline_at`, `linked_record_object`, `linked_record_id` |
| Update Sales list entry | `update-list-entry-by-record-id` | `list: "sales"`, `parent_object: "companies"`, `attributes` |
| Get user identity | `whoami` | — |
| List deal attributes | `list-attribute-definitions` | `object: "deals"` |
| Get full deal record | `get-records-by-ids` | `object: "deals"`, `record_ids: [uuid]` |
## Granola MCP Tool Reference

| Task | Tool |
|---|---|
| Get full transcript | `get_meeting_transcript` (`meeting_id`) |
| Query meeting content | `query_granola_meetings` (`query`, optional `document_ids`) |
| List recent meetings | `list_meetings` |
| Get meeting metadata | `get_meetings` |
## Environment

| Integration | Source | Purpose |
|---|---|---|
| Attio | MCP (Attio connector) | CRM — deals, companies, notes, tasks |
| Granola | MCP (Granola connector) | Meeting transcripts and summaries |
| Gmail | MCP (Gmail connector) | Email context if needed for follow-up |
| Google Calendar | MCP (Google Calendar connector) | Meeting scheduling context |
## Stage-Weighted Expectations
Not every dimension needs to score high at every stage. Use this as a coaching guide:
| Stage | Expected strong (3-4) | Acceptable weak (0-2) |
|---|---|---|
| Discovery | Identified Pain, Metrics | Paper Process, Competition |
| Demo | Pain, Metrics, Decision Criteria | Paper Process |
| POC/Trial | Pain, Metrics, Criteria, Champion | Paper Process |
| Negotiation | ALL should be 2+ | None — gaps here are red flags |
| Contract | ALL should be 3+ | None — missing info blocks close |
Flag any dimension that's weaker than expected for the stage as a priority gap.
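The stage-weighted check can be sketched as a lookup of minimum expected scores. The numeric thresholds mirror the table above (strong = 3, "ALL 2+" / "ALL 3+" for late stages); the exact dimension names are this document's:

```python
DIMENSIONS = ["Metrics", "Economic Buyer", "Decision Criteria", "Decision Process",
              "Paper Process", "Identified Pain", "Champion", "Competition"]

# Minimum expected score per dimension at each stage; dimensions not
# listed carry no expectation yet at that stage.
STAGE_MINIMUMS = {
    "Discovery":   {"Identified Pain": 3, "Metrics": 3},
    "Demo":        {"Identified Pain": 3, "Metrics": 3, "Decision Criteria": 3},
    "POC/Trial":   {"Identified Pain": 3, "Metrics": 3,
                    "Decision Criteria": 3, "Champion": 3},
    "Negotiation": {d: 2 for d in DIMENSIONS},
    "Contract":    {d: 3 for d in DIMENSIONS},
}

def priority_gaps(stage: str, scores: dict) -> list:
    """Dimensions scoring below the stage's expectation, biggest deficit first."""
    expected = STAGE_MINIMUMS.get(stage, {})
    deficits = [(expected[d] - scores.get(d, 0), d)
                for d in expected if scores.get(d, 0) < expected[d]]
    return [d for _, d in sorted(deficits, reverse=True)]
```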