# Competitive Analysis Skill

Procedural guidance for turning competitor products into structured intelligence that feeds PRDs, design briefs, and visual direction decisions.
## When to Use

- Running `/landscape [initiative] [competitors...]` - Scoping a new initiative that needs market context
- Entering the Define phase when the PRD needs competitive evidence
- A customer mentions a competitor in research or Slack signals
- Evaluating "build vs. match" decisions for a feature area
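The trigger command above can be parsed mechanically. A minimal Python sketch, assuming space-separated tokens (the parser and its error message are illustrative, not part of the skill):

```python
def parse_landscape(command: str) -> tuple[str, list[str]]:
    """Split '/landscape [initiative] [competitors...]' into its parts.

    Returns the initiative name and a (possibly empty) competitor list.
    """
    parts = command.split()
    if len(parts) < 2 or parts[0] != "/landscape":
        raise ValueError("expected: /landscape [initiative] [competitors...]")
    return parts[1], parts[2:]
```

For example, `parse_landscape("/landscape onboarding Gong Momentum")` yields `("onboarding", ["Gong", "Momentum"])`.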
## Inputs

- Initiative name
- List of competitor product names and/or URLs
- Optional: a specific feature area or flow to focus on (e.g., "onboarding", "dashboard", "agent configuration")
## Required Context

Load before analysis:

- `pm-workspace-docs/company-context/product-vision.md` - Know what AskElephant IS and IS NOT
- `pm-workspace-docs/company-context/strategic-guardrails.md` - Red flags for copying vs. differentiating
- `pm-workspace-docs/company-context/personas.md` - Who we build for (evaluate competitors through our persona lens)
- The initiative's `research.md`, if it exists - Customer-voiced competitive signals
## Competitor Tiering

Classify every competitor before analysis:
| Tier | Definition | Analysis Depth | Examples |
|---|---|---|---|
| Direct | Same product category, same target buyer | Full profile + UX deep dive | Momentum, Reevo, Gong |
| Indirect | Different product, solves same job-to-be-done | Profile + feature comparison | Day.ai, Clari, Chorus |
| Adjacent | Different category, shares a design/automation pattern | Pattern extraction only | Zapier, Make, Tray.ai, Relay.app |
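One way to keep tier assignments consistent with the analysis depths in this table is a small lookup. A Python sketch; the competitor-to-tier mapping is illustrative:

```python
from enum import Enum

class Tier(Enum):
    """Competitor tiers, each carrying the analysis depth it earns."""
    DIRECT = "full profile + UX deep dive"
    INDIRECT = "profile + feature comparison"
    ADJACENT = "pattern extraction only"

# Illustrative classification for one initiative (from the table's examples)
competitors = {
    "Momentum": Tier.DIRECT,
    "Day.ai": Tier.INDIRECT,
    "Zapier": Tier.ADJACENT,
}

def analysis_depth(name: str) -> str:
    """Look up how deep the analysis should go for a classified competitor."""
    return competitors[name].value
```

Keeping the tier an enum (rather than a free-text field) means the analysis depth can never drift from the classification.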
## Methodology

### 1. Define Analysis Dimensions
Tie dimensions to the specific initiative, not a generic feature checklist. Ask:
- What user problem does this initiative solve?
- What flows/screens are most relevant to compare?
- What decision are we trying to make? (build, match, leapfrog, ignore)
Example dimensions for an "agent automation" initiative:
- Agent configuration UX (how complex is setup?)
- Trigger/action model (visual builder vs. code vs. natural language)
- Error handling and observability (can users debug failed automations?)
- Integration depth (shallow webhook vs. deep CRM field mapping)
- Trust signals (how does the product communicate what the AI will do?)
### 2. Gather Intelligence
Sources to check for each competitor:

- Product website - Positioning, pricing, target persona messaging
- Product screenshots/demos - Via web search for "[product] dashboard screenshot", "[product] UI demo"
- G2 screenshot galleries - `g2.com/products/[product]/screenshots` (often 5-15 real product screenshots)
- Interactive demos - Search for "[product] interactive demo" or "[product] product tour" (Navattic, Storylane, Reprise embeds)
- YouTube walkthroughs - Search "[product] demo walkthrough [year]" for recent UI screenshots
- G2/Capterra reviews - What real users praise and complain about
- Job postings - What they're building next (hiring for = investing in)
- Changelog/blog - Recent feature launches and roadmap signals (often include UI screenshots of new features)
- Customer mentions - Search the initiative's `research.md` and `pm-workspace-docs/signals/` for competitor names
- Social/community - Reddit, Twitter/X, LinkedIn posts comparing tools
- Help docs/knowledge base - Often contain detailed UI screenshots showing actual product screens
Use web search extensively. Do NOT make up competitor features -- cite sources. Prioritize capturing real UI screenshots from these sources using the `browser-use` subagent.
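The source checklist above can be expanded into concrete search queries per competitor. A minimal sketch; the template list simply mirrors the searches named above:

```python
# Query templates drawn from the source checklist; "{product}" and "{year}"
# are filled in per competitor.
QUERY_TEMPLATES = [
    "{product} dashboard screenshot",
    "{product} UI demo",
    "{product} interactive demo",
    "{product} product tour",
    "{product} demo walkthrough {year}",
]

def build_queries(product: str, year: int) -> list[str]:
    """Fill each template with the competitor name and the target year."""
    return [t.format(product=product, year=year) for t in QUERY_TEMPLATES]
```

Running the same templates for every competitor keeps coverage uniform and makes gaps (e.g., no interactive demo found) easy to spot.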
### 3. Capture Real Competitor UI Screenshots

For Direct and Indirect competitors, go beyond feature lists. Capture actual competitor product UIs first, then generate mockups only as a supplement.
#### Step 3a: Screenshot Capture (Primary)

Use the browser-use subagent to navigate to competitor product pages and take real screenshots:

1. Search for screenshot sources: web search for "[product] dashboard", "[product] UI", "[product] demo", "[product] product tour". Look for:
   - Product marketing pages with embedded screenshots
   - Demo/tour pages (many SaaS products have interactive demos or Navattic/Storylane embeds)
   - G2 screenshot galleries (`g2.com/products/[product]/screenshots`)
   - YouTube demo walkthrough thumbnails
   - Product documentation with UI screenshots
   - Blog posts announcing features (often include UI previews)
2. Use the `browser-use` subagent to visit the best URLs and take screenshots:
   - Navigate to the page
   - Scroll to the relevant UI section
   - Take a screenshot
   - Save to `assets/competitive/` with the naming convention below
3. Naming convention for real screenshots:
   - `[competitor]-[screen]-screenshot.png` (e.g., `gainsight-health-dashboard-screenshot.png`)
   - `[competitor]-[flow]-screenshot-[N].png` for multi-step flows (e.g., `vitally-setup-wizard-screenshot-1.png`)
4. Annotate each screenshot in the competitive landscape doc with:
   - Source URL
   - Date captured
   - What it shows (screen name, key patterns visible)
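If the browser-use subagent is unavailable, the capture loop in steps 2-3 could be approximated with Playwright (a substitute tool, not what this skill prescribes). The source list and URL below are placeholders:

```python
from pathlib import Path

# Hypothetical source list: (competitor, screen, public URL showing that screen)
SOURCES = [
    ("gainsight", "health-dashboard", "https://example.com/gainsight-demo"),
]

def screenshot_path(out_dir: str, competitor: str, screen: str) -> str:
    """Build a path following the [competitor]-[screen]-screenshot.png convention."""
    return f"{out_dir}/{competitor}-{screen}-screenshot.png"

def capture_all(out_dir: str = "assets/competitive") -> None:
    """Visit each source URL and save a full-page screenshot."""
    from playwright.sync_api import sync_playwright  # pip install playwright

    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for competitor, screen, url in SOURCES:
            page.goto(url)
            page.screenshot(
                path=screenshot_path(out_dir, competitor, screen),
                full_page=True,
            )
        browser.close()
```

`full_page=True` captures below-the-fold UI, which matters for dashboard layouts; for a specific section, scroll or pass an element selector instead.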
#### Step 3b: AI-Generated Mockups (Supplement)

When real screenshots aren't available (login-gated products, no public demos, or a need to illustrate a pattern comparison across competitors):

- Use image generation to create representative comparison mockups
- Naming convention for generated mockups: `[pattern]-comparison-mockup.png` or `[competitor]-[pattern]-mockup.png`
- Always label generated images clearly in the doc: "AI-generated representation based on public documentation and marketing materials"
#### General Guidelines

- Always prefer real screenshots over generated mockups -- they're more credible and show actual UX details
- Capture the flow, not just individual screens (onboarding sequence, configuration wizard, dashboard layout)
- Note interaction patterns: drag-and-drop, form-based, natural language, wizard-style
- Save all images to `pm-workspace-docs/initiatives/active/[name]/assets/competitive/`
- For each initiative, aim for at least 2-3 real screenshots per Direct competitor
### 4. Build Feature Matrix

- Rows = capabilities relevant to THIS initiative (not a generic checklist)
- Columns = competitors + AskElephant (current) + AskElephant (proposed)

Use these ratings:
- Leading - Best-in-class implementation
- Parity - Meets market expectation
- Basic - Functional but limited
- Missing - Not available
- N/A - Not applicable to this product
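The matrix itself can be kept as data and rendered to markdown on demand. A sketch; the capabilities, products, and ratings here are illustrative:

```python
RATINGS = ["Leading", "Parity", "Basic", "Missing", "N/A"]

# Illustrative matrix: capability -> {product: rating}
matrix = {
    "Visual workflow builder": {"Competitor A": "Leading", "AskElephant (current)": "Missing"},
    "Error observability": {"Competitor A": "Basic", "AskElephant (current)": "Parity"},
}

def render_matrix(matrix: dict) -> str:
    """Emit a markdown table: capabilities as rows, products as columns.

    Missing entries default to N/A; unknown ratings are rejected.
    """
    products = sorted({p for row in matrix.values() for p in row})
    lines = [
        "| Capability | " + " | ".join(products) + " |",
        "|---|" + "---|" * len(products),
    ]
    for cap, row in matrix.items():
        for rating in row.values():
            assert rating in RATINGS, f"unknown rating: {rating}"
        cells = " | ".join(row.get(p, "N/A") for p in products)
        lines.append(f"| {cap} | {cells} |")
    return "\n".join(lines)
```

Keeping ratings as data makes it cheap to re-render the table when a competitor ships an update, instead of hand-editing markdown.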
### 5. Map Differentiation

Categorize each capability:
| Category | Meaning | Strategic Response |
|---|---|---|
| Table Stakes | Everyone has it, customers expect it | Must match, don't over-invest |
| Parity Zone | Most competitors have it, some don't | Match if evidence demands it |
| Opportunity Gap | Few or no competitors serve this well | Potential differentiator -- validate with users |
| AskElephant Unique | Only we have this (or could) | Protect and amplify |
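A rough heuristic for assigning these categories from competitor coverage might look like the following; the thresholds are illustrative, and real categorization should remain a judgment call backed by user evidence:

```python
def categorize(coverage: dict[str, bool], we_have_it: bool) -> str:
    """Map competitor coverage of one capability to a differentiation category.

    coverage: {competitor_name: has_the_capability}
    we_have_it: whether AskElephant has (or could uniquely have) it
    Thresholds (all / half / fewer) are an illustrative heuristic.
    """
    total = len(coverage)
    covered = sum(coverage.values())
    if we_have_it and covered == 0:
        return "AskElephant Unique"
    if covered == total:
        return "Table Stakes"
    if covered >= total / 2:
        return "Parity Zone"
    return "Opportunity Gap"
```

For example, a capability every competitor ships lands in Table Stakes regardless of our own coverage, which matches the "must match, don't over-invest" response above.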
### 6. Extract Design Vocabulary

Identify the language and patterns competitors use:
- Adopt: Patterns that are becoming user expectations (e.g., "visual workflow builder" for automations)
- Reject: Patterns that conflict with our values (e.g., surveillance dashboards, complexity-as-power)
- Leapfrog: Patterns we can do better because of our unique position (meeting context, CRM knowledge)
## Required Output Sections

The `competitive-landscape.md` document MUST include:
### 1. TL;DR
2-3 sentence market position summary. Where does AskElephant sit? What's the primary differentiation opportunity?
### 2. Competitor Profiles
Per competitor (Direct and Indirect tiers):
- Product: Name + URL
- Tier: Direct / Indirect / Adjacent
- Positioning: How they describe themselves (use their actual tagline)
- Target Persona: Who they sell to
- Key Strengths: 2-3 things they do well
- Key Weaknesses: 2-3 gaps or complaints (cite G2/review sources)
- Relevance to This Initiative: Why this competitor matters for this specific work
### 3. Feature Matrix
Table with initiative-specific capabilities as rows, competitors as columns.
### 4. UX Pattern Inventory
For each key flow relevant to the initiative:
- How does Competitor A handle it?
- How does Competitor B handle it?
- What's the emerging "best practice" pattern?
- Where are users frustrated? (from reviews)
- Screenshot/mockup references
### 5. Visual Reference Gallery
Organized by flow or screen type, with clear labeling:
Real Competitor Screenshots (captured from product pages, demos, G2):
- Link to each image with source URL, date captured, and what it demonstrates
- These are primary references for design decisions
AI-Generated Comparison Mockups (created when real screenshots unavailable):
- Clearly labeled as generated representations
- Used to illustrate pattern comparisons across competitors or when products are login-gated
### 6. Differentiation Map
Table categorizing each capability as Table Stakes / Parity Zone / Opportunity Gap / AskElephant Unique.
### 7. Design Vocabulary
- Patterns to Adopt: List with rationale
- Patterns to Reject: List with rationale (cite anti-vision when relevant)
- Patterns to Leapfrog: Where our unique context enables better solutions
### 8. Strategic Recommendations
- What to match (table stakes we're missing)
- What to leapfrog (opportunity gaps we can own)
- What to ignore (competitor features that don't serve our personas)
- Risks if we don't act
## Save Locations

- Analysis document: `pm-workspace-docs/initiatives/active/[name]/competitive-landscape.md`
- Real competitor screenshots: `pm-workspace-docs/initiatives/active/[name]/assets/competitive/[competitor]-[screen]-screenshot.png`
- Generated comparison mockups: `pm-workspace-docs/initiatives/active/[name]/assets/competitive/[pattern]-comparison-mockup.png`
- Competitive signals from other sources: append to the existing `competitive-landscape.md`, or create it if missing
## Image Naming Convention

| Type | Pattern | Example |
|---|---|---|
| Real screenshot | `[competitor]-[screen]-screenshot.png` | `gainsight-health-dashboard-screenshot.png` |
| Multi-step flow | `[competitor]-[flow]-screenshot-[N].png` | `vitally-setup-wizard-screenshot-1.png` |
| Generated mockup | `[pattern]-comparison-mockup.png` | `health-score-patterns-comparison-mockup.png` |
| Generated per-competitor | `[competitor]-[pattern]-mockup.png` | `churnzero-alert-ux-mockup.png` |
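These patterns can be checked mechanically before files land in `assets/competitive/`. A sketch using regexes that mirror the table; the lowercase-alphanumeric-plus-hyphen character class is inferred from the examples:

```python
import re

# One regex per naming pattern from the table above.
PATTERNS = [
    re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*-screenshot\.png$"),         # real screenshot
    re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*-screenshot-\d+\.png$"),     # multi-step flow
    re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*-comparison-mockup\.png$"),  # generated mockup
    re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*-mockup\.png$"),             # per-competitor mockup
]

def is_valid_name(filename: str) -> bool:
    """True if the filename matches any of the naming-convention patterns."""
    return any(p.match(filename) for p in PATTERNS)
```

A validator like this is cheap insurance that the gallery stays greppable by competitor and screen name.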
## Integration Points
- Research Analyst: When analyzing transcripts, competitive mentions feed into the Competitor Profiles section
- PRD Writer: The Feature Matrix and Differentiation Map provide "Competitive Evidence" for the PRD
- Design Brief: Design Vocabulary section feeds directly into the brief's "References" and "Patterns to Adopt/Reject"
- Visual Design: The UX Pattern Inventory and Visual Reference Gallery inform mockup generation directions
## Anti-Patterns
- Copying competitor features without understanding WHY they built them
- Generic feature comparison that isn't tied to the specific initiative
- Listing features without evaluating UX quality and user satisfaction
- Treating "competitor has it" as sufficient evidence to build (needs user evidence too)
- Ignoring adjacent competitors that share relevant design patterns
- Analysis paralysis -- the goal is actionable intelligence, not an exhaustive report
## When to Refresh
- Before entering a new initiative phase (Discovery -> Define -> Build)
- When a competitor launches a significant update in the same space
- When customer research surfaces new competitor mentions
- Quarterly for ongoing initiatives