Competitive Analysis

Skill for systematic competitive analysis with parallel sub-agents, website scraping via Playwright MCP, diff tracking, threat scoring, and CSV/XLSX output (compatible with Google Sheets and Excel).

When to Use

  • User asks to perform a competitive analysis
  • User wants to compare a product against competitors
  • Need to update competitor data and detect changes

Execution Algorithm

Step 0: Check Prerequisites

Before starting, verify that Playwright MCP is available:

  1. Try calling mcp__mcp-playwright__browser_navigate with url https://example.com
  2. If it fails or tool is not found:
    • Run: claude mcp add playwright -- npx @playwright/mcp@latest
    • Tell the user: "Installing Playwright MCP... Please restart Claude Code and run the analysis again."
    • STOP execution

If Playwright is available — proceed to Step 1.

Step 1: Load Competitor List

Read the file competitors.yaml in this skill's directory. The file contains a list of competitors with names and URLs. If the user specified particular competitors — use only those. Otherwise — use all from the file.

```yaml
# Example competitors.yaml structure
competitors:
  - name: "Competitor 1"
    url: "https://example1.com"
```
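Since the skill's scripts are stdlib-only Python, the competitor list could be read without PyYAML by a minimal line-based parser. This is only a sketch for the flat structure above; `load_competitors` is a hypothetical helper, not part of the skill:

```python
def load_competitors(path):
    """Stdlib-only sketch: parse the flat "- name:" / "url:" pairs above.

    A real implementation would more likely use PyYAML; this ignores
    every line except the two fields the skill actually needs.
    """
    competitors = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line.startswith("- name:"):
                competitors.append({"name": line.split(":", 1)[1].strip().strip('"')})
            elif line.startswith("url:") and competitors:
                competitors[-1]["url"] = line.split(":", 1)[1].strip().strip('"')
    return competitors
```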

Step 2: Launch Parallel Sub-Agents (one per competitor)

For each competitor in the list, launch a separate sub-agent via the Agent tool. All sub-agents run in parallel — each collects data for its own competitor.

IMPORTANT: launch all sub-agents simultaneously, NOT sequentially.

Pass the following instruction to each sub-agent (substituting the competitor's data):


Sub-agent instruction (template):

You are a competitive analysis sub-agent. Your task is to collect data about the competitor "{name}" ({url}) and save the result as JSON.

### A. Website Scraping via Playwright MCP

Visit the following competitor pages. On each page, make two tool calls (navigate, then snapshot):

1. **Homepage** ({url}):
   - mcp__mcp-playwright__browser_navigate(url="{url}")
   - mcp__mcp-playwright__browser_snapshot()
   - Extract: tagline, product description, value proposition, positioning

2. **Pricing page** ({url}/pricing):
   - mcp__mcp-playwright__browser_navigate(url="{url}/pricing")
   - mcp__mcp-playwright__browser_snapshot()
   - Extract: pricing plans (name, price, period, features), business model, sales model
   - If the page doesn't load (404/redirect) — record "No public pricing"

3. **Features page** ({url}/features):
   - mcp__mcp-playwright__browser_navigate(url="{url}/features")
   - mcp__mcp-playwright__browser_snapshot()
   - Extract: key features, advanced features, integrations, platforms, customization

4. **About page** ({url}/about):
   - mcp__mcp-playwright__browser_navigate(url="{url}/about")
   - mcp__mcp-playwright__browser_snapshot()
   - Extract: company description, team, investors, geography

5. **Careers page** ({url}/careers or {url}/jobs):
   - mcp__mcp-playwright__browser_navigate(url="{url}/careers")
   - mcp__mcp-playwright__browser_snapshot()
   - Extract: number of open positions, hiring directions
   - If /careers doesn't work — try /jobs

If any page is unavailable — skip it and record "Unavailable" in the corresponding fields.

### B. Review Search via WebSearch

Execute the following queries:

- WebSearch("{name} reviews")
- WebSearch("{name} vs competitors")
- WebSearch("{name} user feedback")

From results extract:
- What clients praise (at least 3 points)
- What clients complain about (at least 3 points)

### C. Fill JSON and Save

Fill the JSON according to the following structure and save to file reports/raw/{name_slug}.json:

```json
{
  "overview": {
    "name": "{name}",
    "link": "{url}",
    "tagline": "...",
    "description": "..."
  },
  "marketing": {
    "name": "{name}",
    "tagline": "...",
    "value_proposition": "...",
    "positioning": "...",
    "marketing_channels": ["..."],
    "marketing_strategies": ["..."],
    "keywords": ["..."],
    "social_profiles": ["twitter: ...", "linkedin: ...", "github: ..."]
  },
  "product": {
    "name": "{name}",
    "link": "{url}/features",
    "use_cases": ["..."],
    "key_features": ["..."],
    "advanced_features": ["..."],
    "customization": ["..."],
    "integrations": ["..."],
    "differentiators": ["..."],
    "support": ["..."],
    "platforms": ["..."]
  },
  "pricing": {
    "name": "{name}",
    "business_model": ["..."],
    "plans": ["Free: $0/mo — ...", "Pro: $20/mo — ...", "Business: $40/mo — ..."],
    "sales_model": "..."
  },
  "audience": {
    "name": "{name}",
    "users": ["..."],
    "buyers": ["..."],
    "company_types": ["..."],
    "company_size": ["..."],
    "geography": ["..."],
    "other_stakeholders": ["..."]
  },
  "reviews": {
    "name": "{name}",
    "what_clients_praise": ["..."],
    "what_clients_complain_about": ["..."]
  },
  "swot": {
    "name": "{name}",
    "strengths": ["..."],
    "weaknesses": ["..."],
    "opportunities": ["..."],
    "threats": ["..."]
  }
}
```

All array values are strings. Do not use nested objects. Plans should be written as a single string: "Name: $price/period — description". Social profiles as an array of strings: "platform: link". Support — array of strings: "Email", "Chat", "Documentation", etc.

IMPORTANT: Every section must contain the "name" field with the value "{name}".


---
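The `{name_slug}` in the sub-agents' output path is not defined precisely by the skill; one plausible derivation (hypothetical helper, assuming a lowercase-with-underscores rule) is:

```python
import re

def name_slug(name: str) -> str:
    # Hypothetical slug rule: lowercase the name, collapse every run of
    # non-alphanumeric characters into a single underscore.
    return re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")
```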

Step 3: Merge Results into CSV

Once all sub-agents have finished, verify that JSON files for each competitor exist in the `reports/raw/` directory. Then run:

```bash
python3 scripts/merge_to_csv.py reports/raw/ reports/competitive_YYYY-MM-DD.csv
```

Where YYYY-MM-DD is today's date. The script merges all JSON files from reports/raw/ into a single CSV table following the template format from templates/competitive-template.csv.
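The merge step can be sketched as follows. This is an illustrative reimplementation, not the actual `scripts/merge_to_csv.py`: it assumes the JSON schema from the sub-agent template and the wide layout described under Output Format (parameters as rows, one column per competitor):

```python
import csv
import glob
import json
import os

def merge_to_csv(raw_dir, out_csv):
    # Sketch only: the real script follows templates/competitive-template.csv;
    # this version derives rows directly from the JSON sections.
    paths = sorted(glob.glob(os.path.join(raw_dir, "*.json")))
    data = []
    for p in paths:
        with open(p, encoding="utf-8") as f:
            data.append(json.load(f))
    names = [d["overview"]["name"] for d in data]
    rows = [["Parameter"] + names]
    sections = ("overview", "marketing", "product", "pricing",
                "audience", "reviews", "swot")
    for section in sections:
        rows.append([])                 # empty row between sections
        rows.append([section.upper()])  # section header row
        for field in data[0].get(section, {}):
            if field == "name":         # every section repeats "name"
                continue
            values = [d.get(section, {}).get(field, "") for d in data]
            rows.append([field] + ["; ".join(v) if isinstance(v, list) else v
                                   for v in values])
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows)
```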

Step 4: Try XLSX Conversion

Attempt to create an XLSX file with formatting:

```bash
python3 scripts/convert_to_xlsx.py reports/competitive_YYYY-MM-DD.csv
```

If openpyxl is installed, this creates an .xlsx file with section headers highlighted in green, bold parameter names, auto-width columns, and alternating row colors. If openpyxl is not available, the script prints a message and exits gracefully — the CSV is still the primary output.
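The graceful-degradation pattern this step describes might look like the sketch below. It is not the actual `scripts/convert_to_xlsx.py`: it only copies rows into a workbook and omits the green section headers and other styling:

```python
import csv

def convert_to_xlsx(csv_path):
    # openpyxl is optional; if it is missing, report and fall back to CSV.
    try:
        from openpyxl import Workbook
    except ImportError:
        print("openpyxl not installed; skipping XLSX (CSV is the primary output).")
        return None
    wb = Workbook()
    ws = wb.active
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            ws.append(row)
    xlsx_path = csv_path.rsplit(".", 1)[0] + ".xlsx"
    wb.save(xlsx_path)
    return xlsx_path
```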

Step 5: Diff with Previous Analysis (if available)

Check whether a previous CSV file exists in the reports/ directory (a file matching competitive_*.csv, excluding the current one). If it does:

```bash
python3 scripts/competitive_diff.py reports/competitive_YYYY-MM-DD.csv reports/competitive_PREVIOUS_DATE.csv
```

The script outputs JSON with a change delta and threat score for each competitor. Save the output to reports/diff_YYYY-MM-DD.json.
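Locating the previous report can be sketched with a hypothetical helper like this; it relies on the ISO `YYYY-MM-DD` dates in the filenames sorting lexicographically in chronological order:

```python
import glob
import os

def find_previous_csv(reports_dir, current_csv):
    # ISO dates sort lexicographically, so the last candidate after
    # excluding the current file is the most recent earlier report.
    candidates = sorted(glob.glob(os.path.join(reports_dir, "competitive_*.csv")))
    candidates = [p for p in candidates
                  if os.path.basename(p) != os.path.basename(current_csv)]
    return candidates[-1] if candidates else None
```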

Step 6: Generate Summary

```bash
python3 scripts/generate_summary.py reports/competitive_YYYY-MM-DD.csv [reports/diff_YYYY-MM-DD.json]
```

The second argument (diff) is optional — pass it only if the file exists. The script outputs a text summary to stdout.

Step 7: Show Results to User

  1. Display the summary from Step 6
  2. Report the CSV file path: reports/competitive_YYYY-MM-DD.csv
  3. If XLSX was created, report: reports/competitive_YYYY-MM-DD.xlsx
  4. Suggest opening the file in Google Sheets or Excel

Briefly summarize:

  • How many competitors were analyzed
  • Key findings
  • Threat scores (if diff was available)
  • Major changes since the last analysis (if any)

Output Format

Primary output — CSV file (opens in Google Sheets, Excel, Numbers). Optional — XLSX file with formatting (section headers, bold, auto-width).

CSV structure:

  • Column A — parameters (row names)
  • Columns B, C, D... — competitors (one per column)
  • Sections separated by empty rows
  • Section headers: OVERVIEW / ОБЗОР, MARKETING / МАРКЕТИНГ, PRODUCT / ПРОДУКТ, PRICING / ЦЕНООБРАЗОВАНИЕ, AUDIENCE / АУДИТОРИЯ, REVIEWS & REPUTATION / ОТЗЫВЫ И РЕПУТАЦИЯ, SWOT ANALYSIS / SWOT-АНАЛИЗ

Error Handling

  • If a competitor's page doesn't load — skip it and record "Unavailable" in the corresponding fields
  • If Playwright MCP is not available — prompt the user to restart after installation (Step 0)
  • If WebSearch is not available — skip review search, fill the REVIEWS section as "Data not collected"
  • If a Python script crashes — show the error to the user and suggest filling the report manually
  • If a sub-agent doesn't return a result — skip that competitor and notify the user

Dependencies

  • MCP Playwright — for scraping competitor websites (mcp__mcp-playwright__browser_navigate, mcp__mcp-playwright__browser_snapshot)
  • WebSearch — for searching reviews
  • Agent — for parallel sub-agent execution (one per competitor)
  • Python 3 — for scripts (stdlib only + openpyxl optional for XLSX)