Analysis

Overview

analysis provides systematic analytical operations for understanding code, skills, processes, data, and patterns. It helps extract insights, identify improvements, recognize patterns, and make data-driven decisions.

Purpose: Transform raw information into actionable insights through systematic analysis

The 5 Analysis Operations:

  1. Code Analysis - Quality, complexity, patterns, technical debt
  2. Skill Analysis - Effectiveness, usage patterns, improvement opportunities
  3. Process Analysis - Efficiency, bottlenecks, optimization opportunities
  4. Data Analysis - Metrics, trends, statistical insights
  5. Pattern Recognition - Cross-artifact patterns, recurring themes, systemic insights

Key Benefits:

  • Data-Driven Decisions: Base improvements on evidence, not assumptions
  • Pattern Discovery: Identify recurring themes across multiple artifacts
  • Quality Insights: Understand code/skill quality objectively
  • Process Optimization: Find bottlenecks and inefficiencies
  • Trend Identification: Spot improving/degrading patterns over time

When to Use

Use analysis when:

  1. Understanding Code Quality - Analyze codebase for patterns, complexity, technical debt
  2. Evaluating Skill Effectiveness - Assess which skills work well, which need improvement
  3. Optimizing Processes - Identify bottlenecks, inefficiencies in workflows
  4. Making Data-Driven Decisions - Use metrics and trends to guide improvements
  5. Discovering Patterns - Find recurring themes across code, skills, or processes
  6. Measuring Progress - Track improvements over time quantitatively
  7. Identifying Opportunities - Discover improvement and optimization opportunities
  8. Post-Review Analysis - After reviews, analyze findings for systemic insights
  9. Continuous Improvement - Feed insights back into development process

Operations

Operation 1: Code Analysis

Purpose: Analyze code for quality, complexity, patterns, and technical debt

When to Use This Operation:

  • Assessing codebase quality
  • Identifying refactoring opportunities
  • Understanding code complexity
  • Detecting code smells
  • Planning technical debt reduction

Process:

  1. Define Analysis Scope

    • Which code to analyze? (files, modules, entire codebase)
    • What aspects? (quality, complexity, patterns, debt)
    • What questions to answer?
  2. Gather Code Metrics

    • Lines of code (LOC)
    • Function/class count
    • Cyclomatic complexity
    • Duplication levels
    • Comment density
  3. Identify Patterns

    • Common code patterns used
    • Recurring structures
    • Naming conventions
    • Architecture patterns
    • Design patterns applied
  4. Detect Code Smells

    • Long functions (>50 lines)
    • Deep nesting (>3 levels)
    • Duplicated code
    • Complex conditionals
    • Poor naming
  5. Generate Insights

    • Overall quality assessment
    • Complexity hotspots
    • Refactoring priorities
    • Pattern recommendations
    • Technical debt inventory
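
Much of step 2 and step 4 can be automated. The sketch below is a minimal illustration using Python's standard-library ast module; the thresholds mirror the ones listed above, and the auth/ scope is borrowed from the example further down, not prescribed by this skill.

```python
# Minimal sketch: gather basic metrics (step 2) and flag long or
# overly complex functions (step 4) using Python's stdlib ast module.
import ast
from pathlib import Path

LONG_FUNCTION = 50    # lines, threshold from step 4
HIGH_COMPLEXITY = 10  # branch points, threshold used in the example below

def cyclomatic(fn: ast.AST) -> int:
    # 1 + number of branch points: a common approximation
    branches = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(n, branches) for n in ast.walk(fn))

def analyze_file(path: Path) -> dict:
    source = path.read_text()
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    return {
        "loc": len(source.splitlines()),
        "functions": len(funcs),
        "classes": sum(isinstance(n, ast.ClassDef) for n in ast.walk(tree)),
        "long": [f.name for f in funcs
                 if f.end_lineno - f.lineno + 1 > LONG_FUNCTION],
        "complex": [f.name for f in funcs if cyclomatic(f) > HIGH_COMPLEXITY],
    }

for path in Path("auth").rglob("*.py"):  # illustrative scope from step 1
    print(path, analyze_file(path))
```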

Validation Checklist:

  • Analysis scope clearly defined
  • Key metrics collected
  • Patterns identified (at least 2-3)
  • Code smells detected (if any)
  • Quality assessment completed
  • Actionable insights generated
  • Recommendations prioritized

Outputs:

  • Code quality assessment
  • Complexity metrics
  • Identified patterns
  • Code smells list
  • Refactoring recommendations
  • Technical debt inventory

Time Estimate: 30-90 minutes (varies by scope)

Example:

Code Analysis: Authentication Module
=====================================

Scope: auth/ directory (15 files, 3,200 LOC)

Metrics:
- Total LOC: 3,200
- Functions: 85
- Classes: 12
- Average function length: 25 lines (good)
- Cyclomatic complexity: Average 4.2 (acceptable)

Patterns Identified:
1. Decorator pattern for authentication checks (used 12x)
2. Strategy pattern for auth methods (OAuth, JWT, API key)
3. Factory pattern for token generation

Code Smells Detected:
❌ 3 functions >100 lines (validate_token, process_oauth, refresh_session)
❌ 2 files with >15% code duplication
⚠️ 5 functions with complexity >10
⚠️ Inconsistent error handling (some raise, some return None)

Quality Assessment: 7/10 (Good with improvements needed)

Recommendations:
1. [High] Refactor 3 long functions into smaller units
2. [High] Extract duplicated code to shared utilities
3. [Medium] Standardize error handling (use exceptions consistently)
4. [Low] Add docstrings to 8 functions missing them

Technical Debt Estimate: 8-12 hours to address all issues

Operation 2: Skill Analysis

Purpose: Analyze skill effectiveness, usage patterns, and identify improvement opportunities

When to Use This Operation:

  • Evaluating skill ecosystem health
  • Understanding which skills are most valuable
  • Identifying underutilized skills
  • Planning skill improvements
  • Measuring skill development efficiency

Process:

  1. Collect Skill Metrics

    • Number of skills in ecosystem
    • Lines of code per skill
    • Build time per skill
    • Pattern distribution (workflow/task/reference)
    • Quality scores (from review-multi)
  2. Analyze Usage Patterns

    • Which skills used most frequently?
    • Which skills rarely used?
    • Skill dependencies (which skills require others?)
    • Integration patterns (how skills compose)
  3. Assess Effectiveness

    • Do skills achieve stated purposes?
    • User satisfaction with skills
    • Time savings delivered
    • Quality improvements enabled
  4. Identify Improvement Opportunities

    • Skills with low quality scores
    • Skills with usability issues
    • Missing functionality gaps
    • Integration opportunities
  5. Generate Recommendations

    • Skills to improve (with specific changes)
    • Skills to deprecate (if any)
    • New skills to build (gaps identified)
    • Integration opportunities
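
A minimal sketch of how steps 1 and 4 might look in code, aggregating per-skill records and flagging improvement candidates. The field names, quality threshold, and 20-hour baseline are illustrative assumptions; this skill does not define a metrics schema.

```python
# Illustrative sketch: aggregate skill metrics (step 1) and surface
# improvement candidates (step 4). Field names are assumptions.
from statistics import mean

skills = [
    {"name": "planning-architect", "build_hours": 20.0, "quality": 5},
    {"name": "task-development", "build_hours": 5.0, "quality": 5},
    # ... one record per skill in the ecosystem
]

BASELINE = 20.0  # hours for the first skill built

print(f"skills: {len(skills)}")
print(f"avg build time: {mean(s['build_hours'] for s in skills):.1f}h")
print(f"avg efficiency gain: "
      f"{mean(1 - s['build_hours'] / BASELINE for s in skills):.0%}")

needs_work = [s["name"] for s in skills if s["quality"] < 4]
print("improvement candidates:", needs_work or "none")
```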

Validation Checklist:

  • Skill metrics collected for all skills
  • Usage patterns analyzed
  • Effectiveness assessed (evidence-based)
  • Improvement opportunities identified
  • Recommendations prioritized
  • Actionable insights generated

Outputs:

  • Skill ecosystem metrics
  • Usage pattern analysis
  • Effectiveness assessment
  • Improvement opportunities list
  • Prioritized recommendations

Time Estimate: 45-90 minutes

Example:

Skill Ecosystem Analysis
========================

Skills in Ecosystem: 8
Total LOC: ~25,000
Average Build Time: 6.8 hours/skill
Efficiency Gain: 70.6% faster than baseline

Pattern Distribution:
- Workflow: 5 skills (62.5%)
- Task: 3 skills (37.5%)

Quality Scores (Structure):
- All 8 skills: 5/5 (Grade A)
- 100% structural excellence

Usage Patterns (Inferred):
- Most Used: development-workflow (used to build skills 8-9)
- High Value: planning-architect, task-development, todo-management (used in every skill)
- Recently Added: review-multi, context-engineering (usage TBD)

Effectiveness Assessment:
✅ Bootstrap strategy working (efficiency compounding)
✅ All skills achieve stated purposes
✅ Quality maintained through rapid building
✅ Progressive disclosure effective (token optimization)

Improvement Opportunities:
1. Add Quick Reference to 3 early skills → DONE ✅
2. Refine vague validation in 3 skills → Low priority
3. Build remaining Layer 2 skills → IN PROGRESS

Recommendations:
1. [High] Complete Layer 2 (3 skills remaining)
2. [Medium] Conduct comprehensive reviews on planning-architect, development-workflow
3. [Low] Refine script detection accuracy (pattern detection)

Insights:
- Skills built faster over time (compound efficiency)
- Standards evolved (Quick Reference added during skills 4-5)
- Continuous improvement cycle working (review → improve → validate)

Operation 3: Process Analysis

Purpose: Analyze workflow efficiency, identify bottlenecks, and discover optimization opportunities

When to Use This Operation:

  • Optimizing development workflows
  • Identifying process inefficiencies
  • Reducing cycle times
  • Improving team productivity
  • Streamlining operations

Process:

  1. Map Current Process

    • Document process steps
    • Identify decision points
    • Note hand-offs and dependencies
    • Measure duration of each step
  2. Collect Process Metrics

    • Cycle time (start to finish)
    • Wait time (delays, blockers)
    • Active time (actual work)
    • Rework time (fixes, iterations)
    • Throughput (completions per time period)
  3. Identify Bottlenecks

    • Steps with longest duration
    • Steps with most wait time
    • Steps with highest rework rate
    • Resource constraints
    • Dependency blockages
  4. Analyze Efficiency

    • Time utilization (active vs wait)
    • Automation opportunities
    • Parallelization potential
    • Waste identification (unnecessary steps)
  5. Generate Optimization Recommendations

    • Bottleneck elimination strategies
    • Automation opportunities
    • Process simplification
    • Parallel work enablement
    • Waste reduction
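
As a sketch of steps 2-4, the snippet below takes per-step timings and computes cycle time, the active/rework split, and the bottleneck. All step names and hours are invented for illustration.

```python
# Illustrative sketch: per-step metrics (step 2), bottleneck ranking
# (step 3), and time utilization (step 4). All numbers are invented.
steps = {
    # name: (active_hours, wait_hours, rework_hours)
    "research":       (3.0, 0.5, 0.5),
    "planning":       (1.5, 0.2, 0.8),
    "implementation": (14.0, 1.0, 3.0),
    "testing":        (2.5, 0.3, 0.5),
}

cycle_time = sum(sum(t) for t in steps.values())
active = sum(a for a, _, _ in steps.values())
rework = sum(r for _, _, r in steps.values())

print(f"cycle time: {cycle_time:.1f}h")
print(f"active: {active / cycle_time:.0%}, rework: {rework / cycle_time:.0%}")

bottleneck = max(steps, key=lambda name: sum(steps[name]))
print("bottleneck:", bottleneck)  # -> implementation
```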

Validation Checklist:

  • Process mapped completely
  • Metrics collected for all steps
  • Bottlenecks identified (at least 1-2)
  • Efficiency analysis completed
  • Optimization opportunities found
  • Recommendations prioritized by impact
  • Estimated improvement quantified

Outputs:

  • Process map (visual or textual)
  • Process metrics
  • Bottleneck analysis
  • Efficiency assessment
  • Optimization recommendations with estimated impact

Time Estimate: 60-120 minutes

Example:

Process Analysis: Skill Development Workflow
============================================

Current Process (Before development-workflow):
1. Research (ad-hoc): 2-4 hours
2. Planning (informal): 1-2 hours
3. Implementation: 12-20 hours
4. Testing: 2-3 hours

Total Cycle Time: 17-29 hours per skill

Bottlenecks Identified:
❌ Research phase: No systematic approach → wide time variance
❌ Planning: Informal → often incomplete, causes rework
❌ Implementation: No task breakdown → easy to lose track mid-build

Process Efficiency:
- Active time: 60-70% (actual work)
- Wait time: 10-15% (thinking, decisions)
- Rework: 20-25% (fixing incomplete plans)

After development-workflow Implementation:
1. Research (skill-researcher): 1 hour (systematic)
2. Planning (planning-architect): 1.5 hours (comprehensive)
3. Tasks (task-development): 45 min (clear breakdown)
4. Implementation: 8-15 hours (guided by prompts)
5. Validation: 30-60 min

Total Cycle Time: 12-18 hours per skill

Improvements:
✅ Research: 50-60% faster (systematic approach)
✅ Planning: More thorough but faster (structured process)
✅ Implementation: 30-40% faster (clear tasks, good prompts)
✅ Rework: Reduced to 5-10% (better planning)

Overall Improvement: 35-40% cycle time reduction
Quality Impact: Improved (more systematic, better planning)

Optimization Recommendations:
1. [Applied] Use development-workflow for all skills ✅
2. [Future] Automate research aggregation
3. [Future] Template-based planning for common patterns
4. [Future] Continuous validation during development (not just end)

Operation 4: Data Analysis

Purpose: Analyze metrics, trends, and statistical patterns in data

When to Use This Operation:

  • Understanding quantitative data
  • Identifying trends over time
  • Making evidence-based decisions
  • Measuring improvements
  • Validating hypotheses

Process:

  1. Define Analysis Questions

    • What questions need answering?
    • What decisions depend on this analysis?
    • What hypotheses to test?
  2. Collect Data

    • Gather relevant metrics
    • Ensure data quality and completeness
    • Document data sources
    • Note collection methodology
  3. Calculate Metrics

    • Basic statistics (mean, median, min, max)
    • Distributions (variance, standard deviation)
    • Rates and percentages
    • Trends over time
    • Correlations (if applicable)
  4. Identify Trends

    • Improving trends (getting better)
    • Degrading trends (getting worse)
    • Stable patterns (consistent)
    • Anomalies (outliers, unusual data points)
  5. Generate Insights

    • What does the data show?
    • What are the implications?
    • What actions should be taken?
    • What should be monitored going forward?
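
Steps 3 and 4 need nothing beyond the standard library. This sketch uses the seven visible build times from the example that follows; the half-versus-half comparison is a deliberately crude trend check, not a rigorous statistical test.

```python
# Sketch of steps 3-4: basic statistics plus a crude trend check,
# using the visible build times from the example below.
from statistics import mean, median, stdev

build_hours = [5.0, 3.5, 3.0, 2.5, 2.5, 5.5, 13.0]  # skills 2-6, 8, 9

print(f"mean {mean(build_hours):.1f}h, median {median(build_hours):.1f}h")
print(f"range {min(build_hours)}-{max(build_hours)}h, "
      f"stdev {stdev(build_hours):.1f}h")

# crude trend: does the later half average lower than the earlier half?
half = len(build_hours) // 2
earlier, later = build_hours[:half], build_hours[half:]
print("trend:", "improving" if mean(later) < mean(earlier)
      else "stable or degrading")  # the outliers dominate here
```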

Validation Checklist:

  • Analysis questions clearly defined
  • Data collected completely
  • Metrics calculated correctly
  • Trends identified (improving/degrading/stable)
  • Insights generated (what data shows)
  • Recommendations actionable
  • Conclusions evidence-based

Outputs:

  • Calculated metrics
  • Trend analysis
  • Data visualizations (tables, charts if helpful)
  • Statistical insights
  • Evidence-based recommendations

Time Estimate: 45-90 minutes

Example:

Data Analysis: Skill Build Efficiency
======================================

Question: Is build efficiency actually improving over time?

Data Collected (8 skills):
| Skill # | Name | Build Time | Efficiency vs Baseline |
|---------|------|------------|----------------------|
| 1 | planning-architect | 20.0h | 0% (baseline) |
| 2 | task-development | 5.0h | 75% faster |
| 3 | todo-management | 3.5h | 82.5% faster |
| 4 | prompt-builder | 3.0h | 85% faster |
| 5 | skill-researcher | 2.5h | 87.5% faster |
| 6 | workflow-skill-creator | 2.5h | 87.5% faster |
| 8 | development-workflow | 5.5h | 72.5% faster |
| 9 | review-multi | 13.0h | 35% faster |

Metrics:
- Mean build time (skills 2-9): 5.6 hours
- Median build time: 4.25 hours
- Range: 2.5h to 13h
- Average efficiency gain: 70.6% faster than baseline

Trends Identified:
✅ Improving: Skills 2-6 show increasing efficiency (75% → 87.5%)
⚠️ Plateau: Efficiency levels off at 87.5% by skill 6
⚠️ Outliers: Skill 8 (5.5h) and skill 9 (13h) break the trend

Outlier Analysis:
- Skill 8 (development-workflow): 5.5h (slower than trend)
  Reason: First workflow composition, new pattern learning
  Acceptable: Still 72.5% faster than baseline

- Skill 9 (review-multi): 13h (much slower)
  Reason: High complexity (13 files, 4 scripts, detailed rubrics)
  Acceptable: Still 35% faster than the 20h baseline

Insights:
1. Efficiency compounds across skills 2-6 (each faster than the previous)
2. Efficiency plateaus around 85-90% (some minimum build time is irreducible)
3. Complex skills (review-multi) still benefit from workflow (35% faster)
4. New patterns (workflow composition) add learning time but still faster

Conclusion: ✅ Hypothesis CONFIRMED
- Build efficiency IS improving
- Compound gains through skill 6
- Plateau at 85-90% for simple skills
- Complex skills still benefit (35%+ faster)

Recommendations:
1. Continue using development-workflow (proven effective)
2. Expect 85-90% efficiency for simple/medium skills
3. Expect 30-50% efficiency for complex/novel patterns
4. Track actual vs estimated times for better prediction

Operation 5: Pattern Recognition

Purpose: Identify recurring patterns, themes, and systemic insights across multiple artifacts

When to Use This Operation:

  • Analyzing multiple reviews/analyses
  • Identifying systemic issues
  • Discovering best practices from evidence
  • Understanding ecosystem trends
  • Extracting learnings for future work

Process:

  1. Collect Artifacts

    • Gather all relevant data (reviews, analyses, metrics, feedback)
    • Ensure sufficient sample size (3+ instances minimum)
    • Document artifact sources and dates
  2. Identify Recurring Themes

    • Issues appearing in multiple artifacts
    • Practices working consistently well
    • Common failure modes
    • Repeated patterns (good or bad)
  3. Categorize Patterns

    • Structural patterns (organization, naming)
    • Content patterns (documentation styles)
    • Quality patterns (anti-patterns, best practices)
    • Process patterns (workflow effectiveness)
    • Temporal patterns (evolution over time)
  4. Assess Pattern Significance

    • Frequency (how often appears?)
    • Impact (how much does it matter?)
    • Consistency (always true or sometimes?)
    • Causation (what causes this pattern?)
  5. Extract Insights and Recommendations

    • Document discovered patterns
    • Explain significance and impact
    • Provide actionable recommendations
    • Update guidelines/templates with learnings
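
In the simplest case, steps 2 and 4 reduce to tagging findings and counting how often each tag recurs across artifacts. A minimal sketch, with invented tags and findings:

```python
# Minimal sketch: count recurring finding tags across review
# artifacts (steps 2 and 4). Tags and findings are invented.
from collections import Counter

reviews = {
    "planning-architect": ["no-quick-reference", "vague-validation"],
    "task-development":   ["no-quick-reference"],
    "todo-management":    ["no-quick-reference", "vague-validation"],
    "prompt-builder":     [],
}

counts = Counter(tag for findings in reviews.values() for tag in findings)
n = len(reviews)
for tag, c in counts.most_common():
    if c >= 2:  # recurring = seen in 2+ artifacts
        print(f"{tag}: {c}/{n} skills ({c / n:.0%})")
```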

Validation Checklist:

  • Multiple artifacts analyzed (3+ minimum)
  • Recurring themes identified (2+ patterns)
  • Patterns categorized by type
  • Significance assessed (frequency, impact)
  • Insights evidence-based (not speculation)
  • Recommendations actionable
  • Learnings documented for future use

Outputs:

  • Identified patterns (with evidence)
  • Pattern significance assessment
  • Systemic insights
  • Updated guidelines/templates
  • Recommendations for future work

Time Estimate: 60-120 minutes

Example:

Pattern Recognition: Skill Review Findings (7 Skills)
=====================================================

Artifacts Analyzed: 7 structure reviews + 7 pattern analyses

Recurring Patterns Identified:

PATTERN 1: Quick Reference Evolution
- Frequency: 3 of 7 skills (43%)
- Observation: Skills 1-3 lack Quick Reference, skills 4-8 have it
- Significance: Standard evolved during development
- Impact: User experience (medium)
- Causation: Learned its importance while building skills 4-5
- Recommendation: Add to early skills retroactively → DONE ✅

PATTERN 2: Progressive Disclosure Compliance
- Frequency: 7 of 7 skills (100%)
- Observation: All skills maintain SKILL.md + references/ structure
- Significance: Fundamental design principle
- Impact: Context optimization (high)
- Recommendation: Continue applying in all future skills

PATTERN 3: Validation Specificity Improvement
- Frequency: Evolved over skills 1-8
- Observation: Earlier skills have some vague validation, later skills more specific
- Significance: Quality improvement over time
- Impact: Validation reliability (medium)
- Recommendation: Refine vague criteria in early skills (low priority)

PATTERN 4: Complexity vs Build Time
- Frequency: 8 data points
- Observation: Complex skills take longer even with workflow (review-multi 13h vs others 2.5-5.5h)
- Significance: Complexity matters more than experience
- Impact: Estimation accuracy (high)
- Recommendation: Adjust estimates based on complexity, not just efficiency gains

PATTERN 5: Best Practices Adoption
- Frequency: 7 of 7 skills (100%)
- Observation: All skills have validation checklists, examples, error documentation
- Significance: Strong quality foundation
- Impact: Quality consistency (high)
- Recommendation: Document these as mandatory standards

Systemic Insights:
1. Standards evolve through building (Quick Reference example)
2. Continuous improvement works (retroactive improvements possible)
3. Complexity dominates build time (more than experience level)
4. Best practices highly adopted (100% consistency)
5. Structural excellence across the board (all 5/5)

Recommendations for Future:
1. Document evolved standards in common-patterns.md ✅
2. Apply retroactive improvements systematically ✅
3. Adjust time estimates based on complexity tiers
4. Continue tracking patterns for continuous learning
5. Update skill-builder-generic with discovered patterns

Best Practices

1. Define Clear Questions

Practice: Start analysis with specific questions to answer

Rationale: Clear questions focus analysis, prevent meandering exploration

Application: Write 2-5 specific questions before beginning analysis

2. Collect Sufficient Data

Practice: Ensure adequate sample size for reliable patterns

Rationale: Small samples (n=1-2) can be misleading; n≥3 begins to reveal patterns

Application: Analyze at least 3 instances before claiming pattern

3. Quantify When Possible

Practice: Use metrics and numbers, not just qualitative assessment

Rationale: Quantitative data enables objective comparison and trend tracking

Application: Count, measure, calculate - then interpret

4. Separate Observation from Interpretation

Practice: Clearly distinguish what you observe from what you conclude

Rationale: Prevents bias, enables others to validate conclusions

Application: "Observation: X. Interpretation: This suggests Y because Z."

5. Prioritize Insights

Practice: Not all insights are equally important - prioritize by impact

Rationale: Focus on high-impact findings, don't get lost in details

Application: Tag insights as Critical/High/Medium/Low impact

6. Make Recommendations Actionable

Practice: Every insight should lead to specific, actionable recommendation

Rationale: Analysis without action is academic - need practical application

Application: For each insight, specify: "Recommendation: Do X to achieve Y"

7. Document and Share

Practice: Record analysis findings for future reference

Rationale: Learnings compound when captured and shared

Application: Create analysis reports, update guidelines with patterns

8. Validate Conclusions

Practice: Test conclusions with additional data or expert review

Rationale: Prevents false patterns, ensures reliability

Application: When possible, validate findings with second analyst or additional data


Common Mistakes

Mistake 1: Analysis Paralysis

Symptom: Endless analysis without decisions or actions

Cause: Perfect information seeking, fear of deciding

Fix: Set a time box (e.g., 2 hours max), then decide with the data available

Prevention: Define analysis questions and stopping criteria upfront

Mistake 2: Small Sample Size

Symptom: Claiming patterns from 1-2 instances

Cause: Insufficient data collection

Fix: Gather more data (minimum n=3), acknowledge limitations if small sample

Prevention: Check sample size before concluding patterns

Mistake 3: Confirmation Bias

Symptom: Finding only evidence supporting preconceived ideas

Cause: Looking for confirmation, not truth

Fix: Actively seek disconfirming evidence, consider alternative explanations

Prevention: Define questions objectively, analyze all the data (don't cherry-pick)

Mistake 4: Confusing Correlation with Causation

Symptom: Assuming A causes B because they occur together

Cause: Logical fallacy

Fix: Identify plausible causal mechanisms, test with additional evidence

Prevention: Use careful language: "correlated with" not "causes"

Mistake 5: No Actionable Recommendations

Symptom: Interesting findings but unclear what to do

Cause: Analysis without application thinking

Fix: For each finding, ask "So what? What should we do?"

Prevention: Require actionable recommendation for each insight

Mistake 6: Ignoring Context

Symptom: Misinterpreting data due to missing context

Cause: Analyzing data without understanding circumstances

Fix: Gather context (why the data was collected, what was happening, any special circumstances)

Prevention: Document context alongside data


Quick Reference

The 5 Analysis Operations

| Operation | Focus | When to Use | Time | Key Output |
|-----------|-------|-------------|------|------------|
| Code Analysis | Quality, complexity, patterns | Assessing codebase, refactoring | 30-90m | Quality assessment, refactoring priorities |
| Skill Analysis | Effectiveness, usage, improvements | Evaluating skill ecosystem | 45-90m | Effectiveness assessment, improvement opportunities |
| Process Analysis | Efficiency, bottlenecks, optimization | Optimizing workflows | 60-120m | Bottleneck analysis, process optimization |
| Data Analysis | Metrics, trends, statistics | Evidence-based decisions | 45-90m | Metrics, trends, insights |
| Pattern Recognition | Cross-artifact patterns, systemic insights | Continuous improvement | 60-120m | Identified patterns, systemic recommendations |

Analysis Types

| Type | Input | Output | Methods |
|------|-------|--------|---------|
| Quantitative | Numbers, metrics | Statistics, trends | Calculate, compare, trend analysis |
| Qualitative | Text, observations | Themes, patterns | Categorize, synthesize, interpret |
| Comparative | Multiple artifacts | Similarities, differences | Side-by-side comparison, contrast |
| Temporal | Data over time | Trends, changes | Time-series analysis, before/after |
| Root Cause | Problems | Underlying causes | 5 Whys, fishbone, causal analysis |

Key Metrics for Skills

Build Efficiency:

  • Build time per skill
  • Efficiency vs baseline (%)
  • Time savings (hours)
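
For reference, the "efficiency vs baseline" figures used throughout this document are consistent with a simple fractional reduction against the first skill's 20-hour build:

```python
# How "efficiency vs baseline" is computed in the tables above.
baseline, build = 20.0, 5.0  # hours: skill 1 (baseline) vs skill 2
print(f"{(baseline - build) / baseline:.0%} faster")  # -> 75% faster
```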

Quality:

  • Review scores (1-5 scale)
  • Anti-pattern count
  • Best practice adherence (%)

Usage:

  • Skills used (frequency)
  • Integration patterns
  • User satisfaction

Ecosystem:

  • Total skills
  • Pattern distribution
  • Dependency graph
  • Completion percentage

Analysis Checklist Template

Analysis: [Topic]
==================

Questions:
1. [Question 1]
2. [Question 2]

Data Collected:
- [Source 1]: [Data]
- [Source 2]: [Data]

Metrics Calculated:
- [Metric 1]: [Value]
- [Metric 2]: [Value]

Patterns/Trends Identified:
1. [Pattern 1]: [Evidence]
2. [Pattern 2]: [Evidence]

Insights:
- [Insight 1]
- [Insight 2]

Recommendations:
1. [Priority] [Recommendation 1]
2. [Priority] [Recommendation 2]

For More Information

  • Code analysis techniques: references/code-analysis-guide.md
  • Skill metrics: references/skill-metrics-guide.md
  • Pattern recognition: references/pattern-recognition-guide.md

analysis transforms data into insights, enabling evidence-based improvement of code, skills, and processes throughout the development ecosystem.
