# Technology Evaluation
Systematic approach to evaluating and comparing technical options with clear criteria and documented rationale.
## When to Use This Skill
- Choosing between libraries or frameworks
- Evaluating architectural approaches
- Comparing implementation strategies
- Making build vs buy decisions
- Documenting why a technology was chosen
## Evaluation Framework

### Quick Evaluation (5-Minute Check)

For simple decisions, answer these questions:

```markdown
## Quick Evaluation: {Technology/Approach}

1. **Does it solve the problem?** Yes/No/Partially
2. **Is it maintained?** Check last commit, open issues
3. **Does it fit our stack?** Python, existing dependencies
4. **Is it simple enough?** For a PoC, simplicity wins
5. **Can we switch later?** Is the decision reversible?

**Decision**: Use / Don't Use / Investigate Further
```
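The five answers map to a decision mechanically. A minimal sketch — the decision rules below are one reasonable interpretation, not a policy prescribed by this skill:

```python
# Illustrative mapping from the five quick-check answers to a decision.
# The rules are an assumption, not a fixed policy from the skill.
def quick_decision(solves_problem: bool, maintained: bool,
                   fits_stack: bool, simple_enough: bool,
                   reversible: bool) -> str:
    if not solves_problem or not maintained:
        return "Don't Use"
    if fits_stack and simple_enough:
        return "Use"
    # Unclear fit or extra complexity: dig deeper, but only if the
    # decision can still be reversed later.
    return "Investigate Further" if reversible else "Don't Use"

print(quick_decision(True, True, True, True, True))  # Use
```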
### Standard Evaluation Matrix

For comparing multiple options:

```markdown
## Evaluation: {Decision Topic}

### Options

| Option | Description |
|--------|-------------|
| A: {name} | {brief description} |
| B: {name} | {brief description} |
| C: {name} | {brief description} |

### Criteria (weighted)

| Criterion | Weight | Description |
|-----------|--------|-------------|
| Simplicity | 3 | Easy to understand and implement |
| Fit | 3 | Aligns with existing patterns |
| Maintenance | 2 | Actively maintained, good docs |
| Performance | 1 | Meets performance needs |
| Flexibility | 1 | Can adapt to changing requirements |

### Scores (1-5, 5 is best)

| Criterion | Weight | Option A | Option B | Option C |
|-----------|--------|----------|----------|----------|
| Simplicity | 3 | ? | ? | ? |
| Fit | 3 | ? | ? | ? |
| Maintenance | 2 | ? | ? | ? |
| Performance | 1 | ? | ? | ? |
| Flexibility | 1 | ? | ? | ? |
| **Weighted Total** | | **?** | **?** | **?** |

### Recommendation

**Choose: {Option}**

**Rationale**: {Why this option wins}
```
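Computing the weighted totals is simple enough to script. A sketch using the example weights from the matrix above — the scores are placeholder values, not a real evaluation:

```python
# Weighted scoring: total = sum(weight * score) across criteria.
# Weights mirror the example matrix; scores below are made up.
WEIGHTS = {"Simplicity": 3, "Fit": 3, "Maintenance": 2,
           "Performance": 1, "Flexibility": 1}

def weighted_total(scores: dict) -> int:
    """Scores are 1-5 per criterion; higher weighted total wins."""
    return sum(WEIGHTS[criterion] * score
               for criterion, score in scores.items())

option_a = {"Simplicity": 4, "Fit": 5, "Maintenance": 3,
            "Performance": 4, "Flexibility": 2}
print(weighted_total(option_a))  # 3*4 + 3*5 + 2*3 + 1*4 + 1*2 = 39
```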
### Deep Evaluation (For Significant Decisions)

When the decision has lasting impact:

````markdown
## Deep Evaluation: {Technology/Approach}

### Context

**Problem Statement**: {What we're trying to solve}

**Current State**: {How things work today}

**Constraints**:
- {Constraint 1}
- {Constraint 2}

### Options Analysis

#### Option A: {Name}

**Description**: {What this option entails}

**Pros**:
- {Advantage 1}
- {Advantage 2}

**Cons**:
- {Disadvantage 1}
- {Disadvantage 2}

**Risks**:
- {Risk 1}: {Mitigation}

**Effort**: {Low/Medium/High}

**Example**:

```python
# How this would look in our codebase
```

#### Option B: {Name}

{Same structure as Option A}

### Comparison Summary

| Aspect | Option A | Option B |
|--------|----------|----------|
| Learning curve | {Low/Med/High} | {Low/Med/High} |
| Implementation effort | {Low/Med/High} | {Low/Med/High} |
| Long-term maintenance | {Low/Med/High} | {Low/Med/High} |
| Reversibility | {Easy/Hard} | {Easy/Hard} |

### Recommendation

**Recommended**: {Option}

**Key Reasons**:
- {Primary reason}
- {Secondary reason}

**Trade-offs Accepted**:
- {What we're giving up by choosing this}

### Decision Record

If this is a significant architectural decision, create an ADR in `docs/adr/`.
````
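A minimal ADR skeleton for `docs/adr/` — the field names follow the widely used Nygard format, which is an assumption here; adapt to whatever convention the project's existing ADRs use:

```markdown
# ADR-NNN: {Title}

## Status
Accepted

## Context
{Problem statement and constraints}

## Decision
{The option chosen and why}

## Consequences
{Trade-offs accepted, follow-up work}
```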
## Evaluation Criteria Reference
### For Libraries/Dependencies
| Criterion | How to Evaluate |
|-----------|-----------------|
| **Maintenance** | Last release date, open issues, response time |
| **Popularity** | GitHub stars, downloads, community size |
| **Documentation** | Quality, examples, API reference |
| **Compatibility** | Python version, dependency conflicts |
| **Size** | Bundle size, transitive dependencies |
| **License** | Compatible with project license? |
| **Security** | Known vulnerabilities, security practices |
### For Architectural Approaches
| Criterion | How to Evaluate |
|-----------|-----------------|
| **Simplicity** | Lines of code, cognitive complexity |
| **Testability** | Easy to unit test? Mock dependencies? |
| **Extensibility** | Easy to add features later? |
| **Performance** | Meets requirements? Bottlenecks? |
| **Consistency** | Fits existing patterns in codebase? |
### For PoC-Specific Criteria
| Criterion | Weight | Rationale |
|-----------|--------|-----------|
| **Speed to implement** | High | PoC needs quick results |
| **Simplicity** | High | Avoid over-engineering |
| **Reversibility** | Medium | Can change later |
| **Scalability** | Low | Not the PoC focus |
| **Production-readiness** | Low | This is exploratory |
## Research Checklist
When evaluating a technology:
### Quick Research (15 min)
- [ ] Read the README/homepage
- [ ] Check GitHub stars and last commit date
- [ ] Scan open issues for red flags
- [ ] Look at a basic example
- [ ] Check compatibility with our Python version
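The version-compatibility check can often be done offline with a quick import probe. A sketch — `tomllib` is used only as an example of a version-gated module:

```python
import importlib.util
import sys

# Probe whether a module can be imported under the current interpreter.
# find_spec returns None for top-level modules that are not installed
# (or not present in this Python version's stdlib).
def is_available(module_name: str) -> bool:
    return importlib.util.find_spec(module_name) is not None

print(sys.version_info[:2])
print(is_available("json"))     # stdlib, present everywhere
print(is_available("tomllib"))  # stdlib only on Python 3.11+
```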
### Standard Research (1 hour)
- [ ] Complete quick research
- [ ] Read getting started guide
- [ ] Try a minimal implementation
- [ ] Check for our specific use case in docs
- [ ] Look for comparison articles
- [ ] Check dependency tree
### Deep Research (half day)
- [ ] Complete standard research
- [ ] Build a prototype
- [ ] Test edge cases
- [ ] Evaluate error handling
- [ ] Check performance characteristics
- [ ] Review source code quality
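For the performance step, a `timeit` micro-benchmark is usually enough at evaluation stage. A sketch — the `candidate` function is a hypothetical stand-in for whatever operation you are measuring:

```python
import timeit

# Hypothetical stand-in for the operation under evaluation.
def candidate(data):
    return sorted(data)

data = list(range(1_000))

# Time 100 repetitions; compare the same harness across options.
elapsed = timeit.timeit(lambda: candidate(data), number=100)
print(f"100 runs: {elapsed:.4f}s")
```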
## Common Evaluation Scenarios
### Choosing a Validation Library
```markdown
## Evaluation: Data Validation
**Context**: Need to validate configuration files
**Options**:
1. Pydantic (already in project)
2. Cerberus
3. Marshmallow
4. Manual validation
**Quick Decision**: Use Pydantic
- Already a dependency
- Team familiar with it
- Fits existing patterns
- Excellent for our use case
**No further evaluation needed** - clear winner.
```

### Choosing Between Approaches

```markdown
## Evaluation: Error Handling Strategy

**Context**: How to handle and report validation errors

**Options**:
1. Raise exceptions immediately
2. Collect all errors, then raise
3. Return Result objects

**Criteria**:
- User experience (see all errors at once)
- Implementation simplicity
- Consistency with existing code

**Analysis**:

| Approach | UX | Simplicity | Consistency |
|----------|-----|------------|-------------|
| Immediate | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Collect all | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| Result objects | ⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐ |

**Recommendation**: Collect all errors
- Better UX outweighs slight complexity increase
- Pydantic supports this natively
```
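The "collect all errors, then raise" strategy, sketched in plain Python so the mechanics are visible — Pydantic's `ValidationError` aggregates field errors the same way when validating a model; the specific field checks below are hypothetical:

```python
# Collect every problem before raising, so users can fix them in one
# pass instead of replaying the validation one error at a time.
def validate_config(config: dict) -> list:
    errors = []
    if "name" not in config:
        errors.append("missing required field: name")
    if not isinstance(config.get("value", 0), int):
        errors.append("field 'value' must be an integer")
    return errors

def load_config(config: dict) -> dict:
    errors = validate_config(config)
    if errors:
        # One exception carrying the full list, not just the first failure
        raise ValueError("; ".join(errors))
    return config

print(validate_config({"value": "oops"}))  # both errors reported
```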
## Decision Documentation
### When to Create an ADR
Create an ADR when:
- Choosing a new dependency
- Establishing a pattern that others should follow
- Making a decision that's hard to reverse
- The decision required significant research
### Quick Decision Record
For simpler decisions, document inline:
```python
from dataclasses import dataclass

# Decision: Use dataclasses instead of plain dicts for internal data
# Rationale: Type safety, IDE support, and consistency with Pydantic models
# Alternatives: TypedDict (less clear), plain dict (no type safety)
# Date: 2026-02-05
@dataclass
class InternalConfig:
    name: str
    value: int
```
### Context File Documentation
Include decisions in implementation context:
```markdown
## Key Decisions

| Decision | Choice | Rationale |
|----------|--------|-----------|
| Validation library | Pydantic | Already in project, excellent fit |
| Error handling | Collect all | Better UX, Pydantic supports it |
| Config format | YAML | Human-readable, already used |
```
## Anti-Patterns
### ❌ Resume-Driven Development
Choosing technology because it's trendy:
- Focus on what solves the problem simply
- Boring technology is often the right choice
### ❌ Over-Evaluation
Spending days evaluating when the choice is obvious:
- If one option is clearly better, choose it
- Use quick evaluation for simple decisions
### ❌ Ignoring Existing Choices
Adding new tools when existing ones work:
- Check what's already in the project first
- Consistency has value
### ❌ Paralysis by Comparison
Getting stuck comparing minor differences:
- If options are roughly equal, pick one and move on
- The best decision is often just making a decision
## Integration with Agents

This skill is primarily used by:
- **Requirements Analyst** - For evaluating technology options
- **Python Expert** - For implementation approach decisions
- **Code Reviewer** - For validating technology choices