# Testing Strategy

## What I Do

Provide universal testing strategies and best practices that apply across different programming languages and project types.
## Universal Testing Framework

### Test Organization

```text
# Universal test directory structure
tests/
├── unit/              # Unit tests
│   ├── core/          # Core functionality tests
│   └── utils/         # Utility function tests
├── integration/       # Integration tests
├── e2e/               # End-to-end tests
├── performance/       # Performance tests
├── conftest.py        # Shared fixtures
└── test_data/         # Test data files
```
### Test Coverage Standards

Minimum coverage targets:

- **Unit tests**: 90%+ code coverage
- **Integration tests**: 80%+ scenario coverage
- **End-to-end tests**: critical user journey coverage
- **Performance tests**: baseline performance metrics
```python
# Universal test coverage monitoring (requires pytest and pytest-cov)
import pytest


def run_tests_with_coverage() -> int:
    """Run the test suite with coverage measurement via pytest-cov."""
    # pytest-cov starts, stops, and saves coverage data itself, so no
    # manual coverage.Coverage() bookkeeping is needed around pytest.main()
    return pytest.main([
        "--cov=src",
        "--cov-report=term-missing",
        "--cov-fail-under=90",
        "tests/",
    ])
```
## When to Use Me

Use this skill when:

- Setting up testing for new projects
- Standardizing testing across teams
- Creating reusable testing patterns
- Implementing quality assurance processes
## Universal Testing Examples

### Test Fixtures

```python
# Universal test fixture patterns
import pytest
from typing import Generator


@pytest.fixture(scope="module")
def database_connection() -> Generator:
    """Module-scoped database fixture: one connection per test module."""
    # Setup (create_test_database / cleanup_test_database are
    # project-specific helpers)
    conn = create_test_database()
    yield conn
    # Teardown
    conn.close()
    cleanup_test_database()


@pytest.fixture
def sample_data():
    """Universal sample data fixture (helpers are project-specific)."""
    return {
        "valid_input": get_valid_input(),
        "edge_cases": get_edge_cases(),
        "invalid_input": get_invalid_input(),
    }
```
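Fixtures are injected by matching argument names. A minimal, self-contained sketch; the data values here are hypothetical placeholders standing in for the project-specific helpers above:

```python
import pytest


def build_sample_data():
    # Plain helper so the same data can also be built outside of pytest;
    # the values are hypothetical placeholders
    return {"valid_input": [1, 2, 3], "empty_input": []}


@pytest.fixture
def sample_data():
    return build_sample_data()


def test_sum_of_valid_input(sample_data):
    # pytest injects the fixture because the argument name matches
    assert sum(sample_data["valid_input"]) == 6
```

Fixtures defined in `conftest.py` are discovered automatically and shared across every test file in that directory tree, which is why the layout above keeps it at the top of `tests/`.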
### Parameterized Testing

```python
# Universal parameterized testing
import pytest


@pytest.mark.parametrize("values, expected", [
    ([1, 2, 3], 6),     # Normal case
    ([], 0),            # Empty input
    ([-1, 0, 1], 0),    # Mixed values
    ([1.5, 2.5], 4.0),  # Float values
])
def test_sum_function(values, expected):
    """Test sum() with various inputs ("values" avoids shadowing input())."""
    assert sum(values) == expected
```
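When parameter tuples grow, `pytest.param` gives each case a readable ID that shows up in failure reports and `-k` selection. A sketch using the same `sum()` behavior:

```python
import pytest


@pytest.mark.parametrize(
    "values, expected",
    [
        pytest.param([1, 2, 3], 6, id="normal"),
        pytest.param([], 0, id="empty"),
        pytest.param([-1, 0, 1], 0, id="mixed-signs"),
    ],
)
def test_sum_with_ids(values, expected):
    # Failures report as test_sum_with_ids[empty] etc., instead of
    # auto-generated IDs like test_sum_with_ids[values1-0]
    assert sum(values) == expected
```

`pytest.param` also accepts `marks=` (e.g. `pytest.mark.xfail`) to flag individual cases without splitting the test.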
### Mocking and Isolation

```python
# Universal mocking patterns
from unittest.mock import MagicMock, patch


def test_api_call():
    """Test API calls with mocking."""
    # Mock the external API
    mock_response = MagicMock()
    mock_response.json.return_value = {"status": "success"}
    # Keep a handle on the mock: once the with-block exits, requests.get
    # is restored, so assertions must go through mock_get
    with patch("requests.get", return_value=mock_response) as mock_get:
        result = make_api_call()  # project function under test
    assert result == {"status": "success"}
    mock_get.assert_called_once_with("https://api.example.com/data")
```
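Mocks should also exercise failure paths. A sketch using `side_effect` to simulate a network error; `fetch_status` is a hypothetical wrapper standing in for the project's API client:

```python
from unittest.mock import MagicMock, patch

import requests


def fetch_status():
    # Hypothetical unit under test: wraps the external API and
    # degrades gracefully on connection failures
    try:
        response = requests.get("https://api.example.com/data", timeout=5)
        return response.json()
    except requests.ConnectionError:
        return {"status": "error"}


def test_api_failure_is_handled():
    # side_effect raises the exception instead of returning a value,
    # so the error-handling branch gets covered
    with patch("requests.get", side_effect=requests.ConnectionError):
        assert fetch_status() == {"status": "error"}
```

Testing both branches this way keeps the suite deterministic: no test ever depends on the real network being up or down.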
### Performance Testing

```python
# Universal performance testing
# (register the "performance" marker in pytest.ini to avoid warnings)
import time

import pytest


@pytest.mark.performance
def test_processing_speed():
    """Test that processing speed meets requirements."""
    # perf_counter is monotonic and higher-resolution than time.time()
    start_time = time.perf_counter()
    result = process_large_dataset()  # project function under test
    duration = time.perf_counter() - start_time
    assert duration < 5.0, f"Processing took {duration:.2f}s, expected <5.0s"
    assert result.is_valid()
```
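Single-run timings are noisy on shared CI machines. One hedge is to repeat the measurement and assert on the median; the helper and threshold below are illustrative, not a fixed standard:

```python
import statistics
import time


def measure(func, repeats=5):
    # Run func several times and report the median duration, which is
    # far less sensitive to one-off scheduler noise than a single run
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        durations.append(time.perf_counter() - start)
    return statistics.median(durations)


def test_fast_operation_budget():
    # Hypothetical budget: tune the threshold per operation and machine
    median = measure(lambda: sum(range(10_000)))
    assert median < 1.0, f"median {median:.4f}s exceeds 1.0s budget"
```

For serious benchmarking, a dedicated plugin such as pytest-benchmark adds calibration and statistical reporting on top of this idea.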
## Best Practices

- **Consistency**: apply the same testing patterns across projects
- **Automation**: integrate testing into CI/CD pipelines
- **Isolation**: test components in isolation
- **Documentation**: document testing approaches clearly
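The isolation principle above can be made concrete with pytest's built-in `tmp_path` fixture; `write_report` is a hypothetical unit under test:

```python
from pathlib import Path


def write_report(path: Path, lines: list[str]) -> Path:
    # Hypothetical unit under test: writes one line per entry
    path.write_text("\n".join(lines))
    return path


def test_write_report_is_isolated(tmp_path: Path):
    # tmp_path is a built-in pytest fixture providing a fresh temporary
    # directory per test, so the test never touches real project files
    out = write_report(tmp_path / "report.txt", ["alpha", "beta"])
    assert out.read_text() == "alpha\nbeta"
```

Because every test gets its own directory, these tests can run in any order or in parallel without interfering with each other.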
## Compatibility

Applies to:

- All programming languages
- Any software project type
- Cross-project testing standardization
- Organizational quality assurance