# Testing Strategy

Design and implement effective, maintainable, and fast test suites across Python and TypeScript ecosystems.
## When to Use This Skill

- Writing unit, integration, or E2E tests
- Designing test architecture for a new feature
- Setting up test infrastructure (fixtures, factories, mocking)
- Optimizing CI test execution
- Configuring coverage thresholds
## Core Principles

### 1. Test Pyramid Strategy

- Prioritize unit tests for fast feedback (70%)
- Write integration tests for component interactions (20%)
- Use E2E tests sparingly for critical user journeys (10%)
- Every layer should add unique confidence, not duplicate coverage
- Prefer testing behavior over implementation details
### 2. Test Quality Over Quantity

- Write tests that document intent — tests are executable specifications
- Each test should have a single reason to fail
- Use descriptive test names: `test_invoice_rejects_negative_amount`
- Avoid testing implementation details — test the contract
- Keep tests independent and deterministic (no shared mutable state)
- Prefer arrange-act-assert (AAA) pattern
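The naming, single-reason-to-fail, and AAA points above can be sketched in a few lines; the `Invoice` class below is invented purely for illustration:

```python
class Invoice:
    """Illustrative model only; a real suite would import its own."""

    def __init__(self, amount: float) -> None:
        if amount < 0:
            raise ValueError("Amount must be positive")
        self.amount = amount

    def apply_discount(self, pct: float) -> float:
        return round(self.amount * (1 - pct), 2)


def test_invoice_applies_percentage_discount() -> None:
    # Arrange: only the data this test needs
    invoice = Invoice(amount=100.0)

    # Act: a single action under test
    total = invoice.apply_discount(0.10)

    # Assert: one reason to fail
    assert total == 90.0
```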
### 3. Fast Feedback Loops

- Unit tests should run in under 1 second per file
- Mark slow tests with `@pytest.mark.slow` for selective execution
- Use in-memory databases or fakes over real services in unit tests
- Parallelize test execution with `pytest-xdist` when beneficial
- Structure tests for selective CI execution (path-filtered)
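One stdlib-only way to apply the "in-memory databases over real services" point is SQLite's `:memory:` mode; the repository class here is a hypothetical sketch, not a real project API:

```python
import sqlite3


class InvoiceRepository:
    """Hypothetical repository; in unit tests it runs against in-memory SQLite."""

    def __init__(self, conn: sqlite3.Connection) -> None:
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS invoices (id TEXT PRIMARY KEY, amount REAL)"
        )

    def add(self, invoice_id: str, amount: float) -> None:
        self.conn.execute("INSERT INTO invoices VALUES (?, ?)", (invoice_id, amount))

    def total(self) -> float:
        row = self.conn.execute("SELECT COALESCE(SUM(amount), 0) FROM invoices").fetchone()
        return row[0]


def test_repository_sums_amounts() -> None:
    # ":memory:" gives each test its own throwaway database -- fast and isolated
    repo = InvoiceRepository(sqlite3.connect(":memory:"))
    repo.add("INV-001", 100.0)
    repo.add("INV-002", 50.0)
    assert repo.total() == 150.0
```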
### 4. Effective Mocking

- Mock at boundaries (I/O, network, time, randomness)
- Never mock what you don't own — use adapters/wrappers instead
- Prefer fakes over mocks when behavior matters
- Use `unittest.mock.patch` for Python, `vi.mock` for Vitest
- Use `respx` for mocking `httpx` calls, `responses` for `requests`
- Verify mocked interactions only when the interaction itself is the behavior
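To make "prefer fakes over mocks" concrete, here is a minimal sketch: a hand-written in-memory fake checked by state rather than by call-count verification. `FakeTaxClient` and `submit_invoice` are invented names for illustration:

```python
class FakeTaxClient:
    """In-memory fake of a (hypothetical) tax-authority client: real behavior, no network."""

    def __init__(self) -> None:
        self.submitted: list[str] = []

    def submit(self, invoice_id: str) -> str:
        self.submitted.append(invoice_id)
        return "submitted"


def submit_invoice(invoice_id: str, client) -> str:
    # Production code depends on the boundary, not on a concrete HTTP client
    return client.submit(invoice_id)


def test_submit_invoice_with_fake() -> None:
    client = FakeTaxClient()

    assert submit_invoice("INV-001", client) == "submitted"
    # State-based check on the fake, not call-count verification on a mock
    assert client.submitted == ["INV-001"]
```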
### 5. Test Data Management

- Use factories (factory_boy, Faker) over raw fixtures for complex data
- Keep test data minimal — only set fields relevant to the test
- Use builders or object mothers for complex object graphs
- Avoid loading large datasets — prefer focused, targeted data
- Use snapshot testing for API response shape verification
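A minimal builder, as mentioned above, might look like the sketch below; the `Invoice` model and its defaults are illustrative assumptions:

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    """Illustrative model; only here so the sketch is self-contained."""
    id: str
    amount: float
    currency: str


class InvoiceBuilder:
    """Builder with sensible defaults; tests override only the fields they care about."""

    def __init__(self) -> None:
        self._id = "INV-001"
        self._amount = 100.0
        self._currency = "EUR"

    def with_amount(self, amount: float) -> "InvoiceBuilder":
        self._amount = amount
        return self

    def with_currency(self, currency: str) -> "InvoiceBuilder":
        self._currency = currency
        return self

    def build(self) -> Invoice:
        return Invoice(id=self._id, amount=self._amount, currency=self._currency)


def test_large_invoice_is_flagged() -> None:
    # Only the amount is relevant; everything else stays at defaults
    invoice = InvoiceBuilder().with_amount(15000.0).build()
    assert invoice.amount > 10000
```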
## Python Testing Patterns (pytest)

### Project Test Structure

```
apps/{app-name}/
├── src/{package_name}/
│   ├── __init__.py
│   ├── models.py
│   └── services.py
├── tests/
│   ├── conftest.py          # Shared fixtures
│   ├── factories.py         # Test data factories
│   ├── unit/
│   │   ├── test_models.py
│   │   └── test_services.py
│   ├── integration/
│   │   ├── conftest.py      # Integration-specific fixtures
│   │   ├── test_api.py
│   │   └── test_database.py
│   └── e2e/
│       └── test_workflows.py
└── pyproject.toml
```
### Fixtures and Conftest

```python
import pytest
from typing import AsyncIterator

from httpx import AsyncClient, ASGITransport

from app.main import app


@pytest.fixture
def sample_invoice() -> dict:
    """Minimal invoice data for testing."""
    return {
        "id": "INV-001",
        "amount": 100.0,
        "currency": "EUR",
    }


@pytest.fixture
async def async_client() -> AsyncIterator[AsyncClient]:
    """Async HTTP client for FastAPI integration tests."""
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as client:
        yield client


@pytest.fixture(scope="session")
def anyio_backend() -> str:
    return "asyncio"
```
### Factory Pattern (factory_boy)

```python
import factory
from faker import Faker

from app.models import Invoice, Customer

fake = Faker("es_ES")


class CustomerFactory(factory.Factory):
    class Meta:
        model = Customer

    id = factory.LazyFunction(lambda: fake.uuid4())
    name = factory.LazyFunction(lambda: fake.company())
    nif = factory.LazyFunction(lambda: fake.nif())
    email = factory.LazyFunction(lambda: fake.company_email())


class InvoiceFactory(factory.Factory):
    class Meta:
        model = Invoice

    id = factory.Sequence(lambda n: f"INV-{n:04d}")
    amount = factory.LazyFunction(lambda: round(fake.pyfloat(min_value=10, max_value=10000), 2))
    customer = factory.SubFactory(CustomerFactory)
    currency = "EUR"
```
### Parametrized Tests

```python
import pytest

from app.validators import validate_nif


@pytest.mark.parametrize(
    "nif,expected_valid",
    [
        ("12345678A", True),
        ("B12345678", True),  # CIF
        ("", False),
        ("123", False),
        ("XXXXXXXXX", False),
    ],
    ids=["valid-personal", "valid-company", "empty", "too-short", "invalid-format"],
)
def test_validate_nif(nif: str, expected_valid: bool) -> None:
    """Validate Spanish NIF/CIF formats."""
    assert validate_nif(nif) == expected_valid
```
### Async Test Patterns

```python
import pytest
from unittest.mock import AsyncMock

from httpx import AsyncClient

from app.services import submit_invoice  # module path assumed from the patch target below


@pytest.mark.asyncio
async def test_create_invoice(async_client: AsyncClient) -> None:
    """Test creating an invoice via the API."""
    payload = {"amount": 150.0, "customer_id": "CUST-001"}

    response = await async_client.post("/invoices", json=payload)

    assert response.status_code == 201
    data = response.json()
    assert data["amount"] == 150.0
    assert "id" in data


@pytest.mark.asyncio
async def test_service_calls_external_api(mocker) -> None:
    """Verify the service calls the tax authority API (uses the pytest-mock `mocker` fixture)."""
    mock_client = AsyncMock()
    mock_client.post.return_value.status_code = 200
    mocker.patch("app.services.tax_client", mock_client)

    result = await submit_invoice("INV-001")

    mock_client.post.assert_called_once()
    assert result.status == "submitted"
```
### Mocking HTTP Calls (respx)

```python
import httpx
import pytest
import respx

from app.services import fetch_exchange_rate  # assumed location of the function under test


@pytest.mark.asyncio
async def test_fetch_exchange_rate() -> None:
    """Test fetching exchange rates with mocked HTTP."""
    with respx.mock:
        respx.get("https://api.example.com/rates/EUR").mock(
            return_value=httpx.Response(200, json={"rate": 1.08})
        )

        rate = await fetch_exchange_rate("EUR")

    assert rate == 1.08
```
### Snapshot Testing for API Responses

```python
import pytest
from httpx import AsyncClient
from syrupy.assertion import SnapshotAssertion


@pytest.mark.asyncio
async def test_invoice_list_response(
    async_client: AsyncClient,
    snapshot: SnapshotAssertion,
) -> None:
    """Verify the invoice list API response shape."""
    response = await async_client.get("/invoices")

    assert response.status_code == 200
    assert response.json() == snapshot
```
### Markers and Test Categories

```toml
# pyproject.toml
[tool.pytest.ini_options]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: integration tests requiring external services",
    "e2e: end-to-end tests",
]
asyncio_mode = "auto"
```
## FastAPI Integration Testing

### TestClient with Dependency Overrides

```python
import pytest
from typing import AsyncIterator

from httpx import AsyncClient, ASGITransport

from app.main import create_app
from app.dependencies import get_db_session


@pytest.fixture
async def app_with_test_db():
    """Create app with test database override."""
    app = create_app()

    async def override_db():
        # test_session is assumed to be an async session factory defined elsewhere in the suite
        async with test_session() as session:
            yield session

    app.dependency_overrides[get_db_session] = override_db
    yield app
    app.dependency_overrides.clear()


@pytest.fixture
async def client(app_with_test_db) -> AsyncIterator[AsyncClient]:
    transport = ASGITransport(app=app_with_test_db)
    async with AsyncClient(transport=transport, base_url="http://test") as c:
        yield c
```
### Testing Auth-Protected Endpoints

```python
import pytest
from httpx import AsyncClient


@pytest.fixture
def auth_headers() -> dict[str, str]:
    """Create valid JWT auth headers for testing."""
    # create_test_jwt is assumed to be a project-local test helper
    token = create_test_jwt(user_id="test-user", tenant="test-tenant")
    return {"Authorization": f"Bearer {token}"}


@pytest.mark.asyncio
async def test_protected_endpoint_requires_auth(client: AsyncClient) -> None:
    response = await client.get("/invoices")
    assert response.status_code == 401


@pytest.mark.asyncio
async def test_protected_endpoint_with_auth(
    client: AsyncClient, auth_headers: dict[str, str]
) -> None:
    response = await client.get("/invoices", headers=auth_headers)
    assert response.status_code == 200
```
## TypeScript/Frontend Testing

### Vitest for Unit Tests

```typescript
import { describe, it, expect, vi } from 'vitest';

import { calculateTax } from './tax';

describe('calculateTax', () => {
  it('calculates 21% IVA correctly', () => {
    expect(calculateTax(100, 0.21)).toBe(21);
  });

  it('returns 0 for zero amount', () => {
    expect(calculateTax(0, 0.21)).toBe(0);
  });

  it('throws for negative amounts', () => {
    expect(() => calculateTax(-100, 0.21)).toThrow('Amount must be positive');
  });
});
```
### React Component Testing

```tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent } from '@testing-library/react';

import { InvoiceForm } from './InvoiceForm';

describe('InvoiceForm', () => {
  it('submits invoice with valid data', async () => {
    const onSubmit = vi.fn();
    render(<InvoiceForm onSubmit={onSubmit} />);

    fireEvent.change(screen.getByLabelText('Amount'), { target: { value: '100' } });
    fireEvent.click(screen.getByRole('button', { name: /submit/i }));

    expect(onSubmit).toHaveBeenCalledWith(
      expect.objectContaining({ amount: 100 })
    );
  });
});
```
### Playwright E2E Testing

```typescript
import { test, expect } from '@playwright/test';

test.describe('Invoice Creation Flow', () => {
  test('creates a new invoice end-to-end', async ({ page }) => {
    await page.goto('/invoices/new');

    await page.fill('[name="customerName"]', 'Acme Corp');
    await page.fill('[name="amount"]', '500');
    await page.click('button[type="submit"]');

    await expect(page.getByText('Invoice created')).toBeVisible();
    await expect(page).toHaveURL(/\/invoices\/INV-/);
  });
});
```
## CI Test Optimization

### Monorepo Selective Testing

```yaml
jobs:
  test:
    steps:
      - name: Detect changed projects
        id: changes
        uses: dorny/paths-filter@v3
        with:
          filters: |
            api:
              - 'apps/easyfactu-api/**'
              - 'packages/py/**'
            web:
              - 'apps/easyfactu-web/**'
              - 'packages/ts/**'

      - name: Test API
        if: steps.changes.outputs.api == 'true'
        run: uv run pytest apps/easyfactu-api -v --tb=short

      - name: Test Web
        if: steps.changes.outputs.web == 'true'
        run: pnpm --filter easyfactu-web test
```
### Parallel Test Execution

```bash
# Run pytest in parallel (requires pytest-xdist)
uv run pytest -n auto --dist=loadfile

# Run with coverage in CI
uv run pytest --cov=src --cov-report=xml --cov-report=term-missing -n auto
```
### Coverage Configuration

```toml
# pyproject.toml
[tool.coverage.run]
source = ["src"]
branch = true
omit = ["*/tests/*", "*/__main__.py"]

[tool.coverage.report]
fail_under = 80
show_missing = true
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "if __name__ == .__main__.",
    "@overload",
]
```
### Coverage Guidelines

- Target 80%+ overall coverage as a quality gate
- Focus on branch coverage over line coverage
- Don't chase 100% — focus on meaningful, high-risk code paths
- Exclude generated code, type stubs, and config files
- Use `# pragma: no cover` sparingly and with justification
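As a sketch of what a justified exclusion might look like (the loader below is hypothetical, not a real project function):

```python
import json


def load_settings(path: str) -> dict:
    """Hypothetical loader; the pragma documents why a branch is excluded."""
    try:
        with open(path) as fh:
            return json.load(fh)
    except OSError:  # pragma: no cover - disk-level failures are environment-specific
        return {}
```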
## Guidelines

- Be precise about test failures and root causes
- Provide complete, copy-pasteable test code
- Suggest the appropriate test level for each scenario
- Recommend tools and libraries with specific version compatibility
- Flag flakiness risks proactively
- Check `pyproject.toml` for pytest configuration and test dependencies