# snakepolish

<file_paths>$ARGUMENTS</file_paths>

## Snake Polish - Implementation Phase

Execute the implementation plan from `/python3-development:stinkysnake` Phase 9: implement functions following the modernization plan, then run tests iteratively until all pass.

## Arguments

<file_paths/>
## Prerequisites

Before invoking this skill, ensure:

- `/python3-development:stinkysnake` Phases 1-8 completed
- Modernization plan reviewed and refined
- Interfaces designed and documented
- Failing tests written by `python3-development:python-pytest-architect`
## Instructions

### Step 1: Load Context

Read the stinkysnake plan artifacts:
ARTIFACTS TO LOAD:
- [ ] Modernization plan (Phase 3 output)
- [ ] Plan review feedback (Phase 4-5 output)
- [ ] Interface definitions (Phase 7 output)
- [ ] Failing test files (Phase 8 output)
### Step 2: Verify Test Baseline

Run the tests to confirm the failing state:

```shell
uv run pytest <file_paths/> -v --tb=short 2>&1 | head -100
```

Expected: tests fail because the implementations don't exist yet.

If tests pass: stop. Either the implementation is already complete, or the tests are not exercising the right things.
### Step 3: Implementation Order

Follow this implementation sequence:
IMPLEMENTATION ORDER:
1. Type definitions (TypeAlias, TypedDict, Protocol)
2. Data structures (dataclass, Pydantic models)
3. Utility functions (pure functions, no side effects)
4. Core business logic
5. Integration points (API clients, file I/O)
6. Entry points (CLI commands, handlers)
### Step 4: Implement Following the Plan

For each planned change:
FOR EACH IMPLEMENTATION ITEM:
1. Read the interface/protocol definition
2. Read the failing test(s) for this component
3. Implement the function/class
4. Run targeted tests: `uv run pytest -k "test_name" -v`
5. If fails: debug and fix
6. If passes: move to next item
### Step 5: Modern Python Patterns

Apply these patterns during implementation:

#### Type Annotations
```python
from typing import Any, TypedDict, TypeGuard

# Use modern union syntax (Python 3.10+)
def process(data: str | None) -> dict[str, Any]:
    ...

class User(TypedDict):
    id: int

# Use TypeGuard for narrowing
def is_valid_user(obj: object) -> TypeGuard[User]:
    return isinstance(obj, dict) and "id" in obj
```
#### Protocol-Based Design

```python
from typing import Any, Protocol

class Serializable(Protocol):
    def to_dict(self) -> dict[str, Any]: ...

def save(item: Serializable) -> None:
    data = item.to_dict()
    ...
```
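Protocols match structurally: any class with a compatible `to_dict` satisfies `Serializable` without inheriting from it. A minimal sketch with a hypothetical `Point` class (not from the plan):

```python
from typing import Any, Protocol

class Serializable(Protocol):
    def to_dict(self) -> dict[str, Any]: ...

class Point:
    """Satisfies Serializable structurally; no inheritance needed."""

    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y

    def to_dict(self) -> dict[str, Any]:
        return {"x": self.x, "y": self.y}

def save(item: Serializable) -> None:
    # Accepts anything with a matching to_dict signature
    print(item.to_dict())

save(Point(1, 2))
```

Because the check is structural, implementations stay decoupled from the protocol module, which keeps the dependency order of Step 3 intact.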
#### Dataclass Patterns

```python
from dataclasses import dataclass, field

@dataclass(slots=True, frozen=True)
class Config:
    name: str
    options: list[str] = field(default_factory=list)
```
#### Pydantic for Validation

```python
from typing import Any

from pydantic import BaseModel, Field

class APIResponse(BaseModel):
    status: int = Field(ge=100, le=599)
    data: dict[str, Any]

    model_config = {"strict": True}
```
#### Modern Libraries

```python
import httpx
import orjson
import tomlkit

# httpx for async HTTP
async def fetch(url: str) -> httpx.Response:
    async with httpx.AsyncClient() as client:
        return await client.get(url)

# orjson for fast JSON
data = orjson.loads(response.content)
output = orjson.dumps(result, option=orjson.OPT_INDENT_2)

# tomlkit for TOML with comments preserved
doc = tomlkit.parse(content)
doc["section"]["key"] = value
```
### Step 6: Iterative Test Loop

After each implementation batch:

```shell
# Run full test suite
uv run pytest <file_paths/> -v --tb=short

# If failures remain, focus on failing tests
uv run pytest <file_paths/> -v --tb=long -x  # Stop on first failure
```
### Step 7: Static Analysis Verification

Before completion, verify code quality:

```shell
# Format check
uv run ruff format --check <file_paths/>

# Lint check
uv run ruff check <file_paths/>

# Type check: match hooks/CI; use ty when the repo runs ty (do not infer mypy from [tool.mypy] alone)
uv run ty check <file_paths/>
# uv run mypy <file_paths/> --strict
```

Fix any issues that arise.
### Step 8: Final Test Run

Confirm all tests pass:

```shell
uv run pytest <file_paths/> -v --cov --cov-report=term-missing
```

Success criteria:

- All tests pass
- No type errors
- No lint errors
- Coverage meets the project threshold (typically 80%+)
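The coverage threshold can be enforced automatically rather than checked by eye; a hypothetical `pyproject.toml` fragment (the `80` value is an assumption, match it to your project's actual setting):

```toml
# pyproject.toml (hypothetical fragment): fail the coverage report below the threshold
[tool.coverage.report]
fail_under = 80
```

With this in place, the Step 8 command exits non-zero when coverage drops below the gate, so CI catches regressions without a manual review of the term-missing report.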
## Completion

When all tests pass and static analysis is clean:

- Report an implementation summary
- List any deferred items or technical debt
- Reference documentation updates needed (from Phase 6)
## Error Handling

### Test Failures That Indicate Test Bugs

If a test failure appears to be a test bug rather than an implementation bug:

- Document the suspected test issue
- Check the test against the interface specification
- If the test is wrong: fix the test and document the fix
- If unclear: flag it for review and continue with other implementations
### Blocked Implementations

If an implementation is blocked:

- Document the blocker
- Check whether it is a dependency-ordering issue
- If it is an external dependency: note it and continue with independent items
- If it is an architectural issue: flag it for plan revision
## References

- `../stinkysnake/SKILL.md` - Parent workflow
- `../../agents/python-cli-architect.md` - Implementation agent
- `../python3-development/SKILL.md` - Modern Python patterns