# python3-review

<review_targets>$ARGUMENTS</review_targets>

## Python Code Review

The model performs a comprehensive code review across multiple quality dimensions.

### Arguments

<review_targets/>

### Instructions
- Read target files from arguments
- Check each dimension listed below
- Report findings with severity and location
- Suggest fixes with code examples
## Review Dimensions

Consult `../python3-development/references/python3-standards.md` when checking code against shared architecture, typing, testing, or CLI rules. The dimensions below supplement that document; they do not replace it.
### 1. Type Safety

Check for:

- Missing type hints on function parameters and return types
- Use of `Any` without justification
- Legacy typing imports (`List`, `Dict`, `Optional`, `Union`)
- Missing `Protocol` definitions for duck typing
- Incorrect use of `TypeVar`, `Generic`, or `ParamSpec`

Severity: High (type errors cause runtime failures)
```python
# Bad - missing types
def process(data):
    return data.get("value")


# Good - complete types
def process(data: dict[str, int]) -> int | None:
    return data.get("value")
```
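The Protocol check above can be sketched with a minimal example (the `Closeable`, `FileLikeResource`, and `shutdown` names are hypothetical, not part of any reviewed codebase):

```python
from typing import Protocol


class Closeable(Protocol):
    """Structural type: anything with a close() method matches."""

    def close(self) -> None: ...


class FileLikeResource:
    """Matches Closeable without inheriting from it."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


def shutdown(resource: Closeable) -> None:
    # Type checkers verify structurally: no base class required
    resource.close()


resource = FileLikeResource()
shutdown(resource)
```

Flag functions that accept "anything with a `.close()`" (or similar duck types) but annotate the parameter as `Any` or a concrete class; a `Protocol` expresses the real contract.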
### 2. Error Handling

Check for:

- Bare `except:` or `except Exception:`
- Swallowed exceptions (caught and ignored)
- Missing context in re-raised exceptions
- Exceptions that should use `add_note()`
- Non-specific exception types

Severity: High (silent failures cause data corruption)
```python
# Bad - swallowed exception
try:
    result = risky_call()
except Exception:
    pass  # Silent failure


# Good - specific handling with context
try:
    result = risky_call()
except ConnectionError as e:
    e.add_note(f"Failed connecting to {host}")
    raise
```
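The missing-context check also covers exception chaining with `raise ... from`, which the checklist mentions alongside `add_note`. A minimal sketch (`ConfigError` and `load_port` are hypothetical names):

```python
class ConfigError(Exception):
    """Raised when a configuration value is invalid."""


def load_port(raw: str) -> int:
    try:
        return int(raw)
    except ValueError as e:
        # "from e" preserves the original error as __cause__,
        # so the full traceback chain survives the re-raise
        raise ConfigError(f"invalid port value: {raw!r}") from e


try:
    load_port("eighty")
except ConfigError as err:
    cause = err.__cause__
```

Flag re-raises that drop the original exception (`raise NewError(...)` inside an `except` block with no `from`), since they sever the causal chain in logs.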
### 3. Security

Check for:

- SQL queries built with string formatting (injection risk)
- `subprocess.run(..., shell=True)` with user input
- Hardcoded credentials or API keys
- `eval()` or `exec()` with external input
- Pickle with untrusted data
- Path traversal vulnerabilities
- Missing input validation

Severity: Critical (security vulnerabilities)
```python
# Bad - SQL injection risk
query = f"SELECT * FROM users WHERE id = {user_id}"

# Good - parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
```
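The `shell=True` finding can be demonstrated without executing anything (the hostile input string is purely illustrative):

```python
import shlex

user_input = "notes.txt; rm -rf /"  # hostile input

# Bad - with shell=True the shell would parse "; rm -rf /" as a
# second command:
#   subprocess.run(f"cat {user_input}", shell=True)

# Good - an argument list passes the whole string as one literal
# filename; no shell ever parses it
cmd = ["cat", user_input]

# If a shell truly is required, quote explicitly
quoted = shlex.quote(user_input)
```

The argument-list form is the default recommendation; `shlex.quote` is the fallback when shell features (pipes, globbing) are genuinely needed.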
### 4. Performance

Check for:

- List membership checks instead of sets (`in list` vs `in set`)
- String concatenation in loops
- Repeated function calls that could be cached
- N+1 query patterns
- Synchronous I/O in async contexts
- Missing `__slots__` for data classes with many instances

Severity: Medium (degraded performance)
```python
# Bad - O(n) lookup on each iteration
valid_codes = [200, 201, 204]
for code in codes:
    if code in valid_codes:  # O(n) each time
        process(code)

# Good - O(1) lookup
VALID_CODES = {200, 201, 204}
for code in codes:
    if code in VALID_CODES:  # O(1) each time
        process(code)
```
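The repeated-call item can often be fixed with `functools.lru_cache`; a sketch using a deliberately expensive recursion (the call counter is only there to make the caching effect visible):

```python
from functools import lru_cache

calls = 0


@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Naive recursion, made linear by memoization."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


result = fib(20)  # 21 evaluations instead of thousands without the cache
```

Flag this only for pure functions: caching a function with side effects or time-dependent results changes behavior, not just speed.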
### 5. Modern Patterns

Check for:

- Legacy typing imports when builtin generics are available
- Missed walrus operator opportunities
- If/elif chains that should be match-case
- `unittest.mock` in pytest tests
- Manual implementations duplicating the stdlib

Severity: Low (technical debt)
```python
# Bad - legacy pattern
result = expensive_call()
if result:
    process(result)

# Good - modern pattern
if result := expensive_call():
    process(result)
```
### 6. Code Structure

Check for:

- Functions longer than 50 lines
- Classes with too many responsibilities
- Deep nesting (more than 3 levels)
- Circular imports
- Missing `__all__` in public modules
- Dead code (unreachable or unused)

Severity: Medium (maintainability)
### 7. Documentation

Check for:

- Public functions without docstrings
- Outdated docstrings that no longer match the signature
- Missing type information in docstrings when types are unclear
- Complex logic without explanatory comments

Severity: Low (maintainability)
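The docstring-matches-signature check in practice: every parameter, the return value, and each raised exception should appear in the docstring. A sketch (the `transfer` function is hypothetical):

```python
def transfer(amount: float, *, dry_run: bool = False) -> bool:
    """Transfer funds.

    Args:
        amount: Amount to transfer, in dollars. Must be positive.
        dry_run: If True, validate only; nothing is transferred.

    Returns:
        True if the transfer (or dry-run validation) succeeded.

    Raises:
        ValueError: If amount is not positive.
    """
    if amount <= 0:
        raise ValueError("amount must be positive")
    return True
```

Flag docstrings that describe removed parameters, omit keyword-only arguments, or fail to mention exceptions the body raises.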
## Report Format

For each finding, report:

## [SEVERITY] [Category]: [Brief Description]

**Location**: `file.py:123` in `function_name`

**Issue**: Detailed explanation of the problem.

**Fix**:

```python
# Suggested fix with code example
```

**Impact**: Why this matters (security, performance, reliability).

---
## Review Checklist

```text
TYPE SAFETY
- [ ] All functions have complete type hints
- [ ] No legacy typing imports (List, Dict, Optional, Union)
- [ ] TypeVar/Protocol used appropriately
- [ ] Generic types are correct
ERROR HANDLING
- [ ] No bare except clauses
- [ ] No swallowed exceptions
- [ ] Exceptions have context (add_note or from)
- [ ] Specific exception types used
SECURITY
- [ ] No SQL injection vulnerabilities
- [ ] No command injection (shell=True with user input)
- [ ] No hardcoded secrets
- [ ] Input validation present
PERFORMANCE
- [ ] Sets used for membership testing
- [ ] No string concatenation in loops
- [ ] Appropriate caching used
- [ ] Async patterns correct
MODERN PATTERNS
- [ ] Builtin generics used (list, dict, not List, Dict)
- [ ] Walrus operator where beneficial
- [ ] Match-case for dispatch
- [ ] pytest-mock instead of unittest.mock
STRUCTURE
- [ ] Functions under 50 lines
- [ ] No deep nesting (>3 levels)
- [ ] No circular imports
- [ ] __all__ defined in public modules
DOCUMENTATION
- [ ] Public functions have docstrings
- [ ] Docstrings match signatures
- [ ] Complex logic commented
```

## Summary Format

End the review with:
## Review Summary
**Files Reviewed**: [count]
**Total Findings**: [count]
| Severity | Count |
|----------|-------|
| Critical | X |
| High | X |
| Medium | X |
| Low | X |
**Top Issues**:
1. [Most important issue]
2. [Second most important issue]
3. [Third most important issue]
**Recommendation**: [APPROVE / REQUEST CHANGES / BLOCK]