# Review Staged Files Skill (`review-staged`)
Generate AI-powered code review comments for your staged files (git staged changes) before committing. Catch issues early in the development process using the same rigorous review standards as PR reviews.
## Usage

```
/review-staged            # Review all staged files
/review-staged --verbose  # Show detailed analysis
```

Examples:

- `/review-staged`: Review all currently staged files
- `/review-staged --verbose`: Show detailed analysis with full context
## What this skill does

- Checks for staged files using `git diff --staged --name-only`
- Fetches staged changes using `git diff --staged`
- Performs architectural review: questions design decisions, checks for scope creep, validates use cases
- Analyzes changes for security, testing, design patterns, and code quality issues
- Differentiates contexts: CLI code vs. GitHub Actions code (different standards)
- Creates actionable feedback: specific refactoring suggestions based on file names and patterns
- Runs the test suite and measures per-test timing, flagging any test taking > 1 second as a performance regression
- Generates a structured review document saved to a markdown file
- Shows a summary of all issues found, organized by severity
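The first two steps above can be sketched in plain git. This is a hedged illustration only: the throwaway repository, `greeting.txt`, and the commit message are fabricated scaffolding so the two commands the skill runs have something staged to report.

```shell
# Illustrative scaffolding: a throwaway repo with one staged change.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > greeting.txt
git add greeting.txt
git commit -qm "initial"
echo "hello again" >> greeting.txt
git add greeting.txt

# Step 1: list staged file paths (what the skill checks first).
staged_files=$(git diff --staged --name-only)
echo "$staged_files"

# Step 2: fetch the full staged diff that gets reviewed.
staged_diff=$(git diff --staged)
```

If `staged_files` comes back empty, there is nothing to review and the skill can stop early.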
## Engineering Review Principles

This skill enforces the same principles as the PR review skill:

### Architectural Review
- Design Decision Validation: Questions "why" before reviewing "how"
- Scope Creep Detection: Flags expansions beyond Agent365 deployment/management
- Use Case Validation: Requires concrete scenarios for new features
- Overlap Detection: Identifies duplication with existing tools (Azure CLI, Portal)
- YAGNI Enforcement: Questions features without documented need
### Architecture & Patterns
- .NET architect patterns: Reviews follow .NET best practices
- Azure CLI alignment: Ensures consistency with az cli patterns and conventions
- Cross-platform compatibility: Validates Windows, Linux, and macOS compatibility (for CLI code)
### Design Patterns
- KISS (Keep It Simple, Stupid): Prefers simple, straightforward solutions
- DRY (Don't Repeat Yourself): Identifies code duplication
- SOLID principles: Especially Single Responsibility Principle
- YAGNI (You Aren't Gonna Need It): Avoids over-engineering
- One class per file: Enforces clean code organization
### Code Quality
- No large files: Flags files with more than 500 added lines
- Function reuse: Encourages reusing functions across commands
- No special characters: Avoids emojis in logs/output (Windows compatibility)
- Self-documenting code: Prefers clear code over excessive comments
- Minimal changes: Makes only necessary changes to solve the problem
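The large-file rule above can be approximated with `git diff --staged --numstat`. A hedged sketch follows; the numstat sample is fabricated (in practice it would come from the real command), and the 500-line threshold is the one stated in the rule:

```shell
# Fabricated sample of `git diff --staged --numstat` output.
# Columns are: added<TAB>deleted<TAB>path.
numstat=$(printf '612\t10\tsrc/Commands/DeployCommand.cs\n42\t3\tsrc/Helpers/AzCliHelper.cs')

# Flag any staged file with more than 500 added lines.
large=$(printf '%s\n' "$numstat" | awk -F'\t' '$1 > 500 { print $3 }')
echo "$large"
```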
### Testing Standards

- Framework: xUnit, FluentAssertions, NSubstitute for .NET; pytest/unittest for Python
- Quality over quantity: Focus on critical paths and edge cases
- CLI reliability: CLI code without tests is BLOCKING
- GitHub Actions tests: Strongly recommended (HIGH severity) but not blocking
- Mock external dependencies: Proper mocking patterns
- Test performance, measured by running the tests rather than by static analysis alone: The review ALWAYS runs the full test suite and reports per-test timing. Any test method taking > 1 second is flagged as a performance regression (HIGH severity). The finding must include:
  - The slow test class and method name(s) with their measured time
  - The root cause (cold `AzCliHelper` token cache, missing `WarmAzCliTokenCache` call, real subprocess not mocked, etc.)
  - The fix (warmup call pattern, `loginHintResolver` injection, etc.)
  - Expected time after the fix

  If all tests complete in < 1 second each, emit an INFO — PASS finding with the total suite time.

  Do not skip the test run. Static code analysis alone missed the regression in `da6f750`; only measurement catches it reliably.
### Security
- No hardcoded secrets: Use environment variables or Azure Key Vault
- Credential management: Follow az cli patterns for CLI code; use GitHub Secrets for Actions
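The hardcoded-secret check above can be sketched as a pattern scan over the staged diff. Everything here is illustrative: the sample diff is fabricated and the regex covers only a few obvious token shapes, whereas real scanners use far broader rule sets.

```shell
# Fabricated staged-diff excerpt; only added lines (leading "+") are scanned.
diff_sample='+var key = "AKIAIOSFODNN7EXAMPLE";
+var url = Environment.GetEnvironmentVariable("SERVICE_URL");'

# Illustrative patterns only: common secret keywords plus an AWS-style key id.
hits=$(printf '%s\n' "$diff_sample" | grep -E '^\+.*(password|secret|api[_-]?key|AKIA[0-9A-Z]{16})' || true)
echo "$hits"
```

Note that the second line passes because it reads the value from an environment variable, which is exactly the pattern the rule recommends.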
## Context Awareness
The skill differentiates between:
- CLI code (strict requirements): Cross-platform, reliable, must have tests
- GitHub Actions code (GitHub-specific): Linux-only is acceptable, tests strongly recommended
## Review Output

The generated review is saved to `.codereviews/claude-staged-<timestamp>.md`.
The review includes:
- Summary: Overview of changes and key concerns
- Critical Issues: Blocking issues that must be fixed
- High Priority: Important issues that should be addressed
- Medium Priority: Issues that improve code quality
- Low Priority: Suggestions for enhancement
- Informational: Best practices and recommendations
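The output convention can be sketched as follows. The exact timestamp format is an assumption on my part (the skill only specifies a `<timestamp>` placeholder), and the skeleton headings simply mirror the severity sections listed above; the temp directory is test scaffolding.

```shell
# Scaffolding: write under a temp dir instead of the real repo root.
outdir=$(mktemp -d)
mkdir -p "$outdir/.codereviews"

# Assumed timestamp format; the skill only promises <timestamp>.
review_file="$outdir/.codereviews/claude-staged-$(date +%Y%m%d-%H%M%S).md"

# Severity-ordered skeleton matching the sections described above.
printf '%s\n' "# Staged Review" "" "## Summary" "## Critical Issues" \
  "## High Priority" "## Medium Priority" "## Low Priority" "## Informational" \
  > "$review_file"
echo "$review_file"
```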
## Implementation

The skill uses Claude Code directly for semantic code analysis (same as review-pr):

- Claude Code reads `.claude/agents/pr-code-reviewer.md` for review process guidelines
- Claude Code reads `.github/copilot-instructions.md` for coding standards
- Claude Code gets staged files: `git diff --staged --name-only`
- Claude Code gets staged changes: `git diff --staged`
- Claude Code performs semantic analysis using its own capabilities
- Claude Code identifies specific issues with line numbers and code references
- Claude Code runs the full test suite with per-test timing:
  `cd src && dotnet test tests.proj --configuration Release --logger "console;verbosity=normal" 2>&1`
  It parses the output for lines matching `[X s]` or `[X,XXX ms]` patterns, extracts the test class name, method name, and duration, flags any test method taking > 1 second, groups findings by test class, and includes the measured times in the review.
- Claude Code writes the markdown file to `.codereviews/claude-staged-<timestamp>.md`
Test timing output format (from `dotnet test --logger "console;verbosity=normal"`):

```
Passed SomeTests.Method_Scenario_ExpectedResult [< 1 ms]
Passed OtherTests.Method_Slow [22 s]
```

Any line showing `[X s]` where X ≥ 1 indicates a slow test. Report all such tests in a dedicated finding.
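The scan for slow tests can be sketched like this. The log lines are fabricated samples of the `dotnet test` console output shown above, and this sketch handles only the whole-second `[X s]` form; the `[X,XXX ms]` form would need an additional branch.

```shell
# Fabricated sample of `dotnet test --logger "console;verbosity=normal"` output.
log='Passed SomeTests.Method_Scenario_ExpectedResult [< 1 ms]
Passed OtherTests.Method_Slow [22 s]
Passed MoreTests.Method_Fast [413 ms]'

# Any duration reported in whole seconds is >= 1 s, hence a slow test.
# Field 2 of a result line is ClassName.MethodName.
slow=$(printf '%s\n' "$log" | awk '/\[[0-9]+ s\]/ { print $2 }')
echo "$slow"
```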
**Key Advantages:**
- ✅ No API key required - uses Claude Code's existing authentication
- ✅ Better semantic analysis - Claude Code has full context
- ✅ Catch issues before committing
- ✅ Same rigorous review standards as PR reviews
- ✅ Works offline (no GitHub required)
## Workflow

1. Stage your changes: `git add <files>`
2. Review staged files: `/review-staged`
   - Analyzes all staged changes
   - Generates review document
   - Shows summary of issues
3. Address issues: Fix any blocking or high-priority issues
4. Re-review if needed: `/review-staged`
5. Commit: `git commit -m "your message"`
## When to Use
- Before committing: Catch issues early
- Before creating a PR: Ensure quality before sharing
- After addressing PR comments: Verify fixes are correct
- During code cleanup: Validate refactoring changes
- When learning: Get feedback on coding patterns
## Requirements

- Git repository with staged changes
- Repository must follow Agent365 DevTools coding standards
- `.claude/agents/pr-code-reviewer.md` must exist (for review guidelines)
- `.github/copilot-instructions.md` must exist (for coding standards)
## See Also

- README.md - Detailed documentation
- `/review-pr` - Review pull requests on GitHub