# Code Review Amplifier
**Core principle:** A perfect code review serves seven concurrent functions on a shared artifact. Of those seven, AI can fully handle two, partially handle two, and barely touch three. This skill maximizes total review quality by doing what AI does well (surface-level scanning, velocity) while arming the human reviewer with context and questions for the dimensions that require human judgment (design coherence, knowledge transfer, mentoring).
The goal is not to produce a review. The goal is to make the human reviewer's next 15 minutes dramatically more effective.
## The Seven Dimensions of Code Review
Every code review, consciously or not, operates across these dimensions. Most reviews cover only two or three of them; a "perfect" review touches all seven, each with appropriate depth.
| # | Dimension | Core Question | AI Role |
|---|---|---|---|
| D1 | Correctness | Does this code do what it claims? | Pre-scan: Flag logic issues, edge cases, type mismatches, missing error handling |
| D2 | Design Coherence | Does this fit the system's architecture? | Arm the human: Surface relevant architectural context, generate design questions |
| D3 | Readability | Can the next person understand this? | Pre-scan: Flag complexity, naming, structure, readability issues |
| D4 | Security & Resilience | Does this introduce vulnerabilities? | Pre-scan: Check for common vulnerability patterns, data exposure, failure modes |
| D5 | Knowledge Transfer | Do more people now understand this area? | Route: Suggest who else should see this code and why |
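As a rough illustration, the dimension-to-AI-role mapping in the table above could be modeled in code. This is a hypothetical sketch for clarity only: the type names (`AIRole`, `Dimension`, `by_role`) are invented here and are not part of the skill itself.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical model of the table above -- names are illustrative,
# not an API defined by the skill.
class AIRole(Enum):
    PRE_SCAN = "pre-scan"    # AI flags issues directly
    ARM_HUMAN = "arm-human"  # AI surfaces context and questions for the human
    ROUTE = "route"          # AI suggests who else should see the change

@dataclass
class Dimension:
    id: str
    name: str
    core_question: str
    ai_role: AIRole

DIMENSIONS = [
    Dimension("D1", "Correctness", "Does this code do what it claims?", AIRole.PRE_SCAN),
    Dimension("D2", "Design Coherence", "Does this fit the system's architecture?", AIRole.ARM_HUMAN),
    Dimension("D3", "Readability", "Can the next person understand this?", AIRole.PRE_SCAN),
    Dimension("D4", "Security & Resilience", "Does this introduce vulnerabilities?", AIRole.PRE_SCAN),
    Dimension("D5", "Knowledge Transfer", "Do more people now understand this area?", AIRole.ROUTE),
]

def by_role(role: AIRole) -> list[str]:
    """Return the IDs of the dimensions the AI handles in the given role."""
    return [d.id for d in DIMENSIONS if d.ai_role == role]
```

Grouping the dimensions this way makes the division of labor explicit: `by_role(AIRole.PRE_SCAN)` yields the dimensions the AI can scan on its own (D1, D3, D4), while the remaining roles mark where its job is to prepare the human reviewer rather than replace them.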