# RT-ICA: Reverse Thinking - Information Completeness Assessment
## Purpose
RT-ICA surfaces what the executor needs to know before it can act without stopping.
For every goal (top-level and each decomposed sub-goal), the model should:
- Reverse-think prerequisites from the goal
- Assess information completeness for each prerequisite
- Either BLOCK planning until missing inputs are obtained, or APPROVE with explicit assumptions
<core_rule>
RT-ICA should be performed before planning, delegation, or solution design on:
- The overall goal/request
- Each decomposed goal or sub-goal that could fail due to missing information
If any required condition is MISSING, stop and request only the missing information.
</core_rule>
## Complexity Model
Task complexity is not implementation difficulty; it is the ratio of the project-specific knowledge required to the context window available.
Training data provides craft knowledge (language patterns, framework APIs, tooling). That is free. What consumes context budget is everything specific to this project: schemas, conventions, constraints, interfaces, user preferences, existing system behavior. That knowledge must be loaded before the agent can act.
This changes how RT-ICA results inform task design:
```mermaid
flowchart TD
    RTICA["RT-ICA conditions enumerated"] --> Measure["Estimate knowledge payload:<br>how much project-specific context<br>must be loaded to satisfy conditions?"]
    Measure --> Ratio{Knowledge payload<br>vs context window?}
    Ratio -->|"< 40% of window<br>Room to work"| Proceed["Single task — execute directly"]
    Ratio -->|"40-70% of window<br>Tight but workable"| Combine["Look for steps sharing<br>the same knowledge payload —<br>combine them into one task"]
    Combine --> Proceed
    Ratio -->|"> 70% of window<br>No room to implement"| Decompose["Decompose into subtasks<br>that each need a smaller<br>subset of the knowledge"]
```
Step combining: When two steps need the same project knowledge loaded, combining them is nearly free — the knowledge is loaded once, both steps execute in the remaining space. Splitting them wastes context by loading the same knowledge twice. Step boundaries belong where information gaps exist, not where implementation boundaries happen to fall.
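The ratio decision above can be sketched in a few lines. This is a hypothetical helper, not part of the skill: the function name and the token-count inputs are illustrative, while the 40%/70% thresholds come straight from the flowchart.

```python
def plan_shape(payload_tokens: int, window_tokens: int) -> str:
    """Map the knowledge-payload-to-context-window ratio to a task shape."""
    ratio = payload_tokens / window_tokens
    if ratio < 0.40:
        return "single-task"    # room to work: execute directly
    if ratio <= 0.70:
        return "combine-steps"  # tight: merge steps sharing the same payload
    return "decompose"          # no room to implement: split into subtasks

# Example: a 90k-token payload in a 200k-token window is 45% -- tight but workable.
print(plan_shape(90_000, 200_000))  # combine-steps
```

In practice the payload estimate is itself uncertain, so the thresholds are best read as bands rather than hard cutoffs.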
Dynamic vs static constraints: RT-ICA produces dynamic constraints — discovered fresh from the current goal, disposable after use. These provide visibility into edges that would cause problems if crossed blindly: scope creep, missing user opinions, abstract requirements that need to become definite. This is different from static process constraints (hardcoded gates, enforcement hooks, "MUST do X before Y" rules baked into workflow definitions) which carry maintenance cost and go stale. RT-ICA's value is turning the abstract into the definite for each specific task.
## Activation Triggers
<activation_triggers>
Invoke RT-ICA when receiving ANY of:
- Spec, request, ticket, user story, PRD, architecture design, RFC
- Request to produce a plan, execution order, agent delegation, guardrails, acceptance criteria, or rollout steps
- Any multi-step engineering effort with dependencies, unknowns, constraints, or risk
Integration Points (where RT-ICA checkpoints MUST occur):
- Before creating the top-level plan
- Before delegating tasks to specialized agents (per-agent input completeness)
- Before finalizing acceptance criteria (verify testability inputs exist)
- Before defining rollout/ops steps (verify env and access inputs exist)
</activation_triggers>
## Definitions
| Term | Definition |
|---|---|
| Goal | A desired outcome the user wants |
| Condition | A prerequisite that must be true to achieve the goal |
| Required Information | Concrete data needed to confirm or satisfy a condition |
| AVAILABLE | Explicitly present in the input material |
| DERIVABLE | Inferred with high confidence from provided material (must show basis) |
| MISSING | Not present and not safely inferable |
## RT-ICA Procedure
Apply this procedure to each goal and sub-goal:
### Step 1: Goal Reconstruction
Produce:
- Goal statement: One sentence describing the desired outcome
- Output form: What deliverable proves success (artifact, behavior, metric, deployment state)
- Scope boundaries: In-scope/out-of-scope if stated
### Step 2: Reverse Prerequisite Enumeration
Work backwards from the goal to list ALL conditions required for success.
<condition_categories>
Include conditions in these categories (where applicable):
| Category | Example Conditions |
|---|---|
| Functional requirements | Features, behaviors, user flows |
| Non-functional requirements | Latency, throughput, availability, compliance, security |
| Interfaces/Integration | APIs, schemas, dependencies, external systems |
| Environment/Runtime | Cloud, region, OS, language, build system |
| Data requirements | Sources, quality, migration, retention |
| Access/Permissions | Repos, secrets, credentials, IAM |
| Operational constraints | SLOs, oncall, monitoring, incident response |
| Delivery constraints | Timeline, release process, approvals |
| Verification needs | Tests, canaries, acceptance criteria, observability |
| Risks/Failure modes | Rollback, data loss, security exposure |
</condition_categories>
For each condition, specify:
- Condition name
- Required information to verify/satisfy it
- Why it matters (one line)
### Step 3: Availability Verification
For each condition, set status:
| Status | Evidence Required |
|---|---|
| AVAILABLE | Cite exact source snippet or section name |
| DERIVABLE | State the inference and basis |
| MISSING | State exactly what information is needed |
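Steps 2 and 3 together give each condition a name, required information, a one-line rationale, and a verified status. A minimal sketch of that record, with names and the example values chosen for illustration only:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    AVAILABLE = "AVAILABLE"  # cite exact source snippet or section name
    DERIVABLE = "DERIVABLE"  # state the inference and its basis
    MISSING = "MISSING"      # state exactly what information is needed


@dataclass
class Condition:
    name: str                        # condition name (Step 2)
    requires: str                    # information needed to verify/satisfy it
    why: str                         # one-line rationale
    status: Status = Status.MISSING  # verification result (Step 3)
    evidence: str = ""               # snippet, basis, or needed-info description


cond = Condition(
    name="Deployment target",
    requires="Cloud/region/infra",
    why="Runtime configuration",
    status=Status.AVAILABLE,
    evidence="README specifies AWS us-east-1",
)
```

Defaulting `status` to `MISSING` mirrors the skill's bias: a condition is unproven until evidence is attached.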
### Step 4: Completeness Decision

IF any condition is MISSING:
    DECISION = BLOCKED
ELSE:
    DECISION = APPROVED
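The decision rule above is mechanical and can be sketched directly. A hypothetical helper, using plain dicts for the conditions:

```python
def completeness_decision(conditions):
    """Step 4: BLOCKED if any condition is MISSING, else APPROVED.

    Returns the decision plus the list of missing conditions so the
    follow-up questions in Step 5 can be generated from the gaps.
    """
    missing = [c for c in conditions if c["status"] == "MISSING"]
    return ("BLOCKED", missing) if missing else ("APPROVED", [])


conds = [
    {"name": "Auth protocol", "status": "MISSING"},
    {"name": "Deployment target", "status": "AVAILABLE"},
]
decision, gaps = completeness_decision(conds)
print(decision)  # BLOCKED
```

Note that a single MISSING condition blocks the goal; DERIVABLE items pass but are carried forward as assumptions to confirm.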
### Step 5: Action Based on Decision

<decision_actions>
IF BLOCKED:
- Do NOT plan
- Ask ONLY for missing inputs
- Structure questions by category, ordered by criticality
- Prefer multiple-choice or constrained questions when possible
- If user explicitly requests assumption-based planning:
  - Proceed with explicit assumptions for each missing condition
  - Include a risk note per assumption
  - Add validation tasks to confirm assumptions early

IF APPROVED:
- Proceed to normal planning
- Carry forward the validated condition list
- Mark DERIVABLE items as "assumptions to confirm"
- Enforce constraints as guardrails
</decision_actions>
## Output Format
<output_format>
The model MUST produce this summary block for each goal/sub-goal:
RT-ICA SUMMARY
Goal:
- [one sentence]
Success Output:
- [deliverable/observable result]
Conditions (reverse prerequisites):
1. [Condition] | Requires: [info] | Why: [1 line]
2. [Condition] | Requires: [info] | Why: [1 line]
...
Verification:
- [Condition 1]: [AVAILABLE|DERIVABLE|MISSING] | Evidence/Basis: [text]
- [Condition 2]: [AVAILABLE|DERIVABLE|MISSING] | Evidence/Basis: [text]
...
Decision:
- [APPROVED|BLOCKED]
--- IF BLOCKED ---
Missing Inputs Requested:
[Category]:
- [missing item question] (why needed)
- [missing item question] (why needed)
[Category]:
- [missing item question] (why needed)
--- IF APPROVED ---
Assumptions to Confirm (DERIVABLE only):
- [assumption] | Basis: [basis] | Validation step: [how to confirm early]
</output_format>
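The summary block is a deterministic function of the verified conditions, so it can be assembled mechanically. A hypothetical renderer (all names illustrative) covering the common fields:

```python
def rtica_summary(goal, output, conditions, decision):
    """Render the RT-ICA SUMMARY block for one goal/sub-goal."""
    lines = [
        "RT-ICA SUMMARY",
        "Goal:", f"- {goal}",
        "Success Output:", f"- {output}",
        "Conditions (reverse prerequisites):",
    ]
    for i, c in enumerate(conditions, 1):
        lines.append(f"{i}. {c['name']} | Requires: {c['requires']} | Why: {c['why']}")
    lines.append("Verification:")
    for c in conditions:
        lines.append(f"- {c['name']}: {c['status']} | Evidence/Basis: {c['evidence']}")
    lines += ["Decision:", f"- {decision}"]
    return "\n".join(lines)


print(rtica_summary(
    goal="Implement user authentication service",
    output="Deployed service that authenticates users",
    conditions=[{"name": "Auth protocol", "requires": "OAuth2/OIDC/custom spec",
                 "why": "Determines implementation approach",
                 "status": "MISSING", "evidence": "Need: which protocol"}],
    decision="BLOCKED",
))
```

The IF BLOCKED / IF APPROVED tails are omitted here; they depend on which branch Step 5 takes.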
## Integration with CoVe-Style Planning
<cove_integration>
Recommended sequence with RT-ICA:
A) RT-ICA on top-level goal
B) Draft plan and decomposition
C) RT-ICA on each major workstream/sub-goal
D) Assign agents with clearly bounded deliverables
E) Verification pass: cross-check plan against conditions and acceptance criteria
F) Refinement pass: resolve gaps, reduce risk, ensure ordering and guardrails
</cove_integration>
## Planning Deliverables (After APPROVED)
<planning_deliverables>
After RT-ICA APPROVED decision, produce a plan that includes:
| Section | Contents |
|---|---|
| Workstreams | Logical groupings and ordering |
| Agent Assignment | Which agent handles each workstream |
| Guardrails | Safety, security, correctness, operational constraints |
| Acceptance Criteria | Testable, measurable success conditions |
| Risk Register | Top risks, mitigations, rollback strategy |
| Dependencies | Internal and external dependencies |
| Verification Plan | Tests, monitoring, canary, QA |
| Change Management | Rollout, communications, documentation |
</planning_deliverables>
## Guardrails
RT-ICA exists to prevent hallucinated constraints. Without it, models fill knowledge gaps with training data patterns — inventing requirements and presenting them as facts. These guardrails protect that function.
Redirection rule:
When you notice yourself generating a value, constraint, or requirement that you cannot source from the input material — that impulse is a discovery, not a mistake. It reveals a gap. Redirect it:
- The unsourced content becomes a new MISSING condition
- Add it to the unknowns list with what you were about to fill in as a suggested default
- Continue the assessment
Speculation is the signal that refinement is needed. The goal is not to suppress gap-filling — it is to catch it happening and route it into the unknowns list instead of into the plan as fact.
Reflection checkpoint: Before writing each condition's status (AVAILABLE / DERIVABLE / MISSING), pause and reflect using the sequential-thinking MCP. For each condition, the thinking step should answer: "Can I source this from the input material, or am I generating it from training patterns?" This external reflection makes the redirection rule structural — the pause is a tool call, not an internal decision that can be skipped.
Never present unsourced content as verified. Plan with MISSING conditions only when the user explicitly requests assumption-based planning.
Best practice:
- Keep missing-input questions minimal and high signal
- Prefer early validation tasks for DERIVABLE items
- Block planning when information is insufficient — localize the block to affected tasks where possible, not the entire plan
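The localized-blocking practice above can be sketched: only tasks whose own conditions are MISSING are held back, while the rest of the plan proceeds. A hypothetical helper; the task shape is illustrative:

```python
def localize_blocks(tasks):
    """Partition tasks into ready vs blocked based on their own conditions.

    Each task carries the statuses of the conditions it depends on; a task
    is blocked only if one of ITS conditions is MISSING, so one gap does
    not stall the whole plan.
    """
    ready, blocked = [], []
    for t in tasks:
        bucket = blocked if "MISSING" in t["condition_status"] else ready
        bucket.append(t["name"])
    return ready, blocked


tasks = [
    {"name": "schema-migration", "condition_status": ["AVAILABLE", "DERIVABLE"]},
    {"name": "auth-endpoint", "condition_status": ["MISSING", "AVAILABLE"]},
]
print(localize_blocks(tasks))  # (['schema-migration'], ['auth-endpoint'])
```

This assumes conditions have already been mapped to the tasks they affect; shared MISSING conditions block every task that depends on them.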
## Question Templates
<question_templates>
When requesting missing inputs, use structured questions:
Environment/Infrastructure:
- "What is the target environment (prod/stage/dev), and where will this run (cloud/region/account)?"
Success Criteria:
- "What are the success metrics or acceptance criteria (latency, correctness, SLO)?"
Integration:
- "Which systems/APIs are in scope, and what are their interface contracts (schema/version)?"
Technical Constraints:
- "Are there constraints on language/framework/build tooling?"
Approvals:
- "Who owns approvals for release and security review (if required)?"
</question_templates>
## Example: RT-ICA in Action
User Request: "Build a user authentication service"
RT-ICA Summary:
RT-ICA SUMMARY
Goal:
- Implement user authentication service for the application
Success Output:
- Deployed service that authenticates users and issues session tokens
Conditions (reverse prerequisites):
1. Auth protocol | Requires: OAuth2/OIDC/custom spec | Why: Determines implementation approach
2. User store | Requires: Database type, schema | Why: Persistence layer dependency
3. Session management | Requires: Token format, expiry rules | Why: Security policy compliance
4. Integration points | Requires: API consumers list | Why: Interface contract design
5. Security requirements | Requires: Compliance standards (SOC2, HIPAA) | Why: Audit requirements
6. Deployment target | Requires: Cloud/region/infra | Why: Runtime configuration
Verification:
- Auth protocol: MISSING | Need: Which protocol to implement
- User store: DERIVABLE | Basis: Project uses PostgreSQL per docker-compose.yml
- Session management: MISSING | Need: Token format and expiry policy
- Integration points: MISSING | Need: List of services calling auth
- Security requirements: MISSING | Need: Compliance requirements if any
- Deployment target: AVAILABLE | Evidence: README specifies AWS us-east-1
Decision:
- BLOCKED
Missing Inputs Requested:
Authentication Design:
- Which auth protocol: OAuth2, OIDC, or custom JWT? (determines implementation)
- Session token expiry policy? (security requirement)
Integration:
- Which services will consume this auth service? (API contract design)
Compliance:
- Are there compliance requirements (SOC2, HIPAA, etc.)? (audit scope)
## Anti-Patterns
<anti_patterns>
Planning without RT-ICA:
User: "Build auth service"
Model: "Here's my plan: 1. Create user table, 2. Add login endpoint..."
Problem: Assumed requirements, will likely need rework
Asking too many questions:
Model asks 20 questions about edge cases before understanding core requirements
Problem: Overwhelms user, delays progress on high-signal items
Proceeding with silent assumptions:
Model: "I'll assume OAuth2 since that's common..."
Problem: Assumption may be wrong, causes rework or security issues
</anti_patterns>
## Related Skills

- agent-orchestration - Scientific delegation framework for orchestrator-to-agent workflows
- subagent-contract - DONE/BLOCKED signaling protocol for sub-agents
## Sources
| Source | Attribution | Access Date |
|---|---|---|
| RT-ICA Framework | Liu et al., 2025 - Reverse Thinking Enhances Missing Information Detection in LLMs | 2026-01-20 |
| CoVe (Chain of Verification) | Dhuliawala et al., 2023 - Chain-of-Verification Reduces Hallucination | 2026-01-20 |
Note: This skill adapts the RT-ICA (Reverse Thinking for Information Completeness Assessment) framework for planning workflows.