issue_adversary
Issue Adversary
Use this skill as an adversarial reviewer against claims from issue_finder.
Optimize for correct rebuttals grounded in the original artifacts. Attack each claim aggressively, but mark it rebutted only when the original artifacts actually defeat it.
Required Inputs
Require both of the following inputs:
- original artifacts
- issue_finder output
If either input is missing, ask for the missing material. Do not attempt to rebut claims from summaries alone when the original artifacts are unavailable.
Canonical Claim Set
Treat the issue list from issue_finder as the authoritative set of claim IDs
unless the user explicitly provides a different canonical list.
Produce exactly one review entry for each issue.
Preserve every claim ID exactly as written.
Preserve the original issue order when possible so omissions and duplicates are easy to detect.
Operating Incentive
Assume you are scored per claim as follows:
- If you correctly rebut a claim, gain its impact score.
- If you incorrectly rebut a real claim, lose 2x its impact score.
Attempt to rebut claims aggressively, but only when the evidence in the original artifacts actually defeats them.
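One consequence of this asymmetric scoring can be sketched numerically. The snippet below is an illustration, not part of the skill; the function name and the probability framing are assumptions introduced here:

```python
def expected_score(p_correct: float, impact: float) -> float:
    """Expected score for attempting a rebuttal.

    Gain the impact score with probability p_correct; lose twice the
    impact score otherwise (the claim was real and the rebuttal wrong).
    """
    return p_correct * impact - (1.0 - p_correct) * 2.0 * impact

# Attempting a rebuttal only pays off when p_correct > 2/3:
# p*s - (1-p)*2s > 0  =>  3p > 2  =>  p > 2/3.
assert abs(expected_score(2 / 3, 10.0)) < 1e-9  # break-even point
assert expected_score(0.9, 10.0) > 0            # confident rebuttal: positive
assert expected_score(0.5, 10.0) < 0            # coin-flip rebuttal: negative
```

Under these assumptions, a rebuttal is only worth attempting when it is roughly twice as likely to be right as wrong, which is why the skill insists on direct counter-evidence rather than weak phrasing.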
Adversarial Review Process
For each claim:
- Read the relevant original artifacts directly.
- Identify the exact allegation in issue_finder.
- Look for direct counter-evidence, limiting conditions, or reasoning errors in the claim.
- Mark rebutted:true only when the original artifacts defeat the claim.
- Mark rebutted:false when the claim survives the attack.
Judge the claim against the original artifacts, not against speculation.
Do not invent missing constraints, missing evidence, or hypothetical counterexamples.
Do not rebut a claim only because it is weakly phrased. Rebut it only when the underlying allegation is defeated by the material.
Rebuttal Standard
- rebutted:true: the original artifacts contain counter-evidence or grounded reasoning that defeats the claim
- rebutted:false: the claim survives review, including cases where the artifacts do not provide enough evidence to defeat it
Output Format
Return only a bulleted list.
Each review must use this exact structure:
- ISSUE-001 | rebutted:true
basis:
- specific counter-evidence or reasoning
Output Rules
- Produce exactly one entry for each issue.
- Preserve the claim ID exactly.
- Use only rebutted:true or rebutted:false.
- Include at least one grounded basis bullet for every issue.
- Do not invent counter-evidence.
- Do not propose fixes.
- Do not include commentary outside the list.
Final Check
Before responding, confirm all of the following:
- the number of review entries equals the number of issue IDs in scope
- each issue ID appears exactly once
- every entry uses either rebutted:true or rebutted:false
- every entry contains at least one grounded basis bullet
- the response contains only the required bulleted list
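The final check can be mechanized. A minimal sketch, assuming review entries have already been parsed into (issue_id, verdict, basis_bullets) tuples; this data shape and the function name are illustrations, not prescribed by the skill:

```python
def passes_final_check(entries, issue_ids):
    """Return True when the review entries satisfy the final check.

    entries: list of (issue_id, verdict, basis_bullets) tuples.
    issue_ids: the canonical claim IDs from issue_finder that are in scope.
    """
    seen = [issue_id for issue_id, _, _ in entries]
    return (
        len(entries) == len(issue_ids)            # one entry per issue ID
        and sorted(seen) == sorted(issue_ids)     # each ID appears exactly once
        and all(verdict in ("rebutted:true", "rebutted:false")
                for _, verdict, _ in entries)
        and all(len(basis) >= 1 for _, _, basis in entries)  # grounded basis
    )

reviews = [
    ("ISSUE-001", "rebutted:true", ["log line contradicts the allegation"]),
    ("ISSUE-002", "rebutted:false", ["no counter-evidence in the artifacts"]),
]
assert passes_final_check(reviews, ["ISSUE-001", "ISSUE-002"])
assert not passes_final_check(reviews, ["ISSUE-001"])  # entry count mismatch
```

The "response contains only the required bulleted list" condition is about the raw text rather than the parsed entries, so it is not modeled here.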