# Triangulate
Use this skill to evaluate supplied artifacts and return a consolidated findings table with evidence-based conclusions.
Handle the full review flow end to end. Own validation, persistence, and final rendering, and keep user-facing responses focused on the review outcome rather than implementation details.
## Workspace

Use `.context/triangulate/` as the durable workspace.
Persist only canonical JSON stage artifacts plus the final human-facing report:

- `.context/triangulate/initializer.json`
- `.context/triangulate/normalized.json`
- `.context/triangulate/adversary.json`
- `.context/triangulate/referee.json`
- `.context/triangulate/findings.md`

Create `.context/triangulate/` if it does not already exist.
Do not persist stage markdown files.
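As a sketch, the workspace layout above can be prepared idempotently; the `ARTIFACTS` mapping and function name here are illustrative, not part of the skill contract:

```python
from pathlib import Path

WORKSPACE = Path(".context/triangulate")

# Canonical artifact paths; only these files are ever persisted.
ARTIFACTS = {
    "initializer": WORKSPACE / "initializer.json",
    "normalizer": WORKSPACE / "normalized.json",
    "adversary": WORKSPACE / "adversary.json",
    "referee": WORKSPACE / "referee.json",
    "report": WORKSPACE / "findings.md",
}


def ensure_workspace() -> Path:
    """Create .context/triangulate/ if it does not already exist."""
    WORKSPACE.mkdir(parents=True, exist_ok=True)
    return WORKSPACE
```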
## Inputs
- original artifacts
- optional domain context
- optional evaluation goal
- optional constraints
If the original artifacts are missing, ask for them before starting.
## Canonical Row Schema
Use JSON for all internal handoffs. Every subagent output must use the same canonical top-level shape:
```json
{
  "rows": []
}
```
The meaning of fields depends on stage, but row identity always uses:
`index`, `context_topic`, and `finding_id`.
All evidence-bearing fields must be single-line strings and must include specific evidence references.
An evidence reference must identify where the support came from, such as file path, section name, heading, paragraph, or line range.
Examples:

- `spec/auth.md` lines 41-57
- `README` section "Failure handling"
- `api/openapi.yaml` path `/sessions/{id}`
## Stage Schemas
### Initializer JSON

```json
{
  "rows": [
    {
      "index": 1,
      "context_topic": "<short noun phrase>",
      "finding_id": "FINDING-001",
      "importance": 10,
      "claim": "<short finding>",
      "basis": "<grounded basis>",
      "evidence_refs": [
        "<specific evidence reference>"
      ]
    }
  ]
}
```
### Normalizer JSON

```json
{
  "rows": [
    {
      "index": 1,
      "context_topic": "<normalized short noun phrase>",
      "finding_id": "FINDING-001",
      "importance": 10,
      "claim": "<normalized short finding>",
      "basis": "<grounded basis retained or tightened>",
      "evidence_refs": [
        "<specific evidence reference>"
      ]
    }
  ]
}
```
### Adversary JSON

```json
{
  "rows": [
    {
      "index": 1,
      "context_topic": "<must exactly match normalized output>",
      "finding_id": "FINDING-001",
      "status": "challenged",
      "basis": "<grounded counter-evidence or grounded survival reason>",
      "evidence_refs": [
        "<specific evidence reference>"
      ]
    }
  ]
}
```

`status` may only be `challenged` or `not challenged`.
### Referee JSON

```json
{
  "rows": [
    {
      "index": 1,
      "context_topic": "<must exactly match normalized output>",
      "finding_id": "FINDING-001",
      "verdict": "upheld",
      "explanation": "<short evidence-based explanation>",
      "evidence_refs": [
        "<specific evidence reference>"
      ]
    }
  ]
}
```

`verdict` may only be `upheld`, `unclear`, or `rejected`.
## Validation Rules

The top-level orchestrator must validate all of the following.
### Initializer validation

- output is valid JSON
- top-level object contains `rows`
- `rows` is an array
- every row contains `index`, `context_topic`, `finding_id`, `importance`, `claim`, `basis`, and `evidence_refs`
- `index` values are sequential integers starting at `1`
- `finding_id` values are sequential in the form `FINDING-001`, `FINDING-002`, `FINDING-003`
- `importance` values are only `10`, `5`, or `1`
- `context_topic`, `claim`, and `basis` are non-empty single-line strings
- `evidence_refs` is a non-empty array of non-empty single-line strings
- rows are sorted by `importance` descending
- `finding_id` values are unique
- `rows` may be empty only when no evidence-grounded findings are warranted
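These checks are mechanical and can be sketched in code; the function below is an illustrative validator under the rules above, and its error-message wording is hypothetical:

```python
def validate_initializer(payload: dict) -> list[str]:
    """Return a list of validation failures; an empty list means the payload passes."""
    rows = payload.get("rows")
    if not isinstance(rows, list):
        return ["top-level 'rows' must be an array"]
    required = {"index", "context_topic", "finding_id", "importance",
                "claim", "basis", "evidence_refs"}
    errors = []
    for i, row in enumerate(rows, start=1):
        missing = required - row.keys()
        if missing:
            errors.append(f"row {i}: missing fields {sorted(missing)}")
            continue
        if row["index"] != i:
            errors.append(f"row {i}: index must be sequential starting at 1")
        # Sequential finding_id values also guarantee uniqueness.
        if row["finding_id"] != f"FINDING-{i:03d}":
            errors.append(f"row {i}: finding_id must be FINDING-{i:03d}")
        if row["importance"] not in (10, 5, 1):
            errors.append(f"row {i}: importance must be 10, 5, or 1")
        for field in ("context_topic", "claim", "basis"):
            value = row[field]
            if not isinstance(value, str) or not value or "\n" in value:
                errors.append(f"row {i}: {field} must be a non-empty single-line string")
        refs = row["evidence_refs"]
        if not isinstance(refs, list) or not refs or not all(
                isinstance(r, str) and r and "\n" not in r for r in refs):
            errors.append(f"row {i}: evidence_refs must be a non-empty array of "
                          "non-empty single-line strings")
    importances = [r.get("importance") for r in rows if isinstance(r, dict)]
    if importances != sorted(importances, reverse=True):
        errors.append("rows must be sorted by importance descending")
    return errors
```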
### Normalizer validation

- output is valid JSON
- top-level object contains `rows`
- `rows` is an array
- every row contains `index`, `context_topic`, `finding_id`, `importance`, `claim`, `basis`, and `evidence_refs`
- duplicate or overlapping initializer findings may be merged, split, removed, or rewritten only when the result is more canonical, non-redundant, and evidence-grounded
- `index` values are sequential integers starting at `1`
- `finding_id` values are sequential in the form `FINDING-001`, `FINDING-002`, `FINDING-003`
- `importance` values are only `10`, `5`, or `1`
- `context_topic`, `claim`, and `basis` are non-empty single-line strings
- `evidence_refs` is a non-empty array of non-empty single-line strings
- rows are sorted by `importance` descending
- `finding_id` values are unique
- `rows` may be empty only when no findings remain after normalization
### Adversary validation

- output is valid JSON
- row count exactly matches normalized row count
- every row contains `index`, `context_topic`, `finding_id`, `status`, `basis`, and `evidence_refs`
- `index`, `context_topic`, and `finding_id` exactly match normalized output row-for-row
- `status` is only `challenged` or `not challenged`
- `basis` is a non-empty single-line string
- `evidence_refs` is a non-empty array of non-empty single-line strings
- no duplicate `finding_id` values
### Referee validation

- output is valid JSON
- row count exactly matches normalized row count
- every row contains `index`, `context_topic`, `finding_id`, `verdict`, `explanation`, and `evidence_refs`
- `index`, `context_topic`, and `finding_id` exactly match normalized output row-for-row
- `verdict` is only `upheld`, `unclear`, or `rejected`
- `explanation` is a non-empty single-line string
- `evidence_refs` is a non-empty array of non-empty single-line strings
- no duplicate `finding_id` values
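The row-identity requirement shared by the adversary and referee checks can be expressed as a single helper; this is a sketch that assumes both payloads are already parsed into dicts:

```python
def identity_mismatches(normalized: dict, stage: dict) -> list[str]:
    """Report where a stage's rows diverge from normalized row identity."""
    norm_rows = normalized["rows"]
    stage_rows = stage["rows"]
    errors = []
    if len(stage_rows) != len(norm_rows):
        errors.append(f"row count {len(stage_rows)} != normalized {len(norm_rows)}")
    # Compare identity fields row-for-row in normalized order.
    for n, s in zip(norm_rows, stage_rows):
        for key in ("index", "context_topic", "finding_id"):
            if n[key] != s[key]:
                errors.append(
                    f"row {n['index']}: {key} {s[key]!r} != normalized {n[key]!r}")
    return errors
```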
## Review Process
### Pass 1: generate candidate findings

Invoke the initializer subagent using the prompt file at `./references/initializer.md` with:

- original artifacts
- optional domain context
- optional evaluation goal
- optional constraints

Require JSON output matching the initializer schema.
Validate it.
Persist the raw validated JSON to `.context/triangulate/initializer.json`.
### Pass 2: normalize the claim set

Invoke the normalizer subagent using the prompt file at `./references/normalizer.md` with:

- original artifacts
- the validated initializer JSON
- optional domain context still relevant
- optional evaluation goal still relevant
- optional constraints still relevant

Require JSON output matching the normalizer schema.
Validate it.
Persist the raw validated JSON to `.context/triangulate/normalized.json`.
### Pass 3: challenge the normalized claims

Invoke the adversary subagent using the prompt file at `./references/adversary.md` with:

- original artifacts
- the validated normalized JSON
- optional domain context still relevant
- optional evaluation goal still relevant
- optional constraints still relevant

Require JSON output matching the adversary schema.
Validate it against normalized output.
Persist the raw validated JSON to `.context/triangulate/adversary.json`.
### Pass 4: adjudicate the final conclusions

Invoke the referee subagent using the prompt file at `./references/referee.md` with:

- original artifacts
- the validated normalized JSON
- the validated adversary JSON
- optional domain context still relevant
- optional evaluation goal still relevant
- optional constraints still relevant

Require JSON output matching the referee schema.
Validate it against normalized output.
Persist the raw validated JSON to `.context/triangulate/referee.json`.
## Consolidation

After all four stages succeed, write `.context/triangulate/findings.md` as a Markdown table with this exact header:
| Index | Context / Topic | Normalized Finding | Adversary Finding | Referee Verdict |
|---|---|---|---|---|
Use normalized rows as the authoritative row order.
If normalized rows is empty, write only the header row and separator row, then
return success.
For each row, render exactly:

- `Index` from normalized output
- `Context / Topic` from normalized output
- `Normalized Finding` as: `FINDING-001; importance 10; claim: <claim>; basis: <basis>; refs: <ref1>, <ref2>`
- `Adversary Finding` as: `FINDING-001; challenged; basis: <basis>; refs: <ref1>, <ref2>` or: `FINDING-001; not challenged; basis: <basis>; refs: <ref1>, <ref2>`
- `Referee Verdict` as: `FINDING-001; upheld; <explanation>; refs: <ref1>, <ref2>`, or replace `upheld` with `unclear` or `rejected` as needed
Do not rewrite any copied cell text during consolidation except to render the required final string formats from validated JSON.
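Given the cell formats above, rendering the table is mechanical. The sketch below assumes the three relevant validated payloads are already loaded as dicts and that `finding_id` values line up across stages, as the validation rules guarantee:

```python
def render_findings_table(normalized: dict, adversary: dict, referee: dict) -> str:
    """Render findings.md from validated stage JSON; normalized order is authoritative."""
    lines = [
        "| Index | Context / Topic | Normalized Finding | Adversary Finding | Referee Verdict |",
        "|---|---|---|---|---|",
    ]
    adv = {r["finding_id"]: r for r in adversary["rows"]}
    ref = {r["finding_id"]: r for r in referee["rows"]}
    for n in normalized["rows"]:
        fid = n["finding_id"]
        a, r = adv[fid], ref[fid]
        norm_cell = (f"{fid}; importance {n['importance']}; claim: {n['claim']}; "
                     f"basis: {n['basis']}; refs: {', '.join(n['evidence_refs'])}")
        adv_cell = (f"{fid}; {a['status']}; basis: {a['basis']}; "
                    f"refs: {', '.join(a['evidence_refs'])}")
        ref_cell = (f"{fid}; {r['verdict']}; {r['explanation']}; "
                    f"refs: {', '.join(r['evidence_refs'])}")
        lines.append(f"| {n['index']} | {n['context_topic']} | "
                     f"{norm_cell} | {adv_cell} | {ref_cell} |")
    # An empty normalized row set yields only the header and separator rows.
    return "\n".join(lines) + "\n"
```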
## Stage Output Correction
If a stage executes and returns content, but that content is malformed or fails schema or row-identity validation, do not fail immediately.
Instead, re-invoke the same stage with:
- the original stage inputs
- the invalid returned output verbatim
- the exact list of validation failures or parse failures
- an instruction to return corrected JSON only
Make up to 2 correction attempts per stage.
Treat valid JSON that fails schema or row-identity checks as malformed output
for both retry handling and final error classification.
If a correction attempt succeeds, persist the corrected JSON and continue the workflow normally.
If all correction attempts fail, stop and return the terminal error for that stage.
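The correction loop can be sketched as follows; `invoke_stage` and `validate` are hypothetical hooks standing in for the real subagent call and the stage's validation rules:

```python
MAX_CORRECTION_ATTEMPTS = 2


def run_stage_with_correction(invoke_stage, validate, inputs: dict):
    """Run one stage, retrying with failure feedback up to 2 times; return valid output."""
    output = invoke_stage(inputs)
    for _ in range(MAX_CORRECTION_ATTEMPTS):
        failures = validate(output)
        if not failures:
            return output  # valid: caller persists it and continues the workflow
        # Re-invoke with the original inputs, the invalid output verbatim,
        # and the exact list of validation failures.
        output = invoke_stage({
            **inputs,
            "previous_output": output,
            "validation_failures": failures,
            "instruction": "return corrected JSON only",
        })
    if validate(output):
        raise RuntimeError("malformed output")  # terminal error for this stage
    return output
```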
## Failure Handling
If any subagent is unavailable or fails during execution, stop immediately.
If a subagent returns malformed output, use the correction loop above before failing the run.
Do not continue to later stages after a failure.
Do not repair, replace, simulate, approximate, or complete a failed stage.
Return only this exact error format:

```
Error: <stage_name> failed due to <unavailability|malformed output|execution failure>.
```

Use `stage_name` values `initializer`, `normalizer`, `adversary`, `referee`, or `consolidation`.
## Final Output

On success, return only the full contents of `.context/triangulate/findings.md` exactly as written.
## Hard Prohibitions
- Never perform substantive evaluation work in the top-level orchestrator.
- Never normalize findings in the top-level orchestrator.
- Never add findings in the top-level orchestrator.
- Never remove findings in the top-level orchestrator.
- Never challenge findings in the top-level orchestrator.
- Never adjudicate findings in the top-level orchestrator.
- Never propose fixes in the top-level orchestrator.
- Never substitute your own reasoning for subagent outputs.
- Never persist stage markdown files.
- Never add any text before or after the final consolidated table.