Verify Skill

Purpose

This meta-skill guides domain experts through a structured verification of any skill in this repository. It produces a detailed verification report that records the expert's assessment of parameters, citations, and methodology — then submits the report to GitHub Discussions for community knowledge building.

Verification is the highest-impact contribution a domain expert can make. Every skill starts as ai-generated and needs human verification to progress to community-reviewed or expert-verified.

When to Use This Skill

Activate when the user:

  • Says "verify a skill", "验证这个 skill" (Chinese for "verify this skill"), or "review this skill's accuracy"
  • Wants to check whether a skill's parameters and citations are correct
  • Has domain expertise and wants to contribute a verification
  • Is reviewing a skill before using it in their research

Research Planning Protocol

Before starting the verification process, you MUST:

  1. Identify the target skill — Which skill will be verified? Read its SKILL.md.
  2. Assess the reviewer's qualifications — What is their domain expertise and experience level?
  3. Define verification scope — Will this be a full verification or focused on specific sections?
  4. Note inherent limitations — Verification without lab replication cannot confirm all claims; flag what can and cannot be verified from literature alone.
  5. Present the verification plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see skills/research-literacy/SKILL.md.

⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.


Prerequisites

Before running this skill, verify:

  1. gh CLI is installed and authenticated

    Run: gh auth status

    If not authenticated, tell the user:

    To submit verification reports, you need the GitHub CLI. Install it from https://cli.github.com/ and run gh auth login. Alternatively, I can save the report as a local markdown file.

  2. The target skill exists — The skill directory must exist under skills/ in the repository.
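Both checks can be sketched as a short shell snippet. SKILL_NAME is a placeholder here, not part of the skill:

```shell
# Sketch of the two prerequisite checks. SKILL_NAME is a placeholder.
SKILL_NAME="example-skill"

# 1. gh CLI installed and authenticated
if ! command -v gh >/dev/null 2>&1; then
  echo "gh CLI not installed -- offer the local-file fallback"
elif ! gh auth status >/dev/null 2>&1; then
  echo "gh not authenticated -- ask the user to run: gh auth login"
fi

# 2. Target skill exists under skills/
if [ ! -f "skills/${SKILL_NAME}/SKILL.md" ]; then
  echo "target skill not found: skills/${SKILL_NAME}/SKILL.md"
fi
```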


Interactive Flow (6 Phases)

Progress Tracking (Required)

At the start of the verification, you MUST create a task list (using TodoWrite or equivalent) with these items:

  • Phase 1: Cognitive Alignment
  • Phase 2: Experience Collection
  • Phase 3: Test Scenario Construction
  • Phase 4: Item-by-Item Assessment
  • Phase 5: Apply Corrections to Skill
  • Phase 6: Generate Report and Submit to GitHub Discussions ← DO NOT SKIP

Mark each item as you complete it. Do NOT consider the verification complete until ALL items are checked, especially Phase 6 (GitHub submission).

Phase 1 — Cognitive Alignment

Goal: Ensure the reviewer understands what the skill does before assessing it.

  1. Read the target skill's SKILL.md in full.

  2. Present a summary to the reviewer:

    • What this skill does (purpose and domain)
    • Typical use scenarios (when it triggers)
    • Key parameters and claims it makes (list the specific numbers and citations)
    • What verification means in this context
  3. Ask the reviewer: "Does this summary match your understanding of the skill? Anything to clarify before we proceed?"

  4. Wait for confirmation before moving to Phase 2.

Next → Phase 2: Experience Collection (4 remaining phases until GitHub submission)

Phase 2 — Experience Collection

Goal: Understand the reviewer's domain knowledge to calibrate the verification.

Present these questions:

Q1: How familiar are you with this domain? (1-5)

  • 1 = Heard of it
  • 2 = Read about it
  • 3 = Studied it formally
  • 4 = Use it in my research
  • 5 = I am a specialist in this area

Q2: Have you used these methods in your research?

  • Yes, I use them regularly (describe briefly)
  • Yes, I have used them before (describe context)
  • No, but I know the literature well
  • No, I am learning about this area

Q3: For the key parameters listed in Phase 1, what values do you typically use?

  • Present each key parameter from the skill and ask the reviewer what value they use or expect
  • This is a per-parameter question — iterate through the most important parameters
  • Accept "I don't know" as a valid answer

Q4: What pitfalls have you encountered that this skill should mention?

  • Free text — any practical warnings from experience

Next → Phase 3: Test Scenario Construction (3 remaining phases until GitHub submission)

Phase 3 — Test Scenario Construction

Goal: Test the skill against a realistic scenario to evaluate its practical advice.

  1. Ask the reviewer to describe a scenario:

    "Describe a real or realistic dataset and research question where you would use the methods covered by this skill. Include: modality, sample size, conditions, and what you are trying to find."

  2. Run the target skill against this scenario (simulate how the skill would respond to the described research question).

  3. Present the skill's recommendations for this scenario.

  4. Ask the reviewer to evaluate:

    • Did the skill give appropriate recommendations for this scenario?
    • Were there any incorrect suggestions?
    • What important advice was missing?

Next → Phase 4: Item-by-Item Assessment (2 remaining phases until GitHub submission)

Phase 4 — Item-by-Item Assessment

Goal: Systematic parameter-by-parameter verification with structured scoring.

For each key claim in the skill (parameters, thresholds, citations, methodological recommendations), present a table row and ask the reviewer to assess:

Assessment format:

| # | Parameter | Skill Says | Citation | Your Verdict | Notes |
|---|-----------|------------|----------|--------------|-------|
| 1 | [param name] | [value from skill] | [cited source] | ✅ / ⚠️ / ❌ / ❓ | [reviewer's explanation] |
| 2 | ... | ... | ... | ... | ... |

Verdict options:

  • ✅ Confirmed — The value and citation are correct
  • ⚠️ Context-dependent — Correct in some contexts but not universally; needs qualification
  • ❌ Incorrect — The value or citation is wrong (reviewer provides the correct information)
  • ❓ Cannot verify — The reviewer does not have enough expertise or resources to confirm

Process each parameter interactively — present 3-5 parameters at a time, get the reviewer's verdicts, then continue with the next batch.

After all parameters are assessed, collect overall ratings (1-5 stars each):

  1. Parameter accuracy — Are the numerical values and thresholds correct?
  2. Completeness — Does the skill cover all important aspects of this methodology?
  3. Practical usefulness — Would this skill actually help a researcher do better work?
  4. Pitfall awareness — Does the skill warn about common mistakes and edge cases?

⚠️ CRITICAL: Do NOT stop here. Next → Phase 5: Apply Corrections, then Phase 6: Submit to GitHub Discussions. The verification is NOT complete without submission.

Phase 5 — Apply Corrections

Goal: Update the skill based on verification findings.

If Phase 4 produced any ❌ (Incorrect) or ⚠️ (Context-dependent) verdicts:

  1. List all corrections needed — Summarize what parameters, citations, or methodology need updating based on the reviewer's verdicts.

  2. Apply corrections to the skill's SKILL.md:

    • Fix incorrect parameter values (replace with reviewer-provided values and citations)
    • Add missing caveats or context qualifications for ⚠️ items
    • Update citations where the reviewer identified errors
    • Add pitfalls and warnings from the reviewer's experience (Phase 2 Q4)
  3. Update the skill's review_status in the YAML frontmatter:

    • If the reviewer's familiarity was 4-5 → set to "expert-verified"
    • If the reviewer's familiarity was 2-3 → set to "community-reviewed"
    • If the reviewer's familiarity was 1 → keep as "ai-generated"
  4. Commit the changes with a descriptive message, e.g.: fix: update [skill-name] parameters per expert verification

  5. Present the diff to the reviewer for confirmation.

If Phase 4 produced NO ❌ or ⚠️ verdicts (all ✅), skip steps 1-2, but still update review_status (step 3) and commit if appropriate before proceeding to Phase 6.
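The familiarity-to-status mapping in step 3 above can be sketched as a tiny helper (the function name is illustrative, not part of the skill):

```shell
# Map the reviewer's Q1 familiarity (1-5) to a review_status value,
# following the Phase 5 rules: 4-5 -> expert-verified, 2-3 ->
# community-reviewed, 1 -> keep ai-generated.
review_status_for() {
  case "$1" in
    4|5) echo "expert-verified" ;;
    2|3) echo "community-reviewed" ;;
    1)   echo "ai-generated" ;;
    *)   echo "unknown" ;;
  esac
}

review_status_for 5   # -> expert-verified
```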

⚠️ CRITICAL: You are NOT done. You MUST proceed to Phase 6 to submit the verification report to GitHub Discussions.

Phase 6 — Report Generation and Submission

Goal: Generate a structured verification report and submit it to GitHub Discussions.

  1. Generate the verification report using the format below.

  2. Present the complete report to the reviewer. Present options:

    • Approve all — Submit as shown
    • Delete sections — Remove specific sections
    • Anonymize — Replace identifying information (name, institution) with generic descriptions
    • Save locally only — Save without submitting to GitHub
    • Abort — Cancel without saving
  3. Wait for explicit confirmation before submitting.

  4. Submit to GitHub Discussions in the "Verification" category.

Submission command:

gh api graphql -f query='
mutation {
  createDiscussion(input: {
    repositoryId: "REPO_ID",
    categoryId: "VERIFICATION_CATEGORY_ID",
    title: "Verification Report: SKILL_NAME",
    body: "REPORT_BODY_HERE"
  }) {
    discussion {
      url
    }
  }
}'
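Inlining REPORT_BODY_HERE directly into the query string is fragile, since a markdown report contains quotes and newlines. A hedged alternative, assuming the report has been written to a local report.md, passes everything as GraphQL variables instead (gh api reads a value from a file when it is prefixed with @):

```shell
# Same mutation, but parameterized so the markdown body needs no escaping.
# report.md is assumed to hold the generated verification report.
QUERY='
mutation($repo: ID!, $cat: ID!, $title: String!, $body: String!) {
  createDiscussion(input: {repositoryId: $repo, categoryId: $cat,
                           title: $title, body: $body}) {
    discussion { url }
  }
}'

if command -v gh >/dev/null 2>&1; then
  gh api graphql \
    -f query="$QUERY" \
    -f repo="REPO_ID" \
    -f cat="VERIFICATION_CATEGORY_ID" \
    -f title="Verification Report: SKILL_NAME" \
    -F body=@report.md \
    || echo "submission failed (expected with placeholder IDs)"
else
  echo "gh CLI not available -- fall back to saving locally"
fi
```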

To get the required IDs:

gh api graphql -f query='
{
  repository(owner: "HaoxuanLiTHUAI", name: "awesome_cognitive_and_neuroscience_skills") {
    id
    discussionCategories(first: 10) {
      nodes {
        id
        name
      }
    }
  }
}'
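The category id can then be pulled out of the response with jq, assuming the category is literally named "Verification". The payload below is a hypothetical sample shaped like the API's answer, used here so the filter can be shown standalone:

```shell
# Sample response shaped like the GraphQL answer (ids are made up).
RESPONSE='{"data":{"repository":{"id":"R_1","discussionCategories":{"nodes":[{"id":"DIC_1","name":"General"},{"id":"DIC_2","name":"Verification"}]}}}}'

# Select the node whose name is "Verification" and print its id.
CATEGORY_ID=$(printf '%s' "$RESPONSE" \
  | jq -r '.data.repository.discussionCategories.nodes[]
           | select(.name == "Verification") | .id')
echo "$CATEGORY_ID"   # -> DIC_2
```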

After successful submission, display the Discussion URL to the reviewer.

If submission fails: Save the report to ~/.cache/awesome-neuro-skills/pending-verifications/YYYY-MM-DD-skill-name.md and provide manual submission instructions.
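The fallback save can be sketched as follows; SKILL_NAME and report.md are placeholders for the verified skill's name and the generated report:

```shell
# Fallback: save the report under the pending-verifications cache,
# using the YYYY-MM-DD-skill-name.md naming from the skill.
PENDING_DIR="$HOME/.cache/awesome-neuro-skills/pending-verifications"
SKILL_NAME="example-skill"   # placeholder
REPORT="report.md"           # the generated report

mkdir -p "$PENDING_DIR"
if [ -f "$REPORT" ]; then
  cp "$REPORT" "$PENDING_DIR/$(date +%F)-${SKILL_NAME}.md"
  echo "saved to $PENDING_DIR/$(date +%F)-${SKILL_NAME}.md"
fi
```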


Verification Report Format

## Verification Report: [skill-name]

### Reviewer Profile
- **Domain**: [e.g., "cognitive neuroscience"]
- **Experience**: [e.g., "5 years EEG research"]
- **Familiarity with this topic**: [1-5 from Q1]
- **Context**: [e.g., "currently running oddball paradigm study"]

### Verification Scenario
> [Description of the test scenario used in Phase 3]

### Skill Evaluation Against Scenario
> [Summary of how well the skill performed on the test scenario]

### Parameter Review
| # | Parameter | Skill Says | Citation | Verdict | Notes |
|---|-----------|-----------|----------|---------|-------|
| 1 | [param] | [value] | [citation] | ✅/⚠️/❌/❓ | [explanation] |

### Expert Insights
> [Reviewer's professional knowledge that supplements or corrects the skill — from Q3, Q4, and Phase 3 feedback]

### Overall Scores
| Dimension | Score |
|-----------|-------|
| Parameter accuracy | [1-5 stars] |
| Completeness | [1-5 stars] |
| Practical usefulness | [1-5 stars] |
| Pitfall awareness | [1-5 stars] |

### Suggested Improvements
- [Concrete suggestions for updates to the skill]

---
*Submitted via the `verify-skill` meta-skill.*

Verification Depth Guidance

What CAN be verified from literature

  • Whether a cited paper exists and the citation is correct
  • Whether the cited paper actually recommends the stated parameter value
  • Whether the methodology matches current field consensus
  • Whether important caveats or alternatives are mentioned

What CANNOT be verified without lab work

  • Whether specific parameter values are optimal for all datasets
  • Whether the pipeline actually produces valid results on real data
  • Whether edge cases and failure modes are exhaustively covered

Flag this distinction clearly in the report.


Completion Checklist

Before considering this verification COMPLETE, you MUST confirm ALL of the following:

  • Skill SKILL.md has been updated with corrections (if any ❌/⚠️ verdicts in Phase 4)
  • review_status has been updated in the skill's YAML frontmatter
  • Changes have been committed to git
  • Verification report has been submitted to GitHub Discussions (or saved locally if gh is unavailable)
  • Discussion URL (or local file path) has been shown to the reviewer

If any item above is unchecked, GO BACK and complete it now. Do NOT end the conversation.
