# Optimizing ToolUniverse Skills
Best practices for high-quality research skills with evidence grading and source attribution.
## Tool Quality Standards
- Error messages must be actionable — tell the user what went wrong AND what to do
- Schema must match API reality — verify with `python3 -m tooluniverse.cli run <Tool> '<json>'`
- Coverage transparency — state what data is NOT included
- Input validation before API calls — don't silently send invalid values
- Cross-tool routing — name the correct tool when query is out-of-scope
- No silent parameter dropping — if a parameter is ignored, say so
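The validation rule above can be sketched in a few lines. This is a minimal illustration, not a ToolUniverse API — the schema, field names, and error wording are hypothetical:

```python
# Validate parameters locally before making an API call, returning
# actionable error messages (what went wrong AND what to do).
ALLOWED_SPECIES = {"human", "mouse", "rat"}

def validate_params(params: dict) -> list[str]:
    """Return a list of actionable error messages; empty list means valid."""
    errors = []
    if not params.get("gene"):
        errors.append("Missing 'gene'. Provide an HGNC symbol, e.g. 'TP53'.")
    species = params.get("species", "human")
    if species not in ALLOWED_SPECIES:
        errors.append(
            f"Invalid species '{species}'. Use one of: {sorted(ALLOWED_SPECIES)}."
        )
    return errors

print(validate_params({"species": "zebrafish"}))
```

Each message names both the problem and the fix, satisfying the actionable-errors standard above.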
## Core Principles (13 Patterns)
Full details: references/optimization-patterns.md
| # | Pattern | Key Idea |
|---|---|---|
| 1 | Tool Interface Verification | get_tool_info() before first call; maintain corrections table |
| 2 | Foundation Data Layer | Query aggregator (Open Targets, PubChem) FIRST |
| 3 | Versioned Identifiers | Capture both ENSG00000123456 and .12 version |
| 4 | Disambiguation First | Resolve IDs, detect collisions, build negative filters |
| 5 | Report-Only Output | Narrative in report; methodology in appendix only if asked |
| 6 | Evidence Grading | T1 (mechanistic) → T2 (functional) → T3 (association) → T4 (mention) |
| 7 | Quantified Completeness | Numeric minimums per section (>=20 PPIs, top 10 tissues) |
| 8 | Mandatory Checklist | All sections exist, even if "Limited evidence" |
| 9 | Aggregated Data Gaps | Single section consolidating all missing data |
| 10 | Query Strategy | High-precision seeds → citation expansion → collision-filtered broad |
| 11 | Tool Failure Handling | Primary → Fallback 1 → Fallback 2 → document unavailable |
| 12 | Scalable Output | Narrative report + JSON/CSV bibliography |
| 13 | Synthesis Sections | Biological model + testable hypotheses, not just paper lists |
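Pattern 6's tiers can be encoded as a small lookup so every claim carries a grade. The evidence-type keywords below are an illustrative sketch, not a fixed taxonomy:

```python
# Assign T1-T4 evidence tiers (Pattern 6) based on evidence type.
TIER_BY_EVIDENCE = {
    "mechanistic": "T1",   # direct mechanism shown (structure, biochemistry)
    "functional": "T2",    # functional assay (e.g. knockout phenotype)
    "association": "T3",   # statistical association (e.g. GWAS hit)
    "mention": "T4",       # co-mention in text only
}

def grade(evidence_type: str) -> str:
    # Unknown evidence types get the weakest tier rather than failing silently.
    return TIER_BY_EVIDENCE.get(evidence_type, "T4")

claims = [("binds ATP pocket", "mechanistic"), ("linked in abstract", "mention")]
graded = [(text, grade(ev)) for text, ev in claims]
print(graded)  # [('binds ATP pocket', 'T1'), ('linked in abstract', 'T4')]
```

Labeling each claim this way is what lets the report's reader tell a mechanistic finding from a passing mention.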
## Optimized Skill Workflow
Phase -1: Tool Verification (check params)
Phase 0: Foundation Data (aggregator query)
Phase 1: Disambiguation (IDs, collisions, baseline)
Phase 2: Specialized Queries (fill gaps)
Phase 3: Report Synthesis (evidence-graded narrative)
## Testing Standards
Full details: references/testing-standards.md
Critical rule: NEVER write skill docs without testing all tool calls first.
- 30+ tests per skill, 100% pass rate
- All tests use real data (no placeholders)
- Phase + integration + edge case tests
- SOAP tools (IMGT, SAbDab, TheraSAbDab) need an `operation` parameter
- Distinguish transient errors (retry) from real bugs (fix)
- API docs are often wrong — always verify with actual calls
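The transient-vs-bug distinction can be made concrete with a retry wrapper. This is a minimal sketch — the exception names, retry count, and delays are illustrative:

```python
import time

class TransientError(Exception): ...   # e.g. HTTP 503, timeouts — retry
class PermanentError(Exception): ...   # e.g. bad parameters — fix, don't retry

def call_with_retry(fn, retries=3, base_delay=0.01):
    """Retry transient failures with exponential backoff; let real bugs surface."""
    for attempt in range(retries):
        try:
            return fn()
        except TransientError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # backoff: 0.01, 0.02, 0.04 s
        # PermanentError propagates immediately.

# Demo: fails twice transiently, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TransientError("HTTP 503")
    return "ok"

print(call_with_retry(flaky))
```

Retrying a permanent error only hides the bug; only failures that can plausibly resolve on their own belong in the retry path.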
## Pattern 14: Reasoning Frameworks Over Tool Catalogs (CRITICAL)
Skills that just list tools ("call A, then B, then C") score 3-5/10 in usefulness tests. Skills that explain HOW to interpret and combine data score 7-9/10. Every skill MUST include:
### 14a. Interpretation Tables
Map raw API data to biological/clinical meaning. Don't just retrieve — explain.
| Bad (tool catalog) | Good (reasoning framework) |
|---|---|
| "Get GO terms from MGnify" | GO terms → interpretation table: butyrate genes = barrier integrity, LPS genes = inflammation |
| "Get DepMap dependency scores" | Score < -0.5 = essential, but pan-essential = bad drug target (toxicity); selective = good target |
| "Get FAERS counts" | PRR > 5 = strong signal, but signal ≠ causation (channeling bias, notoriety bias) |
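The FAERS row can be made concrete: the proportional reporting ratio (PRR) compares how often an event is reported with the target drug versus all other drugs. The counts below are illustrative, not real FAERS data:

```python
# Proportional reporting ratio (PRR) from a 2x2 contingency table.
# a: target drug + event, b: target drug, other events,
# c: other drugs + event,  d: other drugs, other events.
def prr(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

value = prr(a=40, b=60, c=100, d=9900)
print(round(value, 1))  # 40.0 — well above the PRR > 5 signal threshold
```

Even a PRR this large is a disproportionality signal, not causation — channeling and notoriety bias can inflate it, which is exactly the interpretation guidance the table above demands.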
### 14b. Synthesis Phases
Every multi-phase skill needs a final phase that answers "so what?" — not just collecting data:
- "What changed and why does it matter?"
- "Is this cause or consequence?"
- "What's the actionable recommendation?"
### 14c. Honest Limitations
If a tool API can't deliver what the skill promises, say so explicitly. Don't describe aspirational capabilities. Example: "DepMap_get_gene_dependencies returns gene metadata only, NOT per-cell-line CRISPR scores."
## Pattern 15: Computational Procedures When Tools Can't Help
Some scientific analyses require computation, not just API queries. When no tool exists for a capability, embed a Python code procedure directly in the skill using packages available in ToolUniverse (pandas, scipy, numpy, statsmodels, biopython, networkx).
When to use computational procedures:
| Gap | Procedure | Packages |
|---|---|---|
| API doesn't return needed data (e.g., DepMap scores) | Download CSV + pandas analysis | pandas |
| Statistical testing (differential abundance, enrichment) | scipy.stats + FDR correction | scipy, statsmodels |
| Sequence analysis (alignment, conservation) | Biopython SeqIO + pairwise alignment | biopython |
| Chemical similarity (analog search, fingerprints) | RDKit fingerprints + Tanimoto | rdkit (visualization extra) |
| Network analysis (hub genes, clustering) | NetworkX graph metrics | networkx |
| Scoring algorithms (ACMG classification, viability scores) | Custom Python functions | built-in |
| Dose feasibility (Cmax vs IC50 comparison) | Numerical comparison + PK data | pandas, numpy |
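The statistical-testing row, for example, can be sketched end to end with synthetic data. statsmodels' `multipletests` provides the same FDR correction; a hand-rolled Benjamini-Hochberg is shown here only to keep the example self-contained:

```python
# Differential abundance sketch: Mann-Whitney U per feature, then
# Benjamini-Hochberg FDR correction across features.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic abundances: 5 features x (8 control, 8 case) samples;
# feature 0 is strongly shifted in cases.
control = rng.normal(10, 1, size=(5, 8))
case = rng.normal(10, 1, size=(5, 8))
case[0] += 5

pvals = np.array([
    mannwhitneyu(control[i], case[i], alternative="two-sided").pvalue
    for i in range(5)
])

def bh_fdr(p):
    """Benjamini-Hochberg adjusted p-values."""
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    # Enforce monotonicity from the largest rank downward.
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(adj)
    out[order] = np.clip(adj, 0, 1)
    return out

print(bh_fdr(pvals).round(4))  # feature 0 should be the clear hit
```

Per the key rules below the template, the example data makes the procedure immediately testable, and the FDR step is the interpretation guard against reporting chance hits.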
Template for computational procedures in skills:
**Computational procedure: [Name]**
[When to use this: explain the gap it fills]
```python
# [What this computes]
# Requires: [packages] (included in ToolUniverse dependencies)
import pandas as pd
from scipy.stats import mannwhitneyu
# Input: [describe expected input format]
# Output: [describe output]
# [Full working code with example data]
```
[Interpretation guidance for the output]
Key rules for computational procedures:
- Only use packages in ToolUniverse dependencies (pyproject.toml): pandas, scipy, numpy, networkx, requests, biopython (optional extra)
- Include example data so the procedure is immediately testable
- Explain the output — a code block without interpretation is useless
- Note when external data download is needed (e.g., DepMap CSV from depmap.org)
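As a concrete instance of the network-analysis gap in the table above, a degree-centrality sketch with NetworkX — the interaction list is toy data, not a real PPI network:

```python
# Hub-gene sketch: degree centrality on a toy protein-interaction graph.
import networkx as nx

edges = [  # illustrative interactions only
    ("TP53", "MDM2"), ("TP53", "EP300"), ("TP53", "BRCA1"),
    ("TP53", "ATM"), ("BRCA1", "BARD1"), ("ATM", "CHEK2"),
]
g = nx.Graph(edges)
centrality = nx.degree_centrality(g)
hub = max(centrality, key=centrality.get)
print(hub)  # TP53 — highest degree in this toy graph
```

On real PPI data the same few lines identify hub candidates; the interpretation step (is the hub biologically meaningful or a sticky protein?) still belongs in the skill text.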
## Pattern 15b: Download-and-Process for Datasets Without REST APIs
Many critical scientific datasets have NO REST API but provide bulk download files. Skills should include concrete download-and-process instructions when this is the only path to essential data.
Template for download-and-process procedures:
**Step 1: Download data files**
- URL: [exact download page URL]
- Files needed: [filename] (~[size]) — [what it contains]
- Registration: [required/not required]
- Update frequency: [quarterly/annually/etc.]
**Step 2: Process with Python**
[Working code with pandas/scipy that loads the CSV and produces the analysis]
**Step 3: Interpret results**
[Table mapping output values to biological/clinical meaning]
**When files are not available**: [Fallback strategy using API tools]
Known download-only datasets that skills reference:
| Dataset | Download URL | Files | Used By |
|---|---|---|---|
| DepMap CRISPR | depmap.org/portal/download/all/ | CRISPRGeneEffect.csv (~300MB), Model.csv (~2MB) | functional-genomics, cell-line-profiling |
| TCGA clinical | portal.gdc.cancer.gov | Clinical + mutation TSVs | cancer-genomics-tcga |
| GTEx expression | gtexportal.org/home/downloads | GTEx_Analysis_v8_Annotations.csv | expression-data-retrieval |
| ClinGen gene-disease | clinicalgenome.org/docs/ | gene_curation_list.tsv | variant-interpretation |
| gnomAD constraint | gnomad.broadinstitute.org/downloads | constraint metrics TSV | functional-genomics |
Critical rule: Always include a fallback for when the download is unavailable (user may not have registration, file may be too large, etc.). The fallback should use available API tools even if they provide less complete data.
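A minimal sketch of the DepMap download-and-process path, using a synthetic stand-in for CRISPRGeneEffect.csv. The column layout assumed here — one row per cell line, gene columns named like `KRAS (3845)` — should be checked against the actual downloaded file:

```python
# Flag selective (non-pan-essential) dependencies from a
# CRISPRGeneEffect-style table. Score < -0.5 = dependency.
import pandas as pd

df = pd.DataFrame(  # synthetic stand-in for the downloaded CSV
    {
        "RPL3 (6122)":  [-1.2, -1.1, -1.3, -1.2],   # essential everywhere
        "KRAS (3845)":  [-1.4, -0.1, -1.5, 0.0],    # selectively essential
        "GFP_ctrl (0)": [0.1, 0.0, -0.1, 0.05],     # not essential
    },
    index=["LINE_A", "LINE_B", "LINE_C", "LINE_D"],
)

essential = df < -0.5                        # per-line dependency calls
frac = essential.mean()                      # fraction of lines dependent
selective = frac[(frac > 0) & (frac < 0.9)]  # drop pan-essential genes
print(list(selective.index))  # ['KRAS (3845)']
```

The pan-essential filter encodes the interpretation rule from Pattern 14a: a gene essential in every line is a toxicity risk, not a drug target.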
## Common Anti-Patterns
| Anti-Pattern | Fix |
|---|---|
| "Search Log" reports | Keep methodology internal; report findings only |
| Missing disambiguation | Add collision detection; build negative filters |
| No evidence grading | Apply T1-T4 grades; label each claim |
| Empty sections omitted | Include with "None identified" |
| No synthesis | Add biological model + hypotheses |
| Silent failures | Document in Data Gaps; implement fallbacks |
| Wrong tool parameters | Verify via get_tool_info() before calling |
| GTEx returns nothing | Try versioned ID ENSG*.version |
| No foundation layer | Query aggregator first |
| Untested tool calls | Test-driven: test script FIRST |
| Tool catalog without interpretation | Add interpretation tables explaining what data means |
| Aspirational capabilities | Be honest when APIs can't deliver; add computational procedure instead |
| Missing statistical analysis | Add scipy/pandas code procedure for computation the tools can't do |
## Quick Fixes for User Complaints
| Complaint | Fix |
|---|---|
| "Report too short" | Add Phase 0 foundation + Phase 1 disambiguation |
| "Too much noise" | Add collision filtering |
| "Can't tell what's important" | Add T1-T4 evidence tiers |
| "Missing sections" | Add mandatory checklist with minimums |
| "Too long/unreadable" | Separate narrative from JSON |
| "Just a list of papers" | Add synthesis sections |
| "Tool failed, no data" | Add retry + fallback chains |
## Skill Template
```markdown
---
name: [domain]-research
description: [What + when triggers]
---
# [Domain] Research
## Workflow
Phase -1: Tool Verification → Phase 0: Foundation → Phase 1: Disambiguate
→ Phase 2: Search → Phase 3: Report
## Phase -1: Tool Verification
[Parameter corrections table]
## Phase 0: Foundation Data
[Aggregator query]
## Phase 1: Disambiguation
[IDs, collisions, baseline]
## Phase 2: Specialized Queries
[Query strategy, fallbacks]
## Phase 3: Report Synthesis
[Evidence grading, mandatory sections]
## Output Files
- [topic]_report.md, [topic]_bibliography.json
## Quantified Minimums
[Numbers per section]
## Completeness Checklist
[Required sections with checkboxes]
```
## Additional References
- Detailed patterns: references/optimization-patterns.md
- Testing standards: references/testing-standards.md
- Case studies (4 real fixes): references/case-studies.md
- Checklists (review + release): references/checklists.md