DeFi Data Fetcher

Purpose

Collect DeFi metrics from prioritized sources, normalize them, reconcile cross-source conflicts, and return a source-attributed dataset with freshness and confidence labels.

Use this skill when

  • The user asks for current or historical DeFi metrics (TVL, APY, volume, fees, revenue, token prices).
  • The user wants protocol/token comparisons across chains.
  • The user needs a clean dataset before risk or strategy analysis.

Do not use this skill when

  • The task is transaction signing or broadcasting.
  • The task is pure protocol economic risk scoring (use defi-risk-evaluator).

External dependency profile

  • Dependency level: High for live/current metrics.
  • Primary sources: protocol-native APIs/subgraphs and official analytics.
  • Secondary sources: DeFiLlama and market data aggregators.
  • Validation/backfill: direct RPC reads.
  • Offline fallback: normalization, reconciliation, and reporting on user-provided snapshots only.

Workflow

  1. Clarify query scope:
    • Protocols/tokens/chains
    • Time window (latest, 24h, 7d, custom)
    • Required metrics
  2. Build source plan with references/source-priority.md.
  3. Fetch using ordered providers and keep retrieval timestamps.
  4. Normalize fields/units via references/metric-definitions.md.
  5. Apply freshness policy from references/freshness-sla.md.
  6. Reconcile conflicts (median + spread analysis) and assign confidence.
  7. If live fetch is unavailable, switch to references/offline-fallback.md mode and state limits.
  8. Return required schema.
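Step 6 (median + spread reconciliation with a confidence label) can be sketched as follows. The confidence bands here are illustrative assumptions; the skill's actual thresholds live in scripts/normalize_metrics.py.

```python
from statistics import median

def reconcile(values):
    """Collapse multiple source readings for one metric/entity/chain
    into a single value with spread and confidence labels.

    Bands are assumptions for illustration: <=1% spread across 2+
    sources -> high, <=5% -> medium, otherwise low.
    """
    mid = median(values)
    if mid == 0:
        spread_pct = 0.0
    else:
        spread_pct = (max(values) - min(values)) / abs(mid) * 100
    if len(values) >= 2 and spread_pct <= 1:
        confidence = "high"
    elif spread_pct <= 5:
        confidence = "medium"
    else:
        confidence = "low"
    return {"value": mid, "spread_pct": round(spread_pct, 2), "confidence": confidence}
```

Sources that agree tightly produce "high"; a wide spread is surfaced rather than hidden, per the data quality rules below.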

Data quality rules

  • Always separate apy_base and apy_reward.
  • Percentages are decimal internally (0.12 = 12%).
  • All timestamps must be UTC ISO-8601.
  • Never hide source disagreement; show spread and confidence.
  • Explicitly flag stale or partial coverage.
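The percentage and timestamp rules above amount to two small conversions. A minimal sketch (helper names are assumptions, not the bundled script's API):

```python
from datetime import datetime, timezone

def to_decimal_rate(apy_pct):
    """Convert a provider-reported percentage (e.g. 12.0) to the
    internal decimal form (0.12)."""
    return apy_pct / 100.0

def to_utc_iso8601(unix_ts):
    """Render a Unix timestamp as a UTC ISO-8601 string."""
    dt = datetime.fromtimestamp(unix_ts, tz=timezone.utc)
    return dt.isoformat().replace("+00:00", "Z")
```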

Required output format

{
  "query_scope": {
    "protocols": ["string"],
    "chains": ["string"],
    "time_window": "string",
    "requested_metrics": ["string"]
  },
  "fetch_mode": "live|offline_snapshot",
  "source_plan": {
    "primary": ["string"],
    "secondary": ["string"],
    "validation": ["string"]
  },
  "metrics": [
    {
      "metric": "tvl_usd|apy_base|apy_reward|volume_24h_usd|fees_24h_usd|revenue_24h_usd|price_usd",
      "entity": "protocol_or_token",
      "chain": "string",
      "value": 0,
      "as_of": "ISO-8601",
      "freshness_status": "fresh|stale|unknown",
      "confidence": "high|medium|low",
      "spread_pct": 0,
      "sources": ["string"]
    }
  ],
  "reconciliation_notes": ["string"],
  "quality_flags": ["string"],
  "summary": "2-4 sentence summary"
}
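A quick shape check against the required schema can catch missing fields before the dataset is returned. This is a sketch, not a full JSON Schema validator; the field lists are copied from the schema above.

```python
REQUIRED_TOP = {"query_scope", "fetch_mode", "source_plan",
                "metrics", "reconciliation_notes", "quality_flags", "summary"}
METRIC_FIELDS = {"metric", "entity", "chain", "value", "as_of",
                 "freshness_status", "confidence", "spread_pct", "sources"}

def validate_output(doc):
    """Return (ok, reason) for a candidate output document."""
    missing = REQUIRED_TOP - doc.keys()
    if missing:
        return False, f"missing top-level keys: {sorted(missing)}"
    if doc["fetch_mode"] not in ("live", "offline_snapshot"):
        return False, "invalid fetch_mode"
    for row in doc["metrics"]:
        gap = METRIC_FIELDS - row.keys()
        if gap:
            return False, f"metric row missing: {sorted(gap)}"
    return True, "ok"
```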

Bundled resources

  • references/metric-definitions.md: Canonical metric semantics.
  • references/source-priority.md: Source ranking and failover policy.
  • references/freshness-sla.md: Metric-specific freshness thresholds.
  • references/offline-fallback.md: Behavior when live providers are unavailable.
  • scripts/normalize_metrics.py: Deterministic normalization + optional reconciliation mode.

Use scripts/normalize_metrics.py --reconcile when you have multiple rows per metric/entity/chain and need consistent confidence/spread outputs.
