# Reverso Time Series Forecasting (`forecasting-reverso`)

Produce zero-shot univariate time series forecasts using the Reverso foundation model family (arXiv:2602.17634), implemented in NumPy/Numba for CPU-only container execution.
## Setup (run once per conversation)

```bash
uv pip install numba --system --break-system-packages
cp /mnt/skills/user/forecasting-reverso/scripts/reverso.py /home/claude/reverso.py
cp /mnt/skills/user/forecasting-reverso/scripts/load_checkpoint.py /home/claude/load_checkpoint.py
```
## Obtaining Weights

Two paths, depending on network access:

### Path A: Direct download (HuggingFace allow-listed)
```python
import urllib.request, os

os.makedirs("/tmp/reverso", exist_ok=True)
url = "https://huggingface.co/shinfxh/reverso/resolve/main/checkpoints/reverso_small/checkpoint.pth"
urllib.request.urlretrieve(url, "/tmp/reverso/checkpoint.pth")
```
### Path B: User upload (HuggingFace not accessible)

If the download fails with a network error, tell the user:

> I can't reach HuggingFace from this environment. Please download the checkpoint from https://huggingface.co/shinfxh/reverso/blob/main/checkpoints/reverso_small/checkpoint.pth and upload it here.

Then load from `/mnt/user-data/uploads/checkpoint.pth`.
### Loading weights

```python
from load_checkpoint import load_checkpoint

weights = load_checkpoint("/tmp/reverso/checkpoint.pth")  # or the upload path
```
## Model Configuration

Reverso Small uses this config (matching the published args.json):

```python
from reverso import ReversoConfig

config = ReversoConfig(d_model=64, module_list=["conv", "attn", "conv", "attn"])
```
## Forecasting

```python
from reverso import forecast, warmup_jit

warmup_jit()  # ~2 s one-time JIT compilation

result = forecast(
    series=data,           # 1-D array/list of floats
    prediction_length=96,  # how many future steps
    weights=weights,       # dict from load_checkpoint
    config=config,
)
```
The function handles preprocessing (NaN interpolation, padding, min-max normalization) and autoregressive rollout internally.
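As an illustration of what that preprocessing involves, here is a minimal sketch of NaN interpolation and min-max normalization. The actual implementation lives in reverso.py and may differ in detail; these helpers are assumptions for explanatory purposes:

```python
import numpy as np

def interpolate_nans(x):
    """Linearly interpolate interior NaNs; edge NaNs take the nearest valid value."""
    x = np.asarray(x, dtype=np.float64).copy()
    nans = np.isnan(x)
    idx = np.arange(len(x))
    x[nans] = np.interp(idx[nans], idx[~nans], x[~nans])
    return x

def minmax_normalize(x):
    """Scale to [0, 1]; a constant series maps to all zeros to avoid division by zero."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.zeros_like(x), (lo, hi)
    return (x - lo) / (hi - lo), (lo, hi)
```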
### Key parameters

- `flip_equivariant=True`: averages the forward pass over the original and vertically flipped input. Slightly improves single-step predictions but can dampen amplitude over a multi-step rollout. Default is `False`.
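A sketch of what flip-equivariant averaging means, assuming the vertical flip is v -> (min + max) - v in the series' own range; `predict` is a hypothetical stand-in for a single forecast call, not an actual reverso API:

```python
import numpy as np

def flip_equivariant_forecast(predict, series):
    """Average a forecast with the back-flipped forecast of the vertically flipped series."""
    series = np.asarray(series, dtype=np.float64)
    pivot = series.min() + series.max()        # vertical flip axis: v -> pivot - v
    plain = predict(series)
    flipped = pivot - predict(pivot - series)  # flip input, predict, flip output back
    return 0.5 * (plain + flipped)
```

For any predictor that is already flip-equivariant (such as the identity), the averaging is a no-op; for one that is not, it symmetrizes the output.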
## Input Handling

Accept the time series as a Python list, NumPy array, CSV column, or inline values, and convert it to a 1-D float array before calling `forecast()`. For CSV/DataFrame input, ask the user which column to forecast if it is ambiguous.
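A minimal coercion helper along these lines (illustrative sketch; `to_series` is not part of the shipped scripts):

```python
import numpy as np

def to_series(data, column=None):
    """Coerce list / ndarray / DataFrame-like input to a 1-D float64 array."""
    if column is not None:  # DataFrame or dict-of-columns input
        data = data[column]
    arr = np.asarray(data, dtype=np.float64).squeeze()
    if arr.ndim != 1:
        raise ValueError(f"expected a single series, got shape {arr.shape}")
    return arr
```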
The model's context window is 2048 steps. Series shorter than 2048 are left-padded with the first value. Series longer than 2048 use only the most recent 2048 observations. Provide at least a few hundred real data points for meaningful results — heavily padded context degrades forecast quality because the long convolution kernels process mostly constant input.
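The pad/truncate rule above can be sketched as follows (illustrative helper, not the shipped implementation):

```python
import numpy as np

CONTEXT_LEN = 2048  # model context window, per the section above

def fit_context(series, context_len=CONTEXT_LEN):
    """Left-pad short series with their first value; keep only the most recent
    `context_len` observations otherwise."""
    series = np.asarray(series, dtype=np.float64)
    if len(series) >= context_len:
        return series[-context_len:]
    pad = np.full(context_len - len(series), series[0])
    return np.concatenate([pad, series])
```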
## Visualization

```python
import matplotlib.pyplot as plt

# `history` is the input series; `preds` is the forecast returned by forecast().
fig, ax = plt.subplots(figsize=(12, 4))
n = len(history)
ax.plot(range(n), history, label="Historical", color="#2563eb")
ax.plot(range(n, n + len(preds)), preds,
        label="Forecast", color="#dc2626", linewidth=2)
ax.axvline(x=n, color="gray", linestyle="--", alpha=0.4)
ax.set_xlabel("Time step"); ax.set_ylabel("Value")
ax.legend(); fig.tight_layout()
fig.savefig("/mnt/user-data/outputs/forecast.png", dpi=150)
```
## Performance
| Phase | Latency |
|---|---|
| numba install (uv) | ~1.6s |
| Weight loading (.pth) | <1s |
| JIT warmup | ~2s |
| Forward pass (L=2048) | ~80ms |
| 96-step forecast (2 chunks) | ~160ms |
| 192-step forecast (4 chunks) | ~320ms |
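The chunk counts in the table imply a decode step of 48 points per forward pass (96 steps -> 2 chunks, 192 -> 4). Under that assumption, the number of forward passes per forecast is:

```python
import math

STEPS_PER_CHUNK = 48  # inferred from the table above

def n_forward_passes(prediction_length, steps_per_chunk=STEPS_PER_CHUNK):
    """Forward passes needed for an autoregressive rollout of `prediction_length` steps."""
    return max(1, math.ceil(prediction_length / steps_per_chunk))
```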
## Container Environment Limits

Each forward pass takes roughly 65-80 ms at L=2048 (the estimates below use the lower figure). In the ephemeral container, reject batch forecasting requests that would exceed ~1500 forward passes (~100 s wall time) to avoid timeouts.
Detect the container environment by checking for `/mnt/user-data` or `/mnt/skills`:

```python
import os

IN_CONTAINER = os.path.exists("/mnt/user-data") or os.path.exists("/mnt/skills")
```
Estimate the cost before running when processing multiple series:

```python
n_forwards = n_series * n_windows * max(1, pred_length // 48)
est_seconds = n_forwards * 0.065

if IN_CONTAINER and est_seconds > 100:
    # Reject, or subsample down to a number of series that fits the budget:
    max_series = int(1500 / (n_windows * max(1, pred_length // 48)))
```
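Putting the pieces together, a complete cost estimator might look like this (illustrative sketch; it rounds partial chunks up, a slight refinement of the floor division above):

```python
import math

SECONDS_PER_FORWARD = 0.065  # ~65 ms per forward pass at L=2048
BUDGET_SECONDS = 100.0       # ~1500 forward passes

def estimate_batch_cost(n_series, n_windows, pred_length, steps_per_chunk=48):
    """Return (forward_passes, estimated_seconds, within_budget) for a batch job."""
    chunks = max(1, math.ceil(pred_length / steps_per_chunk))
    forwards = n_series * n_windows * chunks
    seconds = forwards * SECONDS_PER_FORWARD
    return forwards, seconds, seconds <= BUDGET_SECONDS
```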
Practical limits at a ~100 s budget:

| Scenario | Series | Windows | Pred steps | Forwards | Time |
|---|---|---|---|---|---|
| Single series, 96-step | 1 | 1 | 96 | 2 | 0.1s |
| Small dataset (sz_taxi) | 156 | 6 | 48 | 936 | 61s |
| Medium dataset, short horizon | 300 | 4 | 48 | 1200 | 78s |
| Large dataset (m4_yearly) | 22974 | 1 | 48 | 22974 | 25min ✗ |
When a request exceeds the budget, inform the user with the estimated time and suggest either subsampling or running locally. For benchmark evaluation of large datasets, recommend running outside the container.
## Limitations

The model is strongest with periodic or quasi-periodic signals and a full 2048-point context. Short series (under ~200 points) are heavily padded and produce degraded forecasts; this is a model limitation, not an implementation bug. Two edge cases are out-of-distribution for the training data: binary-valued input (e.g. step functions that normalize to exactly 0/1) and series ending at the exact min-max boundary.
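A pre-flight check for these edge cases might look like the following (illustrative helper; the thresholds mirror the numbers above):

```python
import numpy as np

def series_warnings(x):
    """Flag input conditions this section calls out as problematic."""
    x = np.asarray(x, dtype=np.float64)
    warnings = []
    if len(x) < 200:
        warnings.append("short series: heavy padding will degrade the forecast")
    finite = x[np.isfinite(x)]
    if finite.size and np.unique(finite).size <= 2:
        warnings.append("binary/constant values: normalize to exactly 0/1")
    if finite.size and (x[-1] == finite.min() or x[-1] == finite.max()):
        warnings.append("series ends at the exact min-max boundary")
    return warnings
```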
For architecture details, weight mapping, and debugging guidance, read `references/architecture.md`.