
Daft UDF Tuning

Optimize User-Defined Functions for performance.

UDF Types

| Type | Decorator | Use Case |
|---|---|---|
| Stateless | `@daft.func` | Simple transforms. Use `async` for I/O-bound tasks. |
| Stateful | `@daft.cls` | Expensive init (e.g., loading models). Supports `gpus=N`. |
| Batch | `@daft.func.batch` | Vectorized CPU/GPU ops (NumPy/PyTorch). Faster than row-at-a-time for vectorizable work. |
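
The "Batch" row's speedup comes from vectorization. A minimal, Daft-free sketch of the same idea using NumPy (the function names here are illustrative, not Daft APIs):

```python
import numpy as np

def row_at_a_time(values):
    # One Python-level operation per row: interpreter overhead dominates.
    return [v * 2.0 + 1.0 for v in values]

def batched(values):
    # One vectorized call over the whole batch: same math, far fewer
    # Python bytecode dispatches. This is what @daft.func.batch enables.
    arr = np.asarray(values, dtype=np.float64)
    return (arr * 2.0 + 1.0).tolist()

print(batched([1.0, 2.0, 3.0]))  # -> [3.0, 5.0, 7.0]
```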

Quick Recipes

1. Async I/O (Web APIs)

@daft.func
async def fetch(url: str) -> str:
    async with aiohttp.ClientSession() as s:
        async with s.get(url) as resp:  # the response itself is a context manager
            return await resp.text()
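
Why async helps here: many in-flight awaits can overlap on one worker, so total latency approaches the slowest request rather than the sum. A self-contained sketch of that overlap using `asyncio.sleep` as a stand-in for network latency (no Daft or aiohttp required; `fake_fetch` is hypothetical):

```python
import asyncio
import time

async def fake_fetch(url: str) -> str:
    # Stand-in for an HTTP round trip: yields control instead of blocking.
    await asyncio.sleep(0.1)
    return f"body:{url}"

async def main():
    urls = [f"https://example.com/{i}" for i in range(10)]
    start = time.perf_counter()
    # All ten "requests" wait concurrently, so total time is ~0.1s, not ~1s.
    bodies = await asyncio.gather(*(fake_fetch(u) for u in urls))
    return bodies, time.perf_counter() - start

bodies, elapsed = asyncio.run(main())
print(len(bodies), round(elapsed, 2))
```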

2. GPU Batch Inference (PyTorch/Models)

@daft.cls(gpus=1)
class Classifier:
    def __init__(self):
        self.model = load_model().cuda()  # runs once per worker instance

    @daft.method.batch(batch_size=32)
    def predict(self, images):
        # Stack the batch into a single tensor before the forward pass
        batch = torch.stack([torch.as_tensor(i) for i in images.to_pylist()]).cuda()
        return self.model(batch).cpu().tolist()

# Run with concurrency
df.with_column("preds", Classifier(max_concurrency=4).predict(df["img"]))
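
The point of `@daft.cls` is amortizing `__init__`: the expensive load runs once per instance, then serves every batch. A Daft-free sketch of that pattern (`ExpensiveModel` is a hypothetical stand-in for a checkpoint load):

```python
class ExpensiveModel:
    load_count = 0  # tracks how many times the "model" was loaded

    def __init__(self):
        # Imagine a multi-GB checkpoint load here; it should run only once.
        ExpensiveModel.load_count += 1

    def predict(self, x):
        return x * 2

class Classifier:
    def __init__(self):
        # Paid once per instance, like @daft.cls, not once per batch.
        self.model = ExpensiveModel()

    def predict_batch(self, xs):
        return [self.model.predict(x) for x in xs]

clf = Classifier()
results = [clf.predict_batch(batch) for batch in ([1, 2], [3, 4], [5, 6])]
print(ExpensiveModel.load_count, results)  # loaded once, served three batches
```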

Tuning Keys

  • max_concurrency: Total parallel UDF instances.
  • gpus=N: GPU request per instance.
  • batch_size: Rows per call. Too small = overhead; too big = OOM.
  • into_batches(N): Pre-slice partitions if memory is tight.
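
The `batch_size` tradeoff above can be sketched without Daft: split a column into fixed-size slices the way `batch_size`/`into_batches(N)` do. Fewer, larger slices mean less per-call overhead but more peak memory per call (`chunked` is an illustrative helper, not a Daft API):

```python
def chunked(rows, batch_size):
    # Mirrors what batch_size / into_batches(N) control in Daft:
    # larger batches => fewer calls (less overhead), more memory per call.
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

rows = list(range(10))
batches = list(chunked(rows, 4))
print(batches)  # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```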