mlflow-agent
MLflow Agent
Master dispatcher for MLflow workflows. Reads user intent and invokes the right sub-skill.
Trigger
Use when the user wants to do anything with MLflow but hasn't specified which skill to use.
Process
- Read the user's request and identify intent
- Map to the appropriate skill:
  - Tracing / instrumentation → instrumenting-with-mlflow-tracing
  - Evaluation / scoring → agent-evaluation
  - Debug a trace → analyze-mlflow-trace
  - Debug a chat session → analyze-mlflow-chat-session
  - Search traces → retrieving-mlflow-traces
  - Metrics / costs → querying-mlflow-metrics
  - Getting started → mlflow-onboarding
  - Docs / API questions → searching-mlflow-docs
- If intent is unclear, ask ONE clarifying question, then dispatch
- Invoke the matched skill using the Skill tool
Key Rules
- Never do the work yourself; always dispatch to the appropriate sub-skill
- One clarifying question maximum before dispatching
- If the user says "evaluate AND trace", dispatch tracing first, then evaluation
- If the user's request spans multiple skills, handle them in logical order (setup → instrument → evaluate)
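The ordering rule above (setup → instrument → evaluate, so "evaluate AND trace" runs tracing first) can be sketched as a fixed priority sort. The priority values and helper name are assumptions for this sketch.

```python
# Hypothetical priority table encoding the setup -> instrument -> evaluate
# rule; lower value runs first. Unmatched skills sort last, keeping their
# original relative order (sorted() is stable).
SKILL_ORDER = {
    "mlflow-onboarding": 0,                  # setup
    "instrumenting-with-mlflow-tracing": 1,  # instrument
    "agent-evaluation": 2,                   # evaluate
}

def plan(matched_skills: list[str]) -> list[str]:
    """Order matched skills for a multi-intent request."""
    return sorted(matched_skills, key=lambda s: SKILL_ORDER.get(s, 99))
```

For example, a request matching both evaluation and tracing yields tracing first, matching the "dispatch tracing first, then evaluation" rule.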
More from panlm/mlflow-skills
agent-evaluation
Use this when you need to EVALUATE OR IMPROVE or OPTIMIZE an existing LLM agent's output quality - including improving tool selection accuracy, answer quality, reducing costs, or fixing issues where the agent gives wrong/incomplete responses. Evaluates agents systematically using MLflow evaluation with datasets, scorers, and tracing. IMPORTANT - Always also load the instrumenting-with-mlflow-tracing skill before starting any work. Covers end-to-end evaluation workflow or individual components (tracing setup, dataset creation, scorer definition, evaluation execution).
analyzing-mlflow-trace
Analyzes a single MLflow trace to answer a user query about it. Use when the user provides a trace ID and asks to debug, investigate, find issues, root-cause errors, understand behavior, or analyze quality. Triggers on "analyze this trace", "what went wrong with this trace", "debug trace", "investigate trace", "why did this trace fail", "root cause this trace".
mlflow-onboarding
Onboards users to MLflow by determining their use case (GenAI agents/apps or traditional ML/deep learning) and guiding them through relevant quickstart tutorials and initial integration. If an experiment ID is available, it should be supplied as input to help determine the use case. Use when the user asks to get started with MLflow, set up tracking, add observability, or integrate MLflow into their project. Triggers on "get started with MLflow", "set up MLflow", "onboard to MLflow", "add MLflow to my project", "how do I use MLflow".
retrieving-mlflow-traces
Retrieves MLflow traces using CLI or Python API. Use when the user asks to get a trace by ID, find traces, filter traces by status/tags/metadata/execution time, query traces, or debug failed traces. Triggers on "get trace", "search traces", "find failed traces", "filter traces by", "traces slower than", "query MLflow traces".
searching-mlflow-docs
Searches and retrieves MLflow documentation from the official docs site. Use when the user asks about MLflow features, APIs, integrations (LangGraph, LangChain, OpenAI, etc.), tracing, tracking, or requests to look up MLflow documentation. Triggers on "how do I use MLflow with X", "find MLflow docs for Y", "MLflow API for Z".
querying-mlflow-metrics
Fetches aggregated trace metrics (token usage, latency, trace counts, quality evaluations) from MLflow tracking servers. Triggers on requests to show metrics, analyze token usage, view LLM costs, check usage trends, or query trace statistics.