# MLflow Tracing Instrumentation Guide
## Language-Specific Guides
Based on the user's project, load the appropriate guide:

- Python projects: read `references/python.md`
- TypeScript/JavaScript projects: read `references/typescript.md`

If unclear, check the project for `package.json` (TypeScript) or `requirements.txt`/`pyproject.toml` (Python).
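As a rough sketch, that file check can be automated. The manifest file names are the ones listed above; `detect_language` is a hypothetical helper, not part of MLflow:

```python
from pathlib import Path

def detect_language(project_dir: str = ".") -> str:
    """Guess the project language from common manifest files."""
    root = Path(project_dir)
    if (root / "package.json").exists():
        return "typescript"  # load references/typescript.md
    if (root / "requirements.txt").exists() or (root / "pyproject.toml").exists():
        return "python"      # load references/python.md
    return "unknown"         # ask the user
```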
## What to Trace
Trace these operations (high debugging/observability value):
| Operation Type | Examples | Why Trace |
|---|---|---|
| Root operations | Main entry points, top-level pipelines, workflow steps | End-to-end latency, input/output logging |
| LLM calls | Chat completions, embeddings | Token usage, latency, prompt/response inspection |
| Retrieval | Vector DB queries, document fetches, search | Relevance debugging, retrieval quality |
| Tool/function calls | API calls, database queries, web search | External dependency monitoring, error tracking |
| Agent decisions | Routing, planning, tool selection | Understand agent reasoning and choices |
| External services | HTTP APIs, file I/O, message queues | Dependency failures, timeout tracking |
Skip tracing these (too granular, adds noise):
- Simple data transformations (dict/list manipulation)
- String formatting, parsing, validation
- Configuration loading, environment setup
- Logging or metric emission
- Pure utility functions (math, sorting, filtering)
Rule of thumb: Trace operations that are important for debugging and identifying issues in your application.
## Verification
After instrumenting the code, always verify that tracing is working.
> **Planning to evaluate your agent?** Tracing must be working before you run agent-evaluation. Complete the verification below first.
- Run the instrumented code — execute the application or agent so that at least one traced operation fires
- Confirm traces are logged — use `mlflow.search_traces()` or `MlflowClient().search_traces()` to check that traces appear in the experiment:

  ```python
  import mlflow

  traces = mlflow.search_traces(experiment_ids=["<experiment_id>"])
  print(f"Found {len(traces)} trace(s)")
  assert len(traces) > 0, "No traces were logged — check tracking URI and experiment settings"
  ```
- Verify spans were captured — confirm the trace contains the expected spans, not just an empty shell:

  ```python
  trace = traces.iloc[0]
  spans = mlflow.get_trace(trace.trace_id).data.spans
  print(f"Trace has {len(spans)} span(s)")
  for span in spans:
      print(f"  - {span.name} ({span.span_type})")
  ```
- Report the result — tell the user how many traces and spans were found and confirm tracing is working
### If no traces appear
Check these in order:
- Tracking URI not set — is `mlflow.set_tracking_uri(...)` called before the agent run? Without this, traces go to a local `./mlruns` directory instead of the configured server.
- Autolog warnings — did `mlflow.autolog()` or a framework-specific `mlflow.<framework>.autolog()` raise any warnings during setup? Check stderr for patching failures.
- Wrong experiment ID — verify the experiment ID passed to `search_traces()` matches the experiment active when the code ran (use `mlflow.get_experiment_by_name(...)` to confirm).
- Network/auth issues — can the process reach the tracking server? Check for connection errors or 401/403 responses in logs.
For automated validation, use `agent-evaluation/scripts/validate_tracing_runtime.py`.
## Feedback Collection
Log user feedback on traces for evaluation, debugging, and fine-tuning. Essential for identifying quality issues in production.
See references/feedback-collection.md for:
- Recording user ratings and comments with `mlflow.log_feedback()`
- Capturing trace IDs to return to clients
- LLM-as-judge automated evaluation
## Reference Documentation
### Production Deployment
See references/production.md for:
- Environment variable configuration
- Async logging for low-latency applications
- Sampling configuration (`MLFLOW_TRACE_SAMPLING_RATIO`)
- Lightweight SDK (`mlflow-tracing`)
- Docker/Kubernetes deployment
### Advanced Patterns
See references/advanced-patterns.md for:
- Async function tracing
- Multi-threading with context propagation
- PII redaction with span processors
### Distributed Tracing
See references/distributed-tracing.md for:
- Propagating trace context across services
- Client/server header APIs