# Opik — Observability for LLM Agents
Integrating with Opik always means adding all three components unless the user explicitly asks for only one:

- **Tracing** — instrument LLM calls with the appropriate integration or `@opik.track`
- **Entrypoint** — mark the top-level function with `entrypoint=True` for Local Runner and UI integration
- **Agent Configuration** — externalize all tunable parameters into `opik.Config`: model names, temperatures, top_p, max_tokens, all prompts and prompt templates, and any other runtime parameters the user may want to compare or optimize
## Setup

### Environment Config Decision Tree
Before adding Opik config, inspect the project's existing config approach. Follow this decision tree exactly:

1. Check for existing `.env`/`.env.local` files and `dotenv` usage in code.
   - If the project loads a `.env` file (via `python-dotenv`, `dotenv`, or framework auto-loading): append `OPIK_API_KEY` and `OPIK_WORKSPACE` to that same file. Do NOT create a separate config file.
   - If there is a `.env.example` or `.env.sample`: also update it with the new Opik vars (using placeholder values) so future developers know which vars are needed.
2. If no `.env` file exists:
   - Python: create or update `~/.opik.config` (INI format). This is the SDK's native config file.
   - TypeScript/JavaScript: create `.env` (or `.env.local` if the project uses Next.js or similar).
3. Never introduce a second config mechanism. If the project already uses `.env` for API keys, do NOT also create `~/.opik.config`. If it uses `~/.opik.config`, do NOT add Opik vars to `.env`.
4. Never overwrite existing values. If `OPIK_API_KEY` is already set in `.env`, leave it. Only add vars that are missing.
5. Prefer setting `project_name` in code, not in env files — one machine may log to many projects.
6. If the user provides an API key and workspace in the prompt, use those values directly. If they provide only an API key, ask for the workspace or default to `"default"` for local OSS.
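The decision tree above can be sketched as a plain-Python helper. This is an illustration with the stdlib only, not part of the Opik SDK; the function name `pick_config_target` is hypothetical:

```python
from pathlib import Path

def pick_config_target(project_root: str, language: str) -> str:
    """Return the file new Opik config vars should go into, per the decision tree."""
    root = Path(project_root)
    # 1. An existing .env / .env.local wins, regardless of language --
    #    never introduce a second config mechanism.
    for name in (".env", ".env.local"):
        if (root / name).exists():
            return str(root / name)
    # 2. No .env file: Python falls back to the SDK's native ~/.opik.config,
    #    TypeScript/JavaScript gets a fresh .env in the project root.
    if language == "python":
        return str(Path.home() / ".opik.config")
    return str(root / ".env")
```

A real implementation would also check for `dotenv` usage in code and update any `.env.example`, but the precedence order is the part that matters.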
### Config Formats

Python `~/.opik.config` (INI):

```ini
[opik]
api_key=your-api-key
url_override=https://www.comet.com/opik/api
workspace=your-workspace
```
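The INI format above is plain `configparser` syntax, so a snippet can be sanity-checked with the Python stdlib:

```python
import configparser

# The same shape as ~/.opik.config shown above.
OPIK_CONFIG = """\
[opik]
api_key=your-api-key
url_override=https://www.comet.com/opik/api
workspace=your-workspace
"""

parser = configparser.ConfigParser()
parser.read_string(OPIK_CONFIG)
assert parser["opik"]["workspace"] == "your-workspace"
assert parser["opik"]["url_override"].startswith("https://")
```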
Environment variables (append to existing `.env`):

```bash
# Opik
OPIK_API_KEY=your-api-key
OPIK_URL_OVERRIDE=https://www.comet.com/opik/api
OPIK_WORKSPACE=your-workspace
```
TypeScript uses `OPIK_WORKSPACE` as the env var and `workspaceName` in `new Opik({...})`.
### Standard Deployments

- **Cloud:** `https://www.comet.com/opik/api` — requires `api_key` + `workspace`
- **Local OSS:** `http://localhost:5173/api` — workspace is usually `default`
- **Self-hosted:** use the deployment's custom URL, following the project's existing config style
### Interactive Config (optional)

```bash
opik configure                     # Python
opik configure --use_local
npx opik-ts configure              # TypeScript
npx opik-ts configure --use-local
```
Set the project name in code:

```python
@opik.track(project_name="my-project")
def run():
    ...
```

```typescript
const client = new Opik({ projectName: "my-project" });
```
Python Instrumentation
import opik
@opik.track(entrypoint=True, name="my-agent")
def agent(query: str) -> str:
context = retrieve(query)
return generate(query, context)
@opik.track(type="tool")
def retrieve(query: str) -> list:
return search_db(query)
@opik.track(type="llm")
def generate(query: str, context: list) -> str:
return llm_call(query, context)
result = agent("What is ML?")
opik.flush_tracker() # required in scripts
Valid span types for manual instrumentation: `general`, `llm`, `tool`, `guardrail`.
Framework integrations — these capture tokens, model, and cost automatically:

```python
from opik.integrations.openai import track_openai            # OpenAI
from opik.integrations.anthropic import track_anthropic      # Anthropic
from opik.integrations.langchain import OpikTracer           # LangChain
from opik.integrations.crewai import track_crewai            # CrewAI
from opik.integrations.dspy import OpikCallback              # DSPy
from opik.integrations.adk import track_adk_agent_recursive  # Google ADK
```
**CRITICAL — LiteLLM `OpikLogger` inside `@opik.track`:**
If the codebase uses `litellm` AND you are adding `@opik.track` decorators, you MUST pass `current_span_data` via the `metadata` parameter on every `litellm.completion()` / `litellm.acompletion()` call. This tells the `OpikLogger` callback to nest under the active trace. Without it, `OpikLogger` creates orphaned top-level traces that are separate from your `@opik.track` hierarchy.
```python
from opik import track
from opik.opik_context import get_current_span_data
from litellm.integrations.opik.opik import OpikLogger
import litellm

litellm.callbacks = [OpikLogger()]

@track
def call_llm(messages, model="gpt-4o"):
    return litellm.completion(
        model=model,
        messages=messages,
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
                "tags": ["litellm"],
            },
        },
    )

@track(entrypoint=True)
def agent(query: str) -> str:
    return call_llm([{"role": "user", "content": query}])
```
This pattern applies whenever you see `litellm.completion` or `litellm.acompletion` in existing code that you are instrumenting with `@opik.track`.
## TypeScript Instrumentation

```typescript
import { Opik } from "opik";

const client = new Opik({ projectName: "my-project" });

const trace = client.trace({
  name: "my-agent",
  input: { query: "What is ML?" },
});

const toolSpan = trace.span({
  name: "retrieve-context",
  type: "tool",
  input: { query: "What is ML?" },
});
// retrieval logic
toolSpan.end({ output: { documents: [] } });

const llmSpan = trace.span({
  name: "generate-response",
  type: "llm",
  input: { prompt: "What is ML?" },
});
// model call
llmSpan.end({ output: { response: "Machine learning is..." } });

trace.end({ output: { response: "Machine learning is..." } });
await client.flush();
```
Prefer the client-based path in TypeScript, and set `projectName` in code rather than in machine-wide config when possible. Always `await client.flush()` before exit. For framework-specific integrations such as Vercel AI SDK or LangChain.js, see references/tracing-typescript.md. Valid span types for manual instrumentation are the same as in Python: `general`, `llm`, `tool`, `guardrail`.
## Threads (Conversations)

Group conversation turns via `thread_id`. Each turn is one trace; traces sharing a `thread_id` form one thread.

```python
@opik.track(entrypoint=True)
def handle_message(session_id: str, message: str) -> str:
    opik.update_current_trace(thread_id=session_id)
    return generate_response(session_id, message)
```
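Conceptually, a thread is nothing more than traces that share a key. A stdlib-only sketch of how per-turn traces collapse into threads (the trace dicts here are hypothetical, not the SDK's schema):

```python
from collections import defaultdict

def group_into_threads(traces: list[dict]) -> dict[str, list[dict]]:
    """Group per-turn traces into conversations by their thread_id."""
    threads = defaultdict(list)
    for trace in traces:
        # A trace without a thread_id falls back to its own id, so it shows
        # up as an unrelated one-off trace -- the pitfall noted below.
        threads[trace.get("thread_id", trace["id"])].append(trace)
    return dict(threads)
```

Two turns with `thread_id="s1"` land in one thread, while a turn missing the id becomes its own singleton group.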
Thread metrics:

```python
from opik.evaluation import evaluate_threads
from opik.evaluation.metrics.conversation import (
    SessionCompletenessQuality, UserFrustrationMetric, ConversationalCoherenceMetric,
)

results = evaluate_threads(project_name="chat-agent", metrics=[
    SessionCompletenessQuality(), UserFrustrationMetric(), ConversationalCoherenceMetric(),
])
```
Use threads for chat agents, support bots, and multi-step assistants; skip them for single-shot agents or batch processing.

Pitfalls: a missing `thread_id` makes turns appear as unrelated traces; a `thread_id` shared across users mixes their conversations together.
## Agent Configuration

Externalize the parts of your agent you expect to tune over time into versioned, immutable config snapshots. This includes prompts, models, temperatures, token limits, and other runtime parameters you may want to compare, optimize, or roll out gradually.

**CRITICAL — Search for existing config classes first.** Before creating a new config, search the codebase for existing classes that hold tunable parameters (model names, temperatures, prompts, token limits, etc.). Look for names like `AgentConfig`, `Config`, `Settings`, `AgentSettings`, `ModelConfig`, or any `@dataclass`/Pydantic model with fields like `model`, `temperature`, `system_prompt`, `max_tokens`. An existing config class is a migration target, not a reason to skip this step. If found, convert it to inherit from `opik.Config`:

- Replace the existing base (`@dataclass`, `BaseModel`, plain class) with `opik.Config`
- Convert plain `str` prompt fields to `opik.Prompt`
- Wire up `get_or_create_config()` inside the entrypoint
- Update all call sites that reference the old config to use the new Opik-managed config
```python
import opik

class AgentConfig(opik.Config):
    model: str
    temperature: float
    system_prompt: opik.Prompt

DEFAULT_CONFIG = AgentConfig(
    model="gpt-4o",
    temperature=0.7,
    system_prompt=opik.Prompt(
        name="agent-system-prompt",
        project_name="my-agent",
        prompt="You are a helpful assistant for {{product}}.",
    ),
)

client = opik.Opik()

@opik.track(entrypoint=True, project_name="my-agent")
def run_agent(question: str) -> str:
    cfg = client.get_or_create_config(
        fallback=DEFAULT_CONFIG,
        project_name="my-agent",
        # optional: env="staging" | version="v1" | version="latest" (default: prod)
    )
    return llm_call(
        model=cfg.model,
        temperature=cfg.temperature,
        system_prompt=cfg.system_prompt.format(product="Opik"),
        question=question,
    )
```
- `get_or_create_config()` must be inside `@opik.track` — it raises an error otherwise
- On first call with no existing config, it auto-creates from `fallback` and returns it
- On backend failure, it returns `fallback` with `is_fallback=True` (never breaks the agent)
- Deploy to an environment with `client.set_config_env(version="v1", env="prod")` — admin/ops only
- Prompt fields: use `opik.Prompt` for string-based templates, `opik.ChatPrompt` for multi-turn message templates; `project_name` is required on both and must match the `project_name` in `@opik.track` and `get_or_create_config`
- Extract: model, temperature, top_p, max_tokens, system prompt, tunable params
- Don't extract: API keys, structural logic, true constants
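The fallback semantics described above (use the backend config when it exists, auto-create from the fallback on first call, and never break the agent on backend failure) can be sketched without the SDK. `fetch_remote` and the returned `is_fallback` key are illustrative only:

```python
def resolve_config(fallback: dict, fetch_remote) -> dict:
    """Return the stored config, or the local fallback if the backend fails."""
    try:
        remote = fetch_remote()
        if remote is None:
            # First call, nothing stored yet: the fallback becomes the config.
            return {**fallback, "is_fallback": False}
        return {**remote, "is_fallback": False}
    except Exception:
        # A backend outage must never take the agent down.
        return {**fallback, "is_fallback": True}
```

The key design point is the last branch: config resolution degrades to local defaults instead of raising, so observability infrastructure can never become a single point of failure for the agent itself.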
## Local Runner (`opik connect`)

Pair your local agent with the Opik browser UI. Get a pairing code from the UI, then:

```bash
opik connect --pair <CODE> python3 app.py   # Python
opik connect --pair <CODE> npx tsx app.ts   # TypeScript
```

Replace `python3 app.py` or `npx tsx app.ts` with the normal command you use to start your app locally.

- Python: `@track(entrypoint=True)` plus type-hinted parameters for schema discovery.
- TypeScript: `track({ entrypoint: true, params: [{name, type}] }, fn)`.

After pairing, the entrypoint is registered as an agent, the UI shows an input form, and jobs from the UI or the Optimizer trigger runs.
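Schema discovery from type-hinted parameters can be sketched with the stdlib. This is a guess at the mechanism, not the SDK's implementation; the JSON-schema-style type names are an assumption about what discovery might produce:

```python
import inspect
from typing import get_type_hints

# Hypothetical mapping from Python annotations to schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def discover_params(fn) -> list[dict]:
    """Derive a simple parameter schema from an entrypoint's type hints."""
    hints = get_type_hints(fn)
    return [
        {"name": name, "type": PY_TO_JSON.get(hints.get(name), "string")}
        for name in inspect.signature(fn).parameters
    ]

def agent(query: str, max_steps: int) -> str:
    ...
```

Here `discover_params(agent)` yields one entry per parameter, which is what lets the UI render an input form without any extra annotation work. This is also why TypeScript needs the explicit `params` array: runtime type information is erased there.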
| Issue | Fix |
|---|---|
| No entrypoint found | Add `entrypoint=True` (Python) or `entrypoint: true` (TS) |
| Invalid pair code | Codes expire — get a new one |
| Connection refused | Check Opik server (OSS) or API key (Cloud) |
| `get_or_create_config` fails saying some fields reference the wrong project | The `project_name` on one or more `opik.Prompt` / `opik.ChatPrompt` fields doesn't match the `project_name` passed to `get_or_create_config` — make them consistent |
## Anti-Patterns

| Anti-Pattern | Fix |
|---|---|
| Existing config class left unconverted (e.g., `@dataclass` with model/temperature/prompt fields) | Convert to an `opik.Config` subclass — an existing config is a migration target, not a skip signal |
| Hardcoded config | Use `opik.Config` + `get_or_create_config()` |
| Missing entrypoint | Add `entrypoint=True` for Local Runner |
| No `thread_id` on conversational agent | Wire `thread_id` from session ID |
| `get_or_create_config()` outside `@track` | Must be inside the decorated function |
| TS missing `params` | Add explicit `params` array |
| Missing `flush_tracker()` in scripts | Call it before exit |
## References

| Topic | File |
|---|---|
| Python SDK (decorators, async, distributed, config, entrypoint) | references/tracing-python.md |
| TypeScript SDK (client, decorators, entrypoint, params) | references/tracing-typescript.md |
| REST API | references/tracing-rest-api.md |
| All integrations | references/integrations.md |
| Core concepts (traces, spans, threads, metadata) | references/observability.md |
| Test Suites, `run_tests()`, 60+ built-in metrics, legacy `evaluate()` | references/evaluation.md |
## More from comet-ml/opik-skills

- **instrument** — Add Opik tracing to an existing codebase. Detects language (Python/TypeScript), identifies LLM frameworks, adds appropriate decorators and integrations, marks entrypoints, and wires up environment config. Use for "instrument my code", "add opik tracing", "add observability", or "trace my agent".
- **agent-config** — Opik Agent Configuration: Blueprints, `get_agent_config()` with selectors, environment tags, Prompt/ChatPrompt fields, `deploy_to()`, MaskIDs, and config lifecycle.
- **agent-ops** — Agent lifecycle: architecture, configuration (Blueprints), Local Runner, evaluation, threads, production monitoring. Use for "evaluate my agent", "connect my agent", "configure my agent", "add guardrails".
- **opik-connect** — Opik Connect (Local Runner): pair your local agent with the Opik browser UI for Python and TypeScript.
- **instrument-typescript** — Adding Opik observability to TypeScript/JS LLM apps: `track()` with entrypoint and explicit params for Local Runner, framework integrations.
- **evaluation-suites** — Opik Evaluation Suites: assertions, execution policies, CI integration. Replaces the old Datasets API.