Automatic Tracing
MLflow Tracing is integrated with various GenAI libraries and provides a one-line automatic tracing experience for each library (and combinations of them!). This page shows detailed examples of integrating MLflow with popular GenAI libraries.
Supported Integrations
Each integration automatically captures your application's logic and intermediate steps based on how you use the authoring framework / SDK. See the detailed integration guide for your library for more information.
Below are quick-start examples for some of the most popular integrations. Remember to install the necessary packages for each library you intend to use (e.g., `pip install openai langchain langgraph anthropic dspy`).
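For example, enabling tracing for OpenAI takes a single call to `mlflow.openai.autolog()`. Here is a minimal sketch, assuming your `OPENAI_API_KEY` environment variable is set and an MLflow tracking destination is configured:

```python
import mlflow
import openai

# One line enables automatic tracing for every OpenAI call that follows
mlflow.openai.autolog()

response = openai.OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is MLflow Tracing?"}],
)
# The call above is captured as a trace with inputs, outputs, and latency
print(response.choices[0].message.content)
```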
Combining Manual and Automatic Tracing
The `@mlflow.trace` decorator can be used in conjunction with auto tracing to create powerful, integrated traces. This is particularly useful for:
- 🔄 Complex workflows that involve multiple LLM calls
- 🤖 Multi-agent systems where different agents use different LLM providers
- 🔗 Chaining multiple LLM calls together with custom logic in between
Basic Example
Here's a simple example that combines OpenAI auto-tracing with manually defined spans:
```python
import mlflow
import openai
from mlflow.entities import SpanType

# Enable automatic tracing for OpenAI
mlflow.openai.autolog()


@mlflow.trace(span_type=SpanType.CHAIN)
def run(question):
    messages = build_messages(question)
    # MLflow automatically generates a span for the OpenAI invocation
    response = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        max_tokens=100,
        messages=messages,
    )
    return parse_response(response)


@mlflow.trace
def build_messages(question):
    return [
        {"role": "system", "content": "You are a helpful chatbot."},
        {"role": "user", "content": question},
    ]


@mlflow.trace
def parse_response(response):
    return response.choices[0].message.content


run("What is MLflow?")
```
Running this code generates a single trace that combines the manual spans with the automatic OpenAI tracing.
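If you prefer not to wrap your logic in a decorated function, the `mlflow.start_span` context manager offers an imperative alternative. A minimal sketch of the same pattern, under the same assumptions as above:

```python
import mlflow
import openai
from mlflow.entities import SpanType

mlflow.openai.autolog()

with mlflow.start_span(name="run", span_type=SpanType.CHAIN) as span:
    question = "What is MLflow?"
    span.set_inputs({"question": question})
    # The OpenAI call below is auto-traced as a child span
    response = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        max_tokens=100,
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    span.set_outputs({"answer": answer})
```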
Multi-Framework Example
You can also combine different LLM providers in a single trace. For example:
This example requires installing LangChain in addition to the base requirements:

```bash
pip install --upgrade langchain langchain-openai
```
```python
import mlflow
import openai
from mlflow.entities import SpanType
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Enable auto-tracing for both OpenAI and LangChain
mlflow.openai.autolog()
mlflow.langchain.autolog()


@mlflow.trace(span_type=SpanType.CHAIN)
def multi_provider_workflow(query: str):
    # First, use OpenAI directly for initial processing
    analysis = openai.OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Analyze the query and extract key topics."},
            {"role": "user", "content": query},
        ],
    )
    topics = analysis.choices[0].message.content

    # Then use LangChain for structured processing
    llm = ChatOpenAI(model="gpt-4o-mini")
    prompt = ChatPromptTemplate.from_template(
        "Based on these topics: {topics}\nGenerate a detailed response to: {query}"
    )
    chain = prompt | llm
    response = chain.invoke({"topics": topics, "query": query})
    return response


# Run the function
result = multi_provider_workflow("Explain quantum computing")
```
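As with the basic example, running this produces a single trace: the direct OpenAI completion and the LangChain chain each appear as auto-generated child spans nested under the manual `multi_provider_workflow` span.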
Disabling Tracing
To disable tracing, call the `mlflow.tracing.disable()` API. This stops the collection of trace data from within MLflow and prevents any trace data from being logged to the MLflow Tracking service.
To re-enable tracing after it has been temporarily disabled, call the `mlflow.tracing.enable()` API. This restores tracing functionality for instrumented models that are invoked.
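A minimal sketch of toggling tracing around a block of code:

```python
import mlflow

mlflow.openai.autolog()

mlflow.tracing.disable()
# Calls made here are not traced, and nothing is logged to MLflow Tracking
# ...

mlflow.tracing.enable()
# Tracing resumes for instrumented code invoked from this point on
```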
Next Steps
- Manual Tracing: Learn how to add custom tracing to your application logic
- Integration Guides: Explore detailed guides for specific libraries and frameworks
- Viewing Traces: Learn how to explore and analyze your traces in the MLflow UI
- Querying Traces: Programmatically search and retrieve trace data for analysis