MLflow Trace UI in Jupyter Notebook Demo
This notebook is a quick showcase of the MLflow Trace UI within Jupyter Notebooks.
We begin with some toy examples to explain the display functionality, and end by building a simple RAG demo to showcase more of the UI features.
Prerequisites
Please make sure you have the following packages installed for this demo.
mlflow >= 2.20
openai
Optionally, for the RAG demo at the end, you’ll need:
langchain
langchain-openai
langchain-community
beautifulsoup4
You can run the cell below to install all of these packages (make sure to restart the kernel afterwards).
[ ]:
%pip install "mlflow>=2.20" openai langchain langchain-openai langchain-community beautifulsoup4
When is the MLflow Trace UI displayed?
The UI is only displayed when the MLflow Tracking URI is set to an HTTP tracking server, as this is where the UI assets are served from. If you don’t use a remote tracking server, you can always start one locally by running the mlflow server CLI command. By default, the tracking server runs at http://localhost:5000.
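For example, you can launch a local server from a separate terminal (the command blocks, so run it outside this notebook); the --host and --port values below simply spell out the defaults:

mlflow server --host 127.0.0.1 --port 5000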
For this tutorial, please make sure your tracking URI is set correctly!
[ ]:
import mlflow
# replace with your own URI
tracking_uri = "http://localhost:5000"
mlflow.set_tracking_uri(tracking_uri)
# set a new experiment to avoid
# cluttering the default experiment
experiment = mlflow.set_experiment("mlflow-trace-ui-demo")
Once that’s set up, the trace UI should automatically show up for the following events. Examples of each are provided below:

When the cell code generates a trace
When an mlflow.entities.Trace object is displayed (e.g. via IPython’s display() function, or when it is the last value returned in a cell)
When mlflow.search_traces() is called
Example 1: Generating a trace within a cell
Traces can be generated by automatic tracing integrations (e.g. with mlflow.openai.autolog()), or by running a manually traced function. For example:
[ ]:
# Simple manual tracing example
import mlflow


@mlflow.trace
def foo(x):
    return x + 1


# running foo() generates a trace
foo(1)
[ ]:
# Automatic tracing with OpenAI
import os
from getpass import getpass
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
[ ]:
from openai import OpenAI
import mlflow
mlflow.openai.autolog()
client = OpenAI()
# creating a chat completion will generate a trace
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello!"}],
)
Example 2: Displaying a Trace object
The trace UI will also show up when an MLflow Trace entity is displayed. This can happen in two ways:

Explicitly displaying a trace object with IPython’s display() function
When a trace object happens to be the last evaluated expression in a cell
[ ]:
# Explicitly calling `display()`
trace = mlflow.get_last_active_trace()
display(trace)
# Even if the last expression does not result in a trace,
# display(trace) will still trigger the UI display
print("Test")
[ ]:
# Displaying as a result of the trace being the last expression
trace
Example 3: Calling mlflow.search_traces()
MLflow provides the mlflow.search_traces() API to conveniently search through all traces in an experiment. When this API is called in a Jupyter notebook, the trace UI renders the returned traces in a paginated view. There is a limit to how many traces can be rendered in a single cell output: by default the maximum is 10, but this can be configured by setting the MLFLOW_MAX_TRACES_TO_DISPLAY_IN_NOTEBOOK environment variable.
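For example, assuming the variable is read when the traces are rendered, you could raise the limit in the notebook before calling mlflow.search_traces() (the value 25 is arbitrary):

[ ]:
import os

# allow up to 25 traces per cell output instead of the default 10
# (25 is an arbitrary example value)
os.environ["MLFLOW_MAX_TRACES_TO_DISPLAY_IN_NOTEBOOK"] = "25"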
[ ]:
mlflow.search_traces(experiment_ids=[experiment.experiment_id])
Disabling the UI
The display is enabled by default, but if you’d prefer for it not to be shown, you can call mlflow.tracing.disable_notebook_display() to disable it. You will have to rerun the cells (or simply clear the cell outputs) to remove the displays that have already rendered.
If you’d like to re-enable the auto-display functionality, simply call mlflow.tracing.enable_notebook_display().
[ ]:
mlflow.tracing.disable_notebook_display()
# no UI will be rendered
trace
[ ]:
# re-enable the display
mlflow.tracing.enable_notebook_display()
trace
Conclusion
That’s the basics! We hope you’ll find the Jupyter integration useful. As always, please file an issue at https://github.com/mlflow/mlflow/issues if you find any problems, or if you want to leave any feedback.
In the next few cells, we have a short RAG demo that creates a trace with more realistic data, so you can get a better feel for what working with this UI is like.
[ ]:
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
# define necessary RAG entities
llm = ChatOpenAI(model="gpt-4o-mini")
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
vector_store = InMemoryVectorStore(embeddings)
[ ]:
import bs4
from langchain import hub
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.runnables import RunnablePassthrough
from langchain_text_splitters import RecursiveCharacterTextSplitter
# generate sample doc chunks from the MLflow documentation
loader = WebBaseLoader(
    web_paths=("https://mlflow.org/docs/latest/llms/tracing/index.html",),
    bs_kwargs={"parse_only": bs4.SoupStrainer(class_="document")},
)
docs = loader.load()
# add documents to the vector store
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
all_splits = text_splitter.split_documents(docs)
vector_store.add_documents(documents=all_splits)
retriever = vector_store.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 3},
)
# Define prompt for question-answering
prompt = hub.pull("rlm/rag-prompt")
[ ]:
# define our chain
chain = {"context": retriever, "question": RunnablePassthrough()} | prompt | llm
[ ]:
import mlflow
# call the langchain autolog function so that traces will be generated
mlflow.langchain.autolog()
response = chain.invoke("What is MLflow Tracing?")
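Since the chain ends with the ChatOpenAI model, chain.invoke() should return a LangChain AIMessage whose content attribute holds the generated answer; you can print it to compare against the rendered trace:

[ ]:
# inspect the answer text produced by the RAG chain
print(response.content)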