Auto Tracing Integrations
MLflow Tracing is integrated with various GenAI libraries and provides a one-line automatic tracing experience for each library (and combinations of them!). Click an icon below for detailed examples of integrating MLflow with your favorite library.
Is your favorite library missing from the list? Consider contributing to MLflow Tracing or submitting a feature request to our GitHub repository.
Enabling Multiple Auto Tracing Integrations
As the GenAI tool ecosystem grows, it is increasingly common to combine multiple libraries to build a compound AI system. With MLflow Tracing, you can enable auto-tracing for such multi-framework models and get a unified tracing experience.
For example, the following code enables both LangChain and OpenAI automatic tracing:
```python
import mlflow
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Enable MLflow Tracing for both LangChain and OpenAI
mlflow.langchain.autolog()
mlflow.openai.autolog()

# Optional: Set a tracking URI and an experiment
mlflow.set_experiment("LangChain")
mlflow.set_tracking_uri("http://localhost:5000")

# Define a chain that uses OpenAI as an LLM provider
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7, max_tokens=1000)

prompt_template = PromptTemplate.from_template(
    "Answer the question as if you are {person}, fully embodying their style, wit, personality, and habits of speech. "
    "Emulate their quirks and mannerisms to the best of your ability, embracing their traits, even if they aren't entirely "
    "constructive or inoffensive. The question is: {question}"
)

chain = prompt_template | llm | StrOutputParser()

chain.invoke(
    {
        "person": "Linus Torvalds",
        "question": "Can I just set everyone's access to sudo to make things easier?",
    }
)
```
MLflow generates a single trace that combines the LangChain steps and the OpenAI LLM call, allowing you to inspect the raw input and output passed to the OpenAI LLM.
Disabling Auto Tracing
Auto tracing for each library can be disabled by calling mlflow.<library>.autolog(disable=True). You can also disable tracing for all integrations at once with mlflow.autolog(disable=True).