Tracing a GenAI App

MLflow Tracing provides comprehensive visibility into your GenAI application's execution, helping you debug, optimize, and understand your app's behavior. With tracing, you can see exactly what happens inside your application - from user inputs to model outputs, including all intermediate steps, latencies, and token usage.

Quick Example

Here's how easy it is to add tracing to your GenAI application:

import mlflow


@mlflow.trace
def ask_question(question: str) -> str:
"""Simple traced function that processes a question."""
response = call_llm(question)
return response


ask_question("What is MLflow?")

Choose your development environment

Select the quickstart guide that matches your development environment:

Development Environment          Use this guide if...

Notebook — Get started with MLflow Tracing directly in a notebook
  You develop in notebooks and want the simplest setup, with no authentication configuration needed.

Local IDE — Set up MLflow Tracing in your local development environment
  You develop in VS Code, PyCharm, or any other local IDE and need to connect to MLflow.

What you'll build

In either quickstart, you'll create a simple GenAI application that:

  • Automatically captures detailed traces of each request
  • Provides insights into token usage, latency, and application flow
  • Enables debugging and optimization of your GenAI pipeline

Ready to get started? Choose your development environment above to begin building your first traced GenAI application.