Tracing a GenAI App (IDE)

This quickstart helps you integrate your GenAI app with MLflow Tracing using MLflow OSS in your local IDE development environment.

What you'll achieve

By the end of this tutorial, you will have:

  • Created a local MLflow Experiment for your GenAI app
  • Set up your local development environment with MLflow OSS
  • Used MLflow Tracing to instrument your app

Prerequisites

  • Python 3.9+: Local Python environment
  • OpenAI API Key: For accessing OpenAI models (or an API key for another LLM provider)

Step 1: Install MLflow

When working in your local IDE, you need to install MLflow OSS.

pip install --upgrade "mlflow>=3.1"

MLflow Version Recommendation

While tracing features are available in MLflow 2.15.0+, it is strongly recommended to install MLflow 3 (specifically 3.1 or newer) for the latest GenAI capabilities, including expanded tracing features and robust support.
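To confirm which version you have installed, you can check it from Python (mlflow.__version__ is the package's version attribute):

    import mlflow

    # Print the installed MLflow version; it should be 3.1 or newer
    print(mlflow.__version__)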

Step 2: Start MLflow Tracking Server

note

An MLflow Experiment is the container for your GenAI application. With MLflow OSS, you can run a local tracking server to store your experiments and traces.

  1. Start the MLflow tracking server in your terminal:

    mlflow server --host 127.0.0.1 --port 8080

  2. Open your browser and navigate to http://127.0.0.1:8080 to access the MLflow UI

  3. You'll see the MLflow tracking interface where experiments and traces will be displayed
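Optionally, you can create and select an experiment programmatically rather than using the default. A minimal sketch, assuming the server above is running; the experiment name is an arbitrary example:

    import mlflow

    # Point the client at the local tracking server
    mlflow.set_tracking_uri("http://127.0.0.1:8080")

    # Creates the experiment if it does not exist, then makes it active
    mlflow.set_experiment("genai-tracing-quickstart")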

Step 3: Set up your environment

note

For MLflow OSS, you'll need to configure your environment to connect to your local tracking server and set up API keys for your LLM provider.

Set the following environment variables in your terminal:

export MLFLOW_TRACKING_URI=http://127.0.0.1:8080
export OPENAI_API_KEY=<your-openai-api-key>
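If you prefer to keep configuration in code (for example, when running from an IDE without a shell profile), the same settings can be applied with os.environ before any clients are created; the API key value is a placeholder:

    import os

    # Equivalent to the shell exports above
    os.environ["MLFLOW_TRACKING_URI"] = "http://127.0.0.1:8080"
    os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"  # placeholder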

Step 4: Instrument your application

This example uses the OpenAI integration; the steps are similar for other providers:

  1. Install the required Python packages:

    pip install -U mlflow openai

  2. Create a Python file named app.py in your project directory:

    Here, we use the @mlflow.trace decorator, which makes it easy to trace any Python function, combined with the OpenAI automatic instrumentation to capture the details of calls to the OpenAI SDK.

    import mlflow
    import openai

    # Set the MLflow tracking URI
    mlflow.set_tracking_uri("http://127.0.0.1:8080")

    # Enable MLflow's autologging to instrument your application with Tracing
    mlflow.openai.autolog()

    # Create an OpenAI client (reads OPENAI_API_KEY from the environment)
    client = openai.OpenAI()


    # Use the trace decorator to capture the application's entry point
    @mlflow.trace
    def my_app(input: str):
        # This call is automatically instrumented by `mlflow.openai.autolog()`
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": input},
            ],
        )
        return response.choices[0].message.content


    my_app(input="What is MLflow?")

  3. Run the application:

    python app.py

Step 5: View the Trace in MLflow

  1. Navigate to your MLflow UI at http://127.0.0.1:8080
  2. You will now see the generated trace in the Traces tab
  3. Click on the trace to view its details

Understanding the Trace

The trace you've just created shows:

  • Root Span: Represents the my_app(...) function call, including its inputs and output
    • Child Span: Represents the LLM completion request made via the OpenAI SDK
  • Attributes: Contains metadata like model name, token counts, and timing information
  • Inputs: The messages sent to the model
  • Outputs: The response received from the model

This simple trace already provides valuable insights into your application's behavior, such as:

  • What was asked
  • What response was generated
  • How long the request took
  • How many tokens were used (affecting cost)
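Beyond the UI, traces can also be queried programmatically. A minimal sketch using mlflow.search_traces, which returns a pandas DataFrame of traces from the active experiment (the experiment name is the hypothetical one from Step 2):

    import mlflow

    mlflow.set_tracking_uri("http://127.0.0.1:8080")
    mlflow.set_experiment("genai-tracing-quickstart")  # hypothetical name from Step 2

    # Fetch the most recent traces from the active experiment
    traces = mlflow.search_traces(max_results=5)
    print(traces.head())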

Next Steps

Congratulations! You've successfully built your first GenAI application with MLflow OSS Tracing!

tip

For more complex applications like RAG systems or multi-step agents, MLflow Tracing provides even more value by revealing the inner workings of each component and step. You can also experiment with different LLM providers and compare their performance using MLflow's tracking capabilities.
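As an illustration, the sketch below uses hypothetical retrieve_docs and generate_answer functions standing in for real retrieval and LLM calls; each @mlflow.trace-decorated function appears as its own span, nested under the entry point's root span:

    import mlflow

    mlflow.set_tracking_uri("http://127.0.0.1:8080")


    @mlflow.trace
    def retrieve_docs(query: str) -> list:
        # Hypothetical retrieval step -- replace with a real vector store lookup
        return ["MLflow Tracing captures each step of a GenAI app."]


    @mlflow.trace
    def generate_answer(query: str, docs: list) -> str:
        # Hypothetical generation step -- replace with a real LLM call
        return f"Answer to {query!r}, grounded in {len(docs)} document(s)."


    @mlflow.trace
    def rag_app(query: str) -> str:
        # Functions called inside a traced function become child spans
        docs = retrieve_docs(query)
        return generate_answer(query, docs)


    rag_app("What is MLflow Tracing?")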