Lightweight Tracing SDK Optimized for Production Usage

MLflow offers a lightweight tracing SDK package called mlflow-tracing that includes only the essential functionality for tracing and monitoring your GenAI applications. This package is designed for production environments where minimizing dependencies and deployment size is a priority.

Why Use the Lightweight SDK?

🏎️ Faster Deployment

The package size and dependencies are significantly smaller than the full MLflow package, allowing for faster deployment times in dynamic environments such as Docker containers, serverless functions, and cloud-based applications.

🔧 Simplified Dependency Management

A smaller set of dependencies means less work keeping up with dependency updates, security patches, and potential breaking changes from upstream libraries. It also reduces the chances of dependency collisions and incompatibilities.

📦 Portability

With fewer dependencies, MLflow Tracing can be deployed seamlessly across different environments and platforms without compatibility concerns.

🔒 Fewer Security Risks

Each dependency potentially introduces security vulnerabilities. By reducing the number of dependencies, MLflow Tracing minimizes the attack surface and reduces the risk of security breaches.

Installation

Install the lightweight SDK using pip:

pip install mlflow-tracing
Warning: Do not install the full mlflow package together with the lightweight mlflow-tracing SDK, as this may cause version conflicts and namespace resolution issues.

Quickstart

Here's a simple example using the lightweight SDK with OpenAI for logging traces to an experiment on a remote MLflow server. If you haven't set up a remote MLflow server yet, please refer to the Choose Your Backend section below.

import mlflow
import openai

# Set the tracking URI (e.g., Databricks, SageMaker, or self-hosted server)
mlflow.set_tracking_uri("<your-tracking-uri>")
mlflow.set_experiment("<your-experiment-name>")

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Use OpenAI as usual
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "What is MLflow?"}],
)

Choose Your Backend

The lightweight SDK works with a range of observability platforms. Choose the one that fits your stack and follow its instructions to set up your tracing backend.

Databricks Lakehouse Monitoring for GenAI is a go-to solution for monitoring your GenAI application with MLflow Tracing. It provides access to a robust, fully functional monitoring dashboard for operational excellence and quality analysis.

Lakehouse Monitoring for GenAI can be used regardless of whether your application is hosted on Databricks or not.

Sign up for free and get started in a minute to run your GenAI application with complete observability.


Supported Features

The lightweight SDK includes all essential tracing functionality for monitoring your GenAI applications.

Features Not Included

The following MLflow features are not available in the lightweight package:

  • MLflow tracking server and UI
  • Run management APIs (e.g. mlflow.start_run)
  • Model logging and evaluation
  • Model / prompt registry
  • MLflow AI Gateway
  • Other MLflow features unrelated to tracing

For these features, install the full MLflow package with pip install mlflow.