Lightweight Tracing SDK Optimized for Production Usage

MLflow offers a lightweight tracing SDK package called mlflow-tracing that includes only the essential functionality for tracing and monitoring of your GenAI applications. This package is designed for production environments where minimizing dependencies and deployment size is critical.

Why Use the Lightweight SDK?

🏎️ Faster Deployment

The package size and dependencies are significantly smaller than the full MLflow package, allowing for faster deployment times in dynamic environments such as Docker containers, serverless functions, and cloud-based applications.

🔧 Simplified Dependency Management

A smaller set of dependencies means less work keeping up with dependency updates, security patches, and potential breaking changes from upstream libraries. It also reduces the chances of dependency collisions and incompatibilities.

📦 Enhanced Portability

With fewer dependencies, MLflow Tracing can be seamlessly deployed across different environments and platforms, without worrying about compatibility issues.

🔒 Reduced Security Risk

Each dependency potentially introduces security vulnerabilities. By reducing the number of dependencies, MLflow Tracing minimizes the attack surface and reduces the risk of security breaches.

Installation

Install the lightweight SDK using pip:

pip install mlflow-tracing

Warning: Do not install the full mlflow package together with the lightweight mlflow-tracing SDK, as this may cause version conflicts and namespace resolution issues.

Quickstart

Here's a simple example using the lightweight SDK with OpenAI for logging traces to an experiment on a remote MLflow server:

import mlflow
import openai

# Set the tracking URI to your MLflow server
mlflow.set_tracking_uri("http://your-mlflow-server:5000")
mlflow.set_experiment("genai-production-monitoring")

# Enable auto-tracing for OpenAI
mlflow.openai.autolog()

# Use OpenAI as usual - traces will be automatically logged
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is MLflow?"}],
)

print(response.choices[0].message.content)

Choose Your Backend

The lightweight SDK works with various observability platforms. Choose your preferred option and follow the instructions to set up your tracing backend.

MLflow is a fully open-source project, allowing you to self-host your own MLflow server in your own infrastructure. This is a great option if you want full control over your data or are restricted from using cloud-based services.

In self-hosting mode, you will be responsible for running the tracking server instance and scaling it to your needs. We strongly recommend using a SQL-based tracking server on top of a performant database to minimize operational overhead and ensure high availability.

Setup Steps:

  1. Install MLflow server: pip install mlflow[extras]
  2. Configure backend store (PostgreSQL/MySQL recommended)
  3. Configure artifact store (S3, Azure Blob, GCS, etc.)
  4. Start server: mlflow server --backend-store-uri postgresql://... --default-artifact-root s3://...

Refer to the tracking server setup guide for detailed guidance.

Supported Features

The lightweight SDK includes all essential tracing functionality for monitoring your GenAI applications. The sections below describe each supported feature.

⚡️ Automatic Tracing for 15+ AI Libraries

MLflow Tracing SDK supports one-line integration with all of the most popular LLM/GenAI libraries including OpenAI, Anthropic, LangChain, LlamaIndex, Hugging Face, DSPy, and any LLM provider that conforms to OpenAI API format. This automatic tracing capability allows you to monitor your GenAI application with minimal effort and easily switch between different libraries.
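
As a sketch, enabling additional integrations follows the same one-line mlflow.&lt;library&gt;.autolog() pattern as the OpenAI quickstart above; which integrations are available depends on your MLflow version:

import mlflow

# Each integration is enabled with a single call; invoke only the ones
# whose libraries your application actually uses.
mlflow.openai.autolog()
mlflow.langchain.autolog()
mlflow.anthropic.autolog()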

⚙️ Manual Instrumentation

MLflow Tracing SDK provides a simple and intuitive API for manually instrumenting your GenAI application. Manual instrumentation and automatic tracing can be used together, allowing you to trace advanced applications containing custom code and have fine-grained control over the tracing behavior.
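
A minimal sketch of combining both approaches, using the @mlflow.trace decorator together with the mlflow.start_span context manager (the function name and placeholder retrieval logic are illustrative):

import mlflow


@mlflow.trace(span_type="RETRIEVER")
def retrieve_documents(query: str) -> list:
    # A nested span captures fine-grained timing inside the traced function.
    with mlflow.start_span(name="keyword_filter") as span:
        span.set_inputs({"query": query})
        docs = ["doc-a", "doc-b"]  # placeholder retrieval logic
        span.set_outputs({"num_docs": len(docs)})
    return docs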

🏷️ Tagging and Filtering Traces

By annotating traces with custom tags, you can add more context to your traces to group them and simplify the process of searching for them later. This is useful when you want to trace an application that runs across multiple request sessions or track specific user interactions.
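
For example, a minimal sketch of attaching session context to the current trace with mlflow.update_current_trace (the tag keys here are arbitrary choices for illustration):

import mlflow


@mlflow.trace
def answer_question(question: str, session_id: str) -> str:
    # Tags make it easy to later group or filter traces by session.
    mlflow.update_current_trace(tags={"session_id": session_id, "channel": "web"})
    return f"You asked: {question}"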

🔍 Advanced Search and Querying

Search and filter traces using powerful SQL-like syntax based on execution time, status, tags, metadata, and other attributes. Perfect for debugging issues, analyzing performance patterns, and monitoring production applications.
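
As a sketch, assuming pandas is installed (mlflow.search_traces returns a pandas DataFrame) and using illustrative tag values:

import mlflow

mlflow.set_tracking_uri("http://your-mlflow-server:5000")
mlflow.set_experiment("genai-production-monitoring")

# Find recent failed production traces; the filter syntax is SQL-like.
traces = mlflow.search_traces(
    filter_string="tags.environment = 'production' AND status = 'ERROR'",
    order_by=["timestamp_ms DESC"],
    max_results=50,
)
print(traces.head())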

📊 Production Monitoring

Configure asynchronous logging, handle high-volume tracing, and integrate with enterprise observability platforms. Includes comprehensive production deployment patterns and best practices for scaling your tracing infrastructure.
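
A sketch of enabling asynchronous trace export through environment variables; the variable names below follow MLflow's production monitoring documentation, so verify them against the MLflow version you deploy:

import os

# Set these before MLflow initializes its exporter, e.g. at process startup.
os.environ["MLFLOW_ENABLE_ASYNC_TRACE_LOGGING"] = "true"  # don't block requests on export
os.environ["MLFLOW_ASYNC_TRACE_LOGGING_MAX_WORKERS"] = "10"  # exporter thread pool size
os.environ["MLFLOW_ASYNC_TRACE_LOGGING_MAX_QUEUE_SIZE"] = "1000"  # buffered traces before dropping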

Production Configuration Example

Here's a complete example of setting up the lightweight SDK for production use:

import mlflow
import os
from your_app import process_user_request

# Configure MLflow for production
mlflow.set_tracking_uri(os.getenv("MLFLOW_TRACKING_URI", "http://mlflow-server:5000"))
mlflow.set_experiment(os.getenv("MLFLOW_EXPERIMENT_NAME", "production-genai-app"))

# Enable automatic tracing for your LLM library
mlflow.openai.autolog()  # or mlflow.langchain.autolog(), etc.


@mlflow.trace
def handle_user_request(user_id: str, session_id: str, message: str):
    """Production endpoint with comprehensive tracing."""

    # Add production context to the trace
    mlflow.update_current_trace(
        tags={
            "user_id": user_id,
            "session_id": session_id,
            "environment": "production",
            "service_version": os.getenv("SERVICE_VERSION", "1.0.0"),
        }
    )

    try:
        # Your application logic here
        response = process_user_request(message)

        # Log success metrics
        mlflow.update_current_trace(
            tags={"response_length": len(response), "processing_successful": True}
        )

        return response

    except Exception as e:
        # Log error information
        mlflow.update_current_trace(
            tags={
                "error": True,
                "error_type": type(e).__name__,
                "error_message": str(e),
            },
        )
        raise

Features Not Included

The following MLflow features are not available in the lightweight package:

  • MLflow Tracking Server and UI - Use the full MLflow package to run the server
  • Run Management APIs - mlflow.start_run(), mlflow.log_metric(), etc.
  • Model Logging and Evaluation - Model serialization and evaluation frameworks
  • Model Registry - Model versioning and lifecycle management
  • MLflow Projects - Reproducible ML project format
  • MLflow Recipes - Predefined ML workflows
  • Other MLflow Components - Features unrelated to tracing

For these features, use the full MLflow package: pip install mlflow

Migration from Full MLflow

If you're currently using the full MLflow package and want to switch to the lightweight SDK for production:

1. Update Dependencies

# Remove full MLflow
pip uninstall mlflow

# Install lightweight SDK
pip install mlflow-tracing

2. Update Import Statements

Most tracing functionality remains the same:

# These imports work the same way
import mlflow
import mlflow.openai
from mlflow.tracing import trace

# These features are NOT available in mlflow-tracing:
# import mlflow.sklearn # ❌ Model logging
# mlflow.start_run() # ❌ Run management
# mlflow.log_metric() # ❌ Metric logging

3. Update Configuration

Focus on tracing-specific configuration:

# Configure tracking URI (same as before)
mlflow.set_tracking_uri("http://your-server:5000")
mlflow.set_experiment("your-experiment")


# Tracing works the same way
@mlflow.trace
def your_function():
    # Your code here
    pass

Package Size Comparison

Package         | Size    | Dependencies | Use Case
mlflow          | ~1000MB | 20+ packages | Development, experimentation, full ML lifecycle
mlflow-tracing  | ~5MB    | 5-8 packages | Production tracing, monitoring, observability

The lightweight SDK is over 95% smaller than the full MLflow package, making it ideal for:

  • Container deployments
  • Serverless functions
  • Edge computing
  • Production microservices
  • CI/CD pipelines

Summary

The MLflow Tracing SDK provides a production-optimized solution for monitoring GenAI applications with:

  • Minimal footprint for fast deployments
  • Full tracing capabilities for comprehensive monitoring
  • Flexible backend options from self-hosted to enterprise platforms
  • Easy migration path from full MLflow package
  • Production-ready features including async logging and error handling

Whether you're running a small prototype or a large-scale production system, the lightweight SDK provides the observability you need without the overhead you don't.