MLflow Tracing provides automatic, no-code instrumentation for 20+ GenAI libraries. Combined with Portkey, you get comprehensive traces plus gateway features such as caching, fallbacks, and load balancing.

Why MLflow + Portkey?

No-Code Integrations: Automatic instrumentation for 20+ GenAI libraries with one line of code

Detailed Traces: Capture inputs, outputs, and metadata for every step

Debug with Confidence: Easily pinpoint issues with comprehensive trace data

Gateway Intelligence: Portkey adds caching, fallbacks, and load balancing to every request
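
The gateway features are configured per request. As a minimal sketch, you can attach a Portkey config to the OpenAI client via the x-portkey-config header; the config shape below is illustrative (check Portkey's gateway config reference for the exact schema), and the provider slugs and model names are example Model Catalog entries, not real values:

import json
from openai import OpenAI

# Illustrative gateway config: simple response caching plus a fallback
# chain across two providers (both slugs are example Model Catalog entries)
portkey_config = {
    "cache": {"mode": "simple"},
    "strategy": {"mode": "fallback"},
    "targets": [
        {"override_params": {"model": "@openai-prod/gpt-4.1"}},
        {"override_params": {"model": "@anthropic-prod/claude-sonnet-4"}},
    ],
}

client = OpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers={"x-portkey-config": json.dumps(portkey_config)},
)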

Quick Start

pip install mlflow openai opentelemetry-exporter-otlp-proto-http

import os
import mlflow
from openai import OpenAI

# Send traces to Portkey
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"
os.environ["OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"] = "http/protobuf"

# Enable MLflow instrumentation
mlflow.openai.autolog()

# Use Portkey gateway
client = OpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1"
)

response = client.chat.completions.create(
    model="@openai-prod/gpt-4.1",  # Provider slug from Model Catalog
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Setup

  1. Add a provider in the Model Catalog to get its provider slug (e.g., @openai-prod)
  2. Get your Portkey API key
  3. Use model="@provider-slug/model-name" in your requests

Supported Libraries

LLM Providers: OpenAI, Anthropic, Cohere, Google AI, Azure OpenAI
Frameworks: LangChain, LlamaIndex, Haystack
Vector Databases: Pinecone, ChromaDB, Weaviate, Qdrant
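
The pattern is the same for each library: enable the matching MLflow autolog hook and point the client at the Portkey gateway. For example, a sketch with LangChain, assuming the langchain-openai package is installed and the OTEL_EXPORTER_* variables are set as in the Quick Start (the provider slug is an example):

import mlflow
from langchain_openai import ChatOpenAI

# One-line instrumentation for LangChain, analogous to mlflow.openai.autolog()
mlflow.langchain.autolog()

# Route the chat model through the Portkey gateway
llm = ChatOpenAI(
    model="@openai-prod/gpt-4.1",  # example provider slug from Model Catalog
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
)

print(llm.invoke("Hello!").content)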

See Your Traces in Action

Once configured, view your MLflow traces combined with Portkey's gateway intelligence in the Portkey dashboard:
[Screenshot: OpenTelemetry traces in Portkey]