Arize Phoenix provides open-source AI observability with powerful visualization and debugging tools. Combined with Portkey, you get automatic trace collection plus gateway features like caching, fallbacks, and load balancing.

Why Phoenix + Portkey?

  • Visual Debugging: A powerful UI for exploring traces and spans and debugging LLM behavior
  • OpenInference Standard: Industry-standard semantic conventions for AI/LLM observability
  • Evaluation Tools: Built-in tools for evaluating model performance and behavior
  • Gateway Intelligence: Portkey adds caching, fallbacks, and load balancing to every request

Quick Start

pip install arize-phoenix-otel openai openinference-instrumentation-openai
import os
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
from openai import OpenAI

# Send traces to Portkey
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"

# Initialize Phoenix; keep the returned tracer provider, since with
# set_global_tracer_provider=False the instrumentor must be given it explicitly
tracer_provider = register(set_global_tracer_provider=False)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# Use Portkey gateway
client = OpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1"
)

response = client.chat.completions.create(
    model="@openai-prod/gpt-4.1",  # Provider slug from Model Catalog
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
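The client above is a plain passthrough; Portkey's gateway features are enabled by attaching a config. As a minimal sketch (the x-portkey-config header accepts an inline JSON config or a saved config ID; the fallback schema and the @anthropic-prod slug here are illustrative, so check Portkey's Configs reference for the exact shape):

import json
from openai import OpenAI

# Illustrative fallback config: try the OpenAI provider first,
# then fall back to an Anthropic provider from the Model Catalog
config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"override_params": {"model": "@openai-prod/gpt-4.1"}},
        {"override_params": {"model": "@anthropic-prod/claude-sonnet-4"}},  # hypothetical slug
    ],
}

fallback_client = OpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers={"x-portkey-config": json.dumps(config)},
)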

Setup

  1. Add provider in Model Catalog → get provider slug (e.g., @openai-prod)
  2. Get Portkey API key
  3. Use model="@provider-slug/model-name" in requests (see the example below)
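For example, after saving an Anthropic provider under a slug such as @anthropic-prod (a hypothetical name), switching providers is just a change to the model string:

response = client.chat.completions.create(
    model="@anthropic-prod/claude-sonnet-4",  # hypothetical slug and model
    messages=[{"role": "user", "content": "Hello!"}]
)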

What Gets Captured

Phoenix uses OpenInference semantic conventions:
  • Messages: Full conversation history with roles and content
  • Model Info: Model name, temperature, and parameters
  • Token Usage: Input/output token counts for cost tracking
  • Errors: Detailed error information when requests fail
  • Latency: End-to-end request timing
Supported providers: OpenAI, Anthropic, Bedrock, Vertex AI, Azure OpenAI, and more.
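Token counts in particular are easy to sanity-check against the raw response; the OpenAI SDK's usage block carries the same numbers Phoenix records on the span:

# The usage block mirrors the token counts captured on the trace
print(response.usage.prompt_tokens)      # input tokens
print(response.usage.completion_tokens)  # output tokens
print(response.usage.total_tokens)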

Configuration Options

Custom Span Attributes

Add custom attributes to your traces:
# Get a tracer from the provider returned by register(); with
# set_global_tracer_provider=False, the global tracer would not export
tracer = tracer_provider.get_tracer(__name__)

with tracer.start_as_current_span("custom_operation") as span:
    span.set_attribute("user.id", "user123")
    span.set_attribute("session.id", "session456")
    response = client.chat.completions.create(...)
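For session- and user-level metadata specifically, the openinference-instrumentation package also provides a using_attributes context manager that stamps the conventional OpenInference attributes onto spans created inside it (a sketch; verify the signature against your installed version):

from openinference.instrumentation import using_attributes

# Spans created inside this block carry the session/user attributes
with using_attributes(session_id="session456", user_id="user123"):
    response = client.chat.completions.create(...)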

Sampling Configuration

Control trace sampling for production:
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

# Sample 10% of traces
register(
    set_global_tracer_provider=False,
    sampler=TraceIdRatioBased(0.1)
)
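If your Phoenix version's register() does not accept a sampler argument, the standard OpenTelemetry environment variables express the same ratio-based sampling (a sketch that relies on the OTel SDK reading these when the tracer provider is constructed):

import os

# Equivalent 10% head sampling via standard OTel SDK env vars;
# set these before register() builds the tracer provider
os.environ["OTEL_TRACES_SAMPLER"] = "traceidratio"
os.environ["OTEL_TRACES_SAMPLER_ARG"] = "0.1"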

Troubleshooting

Ensure both OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are correctly set before initializing Phoenix.
Call OpenAIInstrumentor().instrument() before creating your OpenAI client.
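If traces still are not appearing, a quick local check is to mirror spans to stdout. Since register() returns the tracer provider, you can attach an extra processor (a minimal sketch assuming a standard OTel SDK TracerProvider):

from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print every finished span locally, in addition to exporting to Portkey
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))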

See Your Traces in Action

Once configured, view your Phoenix instrumentation combined with Portkey gateway intelligence in the Portkey dashboard:
[Screenshot: OpenTelemetry traces in Portkey]