Available on all Portkey plans.
Specify a prioritized list of providers/models. If the primary LLM fails, Portkey automatically falls back to the next in line.

Examples

{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
  ]
}
The @provider-slug/model-name format automatically routes to the correct provider. Set up providers in Model Catalog.
Create and use configs in your requests.
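As a sketch, a config like the one above can be written as a plain dict and attached to a request. The snippet below assumes the portkey-ai Python SDK; the provider slugs come from the example above, and the API key is a placeholder:

```python
# A minimal sketch, assuming the portkey-ai Python SDK.
# The config can be passed inline as a dict, or referenced by its saved Config ID.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"override_params": {"model": "@openai-prod/gpt-4o"}},
        {"override_params": {"model": "@anthropic-prod/claude-3-5-sonnet-20241022"}},
    ],
}

# from portkey_ai import Portkey
# client = Portkey(api_key="PORTKEY_API_KEY", config=fallback_config)
# completion = client.chat.completions.create(
#     messages=[{"role": "user", "content": "Hello"}],
# )
```

Because each target sets its model via override_params, the request itself does not need a model parameter.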

Trigger on Specific Status Codes

By default, fallback triggers on any non-2xx status code. Customize with on_status_codes:
{
  "strategy": { "mode": "fallback", "on_status_codes": [429, 503] },
  "targets": [
    { "provider": "@openai-prod" },
    { "provider": "@azure-prod" }
  ]
}
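Conceptually, the gateway behaves like the loop below: try each target in order, and move to the next one only when the response status is in on_status_codes. This is an illustrative sketch, not Portkey's implementation; call_target is a hypothetical stand-in for an actual provider call:

```python
def fallback(targets, call_target, on_status_codes=(429, 503)):
    """Try targets in order; fall back only on the configured status codes."""
    last = None
    for target in targets:
        status, body = call_target(target)
        last = (status, body)
        if status not in on_status_codes:
            return last  # success, or a non-retriable error: stop here
    return last  # every target failed with a retriable status code


# Example: the first provider is rate-limited (429), the second succeeds.
responses = {"@openai-prod": (429, "rate limited"), "@azure-prod": (200, "ok")}
status, body = fallback(["@openai-prod", "@azure-prod"], lambda t: responses[t])
```

Note that a 500 from the first target would be returned as-is here, because only 429 and 503 trigger the fallback.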

Tracing Fallback Requests

Portkey logs all requests in a fallback chain. To trace:
  1. Filter logs by Config ID to see all requests using that config
  2. Filter by Trace ID to see all attempts for a single request
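You can also supply the Trace ID yourself so every attempt in the chain is grouped under an ID you control. A minimal sketch over the REST API (the x-portkey-trace-id header name is assumed from Portkey's header conventions; the key value is a placeholder):

```python
import uuid

# Attach a trace ID header so all fallback attempts for this request
# share one Trace ID in the Portkey logs.
trace_id = str(uuid.uuid4())
headers = {
    "x-portkey-api-key": "PORTKEY_API_KEY",  # placeholder
    "x-portkey-trace-id": trace_id,          # groups all attempts for this request
}

# e.g. with requests:
# requests.post("https://api.portkey.ai/v1/chat/completions",
#               headers=headers, json=payload)
```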

Considerations

  • Ensure every fallback LLM is compatible with your use case, since any target in the chain may end up serving the request
  • A single request may invoke multiple LLMs before one succeeds
  • Each LLM in the chain has its own latency and pricing, so fallback attempts can change response time and cost