The Model Catalog is a centralized hub for viewing and managing all AI providers and models within your organization. It abstracts raw API keys and scattered environment variables into governed Provider Integrations and Models, giving you complete control over how your teams access and use AI.

Upgrading from Virtual Keys

The Model Catalog upgrades the Virtual Key experience by introducing a centralized, organization-level management layer, offering advantages like:
  • Centralized provider and model management: no more duplicate configs across workspaces.
  • Fine-grained control: budgets, rate limits, and model allow-lists at both the org and workspace level.
  • Inline usage: just pass model="@provider/model_slug" in your request.
Need help? See our Migration Guide ➜
Model Catalog - Provider and Models

AI Providers

AI Providers are what you reference in your code. Each provider has:
  • ✅ A unique slug (e.g., @openai-prod)
  • ✅ Securely stored credentials
  • ✅ Budget and rate limits
  • ✅ Access to specific models
To get started: add a provider, then reference @provider-slug/model-name in your code.
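The slug convention can be illustrated with a small helper. This is a hypothetical utility (parseModelString is not part of the Portkey SDK), just a sketch of how a @provider-slug/model-name string splits apart:

```typescript
// Hypothetical helper: splits a "@provider-slug/model-name" string
// into its provider and model parts. Not part of the Portkey SDK.
function parseModelString(model: string): { provider: string; model: string } {
  if (!model.startsWith('@')) {
    throw new Error('Model string must start with "@", e.g. "@openai-prod/gpt-4o"');
  }
  const slashIndex = model.indexOf('/');
  if (slashIndex === -1) {
    throw new Error('Expected "@provider-slug/model-name"');
  }
  return {
    provider: model.slice(1, slashIndex), // e.g. "openai-prod"
    model: model.slice(slashIndex + 1),   // e.g. "gpt-4o"
  };
}
```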

Models

The Models section is a gallery of all AI models available in your workspace. Each Model entry includes:
  • ✅ Model slug (@openai-prod/gpt-4o)
  • ✅ Ready-to-use code snippets
  • ✅ Input/output token limits
  • ✅ Pricing information (where available)
View all available models →

Adding an AI Provider

Add providers via the UI (follow the steps below) or the API.
Step 1: Go to AI Providers → Add Provider

Navigate to the Model Catalog in your Portkey dashboard.
Portkey Model Catalog - Add Provider
Step 2: Select the AI Service

Choose from the list (OpenAI, Anthropic, etc.) or select Self-hosted / Custom.
Portkey Model Catalog - Add Provider - Choose Service
Step 3: Choose or Create Credentials

If credentials already exist:
  • Select from the dropdown (if your org admin set them up)
  • Skip to step 4 - no API keys needed!
If creating new credentials:
  • Choose “Create new credentials”
  • Enter your API keys here
Creating new credentials here automatically creates a workspace-linked integration. To share credentials across multiple workspaces, create them in the Integrations page (org admin only).
Model Catalog - Add credentials
Step 4: Name your provider & save

Choose a name and slug for this provider. The slug (e.g., openai-prod) will be used in your code like @openai-prod/gpt-4o.
Model Catalog - Add Provider Details

Using Provider Models

Once you have AI Providers set up, use their models in your applications. There are three methods; we recommend the model prefix format for clarity. In Portkey, model strings follow this format: @provider_slug/model_name
Model String Format
For example, @openai-prod/gpt-4o, @anthropic/claude-3-sonnet, @bedrock-us/claude-3-sonnet-v1
1. Using the model prefix (Recommended)

import { Portkey } from 'portkey-ai';
const client = new Portkey({ apiKey: "PORTKEY_API_KEY" });

const resp = await client.chat.completions.create({
  model: '@openai-prod/gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

2. Using the provider header

Specify the provider separately using the provider parameter. Remember to add the @ before your provider slug.
import { Portkey } from 'portkey-ai';
const client = new Portkey({
	apiKey: "PORTKEY_API_KEY",
	provider: "@openai-prod"
});

const resp = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

3. Specify provider in the config

Portkey’s configs are simple JSON structures that help you define routing logic for LLM requests. Learn more here.

There are three ways to specify providers in configs:

Method 1: Model in override_params (Recommended)

Specify the provider and model together in override_params. This works well with multi-provider strategies:
{
	"strategy": { "mode": "fallback" },
	"targets": [{
		"override_params": { "model": "@openai-prod/gpt-4o" }
	}, {
		"override_params": { "model": "@anthropic/claude-3-sonnet" }
	}]
}
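As a sketch, a fallback config like the one above is an ordinary object you can build and inspect in code before handing it to the client at initialization (the usage shown in the comment is illustrative):

```typescript
// Fallback config mirroring the example above: try OpenAI first,
// then fall back to Anthropic. Illustratively, this object would be
// passed at client initialization, e.g.
//   const client = new Portkey({ apiKey: 'PORTKEY_API_KEY', config });
const config = {
  strategy: { mode: 'fallback' },
  targets: [
    { override_params: { model: '@openai-prod/gpt-4o' } },
    { override_params: { model: '@anthropic/claude-3-sonnet' } },
  ],
};
```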
Method 2: Provider in target

Specify the provider directly in the target (remember the @ symbol):
{
	"strategy": { "mode": "single" },
	"targets": [{
		"provider": "@openai-prod",
		"override_params": {
			"model": "gpt-4o"
		}
	}]
}
Method 3: Legacy virtual_key (Backwards Compatible)

The virtual_key field still works:
{
	"strategy": { "mode": "single" },
	"targets": [{
		"virtual_key": "openai-prod"
	}]
}
Ordering: the config (if provided) defines the base; override_params merges on top (last write wins for scalars, deep merge for objects like metadata).
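The merge rule can be sketched as follows. This is an illustrative implementation of the described semantics (last write wins for scalars, deep merge for objects), not Portkey's actual gateway code:

```typescript
// Illustrative merge: override values win for scalars and arrays,
// while plain objects (like `metadata`) are merged key by key.
type Params = Record<string, unknown>;

function isPlainObject(v: unknown): v is Params {
  return typeof v === 'object' && v !== null && !Array.isArray(v);
}

function mergeParams(base: Params, override: Params): Params {
  const result: Params = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const existing = result[key];
    if (isPlainObject(existing) && isPlainObject(value)) {
      result[key] = mergeParams(existing, value); // deep merge for objects
    } else {
      result[key] = value; // last write wins for scalars
    }
  }
  return result;
}
```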

How It Works: Credentials → Providers → Models

Virtual Keys

Learn how Portkey’s virtual key system works: use one Portkey API key to access multiple providers and models
Think of it like a password manager:
  1. Store your credentials once (at the org level) - This is called an “Integration”
    • Like saving your OpenAI API key in a password vault
    • You can share it with multiple workspaces without re-entering it
  2. Use it in your workspace - This becomes a “Provider”
    • Like having a saved login that appears in your workspace
    • Each workspace can have different settings (budgets, rate limits) for the same credentials
  3. Call specific models - Use the model slug in your code
    • Format: @provider-slug/model-name (e.g., @openai-prod/gpt-4o)
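The three-level hierarchy above can be modeled as plain types. These type names (Integration, Provider) and fields are illustrative, not the SDK's:

```typescript
// Illustrative model of the hierarchy: org-level credentials (an
// Integration) are exposed to workspaces as Providers, which in turn
// expose model slugs. Names and fields are assumptions for the sketch.
interface Integration {
  name: string; // e.g. "OpenAI production key", stored once at the org level
}

interface Provider {
  slug: string;            // e.g. "openai-prod"
  integration: Integration;
  budgetUsd?: number;      // per-workspace settings can differ per Provider
}

// The string used in code combines the provider slug and model name.
function modelString(provider: Provider, model: string): string {
  return `@${provider.slug}/${model}`;
}
```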
Quick Start: When adding a provider in Model Catalog, choose either:
  • Use existing credentials from your organization (if your admin set them up)
  • Create new credentials for just this workspace (creates a workspace-linked integration automatically)

Manage Credentials

For org admins: Learn how to centrally manage credentials and share them across workspaces

Managing Access and Controls

Each Integration in Portkey acts as a control point where you can configure:

Budget Limits

Set spending controls at the Integration level to prevent unexpected costs. You can configure:
  • Cost-based limits: Maximum spend in USD (e.g., $1000/month)
  • Token-based limits: Maximum tokens consumed (e.g., 10M tokens/week)
  • Periodic resets: Weekly or monthly budget refreshes
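As a sketch of the controls above (cost-based limits, token-based limits, periodic resets), here is an illustrative budget check; this is not Portkey's enforcement code, and the field names are assumptions:

```typescript
// Illustrative budget check for an Integration. Field names are
// assumptions for the sketch, not Portkey's actual schema.
interface BudgetLimit {
  maxCostUsd?: number;  // e.g. 1000 per period
  maxTokens?: number;   // e.g. 10_000_000 per period
  resetPeriod: 'weekly' | 'monthly'; // usage counters reset each period
}

interface UsageThisPeriod {
  costUsd: number;
  tokens: number;
}

// Returns true if a request may proceed under the given limits.
function withinBudget(limit: BudgetLimit, usage: UsageThisPeriod): boolean {
  if (limit.maxCostUsd !== undefined && usage.costUsd >= limit.maxCostUsd) return false;
  if (limit.maxTokens !== undefined && usage.tokens >= limit.maxTokens) return false;
  return true;
}
```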
Budget Limits Configuration
These limits cascade down to all AI Providers created from that Integration.

Budget Management

Set up cost controls and spending limits for your AI usage

Rate Limits

Control request rates to manage load and prevent abuse:
  • Requests per minute/hour/day: Set appropriate throughput limits
  • Concurrent request limits: Control parallel processing
  • Burst protection: Prevent sudden spikes in usage
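The request-rate controls above can be illustrated with a simple fixed-window counter; this is an illustrative sketch, not the gateway's actual limiter:

```typescript
// Illustrative fixed-window rate limiter: at most `limit` requests per
// window of `windowMs` milliseconds. Not Portkey's implementation.
class FixedWindowLimiter {
  private count = 0;
  private windowStart = 0;

  constructor(private limit: number, private windowMs: number) {}

  allow(nowMs: number): boolean {
    if (nowMs - this.windowStart >= this.windowMs) {
      this.windowStart = nowMs; // start a new window
      this.count = 0;
    }
    if (this.count >= this.limit) return false; // burst protection
    this.count += 1;
    return true;
  }
}
```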
Rate limits help you maintain service quality and prevent any single user or team from monopolizing resources.

Rate Limiting

Configure request rate controls to ensure fair usage and prevent abuse

Workspace Provisioning

Control which workspaces in your organization can access specific AI Providers:
  • Selective access: Choose which teams can use production vs development providers
  • Environment isolation: Keep staging and production resources separate
  • Department-level control: Give finance different access than engineering
Workspace Provisioning Interface
This hierarchical approach ensures teams only have access to the resources they need.

Workspace Provisioning

Manage workspace access to AI providers and models

Model Provisioning

Fine-tune which models are available through each Integration:
  • Model allowlists: Only expose specific models (e.g., only GPT-4 for production)
  • Model denylists: Block access to expensive or experimental models
  • Custom model addition: Add your fine-tuned or self-hosted models
Model Provisioning Settings
Model provisioning helps you maintain consistency and control costs across your organization.

Model Provisioning

Configure which models are available through each integration

Advanced Model Management

Custom Models

The Model Catalog isn’t limited to standard provider models. You can add:
  • Fine-tuned models: Your custom OpenAI or Anthropic fine-tunes
  • Self-hosted models: Models running on your infrastructure
  • Private models: Internal models not publicly available
Each custom model gets the same governance controls as standard models.

Custom Models

Add and manage your fine-tuned, self-hosted, or private models

Overriding Model Details (Custom Pricing)

Override default model pricing for:
  • Negotiated rates: If you have enterprise agreements with providers
  • Internal chargebacks: Set custom rates for internal cost allocation
  • Free tier models: Mark certain models as free for specific teams
Custom pricing ensures your cost tracking accurately reflects your actual spend.

Custom Pricing

Configure custom pricing for models with special rates

Self-hosted AI Providers

Coming Soon