
Quick Start

Get started with Predibase in under 2 minutes:
from portkey_ai import Portkey

# 1. Install: pip install portkey-ai
# 2. Add @predibase provider in model catalog
# 3. Use it:

portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.chat.completions.create(
    model="@predibase/llama-3-8b-instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Add Provider in Model Catalog

Before making requests, add Predibase to your Model Catalog:
  1. Go to Model Catalog → Add Provider
  2. Select Predibase
  3. Enter your Predibase API key
  4. Name your provider (e.g., predibase)

Complete Setup Guide

See all setup options and detailed configuration instructions

Predibase Capabilities

Serverless Endpoints

Predibase offers LLMs such as Llama 3, Mistral, and Gemma on its serverless infrastructure, which you can query instantly.

Sending Predibase Tenant ID

Predibase expects your account tenant ID along with the API key in each request. With Portkey, you can send your tenant ID in the user parameter of your request.
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@predibase")

response = portkey.chat.completions.create(
    model="llama-3-8b-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    user="PREDIBASE_TENANT_ID"  # Required: Your Predibase tenant ID
)

print(response.choices[0].message.content)
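Because a missing tenant ID only surfaces as an error at request time, you can make the requirement explicit with a small pre-flight check before building the request. This is an illustrative sketch, not part of the Portkey SDK; with_tenant is a hypothetical helper name:

```python
def with_tenant(params: dict, tenant_id: str) -> dict:
    """Attach the Predibase tenant ID (sent via the `user` field) to request kwargs."""
    if not tenant_id:
        raise ValueError("Predibase requires a tenant ID in the `user` field")
    return {**params, "user": tenant_id}

kwargs = with_tenant(
    {
        "model": "llama-3-8b-instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    "PREDIBASE_TENANT_ID",
)
# kwargs can then be splatted into portkey.chat.completions.create(**kwargs)
```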

Using Fine-Tuned Models

Predibase allows you to deploy and use fine-tuned models with adapters. Reference them using the format below, with the model and adapter identifiers from your Predibase dashboard:
Fine-Tuned Model Format:
model = base_model:adapter-repo-name/adapter-version-number
For example: llama-3-8b:sentiment-analysis/1
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@predibase")

response = portkey.chat.completions.create(
    model="llama-3-8b:sentiment-analysis/1",  # Base model + adapter
    messages=[{"role": "user", "content": "This product is amazing!"}],
    user="PREDIBASE_TENANT_ID"
)

print(response.choices[0].message.content)
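The base-model:adapter-repo/version string can also be composed programmatically, which helps avoid typos when you switch adapter versions. A minimal sketch; adapter_model is an illustrative helper, not an SDK function:

```python
def adapter_model(base: str, adapter_repo: str, version: int) -> str:
    # Composes "base_model:adapter-repo-name/adapter-version-number"
    return f"{base}:{adapter_repo}/{version}"

model = adapter_model("llama-3-8b", "sentiment-analysis", 1)
# → "llama-3-8b:sentiment-analysis/1"
```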

Dedicated Deployments

Route requests to your dedicated deployed models by passing the deployment name in the model parameter:
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@predibase")

response = portkey.chat.completions.create(
    model="my-dedicated-mistral-deployment",  # Your deployment name
    messages=[{"role": "user", "content": "Hello!"}],
    user="PREDIBASE_TENANT_ID"
)

print(response.choices[0].message.content)

JSON Schema Mode

Enforce a JSON schema for all Predibase models by setting response_format to json_object and passing your schema:
from portkey_ai import Portkey
from pydantic import BaseModel, constr

# Define JSON Schema with Pydantic
class Character(BaseModel):
    name: constr(max_length=10)
    age: int
    strength: int

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@predibase")

response = portkey.chat.completions.create(
    model="llama-3-8b",
    messages=[{"role": "user", "content": "Create a character profile"}],
    user="PREDIBASE_TENANT_ID",
    response_format={
        "type": "json_object",
        "schema": Character.schema()  # use Character.model_json_schema() on Pydantic v2
    }
)

print(response.choices[0].message.content)

Supported Models

Predibase provides access to various open-source and fine-tuned models:
  • Llama 3 (various sizes)
  • Mistral
  • Zephyr
  • Your custom fine-tuned models
Check Predibase’s documentation for the complete model list and fine-tuning options.

Next Steps

For complete SDK documentation:

SDK Reference

Complete Portkey SDK documentation