Portkey provides a robust gateway for integrating your locally hosted Ollama models.
If you are running the open source Portkey Gateway, refer to this guide on how to connect Portkey with Ollama.

Integration Steps

Step 1: Expose your Ollama API

Expose your Ollama API using a tunneling service such as ngrok, or make it publicly accessible. Skip this step if you're self-hosting the Gateway. To tunnel Ollama with ngrok, run:
ngrok http 11434 --host-header="localhost:11434"
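Before wiring the tunnel into Portkey, it's worth confirming that Ollama is actually reachable through the public URL. A minimal sketch, assuming the placeholder ngrok URL from this guide and Ollama's native /api/tags endpoint for listing locally pulled models:

import requests

# Replace with your actual tunnel URL (placeholder)
OLLAMA_URL = "https://your-ollama.ngrok-free.app"

# Ollama's native endpoint that lists locally pulled models
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"])  # e.g. "llama3:latest"

If this prints your pulled models, the tunnel is working and you can proceed to the next step.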
Step 2: Add to Model Catalog

  1. Go to Model Catalog → Add Provider
  2. Enable the “Local/Privately hosted provider” toggle
  3. Select Ollama as the provider type
  4. Enter your Ollama URL in Custom Host: https://your-ollama.ngrok-free.app
  5. Name your provider (e.g., my-ollama)

For all setup options, see the complete setup guide.
Step 3: Use in Your Application

from portkey_ai import Portkey

# Point the client at your Portkey API key and the provider
# slug you created in the Model Catalog
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@my-ollama"
)

response = portkey.chat.completions.create(
    model="llama3",  # any model pulled into your Ollama instance
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
Or pass the custom host directly when constructing the client:

from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="ollama",
    custom_host="https://your-ollama.ngrok-free.app"
)

# The client is then used exactly as above, e.g.
# portkey.chat.completions.create(model="llama3", ...)
Important: For Ollama integration, pass only the base URL to custom_host, without a version path such as /v1. Portkey handles the rest!
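Streaming works as well. A minimal sketch, assuming the provider slug from step 2 and the OpenAI-compatible stream=True flag that the Portkey SDK exposes:

from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@my-ollama"  # provider slug created in step 2
)

# Stream tokens as they are generated instead of
# waiting for the full completion
stream = portkey.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True
)

for chunk in stream:
    # Guard against role-only or empty delta chunks
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)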

Supported Models

Ollama supports a wide range of models including:
  • Llama 3, Llama 3.1, Llama 3.2
  • Mistral, Mixtral
  • Gemma, Gemma 2
  • Phi-3
  • Qwen 2
  • And many more!
Check Ollama’s model library for the complete list.
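Whichever models you run, the model parameter in the Portkey call maps directly to the Ollama model tag. A minimal sketch, assuming llama3 and mistral have already been pulled into your Ollama instance:

from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@my-ollama")

# Model names are Ollama tags: "llama3", "mistral",
# or a size-qualified tag such as "llama3:8b"
for model in ["llama3", "mistral"]:
    response = portkey.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hi in one word."}]
    )
    print(model, "->", response.choices[0].message.content)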

Next Steps

For complete SDK documentation, see the Portkey SDK Reference.