Quick Start

Get started with Lemonfox AI in under 2 minutes:
# 1. Install: pip install portkey-ai
# 2. Add @lemonfox-ai provider in model catalog
# 3. Use it:

from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")
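
# The @lemonfox-ai/ prefix in the model name routes the request to the
# provider you added in the model catalog (step 2 above).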

response = portkey.chat.completions.create(
    model="@lemonfox-ai/llama-8b-chat",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Add Provider in Model Catalog

Before making requests, add Lemonfox AI to your Model Catalog:
  1. Go to Model Catalog → Add Provider
  2. Select Lemonfox AI
  3. Enter your Lemonfox API key
  4. Name your provider (e.g., lemonfox-ai); this slug is how you reference the provider in code, as shown below
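
Once saved, the provider can be referenced in either of two ways, both used in this guide: prefix the model name with the provider slug, or set the provider on the client and use the bare model name. A minimal sketch, assuming the provider was named lemonfox-ai in step 4:

from portkey_ai import Portkey

# Option A: route via the provider prefix on the model name
portkey = Portkey(api_key="PORTKEY_API_KEY")
response = portkey.chat.completions.create(
    model="@lemonfox-ai/llama-8b-chat",
    messages=[{"role": "user", "content": "Ping"}]
)

# Option B: set the provider on the client and use the bare model name
portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lemonfox-ai")
response = portkey.chat.completions.create(
    model="llama-8b-chat",
    messages=[{"role": "user", "content": "Ping"}]
)

print(response.choices[0].message.content)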

Complete Setup Guide

See all setup options and detailed configuration instructions

Lemonfox AI Documentation

Explore the official Lemonfox AI documentation

Lemonfox AI Capabilities

Streaming

Stream responses for real-time output:
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lemonfox-ai")

stream = portkey.chat.completions.create(
    model="llama-8b-chat",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
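
# Each chunk carries an incremental delta; content can be None on the final
# chunk, hence the empty-string fallback below.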

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)

Image Generation

Generate images with Stable Diffusion XL:
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lemonfox-ai")

image = portkey.images.generate(
    prompt="A cute baby sea otter",
    size="1024x1024"
)

print(image.data[0].url)
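
The /images/generations endpoint also accepts n, response_format, and negative_prompt (see the parameters table below). A sketch reusing the client above, exercising the first two:

# Request two variants and URLs in the response; both parameters are listed
# in the supported-parameters table at the end of this page.
images = portkey.images.generate(
    prompt="A cute baby sea otter",
    size="1024x1024",
    n=2,
    response_format="url"
)

for item in images.data:
    print(item.url)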

Speech-to-Text

Transcribe audio with Whisper:
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lemonfox-ai")

with open("/path/to/file.mp3", "rb") as audio_file:
    transcription = portkey.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file
    )

print(transcription.text)
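
The transcription endpoint also accepts optional parameters such as language and prompt (see the parameters table below). For example, passing a language hint and a short context prompt:

# Optional hints; both parameters appear in the supported-parameters table below.
with open("/path/to/file.mp3", "rb") as audio_file:
    transcription = portkey.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        language="en",
        prompt="A short tech podcast episode."
    )

print(transcription.text)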

Supported Models

Chat Models:
  • Mixtral
  • Llama 3.1 8B
  • Llama 3.1 70B
Speech-to-Text:
  • Whisper large-v3
Image Generation:
  • Stable Diffusion XL (SDXL)

Supported Endpoints and Parameters

  • /chat/completions: messages, max_tokens, temperature, top_p, stream, presence_penalty, frequency_penalty
  • /images/generations: prompt, response_format, negative_prompt, size, n
  • /audio/transcriptions: translate, language, prompt, response_format, file
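
For example, a chat completion that exercises several of the /chat/completions parameters listed above (a sketch, using the same client setup as earlier):

from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lemonfox-ai")

response = portkey.chat.completions.create(
    model="llama-8b-chat",
    messages=[{"role": "user", "content": "Summarize the benefits of small language models."}],
    max_tokens=200,
    temperature=0.7,
    top_p=0.9,
    presence_penalty=0.2,
    frequency_penalty=0.2
)

print(response.choices[0].message.content)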

Next Steps

For complete SDK documentation:

SDK Reference

Complete Portkey SDK documentation