Quick Start
Get Deepinfra working in 3 steps:

Tip: You can also set provider="@deepinfra" in Portkey() and use just model="nvidia/Nemotron-4-340B-Instruct" in the request; see the example after the setup steps below.

Add Provider in Model Catalog
- Go to Model Catalog → Add Provider
- Select Deepinfra
- Choose existing credentials or create new ones by entering your Deepinfra API key
- Name your provider (e.g., deepinfra-prod)
Complete Setup Guide →
See all setup options, code examples, and detailed instructions
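Once the provider is added, a minimal request through the Portkey Python SDK looks like the sketch below. It assumes your Portkey API key is available in the PORTKEY_API_KEY environment variable and uses the @deepinfra provider slug from the tip above; adjust the slug to match the name you gave your provider (e.g., deepinfra-prod).

```python
# Minimal sketch: chat completion routed to Deepinfra via Portkey
# Assumes the Portkey Python SDK is installed (pip install portkey-ai)
import os
from portkey_ai import Portkey

portkey = Portkey(
    api_key=os.environ["PORTKEY_API_KEY"],  # your Portkey API key
    provider="@deepinfra",                  # provider slug from the Model Catalog
)

response = portkey.chat.completions.create(
    model="nvidia/Nemotron-4-340B-Instruct",
    messages=[{"role": "user", "content": "Say hello from Deepinfra"}],
)

print(response.choices[0].message.content)
```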
Supported Models
Deepinfra hosts a wide range of open-source models for text generation. View the complete list:

Deepinfra Models
Browse all available models on Deepinfra
Popular models include:
- nvidia/Nemotron-4-340B-Instruct
- meta-llama/Meta-Llama-3.1-405B-Instruct
- Qwen/Qwen2.5-72B-Instruct
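Any model ID from this list can be dropped into the same request; only the model string changes. A short sketch, mirroring the Quick Start client above:

```python
# Sketch: swapping in another Deepinfra-hosted model is just a change
# of the `model` string; the client setup mirrors the Quick Start above.
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@deepinfra")

response = portkey.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Give me one fun fact about llamas."}],
)
print(response.choices[0].message.content)
```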
Next Steps
Add Metadata
Add metadata to your Deepinfra requests
Gateway Configs
Add gateway configs to your Deepinfra requests
Tracing
Trace your Deepinfra requests
Fallbacks
Set up a fallback from OpenAI to Deepinfra; see the config sketch below
SDK Reference
Complete Portkey SDK documentation
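For the fallback pattern above, a hedged sketch of a gateway config is shown below. The strategy and targets keys follow Portkey's config schema; the @openai-prod and @deepinfra-prod slugs and the model names are placeholders, so replace them with the slugs and models from your own Model Catalog.

```python
# Sketch: gateway config that tries OpenAI first and falls back to Deepinfra.
# Provider slugs (@openai-prod, @deepinfra-prod) and models are assumptions;
# replace them with the slugs from your own Model Catalog.
from portkey_ai import Portkey

fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "@openai-prod", "override_params": {"model": "gpt-4o"}},
        {"provider": "@deepinfra-prod",
         "override_params": {"model": "nvidia/Nemotron-4-340B-Instruct"}},
    ],
}

portkey = Portkey(api_key="PORTKEY_API_KEY", config=fallback_config)

response = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```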

