Quick Start
Get started with Hugging Face in under 2 minutes.
Add Provider in Model Catalog
Before making requests, add Hugging Face to your Model Catalog:
- Go to Model Catalog → Add Provider
- Select Hugging Face
- Enter your Hugging Face access token
- (Optional) Add a Custom Host if using a dedicated Hugging Face Inference Endpoint
- Name your provider (e.g., huggingface)
If you have a dedicated Inference Endpoint hosted on Hugging Face, enter its endpoint URL in the Custom Host field during provider setup. This routes your requests to your private Hugging Face deployment instead of the shared serverless API.
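Once the provider is added, you can call Hugging Face models through Portkey's chat completions API. The snippet below is a minimal sketch using the Portkey Python SDK; the API key, provider slug (@huggingface), and model ID are placeholder assumptions, and the exact parameter names may differ by SDK version, so check the SDK Reference for your setup.

```python
# Minimal sketch, assuming the Portkey Python SDK (pip install portkey-ai).
# The API key, provider slug, and model ID are placeholders -- replace them
# with the values from your own Portkey and Hugging Face configuration.
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",   # your Portkey API key (assumed placeholder)
    provider="@huggingface",     # slug of the provider you created in the Model Catalog
)

response = portkey.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # any model your HF token can access
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)
```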
Complete Setup Guide
See all setup options and detailed configuration instructions
Supported Models
Hugging Face provides access to thousands of text generation models through its Inference Endpoints, including:
- Meta Llama 3.2, Llama 3.1, Llama 3
- Mistral, Mixtral
- Qwen 2.5
- Phi-3
- Gemma, Gemma 2
- And thousands more!
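To target any of these, pass the model's Hugging Face ID as the model parameter in your request. Continuing from the client created in the sketch above, the ID shown here is illustrative only and must be one your Hugging Face token can access.

```python
# Illustrative only: swap in any Hugging Face model ID available to your token.
response = portkey.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Summarize attention in one line."}],
)
```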
Next Steps
Gateway Configs
Add fallbacks, load balancing, and more
Observability
Monitor and trace your Hugging Face requests
Prompt Library
Manage and version your prompts
Metadata
Add custom metadata to requests
SDK Reference
Complete Portkey SDK documentation

