Quick Start

Get started with Dashscope in under 2 minutes:
from portkey_ai import Portkey

# 1. Install: pip install portkey-ai
# 2. Add @dashscope provider in model catalog
# 3. Use it:

portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.chat.completions.create(
    model="@dashscope/qwen-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Add Provider in Model Catalog

Before making requests, add Dashscope to your Model Catalog:
  1. Go to Model Catalog → Add Provider
  2. Select Dashscope
  3. Enter your Dashscope API key
  4. Name your provider (e.g., dashscope)
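Once saved, the provider slug from step 4 is how you reference Dashscope in requests. A minimal sketch, assuming you named the provider dashscope: you can either prefix the model name with the slug or attach the provider to the client:
from portkey_ai import Portkey

# Option 1: prefix the model name with your provider slug
portkey = Portkey(api_key="PORTKEY_API_KEY")
response = portkey.chat.completions.create(
    model="@dashscope/qwen-plus",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Option 2: attach the provider to the client and use plain model names
portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@dashscope")
response = portkey.chat.completions.create(
    model="qwen-plus",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)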

Complete Setup Guide

See all setup options and detailed configuration instructions

Dashscope Documentation

Explore the official Dashscope documentation

Dashscope Capabilities

Embeddings

Generate embeddings for text using Dashscope embedding models:
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@dashscope")

response = portkey.embeddings.create(
    input="Your text string goes here",
    model="text-embedding-v3"
)

print(response.data[0].embedding)

Supported Models

Chat Models:
  • qwen-long, qwen-max, qwen-max-0428, qwen-max-0403, qwen-max-0107
  • qwen-plus, qwen-plus-0806, qwen-plus-0723, qwen-plus-0624, qwen-plus-0206
  • qwen-turbo, qwen-turbo-0624, qwen-turbo-0206
  • qwen2-57b-a14b-instruct, qwen2-72b-instruct, qwen2-7b-instruct, qwen2-1.5b-instruct, qwen2-0.5b-instruct
  • qwen1.5-110b-chat, qwen1.5-72b-chat, qwen1.5-32b-chat, qwen1.5-14b-chat, qwen1.5-7b-chat, qwen1.5-1.8b-chat, qwen1.5-0.5b-chat
  • codeqwen1.5-7b-chat
  • qwen-72b-chat, qwen-14b-chat, qwen-7b-chat, qwen-1.8b-longcontext-chat, qwen-1.8b-chat
  • qwen2-math-72b-instruct, qwen2-math-7b-instruct, qwen2-math-1.5b-instruct
Embedding Models:
  • text-embedding-v1, text-embedding-v2, text-embedding-v3

Supported Endpoints and Parameters

  • /chat/completions: messages, max_tokens, temperature, top_p, stream, presence_penalty, frequency_penalty
  • /embeddings: model, input, encoding_format, dimensions, user
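These parameters follow the OpenAI-compatible request shape. A minimal sketch of a streamed chat completion exercising several of the parameters listed above (the values are illustrative):
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@dashscope")

# Streamed chat completion using parameters listed above
stream = portkey.chat.completions.create(
    model="qwen-plus",
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    max_tokens=128,
    temperature=0.7,
    top_p=0.9,
    presence_penalty=0.1,
    frequency_penalty=0.1,
    stream=True
)

for chunk in stream:
    # Each chunk carries an incremental delta in the OpenAI-compatible format
    print(chunk.choices[0].delta.content or "", end="")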

Advanced Features

Track End-User IDs

Monitor user-level costs and requests by passing user IDs:
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@dashscope")

response = portkey.chat.completions.create(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    user="user_123456"  # Track this user's usage
)
Portkey Logs with User ID

Learn More About Metadata

Explore how to use custom metadata to enhance your request tracking and analysis
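Beyond the user field, custom metadata can be attached to requests for filtering and analytics in Portkey logs. A hedged sketch, assuming the Python client accepts a metadata mapping at construction time (the environment and feature keys are illustrative):
from portkey_ai import Portkey

# Metadata set here is attached to every request made by this client
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    provider="@dashscope",
    metadata={
        "_user": "user_123456",    # user identifier for user-level analytics
        "environment": "staging",  # illustrative custom keys for log filtering
        "feature": "chat-widget"
    }
)

response = portkey.chat.completions.create(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)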

Gateway Configurations

Use Portkey’s Gateway features for advanced routing and reliability. The example below routes conditionally on request metadata: users on the paid plan go to qwen-turbo via Dashscope, while free-plan users and any unmatched requests go to gpt-3.5 via OpenAI.

Example: Conditional Routing
{
  "strategy": {
    "mode": "conditional",
    "conditions": [
      {
        "query": { "metadata.user_plan": { "$eq": "paid" } },
        "then": "qwen-turbo"
      },
      {
        "query": { "metadata.user_plan": { "$eq": "free" } },
        "then": "gpt-3.5"
      }
    ],
    "default": "gpt-3.5"
  },
  "targets": [
    {
      "name": "qwen-turbo",
      "provider": "@dashscope"
    },
    {
      "name": "gpt-3.5",
      "provider": "@openai"
    }
  ]
}
Conditional Routing Diagram
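To apply a config like the one above, save it in the Portkey app and reference it when constructing the client; routing then follows the metadata sent with each request. A minimal sketch with a placeholder config ID:
from portkey_ai import Portkey

# The config ID below is a placeholder for your saved Gateway config
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config="pc-conditional-routing-xxx",
    metadata={"user_plan": "paid"}  # matched against the conditions above
)

response = portkey.chat.completions.create(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)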

Guardrails

Enforce input/output checks with custom hooks:
{
  "provider": "@dashscope",
  "before_request_hooks": [{
    "id": "input-guardrail-id-xx"
  }],
  "after_request_hooks": [{
    "id": "output-guardrail-id-xx"
  }]
}
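The same hook-bearing config can be attached when constructing the client. A sketch, assuming the SDK accepts a raw config object as well as a saved config ID:
from portkey_ai import Portkey

# Assumption: passing the config inline; a saved config ID works the same way
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config={
        "provider": "@dashscope",
        "before_request_hooks": [{"id": "input-guardrail-id-xx"}],
        "after_request_hooks": [{"id": "output-guardrail-id-xx"}]
    }
)

response = portkey.chat.completions.create(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)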

Learn More About Guardrails

Enhance security and reliability with Portkey Guardrails

Next Steps

For complete SDK documentation:

SDK Reference

Complete Portkey SDK documentation

For the most up-to-date information on supported features and endpoints, please refer to our API Reference.