The Chat Completions API is the most widely adopted format for LLM interaction. Portkey makes it work with every provider — send the same POST /v1/chat/completions request to OpenAI, Anthropic, Gemini, Bedrock, or any of the 3000+ supported models.
```python
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.chat.completions.create(
    model="@openai-provider/gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms"}]
)

print(response.choices[0].message.content)
```
Swap the `model` string to target any provider: `@anthropic-provider/claude-sonnet-4-5-20250514`, `@google-provider/gemini-2.0-flash`, or any of the 3000+ supported models.
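As a minimal sketch of what "only the model string changes" means in practice, the snippet below builds the same request body for several providers. The provider slugs (`openai-provider`, `anthropic-provider`, `google-provider`) are placeholders; use the slugs configured in your own Portkey workspace.

```python
# The same Chat Completions payload works across providers; only the
# model field differs. Provider slugs here are illustrative.
MESSAGES = [{"role": "user", "content": "Explain quantum computing in simple terms"}]

MODELS = [
    "@openai-provider/gpt-4o",
    "@anthropic-provider/claude-sonnet-4-5-20250514",
    "@google-provider/gemini-2.0-flash",
]

def build_request(model: str) -> dict:
    """Assemble the provider-agnostic request body."""
    return {"model": model, "messages": MESSAGES}

requests = [build_request(m) for m in MODELS]
# Every request body is identical apart from the model string.
```

Each of these bodies can be passed to `portkey.chat.completions.create(**body)` unchanged.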
The Portkey SDK is a superset of the OpenAI SDK, so all Chat Completions methods work identically. The OpenAI SDK also works directly with Portkey’s base URL:
```python
from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1"
)

response = client.chat.completions.create(
    model="@openai-provider/gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms"}]
)

print(response.choices[0].message.content)
```
For free-form JSON output without a strict schema:
```python
response = portkey.chat.completions.create(
    model="@openai-provider/gpt-4o",
    messages=[{"role": "user", "content": "List 3 programming languages and their main use cases as JSON"}],
    response_format={"type": "json_object"}
)
```
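With `response_format={"type": "json_object"}`, the model is constrained to emit valid JSON, so the message content can be parsed directly with `json.loads`. The content string below is a hypothetical sample of what such a response might contain, not actual model output.

```python
import json

# Sample of what response.choices[0].message.content might look like
# when json_object mode is enabled (illustrative, not real output).
content = '{"languages": [{"name": "Python", "use_case": "data science"}]}'

data = json.loads(content)  # safe: json_object mode guarantees valid JSON
print(data["languages"][0]["name"])
```

Note that JSON mode guarantees syntactically valid JSON, not any particular schema; validate the keys you rely on.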
Pass the full conversation history in the messages array:
```python
response = portkey.chat.completions.create(
    model="@openai-provider/gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "My name is Alice."},
        {"role": "assistant", "content": "Hello Alice! How can I help you?"},
        {"role": "user", "content": "What is my name?"}
    ]
)

print(response.choices[0].message.content)  # "Your name is Alice."
```
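Because the API is stateless, the caller owns the history: append each assistant reply to the `messages` list before the next turn. A minimal sketch of that bookkeeping, reconstructing the same array as the example above (`add_turn` is a hypothetical helper, not part of the SDK):

```python
# Maintain conversation state client-side by growing the messages list.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history: list, user_content: str, assistant_content: str) -> None:
    """Record one completed user/assistant exchange in the running history."""
    history.append({"role": "user", "content": user_content})
    history.append({"role": "assistant", "content": assistant_content})

add_turn(messages, "My name is Alice.", "Hello Alice! How can I help you?")

# Queue the next user turn; messages is now ready for another create() call.
messages.append({"role": "user", "content": "What is my name?"})
```

In a real loop, `assistant_content` would come from `response.choices[0].message.content` of the previous `create()` call.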