Model Providers
OpenResponses supports a range of model providers that can be used with the `provider@model_name` convention. This page shows how to call each supported provider with curl.
Supported Model Providers
| Provider | API Endpoint |
|---|---|
| openai | https://api.openai.com/v1 |
| claude | https://api.anthropic.com/v1 |
| anthropic | https://api.anthropic.com/v1 |
| groq | https://api.groq.com/openai/v1 |
| togetherai | https://api.together.xyz/v1 |
| gemini | https://generativelanguage.googleapis.com/v1beta/openai/ |
| google | https://generativelanguage.googleapis.com/v1beta/openai/ |
| deepseek | https://api.deepseek.com |
| ollama | http://localhost:11434/v1 |
| xai | https://api.x.ai/v1 |
Provider Examples
OpenAI
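A minimal request might look like the following. The OpenResponses base URL (`http://localhost:8080`) and the model name are assumptions; substitute your own deployment URL and a model your key can access:

```shell
# Hypothetical local OpenResponses deployment; adjust host/port to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "openai@gpt-4o",
    "input": "Say hello in one sentence."
  }'
```

The provider's own API key goes in the Authorization header; OpenResponses forwards the request to the endpoint mapped to the `openai` prefix.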
Claude / Anthropic
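Both the `claude` and `anthropic` prefixes resolve to the same Anthropic endpoint, so either works. A sketch, again assuming a local deployment URL and an example model name:

```shell
# Hypothetical deployment URL and model name; adjust to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -d '{
    "model": "claude@claude-3-5-sonnet-20241022",
    "input": "Say hello in one sentence."
  }'
```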
Groq
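A hedged sketch for Groq; the model name below is an example, so pick any model available on your Groq account:

```shell
# Hypothetical deployment URL and model name; adjust to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -d '{
    "model": "groq@llama-3.3-70b-versatile",
    "input": "Say hello in one sentence."
  }'
```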
TogetherAI
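TogetherAI model names include an organization path, which is passed through unchanged after the `togetherai@` prefix. The model shown is an example:

```shell
# Hypothetical deployment URL and model name; adjust to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -d '{
    "model": "togetherai@meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "input": "Say hello in one sentence."
  }'
```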
Google / Gemini
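The `gemini` and `google` prefixes both target Google's OpenAI-compatible endpoint. A sketch with an example model name:

```shell
# Hypothetical deployment URL and model name; adjust to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GEMINI_API_KEY" \
  -d '{
    "model": "gemini@gemini-2.0-flash",
    "input": "Say hello in one sentence."
  }'
```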
DeepSeek
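A sketch for DeepSeek, assuming the same local deployment and an example model name:

```shell
# Hypothetical deployment URL and model name; adjust to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
    "model": "deepseek@deepseek-chat",
    "input": "Say hello in one sentence."
  }'
```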
Ollama (Local)
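Ollama serves models locally on port 11434, so no API key is required; the model must already be pulled (e.g. with `ollama pull`). The model name below is an example:

```shell
# Assumes Ollama is running locally and the model has been pulled.
# Hypothetical OpenResponses deployment URL; adjust to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ollama@llama3",
    "input": "Say hello in one sentence."
  }'
```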
xAI
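A sketch for xAI, with an example model name:

```shell
# Hypothetical deployment URL and model name; adjust to your setup.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -d '{
    "model": "xai@grok-2",
    "input": "Say hello in one sentence."
  }'
```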
Using Custom Model Providers
For any model provider not listed above, you can use the `model_endpoint@model_name` convention by directly specifying the full API endpoint URL:
Example with Local LLM Server
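For example, with an OpenAI-compatible server (such as vLLM or LM Studio) listening on a local port, the full endpoint URL replaces the provider prefix. The server URL and model name below are hypothetical:

```shell
# "http://localhost:8000/v1" is a hypothetical local LLM server exposing
# an OpenAI-compatible chat/completions API; "my-local-model" is a placeholder.
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -d '{
    "model": "http://localhost:8000/v1@my-local-model",
    "input": "Say hello in one sentence."
  }'
```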
The model identifier takes the form `provider@model-name` or `model_endpoint@model_name`, where:

- `provider`: The model provider (e.g., claude, deepseek, google, openai)
- `model_endpoint`: For locally deployed models or any custom model provider, the endpoint URL where chat/completions is available
- `model-name`: The specific model to use from that provider or endpoint