OpenAI Codex With Your Own Model
Codex In Action With OpenResponses
Key Benefits
- Run Codex with Custom Models: Integrate OpenAI Codex with OpenResponses to use any model of your choice
- Extend Functionality: Enhance Codex capabilities with additional MCP tools or custom tools and integrations
- Simple Deployment: Quick setup with no separate installation needed - just follow the quickstart guide
- Full Control: Maintain complete ownership of your code, data, and model choices
Step-by-Step Setup Instructions
1. Run OpenResponses Service
Launch OpenResponses using Docker:
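A minimal sketch is shown below; the image name and port mapping are assumptions based on the public OpenResponses image, so substitute the exact command from the quickstart guide for your deployment:

```bash
# Start OpenResponses in the background
# (image name and port are assumptions; adjust to your setup)
docker run -d --name open-responses -p 8080:8080 masaicai/open-responses:latest
```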
2. Install OpenAI Codex
Install the Codex CLI globally:
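The Codex CLI is distributed as an npm package, so a global install with npm is the usual route (Node.js is assumed to be available):

```bash
# Install the OpenAI Codex CLI globally (requires Node.js/npm)
npm install -g @openai/codex
```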
3. Configure and Run Codex with Your Preferred Model
Set OpenResponses as your base API and configure your API key:
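A minimal sketch, assuming your Codex version honors the `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables (check the Codex configuration docs if yours uses a config file instead); the URL, port, and key value are placeholders for your own OpenResponses endpoint and provider key:

```bash
# Point Codex at OpenResponses instead of the OpenAI API
# (URL, port, and key value are placeholders; use your own deployment details)
export OPENAI_BASE_URL=http://localhost:8080/v1
export OPENAI_API_KEY=your_provider_api_key
```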
Then launch Codex, selecting your model with the `-m` flag:
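For example (the angle-bracket placeholders stand in for a real provider and model name, following the convention described below):

```bash
# Launch Codex against the model of your choice
codex -m <provider>@<model-name>
```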
Example with Locally Deployed Model
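A sketch assuming a local OpenAI-compatible server; the endpoint URL and model name are illustrative placeholders, not values from the original guide:

```bash
# Locally deployed model: pass the chat/completions endpoint, then the model name
codex -m http://localhost:11434/v1@llama3
```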
Example with Claude
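Assuming an Anthropic API key has been configured as above; the model name is illustrative:

```bash
# Claude via the provider@model-name convention
codex -m claude@claude-3-7-sonnet-20250219
```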
Example with DeepSeek
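Again, the model name here is an illustrative choice:

```bash
# DeepSeek via the provider@model-name convention
codex -m deepseek@deepseek-chat
```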
Example with Google Gemini
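With an illustrative Gemini model name:

```bash
# Google Gemini via the provider@model-name convention
codex -m google@gemini-2.0-flash
```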
The model argument follows the convention `provider@model-name` or `model_endpoint@model_name`, where:
- `provider`: The model provider (e.g., claude, deepseek, google, openai)
- `model_endpoint`: For locally deployed models or any custom model provider, the endpoint URL where chat/completions is available
- `model-name`: The specific model to use from that provider or endpoint
Supported Model Providers
OpenResponses supports a variety of model providers that can be used with the `provider@model_name` convention. For a complete list of supported providers with detailed examples for each, see our Model Providers documentation.