This repository lets you use Anthropic's Claude Code CLI with OpenAI models such as GPT-5-Codex, GPT-5.1, and others via a local LiteLLM proxy.
⚠️ ATTENTION ⚠️ If you're here to set up your own LiteLLM server with LibreChat as the web UI (or any other OpenAI / Anthropic API-compatible client, for that matter), head over to the litellm-server-boilerplate repository. It contains a "boilerplate" version of this repo with the Claude Code CLI stuff stripped away, an optional LibreChat setup, and a README which specifically explains how to build your own AI agents and assistants on top of it.
Prerequisites:

- OpenAI API key 🔑
- Anthropic API key 🔑 (optional; only needed if you decide not to remap some Claude models to OpenAI)
- Either uv or Docker Desktop, depending on your preferred setup method
If you are going to use GPT-5 via the API for the first time, OpenAI may require you to verify your identity via Persona. You may encounter an OpenAI error asking you to “verify your organization.” To resolve this, go through OpenAI's organization verification process.
1. Clone this repository:

   ```shell
   git clone https://github.com/teremterem/claude-code-gpt-5-codex.git
   cd claude-code-gpt-5-codex
   ```
2. Configure environment variables:

   Copy the template file to create your `.env`:

   ```shell
   cp .env.template .env
   ```

   Edit `.env` and add your OpenAI API key:

   ```shell
   OPENAI_API_KEY=your-openai-api-key-here

   # Optional: only needed if you plan to use Anthropic models
   # ANTHROPIC_API_KEY=your-anthropic-api-key-here

   # Optional (see .env.template for details):
   # LITELLM_MASTER_KEY=your-master-key-here

   # Optional: specify the remaps explicitly if you need to (the values you see
   # below are the defaults - see .env.template for more info)
   # REMAP_CLAUDE_HAIKU_TO=gpt-5.1-codex-mini-reason-none
   # REMAP_CLAUDE_SONNET_TO=gpt-5-codex-reason-medium
   # REMAP_CLAUDE_OPUS_TO=gpt-5.1-reason-high

   # Some more optional settings (see .env.template for details)
   # ...
   ```
3. Run the proxy:

   EITHER via `uv` (make sure to install uv first):

   OPTION 1: Use a script for `uv`:

   ```shell
   ./uv-run.sh
   ```

   OPTION 2: Run via a direct `uv` command:

   ```shell
   uv run litellm --config config.yaml
   ```

   OR via Docker (make sure to install Docker Desktop first):

   OPTION 3: Run Docker in the foreground:

   ```shell
   ./run-docker.sh
   ```

   OPTION 4: Run Docker in the background:

   ```shell
   ./deploy-docker.sh
   ```

   OPTION 5: Run Docker via a direct command:

   ```shell
   docker run -d \
       --name claude-code-gpt-5 \
       -p 4000:4000 \
       --env-file .env \
       --restart unless-stopped \
       ghcr.io/teremterem/claude-code-gpt-5:latest
   ```

   NOTE: To run this command in the foreground instead of the background, remove the `-d` flag.

   To see the logs, run:

   ```shell
   docker logs -f claude-code-gpt-5
   ```

   To stop and remove the container, run:

   ```shell
   ./kill-docker.sh
   ```

   NOTE: The Docker options above pull the latest image from GHCR and ignore all your local files except `.env`. For more detailed Docker deployment instructions and more options (like building the Docker image from source yourself, using Docker Compose, etc.), see docs/DOCKER_TIPS.md.
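   As a rough illustration, the `docker run` command above maps onto a Docker Compose service like the hypothetical sketch below (see docs/DOCKER_TIPS.md for the repo's actual Compose instructions; the service name here is made up):

   ```yaml
   # Hypothetical compose.yaml equivalent of the `docker run` command above
   services:
     claude-code-gpt-5:
       image: ghcr.io/teremterem/claude-code-gpt-5:latest
       ports:
         - "4000:4000"
       env_file:
         - .env
       restart: unless-stopped
   ```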
4. Install Claude Code (if you haven't already):

   ```shell
   npm install -g @anthropic-ai/claude-code
   ```
5. Connect it to the proxy:

   ```shell
   ANTHROPIC_BASE_URL=http://localhost:4000 claude
   ```

   If you set `LITELLM_MASTER_KEY` in your `.env` file (see `.env.template` for details), pass it as the Anthropic API key for the CLI:

   ```shell
   ANTHROPIC_API_KEY="<LITELLM_MASTER_KEY>" \
   ANTHROPIC_BASE_URL=http://localhost:4000 \
   claude
   ```

   NOTE: In this case, if you've previously authenticated, run `claude /logout` first.
6. That's it! Your Claude Code client will now use the OpenAI models that this repo recommends by default (unless you explicitly specified different choices in your `.env` file). 🎯
You can find the full list of available OpenAI models in the OpenAI API documentation. Additionally, this proxy lets you control the reasoning effort level for each model by appending it to the model name, following the pattern `-reason-<effort>` (or `-reasoning-<effort>`, if you prefer). Here are some examples:

- `gpt-5.1-codex-mini-reason-none`
- `gpt-5.1-codex-mini-reason-medium`
- `gpt-5.1-codex-mini-reason-high`

If you don't specify a reasoning effort level (i.e. you only specify the model name, like `gpt-5.1-codex-mini`), the model's default level is used.
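To make the naming pattern concrete, here is an illustrative shell sketch (not the proxy's actual code) that splits a model name of the form `<model>-reason-<effort>` into its two parts:

```shell
#!/bin/sh
# Illustrative only: split "<model>-reason-<effort>" (or "-reasoning-<effort>")
# into the base model name and the requested reasoning effort.
model="gpt-5.1-codex-mini-reason-high"

case "$model" in
  *-reasoning-*) effort="${model##*-reasoning-}"; base="${model%-reasoning-*}" ;;
  *-reason-*)    effort="${model##*-reason-}";    base="${model%-reason-*}" ;;
  *)             effort="default";                base="$model" ;;
esac

echo "base=$base effort=$effort"   # base=gpt-5.1-codex-mini effort=high
```

A name without the suffix falls through to the last branch and keeps the model's default effort.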
NOTE: In theory, you can use arbitrary models from arbitrary providers, but for providers other than OpenAI or Anthropic you will need to prefix the model name with the provider, e.g. `gemini/gemini-pro`, `gemini/gemini-pro-reason-disable`, etc. (as well as set the respective API key for that provider in your `.env` file).
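For example, trying a Gemini model might involve `.env` entries like the following (hypothetical values; check `.env.template` and the LiteLLM provider docs for the exact variable names your provider expects):

```shell
# Hypothetical .env additions for a Gemini model (illustrative values):
GEMINI_API_KEY=your-gemini-api-key-here
# Remap one of the Claude model slots to the provider-prefixed model name:
REMAP_CLAUDE_SONNET_TO=gemini/gemini-pro-reason-disable
```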
The Web Search tool currently does not work with this setup. You may see an error like:

```
API Error (500 {"error":{"message":"Error calling litellm.acompletion for non-Anthropic model: litellm.BadRequestError: OpenAIException - Invalid schema for function 'web_search': 'web_search_20250305' is not valid under any of the given schemas.","type":"None","param":"None","code":"500"}}) · Retrying in 1 seconds… (attempt 1/10)
```

This is planned to be fixed soon.
NOTE: The Fetch tool (getting web content from specific URLs) is not affected and works normally.
