Changelog Update
Introducing GPT-5.1-Codex-Max
Today we are releasing GPT-5.1-Codex-Max, our new frontier agentic coding model.
GPT-5.1-Codex-Max is built on an update to our foundational reasoning model, which is trained on agentic tasks across software engineering, math, research, and more. GPT-5.1-Codex-Max is faster, more intelligent, and more token-efficient at every stage of the development cycle, and it is a new step toward becoming a reliable coding partner.
Starting today, the CLI and IDE Extension will default to gpt-5.1-codex-max for users who are signed in with ChatGPT. API access to the model is coming soon.
For non-latency-sensitive tasks, we’ve also added a new Extra High (xhigh) reasoning effort, which lets the model think for longer to produce a better answer. We still recommend medium as your daily driver for most tasks.
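As a minimal sketch of how you might select the new effort level, assuming your Codex CLI version exposes a model_reasoning_effort key in config.toml (the key name is an assumption rather than something stated above, so check the configuration reference for your version):

# config.toml; key name assumed, "xhigh" allows longer thinking on non-latency-sensitive work
model_reasoning_effort = "xhigh"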
If you already have a model specified in your config.toml configuration file, you can still try out gpt-5.1-codex-max for a new Codex CLI session using:
codex --model gpt-5.1-codex-max
You can also use the /model slash command in the CLI. In the Codex IDE Extension, you can select GPT-5.1-Codex-Max from the dropdown menu.
If you want to switch for all sessions, you can change your default model to gpt-5.1-codex-max by updating your config.toml configuration file:
model = "gpt-5.1-codex-max”