| ✨ | What you type | What happens |
| --- | --- | --- |
| 5 | `codex "Explain what this regex does: ^(?=.*[A-Z]).{8,}$"` | Outputs a step-by-step human explanation. |
| 6 | `codex "Carefully review this repo, and propose 3 high impact well-scoped PRs"` | Suggests impactful PRs in the current codebase. |
| 7 | `codex "Look for vulnerabilities and create a security review report"` | Finds and explains security bugs. |

---

## Installation

**From npm (Recommended)**

```bash
npm install -g @openai/codex
# or
yarn global add @openai/codex
# or
bun install -g @openai/codex
# or
pnpm add -g @openai/codex
```

**Build from source**

```bash
# Clone the repository and navigate to the CLI package
git clone https://github.com/openai/codex.git
cd codex/codex-cli

# Enable corepack
corepack enable

# Install dependencies and build
pnpm install
pnpm build

# Linux-only: download prebuilt sandboxing binaries (requires gh and zstd).
./scripts/install_native_deps.sh

# Print usage and options
node ./dist/cli.js --help

# Run the locally built CLI directly
node ./dist/cli.js

# Or link the command globally for convenience
pnpm link
```
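As a quick sanity check after either install path (assuming your package manager's global bin directory is on `PATH`), the linked command and the locally built bundle should print the same usage text:

```bash
# Both commands should print the same usage/help text:
node ./dist/cli.js --help   # the locally built bundle (from "Build from source")
codex --help                # the globally installed or pnpm-linked command
```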
---

## Configuration guide

Codex configuration files live in the `~/.codex/` directory, in either YAML or JSON format.

### Basic configuration parameters

| Parameter | Type | Default | Description | Available options |
| --- | --- | --- | --- | --- |
| `model` | string | `o4-mini` | AI model to use | Any model name supported by the OpenAI API |
| `approvalMode` | string | `suggest` | AI assistant's permission mode | `suggest` (suggestions only), `auto-edit` (automatic edits), `full-auto` (fully automatic) |
| `fullAutoErrorMode` | string | `ask-user` | Error handling in full-auto mode | `ask-user` (prompt for user input), `ignore-and-continue` (ignore and proceed) |
| `notify` | boolean | `true` | Enable desktop notifications | `true` / `false` |

### Custom AI provider configuration

In the `providers` object, you can configure multiple AI service providers. Each provider requires the following parameters:

| Parameter | Type | Description | Example |
| --- | --- | --- | --- |
| `name` | string | Display name of the provider | `"OpenAI"` |
| `baseURL` | string | API service URL | `"https://api.openai.com/v1"` |
| `envKey` | string | Environment variable that holds the API key | `"OPENAI_API_KEY"` |

### History configuration

In the `history` object, you can configure conversation history settings:

| Parameter | Type | Description | Example value |
| --- | --- | --- | --- |
| `maxSize` | number | Maximum number of history entries to save | `1000` |
| `saveHistory` | boolean | Whether to save history | `true` |
| `sensitivePatterns` | array | Patterns of sensitive information to filter from history | `[]` |

### Configuration examples

1. YAML format (save as `~/.codex/config.yaml`):

   ```yaml
   model: o4-mini
   approvalMode: suggest
   fullAutoErrorMode: ask-user
   notify: true
   ```

2. JSON format (save as `~/.codex/config.json`):

   ```json
   {
     "model": "o4-mini",
     "approvalMode": "suggest",
     "fullAutoErrorMode": "ask-user",
     "notify": true
   }
   ```

### Full configuration example

Below is a comprehensive example of `config.json` with multiple custom providers:

```json
{
  "model": "o4-mini",
  "provider": "openai",
  "providers": {
    "openai": {
      "name": "OpenAI",
      "baseURL": "https://api.openai.com/v1",
      "envKey": "OPENAI_API_KEY"
    },
    "openrouter": {
      "name": "OpenRouter",
      "baseURL": "https://openrouter.ai/api/v1",
      "envKey": "OPENROUTER_API_KEY"
    },
    "gemini": {
      "name": "Gemini",
      "baseURL": "https://generativelanguage.googleapis.com/v1beta/openai",
      "envKey": "GEMINI_API_KEY"
    },
    "ollama": {
      "name": "Ollama",
      "baseURL": "http://localhost:11434/v1",
      "envKey": "OLLAMA_API_KEY"
    },
    "mistral": {
      "name": "Mistral",
      "baseURL": "https://api.mistral.ai/v1",
      "envKey": "MISTRAL_API_KEY"
    },
    "deepseek": {
      "name": "DeepSeek",
      "baseURL": "https://api.deepseek.com",
      "envKey": "DEEPSEEK_API_KEY"
    },
    "xai": {
      "name": "xAI",
      "baseURL": "https://api.x.ai/v1",
      "envKey": "XAI_API_KEY"
    },
    "groq": {
      "name": "Groq",
      "baseURL": "https://api.groq.com/openai/v1",
      "envKey": "GROQ_API_KEY"
    }
  },
  "history": {
    "maxSize": 1000,
    "saveHistory": true,
    "sensitivePatterns": []
  }
}
```

### Custom instructions

You can create a `~/.codex/instructions.md` file to define custom instructions:

```markdown
- Always respond with emojis
- Only use git commands when explicitly requested
```

### Environment variables setup

For each AI provider, set the corresponding API key as an environment variable. For example:

```bash
# OpenAI
export OPENAI_API_KEY="your-api-key-here"

# OpenRouter
export OPENROUTER_API_KEY="your-openrouter-key-here"

# Similarly for other providers
```
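Exports like these last only for the current shell session. A small convenience sketch, assuming bash (zsh users would append to `~/.zshrc` instead), is to persist the key in your shell profile:

```bash
# Persist the key across shell sessions (bash shown).
# Replace the placeholder with your real key.
echo 'export OPENAI_API_KEY="your-api-key-here"' >> ~/.bashrc
source ~/.bashrc
```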
---

## FAQ

**OpenAI released a model called Codex in 2021 - is this related?**

In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.

**Which models are supported?**

Any model available with the [Responses API](https://platform.openai.com/docs/api-reference/responses). The default is `o4-mini`, but pass `--model gpt-4.1` or set `model: gpt-4.1` in your config file to override it.

**Why does `o3` or `o4-mini` not work for me?**

It's possible that your [API account needs to be verified](https://help.openai.com/en/articles/10910291-api-organization-verification) in order to start streaming responses and seeing chain-of-thought summaries from the API. If you're still running into issues, please let us know!

**How do I stop Codex from editing my files?**

Codex runs model-generated commands in a sandbox. If a proposed command or file change doesn't look right, simply type **n** to deny the command or give the model feedback.

**Does it work on Windows?**

Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - Codex has been tested on macOS and Linux with Node 22.

---

## Zero data retention (ZDR) usage

Codex CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) enabled. If your OpenAI organization has Zero Data Retention enabled and you still encounter errors such as:

```
OpenAI rejected the request. Error details: Status: 400, Code: unsupported_parameter, Type: invalid_request_error, Message: 400 Previous response cannot be used for this organization due to Zero Data Retention.
```

you may need to upgrade to a more recent version with `npm i -g @openai/codex@latest`.

---

## Codex open source fund

We're excited to launch a **$1 million initiative** supporting open source projects that use Codex CLI and other OpenAI models.

- Grants of up to **$25,000** in API credits are awarded.
- Applications are reviewed **on a rolling basis**.

**Interested? [Apply here](https://openai.com/form/codex-open-source-fund/).**

---

## Contributing

This project is under active development and the code will likely change significantly. We'll update this message once that's complete! More broadly, we welcome contributions - whether you are opening your very first pull request or you're a seasoned maintainer. At the same time, we care about reliability and long-term maintainability, so the bar for merging code is intentionally **high**. The guidelines below spell out what "high-quality" means in practice and should make the whole process transparent and friendly.

### Development workflow

- Create a *topic branch* from `main` - e.g. `feat/interactive-prompt`.
- Keep your changes focused. Multiple unrelated fixes should be opened as separate PRs.
- Use `pnpm test:watch` during development for super-fast feedback.
- We use **Vitest** for unit tests, **ESLint** + **Prettier** for style, and **TypeScript** for type-checking.
- Before pushing, run the full test/type/lint suite:

  ```bash
  pnpm test && pnpm run lint && pnpm run typecheck
  ```

- If you have **not** yet signed the Contributor License Agreement (CLA), add a PR comment containing the exact text

  ```
  I have read the CLA Document and I hereby sign the CLA
  ```

  The CLA-Assistant bot will turn the PR status green once all authors have signed.

Other handy scripts while developing:

```bash
# Watch mode (tests rerun on change)
pnpm test:watch

# Type-check without emitting files
pnpm typecheck

# Automatically fix lint + prettier issues
pnpm lint:fix
pnpm format:fix
```

### Git hooks with Husky

This project uses [Husky](https://typicode.github.io/husky/) to enforce code quality checks:

- **Pre-commit hook**: automatically runs lint-staged to format and lint files before committing
- **Pre-push hook**: runs tests and type checking before pushing to the remote

These hooks help maintain code quality and prevent pushing code with failing tests. For more details, see [HUSKY.md](https://github.com/openai/codex/blob/main/codex-cli/HUSKY.md).

### Debugging

To debug the CLI with a visual debugger, do the following in the `codex-cli` folder:

- Run `pnpm run build` to build the CLI, which will generate `cli.js.map` alongside `cli.js` in the `dist` folder.
- Run the CLI with `node --inspect-brk ./dist/cli.js`

The program then waits until a debugger is attached before proceeding. Options:

- In VS Code, choose **Debug: Attach to Node Process** from the command palette and choose the option in the dropdown with debug port `9229` (likely the first option).
- Go to chrome://inspect in Chrome, find **localhost:9229**, and click **trace**.
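Putting the steps above together, a typical debug launch from the `codex-cli` folder looks like this (same commands as in the bullets):

```bash
# Build with source maps, then start the CLI paused on the first line
# until a debugger attaches on the default inspector port (9229).
pnpm run build
node --inspect-brk ./dist/cli.js
```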
### Writing high-impact code changes

1. **Start with an issue.** Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
2. **Add or update tests.** Every new feature or bug fix should come with test coverage that fails before your change and passes afterwards. 100% coverage is not required, but aim for meaningful assertions.
3. **Document behaviour.** If your change affects user-facing behaviour, update the README, inline help (`codex --help`), or relevant example projects.
4. **Keep commits atomic.** Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.

### Opening a pull request

- Fill in the PR template (or include similar information) - **What? Why? How?**
- Run **all** checks locally (`pnpm test && pnpm run lint && pnpm run typecheck`). CI failures that could have been caught locally slow down the process.
- Make sure your branch is up to date with `main` and that you have resolved merge conflicts.
- Mark the PR as **Ready for review** only when you believe it is in a mergeable state.

### Review process

1. One maintainer will be assigned as a primary reviewer.
2. We may ask for changes - please do not take this personally. We value the work; we just also value consistency and long-term maintainability.
3. When there is consensus that the PR meets the bar, a maintainer will squash-and-merge.

### Community values

- **Be kind and inclusive.** Treat others with respect; we follow the [Contributor Covenant](https://www.contributor-covenant.org/).
- **Assume good intent.** Written communication is hard - err on the side of generosity.
- **Teach & learn.** If you spot something confusing, open an issue or PR with improvements.

### Getting help

If you run into problems setting up the project, would like feedback on an idea, or just want to say *hi* - please open a Discussion or jump into the relevant issue. We are happy to help. Together we can make Codex CLI an incredible tool. **Happy hacking!** 🚀

### Contributor license agreement (CLA)

All contributors **must** accept the CLA. The process is lightweight:

1. Open your pull request.
2. Paste the following comment (or reply `recheck` if you've signed before):

   ```
   I have read the CLA Document and I hereby sign the CLA
   ```

3. The CLA-Assistant bot records your signature in the repo and marks the status check as passed.

No special Git commands, email attachments, or commit footers are required.

#### Quick fixes

| Scenario | Command |
| --- | --- |
| Amend last commit | `git commit --amend -s --no-edit && git push -f` |

The **DCO check** blocks merges until every commit in the PR carries the footer (with squash this is just the one).

### Releasing `codex`

To publish a new version of the CLI, run the following in the `codex-cli` folder to stage the release in a temporary directory:

```bash
pnpm stage-release
```

Note that you can specify the folder for the staged release:

```bash
RELEASE_DIR=$(mktemp -d)
pnpm stage-release "$RELEASE_DIR"
```

Go to the folder where the release is staged and verify that it works as intended. If so, run the following from the temp folder:

```bash
cd "$RELEASE_DIR"
npm publish
```

### Alternative build options

#### Nix flake development

Prerequisite: Nix >= 2.4 with flakes enabled (`experimental-features = nix-command flakes` in `~/.config/nix/nix.conf`).

Enter a Nix development shell:

```bash
nix develop
```

This shell includes Node.js, installs dependencies, builds the CLI, and provides a `codex` command alias.

Build and run the CLI directly:

```bash
nix build
./result/bin/codex --help
```

Run the CLI via the flake app:

```bash
nix run .#codex
```

---

## Security & responsible AI

Have you discovered a vulnerability or have concerns about model output? Please e-mail **security@openai.com** and we will respond promptly.

---

## License

This repository is licensed under the [Apache-2.0 License](https://github.com/openai/codex/blob/main/LICENSE).