What is OpenAI Codex CLI?
OpenAI Codex CLI is an open‑source command‑line tool that brings the power of OpenAI's latest reasoning models directly to your terminal. It acts as a lightweight coding agent that can read, modify, and run code on your local machine to help you build features faster, squash bugs, and understand unfamiliar code. Because the CLI runs locally, your source code never leaves your environment unless you choose to share it.
Put simply — Codex is an AI-powered terminal agent that understands your codebase, makes edits, runs scripts, writes commits, and works entirely on your machine.
Key Functionality:
- Multimodal input: pass text, screenshots, or diagrams
- Reads your project structure, dependencies, and files
- Can suggest or directly apply edits
- Integrates with Git (commits, diffs, PRs)
- Runs tests, migrations, and shell scripts inside a sandbox
- Supports MCP server connections
- Approval modes: from manual to fully autonomous
Example:
codex "write unit tests for utils/date.ts"
Codex will find the file, generate the tests, run them, and show the result. If everything passes, it offers to apply the changes.
One thing that sets it apart from tools like Claude Code is that it supports multiple model providers, not just OpenAI: you can run it with Gemini, Groq, Anthropic, and any other API-compatible LLM.
Why a JS SDK?
Codex CLI is a powerful tool, but under the hood it is just a binary that communicates via stdin/stdout.
It has a non-interactive mode where you send a request and get the response printed to the console. There is also a pipe mode where Codex reads and writes JSON; this is closer to an API, but still raw.
You feed it data in the right format → it responds in its own format → you parse the output → you handle timings, errors, and sessions yourself.
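To give a feel for what "raw" means: you write one JSON-encoded request to stdin and read back a stream of JSON events from stdout. The shapes below are only an illustration taken from the request and response formats described later in this article (the first line is what you send, the rest is what comes back), and the exact protocol may still change:
{"id":"req-1","op":{"type":"user_input","items":[{"type":"text","text":"explain utils/date.ts"}]}}
{"id":"req-1","msg":{"type":"task_started"}}
{"id":"req-1","msg":{"type":"agent_message","message":"..."}}
{"id":"req-1","msg":{"type":"task_complete"}}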
That's how the idea to make a JS SDK that works well with this process was born:
- Runs Codex CLI from Node.js
- Has a convenient interface for configuration, sending requests, accepting and rejecting changes
- Allows you to subscribe to events and errors
- Serializes commands and parses responses
- Typed request/response interfaces (TypeScript)
What I built: codex-js-sdk
I made an SDK that runs Codex CLI inside a Node.js process and provides a convenient API on top of pipe mode.
You don't have to worry about stdin/stdout, fiddle with timings, write your own parser, or reverse engineer the communication protocol — just connect and use it like a regular LLM agent.
What it can do:
🔄 Runs and manages the Codex CLI process (start, stop, restart, abort)
✉️ Sends requests from the user
📎 Supports sending images
🧠 Allows you to confirm or cancel commands or file changes
📡 Allows you to subscribe to responses from the LLM
🗂 Allows you to override configuration, configure providers, etc.
Quick start
First, you will need to install Codex itself.
IMPORTANT: the JS SDK only works with the native (Rust) version of Codex. The functionality of the standard and native versions may differ slightly, but OpenAI clearly plans to focus development on the native one.
Keep in mind that OpenAI Codex itself was released only two months ago and is still quite raw, while the Rust version appeared a couple of weeks ago, so the API protocol is likely to keep changing.
Let's start the installation. To install, run:
npm i -g @openai/codex@native
This will install codex globally (recommended). You can also install it locally in your project, but then you will have to pass the path to the bin file in the JS SDK configuration.
Codex also actively uses ripgrep, so it is advisable to install it as well:
macOS:
brew install ripgrep
Rust:
cargo install ripgrep
Ubuntu / Debian:
sudo apt install ripgrep
Then install the JS SDK in the project:
npm install codex-js-sdk
Next, in any Node.js code where you intend to work with codex:
1. Import the SDK code and types and create an instance
import { CodexSDK, LogLevel } from 'codex-js-sdk';
import { CodexResponse, CodexMessageTypeEnum, ModelReasoningEffort, ModelReasoningSummary, SandboxPermission, AskForApproval, WireApi } from 'codex-js-sdk';
// Create a new SDK instance
const sdk = new CodexSDK({
// Optional: Set custom working directory, could be relative to the current working directory (process.cwd())
cwd: './my-project',
// Optional: Configure logging level
logLevel: LogLevel.DEBUG,
// Optional: Specify custom path to codex binary (if installed locally)
codexPath: './node_modules/.bin/codex',
// Optional: Set custom environment variables (by default, the SDK will use the process.env)
env: {
OPENAI_API_KEY: 'sk-proj-...'
}
});
2. Set up response and error handlers
sdk.onResponse((response: CodexResponse) => {
console.log('Received response: ', response);
if (response.msg.type === CodexMessageTypeEnum.TASK_COMPLETE) {
sdk.stop();
process.exit(0);
}
});
sdk.onError((response: CodexResponse) => {
console.error('Error:', response);
sdk.stop();
});
3. Start the Codex process and send a message:
// Start the Codex process (if not started yet)
sdk.start();
const requestId = sdk.sendUserMessage([
{ type: 'text', text: 'Hello, What is your task?' }
]);
You will start receiving messages in the onResponse handler:
Received response {
id: 'a9c62ad3-2215-4d31-a91a-f40d2ae43711',
msg: { type: 'task_started' }
}
Received response {
id: 'a9c62ad3-2215-4d31-a91a-f40d2ae43711',
msg: {
type: 'agent_reasoning',
text: '**Explaining my task**\n' +
'\n' +
`The user asked, "Hello, what is your task?" It seems they're looking for a brief description, so I'll clarify that I'm an AI assistant focused on coding tasks. I'll summarize my role: "I'm here to help by editing and testing code files, diagnosing issues, and more." I want to keep it straightforward without making any code changes. I'll make sure my response clearly reflects that!`
}
}
Received response {
id: 'a9c62ad3-2215-4d31-a91a-f40d2ae43711',
msg: {
type: 'agent_message',
message: 'I’m your on-demand coding assistant: I can browse the repository in our session, edit or add files with precise patches, run builds and tests, diagnose failures, and help you implement or fix features. Just let me know what you need next!'
}
}
Received response {
id: 'a9c62ad3-2215-4d31-a91a-f40d2ae43711',
msg: {
type: 'task_complete',
last_agent_message: 'I’m your on-demand coding assistant: I can browse the repository in our session, edit or add files with precise patches, run builds and tests, diagnose failures, and help you implement or fix features. Just let me know what you need next!'
}
}
As you can see from the responses, we first get a message that the task has started, followed by agent_reasoning explaining what the agent is doing, then the answer to the question itself, and finally the task completion.
This is already enough to wrap it in a nice chat interface using something like assistant-ui or CopilotKit and get a prototype of a vibe-coding service, without even writing your own agent loop to work with files!
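For example, here is a minimal sketch of a helper that turns one request into a promise resolving with the final agent message. It only uses the SDK calls shown above; the assumption is that the id returned by sendUserMessage matches the id on the response events:
import { CodexSDK, CodexResponse, CodexMessageTypeEnum } from 'codex-js-sdk';
// Hypothetical helper: send one prompt and wait for the final agent message
function ask(sdk: CodexSDK, prompt: string): Promise<string | undefined> {
  return new Promise((resolve, reject) => {
    let requestId: string;
    // Subscribe before sending so no events are missed
    const dispose = sdk.onResponse((response: CodexResponse) => {
      if (response.id !== requestId) return; // ignore events from other requests
      if (response.msg.type === CodexMessageTypeEnum.TASK_COMPLETE) {
        dispose();
        resolve(response.msg.last_agent_message);
      } else if (response.msg.type === CodexMessageTypeEnum.ERROR) {
        dispose();
        reject(new Error(response.msg.message));
      }
    });
    requestId = sdk.sendUserMessage([{ type: 'text', text: prompt }]);
  });
}
// Usage: const answer = await ask(sdk, 'Summarize this repository');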
Advanced: event types, command approval, patches, image input
Now let's take a closer look at the types of events that Codex sends us and the actions we can send back to it:
User requests:
All requests from the user are sent in the following format:
interface Request<Operation> {
id: string;
op: Operation;
}
Below, I will only consider the Operation type.
Message from the user
It can be text, an image, or mixed:
type InputItem =
| { type: 'text'; text: string }
| { type: 'image'; image_url: string }
| { type: 'local_image'; path: string };
interface UserInputOperation {
type: 'user_input';
items: InputItem[];
}
Via JS SDK:
const requestId = sdk.sendUserMessage([
{ type: 'text', text: 'What is in these pictures?' },
{ type: 'image', image_url: 'data:image/png;base64,iVBORw0KGgo........' },
{ type: 'local_image', path: './assets/flexbe_logo.png' }
]);
Confirmation of command execution:
By default, codex only executes safe commands; for example, it cannot delete a file without asking the user. In response to such a request, the user must reply with the following message:
enum ReviewDecision {
APPROVED = 'approved',
APPROVED_FOR_SESSION = 'approved_for_session',
DENIED = 'denied',
ABORT = 'abort'
}
interface ExecApprovalOperation {
type: 'exec_approval';
id: string;
decision: ReviewDecision;
}
Using in SDK:
sdk.handleCommand(requestId, true); // approved
sdk.handleCommand(requestId, true, true); // approved for session
sdk.handleCommand(requestId, false); // denied
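In practice you usually wire this into the response handler. Here is a minimal sketch (assuming the id expected by handleCommand is the top-level id of the approval request event):
sdk.onResponse((response: CodexResponse) => {
  if (response.msg.type === CodexMessageTypeEnum.EXEC_APPROVAL_REQUEST) {
    const { command, cwd, reason } = response.msg;
    console.log(`Codex wants to run "${command.join(' ')}" in ${cwd}`, reason ?? '');
    // Example policy: approve read-only git commands for the whole session, deny everything else
    const isSafe = command[0] === 'git' && ['status', 'diff', 'log'].includes(command[1]);
    sdk.handleCommand(response.id, isSafe, isSafe);
  }
});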
Confirming changes:
Similar to executing commands, codex will ask for confirmation when changing files:
interface PatchApprovalOperation {
type: 'patch_approval';
id: string;
decision: ReviewDecision;
}
Using in SDK:
sdk.handlePatch(requestId, true); // approved
sdk.handlePatch(requestId, true, true); // approved for session
sdk.handlePatch(requestId, false); // denied
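The same pattern works for file changes. A sketch that prints which files the patch touches before approving it (with the same assumption about the response id):
sdk.onResponse((response: CodexResponse) => {
  if (response.msg.type === CodexMessageTypeEnum.APPLY_PATCH_APPROVAL_REQUEST) {
    // changes is a map of file path -> FileChange (see the message types below)
    console.log('Codex wants to modify:', Object.keys(response.msg.changes));
    sdk.handlePatch(response.id, true); // approve this patch only
  }
});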
Canceling a request or command/change
interface InterruptOperation {
type: 'interrupt';
}
Using in SDK:
sdk.abort(requestId);
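A typical use case is a watchdog that cancels a request taking too long; a rough sketch:
const requestId = sdk.sendUserMessage([{ type: 'text', text: 'Refactor the whole project' }]);
// Cancel the request if Codex has not finished within two minutes
const timer = setTimeout(() => sdk.abort(requestId), 2 * 60 * 1000);
sdk.onResponse((response: CodexResponse) => {
  if (response.id === requestId && response.msg.type === CodexMessageTypeEnum.TASK_COMPLETE) {
    clearTimeout(timer);
  }
});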
The SDK also allows you to send raw messages of any type:
sdk.sendRaw({
id: 'request-id',
op: {
type: 'interrupt',
}
})
Codex response format
To subscribe to messages from Codex, execute the following in SDK:
const dispose = sdk.onResponse((response) => console.log(response));
In addition to several system messages, you can process 13 different messages from Codex:
Task start and completion events (from receiving a message from the user to the final response)
export interface TaskStartedMessage {
type: CodexMessageTypeEnum.TASK_STARTED;
}
export interface TaskCompleteMessage {
type: CodexMessageTypeEnum.TASK_COMPLETE;
last_agent_message?: string;
}
Error during execution:
export interface ErrorMessage {
type: CodexMessageTypeEnum.ERROR;
message: string;
}
Model Reasoning:
interface AgentReasoningMessage {
type: CodexMessageTypeEnum.AGENT_REASONING;
text: string;
}
Message from agent:
interface AgentMessage {
type: CodexMessageTypeEnum.AGENT_MESSAGE;
message: string;
}
Request for confirmation of command execution
export interface ExecApprovalRequestMessage {
type: CodexMessageTypeEnum.EXEC_APPROVAL_REQUEST;
command: string[];
cwd: string;
reason?: string;
}
Command execution
export interface ExecCommandBeginMessage {
type: CodexMessageTypeEnum.EXEC_COMMAND_BEGIN;
call_id: string;
command: string[];
cwd: string;
}
Result of command execution
export interface ExecCommandEndMessage {
type: CodexMessageTypeEnum.EXEC_COMMAND_END;
call_id: string;
command?: string[];
stdout: string;
stderr: string;
exit_code: number;
}
Start of communication with the MCP server
export interface McpToolCallBeginMessage {
type: CodexMessageTypeEnum.MCP_TOOL_CALL_BEGIN;
call_id: string;
server: string;
tool: string;
arguments?: any;
}
Result of accessing the MCP server
export interface McpToolCallEndMessage {
type: CodexMessageTypeEnum.MCP_TOOL_CALL_END;
call_id: string;
result: {
is_error?: boolean;
[key: string]: any;
};
}
Request to the user to confirm file changes
export interface ApplyPatchApprovalRequestMessage {
type: CodexMessageTypeEnum.APPLY_PATCH_APPROVAL_REQUEST;
changes: Record<string, FileChange>;
reason?: string;
grant_root?: string;
}
Start of application of changes
export interface PatchApplyBeginMessage {
type: CodexMessageTypeEnum.PATCH_APPLY_BEGIN;
call_id: string;
auto_approved: boolean;
changes: Record<string, FileChange>;
}
Result of applying changes (success or error)
export interface PatchApplyEndMessage {
type: CodexMessageTypeEnum.PATCH_APPLY_END;
call_id: string;
stdout: string;
stderr: string;
success: boolean;
}
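With these types in place, a response handler typically becomes a single switch over response.msg.type. A minimal sketch covering a few of the events above:
sdk.onResponse((response: CodexResponse) => {
  const msg = response.msg;
  switch (msg.type) {
    case CodexMessageTypeEnum.EXEC_COMMAND_BEGIN:
      console.log(`$ ${msg.command.join(' ')} (in ${msg.cwd})`);
      break;
    case CodexMessageTypeEnum.EXEC_COMMAND_END:
      console.log(msg.exit_code === 0 ? msg.stdout : msg.stderr);
      break;
    case CodexMessageTypeEnum.AGENT_MESSAGE:
      console.log('Agent:', msg.message);
      break;
    case CodexMessageTypeEnum.ERROR:
      console.error('Error:', msg.message);
      break;
  }
});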
Advanced configuration
You can configure Codex behavior using the config file located at ~/.codex/config.toml, or by overriding some parameters when initializing the SDK. For example, to switch the model to Gemini:
const sdk = new CodexSDK({
config: {
model: 'gemini-2.5-pro-preview-06-05',
model_provider: 'gemini',
}
});
See the official Codex config.md for details and the full list of options.
You can also override the session parameters of an already running instance. For example, this was the only way I could get the Claude model to work and override the instructions:
await sdk.configureSession({
instructions: 'You are a helpful coding assistant, your name is "Flexbe Bot". Provide concise and clear responses.',
model: 'claude-3-7-sonnet-latest',
provider: {
name: 'Anthropic',
base_url: 'https://api.anthropic.com/v1',
env_key: 'ANTHROPIC_API_KEY',
env_key_instructions: 'Create an API key (https://console.anthropic.com) and export it as an environment variable.',
wire_api: WireApi.CHAT
},
model_reasoning_effort: ModelReasoningEffort.NONE,
model_reasoning_summary: ModelReasoningSummary.NONE,
});
Conclusion
codex-js-sdk allows you to use Codex in your Node.js application: you can embed Codex into your dev tools, create a custom code assistant, run autotests before pushing, check patches before merging, or even build your own Copilot-style interface. It's all up to you!
I welcome comments and feedback
👉 GitHub: https://github.com/openai/codex
👉 SDK: https://npmjs.com/package/codex-js-sdk