On April 16, 2026, OpenAI shipped the most significant update to Codex since the desktop app launched in February, and they named it accordingly: “Codex for (almost) everything.” The parenthetical is doing real work in that name. Before this update, Codex was a powerful AI coding assistant you interacted with through its own interface. After April 16, it is something closer to an autonomous agent: one that can see and control your macOS applications with its own cursor, browse live webpages inside the app, generate images in the same session where it writes code, remember your preferences across every future session, and connect to 90+ plugins that wire it directly into the tools your development team already uses. More than 3 million developers use Codex every week. This update changes what that looks like for all of them.
Computer Use: Codex Gets Eyes, a Cursor, and Access to Your Mac
Computer use is the most structurally significant capability in this update. Before April 16, Codex operated entirely within its own window — you provided context, it produced code or answers, and you manually carried the outputs into your other tools. Computer use reverses that relationship. Codex now runs as a background agent that can see your screen, move a cursor, click interface elements, and type into any macOS application it can reach.
OpenAI documented four categories of immediate use cases at launch:
- Native app testing — Codex can navigate through a macOS application’s UI to verify that a feature you just built behaves correctly, without you writing separate test infrastructure for GUI flows.
- Simulator-based workflows — iOS and Android simulator testing requires GUI interaction that coding assistants previously could not perform autonomously. Codex can now execute and observe those flows directly.
- Low-risk settings and configuration — Adjusting application preferences, switching environments, or updating build configurations through GUI-only interfaces that have no scripting API.
- GUI-only bug reproduction — When a bug only manifests through specific click sequences, Codex can execute that sequence, observe the result, and diagnose the issue without you having to narrate the steps manually.
Multiple agents can run in parallel — each with its own cursor — without interfering with your own active work in other windows. Codex uses a vision model to interpret screen state and a planning loop to decide which UI elements to interact with at each step.
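That vision-plus-planning loop follows the standard observe-plan-act pattern for computer-use agents. A minimal sketch of the control flow, with the model and OS calls stubbed out (every name here is illustrative, not Codex's actual API):

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # "click", "type", or "done"
    target: str = "" # UI element label or text to type

def plan_next_action(screen_text: str, goal: str) -> Action:
    # Stand-in for the vision + planning model: a real agent would send a
    # screenshot to a multimodal model and parse its chosen action.
    if goal in screen_text:
        return Action("done")
    return Action("click", target="Continue")

def run_agent(goal: str, capture_screen, execute, max_steps: int = 20) -> bool:
    """Observe the screen, plan one action, execute it, repeat until done."""
    for _ in range(max_steps):
        screen = capture_screen()
        action = plan_next_action(screen, goal)
        if action.kind == "done":
            return True
        execute(action)
    return False  # step budget exhausted without reaching the goal

# Toy harness: the "screen" reaches the goal state after two clicks.
state = {"clicks": 0}
def fake_screen() -> str:
    return "Setup complete" if state["clicks"] >= 2 else "Setup wizard"
def fake_execute(action: Action) -> None:
    state["clicks"] += 1

print(run_agent("Setup complete", fake_screen, fake_execute))  # True
```

The `max_steps` budget matters in practice: an agent that misreads screen state should fail loudly rather than click forever.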
Current limitations worth noting: Computer use is macOS-only at launch. The feature is also unavailable in the European Economic Area, the United Kingdom, and Switzerland pending regulatory review. OpenAI has stated EU and UK availability is planned but has not given a specific date. Windows support has not been announced. Before enabling it, review which applications Codex can access on your machine — particularly if you have sensitive credentials stored in native apps or access to internal infrastructure through desktop clients.
The In-App Browser: Closing the Frontend Iteration Loop
Frontend developers have had a persistent friction point with AI coding tools: the model generates code, you tab to a browser to check how it looks, spot a layout issue, switch back, describe the problem in text, wait for a fix, and repeat. The Codex in-app browser eliminates most of that context-switching.
The browser can open local development servers — http://localhost:3000 and similar — and public URLs that do not require authentication. Inside the browser view, you can click on specific elements and annotate them directly. Instead of translating a visual observation into a text description (“the button in the top right is misaligned with the card edge”), you point to it and ask Codex to fix it. The result moves from observation to code change without the description step.
The GitHub Pull Request integration is the other major use of the browser. You can pull up a PR in the Codex sidebar, see the diff and review comments side by side, and ask Codex to address specific reviewer feedback. The workflow becomes: read the PR comment in context, tell Codex which comment to address, let it read both the comment and the relevant code, and get the fix without manually mapping reviewer language back to file locations.
OpenAI explicitly describes the in-app browser as “early.” Pages requiring authentication do not load yet, which rules out most SaaS dashboards and staging environments behind login walls. For local dev servers and public staging URLs, it works today and meaningfully changes the feedback loop on frontend work.
Memory: Your Preferences Persist Across Sessions
Any developer who has used AI coding tools for more than a week has lived through the same friction: you explain your preferences once — TypeScript strict mode, named exports, your commit message format, which function in your codebase should never be touched — and the model forgets everything at session end. Tomorrow you explain it again. The repetition is not catastrophic, but it accumulates.
Codex memory is in preview as of April 16, but the capability addresses this directly. Codex now stores context from past interactions: preferences you have stated, corrections you have made when outputs were wrong, patterns it has observed across your sessions, and project-specific information it had to gather manually in previous sessions. When you start a new session, that knowledge is already loaded.
The practical effect shows up as faster first-pass quality on repeated task types. When you ask Codex to scaffold a new component, it already knows your file organization convention. When you ask it to write a test, it knows you use Vitest. When you ask for a commit message, it uses your team’s format. None of these are dramatic wins in a single session. Compounded over weeks of daily use, they shift the experience from “capable but amnesiac” to something that actually learns your workflow.
Memory can be reviewed and edited in Codex Settings. Retention follows OpenAI’s standard data handling policies. Teams working on sensitive codebases should review the retention scope during the preview period before relying on memory for confidential project context.
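The mechanics are easy to picture as a small key-value store that persists between sessions. A stdlib-only sketch of that idea (illustrative only — Codex's actual storage format and retention behavior are not public):

```python
import json
from pathlib import Path

class SessionMemory:
    """Preferences persisted to disk so a new session starts pre-loaded."""

    def __init__(self, path: Path):
        self.path = path
        # Load whatever previous sessions stored; start empty on first run.
        self.items = json.loads(path.read_text()) if path.exists() else {}

    def remember(self, key: str, value: str) -> None:
        self.items[key] = value
        self.path.write_text(json.dumps(self.items, indent=2))

    def forget(self, key: str) -> None:  # mirrors editing memory in Settings
        self.items.pop(key, None)
        self.path.write_text(json.dumps(self.items, indent=2))

# Session 1: the user states a preference once.
mem = SessionMemory(Path("codex_memory.json"))
mem.remember("test_runner", "vitest")
mem.remember("commit_format", "conventional-commits")

# Session 2: a fresh instance starts with that context already loaded.
mem2 = SessionMemory(Path("codex_memory.json"))
print(mem2.items["test_runner"])  # vitest
```

The `forget` method is the piece worth noticing: a memory feature is only trustworthy if the stored items are inspectable and deletable, which is what the Settings → Memory view provides.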
Image Generation with GPT-Image-1.5
Codex can now generate images directly inside a development workflow using GPT-Image-1.5, OpenAI’s latest image model. The integration is most useful where code and visuals are tightly coupled; it is not positioned as a general-purpose image generator.
Three use cases stand out as genuinely workflow-changing. First, UI mockup generation: describe a component and Codex produces a visual mockup alongside the implementation code, so design decisions happen before you write a line rather than after. Second, game asset creation: for game projects Codex is actively building, it can generate sprite, texture, or concept art assets without breaking out of the coding workflow. Third, frontend design exploration: when a feature’s visual direction is ambiguous, Codex can generate layout concepts as images before any code is written, surfacing design tradeoffs earlier in the process when they are cheaper to resolve.
For backend, API, or systems work, image generation rarely belongs in the workflow at all, and the integration stays out of the way. It surfaces when you are working on visual components and disappears when you are not.
90+ New Plugins: Three Layers of Capability
The Codex plugin library grew from approximately 40 to 90+ with this update. OpenAI organized the expansion into three distinct categories, each extending Codex’s capabilities in a different way.
Skills are task-specific capability extensions — specialized code review patterns, domain-specific problem-solving, documentation generators, and testing framework integrations that go beyond the base model’s default behaviors. They add what the model can do, not just what it can connect to.
App integrations are direct connections to external services. The new additions include several that meaningfully change how development teams can work:
- Atlassian Rovo — Codex can read, create, and update JIRA issues directly. A ticket describing a bug can become the input to a fix: Codex reads the ticket, implements the change, and updates the issue status without any copy-pasting between tools.
- GitLab Issues — equivalent JIRA-style integration for GitLab-based teams.
- CircleCI — Codex can inspect failed CI pipeline runs, read the failure output, and diagnose or fix the underlying code issue without you acting as the intermediary between the CI log and the editor.
- CodeRabbit — integration with the AI code review platform, bringing automated review feedback into the Codex workflow rather than a separate dashboard.
- Microsoft Suite — Codex can read from and write to Microsoft 365 documents. For enterprise teams whose project specifications and documentation live in Word, OneNote, or SharePoint, this makes Codex able to work directly from those sources rather than requiring you to extract and paste content manually.
- Neon by Databricks — serverless Postgres integration for database-heavy development workflows.
- Render — deployment platform integration for triggering and monitoring deployments directly from Codex.
- Remotion — programmatic video generation library integration, relevant for teams building video automation or data visualization tooling.
- Superpowers — an AI-powered workflow enhancement layer that expands what individual plugin calls can accomplish.
MCP servers are the third category and arguably the most extensible one. Model Context Protocol — the open standard originally developed by Anthropic — has become the common interface for connecting AI models to external tools and data sources. Any MCP-compatible server can be configured as a Codex plugin. The 90+ official plugins are a floor, not a ceiling: organizations running custom MCP servers for internal tools, proprietary databases, or specialized workflows can wire them into Codex using the same mechanism as the official integrations.
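Under the hood, MCP is a JSON-RPC 2.0 protocol: clients discover tools with a `tools/list` request and invoke them with `tools/call`. A stdlib-only sketch of the server side of that exchange (heavily simplified — a real server would use an official MCP SDK and handle initialization, transports, and input schemas; the `lookup_ticket` tool is a made-up example):

```python
import json

# One registered tool: name -> (description, handler).
TOOLS = {
    "lookup_ticket": (
        "Fetch a ticket summary by ID",
        lambda args: f"Ticket {args['id']}: login button misaligned",
    ),
}

def handle(request_json: str) -> str:
    """Dispatch a single MCP-style JSON-RPC request and return the response."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [
            {"name": name, "description": desc}
            for name, (desc, _) in TOOLS.items()
        ]}
    elif req["method"] == "tools/call":
        _, fn = TOOLS[req["params"]["name"]]
        text = fn(req["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "unknown method"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

resp = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "lookup_ticket", "arguments": {"id": "DEV-42"}},
}))
print(json.loads(resp)["result"]["content"][0]["text"])
```

Because the protocol is this uniform, an internal tool wrapped this way looks to Codex exactly like an official plugin does.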
Remote Devboxes, Multiple Terminals, and Rich File Previews
Three additions in this update are quieter but worth flagging for development team workflows.
Remote devbox support via SSH is in alpha. Codex can now connect to remote development environments over SSH, which means its agent capabilities are not limited to your local machine. For teams that standardize on cloud-based dev environments — GitHub Codespaces, Gitpod, Coder, or internal dev infrastructure — this means Codex can operate in the environment where your actual code lives, not a local clone of it. Computer use and terminal capabilities extend into the remote session.
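The connection side works like any other SSH client: if your devbox is reachable through a standard `~/.ssh/config` entry, a host alias should be all the configuration involved. A generic fragment (hostname, user, and key path are placeholders, not Codex-specific settings):

```
# ~/.ssh/config — placeholder values
Host devbox
    HostName devbox.internal.example.com
    User dev
    IdentityFile ~/.ssh/id_ed25519
    ForwardAgent no   # avoid exposing local keys to the remote session
```

Keeping `ForwardAgent` off for agent-driven sessions is a reasonable default while the feature is in alpha.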
Multiple terminal tabs let you run parallel shell sessions inside Codex. Previously, a single terminal context serialized tasks that could logically run in parallel. With multiple tabs, you can have a dev server running in one terminal, a test runner in another, and a build process in a third — with Codex monitoring and interacting with all of them simultaneously rather than one at a time.
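What multiple tabs enable is ordinary process-level parallelism. The equivalent from a script, with stubbed `echo` commands standing in for a dev server, test runner, and build (illustrative; assumes a Unix-like shell environment):

```python
import subprocess

# Stand-ins for a dev server, a test runner, and a build — each would live
# in its own Codex terminal tab. Popen starts all three without waiting.
commands = {
    "dev-server": ["echo", "dev server listening on :3000"],
    "tests":      ["echo", "12 tests passed"],
    "build":      ["echo", "build finished"],
}

procs = {
    name: subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for name, cmd in commands.items()
}

# Collect output from all sessions rather than blocking on one at a time.
results = {}
for name, proc in procs.items():
    out, _ = proc.communicate()
    results[name] = (proc.returncode, out.strip())

for name, (code, out) in results.items():
    print(f"[{name}] exit={code}: {out}")
```

The single-terminal version of this is the serialized loop the paragraph above describes: start one process, wait, read, start the next.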
Rich file previews added rendering support for PDFs, spreadsheets, slide decks, and documents directly in the Codex sidebar. When your project context includes a product spec in a PDF, a data model in a spreadsheet, or a design brief in a slide deck, Codex can read and reason about that document directly. You no longer need to extract and paste the relevant content manually before asking Codex to work from it.
What This Update Signals About Where AI Coding Is Heading
Read individually, each feature in this update is useful. Read together, they describe a specific architectural direction: Codex is being built toward an agent that can receive a task and complete the full implementation cycle with minimal human coordination work at each step.
Consider what that cycle looks like today: a developer reads a JIRA ticket, opens a file in an editor, writes or modifies code, checks it in a browser or simulator, fixes issues, opens a terminal to run tests, commits the change, opens a PR, and then cycles back through review comments before the work ships. Each of those steps requires the developer to act as the coordinator: reading one tool, translating the output, moving it to the next tool, and repeating.
After April 16, Codex can read the JIRA ticket directly, write and modify code, open the browser to check the result, see and click through a simulator, run tests in a terminal tab, and address PR review comments — all within one agent session. The developer still drives the work. But the number of steps that require the developer to manually bridge between tools is shrinking with each update.
The “(almost)” in the product name is honest. Computer use is macOS-only and geographically restricted. The browser cannot handle authenticated sessions. Memory is in preview with retention limits. Many of the new plugins are early-stage integrations. These are not minor caveats — they represent real gaps in daily professional workflows. But they are the expected state of an ambitious product moving fast, not signs of fundamental limitations.
For developers evaluating their AI coding tool stack in mid-2026: the question is no longer whether AI-assisted development is viable. It demonstrably is, for 3 million Codex users alone. The question is how quickly the remaining coordination gaps close, and whether the tool you are using today is positioned to close them.
How to Get Started with the New Features
Update Codex to the latest version through your macOS app store or via the direct download at the Codex product page. All April 16 features ship in this update.
Computer use is disabled by default. Enable it in Codex Settings under the Computer Use section. OpenAI recommends starting with contained, low-risk workflows: simulator-based testing on development builds, GUI-based configuration tasks, and local app testing before using it on machines with sensitive credentials or access to production infrastructure.
The in-app browser is available immediately after updating. For local dev servers, no additional configuration is needed — navigate to localhost in the Codex browser just as you would in Chrome or Safari. GitHub PR integration requires connecting your GitHub account in Codex Settings if you have not already done so.
Memory is enabled by default for accounts that have access to the preview. Navigate to Settings → Memory to review what Codex has stored, edit individual items, or clear the memory entirely. Given that Codex will accumulate codebase-specific context across sessions, reviewing the retention scope before using it on confidential projects is prudent.
Plugins are available in Codex → Plugins. Each new plugin requires separate authorization to access its external service — the connection flow is straightforward and handled in-app. For MCP server integrations, you provide an endpoint URL and authentication credentials for private servers. The full plugin catalog now serves as a reasonable starting point for auditing which of your team’s existing tools can be brought directly into Codex sessions.
Written by
Anup Karanjkar
Expert contributor at WOWHOW. Writing about AI, development, automation, and building products that ship.