OpenAI this week: Pentagon, GPT-5.3 and Codex on Windows

It’s been a wild week over at OpenAI. Between a Pentagon deal that blew up in their face, a model update aimed at making ChatGPT less insufferable, and their coding app finally landing on Windows, there’s quite a bit to unpack. Here’s what happened and why it matters.

The Pentagon deal

This is the big one. OpenAI announced a deal with the Department of Defense on Friday - hours after the Trump administration directed federal agencies to stop using Anthropic’s tools and hours before the U.S. carried out strikes on Iran. The timing alone raised eyebrows, but the actual content of the contract is where things got uncomfortable.

Critics pointed out that the published snippets of the contract were purposefully vague and left carve-outs for domestic surveillance by intelligence agencies within the Defense Department. The concern: under current U.S. law, government authorities can buy commercially available data from brokers. Feed that into an AI system, and you effectively have mass surveillance of Americans without technically breaking any rules.

The backlash was swift. Many of OpenAI’s own employees signed an open letter supporting Anthropic. Consumers sent Claude to the top of Apple’s App Store charts for the first time. Sam Altman admitted the deal “looked opportunistic and sloppy” and the contract was amended to state that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Whether that amendment is sufficient is a different conversation. The word “intentionally” is doing a lot of heavy lifting in that sentence. Meanwhile, Anthropic and the Pentagon are reportedly back at the negotiating table.

Why this matters for developers

This isn’t just politics. The companies building the models we use every day are making decisions about who they work with and under what constraints. If you’re integrating AI into products that handle user data, the ethical posture of your model provider is part of your risk profile. It’s worth paying attention to.

GPT-5.3 Instant

On a more technical note, OpenAI released GPT-5.3 Instant on March 3rd. The headline improvement: ChatGPT should now be less “cringe.” If you’ve ever asked a simple question and received a response that opened with “Great question!” followed by three paragraphs of caveats before getting to the actual answer, this update is for you.

The concrete numbers are more interesting than the marketing. OpenAI claims GPT-5.3 Instant cuts hallucinations by 26.8% when the model has web access and by 19.7% when it relies on internal knowledge alone. It also significantly reduces unnecessary refusals - those situations where you ask something perfectly reasonable and the model declines to answer out of an abundance of caution.

The model is available to all ChatGPT users and in the API as gpt-5.3-chat-latest. GPT-5.2 Instant sticks around for paid users until June 3rd. And in typical OpenAI fashion, they teased that 5.4 is coming “sooner than you think” barely an hour after the announcement.

Codex app on Windows

The Codex app has landed on Windows for ChatGPT Business workspaces. If you haven’t been following Codex, it’s OpenAI’s answer to agentic coding - think Claude Code but from the OpenAI side. The Windows app gives you a desktop surface for running multiple Codex agents in parallel with isolated worktrees and reviewable diffs.

What caught my eye is the worktree approach. Each agent works in its own isolated branch, and you can review, edit, discard or turn the result into a pull request. This is similar to what Claude Code does with its -w (worktree) flag, and it’s a pattern that makes a lot of sense for parallel AI-assisted development. The app also ships with a plugin system and a skills marketplace, which is a slightly different approach to extensibility than what we’re seeing from other tools.
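The pattern itself is easy to try with plain git, which is what worktrees come from in the first place. Here’s a minimal sketch of the idea - one isolated checkout per agent, each on its own branch. The repo path and branch names are made up for illustration, and neither app necessarily shells out to git exactly like this:

```shell
# Hedged sketch of the isolated-worktree pattern using plain git; this is
# the general technique, not either app's actual implementation.
set -e
rm -rf /tmp/worktree-demo
mkdir -p /tmp/worktree-demo && cd /tmp/worktree-demo
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One isolated checkout per agent, each on its own new branch, so parallel
# edits never collide in the same working directory:
git worktree add -q ../agent-1 -b agent/task-1
git worktree add -q ../agent-2 -b agent/task-2

# Review an agent's result as a diff against the base commit...
git -C ../agent-1 diff HEAD

# ...then keep it (merge it or open a pull request) or discard it entirely:
git worktree remove ../agent-2
git branch -q -D agent/task-2
```

The appeal over full clones is that worktrees share one object store, so spinning up or throwing away an agent’s workspace is nearly free - which is exactly what you want when several agents are attempting tasks in parallel and most attempts won’t be kept.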

It’s available from the Microsoft Store and works with PowerShell natively or through WSL if you prefer a Linux environment.

Also worth noting

In funding news, OpenAI secured $110 billion in a record-breaking round, with SoftBank investing $30 billion and Amazon investing $50 billion. They also expanded their AWS agreement by $100 billion over 8 years, committing to roughly 2 gigawatts of Trainium capacity. The scale of investment in this space continues to be staggering.

Conclusion

It’s been a week that captures the current state of AI pretty well: genuinely useful technical progress (less hallucination, better tone, more tooling), wrapped in a layer of corporate and geopolitical complexity that’s getting harder to ignore. The Pentagon story in particular is a reminder that the models we build with don’t exist in a vacuum - and the companies behind them are making choices that affect more than just API performance.