OpenAI just raised the bar again. The release of o3 and o4-mini marks a huge leap in AI reasoning—now with tool access, multimodal thinking, and even the ability to reason with images. And that’s not all…
o3 is now OpenAI’s top-tier model, delivering state-of-the-art performance in math, science, coding, and multimodal reasoning.
o4-mini is smaller but shockingly smart—faster, cheaper, and outperforming all previous "mini" models, including on advanced math benchmarks like AIME 2025.
Both models are fully agentic—they can use web search, Python, image generation, and more, chaining tools together as part of their thought process.
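For a concrete taste of what that agentic loop looks like from the API side, here’s a minimal sketch using OpenAI’s Python SDK and the Responses API with its hosted web-search tool. The tool identifier (web_search_preview) reflects the launch-era API and, like the model name, is an assumption that may differ in your SDK version.

```python
# Minimal agentic sketch: hand the model a hosted tool and let it decide
# when to call it. Assumes the OpenAI Python SDK's Responses API; the
# "web_search_preview" tool type is a launch-era identifier and may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o4-mini",
    tools=[{"type": "web_search_preview"}],  # the model chooses when to search
    input="Search for the AIME 2025 results reported for OpenAI's o-series "
          "models and summarize them in two sentences.",
)

print(response.output_text)  # final answer, after any tool calls in between
```

The point is the chaining: you don’t script the search yourself; the model invokes the tool mid-reasoning and folds the results into its answer.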
They’re also OpenAI’s first models to “think with images”, integrating visual analysis directly into logical reasoning and essentially blending sight with thought.
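Notably, there’s no special flag for this: you simply pass an image alongside text and the model folds visual analysis into its chain of thought. A minimal sketch, again assuming the Responses API’s content types; the image URL is purely hypothetical.

```python
# Image-grounded reasoning sketch, assuming the Responses API's
# input_text / input_image content types. The URL is hypothetical.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o3",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text",
             "text": "What trend does this chart show, and what might explain it?"},
            {"type": "input_image",
             "image_url": "https://example.com/chart.png"},
        ],
    }],
)

print(response.output_text)
```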
OpenAI also released Codex CLI, an open-source terminal agent that puts these reasoning models to work on real coding tasks right in your command line.
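Getting started looks roughly like this, going by the install flow in the open-source repo; the package name and invocation are assumptions that may shift between versions.

```bash
# Install the agent globally (assumes Node.js; package name per the repo)
npm install -g @openai/codex

# The CLI reads the standard OpenAI API key from the environment
export OPENAI_API_KEY="sk-..."

# Run it inside a project and describe the task in plain English
codex "explain this codebase to me"
```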
Greg Brockman calls the release “a GPT-4-level step into the future”: models that can now produce novel scientific ideas, not just summaries.
This isn’t just faster AI. This is AI thinking differently—using tools, interpreting visuals, solving problems end-to-end, and generating new knowledge.
It feels like Step 4 of OpenAI’s roadmap:
From understanding → reasoning → tool use → discovery.
The race toward AGI just got very real.