For the first time since GPT-2 in 2019, OpenAI is returning to the open-weight arena. On March 31, 2025, CEO Sam Altman announced via X that the company plans to release a new open-weight language model with advanced reasoning capabilities, a major strategic shift under pressure from rising open-source competitors.
The model is said to rival o3-mini in its reasoning abilities, handling tasks like math, coding, and logical problem-solving.
Unlike open-source models, this will be open-weight—making the trained weights public, but not the full source code, training data, or methods.
Developers will be able to fine-tune and run the model locally, offering flexibility without exposing the entire stack.
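What would "run it locally" look like in practice? If the weights end up on a hub such as Hugging Face, loading them could follow the usual transformers workflow. The sketch below is purely illustrative: the repository id, prompt, and generation settings are placeholder assumptions, not anything OpenAI has announced.

```python
# Hypothetical sketch: loading an open-weight model with Hugging Face transformers.
# The repo id "openai/open-weight-reasoning" is a made-up placeholder;
# OpenAI has not published a model name or hub location.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/open-weight-reasoning"  # placeholder, not a real repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Fine-tuning would likely follow the same pattern, typically through parameter-efficient methods such as LoRA, since fully retraining a reasoning-scale model is costly for most teams.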
Community engagement is central: OpenAI plans events across San Francisco, Europe, and Asia-Pacific to demo early versions and collect feedback.
Safety matters: OpenAI will evaluate risks using its Preparedness Framework, with extra caution due to potential misuse or post-release modifications.
OpenAI has long resisted opening up its models. But with DeepSeek-R1 already out and Meta’s LLaMA family surpassing 1 billion downloads, the competitive pressure is real.
Altman has framed the move as a strategic pivot to stay relevant and regain trust among developers.
While no firm launch date has been set, all signs point to summer 2025. If the model lives up to the hype, it could reshape how AI is built, shared, and customized in a rapidly evolving space.