
OpenAI Unveils Cutting-Edge o3 Reasoning Model on Final Day of 'Shipmas'

May 27, 2025

As the holiday season rolled around, companies jumped on the festive bandwagon with special deals and promotions, and OpenAI was no exception, launching its "12 days of OpenAI" event series. Announced via a post on X, the event kicked off on December 5, promising 12 days of live streams and a slew of new releases, both big and small. The grand finale came on December 20, when OpenAI delivered the most significant announcement of the series.

What are the '12 days of OpenAI'?

OpenAI CEO Sam Altman took to X to share the details of this eagerly anticipated event. Starting at 10 a.m. PT on December 5, the event ran daily for 12 weekdays, featuring live streams with launches and demos. Altman described the releases as a mix of "big ones" and "stocking stuffers," ensuring there was something for everyone.

🎄🎅starting tomorrow at 10 am pacific, we are doing 12 days of openai.
each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers.
we’ve got some great stuff to share, hope you enjoy! merry christmas.

— Sam Altman (@sama) December 4, 2024

What's dropped?

Friday, December 20

On the final day of the event, OpenAI unveiled its latest reasoning models, collectively known as o3, comprising o3 and o3 mini. OpenAI skipped the name "o2" to avoid a conflict with Telefonica's O2 telecommunications brand. While these models aren't yet available to the general public, they show promising advancements: o3 outperforms its predecessor, o1, on math and science benchmarks such as AIME 2024 and GPQA, and it achieved a new state-of-the-art score on the ARC-AGI benchmark, inching closer to AGI, though not quite there yet. o3 mini offers three reasoning-effort options, low, medium, and high, with performance varying based on how much thinking time the model is allowed. OpenAI is also opening up these models for external safety testing, and researchers can apply for early access until January 10. Sam Altman concluded the live stream by announcing the planned launch of o3 mini at the end of January, followed by the full o3 model. The company also introduced "deliberative alignment," a new safety-focused training paradigm for LLMs.

Thursday, December 19

The second-to-last day focused on the macOS desktop app and its interoperability with other apps. Users can now automate their work with ChatGPT on macOS, with new features supporting a range of coding and writing apps. The desktop app also supports Advanced Voice Mode, and these features are already available with the latest app version and a suitable subscription. OpenAI emphasized privacy, noting that ChatGPT only interacts with other apps when manually prompted. The company teased an exciting announcement for the final day.

Wednesday, December 18

Ever wished you could use ChatGPT without Wi-Fi? Now you can simply call 1-800-ChatGPT for access, and OpenAI encourages users to save the number in their contacts. The calling service is available across the US with 15 minutes of free calls per month, while users internationally can message ChatGPT on WhatsApp. This feature aims to make ChatGPT accessible to a broader audience.

Tuesday, December 17

The ninth day was a "Mini Dev Day," focusing on developer features and updates. The o1 model is now out of preview in the API, supporting function calling, structured outputs, and more. A new "reasoning effort" parameter lets developers control how long the model thinks about a request, trading off cost and latency (a sketch follows below), while WebRTC support makes it easier to build low-latency voice applications on the Realtime API. The fine-tuning API now includes Preference Fine-Tuning, and new Go and Java SDKs are available in beta. An AMA session was held on the OpenAI GitHub platform following the live stream.
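
To illustrate the new parameter, here is a minimal sketch using the official openai Python SDK (v1.x); the model name, prompt, and effort level are illustrative assumptions, and access to o1 in the API depends on your account tier.

```python
# Minimal sketch: calling the o1 model with the new "reasoning effort" parameter.
# Assumes the official `openai` Python SDK (v1.x) and an OPENAI_API_KEY set in the
# environment; model availability depends on your API account tier.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",                 # reasoning model, now out of preview in the API
    reasoning_effort="medium",  # "low", "medium", or "high"
    messages=[
        {"role": "user", "content": "How many prime numbers are there below 100?"}
    ],
)

print(response.choices[0].message.content)
```

Lower effort settings respond faster and use fewer reasoning tokens; higher settings let the model spend more time thinking before it answers.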

Monday, December 16

The second Monday focused on Search in ChatGPT, now available to all users, not just ChatGPT Plus subscribers. The search experience has been improved, especially on mobile, with an enriched map experience and image-rich visual results. Search is also integrated into Advanced Voice mode, allowing users to search the web verbally. OpenAI teased developers about the upcoming "mini Dev Day."

Friday, December 13

One of the most requested features was delivered on this day: "Projects," a new way to organize and customize chats in ChatGPT. Users can create Projects with titles, customized folder colors, and relevant files, making it easier to manage conversations and pick up where they left off. This feature is rolling out to Plus, Pro, and Teams users, with plans to expand to free users soon.

Thursday, December 12

After apologizing for a previous live stream issue, OpenAI announced that Advanced Voice Mode now includes screen-sharing and visual capabilities. This allows ChatGPT to provide contextually relevant assistance based on what it sees. A special Santa voice was also introduced for the holiday season, available across all platforms where ChatGPT voice mode is accessible. Video and screen sharing are rolling out to Team users and most Pro and Plus subscribers, with plans for broader access in the future.

Wednesday, December 11

With the release of iOS 18.2, OpenAI walked through its integration with Apple's ecosystem. Siri can now use ChatGPT for queries outside its scope, with user permission. Visual Intelligence on the iPhone 16 allows users to point their camera at objects and use ChatGPT for information or tasks like translation. Writing Tools now feature a "Compose" tool, enabling text creation and image generation using DALL-E. All features are subject to ChatGPT's daily usage limits.

Tuesday, December 10

Canvas, a favorite among ChatGPT power users, is now available to all web users in GPT-4o, no longer limited to ChatGPT Plus beta users. Integrated natively into GPT-4o, Canvas provides a seamless interface for managing Q&A exchanges and project edits. It can also be used with custom GPTs and even run Python code directly, enhancing productivity for coding tasks.

Monday, December 9

OpenAI delivered the much-anticipated release of its video model, Sora Turbo, which is smarter and faster than the February preview. Available to ChatGPT Plus and Pro users in the US, Sora can generate text-to-video and video-to-video content. The model features an explore page for viewing creations, and a live demo showcased its impressive capabilities. OpenAI also introduced Storyboard, a tool for specifying inputs for each segment of a video sequence.

Friday, December 6

On the second day, OpenAI expanded access to its Reinforcement Fine-Tuning Research Program, allowing developers and machine learning engineers to fine-tune models for specific tasks. The program encourages applications from research institutes, universities, and enterprises, with plans to make it publicly available in early 2025.

Thursday, December 5

OpenAI kicked off the event with a bang, unveiling ChatGPT Pro, a new subscription tier for superusers, and the full version of its o1 model. The full o1 model offers improved performance and speed, replacing o1-preview for ChatGPT Plus and Pro users. ChatGPT Pro provides unlimited access to OpenAI's best models, including o1, o1-mini, and GPT-4o, as well as Advanced Voice Mode, at a cost of $200 per month.

Where can you access the live stream?

The live streams were hosted on the OpenAI website and immediately uploaded to its YouTube channel. If you missed any of the 12 days of OpenAI, you can catch up by watching the recordings on the company's YouTube channel.

Comments (6)
ScottGarcía July 30, 2025 at 9:41:20 PM EDT

This o3 model sounds like a game-changer! 🤯 Can't wait to see how it stacks up against other AI reasoning models out there.

ElijahCollins May 27, 2025 at 7:14:33 PM EDT

OpenAI's o3 model launch was super cool! It felt like Christmas came early. Watching their live sessions over these 12 days was engaging, though sometimes the tech glitches were frustrating. Overall, it's a huge leap forward in AI reasoning capabilities. 🎅🌟

BrianMartinez May 27, 2025 at 12:55:22 PM EDT

OpenAI's o3 model presentation was impressive! The live sessions over the 12 days were very interesting, although there were some technical issues. Still, it's a big step forward in AI reasoning capability. 🎅🌟

StephenMartinez May 27, 2025 at 11:49:13 AM EDT

The launch of OpenAI's o3 reasoning model is really exciting! The 12 days of live streams were fun too, though the occasional technical issues were a headache. Overall, it's a big step forward for AI capabilities! 🎉🌟

GaryWalker May 27, 2025 at 5:48:54 AM EDT

OpenAI's o3 model announcement was thrilling! The 12 days of live sessions were fun too, though it was a shame that technical problems popped up now and then. Still, there's no doubt AI reasoning capability has improved! 🎄✨

StevenGonzalez May 27, 2025 at 3:37:47 AM EDT

OpenAI's o3 model reveal is seriously impressive! The 12 days of live sessions were fun too, though the technical issues were a bit disappointing. Still, AI reasoning has gotten so much better! 🎄🌟
