Noam Brown: AI 'Reasoning' Models Could Have Emerged Decades Ago

Noam Brown, who leads AI reasoning research at OpenAI, said during a panel at Nvidia's GTC conference in San Jose that "reasoning" AI models could have arrived 20 years earlier had researchers known the right approach and algorithms. "There were various reasons why this research direction was neglected," he explained.
Reflecting on his research journey, Brown noted a crucial realization: "I noticed over the course of my research that, OK, there’s something missing. Humans spend a lot of time thinking before they act in a tough situation. Maybe this would be very useful [in AI]." This observation led him to develop AI models that mimic human-like reasoning, rather than relying solely on computational power.
His work at Carnegie Mellon University exemplifies this approach, most notably Pluribus, the AI that bested top human poker players. Pluribus was groundbreaking because it reasoned its way through problems rather than relying on the brute-force methods common at the time.
At OpenAI, Brown contributed to the development of o1, an AI model that uses a method known as test-time inference. This technique allows the AI to "think" before responding, enhancing its accuracy and reliability, especially in fields like mathematics and science.
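To make the idea concrete, here is a minimal, hypothetical sketch of spending extra compute at inference time: rather than returning a model's first answer, the system samples several independent attempts and keeps the most common result. The `sample_answer` function is a stand-in for any model call; this illustrates the general technique of test-time compute, not OpenAI's actual o1 implementation.

```python
from collections import Counter

def sample_answer(question: str) -> str:
    """Placeholder for a single (stochastic) model call that returns an answer."""
    raise NotImplementedError  # hypothetical: wire up a real model call here

def answer_with_extra_thinking(question: str, num_samples: int = 8) -> str:
    """Sample several independent attempts, then return the majority answer."""
    attempts = [sample_answer(question) for _ in range(num_samples)]
    most_common_answer, _count = Counter(attempts).most_common(1)[0]
    return most_common_answer
```

The design choice here is simple majority voting over sampled answers; the broader point is the same one Brown makes: letting the model "think" longer before committing to a response.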
During the panel discussion, Brown addressed the challenge of academic research competing with the scale of experiments conducted by large AI labs like OpenAI. He acknowledged the increasing difficulty due to the growing computational demands of modern models but suggested that academics could still contribute significantly by focusing on areas that require less computational power, such as designing model architectures.
He emphasized the potential for collaboration between academia and frontier labs, stating, "[T]here is an opportunity for collaboration between the frontier labs [and academia]. Certainly, the frontier labs are looking at academic publications and thinking carefully about, OK, does this make a compelling argument that, if this were scaled up further, it would be very effective. If there is that compelling argument from the paper, you know, we will investigate that in these labs."
Brown's comments are particularly timely as the Trump administration has proposed significant cuts to scientific funding, a move criticized by AI experts, including Geoffrey Hinton, who argue that such cuts could jeopardize AI research efforts globally.
He also pointed out the critical role of academia in improving AI benchmarking, noting, "The state of benchmarks in AI is really bad, and that doesn’t require a lot of compute to do." Current AI benchmarks often focus on obscure knowledge and fail to accurately reflect the capabilities that matter most to users, leading to confusion about AI models' true potential and progress.
*Updated 4:06 p.m. PT: An earlier version of this piece implied that Brown was referring to reasoning models like o1 in his initial remarks. In fact, he was referring to his work on game-playing AI prior to his time at OpenAI. We regret the error.*
Comments (13)
PatrickTaylor
August 7, 2025 at 10:00:59 PM EDT
Mind-blowing to think AI reasoning could’ve popped off 20 years ago! 🤯 Noam’s talk makes me wonder what other breakthroughs we’re sleeping on right now.
JackMitchell
July 31, 2025 at 10:48:18 PM EDT
Mind-blowing to think AI reasoning could’ve been cracked 20 years ago! 🤯 Makes you wonder what else we’re sitting on, just waiting for the right spark. Noam’s talk sounds like a wake-up call for the AI world.
AlbertScott
July 23, 2025 at 4:50:48 AM EDT
Mind-blowing to think AI reasoning could've been cracked decades ago! 🤯 Makes you wonder what else we’re sitting on that’s just a breakthrough away.
LeviKing
April 23, 2025 at 11:47:27 AM EDT
Noam Brown's insights are truly amazing! How different would the world be if we'd had "reasoning" AI 20 years ago? 🤯 His talk at GTC made me realize just how much we missed. But rather than dwelling on being late, let's make sure we don't miss the next big thing!
WillGarcía
April 23, 2025 at 3:02:14 AM EDT
Noam Brown's insights are truly astonishing! How different would the world have been with "reasoning" AI 20 years ago? 🤯 His talk at GTC reminded us how much we've overlooked. But it's never too late, right? Let's not miss the next big thing!
ThomasYoung
April 22, 2025 at 4:59:34 AM EDT
Noam Brown's insights are astonishing! Imagine if we'd had "reasoning" AI 20 years ago? 🤯 His talk at GTC was a wake-up call about how much we've missed. But better late than never, right? Let's hope we don't miss the next big thing!