Noam Brown: AI 'Reasoning' Models Could Have Emerged Decades Ago


April 10, 2025

Noam Brown, a leading researcher in AI reasoning at OpenAI, recently shared insights at Nvidia's GTC conference in San Jose, suggesting that advancements in "reasoning" AI could have been achieved 20 years earlier if the right methods and algorithms had been known. "There were various reasons why this research direction was neglected," he explained, highlighting a gap in the field that could have been filled much sooner.

Reflecting on his research journey, Brown noted a crucial realization: "I noticed over the course of my research that, OK, there’s something missing. Humans spend a lot of time thinking before they act in a tough situation. Maybe this would be very useful [in AI]." This observation led him to develop AI models that mimic human-like deliberation rather than relying solely on computational power. His work at Carnegie Mellon University, particularly on Pluribus, an AI that bested top human poker players, exemplifies this approach. Pluribus was groundbreaking because it used reasoning to solve problems, in contrast with the brute-force methods more common at the time.

At OpenAI, Brown contributed to the development of o1, an AI model that uses a technique known as test-time inference. This allows the model to "think" before responding, improving its accuracy and reliability, especially in fields like mathematics and science.

During the panel discussion, Brown addressed the challenge academic researchers face in competing with the scale of experiments run by large AI labs like OpenAI. He acknowledged that this is becoming harder as the computational demands of modern models grow, but suggested that academics can still contribute significantly by focusing on areas that require less compute, such as designing model architectures. He emphasized the potential for collaboration between academia and frontier labs: "[T]here is an opportunity for collaboration between the frontier labs [and academia]. Certainly, the frontier labs are looking at academic publications and thinking carefully about, OK, does this make a compelling argument that, if this were scaled up further, it would be very effective. If there is that compelling argument from the paper, you know, we will investigate that in these labs."

Brown's comments are particularly timely, as the Trump administration has proposed significant cuts to scientific funding, a move criticized by AI experts, including Geoffrey Hinton, who argue that such cuts could jeopardize AI research efforts globally.

He also pointed out the critical role academia can play in improving AI benchmarking, noting, "The state of benchmarks in AI is really bad, and that doesn’t require a lot of compute to do." Current AI benchmarks often focus on obscure knowledge and fail to accurately reflect the capabilities that matter most to users, leading to confusion about AI models' true potential and progress.

*Updated 4:06 p.m. PT: An earlier version of this piece implied that Brown was referring to reasoning models like o1 in his initial remarks. In fact, he was referring to his work on game-playing AI prior to his time at OpenAI. We regret the error.*
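"Test-time inference" broadly means spending extra compute at answer time rather than only at training time. As a rough, hypothetical illustration (not OpenAI's actual method for o1), one simple form of test-time compute is self-consistency: sample many candidate answers and return the majority vote. The `sample_answer` stub below is a stand-in for a stochastic model pass:

```python
import random
from collections import Counter

def sample_answer(question, rng):
    # Stand-in for one stochastic "reasoning" pass of a model.
    # A real model would generate a chain of thought and a final answer;
    # here we simulate a noisy solver that is right ~70% of the time.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 9))

def answer_with_test_time_compute(question, n_samples=25, seed=0):
    """Spend extra inference-time compute: sample many candidate
    answers and return the most common one (majority vote)."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(answer_with_test_time_compute("What is 6 * 7?"))
```

The point of the sketch is only that a solver which is unreliable on a single pass becomes far more reliable when it is allowed to "think" multiple times before committing to an answer.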
Comments (14)
CharlesYoung August 30, 2025 at 4:30:33 PM EDT

Incredible to think we could have had 'reasoning' AI 20 years ago 😳. I wonder how the tech world would have changed if those algorithms had been discovered earlier... Would our present be very different? #RetroFuturo

PatrickTaylor August 7, 2025 at 10:00:59 PM EDT

Mind-blowing to think AI reasoning could’ve popped off 20 years ago! 🤯 Noam’s talk makes me wonder what other breakthroughs we’re sleeping on right now.

JackMitchell July 31, 2025 at 10:48:18 PM EDT

Mind-blowing to think AI reasoning could’ve been cracked 20 years ago! 🤯 Makes you wonder what else we’re sitting on, just waiting for the right spark. Noam’s talk sounds like a wake-up call for the AI world.

AlbertScott July 23, 2025 at 4:50:48 AM EDT

Mind-blowing to think AI reasoning could've been cracked decades ago! 🤯 Makes you wonder what else we’re sitting on that’s just a breakthrough away.

LeviKing April 23, 2025 at 11:47:27 AM EDT

Noam Brown's insight is truly amazing! How different would the world be if 'reasoning' AI had existed 20 years ago? 🤯 His talk at GTC made me realize just how much we missed. But rather than thinking it's too late, let's make sure we don't miss the next big thing!

WillGarcía April 23, 2025 at 3:02:14 AM EDT

Noam Brown's insight is truly surprising! How different would the world have been if 'reasoning' AI had existed 20 years ago? 🤯 His talk at GTC reminded me of how much we've overlooked. But it's never too late, right? Let's not miss the next big thing!
