Ex-OpenAI Policy Lead Slams Company for Altering AI Safety Narrative

April 10, 2025

A former OpenAI policy researcher, Miles Brundage, recently took to social media to call out OpenAI for what he sees as an attempt to "rewrite the history" of its approach to deploying potentially risky AI systems. This week, OpenAI released a document detailing its current stance on AI safety and alignment, the process of ensuring AI systems act in predictable and beneficial ways. In it, OpenAI described the development of AGI, or AI systems capable of any task a human can do, as a "continuous path" that involves "iteratively deploying and learning" from AI technologies.

"In a discontinuous world [...] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT-2," OpenAI stated in the document. "We now view the first AGI as just one point along a series of systems of increasing usefulness [...] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system."

However, Brundage argues that the cautious approach taken with GPT-2 was entirely in line with OpenAI's current iterative deployment strategy. "OpenAI's release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI's current philosophy of iterative deployment," Brundage posted on X. "The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution."

Brundage, who joined OpenAI as a research scientist in 2018 and later became the company's head of policy research, focused on the responsible deployment of language generation systems like ChatGPT while on OpenAI's "AGI readiness" team.

GPT-2, announced by OpenAI in 2019, was a precursor to the AI systems behind ChatGPT. It could answer questions, summarize articles, and generate text that was sometimes indistinguishable from human writing. Though GPT-2 might seem basic now, it was groundbreaking at the time. Due to concerns about potential misuse, OpenAI initially withheld the model's source code, instead allowing select news outlets limited access to a demo.

The decision received mixed feedback from the AI community. Some argued that the risks associated with GPT-2 were overstated, and there was no evidence to support OpenAI's concerns about misuse. The AI-focused publication The Gradient even published an open letter urging OpenAI to release the model, citing its technological significance. OpenAI eventually released a partial version of GPT-2 six months after its announcement, followed by the full system several months later.

Brundage believes this was the correct approach. "What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it," he stated on X. "What's the evidence this caution was 'disproportionate' ex ante? Ex post, it prob. would have been OK, but that doesn't mean it was responsible to YOLO it [sic] given info at the time."

Brundage is concerned that OpenAI's document aims to establish a high burden of proof, where concerns are dismissed as alarmist unless there's overwhelming evidence of imminent dangers. He finds this mindset "very dangerous" for advanced AI systems. "If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lop-sided way," Brundage added.
OpenAI has faced criticism in the past for prioritizing "shiny products" over safety and for rushing releases to outpace competitors. Last year, the company disbanded its AGI readiness team, and several AI safety and policy researchers left for rival firms. The competitive landscape has also intensified: Chinese AI lab DeepSeek's R1 model, which is openly available and matches OpenAI's o1 "reasoning" model on key benchmarks, has drawn global attention. OpenAI CEO Sam Altman has acknowledged that DeepSeek has narrowed OpenAI's technological lead, prompting the company to consider accelerating its release schedule. With OpenAI reportedly losing billions of dollars annually, and with losses projected to triple to $14 billion by 2026, a faster release cycle could help its bottom line in the short term but potentially compromise safety in the long term. Experts like Brundage question whether the trade-off is worth it.
Comments (36)
LiamCarter August 5, 2025 at 3:00:59 AM EDT

It's wild how OpenAI's getting called out by their own ex-policy lead! 😮 Miles Brundage isn't holding back, accusing them of spinning their AI safety story. Makes you wonder what's really going on behind the scenes at these big AI companies. Are they prioritizing safety or just playing PR games?

TimothyMitchell April 20, 2025 at 1:18:15 PM EDT

I think Miles Brundage is right to criticize OpenAI. They're trying to twist the AI safety narrative, and that's unacceptable. When they're handling risky AI, demanding transparency is essential. Keep pushing for transparency, Miles! 👍

JackPerez April 16, 2025 at 7:40:06 PM EDT

Miles Brundage's criticism of OpenAI is totally fair! They're trying to rewrite the history of AI safety, but we're not buying it. It's crucial to hold them accountable, especially when they're dealing with risky AI systems. Keep pushing for transparency, Miles! 👍

StevenGonzalez April 16, 2025 at 4:38:43 PM EDT

Watching Miles Brundage criticize OpenAI's new AI safety narrative really feels like watching a drama unfold. He's accusing the company of rewriting history, and I admire how firmly he's taking a stand. It makes you think twice before trusting a big tech company's safety claims. Thanks, Miles! 👏

AlbertWilson April 16, 2025 at 3:07:30 PM EDT

Just listened to Miles Brundage's take on OpenAI's new AI safety narrative and wow, it's like watching a soap opera unfold! He's not holding back, calling out the company for rewriting history. It's intense but also kinda refreshing to see someone stand up like this. Definitely makes you think twice about trusting big tech's safety claims. Keep it real, Miles! 👏

DonaldBrown April 16, 2025 at 3:05:24 AM EDT

I appreciate Miles Brundage's honesty about OpenAI's shift in its AI safety narrative. It's important to hold companies accountable, though I'd like to see more constructive criticism. Still, it's a wake-up call for the industry! 🚨
