Anthropic Quietly Removes Biden-Era AI Safety Commitments from Its Website
Anthropic has quietly removed from its website several voluntary AI safety commitments it made in collaboration with the Biden administration in 2023. The commitments, aimed at promoting safe and trustworthy AI, included sharing information on managing AI risks and conducting research on AI bias and discrimination. AI watchdog group The Midas Project noted that the commitments were taken down from Anthropic's transparency hub last week. Commitments related to reducing AI-generated image-based sexual abuse, however, remain in place.
Anthropic made no announcement about the change and did not immediately respond to requests for comment. The company, along with others including OpenAI, Google, Microsoft, Meta, and Inflection, had agreed in July 2023 to follow voluntary AI safety measures proposed by the Biden administration. These measures included internal and external security testing of AI systems before release, investment in cybersecurity to safeguard sensitive AI data, and development of methods to watermark AI-generated content.
It's worth noting that Anthropic was already implementing many of these practices, and the agreement wasn't legally binding. The Biden administration's goal was to highlight its AI policy priorities before the more comprehensive AI Executive Order was implemented a few months later.
The Trump administration, on the other hand, has signaled a different approach to AI governance. In January, President Trump revoked the AI Executive Order, which had directed the National Institute of Standards and Technology to create guidelines for companies to identify and fix flaws in AI models, including biases. Trump's allies criticized the order for its heavy reporting requirements, which they felt forced companies to reveal their trade secrets.
Soon after revoking the AI Executive Order, Trump issued a new directive for federal agencies to support the development of AI "free from ideological bias" that would promote "human flourishing, economic competitiveness, and national security." Unlike Biden's initiative, Trump's order did not address combating AI discrimination.
The Midas Project highlighted on X that the Biden-era commitments did not indicate that they were temporary or dependent on the political party in power. Following the election in November, several AI companies reaffirmed their commitments.
Anthropic isn't the only company to adjust its public policies since Trump took office. OpenAI recently declared it would champion "intellectual freedom ... no matter how challenging or controversial a topic may be," and ensure its AI does not censor certain viewpoints. OpenAI also removed a page from its website that previously showcased its dedication to diversity, equity, and inclusion (DEI). These programs have faced criticism from the Trump administration, prompting many companies to either scrap or significantly modify their DEI initiatives.
Some of Trump's Silicon Valley AI advisors, including Marc Andreessen, David Sacks, and Elon Musk, have accused companies like Google and OpenAI of censoring AI by restricting their chatbots' responses. However, labs such as OpenAI have denied that their policy adjustments are due to political pressure.
Both OpenAI and Anthropic are either currently holding or actively seeking government contracts.
Several hours after this article was published, Anthropic provided TechCrunch with the following statement:
"We remain committed to the voluntary AI commitments established under the Biden Administration. This progress and specific actions continue to be reflected in [our] transparency center within the content. To prevent further confusion, we will add a section directly citing where our progress aligns."
Updated 11:25 a.m. Pacific: Added a statement from Anthropic.
Comments (84)
GeorgeScott
August 20, 2025 at 7:01:15 AM EDT
Interesting move by Anthropic, quietly dropping Biden's AI safety commitments. Makes me wonder if they're dodging accountability or just rethinking their approach. 🤔 Anyone know what's really going on here?
GaryHill
August 18, 2025 at 4:01:31 PM EDT
It's wild that Anthropic just scrubbed those AI safety promises off their site! 🤔 Makes you wonder if they're dodging accountability or just shifting priorities. What do you guys think—shady move or no big deal?
GaryJones
August 12, 2025 at 9:00:59 PM EDT
It's wild that Anthropic just quietly pulled those AI safety commitments! 🤔 Makes you wonder what's going on behind the scenes—new priorities or just dodging accountability?
JimmyWhite
August 1, 2025 at 9:47:34 AM EDT
Interesting move by Anthropic! 🤔 Wonder why they quietly dropped those Biden AI safety promises. Maybe they’re rethinking their strategy or just avoiding political heat?
WillieHernández
April 19, 2025 at 2:53:51 AM EDT
It's a bit suspicious that Anthropic quietly deleted its AI safety agreement with the Biden administration, isn't it? 🤔 It looks like they're not keeping their promises on safe AI. I wonder what's going on behind the scenes. We need more transparency, otherwise they can't be trusted! 😒
EricRoberts
April 17, 2025 at 9:47:11 PM EDT
I was surprised that Anthropic quietly deleted its AI safety commitments! 😱 I wonder what's happening behind the scenes. Their earlier transparency was good, but now I'm not so sure. I think they need to explain this move if they want to maintain trust. 🤔