New AI Bill Introduced in California by SB 1047 Author
April 10, 2025
California State Senator Scott Wiener, who stirred the pot with last year's contentious AI safety bill SB 1047, is back with another bill that's set to ruffle feathers in Silicon Valley. On Friday, Wiener introduced SB 53, which aims to shield employees at major AI labs from retaliation if they raise concerns about their company's AI systems posing a "critical risk" to society. The bill also proposes the creation of CalCompute, a public cloud computing cluster designed to give researchers and startups the computing power needed to develop AI for the public good.

Wiener's previous bill, SB 1047, sparked an intense national debate over how to manage large AI systems that could cause catastrophic harm, such as significant loss of life or cyberattacks causing more than $500 million in damages. Governor Gavin Newsom ultimately vetoed the bill in September, arguing it wasn't the right approach.

The fallout from SB 1047 was fierce. Silicon Valley bigwigs argued that the bill, driven by what they saw as unfounded fears of AI-induced apocalyptic scenarios, would undermine America's edge in the global AI race. Wiener hit back, accusing some venture capitalists of orchestrating a "propaganda campaign" against his bill. Notably, experts called Y Combinator's claim that SB 1047 could jail startup founders misleading.

SB 53 looks like a strategic pivot, focusing on the less contentious elements of SB 1047: whistleblower protections and the establishment of CalCompute. Wiener isn't backing away from existential AI risks, though. SB 53 explicitly protects whistleblowers who believe their employers are developing AI systems that could pose a "critical risk," defined as a foreseeable or material risk that could result in the death or serious injury of more than 100 people, or more than $1 billion in damages. The bill targets frontier AI model developers like OpenAI, Anthropic, and xAI, prohibiting them from retaliating against employees who report concerns to California's Attorney General, federal authorities, or other employees. These developers would also have to report back to whistleblowers on the internal processes flagged as concerning.

As for CalCompute, SB 53 would set up a group, including University of California representatives and other researchers, to plan its development. That group would decide the size of the cluster and who gets access to it.

SB 53 is still early in the legislative process and needs approval from California's legislative bodies before reaching Governor Newsom's desk. Silicon Valley's response to the new bill will be closely watched. Passing AI safety bills in 2025 may be trickier than in 2024, when California passed 18 AI-related bills. The AI safety movement also seems to be losing steam, especially after Vice President J.D. Vance's comments at the Paris AI Action Summit, where he emphasized AI innovation over safety. While CalCompute could be seen as promoting AI progress, the future of legislative efforts focused on existential AI risks remains uncertain.
Comments (28)
AndrewGarcía September 22, 2025 at 10:30:31 PM EDT

Another AI law in California? It seems politicians love regulating an industry they barely understand 😅 Will they get it right this time, or just create more confusion for startups? #AImasNaoTanto

ThomasYoung September 17, 2025 at 12:30:51 PM EDT

Another AI law in California? 🙄 It seems politicians love regulating technology they don't even understand. Will they consult engineers before writing the rules this time? At least the protection against retaliation makes sense, but I doubt it will work in practice...

JerryLee July 31, 2025 at 10:48:18 PM EDT

This new AI bill sounds like a game-changer! Protecting whistleblowers in AI labs is crucial—those folks need a voice without fear of getting canned. But I bet Silicon Valley's big shots are sweating bullets over this one. 😅 Curious to see how it plays out!

AlbertThomas April 20, 2025 at 3:20:43 PM EDT

Senator Wiener's new AI bill sounds promising, but it's stirring up quite a bit of debate. Protecting AI lab employees from retaliation is important, but some worry it could slow down innovation. It's a delicate balance, but I'm glad someone is addressing these issues. We'll have to see how it plays out! 🤔

RalphHill April 18, 2025 at 11:46:59 PM EDT

Senator Wiener's new AI bill seems promising, but it's causing quite a bit of debate. Protecting AI lab employees from retaliation is important, but some fear it could slow down innovation. It's a delicate balance, but I'm glad someone is addressing these issues. Let's see how this unfolds! 🤔

ScottEvans April 18, 2025 at 11:48:44 AM EDT

The new AI bill by Senator Wiener sounds promising, but it's stirring up quite the debate. Protecting AI lab employees from retaliation is important, but some worry it might slow down innovation. It's a tricky balance, but I'm glad someone's addressing these issues. Let's see how it plays out! 🤔
