Fei-Fei Li's Group Urges Preemptive AI Safety Legislation


April 10, 2025

A new report from a California policy group co-led by AI pioneer Fei-Fei Li argues that lawmakers should account for AI risks that have not yet been observed in the real world when crafting AI regulatory policy.

The 41-page interim report, released on Tuesday, comes from the Joint California Policy Working Group on AI Frontier Models, which Governor Gavin Newsom established after vetoing California's controversial AI safety bill, SB 1047. Newsom judged that SB 1047 missed the mark, but he acknowledged the need for a more thorough assessment of AI risks to inform legislators.

In the report, Li and her co-authors, Jennifer Chayes, dean of UC Berkeley's College of Computing, Data Science, and Society, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, argue for laws that would increase transparency into what frontier AI labs such as OpenAI are building. Before publication, the report was reviewed by industry stakeholders from across the ideological spectrum, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio and critics of SB 1047 such as Databricks co-founder Ion Stoica.

The report argues that the novel risks posed by AI systems may warrant laws requiring AI model developers to publicly disclose their safety tests, data-acquisition practices, and security measures. It also calls for stronger standards around third-party evaluations of those tests and of corporate policies, along with expanded whistleblower protections for AI company employees and contractors.

Li and her co-authors note that the evidence remains inconclusive on AI's potential to facilitate cyberattacks, create biological weapons, or pose other "extreme" threats. They argue, however, that AI policy should address not only present-day risks but also the consequences that could follow if adequate safeguards are not in place. "We don't need to see a nuclear weapon go off to know it could cause a lot of harm," the report states. It continues: "If those who talk about the worst risks are right — and we're not sure if they will be — then not doing anything about frontier AI right now could be really costly."

To make AI model development more transparent, the report recommends a "trust but verify" strategy: AI model developers and their employees should have channels for reporting on matters of public concern, such as internal safety testing, and should also be required to submit their testing claims for third-party verification.

While the report, which will be finalized in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policy debate. Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report is a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California state senator Scott Wiener, who introduced SB 1047 last year; in a press release, Wiener said the report builds on the "urgent talks about AI governance we started in the legislature in 2024."

The report appears to align with elements of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of their safety tests. Taken as a whole, it amounts to a much-needed win for AI safety advocates, whose agenda has lost ground over the past year.
Comments (37)
HarrySmith August 27, 2025 at 5:01:36 PM EDT

This report sounds like a bold move! 😎 Fei-Fei Li’s team pushing for proactive AI laws is smart, but I wonder if lawmakers can keep up with tech’s pace. Preemptive rules could spark innovation or just create red tape. What do you all think?

PaulHill August 25, 2025 at 11:01:14 AM EDT

Fei-Fei Li’s group is onto something big here! Proactive AI safety laws sound smart, but I wonder if lawmakers can keep up with tech’s pace. 🤔 Risky moves need bold rules!

MichaelDavis April 17, 2025 at 8:14:46 AM EDT

Fei-Fei Li's group suggesting AI safety laws before we've even seen the risks? Seems proactive, but also kind of scary! 🤔 Like preparing for a storm that may never come. Still, better safe than sorry, right? Maybe they should focus on current problems too? Just a thought! 😅

JasonRoberts April 16, 2025 at 12:17:56 PM EDT

Fei-Fei Li's group proposing AI safety laws before we see the risks? Sounds proactive but also a little scary! 🤔 Like preparing for a storm that might never arrive. Still, better to prevent than regret, right? Maybe they should also focus on current problems? Just a thought! 😅

AnthonyJohnson April 16, 2025 at 12:13:53 AM EDT

Fei-Fei Li's group is really pushing for proactive AI safety laws, which is great. But man, that 41-page report? A bit hard to digest, don't you think? Still, it's important to consider unseen AI risks. Maybe they could have made it a little more concise? 🤓📚

WillieRodriguez April 15, 2025 at 7:01:01 PM EDT

Fei-Fei Li's group is really advocating for proactive AI safety laws, which is great. But man, that 41-page report? A lot to digest, don't you think? Still, it's important to consider unseen AI risks. Maybe they could have kept it a bit shorter? 🤓📚
