Fei-Fei Li's Group Urges Preemptive AI Safety Legislation

A new report from a California policy group co-led by AI pioneer Fei-Fei Li argues that lawmakers should account for AI risks that have not yet been observed in the real world when crafting AI regulatory policy. The 41-page interim report, released Tuesday, comes from the Joint California Policy Working Group on AI Frontier Models, which Governor Gavin Newsom convened after vetoing California's controversial AI safety bill, SB 1047. Newsom judged that SB 1047 missed the mark, but he acknowledged the need for a more thorough assessment of AI risks to inform legislators.
In the report, Li and her co-authors, Jennifer Chayes, dean of UC Berkeley's College of Computing, Data Science, and Society, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, push for laws that would bring more transparency to what frontier AI labs such as OpenAI are building. Before its release, the report was reviewed by industry stakeholders across the ideological spectrum, from staunch AI safety advocates such as Turing Award winner Yoshua Bengio to opponents of SB 1047 such as Databricks co-founder Ion Stoica.
The report argues that the novel risks posed by AI systems may require laws compelling AI model developers to publicly disclose their safety tests, data-acquisition practices, and security measures. It also calls for stronger standards around third-party evaluations of those tests and of company policies, along with expanded whistleblower protections for AI company employees and contractors.
Li and her co-authors note that the evidence for AI's potential to aid cyberattacks, create biological weapons, or pose other "extreme" threats remains inconclusive. They argue, however, that AI policy should not only address present-day harms but also anticipate future consequences that could arise without adequate safeguards.
The report draws an analogy to nuclear weapons: "We don't need to see a nuclear weapon go off to know it could cause a lot of harm." It adds, "If those who talk about the worst risks are right — and we're not sure if they will be — then not doing anything about frontier AI right now could be really costly."
To increase transparency in AI model development, the report recommends a "trust but verify" approach: AI model developers and their employees should have avenues to report on matters of public concern, such as internal safety testing, while also being required to submit their testing claims to third-party verification.
While the report, which is set to be finalized in June 2025, does not endorse any specific legislation, it has been well received by experts on both sides of the AI policy debate.
Dean Ball, an AI-focused research fellow at George Mason University and a critic of SB 1047, said in a post on X that the report is a promising step for California's AI safety regulation. It also marks a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. In a press release, Wiener said the report adds to the "urgent talks about AI governance we started in the legislature in 2024."
The report appears to align with elements of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of their safety tests. Taken more broadly, it represents a much-needed win for AI safety advocates, whose agenda has lost ground over the past year.
Comments (35)
MichaelDavis
April 17, 2025 at 8:14:46 AM EDT
Fei-Fei Li's group suggesting AI safety laws before we even see the risks? Sounds proactive, but also kind of scary! 🤔 Like preparing for a storm that may never come. Still, better safe than sorry, right? Maybe they should focus on current problems too? Just a thought! 😅
JasonRoberts
April 16, 2025 at 12:17:56 PM EDT
Fei-Fei Li's group proposing AI safety laws before we see the risks? Sounds proactive but also a bit scary! 🤔 Like preparing for a storm that may never arrive. Still, better safe than sorry, right? Maybe they should also focus on current problems? Just a thought! 😅
AnthonyJohnson
April 16, 2025 at 12:13:53 AM EDT
Fei-Fei Li's group is really pushing for proactive AI safety laws, which is great. But man, that 41-page report? A bit hard to digest, don't you think? Still, it's important to consider unseen AI risks. Maybe they could have made it a bit more concise? 🤓📚
WillieRodriguez
April 15, 2025 at 7:01:01 PM EDT
Fei-Fei Li's group is really advocating for proactive AI safety laws, which is great. But man, that 41-page report? A lot to digest, don't you think? Still, it's important to consider unseen AI risks. Maybe they could have kept it a bit shorter? 🤓📚
RalphWalker
April 15, 2025 at 1:16:25 PM EDT
Fei-Fei Li's group is really pushing for proactive AI safety laws, which is great. But man, that 41-page report? A bit much to digest, don't you think? Still, it's important to consider unseen AI risks. Maybe they could've made it a bit more concise? 🤓📚
EricRoberts
April 15, 2025 at 1:46:36 AM EDT
Fei-Fei Li's group's AI safety report is eye-opening, but a bit hard to read. Thinking about future risks is important, but I wish it had been made more readable. Still, it gives you something to think about! 🤔