CoSAI Launched: Founding Members Unite for Secure AI

April 10, 2025

AI's rapid growth demands a robust security framework and standards that can keep up. That's why we introduced the Secure AI Framework (SAIF) last year, knowing it was just the beginning. Making any industry framework work takes teamwork and a shared place to collaborate. That's where we are today.

At the Aspen Security Forum, we're excited to announce the launch of the Coalition for Secure AI (CoSAI) alongside our industry partners. We've spent the past year bringing this coalition together to tackle the unique security challenges AI brings, both now and in the future.

CoSAI's founding members include Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz. The coalition is hosted under OASIS Open, the international standards and open-source organization.

Introducing CoSAI’s Inaugural Workstreams

As everyone from individual developers to large enterprises adopts common security standards and best practices, CoSAI will support this collective effort in AI security. Today, we're announcing the coalition's first three focus areas, which will be developed hand-in-hand with industry and academia:

  1. Software Supply Chain Security for AI systems: Google has been working to extend SLSA Provenance to AI models, making it possible to tell whether AI software is trustworthy by tracing how it was created and handled through the supply chain. This workstream aims to strengthen AI security with guidance on verifying provenance, managing risks from third-party models, and assessing full AI application provenance, building on the existing SSDF and SLSA security principles for both AI and traditional software (a minimal example of this kind of provenance check follows this list).
  2. Preparing defenders for a changing cybersecurity landscape: Security practitioners face daily challenges in governing AI. This workstream will create a defender's framework to help them identify where to invest and which mitigation techniques to apply to address the security impact of AI. The framework will evolve alongside offensive cybersecurity advances in AI models.
  3. AI security governance: Governing AI security requires new resources and an understanding of what makes AI security unique. CoSAI will develop a list of risks and controls, a checklist, and a scorecard to help practitioners assess their readiness and to manage, monitor, and report on the security of their AI products.
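
As a rough illustration of the provenance checks the first workstream points to, here is a minimal Python sketch that confirms a model artifact's digest appears among the subjects of an SLSA/in-toto-style provenance statement. The file names (model.safetensors, provenance.json) and helper functions are assumptions for the example rather than CoSAI or SLSA tooling, and a production pipeline would also verify the statement's signature before trusting its contents.

```python
import hashlib
import json


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def digest_matches_provenance(artifact_path: str, provenance_path: str) -> bool:
    """Return True if the artifact's SHA-256 digest appears among the
    subjects of an in-toto/SLSA-style provenance statement, where each
    subject carries a name and a digest map."""
    artifact_digest = sha256_of(artifact_path)
    with open(provenance_path) as f:
        statement = json.load(f)
    for subject in statement.get("subject", []):
        if subject.get("digest", {}).get("sha256") == artifact_digest:
            return True
    return False


if __name__ == "__main__":
    # Hypothetical file names; a real pipeline would also verify the
    # statement's signature before trusting its contents.
    if digest_matches_provenance("model.safetensors", "provenance.json"):
        print("Model digest matches a provenance subject.")
    else:
        print("No matching subject found; treat the model as unverified.")
```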

Plus, CoSAI will team up with groups like the Frontier Model Forum, Partnership on AI, the Open Source Security Foundation, and MLCommons to push forward responsible AI.

What’s Next

As AI keeps moving forward, we're all in on making sure our risk management strategies keep pace. We've seen great support from the industry over the past year to make AI safe and secure. Even better, we're seeing real action from developers, experts, and companies of all sizes to help organizations use AI safely.

AI developers need—and end users deserve—a security framework that's up to the challenge and responsibly seizes the opportunities ahead. CoSAI is the next big step in this journey, and we'll have more updates coming soon. To find out how you can support CoSAI, check out coalitionforsecureai.org. In the meantime, head over to our Secure AI Framework page to learn more about Google's work on AI security.

Comments (32)
BillyAdams August 13, 2025 at 9:00:59 PM EDT

This CoSAI initiative sounds promising! 😊 It's cool to see big players teaming up for AI security, but I wonder how they'll balance innovation with strict standards. Could be a game-changer if done right!

RobertMartinez July 23, 2025 at 12:59:47 AM EDT

Wow, CoSAI sounds like a game-changer! Finally, some serious teamwork to lock down AI security. I’m curious how SAIF will evolve with all these big players collaborating. 🛡️

GaryWilson April 24, 2025 at 6:18:04 AM EDT

The launch of CoSAI is a big step forward for AI security. It's great to see so many founding members on board. The framework looks solid, but I'm still not sure how it all works. Clearer guides would be great! 🤔

AnthonyJohnson April 24, 2025 at 2:42:02 AM EDT

The launch of CoSAI is a great step forward for AI security. It's wonderful to see so many founding members united. The framework seems solid, but I'm still a bit confused about how it all works. Clearer guides would be helpful! 🤔

FredLewis April 23, 2025 at 12:23:37 PM EDT

CoSAI's launch is a big step forward for AI security. It's great to see so many founding members coming together. The framework seems solid, but I'm still a bit confused about how it all works. More clear guides would be helpful! 🤔

AlbertThomas April 16, 2025 at 2:24:08 PM EDT

CoSAI's focus on secure AI is really important, but it seems a bit complex for small teams to implement. Still, it's great to see the community working on it together! I'd love it if it could be made a bit simpler 🤔
