OpenAI just made its first cybersecurity investment

Generative AI has significantly broadened the arsenal available to hackers and malicious actors. From crafting deepfake videos of CEOs to generating counterfeit receipts, the potential for misuse is vast. OpenAI, a leading player in the generative AI space, is acutely aware of these risks. In a strategic move, OpenAI has made its first investment in a cybersecurity startup, backing New York-based Adaptive Security in a $43 million Series A round that it co-led with Andreessen Horowitz.
Adaptive Security takes a proactive approach to cybersecurity by simulating AI-generated attacks to train employees in identifying and thwarting these threats. Imagine receiving a call from someone sounding exactly like your CTO, asking for a verification code. That voice isn't your CTO's but a sophisticated simulation created by Adaptive Security. The company's platform goes beyond just phone calls; it also simulates texts and emails, assessing which areas of a business are most vulnerable and training staff to recognize and mitigate these risks.
The focus of Adaptive Security is on "social engineering" hacks, where attackers trick employees into taking unauthorized actions, such as clicking on malicious links. These seemingly simple tactics can lead to significant losses, as demonstrated by the case of Axie Infinity, which lost over $600 million due to a fake job offer scam targeting one of its developers in 2022.
According to co-founder and CEO Brian Long, AI has made these social engineering attacks easier to execute. Since its launch in 2023, Adaptive Security has quickly grown its customer base to over 100, leveraging positive feedback to secure OpenAI's investment. Long, a seasoned entrepreneur with successful exits from TapCommerce (acquired by Twitter for over $100 million in 2014) and Attentive (valued at over $10 billion in 2021), plans to use the new funding to hire engineers and enhance the platform amid the ongoing AI "arms race" against cybercriminals.
Adaptive Security is not alone in tackling AI-driven threats. Other startups like Cyberhaven, which recently raised $100 million at a $1 billion valuation to prevent the misuse of sensitive information in tools like ChatGPT, and Snyk, which attributes part of its over $300 million ARR to the rise of insecure AI-generated code, are also making strides in this space. Additionally, deepfake detection firm GetReal secured $17.5 million last month to combat these advanced threats.
As AI threats evolve, Long offers a straightforward piece of advice for employees concerned about voice cloning: "Delete your voicemail." This simple action can help protect against one of the many ways hackers can exploit AI technology.
Comments (38)
EricMiller
August 21, 2025 at 9:01:17 PM EDT
Wow, OpenAI's diving into cybersecurity? Smart move! With AI making deepfakes and fake receipts so easy, it’s about time they tackled the dark side of their own tech. Hope they’re ready for the hacker chaos! 😎
JohnHernández
August 9, 2025 at 9:00:59 PM EDT
Wow, OpenAI's diving into cybersecurity? Smart move! Generative AI's a double-edged sword—deepfakes and fake receipts are no joke. Curious to see how they tackle the dark side of AI. 🕵️♂️
DavidCarter
July 27, 2025 at 9:20:21 PM EDT
Wow, OpenAI jumping into cybersecurity feels like a superhero gearing up to fight their own villainous creations! 😎 Curious how their investment will tackle those sneaky deepfake and scam issues.
CarlTaylor
April 23, 2025 at 1:11:53 PM EDT
OpenAI investing in cybersecurity was a smart move! It was about time someone dealt with the dark side of AI. I feel safer knowing they're on it, but I wish they'd share more about what they're doing. Can't wait to see how this unfolds! 😏
CharlesThomas
April 22, 2025 at 6:36:06 PM EDT
I think OpenAI investing in cybersecurity was a smart decision! Action to prevent the misuse of AI is needed, and it's reassuring. I'd just like to know more about what exactly they're doing. Looking forward to seeing how it turns out! 😊
StevenHill
April 21, 2025 at 10:43:57 PM EDT
OpenAI investing in cybersecurity is a really good choice! The dark side of AI needs to be dealt with. It makes me feel safer. I just wish they'd tell us more about what they're actually doing. Excited to see how it goes! 😎