Top 100 Announcements from I/O 2024 Revealed

April 10, 2025

Wow, I/O 2024 was packed with exciting updates! Whether you're into the latest Gemini app enhancements, excited about new developer tools, or itching to play with the newest generative AI features, there was something for everyone. Don't just take our word for it: check out the 100 things we announced over the last couple of days.

AI moments and model momentum

  1. We introduced Gemini 1.5 Flash, a lighter model designed for speed and efficiency. It's the fastest Gemini model available through the API.
  2. We've made significant improvements to 1.5 Pro, our top model for general performance across various tasks.
  3. Both 1.5 Pro and 1.5 Flash are now in public preview with a 1 million token context window on Google AI Studio and Vertex AI.
  4. 1.5 Pro is also available with a 2 million token context window for developers via a waitlist on Google AI Studio and Vertex AI.
(Image: Context lengths of leading foundation models compared with Gemini 1.5's 2 million token capability.)
  5. We shared Project Astra, our vision for the future of AI assistants.
  6. We announced Trillium, the sixth generation of our custom AI accelerator, the Tensor Processing Unit (TPU). It's the most performant TPU yet.
  7. Compared to TPU v5e, Trillium TPUs offer a 4.7x increase in peak compute performance per chip.
  8. They're also our most sustainable generation yet: Trillium TPUs are over 67% more energy-efficient than TPU v5e.
  9. We demoed an early prototype of Audio Overviews for NotebookLM, which uses uploaded materials to create personalized verbal discussions.
  10. We announced that Grounding with Google Search, which connects the Gemini model with world knowledge and up-to-date internet information, is now generally available on Vertex AI.
  11. We added audio understanding to the Gemini API and AI Studio, allowing Gemini 1.5 Pro to process both images and audio for videos uploaded in AI Studio.
  12. Starting with Pixel, applications using Gemini Nano with Multimodality will understand the world like people do—through text, sight, sound, and spoken language.
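For a rough sense of scale for the 1 million and 2 million token windows, here is a back-of-envelope sketch. The per-page token figure is derived from the pairing, quoted later in this article, of the 1 million token window with 1,500-page PDFs; it is an illustration, not an official tokenizer ratio:

```python
# Back-of-envelope scale for the Gemini 1.5 context windows.
# The Gemini app section below pairs the 1 million token window with
# 1,500-page PDFs, implying roughly 667 tokens per page. That ratio is
# derived here for illustration only, not an official tokenizer figure.

TOKENS_PER_PAGE = 1_000_000 // 1_500  # ≈ 666 tokens per page (derived)

def pages_that_fit(context_window_tokens: int) -> int:
    """Approximate number of prose pages a context window can hold."""
    return context_window_tokens // TOKENS_PER_PAGE

for window in (1_000_000, 2_000_000):
    print(f"{window:>9,} tokens ≈ {pages_that_fit(window):,} pages")
```

Under that assumption, the 2 million token window works out to roughly 3,000 pages of prose in a single prompt.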

Generative media models and Labs experiments

  13. We announced Imagen 3, our highest-quality image generation model to date.
  14. Imagen 3 understands natural language and the intent behind your prompts, incorporating small details from longer prompts to generate incredibly detailed, photorealistic images with fewer visual artifacts than our previous models.
  15. Imagen 3 is also our best model yet for rendering text, a challenge for image generation models.
  16. We rolled out Imagen 3 to Trusted Testers in ImageFX, and you can sign up for the waitlist.
  17. Imagen 3 will be available on Vertex AI this summer.
  18. We announced Veo, our most capable video generation model yet, which can generate high-quality 1080p resolution videos that can go beyond a minute in various cinematic and visual styles.
  19. We'll bring some of Veo's capabilities to YouTube Shorts and other products in the future.
  20. We showcased what Veo can do for artists by collaborating with filmmakers, including Donald Glover, who used Veo for a film project.
  21. We highlighted Music AI Sandbox, a suite of music AI tools that let people create new instrumental sections, transfer styles between tracks, and more. You can find new songs from collaborations with Wyclef Jean and Marc Rebillet on YouTube.
  22. Check out Infinite Wonderland, where artists and Google creatives fine-tuned an AI model to endlessly reimagine the visual world of "Alice's Adventures in Wonderland." Readers can generate infinite images for each of the 1,200 sentences in the book based on each artist's style.
  23. We announced VideoFX, our newest experimental tool that uses Google DeepMind's generative video model, Veo, to turn ideas into video clips.
  24. VideoFX also includes a Storyboard mode that lets you iterate scene by scene and add music to your final video.
  25. We added more editorial controls to ImageFX—a top community request—so you can add, remove, or change elements by simply brushing over your image.
  26. ImageFX will use Imagen 3 to unlock more photorealism with richer details, fewer visual artifacts, and more accurate text rendering.
  27. MusicFX now has a "DJ Mode" that helps you mix beats by combining genres and instruments, using generative AI to bring music stories to life.
  28. As of this week, ImageFX and MusicFX are available in over 100 countries through Labs.

New ways to get more done with the Gemini app

  29. We're bringing Gemini 1.5 Pro to Gemini Advanced subscribers, giving them a 1 million token context window and the ability to process 1,500-page PDFs.
  30. This also means Gemini Advanced now has the largest context window of any commercially available chatbot.
  31. We added the ability to upload files via Google Drive or directly from your device into Gemini Advanced.
  32. Soon, Gemini Advanced will help you analyze data to uncover insights and build charts from uploaded data files like spreadsheets.
  33. Great news for travelers: Gemini Advanced now has a planning feature that goes beyond a list of suggested activities and creates a custom itinerary just for you.
  34. Then there's Gemini Live for Gemini Advanced subscribers, a new, mobile-first conversational experience that uses state-of-the-art speech technology for more natural, intuitive spoken conversations with Gemini.
  35. Gemini Live lets you choose from 10 natural-sounding voices, and you can speak at your own pace or interrupt mid-response with clarifying questions.
  36. Gemini in Google Messages now lets you chat with Gemini in the same app where you message your friends.
  37. Gemini Advanced subscribers will soon be able to create Gems, customized versions of Gemini designed for whatever you dream up. Just describe what you want your Gem to do and how you want it to respond, and Gemini will create a Gem for your specific needs.
  38. Look out for more Google tools being connected to Gemini, including Google Calendar, Tasks, Keep, and Clock.

Updates that make Search do the work for you

  39. We're using a new Gemini model customized for Google Search to bring together Gemini's advanced capabilities—like multi-step reasoning, planning, and multimodality—with our best-in-class Search systems.
  40. AI Overviews in Search are rolling out to everyone in the U.S. this week, with more countries coming soon.
  41. Multi-step reasoning capabilities are coming soon to AI Overviews in Search Labs for English queries in the U.S., so you can ask complex questions like "find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill."
  42. Soon, you'll be able to adjust your AI Overview with options to simplify the language or break it down in more detail, especially when you're new to a topic or trying to get to the heart of a subject.
  43. Search is also getting new planning capabilities. For example, meal and trip planning with customization will launch later this year in Search Labs, followed by more categories like parties and fitness.
  44. Thanks to advancements in video understanding, you can now ask questions with a video. Search can take a complex visual question, figure it out for you, then explain next steps and offer resources with an AI Overview.
  45. Generative AI in Search will soon create an AI-organized results page when you're searching for fresh ideas. These AI-organized search result pages will be available for categories like dining, recipes, movies, music, books, hotels, shopping, and more.

Help from Gemini models in Workspace and Photos

  46. Gemini 1.5 Pro is now available in the side panel in Gmail, Docs, Drive, Slides, and Sheets via Workspace Labs, and it's rolling out to our Gemini for Workspace customers and Google One AI Premium subscribers next month.
  47. You'll be able to use Gmail's side panel to summarize emails and get the most important details and action items.
  48. In addition to summaries, Gmail's mobile app will soon use Gemini for two other new features: Contextual Smart Reply and Gmail Q&A.
  49. In the coming weeks, Help me write in Gmail and Docs will support Spanish and Portuguese.
  50. Later this year in Labs, you can ask Gemini to automatically organize email attachments in Drive, generate a sheet with the data, and then analyze it with Data Q&A.
  51. A new experimental feature in Google Photos called Ask Photos makes it easier to look for specific memories or recall information in your gallery. The feature uses Gemini models and is rolling out over the coming months.
  52. You can also use Ask Photos to create a highlight gallery from a recent trip, and it will even write personalized captions for you to share on social media.

Android advancements

  53. Starting with Pixel later this year, Gemini Nano—Android's built-in, on-device foundation model—will have multimodal capabilities. Your Pixel phone will understand more information in context, like sights, sounds, and spoken language.
  54. Talkback, an accessibility feature for Android devices that helps blind and low-vision people use touch and spoken feedback, is being improved thanks to Gemini Nano with Multimodality.
  55. A new, opt-in scam protection feature will use Gemini Nano's on-device AI to help detect scam phone calls in a privacy-preserving way. More details coming later this year.
  56. We announced that Circle to Search is currently available on more than 100 million Android devices, and we're on track to double that by the end of the year.
  57. Soon, you'll be able to use Gemini on Android to create and drag and drop generated images into Gmail, Google Messages, and more, or ask about the YouTube video you're viewing.
  58. If you have Gemini Advanced, you'll also have the option to "Ask this PDF" to get an answer quickly without having to scroll through multiple pages.
  59. Students can now use Circle to Search for homework help directly from select Android phones and tablets. This feature is powered by LearnLM—our new family of models based on Gemini, fine-tuned for learning.
  60. Later this year, Circle to Search will be able to solve even more complex problems involving symbolic formulas, diagrams, graphs, and more.
  61. Oh, and we introduced the second beta of Android 15.
  62. Theft Detection Lock uses powerful Google AI to sense if your device has been snatched and quickly lock down your information on your phone.
  63. Private space is coming to Android 15, which lets you choose apps to keep secure inside a separate space that requires an extra layer of authentication to open.
  64. If a separate lock screen isn't enough for your private spaces, you can hide the existence of it altogether.
  65. Later this year, Google Play Protect will use on-device AI to help spot apps that attempt to hide their actions to engage in fraud or phishing.
  66. We're bringing an updated messaging experience to Japan with RCS in Google Messages.
  67. Soon in the U.S., you'll be able to create a digital version of passes that just contain text. Simply take a photo of a pass (like an insurance card or event ticket) and easily add it to your Google Wallet for quick access.
  68. We showed off how augmented reality content will be available directly in Google Maps, laying the foundation for an extended reality (XR) platform we're building in collaboration with Samsung and Qualcomm for the Android ecosystem.
  69. You can now catch up on episodes of your favorite shows on Max and Peacock or start a game of Angry Birds on select cars with Google built-in.
  70. We are also bringing Google Cast to cars with Android Automotive OS, starting with Rivian in the coming months, so you can easily cast video content from your phone to the car.
  71. Later this year, battery life optimizations are coming to watches with Wear OS 5. For example, running an outdoor marathon will consume up to 20% less power compared to watches with Wear OS 4.
  72. Wear OS 5 will also give fitness apps the option to support more data types like ground contact time, stride length, and vertical oscillation.
  73. It's now easier to pick what to watch on Google TV and other Android TV OS devices with personalized AI-generated descriptions, thanks to our Gemini model.
  74. These AI-generated descriptions will also fill in missing or untranslated descriptions for movies and shows.
  75. Here's a fun stat: Since launch, people have made over 1 billion Fast Pair connections.
  76. Later this month, you'll be able to use Fast Pair to connect and find items like your keys, wallet, or luggage in the Find My Device app with Bluetooth tracker tags from Chipolo and PebbleBee (with more partners to come).

Developments for developers

  77. You can join the Gemini API Developer Competition and be part of discovering the most helpful and groundbreaking AI apps. The prize? An electrically retrofitted custom 1981 DeLorean.
  78. We introduced PaliGemma, our first vision-language open model optimized for visual Q&A and image captioning.
  79. We previewed the next version of Gemma, Gemma 2. It's built on a whole new architecture and will include a larger 27B parameter instance which outperforms models twice its size and runs on a single TPU host.
  80. Gemini models are now available to help developers be more productive in Android Studio, IDX, Firebase, Colab, VSCode, Cloud, and IntelliJ.
  81. Gemini 1.5 Pro is coming to Android Studio later this year. Equipped with a large context window, this model leads to higher-quality responses and unlocks use cases like multimodal input.
  82. Google AI Studio is now available in more than 200 countries, including the U.K. and E.U.
  83. Parallel function calling and video frame extraction are now supported by the Gemini API.
  84. With the new context caching feature in the Gemini API, coming next month, you'll be able to streamline workflows for large prompts by caching frequently used context files at lower costs.
  85. Android now provides first-class support for Kotlin multiplatform to help developers share their apps' business logic across platforms.
  86. Resizable Emulator, Compose UI check Mode, and Android Device Streaming powered by Firebase are new products that can all help developers build for all form factors.
  87. Starting with Chrome 126, Gemini Nano will be built into the Chrome Desktop client.
  88. View Transitions API for multi-page apps, a much-requested feature, is now available so developers can easily build smooth, fluid app-like navigation regardless of site architecture.
  89. Project IDX, our new integrated developer experience for full-stack, multiplatform apps, is now open for everyone to try.
  90. Firebase released Firebase Genkit in beta, which will make it even easier for developers to build generative AI experiences into their apps.
  91. Firebase also released Firebase Data Connect, a new way for developers to use SQL with Firebase (via Google Cloud SQL). This will not only bring SQL workflows to Firebase but also reduce the amount of app code developers need to write.
  92. We took developers under the hood in a deep-dive conversation about the technology and research powering our AI with James Manyika, Jeff Dean, and Koray Kavukcuoglu.
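Parallel function calling means a single model turn can request several tool invocations at once; the client runs them, then sends all the results back together. The sketch below shows only the client-side dispatch pattern under that assumption — the call format and tool names are illustrative stand-ins, not the actual Gemini API response types:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical local tools the model is allowed to call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def get_time(city: str) -> str:
    return f"09:00 in {city}"

TOOLS = {"get_weather": get_weather, "get_time": get_time}

def dispatch(function_calls):
    """Run every tool call requested in one model turn concurrently,
    returning {call name: result} to send back to the model."""
    with ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(TOOLS[name], **args)
            for name, args in function_calls
        }
        return {name: f.result() for name, f in futures.items()}

# A model turn that requested two tool calls at once (illustrative):
calls = [("get_weather", {"city": "Paris"}), ("get_time", {"city": "Paris"})]
print(dispatch(calls))
# → {'get_weather': 'Sunny in Paris', 'get_time': '09:00 in Paris'}
```

Running independent tool calls concurrently rather than one per round trip is the main practical win of this feature.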

Responsible AI progress

  93. We're enhancing red teaming—a proven practice where we proactively test our own systems for weakness and try to break them—through a new technique we're calling "AI-Assisted Red Teaming."
  94. We're also expanding SynthID to two new modalities: text and video.
  95. SynthID text watermarking will also be open-sourced in the coming months through our updated Responsible Generative AI toolkit.
  96. We announced LearnLM, a new family of models based on Gemini and fine-tuned for learning. LearnLM is already powering a range of features across our products, including Gemini, Search, YouTube, and Google Classroom.
  97. We'll be partnering with experts from institutions like Columbia Teachers College, Arizona State University, NYU Tisch, and Khan Academy to refine and expand LearnLM beyond our products.
  98. We also worked with MIT RAISE to develop an online course that equips educators to effectively use generative AI in the classroom.
  99. We've built a new experimental tool called Illuminate to make knowledge more accessible and digestible.
  100. Illuminate can generate a conversation consisting of two AI-generated voices, providing an overview of the key insights from research papers. You can sign up to try it today at labs.google.
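For a rough intuition of how a text watermark can work without visibly changing output quality: during generation the sampler slightly favors a pseudo-random "green" subset of the vocabulary, seeded by the preceding context, and a detector later checks whether green words appear more often than chance. The toy scheme below illustrates only that general idea; it is not SynthID's actual algorithm, which differs in detail:

```python
import hashlib

def greenlist(prev_word: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically mark a fraction of the vocabulary 'green',
    seeded by the previous word. A generator that prefers green words
    leaves a statistical signature a detector can test for."""
    scored = sorted(
        vocab,
        key=lambda w: hashlib.sha256((prev_word + w).encode()).hexdigest(),
    )
    return set(scored[: int(len(scored) * fraction)])

def green_ratio(words: list[str], vocab: list[str]) -> float:
    """Detector: fraction of words that fall in the greenlist seeded by
    their predecessor. Watermarked text scores well above the baseline
    fraction; ordinary text hovers near it."""
    hits = sum(
        1 for prev, cur in zip(words, words[1:])
        if cur in greenlist(prev, vocab)
    )
    return hits / max(len(words) - 1, 1)

vocab = ["sun", "rain", "wind", "snow", "fog", "hail", "sleet", "mist"]

# Toy "generator" that always picks a green word given the previous one,
# so the detector score is maximal for this text.
text = ["sun"]
for _ in range(20):
    text.append(sorted(greenlist(text[-1], vocab))[0])

print(round(green_ratio(text, vocab), 2))
```

Because the green/red split is recomputed from context rather than stored anywhere, the detector needs only the text itself and the seeding scheme, which is what makes open-sourcing the detection toolkit practical.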