Ai2 Unveils Compact AI Model Outperforming Google, Meta Rivals

August 14, 2025

Small AI models are making waves this week.

On Thursday, Ai2, a nonprofit AI research group, launched Olmo 2 1B, a 1-billion-parameter model that surpasses similarly sized models from Google, Meta, and Alibaba across multiple benchmarks. Parameters, often called weights, are the internal values a model learns during training that determine how it behaves.

Olmo 2 1B is freely available under an Apache 2.0 license on Hugging Face, a platform for AI developers. Unlike most models, it can be fully recreated, with Ai2 sharing the code and datasets (Olmo-mix-1124, Dolmino-mix-1124) used in its development.
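
Because the model lives on Hugging Face, trying it takes only a few lines with the transformers library. Here is a minimal sketch, assuming the repository id is allenai/OLMo-2-0425-1B (check the model's Hugging Face page for the exact name):

```python
# Minimal sketch: load Olmo 2 1B from Hugging Face and generate text.
# The repo id below is an assumption; verify it on huggingface.co.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0425-1B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a short prompt and let the model continue it.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```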

While smaller models may lack the power of larger ones, they don’t demand high-end hardware, making them ideal for developers and hobbyists using standard laptops or consumer devices.

Recent days have seen a surge in small model releases, from Microsoft's Phi 4 reasoning family to Alibaba's Qwen 2.5 Omni 3B. Most, including Olmo 2 1B, can run smoothly on modern laptops or even mobile devices.
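
The hardware claim follows from simple arithmetic: a 1-billion-parameter model stored in 16-bit precision needs only about 2 GB of memory for its weights, well within reach of consumer machines. A rough back-of-the-envelope calculation (approximations only, ignoring activation and runtime overhead):

```python
# Rough memory estimate for a 1B-parameter model's weights.
params = 1_000_000_000        # 1 billion parameters
bytes_fp16 = 2                # 16-bit float: 2 bytes per parameter
bytes_fp32 = 4                # 32-bit float: 4 bytes per parameter

print(f"fp16 weights: ~{params * bytes_fp16 / 1e9:.1f} GB")  # ~2.0 GB
print(f"fp32 weights: ~{params * bytes_fp32 / 1e9:.1f} GB")  # ~4.0 GB
```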

Ai2 notes that Olmo 2 1B was trained on 4 trillion tokens drawn from public, AI-generated, and curated sources. A million tokens roughly equals 750,000 words, so by that estimate the training set comes to about 3 trillion words.

On arithmetic reasoning tests such as GSM8K, Olmo 2 1B outperforms Google's Gemma 3 1B, Meta's Llama 3.2 1B, and Alibaba's Qwen 2.5 1.5B. It also scores well on TruthfulQA, a benchmark that measures factual accuracy.
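
GSM8K consists of grade-school math word problems, typically evaluated by prompting the model to reason step by step and then extracting the final number. The sketch below shows what a single test prompt might look like; the prompt format is illustrative, not Ai2's actual evaluation harness:

```python
# Illustrative GSM8K-style prompt. Real benchmark harnesses use
# standardized few-shot templates and automated answer extraction.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-0425-1B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Question: A baker makes 24 muffins and sells them in boxes of 6. "
    "How many boxes does she fill?\n"
    "Answer: Let's think step by step."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```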

Ai2 announced the release in a post on X:

"This model was pretrained on 4T tokens of high-quality data, following the same standard pretraining into high-quality annealing of our 7, 13, & 32B models. We upload intermediate checkpoints from every 1000 steps in training.

Access the base model: https://t.co/xofyWJmo85"

— Ai2 (@allen_ai) May 1, 2025
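
Intermediate checkpoints like these are typically published as separate revisions of the same Hugging Face repository, so a particular training step can be loaded with the revision argument of from_pretrained. A sketch, with a hypothetical revision name (the actual revision names are listed on the model page):

```python
# Load an intermediate training checkpoint by revision (branch/tag name).
# "step1000" is a hypothetical revision name; check the model's Hugging
# Face page for the revisions Ai2 actually published.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",   # assumed repository id
    revision="step1000",        # hypothetical intermediate checkpoint
)
```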

Ai2 cautions that Olmo 2 1B carries risks. Like all AI models, it can produce problematic outputs, including harmful or sensitive content and factually inaccurate statements. For that reason, Ai2 advises against deploying it in commercial applications.
