Ai2 Unveils Compact AI Model Outperforming Google, Meta Rivals

August 14, 2025
Small AI models are making waves this week.

On Thursday, Ai2, a nonprofit AI research group, launched Olmo 2 1B, a 1-billion-parameter model that surpasses similarly sized models from Google, Meta, and Alibaba across multiple benchmarks. Parameters, often called weights, are the internal values a model learns during training and that determine its behavior.
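
To make "parameters" concrete, here is an illustrative sketch; the layer sizes are invented for the example and bear no relation to Olmo's actual architecture:

```python
# Toy illustration: counting the "parameters" (weights and biases) of a
# tiny two-layer feed-forward network. Models like Olmo 2 1B have roughly
# a billion such values spread across many layers.

def linear_params(n_in: int, n_out: int) -> int:
    """Weights plus biases for one fully connected layer."""
    return n_in * n_out + n_out

hidden = linear_params(512, 2048)   # input -> hidden layer
output = linear_params(2048, 512)   # hidden -> output layer
total = hidden + output             # 2,099,712 parameters for this toy net
```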

Olmo 2 1B is freely available under an Apache 2.0 license on Hugging Face, a platform for AI developers. Unlike most models, it can be fully recreated, with Ai2 sharing the code and datasets (Olmo-mix-1124, Dolmino-mix-1124) used in its development.

While smaller models may lack the power of larger ones, they don’t demand high-end hardware, making them ideal for developers and hobbyists using standard laptops or consumer devices.
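
A back-of-the-envelope calculation shows why. Assuming 16-bit weights (an assumption, since deployed precision varies), a 1-billion-parameter model fits comfortably in laptop memory:

```python
# Rough memory estimate for a model's weights; bytes_per_param = 2
# assumes 16-bit (fp16/bf16) storage, which is common but not universal.

def model_size_gb(params: int, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in GiB."""
    return params * bytes_per_param / 1024**3

size = model_size_gb(1_000_000_000)  # ~1.86 GiB at 16-bit precision
```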

Recent days have seen a surge in small model releases, from Microsoft’s Phi 4 reasoning family to Qwen’s 2.5 Omni 3B. Most, including Olmo 2 1B, can run smoothly on modern laptops or even mobile devices.

Ai2 notes that Olmo 2 1B was trained on 4 trillion tokens from public, AI-generated, and curated sources. A million tokens roughly equals 750,000 words.
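
That ratio makes the scale easy to work out; a quick sketch using the article's 0.75 words-per-token figure:

```python
# Convert training tokens to an approximate word count, using the
# article's rule of thumb of ~750,000 words per million tokens.

WORDS_PER_TOKEN = 0.75
training_tokens = 4_000_000_000_000       # 4 trillion tokens
approx_words = training_tokens * WORDS_PER_TOKEN  # 3 trillion words
```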

In arithmetic reasoning tests like GSM8K, Olmo 2 1B outperforms Google’s Gemma 3 1B, Meta’s Llama 3.2 1B, and Alibaba’s Qwen 2.5 1.5B. It also excels in TruthfulQA, a benchmark for factual accuracy.
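
Benchmarks like GSM8K are commonly scored by exact match on the final numeric answer. This is a simplified sketch of that idea; the sample outputs are invented, and real evaluation harnesses are more careful about answer extraction:

```python
import re

def extract_final_number(text: str):
    """Pull the last number out of a model's free-form answer."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return nums[-1] if nums else None

def exact_match_accuracy(outputs, golds):
    """Fraction of answers whose final number matches the gold answer."""
    hits = sum(extract_final_number(o) == g for o, g in zip(outputs, golds))
    return hits / len(golds)

outputs = ["She has 3 + 4 = 7 apples.", "The total cost is $18."]
golds = ["7", "19"]
score = exact_match_accuracy(outputs, golds)  # 0.5: one of two correct
```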

Ai2's announcement on X reads:

"This model was pretrained on 4T tokens of high-quality data, following the same standard pretraining into high-quality annealing of our 7, 13, & 32B models. We upload intermediate checkpoints from every 1000 steps in training.

Access the base model: https://t.co/xofyWJmo85"

— Ai2 (@allen_ai) May 1, 2025

Ai2 cautions that Olmo 2 1B has risks. Like all AI models, it may generate problematic outputs, including harmful or sensitive content and inaccurate information. Ai2 advises against using it in commercial applications.
