
MiniMax-Text-01

Model parameter quantity: 456B
Affiliated organization: MiniMax
License type: Open source
Release time: January 15, 2025
Model Introduction
MiniMax-Text-01 is a powerful language model with 456 billion total parameters, of which 45.9 billion are activated per token. To better unlock the long context capabilities of the model, MiniMax-Text-01 adopts a hybrid architecture that combines Lightning Attention, Softmax Attention and Mixture-of-Experts (MoE).
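The hybrid layout can be pictured as a stack of decoder blocks in which most layers use linear (Lightning-style) attention, an occasional layer uses standard softmax attention, and every feed-forward is a Mixture-of-Experts that routes each token to only a few experts, which is why only about 45.9B of the 456B parameters are active per token. The sketch below is a minimal, illustrative PyTorch rendering of that idea; the layer count, the 7:1 interleave ratio, the expert count, and the routing settings are assumptions for illustration, not the published configuration (causal masking is also omitted for brevity).

```python
# Illustrative sketch of a hybrid linear/softmax-attention stack with MoE
# feed-forwards. All sizes and the interleave pattern are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Lightning-style linear attention: cost is linear in sequence length."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1               # positive feature map
        kv = torch.einsum("bnd,bne->bde", k, v)          # aggregate key-value stats
        z = 1 / (torch.einsum("bnd,bd->bn", q, k.sum(1)) + 1e-6)
        return self.out(torch.einsum("bnd,bde,bn->bne", q, kv, z))


class MoEFFN(nn.Module):
    """Token-choice MoE: each token only runs through its top_k experts."""
    def __init__(self, dim, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):
        gate = self.router(x).softmax(-1)                # [batch, seq, n_experts]
        weights, idx = gate.topk(self.top_k, dim=-1)     # route each token
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e
                if mask.any():
                    out[mask] += weights[..., slot][mask, None] * expert(x[mask])
        return out


class HybridBlock(nn.Module):
    """Pre-norm block: linear or softmax attention, then an MoE feed-forward."""
    def __init__(self, dim, use_softmax_attn):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.use_softmax_attn = use_softmax_attn
        self.attn = (nn.MultiheadAttention(dim, 8, batch_first=True)
                     if use_softmax_attn else LinearAttention(dim))
        self.ffn = MoEFFN(dim)

    def forward(self, x):
        h = self.norm1(x)
        if self.use_softmax_attn:
            h, _ = self.attn(h, h, h, need_weights=False)
        else:
            h = self.attn(h)
        x = x + h
        return x + self.ffn(self.norm2(x))


# Assumed interleave: one softmax-attention block after every 7 linear blocks.
dim, n_layers = 512, 16
model = nn.Sequential(*[HybridBlock(dim, use_softmax_attn=(i % 8 == 7))
                        for i in range(n_layers)])
tokens = torch.randn(2, 128, dim)
print(model(tokens).shape)   # torch.Size([2, 128, 512])
```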
Language comprehension ability (6.4): Often makes semantic misjudgments, leading to obvious logical disconnects in responses.
Knowledge coverage scope (8.5): Possesses core knowledge of mainstream disciplines, but has limited coverage of cutting-edge interdisciplinary fields.
Reasoning ability (7.8): Can perform logical reasoning with more than three steps, though efficiency drops when handling nonlinear relationships.
Related models
MiniMax-Text-01: A powerful language model with 456 billion total parameters, of which 45.9 billion are activated per token. To better unlock long-context capabilities, it adopts a hybrid architecture combining Lightning Attention, Softmax Attention and Mixture-of-Experts (MoE).
MiniMax-M1-80k: The world's first open-weight, large-scale hybrid-attention reasoning model, released by MiniMax.
abab6.5: An API model produced by MiniMax, with the version number abab6.5. The abab6.5 series is a trillion-parameter Mixture-of-Experts (MoE) large language model. abab6.5 is suited to complex scenarios such as application problem calculations and scientific computation (see the request sketch after this list).
abab6.5s-chat: The chat variant of abab6.5s, the general-purpose member of the trillion-parameter abab6.5 MoE series.
abab7-chat-preview: The abab7-preview model, produced by MiniMax, is an API model with significant improvements over the abab6.5 series in capabilities such as handling long texts, mathematics, and writing.
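Since the abab-series models above are served as API models, a call typically resembles a standard chat-completions request. The snippet below is only a hedged sketch: the endpoint URL, payload fields, environment-variable name, and response shape are assumptions modeled on a generic chat interface rather than taken from MiniMax's official documentation, which should be consulted for the actual contract.

```python
# Hedged sketch of calling an abab-series chat model over HTTP.
# The endpoint URL and payload layout are assumptions, not the official API.
import os
import requests

API_KEY = os.environ["MINIMAX_API_KEY"]   # assumed environment-variable name
URL = "https://api.minimax.chat/v1/text/chatcompletion_v2"  # assumed endpoint

payload = {
    "model": "abab6.5s-chat",              # general-purpose chat variant
    "messages": [
        {"role": "user",
         "content": "Summarize the MoE architecture in one sentence."}
    ],
}
resp = requests.post(URL, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
print(resp.json())                          # response shape is also assumed
```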