
Mistral-Small-Instruct-2409

Model parameter quantity: 22B
Affiliated organization: Mistral AI
License type: Open Source
Release time: September 16, 2024
Model Introduction
With 22 billion parameters, Mistral Small v24.09 offers customers a convenient mid-point between Mistral NeMo 12B and Mistral Large 2, providing a cost-effective solution that can be deployed across various platforms and environments.
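As a rough illustration of such a deployment, the sketch below loads the model with Hugging Face Transformers. It assumes the checkpoint is published as mistralai/Mistral-Small-Instruct-2409 and that enough GPU memory is available for the 22B weights in bfloat16 (roughly 44 GB); it is not an official usage guide.

# Minimal sketch: running Mistral-Small-Instruct-2409 via Hugging Face Transformers.
# Assumes the checkpoint name "mistralai/Mistral-Small-Instruct-2409" and sufficient GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-Instruct-2409"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit the 22B weights
    device_map="auto",           # spread layers across available GPUs
)

# Build a chat-style prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the benefits of a mid-sized LLM."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))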
Language comprehension ability: 4.5
Often makes semantic misjudgments, leading to obvious logical disconnects in responses.

Knowledge coverage scope: 5.5
Has significant knowledge blind spots, often showing factual errors and repeating outdated information.

Reasoning ability: 4.5
Unable to maintain coherent reasoning chains, often inverting cause and effect or making calculation errors.
Related models
Mistral-Large-Instruct-2411: an advanced dense large language model (LLM) with 123B parameters and state-of-the-art reasoning, knowledge, and coding capabilities, extending Mistral-Large-Instruct-2407 with better long-context handling, function calling, and system prompt support.
Mistral-Small-Instruct-2409: with 22 billion parameters, Mistral Small v24.09 offers customers a convenient mid-point between Mistral NeMo 12B and Mistral Large 2, providing a cost-effective solution that can be deployed across various platforms and environments.
Ministral-8B-Instruct-2410: an instruct fine-tuned model that significantly outperforms existing models of similar size, released under the Mistral Research License.
Mixtral-8x22B-Instruct-v0.1: a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.