
Mistral-Large-Instruct-2411

Model parameter quantity: 123B
Affiliated organization: Mistral AI
License type: Open Source
Release time: November 18, 2024

Model Introduction
Mistral-Large-Instruct-2411 is an advanced dense large language model (LLM) with 123B parameters and state-of-the-art reasoning, knowledge, and coding capabilities. It extends Mistral-Large-Instruct-2407 with improved long-context handling, function calling, and system prompt support.
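Since function calling and system prompt support are highlighted improvements in 2411, a minimal sketch of what a function-calling chat request might look like is shown below. It follows the widely used OpenAI-compatible chat-completion JSON shape; the endpoint is not shown, and the `get_weather` tool and exact field names are illustrative assumptions, not an official Mistral API reference.

```python
import json

def build_function_calling_request(user_message: str) -> dict:
    """Assemble a chat request with a system prompt and one tool definition.

    Shape assumption: OpenAI-compatible chat-completion payload, which
    Mistral-style endpoints commonly accept. Verify against the provider's
    own API documentation before use.
    """
    return {
        # Hugging Face model id for this release.
        "model": "mistralai/Mistral-Large-Instruct-2411",
        "messages": [
            # System prompt handling is one of the improvements in 2411.
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": user_message},
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool, for illustration only.
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

request = build_function_calling_request("What's the weather in Paris?")
print(json.dumps(request, indent=2))
```

When the model decides to call the tool, the response would carry a tool-call message naming `get_weather` with JSON arguments, which the caller executes and feeds back as a follow-up message.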
Language comprehension ability: 5.4. Often makes semantic misjudgments, leading to obvious logical disconnects in responses.
Knowledge coverage scope: 7.3. Possesses core knowledge of mainstream disciplines, but has limited coverage of cutting-edge interdisciplinary fields.
Reasoning ability: 5.6. Unable to maintain coherent reasoning chains, often causing inverted causality or miscalculations.
Related model
Mistral-Large-Instruct-2411: An advanced dense LLM of 123B parameters with state-of-the-art reasoning, knowledge, and coding capabilities, extending Mistral-Large-Instruct-2407 with better long context, function calling, and system prompts.
Mistral-Small-Instruct-2409: With 22 billion parameters, Mistral Small v24.09 offers customers a convenient mid-point between Mistral NeMo 12B and Mistral Large 2, providing a cost-effective solution that can be deployed across various platforms and environments.
Ministral-8B-Instruct-2410: An instruct fine-tuned language model that significantly outperforms existing models of similar size, released under the Mistral Research License.
Mixtral-8x22B-Instruct-v0.1: A sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.