Model Introduction
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. It outperforms Llama 2 70B on most benchmarks we tested.
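In Mixtral's Sparse Mixture of Experts design, each feed-forward block holds 8 expert MLPs and a router sends every token to 2 of them. The sketch below illustrates that top-2 routing pattern in PyTorch; the class name, hidden dimensions, and expert shapes are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Illustrative top-2 sparse Mixture-of-Experts layer.

    Mixtral routes each token to 2 of 8 expert MLPs per layer; names
    and sizes here are simplified and hypothetical.
    """

    def __init__(self, dim: int = 512, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Score every token against every expert.
        logits = self.gate(x)
        # Keep only the 2 highest-scoring experts per token.
        weights, idx = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # renormalize over the chosen 2
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e  # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

tokens = torch.randn(4, 512)
print(Top2MoE()(tokens).shape)  # torch.Size([4, 512])
```

Because only 2 of the 8 experts run per token, each forward pass touches a fraction of the total parameters, which is what lets a sparse model of this size stay competitive on inference cost.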
| Dimension | Assessment | Score |
|---|---|---|
| Language comprehension | Often makes semantic misjudgments, leading to obvious logical disconnects in responses. | 3.6 |
| Knowledge coverage | Has significant knowledge blind spots, frequently producing factual errors and outdated information. | 5.0 |
| Reasoning | Unable to maintain coherent reasoning chains, often reversing cause and effect or making calculation errors. | 2.8 |