Mistral AI recently released Mixtral 8x7B, a large language model built on a sparse mixture-of-experts (SMoE) architecture. It delivers performance comparable to GPT-3.5, performs well on tasks such as mathematics, code generation, and reading comprehension, and offers faster inference than a dense model of similar quality because only a subset of its parameters is active per token. Both the base model and the Instruct version have been open sourced under the Apache 2.0 license. The release marks a significant step forward for open-source large language models and signals Mistral AI's stated ambition to open source a GPT-4-level model in 2024.
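To make the SMoE idea concrete, the sketch below shows a minimal top-k expert-routing layer in PyTorch: a router scores each token, only the top-k experts (top-2 of 8, matching Mixtral's published configuration) are evaluated, and their outputs are combined with the renormalized router weights. This is an illustrative toy implementation, not Mistral's code; the class name, dimensions, and expert structure are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy sparse mixture-of-experts layer (illustrative, not Mixtral's implementation).
    Each token is routed to its top-k experts, so only a fraction of the
    parameters does work per token -- the source of the inference speedup."""
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e       # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: 16 tokens pass through the layer, but each token exercises only 2 of the 8 expert FFNs.
tokens = torch.randn(16, 512)
layer = SparseMoELayer()
print(layer(tokens).shape)  # torch.Size([16, 512])
```

The design point illustrated here is that total parameter count (all experts) and per-token compute (top-k experts) are decoupled, which is why an SMoE model can match a much larger dense model while running inference faster.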
By open sourcing Mixtral 8x7B, Mistral AI lowers the barrier to entry for working with large language models, provides a valuable resource for developers and researchers, and helps the open-source large-model ecosystem flourish. Its continued development, and the promised open sourcing of a GPT-4-level model, are worth watching.