Downcodes editor reports: French artificial intelligence startup Mistral AI recently released a new generation of language models, Ministral 3B and Ministral 8B. Part of the "Ministraux" series, they are designed specifically for edge devices and edge computing scenarios. Both models support context lengths of up to 128,000 tokens, offer significant advantages in data privacy and local processing, and provide strong support for applications such as local translation, offline intelligent assistants, data analysis, and autonomous robotics. Their performance surpasses many comparable models across multiple benchmark tests, demonstrating strong competitiveness.
Recently, French artificial intelligence startup Mistral AI announced its new generation of language models, Ministral 3B and Ministral 8B.
The two new models are part of the "Ministraux" series, designed for edge devices and edge computing scenarios, and support context lengths of up to 128,000 tokens. This means the models are not only powerful but also usable in situations where data privacy and local processing are particularly important.
Mistral said the Ministraux models are well suited to applications such as local translation, offline smart assistants, data analytics, and autonomous robotics. To further improve efficiency, Ministraux models can also be combined with larger language models (such as Mistral Large) to serve as effective intermediaries in multi-step workflows.
In terms of performance, benchmarks provided by Mistral show that Ministral 3B and 8B outperform many comparable models in multiple categories, such as Google's Gemma 2 2B and Meta's Llama 3.1 8B. It is worth mentioning that despite having fewer parameters, Ministral 3B outperformed its predecessor, Mistral 7B, in some tests.
In fact, Ministral 8B performed well across all tests, especially in areas such as knowledge, common sense, function calling, and multilingual capabilities.
Regarding pricing, both new models from Mistral AI are already available via API. Ministral 8B costs $0.10 per million tokens, while Ministral 3B costs $0.04. Additionally, Mistral provides the model weights of Ministral 8B Instruct for research purposes. It is worth noting that these two new Mistral models will also soon be available through cloud partners such as Google Vertex AI and AWS.
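To put the per-million-token prices quoted above in perspective, here is a minimal sketch of the cost arithmetic; the function name and the choice of a full 128,000-token context as the example are illustrative, not part of Mistral's documentation.

```python
def estimate_cost(tokens: int, price_per_million_usd: float) -> float:
    """Estimate API cost in USD for a given token count at a per-million-token rate."""
    return tokens * price_per_million_usd / 1_000_000

# Prices quoted in the article (USD per million tokens)
MINISTRAL_8B_PRICE = 0.10
MINISTRAL_3B_PRICE = 0.04

# Example: processing one full 128,000-token context window
print(f"Ministral 8B: ${estimate_cost(128_000, MINISTRAL_8B_PRICE):.4f}")
print(f"Ministral 3B: ${estimate_cost(128_000, MINISTRAL_3B_PRICE):.4f}")
```

At these rates, even a maximal 128,000-token request costs only fractions of a cent, which underlines the edge-oriented positioning of the series.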
Highlights:
- Mistral AI launches Ministral 3B and 8B, supporting context lengths of up to 128,000 tokens.
- The two models are suitable for applications such as local translation, offline assistants, data analysis, and autonomous robotics.
- In terms of pricing, Ministral 8B costs US$0.10 per million tokens and Ministral 3B costs US$0.04.
All in all, Mistral AI's Ministral 3B and 8B bring new possibilities to the field of artificial intelligence with their strong performance, long context length, and flexible application scenarios. These two models are expected to play an important role in edge computing and localized applications. The editor of Downcodes will continue to follow their subsequent development.