French AI startup Mistral recently announced a new series of generative AI models designed for edge devices such as laptops and mobile phones. Mistral has named the series "Les Ministraux," aiming to meet market demand for local, privacy-first processing.
The newly released Les Ministraux series includes two models: Ministral3B and Ministral8B. Notably, both models have a context window of 128,000 tokens, enough to handle text roughly the length of a 50-page book. Whether the task is text generation, on-device translation, or offline intelligent-assistant services, these models can handle it with ease.
Mistral said in its blog post that a growing number of customers and partners are looking for solutions that run inference locally, citing use cases such as on-device translation, local analytics, and autonomous robots. Les Ministraux was developed to provide compute-efficient, low-latency options for these scenarios.
Currently, Ministral8B is available for download, but only for research purposes. Developers and companies that want a commercial license for Ministral3B or Ministral8B need to contact Mistral directly. Both models will also be available through Mistral's cloud platform, La Plateforme, as well as through partner cloud services in the coming weeks. Ministral8B costs 10 cents per million input/output tokens, while Ministral3B costs 4 cents.
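To put those per-million-token rates in concrete terms, the sketch below estimates the cost of a single request. The helper function and model names are illustrative assumptions for this article, not part of any official Mistral SDK; only the rates come from the pricing stated above.

```python
# Published rates in USD per million input/output tokens (from the article above).
# This is an illustrative cost estimator, not an official Mistral API.
RATES_USD_PER_MILLION = {
    "ministral-8b": 0.10,  # 10 cents per million tokens
    "ministral-3b": 0.04,  # 4 cents per million tokens
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the published rates."""
    rate = RATES_USD_PER_MILLION[model]
    return (input_tokens + output_tokens) * rate / 1_000_000

# Example: filling the full 128,000-token context window plus a 2,000-token reply
cost = estimate_cost("ministral-8b", 128_000, 2_000)
print(f"${cost:.4f}")  # 130,000 tokens at $0.10 per million -> $0.0130
```

At these rates, even a request that uses the entire context window costs just over a cent on Ministral8B, which is part of the appeal of small models for high-volume, on-device-adjacent workloads.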
Recently, there has been a growing trend toward smaller models because they are cheaper and faster to train, fine-tune, and run. Google continues to add new models to its Gemma line of small models, while Microsoft has launched its Phi line of models. Meta has also launched several small models optimized for edge hardware in the latest Llama series update.
Mistral claims that Ministral3B and Ministral8B outperform comparable Llama and Gemma models, as well as its own Mistral 7B, on several AI benchmarks that assess instruction following and problem solving. Headquartered in Paris, Mistral recently raised US$640 million and is steadily expanding its AI product portfolio. Over the past few months, the company has launched a free service that lets developers test its models and released an SDK so customers can fine-tune them. It has also introduced a code-generation model called Codestral.
Mistral's co-founders come from Meta and Google's DeepMind, and the company's goal is to create flagship models that can compete with top models such as OpenAI's GPT-4o and Anthropic's Claude, and become profitable in the process. While turning a profit is a challenging goal for many generative AI startups, Mistral has reportedly started generating revenue this summer.
Highlights:
1. Mistral's Les Ministraux models are designed for edge devices and support local, privacy-preserving processing.
2. The new models, Ministral3B and Ministral8B, offer strong context-handling capabilities suited to a wide range of applications.
3. Mistral has become profitable and continues to expand its AI product portfolio to compete with the industry's top models.
With strong performance and optimization for edge devices, Mistral's Les Ministraux models open new possibilities for local AI applications. Mistral's next moves in the AI field will be worth watching.