Allen AI (AI2) has announced the open-source release of its instruction-tuned model OLMo-7B-Instruct, built on AI2's Dolma dataset. The release includes the full weights of four 7B-scale model variants, each trained on at least 2T tokens, along with an evaluation suite and the training and evaluation code. This gives users end-to-end visibility into the entire pipeline, from pre-training through RLHF fine-tuning, and offers researchers and developers a valuable resource for model research and application.
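Since the weights are openly released, they can in principle be loaded with standard tooling. The sketch below assumes the model is published on the Hugging Face Hub under the repo ID allenai/OLMo-7B-Instruct and that custom model code is permitted via trust_remote_code; these details are assumptions for illustration rather than instructions from the announcement.

```python
# Minimal sketch: load the released instruct model and generate a reply.
# Repo ID and loading options are assumed, not taken from the announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "allenai/OLMo-7B-Instruct"  # assumed Hugging Face Hub repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

prompt = "Briefly describe the Dolma dataset."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```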
The significance of open-sourcing OLMo-7B-Instruct lies not only in the model weights themselves, but in the complete supporting code and evaluation tools that accompany them, which make research and application far easier for the AI community and help advance the development of large models. As the project continues to evolve, its performance in future applications will be worth watching.