OpenGPTAndBeyond
1.0.0
简体中文 | English
The road to replicating and surpassing ChatGPT with open-source models
Since the accidental leak of the LLaMA weights, and since Stanford Alpaca achieved impressive results by instruction-tuning LLaMA on data built from the GPT-3 API via self-instruct, the open-source community has become increasingly confident that ChatGPT-level large language models are within reach.
This repo documents that process of replication and going beyond, to give the community an overview.
It covers: related technical progress, base models, domain models, training, inference, techniques, data, multilinguality, multimodality, and more.
contributor | model/project | license | language | main feature |
---|---|---|---|---|
Meta | LLaMA/LLaMA2 | - | multi | LLaMA-13B outperforms GPT-3 (175B) and LLaMA-65B is competitive with PaLM-540B. Base model for most follow-up works. |
HuggingFace-BigScience | BLOOM | - | multi | an autoregressive Large Language Model (LLM) trained by HuggingFace BigScience. |
HuggingFace-BigScience | BLOOMZ | - | multi | instruction-finetuned version of the BLOOM & mT5 pretrained multilingual language models on a crosslingual task mixture. |
EleutherAI | GPT-J | - | en | transformer model trained using Ben Wang's Mesh Transformer JAX. |
Meta | OPT | - | en | Open Pre-trained Transformer Language Models; the aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and to bring more voices to the table in studying the impact of these LLMs. |
Cerebras Systems | Cerebras-GPT | - | en | Pretrained LLM, GPT-3 like, commercially available, efficiently trained on the Andromeda AI supercomputer, trained in accordance with Chinchilla scaling laws (20 tokens per model parameter), which is compute-optimal (see the scaling arithmetic sketch after this table). |
EleutherAI | pythia | - | en | combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. |
Stability-AI | StableLM | - | en | Stability AI Language Models. |
FDU | MOSS | - | en/zh | An open-source tool-augmented conversational language model from Fudan University. |
ssymmetry & FDU | BBT-2 | - | zh | 12B open-source LM. |
@mlfoundations | OpenFlamingo | - | en | An open-source framework for training large multimodal models. |
EleutherAI | GPT-NeoX-20B | - | en | Its architecture intentionally resembles that of GPT-3, and is almost identical to that of GPT-J-6B. |
UCB | OpenLLaMA | Apache-2.0 | en | An Open Reproduction of LLaMA. |
MosaicML | MPT | Apache-2.0 | en | MPT-7B is a GPT-style model, and the first in the MosaicML Foundation Series of models. Trained on 1T tokens of a MosaicML-curated dataset, MPT-7B is open-source, commercially usable, and equivalent to LLaMa 7B on evaluation metrics. |
TogetherComputer | RedPajama-INCITE-Base-3B-v1 | Apache-2.0 | en | A 2.8B parameter pretrained language model, pretrained on RedPajama-Data-1T, together with an Instruction-tuned Version and a Chat Version. |
Lightning-AI | Lit-LLaMA | Apache-2.0 | - | Independent implementation of LLaMA that is fully open source under the Apache 2.0 license. |
@conceptofmind | PaLM | MIT License | en | An open-source implementation of Google PaLM models. |
TII | Falcon-7B | TII Falcon LLM License | en | a 7B parameters causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. |
TII | Falcon-40B | TII Falcon LLM License | multi | a 40B parameters causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. |
TigerResearch | TigerBot | Apache-2.0 | en/zh | a multi-language and multitask LLM. |
BAAI | Aquila / Aquila2 | BAAI_Aquila_Model_License | en/zh | The Aquila language model inherits the architectural design advantages of GPT-3 and LLaMA, replaces a batch of underlying operators with more efficient implementations, and redesigns the tokenizer for Chinese-English bilingual support. |
OpenBMB | CPM-Bee | 通用模型许可协议-来源说明-宣传限制-商业授权 | en/zh | CPM-Bee is a fully open-source, commercially usable Chinese-English bilingual base model with ten billion parameters. It has been pre-trained on an extensive corpus of trillion-scale tokens. |
Baichuan | baichuan-7B | Apache-2.0 | en/zh | It has achieved the best performance among models of the same size on standard Chinese and English authoritative benchmarks (C-EVAL, MMLU, etc). |
Tencent | lyraChatGLM | MIT License | en/zh | To the best of our knowledge, it is the first accelerated version of ChatGLM-6B. The inference speed of lyraChatGLM has achieved 300x acceleration over the early original version. We are still working hard to further improve the performance. |
SalesForce | XGen | Apache-2.0 | multi | Salesforce open-source LLMs with 8k sequence length |
Shanghai AI Lab | InternLM | Apache-2.0 | en/zh | InternLM has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics: It leverages trillions of high-quality tokens for training to establish a powerful knowledge base. It supports an 8k context window length, enabling longer input sequences and stronger reasoning capabilities. It provides a versatile toolset for users to flexibly build their own workflows. |
xverse-ai | XVERSE | Apache-2.0 | multi | Multilingual LLMs developed by XVERSE Technology Inc. |
Writer | palmyra | Apache-2.0 | en | extremely powerful while being extremely fast. This model excels at many nuanced tasks such as sentiment classification and summarization. |
Mistral AI | Mistral | Apache-2.0 | en | Mistral 7B is a 7.3B parameter model that: 1. Outperforms Llama 2 13B on all benchmarks 2. Outperforms Llama 1 34B on many benchmarks 3. Approaches CodeLlama 7B performance on code, while remaining good at English tasks 4. Uses Grouped-query attention (GQA) for faster inference 5. Uses Sliding Window Attention (SWA) to handle longer sequences at smaller cost |
SkyworkAI | Skywork | - | en/zh | On major evaluation benchmarks, Skywork-13B is at the forefront of Chinese open-source models and is the best among models of the same parameter scale; it can be used commercially without application; a 600GB (150 billion tokens) Chinese dataset has also been open-sourced. |
01.AI | Yi | - | en/zh | The Yi series models are large language models trained from scratch by developers at 01.AI. |
IEIT Systems | Yuan-2.0 | - | en/zh | In this work, Localized Filtering-based Attention (LFA) is introduced to incorporate prior knowledge of local dependencies of natural language into attention. Based on LFA, we develop and release Yuan 2.0, a large language model with parameters ranging from 2.1 billion to 102.6 billion. A data filtering and generation method is presented to build high-quality pretraining and fine-tuning datasets. A distributed training method with non-uniform pipeline parallelism, data parallelism, and optimizer parallelism is proposed, which greatly reduces the bandwidth requirements of intra-node communication and achieves good performance in large-scale distributed training. Yuan 2.0 models display impressive ability in code generation, math problem-solving, and chat compared with existing models. |
Nanbeige | Nanbeige | Apache-2.0 | en/zh | Nanbeige-16B is a 16 billion parameter language model developed by Nanbeige LLM Lab. It uses 2.5T Tokens for pre-training. The training data includes a large amount of high-quality internet corpus, various books, code, etc. It has achieved good results on various authoritative evaluation data sets. This release includes the Base, Chat, Base-32k and Chat-32k. |
deepseek-ai | deepseek-LLM | MIT License | en/zh | an advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. |
LLM360 | LLM360 | - | - | Most open-source LLM releases include model weights and evaluation results. However, additional information is often needed to genuinely understand a model's behavior—and this information is not typically available to most researchers. Hence, we commit to releasing all of the intermediate checkpoints (up to 360!) collected during training, all of the training data (and its mapping to checkpoints), all collected metrics (e.g., loss, gradient norm, evaluation results), and all source code for preprocessing data and model training. These additional artifacts can help researchers and practitioners to have a deeper look into LLM’s construction process and conduct research such as analyzing model dynamics. We hope that LLM360 can help make advanced LLMs more transparent, foster research in smaller-scale labs, and improve reproducibility in AI research. |
FDU, etc. | CT-LLM | - | zh/en | focusing on the Chinese language. Starting from scratch, CT-LLM primarily uses Chinese data from a 1,200 billion token corpus, including 800 billion Chinese, 300 billion English, and 100 billion code tokens. By open-sourcing CT-LLM's training process, including data processing and the Massive Appropriate Pretraining Chinese Corpus (MAP-CC), and introducing the Chinese Hard Case Benchmark (CHC-Bench), we encourage further research and innovation, aiming for more inclusive and adaptable language models. |
TigerLab | MAP-NEO | - | zh/en | The first large model to open-source its full pipeline, from data processing to the training process and model weights. |
DataComp | DCLM | - | - | Provides tools and guidance for processing raw data, tokenization, data shuffling, model training, and performance evaluation. The baseline 7B model achieves excellent performance. |
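A quick back-of-the-envelope illustration of the Chinchilla rule of thumb referenced in the Cerebras-GPT row above (roughly 20 training tokens per model parameter, with the common approximation of ~6 FLOPs per parameter per token). The model sizes below are illustrative only, not a claim about any specific release.

```python
# Rough compute-optimal token budgets under the Chinchilla heuristic
# (~20 tokens per parameter) and the classic approximation C ≈ 6 * N * D
# (N = parameters, D = training tokens).

def chinchilla_budget(n_params: float, tokens_per_param: float = 20.0):
    """Return (training tokens, approximate training FLOPs)."""
    tokens = tokens_per_param * n_params
    flops = 6.0 * n_params * tokens
    return tokens, flops

for n in (111e6, 1.3e9, 6.7e9, 13e9):  # illustrative model sizes
    tokens, flops = chinchilla_budget(n)
    print(f"{n/1e9:5.2f}B params -> {tokens/1e9:7.1f}B tokens, ~{flops:.2e} FLOPs")
```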
contributor | model | domain | language | base model | main feature |
---|---|---|---|---|---|
UT Southwestern/UIUC/OSU/HDU | ChatDoctor | medical | en | LLaMA | Maybe the first domain-specific chat model tuned on LLaMA. |
Cambridge | Visual Med-Alpaca | biomedical | en | LLaMA-7B | a multi-modal foundation model designed specifically for the biomedical domain. |
HIT | BenTsao / ChatGLM-Med | medical | zh | LLaMA/ChatGLM | fine-tuned with Chinese medical knowledge dataset, which is generated by using gpt3.5 api. |
ShanghaiTech, etc. | DoctorGLM | medical | en/zh | ChatGLM-6B | Chinese medical consultation model fine-tuned on ChatGLM-6B. |
THU AIR | BioMedGPT-1.6B | biomedical | en/zh | - | a pre-trained multi-modal molecular foundation model with 1.6B parameters that associates 2D molecular graphs with texts. |
@LiuHC0428 | LawGPT_zh | legal | zh | ChatGLM-6B | a general model in Chinese legal domain, trained on data generated via Reliable-Self-Instruction. |
SJTU | MedicalGPT-zh | medical | zh | ChatGLM-6B | a general model in the Chinese medical domain, trained on diverse data generated via self-instruct. |
SJTU | PMC-LLaMA | medical | zh | LLaMA | Continue Training LLaMA on Medical Papers. |
HuggingFace | StarCoder | code generation | en | - | a language model (LM) trained on source code and natural language text. Its training data incorporates more than 80 different programming languages as well as text extracted from GitHub issues and commits and from notebooks. |
@CogStack | NHS-LLM | medical | en | not clear | A conversational model for healthcare trained using OpenGPT. |
@pengxiao-song | LaWGPT | legal | zh | LLaMA/ChatGLM | expand the vocab with Chinese legal terminologies, instruction fine-tuned on data generated using self-instruct. |
Duxiaoman | XuanYuan | finance | zh | BLOOM-176B | A Large Chinese Financial Chat Model with Hundreds of Billions Parameters. |
CUHK | HuatuoGPT | medical | zh | not clear | HuatuoGPT, a large language model (LLM) trained on a vast Chinese medical corpus. Our objective with HuatuoGPT is to construct a more professional ‘ChatGPT’ for medical consultation scenarios. |
PKU | Lawyer LLaMA | legal | zh | LLaMA | continued pretraining on Chinese legal data, instruction-tuned on legal exams and legal consulting QA pairs. |
THU | LexiLaw | legal | zh | ChatGLM-6B | trained on a mixture of general data (BELLE 1.5M) and legal data |
THU, etc. | taoli | education | zh | LLaMA | A large model for international Chinese education. It extends specific vocabulary on the base model, and uses the domain's proprietary data set for instruction fine-tuning. |
NUS | Goat | arithmetic | en | LLaMA | a fine-tuned LLaMA model that significantly outperforms GPT-4 on a range of arithmetic tasks. Fine-tuned on a synthetically generated dataset, Goat achieves state-of-the-art performance on the BIG-bench arithmetic sub-task. |
CU/NYU | FinGPT | finance | en | - | an end-to-end open-source framework for financial large language models (FinLLMs). |
microsoft | WizardCoder | code generation | en | StarCoder | trained with 78k evolved code instructions; surpasses Claude-Plus (+6.8), Bard (+15.3) and InstructCodeT5+ (+22.3) on the HumanEval benchmarks. |
UCAS | Cornucopia | finance | zh | LLaMA | finetunes LLaMA on Chinese financial knowledge. |
PKU | ChatLaw | legal | zh | Ziya / Anima | Chinese legal domain model. |
@michael-wzhu | ChatMed | medical | zh | LLaMA | Chinese medical LLM based on LLaMA-7B. |
SCUT | SoulChat | mental health | zh | ChatGLM-6B | Chinese dialogue LLM in mental health domain, based on ChatGLM-6B. |
@shibing624 | MedicalGPT | medical | zh | ChatGLM-6B | Training Your Own Medical GPT Model with ChatGPT Training Pipeline. |
BJTU | TransGPT | transportation | zh | LLaMA-7B | Chinese transportation model. |
BAAI | AquilaCode | code generation | multi | Aquila | AquilaCode-multi is a multi-language model that supports high-accuracy code generation for various programming languages, including Python/C++/Java/Javascript/Go, etc. It has achieved impressive results in HumanEval (Python) evaluation, with Pass@1, Pass@10, and Pass@100 scores of 26/45.7/71.6, respectively. In the HumanEval-X multi-language code generation evaluation, it significantly outperforms other open-source models with similar parameters (as of July 19, 2023). AquilaCode-py, on the other hand, is a single-language Python version of the model that focuses on Python code generation. It has also demonstrated excellent performance in HumanEval evaluation, with Pass@1, Pass@10, and Pass@100 scores of 28.8/50.6/76.9 (as of July 19, 2023). |
Meta | CodeLLaMA | code generation | multi | LLaMA-2 | a family of large language models for code based on Llama 2, providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. |
UNSW, etc | Darwin | natural science | en | LLaMA-7B | the first open-source LLM for natural science, mainly in physics, chemistry and material science. |
alibaba | EcomGPT | e-commerce | en/zh | BLOOMZ | An Instruction-tuned Large Language Model for E-commerce. |
TIGER-AI-Lab | MAmmoTH | math | en | LLaMA2/CodeLLaMA | a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields. |
SJTU | abel | math | en | LLaMA2 | We propose Parental Oversight, a babysitting strategy for supervised fine-tuning. Parental Oversight is not limited to any specific data processing method; instead, it defines the data processing philosophy that should guide supervised fine-tuning in the era of Generative AI (GAI). |
FDU | DISC-LawLLM | legal | zh | Baichuan-13B | FudanDISC has released DISC-LawLLM, a Chinese intelligent legal system driven by a large language model. The system can provide various legal services for different user groups. In addition, DISC-Law-Eval is constructed to evaluate the large legal language model from both objective and subjective aspects. The model has obvious advantages compared with existing large legal models. The team also made available DISC-Law-SFT, a high-quality supervised fine-tuning (SFT) dataset of 300,000 examples. |
HKU, etc | ChatPsychiatrist | mental health | en | LLaMA-7B | This repo open-sources the instruct-tuned LLaMA-7B model that has been fine-tuned with counseling domain instruction data. To construct our 8K-size instruct-tuning dataset, we collected real-world counseling dialogue examples and employed GPT-4 as an extractor and filter. In addition, we have introduced a comprehensive set of metrics, specifically tailored to the LLM+Counseling domain, by incorporating counseling domain evaluation criteria. These metrics enable the assessment of performance in generating language content that involves multi-dimensional counseling skills. |
CAS | StarWhisper | astronomical | zh | - | StarWhisper, a large astronomical model, significantly improves the reasoning logic and integrity of the model through fine-tuning on an expert-labeled astrophysical corpus, logical long-text training, and direct preference optimization. In the CG-Eval jointly published by the Keguei AI Research Institute and LanguageX AI Lab, it reached second place overall, just below GPT-4, and its mathematical reasoning and astronomical capabilities are close to or exceed GPT-3.5 Turbo. |
ZhiPuAI | FinGLM | finance | zh | ChatGLM | solutions of SMP2023-ELMFT(The Evaluation of Large Model of Finance Technology). |
PKU, etc | CodeShell | code generation | en/zh | - | CodeShell is a code large language model (LLM) developed jointly by the Knowledge Computing Lab at Peking University and the AI team of Sichuan Tianfu Bank. CodeShell has 7 billion parameters, was trained on 500 billion tokens, and has a context window length of 8192. On authoritative code evaluation benchmarks (HumanEval and MBPP), CodeShell achieves the best performance for models of its scale. |
FDU | DISC-FinLLM | finance | zh | Baichuan-13B-Chat | DISC-FinLLM is a large language model in the financial field. It is a multi-expert intelligent financial system composed of four modules for different financial scenarios: financial consulting, financial text analysis, financial calculation, and financial knowledge retrieval and question answering. |
Deepseek | Deepseek Coder | code generation | en/zh | - | Deepseek Coder comprises a series of code language models trained on both 87% code and 13% natural language in English and Chinese, with each model pre-trained on 2T tokens. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks. |
microsoft | MathOctopus | math | multi | LLaMA2 | This work pioneers exploring and building powerful Multilingual Math Reasoning (xMR) LLMs. To accomplish this, we make the following contributions: 1. MGSM8KInstruct, the first multilingual math reasoning instruction dataset, encompassing ten distinct languages, thus addressing the issue of training data scarcity in xMR tasks. 2. MSVAMP, an out-of-domain xMR test dataset, to conduct a more exhaustive and comprehensive evaluation of the model's multilingual mathematical capabilities. 3. MathOctopus, our effective Multilingual Math Reasoning LLMs, trained with different strategies, which notably outperform conventional open-source LLMs and exhibit superiority over ChatGPT in few-shot scenarios. |
ITREC | Zh-MT-LLM | maritime | en/zh | ChatGLM3-6b | The training data uses Zh-mt-sft, a maritime-domain dataset organized into three main segments, plus 300k general conversation examples from moss-003-sft-data. Zh-mt-sft specifically contains CrimeKgAssitant-1.8w, Zh-law-qa, and Zh-law-court for Q&A on maritime laws and regulations; Zh-edu-qa and Zh-edu-qb for maritime education and training; and Zh-mt-qa for Q&A on maritime specialized knowledge. |
@SmartFlowAI | EmoLLM | mental health | zh | - | EmoLLM is a series of mental health LLMs, instruction-tuned from base LLMs, that support the understand-support-help counseling chain for users' mental health. |
some medical models: here
some domain llms: Awesome-Domain-LLM
healthcare models: Awesome-Healthcare-Foundation-Models
contributor | model/project | language | base model | main feature |
---|---|---|---|---|
Stanford | Alpaca | en | LLaMA/OPT | use 52K instruction-following data generated by Self-Instruct techniques to fine-tune 7B LLaMA; the resulting model, Alpaca, behaves similarly to the text-davinci-003 model on the Self-Instruct instruction-following evaluation suite. Alpaca has inspired many follow-up models. |
LianJiaTech | BELLE | en/zh | BLOOMZ-7B1-mt | maybe the first Chinese model to follow Alpaca. |
THU | ChatGLM-6B | en/zh | - | well-known Chinese model. |
Databricks | Dolly | en | GPT-J 6B | use Alpaca data to fine-tune a 2-year-old model: GPT-J, which exhibits surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. |
@tloen | Alpaca-LoRA | en | LLaMA-7B | trained within hours on a single RTX 4090, reproducing the Stanford Alpaca results using low-rank adaptation (LoRA), and can run on a Raspberry Pi (see the LoRA sketch after this table). |
ColossalAI | Coati7B | en/zh | LLaMA-7B | a large language model developed by the ColossalChat project |
Shanghai AI Lab | LLaMA-Adapter | en | LLaMA-7B | Fine-tuning LLaMA to follow instructions within 1 Hour and 1.2M Parameters |
AetherCortex | Llama-X | en | LLaMA | Open Academic Research on Improving LLaMA to SOTA LLM. |
TogetherComputer | OpenChatKit | en | GPT-NeoX-20B | OpenChatKit provides a powerful, open-source base to create both specialized and general purpose chatbots for various applications. The kit includes an instruction-tuned language models, a moderation model, and an extensible retrieval system for including up-to-date responses from custom repositories. |
nomic-ai | GPT4All | en | LLaMA | trained on a massive collection of clean assistant data including code, stories and dialogue |
@ymcui | Chinese-LLaMA-Alpaca | en/zh | LLaMA-7B/13B | expand the Chinese vocabulary based on the original LLaMA and use Chinese data for secondary pre-training, further enhancing Chinese basic semantic understanding. Additionally, the project uses Chinese instruction data for fine-tuning on the basis of the Chinese LLaMA, significantly improving the model's understanding and execution of instructions. |
UC Berkeley/Stanford/CMU | Vicuna | en | LLaMA-13B | Impressing GPT-4 with 90% ChatGPT quality. |
UCSD/SYSU | baize | en/zh | LLaMA | fine-tuned with LoRA. It uses 100k dialogs generated by letting ChatGPT chat with itself. Alpaca's data is also used to improve its performance. |
UC Berkeley | Koala | en | LLaMA | Rather than maximizing quantity by scraping as much web data as possible, the team focused on collecting a small high-quality dataset. |
@imClumsyPanda | langchain-ChatGLM | en/zh | ChatGLM-6B | local knowledge based ChatGLM with langchain. |
@yangjianxin1 | Firefly | zh | bloom-1b4-zh/bloom-2b6-zh | Instruction tuning on Chinese datasets. Vocabulary pruning, ZeRO, and tensor parallelism are used to effectively reduce memory consumption and improve training efficiency. |
microsoft | GPT-4-LLM | en/zh | LLaMA | aims to share data generated by GPT-4 for building instruction-following LLMs with supervised learning and reinforcement learning. |
Hugging Face | StackLLaMA | en | LLaMA | trained on StackExchange data; the main goal is to serve as a tutorial and walkthrough on how to train a model with RLHF, rather than primarily model performance. |
Nebuly | ChatLLaMA | en | - | a library that allows you to create hyper-personalized ChatGPT-like assistants using your own data and the least amount of compute possible. |
@juncongmoo | ChatLLaMA | en | LLaMA | LLaMA-based RLHF model, runnable in a single GPU. |
@juncongmoo | minichatgpt | en | GPT/OPT ... | To Train ChatGPT In 5 Minutes with ColossalAI. |
@LC1332 | Luotuo-Chinese-LLM | zh | LLaMA/ChatGLM | Instruction fine-tuned Chinese Language Models, with colab provided! |
@Facico | Chinese-Vicuna | zh | LLaMA | A Chinese Instruction-following LLaMA-based Model, fine-tuned with Lora, cpp inference supported, colab provided. |
@yanqiangmiffy | InstructGLM | en/zh | ChatGLM-6B | ChatGLM based instruction-following model, fine-tuned on a variety of data sources, supports deepspeed accelerating and LoRA. |
alibaba | Wombat | en | LLaMA | a novel learning paradigm called RRHF, as an alternative to RLHF, is proposed, which scores responses generated by different sampling policies and learns to align them with human preferences through ranking loss. The performance is comparable to RLHF, with fewer models used in the process. |
@WuJunde | alpaca-glassoff | en | LLaMA | a mini chat AI that accepts images and can run on your own laptop, based on stanford-alpaca and alpaca-lora. |
@JosephusCheung | Guanaco | multi | LLaMA-7B | A Multilingual Instruction-Following Language Model. |
@FreedomIntelligence | LLM Zoo | multi | BLOOMZ/LLaMA | a project that provides data, models, and evaluation benchmark for large language models. model released: Phoenix, Chimera |
SZU | Linly | en/zh | LLaMA | expands the Chinese vocabulary, fully fine-tuned models, largest LLaMA-based Chinese models, aggregation of Chinese instruction data, reproducible details. |
@lamini-ai | lamini | multi | - | data generator for generating instructions to train instruction-following LLMs. |
Stability-AI | StableVicuna | en | LLaMA | a further instruction fine tuned and RLHF trained version of Vicuna v0 13b, with better performance than Vicuna. |
Hugging Face | HuggingChat | en | LLaMA | seems to be the first one available to access as a platform that appears similar to ChatGPT. |
microsoft | WizardLM | en | LLaMA | trained with 70k evolved instructions. Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, to improve the performance of LLMs. |
FDU | OpenChineseLLaMA | en/zh | LLaMA-7B | further pretrains LLaMA on Chinese data, improving LLaMA's performance on Chinese tasks. |
@chenfeng357 | open-Chinese-ChatLLaMA | en/zh | LLaMA | The complete training code of the open-source Chinese-Llama model, including the full process from pre-training to instruction tuning and RLHF. |
@FSoft-AI4Code | CodeCapybara | en | LLaMA | Open Source LLaMA Model that Follow Instruction-Tuning for Code Generation. |
@mbzuai-nlp | LaMini-LM | en | LLaMA/Flan-T5 ... | A Diverse Herd of Distilled Models from Large-Scale Instructions. |
NTU | Panda | en/zh | LLaMA | further pretraining on Chinese data, full-size of LLaMA models. |
IBM/CMU/MIT | Dromedary | en | LLaMA-65B | Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. |
@melodysdreamj | WizardVicunaLM | multi | Vicuna | Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method, achieving approximately 7% performance improvement over Vicuna. |
sambanovasystems | BLOOMChat | multi | BLOOM | BLOOMChat is a 176 billion parameter multilingual chat model. It is instruction tuned from BLOOM (176B) on assistant-style conversation datasets and supports conversation, question answering and generative answers in multiple languages. |
TII | Falcon-7B-Instruct | en | Falcon-7B | a 7B parameters causal decoder-only model built by TII based on Falcon-7B and finetuned on a mixture of chat/instruct datasets. |
TII | Falcon-40B-Instruct | multi | Falcon-40B | a 40B parameters causal decoder-only model built by TII based on Falcon-40B and finetuned on a mixture of Baize. |
USTC, etc. | ExpertLLaMA | en | LLaMA | use In-Context Learning to automatically write customized expert identity and find the quality quite satisfying. We then prepend corresponding expert identity to each instruction to produce augmented instruction-following data. We refer to the overall framework as ExpertPrompting, find more details in our paper. |
ZJU | CaMA | en/zh | LLaMA | further pretrained on Chinese corpus without expansion of the vocabulary; optimized for Information Extraction (IE) tasks. A pre-training script is available, which includes transformation, construction, and loading of large-scale corpora, as well as the LoRA instruction fine-tuning script. |
THU | UltraChat | en | LLaMA | First, the UltraChat dataset provides a rich resource for the training of chatbots. Second, by fine-tuning the LLaMA model, the researchers successfully created a dialogue model UltraLLaMA with superior performance. |
RUC | YuLan-Chat | en/zh | LLaMA | developed based on fine-tuning LLaMA with high-quality English and Chinese instructions. |
AI2 | Tülu | en | LLaMA/Pythia/OPT | a suite of LLaMa models fully-finetuned on a strong mix of datasets. |
KAIST | SelFee | en | LLaMA | Iterative Self-Revising LLM Empowered by Self-Feedback Generation. |
@lyogavin | Anima | en/zh | LLaMA | trained based on QLoRA's 33B Guanaco, finetuned for 10000 steps. |
THU | ChatGLM2-6B | en/zh | - | ChatGLM2-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B. It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the following new features: stronger performance, longer context, more efficient inference, and a more open license. |
OpenChat | OpenChat | en | LLaMA, etc. | a series of open-source language models fine-tuned on a small, yet diverse and high-quality dataset of multi-round conversations. Specifically, we utilize only ~6K GPT-4 conversations directly filtered from the ~90K ShareGPT conversations. Despite the small size of the dataset, OpenLLMs has demonstrated remarkable performance. |
CAS | BayLing | multi | LLaMA | BayLing is an English/Chinese LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction. |
stabilityai | FreeWilly/FreeWilly2 | en | LLaMA/LLaMA2 | FreeWilly is a LLaMA-65B model fine-tuned on an Orca-style dataset. FreeWilly2 is a LLaMA2-70B model fine-tuned on an Orca-style dataset. FreeWilly2 outperforms Llama2 70B on the Hugging Face Open LLM Leaderboard. |
alibaba | Qwen-7B | en/zh | - | 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. |
ZJU | KnowLM | en/zh | LLaMA | With the rapid development of deep learning technology, large language models such as ChatGPT have made substantial strides in the realm of natural language processing. However, these expansive models still encounter several challenges in acquiring and comprehending knowledge, including the difficulty of updating knowledge and potential knowledge discrepancies and biases, collectively known as knowledge fallacies. The KnowLM project endeavors to tackle these issues by launching an open-source large-scale knowledgeable language model framework and releasing corresponding models. |
NEU | TechGPT | en/zh | LLaMA | TechGPT mainly strengthens the following three types of tasks: various information extraction tasks such as relation triplet extraction, with "knowledge graph construction" as the core; various intelligent question-answering tasks centered on "reading comprehension"; and various sequence generation tasks such as keyword generation, with "text understanding" as the core. |
@MiuLab | Taiwan-LLaMa | en/zh | LLaMA2 | Traditional Chinese LLMs for Taiwan. |
Xwin-LM | Xwin-LM | en | LLaMA2 | Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), reject sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked TOP-1 on AlpacaEval. Notably, it's the first to surpass GPT-4 on this benchmark. |
wenge-research | YaYi | en/zh | LLaMA/LLaMA2 | YaYi was fine-tuned on millions of artificially constructed high-quality domain data. This training data covers five key domains: media publicity, public opinion analysis, public safety, financial risk control, and urban governance, encompassing over a hundred natural language instruction tasks. |
HuggingFace | zephyr | en | Mistral | Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, a fine-tuned version of mistralai/Mistral-7B-v0.1 that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). |
Cohere | Command-R / Command R+ | multi | - | Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities. |
XAI | grok | en | - | 314B MoE; context length: 8192 |
databricks | dbrx-instruct | - | - | a fine-grained mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. |
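As a concrete illustration of the LoRA recipe that Alpaca-LoRA and many of the models above rely on, here is a minimal sketch using Hugging Face transformers + peft. The checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the exact settings of any project listed here.

```python
# Minimal LoRA fine-tuning setup (illustrative hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # assumption: any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, Alpaca-LoRA-style
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapters are trainable

# From here, train with the usual Trainer/accelerate loop on instruction data.
```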
contributor | model/method | approach | main feature |
---|---|---|---|
FuseAI | FuseChat | First, it undertakes pairwise knowledge fusion for source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, where a novel method, VaRM, determines the merging weights based on the variation ratio of parameter matrices before and after fine-tuning. | a fusion of three prominent chat LLMs with diverse architectures and scales, namely NH2-Mixtral-8x7B, NH2-Solar-10.7B, and OpenChat-3.5-7B. FuseChat-7B-VaRM achieves an average performance of 8.22 on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like Starling-7B and Yi-34B-Chat, even surpassing GPT-3.5 (March) and Claude-2.1, and approaching Mixtral-8x7B-Instruct. |
arcee-ai | mergekit | Tools for merging pretrained large language models (see the linear-merge sketch after this table). | |
SakanaAI | EvoLLM | Evolutionary Optimization of Model Merging Recipes. |
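For intuition about what tools like mergekit automate, here is a minimal sketch of the simplest merge strategy: a weighted average of parameter tensors from checkpoints that share an identical architecture. This is plain linear merging, not FuseChat's VaRM or mergekit's full feature set, and the checkpoint names are hypothetical.

```python
# Naive linear merge of two same-architecture checkpoints (illustrative only).
import torch
from transformers import AutoModelForCausalLM

def linear_merge(path_a: str, path_b: str, weight_a: float = 0.5):
    model_a = AutoModelForCausalLM.from_pretrained(path_a)
    model_b = AutoModelForCausalLM.from_pretrained(path_b)
    merged = model_a.state_dict()
    other = model_b.state_dict()
    for name, tensor in merged.items():
        # Assumes identical parameter names and shapes in both checkpoints.
        merged[name] = weight_a * tensor + (1.0 - weight_a) * other[name]
    model_a.load_state_dict(merged)
    return model_a

# merged_model = linear_merge("org/model-a", "org/model-b", weight_a=0.6)  # hypothetical names
```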
(maybe successors to the Transformer?)
contributor | method | main feature |
---|---|---|
BlinkDL | RWKV-LM | RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. |
msra | RetNet | simultaneously achieves training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. Then we propose the retention mechanism for sequence modeling, which supports three computation paradigms, i.e., parallel, recurrent, and chunkwise recurrent (a toy recurrent-vs-parallel sketch follows this table). Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost O(1) inference, which improves decoding throughput, latency, and GPU memory without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while recurrently summarizing the chunks. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. These intriguing properties make RetNet a strong successor to the Transformer for large language models. |
stanford | Backpack | A Backpack is a drop-in replacement for a Transformer that provides new tools for interpretability-through-control while still enabling strong language models. Backpacks decompose the predictive meaning of words into components non-contextually, and aggregate them by a weighted sum, allowing for precise, predictable interventions. |
stanford, etc. | Monarch Mixer (M2) | The basic idea is to replace the major elements of a Transformer with Monarch matrices — which are a class of structured matrices that generalize the FFT and are sub-quadratic, hardware-efficient, and expressive. In Monarch Mixer, we use layers built up from Monarch matrices to do both mixing across the sequence (replacing the Attention operation) and mixing across the model dimension (replacing the dense MLP). |
CMU, etc. | Mamba | Mamba is a new state space model architecture showing promising performance on information-dense data such as language modeling, where previous subquadratic models fall short of Transformers. It is based on the line of progress on structured state space models, with an efficient hardware-aware design and implementation in the spirit of FlashAttention. |
TogetherComputer | StripedHyena | StripedHyena is the first alternative model competitive with the best open-source Transformers of similar sizes in short- and long-context evaluations. StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in Hyena blocks, different from traditional decoder-only Transformers. 1. Constant memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters. 2. Low latency, faster decoding and higher throughput than Transformers. 3. Improvement to training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2. 4. Trained on sequences of up to 32k, allowing it to process longer prompts. |
microsoft | bGPT | bGPT supports generative modelling via next byte prediction on any type of data and can perform any task executable on a computer, showcasing the capability to simulate all activities within the digital world, with its potential only limited by computational resources and our imagination. |
DeepMind | Griffin-Jax | Jax + Flax implementation of Griffin (Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models); not the official code (official code is not released yet). The RG-LRU layer, a novel gated linear recurrent layer, is the building block of a new recurrent block that replaces MQA. Two new models are built from this recurrent block: Hawk, which interleaves MLPs with recurrent blocks, and Griffin, a hybrid model which interleaves MLPs with a mixture of recurrent blocks and local attention. Griffin-3B outperforms Mamba-3B, and Griffin-7B and Griffin-14B achieve performance competitive with Llama-2, despite being trained on nearly 7 times fewer tokens; Griffin can extrapolate to sequences significantly longer than those seen during training. |
AI21 | Jamba | Jamba is the first production-scale Mamba implementation. It’s a pretrained, mixture-of-experts (MoE) generative text model, with 12B active parameters and a total of 52B parameters across all experts. It supports a 256K context length, and can fit up to 140K tokens on a single 80GB GPU. |
Meta | Megalodon | Megalodon inherits the architecture of Mega (exponential moving average with gated attention), and further introduces multiple technical components to improve its capability and stability, including complex exponential moving average (CEMA), timestep normalization layer, normalized attention mechanism and pre-norm with two-hop residual configuration. In a controlled head-to-head comparison with Llama2, Megalodon achieves better efficiency than Transformer in the scale of 7 billion parameters and 2 trillion training tokens. |
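Several of the architectures above (RWKV, RetNet, Mamba, Griffin) build on linear recurrences that can be evaluated either recurrently (O(1) state per step, cheap decoding) or in parallel across the whole sequence during training. The toy scalar decay recurrence below is unrelated to any specific model's exact formulation; it only shows why the two views give the same result.

```python
# Toy linear recurrence h_t = gamma * h_{t-1} + x_t, evaluated two ways.
import numpy as np

gamma, T = 0.9, 8
x = np.random.randn(T)

# Recurrent form: O(1) state, as decoding would proceed token by token.
h_rec = np.zeros(T)
state = 0.0
for t in range(T):
    state = gamma * state + x[t]
    h_rec[t] = state

# Parallel form: h_t = sum_{m<=t} gamma^(t-m) * x_m, computable for all t at once.
idx = np.arange(T)
decay = np.tril(gamma ** (idx[:, None] - idx[None, :]))  # lower-triangular decay matrix
h_par = decay @ x

assert np.allclose(h_rec, h_par)  # both paradigms produce identical outputs
```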
contributor | model/project | main feature |
---|---|---|
mistralai | Mixtral-8x7B | The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested. |
Shanghai AI Lab, etc. | LLaMA-MoE | A small and affordable MoE model based on LLaMA and SlimPajama. The number of activated model parameters is only 3.0~3.5B, which is friendly for deployment and research usage. |
NUS, etc. | OpenMoE | A family of open-sourced Mixture-of-Experts (MoE) Large Language Models. |
Snowflake | Arctic | Arctic uses a unique Dense-MoE Hybrid transformer architecture. It combines a 10B dense transformer model with a residual 128x3.66B MoE MLP, resulting in 480B total and 17B active parameters chosen using top-2 gating (a minimal top-2 gating sketch follows this table). |
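To make the "top-2 gating" mentioned for Mixtral, DBRX, and Arctic concrete, here is a minimal sparse MoE feed-forward layer sketch in PyTorch. Expert count, sizes, and the renormalization choice are illustrative assumptions, not any model's actual implementation.

```python
# Minimal top-k (k=2) gated mixture-of-experts MLP (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.router(x)                # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):             # only k experts run per token
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                w = weights[mask, slot].unsqueeze(-1)
                out[mask] += w * self.experts[e](x[mask])
        return out

y = TopKMoE()(torch.randn(4, 64))  # usage: 4 tokens, each routed to 2 of 8 experts
```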
contributor | project | language | base model | main feature |
---|---|---|---|---|
BaihaiAI | IDPChat | en/zh | LLaMA-13B/Stable Diffusion | Open Chinese multi-modal model, single-GPU runnable, easy to deploy, UI provided. |
KAUST | MiniGPT-4 | en/zh | LLaMA | MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer, and yields many emerging vision-language capabilities similar to those demonstrated in GPT-4 (a minimal projection-layer sketch follows this table). |
MSR, etc. | LLaVA | en | LLaMA | visual instruction tuning is proposed, towards building large language and vision models with GPT-4 level capabilities. |
NUS/THU | VPGTrans | en | LLaMA/OPT/Flan-T5/BLIP-2 ... | transferring VPG across LLMs to build VL-LLMs at significantly lower cost. The GPU hours can be reduced over 10 times and the training data can be reduced to around 10%. Two novel VL-LLMs are released via VPGTrans, including VL-LLaMA and VL-Vicuna. VL-LLaMA is a multimodal version of LLaMA built by transferring the BLIP-2 OPT-6.7B to LLaMA via VPGTrans. VL-Vicuna is a GPT-4-like multimodal chatbot, based on the Vicuna LLM. |
CAS, etc | X-LLM | en/zh | ChatGLM-6B | X-LLM converts multi-modalities (images, speech, videos) into foreign languages using X2L interfaces and feed them into a large Language Model (ChatGLM) to accomplish a Multimodal LLM, achieving impressive multimodal chat capabilities. |
NTU | Otter | en | OpenFlamingo | a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following ability and in-context learning. Furthermore, it optimizes OpenFlamingo's implementation, democratizing the required training resources from 1x A100 GPU to 4x RTX-3090 GPUs. |
XMU | LaVIN | en | LLaMA | proposes a novel and affordable solution for vision-language instruction tuning, namely Mixture-of-Modality Adaptation (MMA). Particularly, MMA is an end-to-end optimization regime which connects the image encoder and LLM via lightweight adapters. Meanwhile, a novel routing algorithm in MMA helps the model automatically shift the reasoning paths for single- and multi-modal instructions. |
USTC | Woodpecker | - | - | the first work to correct hallucination in multimodal large language models. |
hpcaitech | Open-Sora | - | - | open-source alternative to OpenAI's Sora. |
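MiniGPT-4 and LLaVA-style models connect a frozen vision encoder to a frozen (or lightly tuned) LLM through a small projection. The sketch below shows only that bridging idea, with made-up dimensions; it is not either project's actual code.

```python
# Minimal vision-to-LLM bridge: project frozen image features into the LLM's
# token-embedding space and prepend them to the text embeddings (illustrative).
import torch
import torch.nn as nn

vision_dim, llm_dim = 1408, 4096            # assumed feature sizes
projector = nn.Linear(vision_dim, llm_dim)  # the only trainable piece in MiniGPT-4-style setups

image_feats = torch.randn(1, 32, vision_dim)  # frozen vision encoder output (e.g. 32 patch tokens)
text_embeds = torch.randn(1, 16, llm_dim)     # embeddings of the text prompt from the frozen LLM

visual_tokens = projector(image_feats)         # (1, 32, llm_dim)
inputs_embeds = torch.cat([visual_tokens, text_embeds], dim=1)
# inputs_embeds would then be fed to the LLM via its `inputs_embeds` argument.
```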
see also: awesome-Multimodal-Large-Language-Models
contributor | data/project | language | main feature |
---|---|---|---|
TogetherComputer | RedPajama-Data | en | An Open Source Recipe to Reproduce LLaMA training dataset. |
@goldsmith | Wikipedia | multi | A Pythonic wrapper for the Wikipedia API (a usage sketch follows this table). |
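A small usage sketch for the @goldsmith Wikipedia wrapper listed above, e.g. for collecting raw text. The page query is an arbitrary example, and the exact behavior (disambiguation handling, available fields) may vary by package version.

```python
# Fetching article text with the `wikipedia` package (pip install wikipedia).
import wikipedia

wikipedia.set_lang("en")
titles = wikipedia.search("large language model", results=3)
page = wikipedia.page(titles[0])
print(page.title)
print(wikipedia.summary(titles[0], sentences=2))
# page.content holds the full plain-text article for corpus building.
```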
see Alpaca-CoT data collection
contributor | data | language | main feature |
---|---|---|---|
salesforce | DialogStudio | en | DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI. |
contributor | method | main feature |
---|---|---|
UW, etc. | self-instruct | using the model's own generations to create a large collection of instructional data (a toy generation loop is sketched after this table). |
@LiuHC0428 | Reliable-Self-Instruction | use ChatGPT to generate some questions and answers based on a given text. |
PKU | Evol-Instruct | a novel method, proposed in WizardLM, that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, to improve the performance of LLMs. |
KAUST, etc. | CAMEL | a novel communicative agent framework named role-playing is proposed, which involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. Role-playing can be used to generate conversational data in a specific task/domain. |
@chatarena | ChatArena | a library that provides multi-agent language game environments and facilitates research about autonomous LLM agents and their social interactions. it provides a flexible framework to define multiple players, environments and the interactions between them, based on Markov Decision Process. |
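To make the self-instruct family of methods above concrete, here is a toy generation loop. `call_llm` is a hypothetical placeholder for a real model or API call, and the prompt is heavily simplified compared with the actual self-instruct pipeline (which adds filtering, deduplication, and task-type classification).

```python
# Toy self-instruct-style loop (schematic, not the original pipeline).
import json
import random

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real model/API call here.
    return '{"instruction": "List three uses of synthetic instruction data.", "output": "..."}'

seed_tasks = [
    {"instruction": "Summarize the following paragraph.", "output": "..."},
    {"instruction": "Translate this sentence into French.", "output": "..."},
]

generated = []
for _ in range(3):  # a real pipeline runs many rounds with filtering and deduplication
    pool = seed_tasks + generated
    examples = random.sample(pool, k=min(2, len(pool)))
    prompt = (
        "Here are some task instructions:\n"
        + "\n".join(f"- {e['instruction']}" for e in examples)
        + '\nWrite one new, different task instruction and an example answer as JSON '
          'with keys "instruction" and "output".'
    )
    generated.append(json.loads(call_llm(prompt)))

print(f"collected {len(generated)} synthetic examples")
```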
contributor | method | main feature |
---|---|---|
- | human evaluation | - |
OpenAI | GPT-4/ChatGPT | - |
PKU/CMU/MSRA ... | PandaLM | Reproducible and Automated Language Model Assessment. |
UCB | Chatbot Arena | Chat with two anonymous models side-by-side and vote for which one is better, then use the Elo rating system to calculate the relative performance of the models (a worked Elo update appears after this table). |
Stanford | AlpacaEval | GPT-4/Claude evaluation on the AlpacaFarm dataset. |
clueai | SuperCLUElyb | Chinese version of Chatbot Arena developed by clueai. |
SJTU, etc. | Auto-J | a new open-source generative judge that can effectively evaluate different LLMs on how they align to human preference. |
CMU | CodeBERTScore | an automatic metric for code generation, based on BERTScore. Like BERTScore, CodeBERTScore leverages the pre-trained contextual embeddings from a model such as CodeBERT and matches words in candidate and reference sentences by cosine similarity. Differently from BERTScore, CodeBERTScore also encodes natural language input or other context along with the generated code, but does not use that context to compute cosine similarities. |
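Chatbot Arena's leaderboard (above) turns pairwise votes into Elo-style ratings. The snippet below shows the basic Elo update rule on a single hypothetical vote; the production Arena pipeline uses a more elaborate statistical fit (e.g. Bradley-Terry), so treat this as illustration only.

```python
# One Elo update for a single pairwise vote (illustrative constants).
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """score_a = 1 if model A wins, 0 if it loses, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

print(elo_update(1000.0, 1000.0, score_a=1.0))  # winner gains ~16 points, loser drops ~16
```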
The current state of large-model evaluation in China
contributor | benchmark | main feature |
---|---|---|
princeton | SWE-bench | a benchmark for evaluating large language models on real-world software issues collected from GitHub. Given a codebase and an issue, a language model is tasked with generating a patch that resolves the described problem. |
microsoft | AGIEval | a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. |
clueai | SuperCLUE-Agent | Agent evaluation benchmark based on Chinese native tasks. |
bytedance | GPT-Fathom | GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as OpenAI's earlier models on 20+ curated benchmarks under aligned settings. |
opencompass, huggingface
contributor | project | main feature |
---|---|---|
CAS | Alpaca-CoT | extends CoT data to Alpaca to boost its reasoning ability; aims at building an instruction fine-tuning (IFT) platform with extensive instruction collection (especially the CoT datasets) and a unified interface for various large language models. |
@hiyouga | ChatGLM-Efficient-Tuning | efficient fine-tuning ChatGLM-6B with PEFT. |
@hiyouga | LLaMA-Efficient-Tuning | Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA); see the QLoRA loading sketch after this table. |
@jianzhnie | Efficient-Tuning-LLMs | Efficient Finetuning of QLoRA LLMs. |
ColossalAI | ColossalChat | An open-source low-cost solution for cloning ChatGPT with a complete RLHF pipeline. |
microsoft | deepspeed-chat | Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. |
LAION-AI | Open Assistant | a project meant to give everyone access to a great chat based large language model. |
HKUST | LMFlow | an extensible, convenient, and efficient toolbox for finetuning large machine learning models, designed to be user-friendly, speedy and reliable, and accessible to the entire community. |
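Several of the toolkits above build on QLoRA, i.e. LoRA adapters on top of a 4-bit-quantized base model. Below is a minimal loading sketch with transformers + bitsandbytes + peft; the checkpoint name and hyperparameters are illustrative assumptions, not any toolkit's exact defaults.

```python
# Minimal QLoRA-style setup: 4-bit base model + LoRA adapters (illustrative).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",            # assumption: any causal LM checkpoint
    quantization_config=bnb_cfg,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
lora_cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # only the adapters train; the 4-bit base stays frozen
```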