Chinese | English
A Koishi bot plugin that provides large language model chat services, with multi-platform model access, extensibility, and multiple output formats.
Project status: iterating steadily toward the official 1.0 release (currently in the Release Candidate stage)
Presets | Plugin Mode & Streaming Output | Image Rendering Output |
---|---|---|
You can install this plugin directly in Koishi and use its basic functions without any additional configuration.
Read this document to learn more.
We currently support the following models/platforms:
Model/Platform | Access Method | Features | Notes |
---|---|---|---|
OpenAI | Local client, official API access | Customizable personality, supports plugin/browsing mode and other chat modes | API access is paid |
Azure OpenAI | Local client, official API access | Customizable personality, supports plugin/browsing mode and other chat modes | API access is paid |
Google Gemini | Local client, official API access | Fast, performance surpasses GPT-3.5 | Requires an account with Gemini access; may be paid |
Claude API | Local client, official API access | Very large context window; outperforms GPT-3.5 in most cases; requires an API key and is paid | Can be expensive; does not support function calling |
Tongyi Qianwen | Local client, official API access | Domestic model from Alibaba, with a free quota | Measured quality slightly better than Zhipu |
Zhipu | Local client, official API access | ChatGLM; new users get a free token quota | Measured quality slightly better than iFLYTEK Spark |
iFLYTEK Spark | Local client, official API access | Domestic model; new users get a free token quota | Measured quality roughly equal to GPT-3.5 |
Wenxin Yiyan | Local client, official API access | Baidu's model family | Measured quality slightly worse than iFLYTEK Spark |
Hunyuan | Local client, official API access | Tencent's large model family | Measured quality better than Wenxin Yiyan |
Ollama | Local client, self-hosted API access | Well-known open-source model collection; supports hybrid CPU/GPU deployment; can run locally | Requires a self-hosted backend API and some configuration |
GPT Free | Local client, official API access | Locally proxies GPT models from other websites; the project configures the sites automatically, no manual registration required | May fail at any time; unstable |
ChatGLM | Local client, self-hosted backend API access | Can be self-hosted locally; effectively free | Requires a self-hosted backend API and some configuration; the model's parameter count is too small, so chat quality is limited |
RWKV | Local client, self-hosted API access | Well-known open-source model; can run locally | Requires a self-hosted backend API and some configuration |
We support giving models web search capabilities:
Starting with version 1.0.0-alpha.10, we use more customizable presets. The new personality presets use YAML as the configuration format.
You can click here to view our default bundled personality file: catgirl.yml
The default preset folder path is the Koishi directory of your currently running plugin plus `/data/chathub/presets`.
All preset files are loaded from this folder, so you can freely add and edit preset files there and then use commands to switch between personality presets.
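As a rough illustration, a preset file might look like the sketch below. The field names (`keywords`, `prompts`) and their layout are assumptions modeled on the bundled `catgirl.yml`, not an authoritative schema; check the bundled preset for the exact format your version expects.

```yaml
# Illustrative sketch of a personality preset -- the exact schema
# may differ between ChatLuna versions; see the bundled catgirl.yml
# for the authoritative format.
keywords:
  - my-preset          # names you can use to switch to this preset
prompts:
  - role: system
    content: |
      You are a helpful assistant. Stay in character
      and answer concisely.
```

After saving a file like this into the presets folder, you would switch to it with the plugin's preset-switching command, using one of the names listed under `keywords`.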
For more information, please check out this document.
Run the following commands in any Koishi template project to clone ChatLuna:
```shell
# yarn
yarn clone ChatLunaLab/chatluna
# npm
npm run clone ChatLunaLab/chatluna
```
You can replace `ChatLunaLab/chatluna-koishi` with your own repository address after forking.
Then edit the `tsconfig.json` file in the root directory of the template project and add the ChatLuna project paths to `compilerOptions.paths`.
```json
{
  "extends": "./tsconfig.base",
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "koishi-plugin-chatluna-*": ["external/chatluna/packages/*/src"]
    }
  }
}
```
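For intuition, TypeScript resolves a wildcard `paths` entry by splicing whatever the `*` matched in the module name into the target pattern. The sketch below demonstrates that substitution rule only (it is not how `tsc` is implemented, and the `openai` module name is purely illustrative):

```typescript
// Sketch of tsconfig-style wildcard path substitution: if the imported
// module name matches the pattern's prefix/suffix, splice the matched
// middle part into the target pattern in place of "*".
function resolvePath(moduleName: string, pattern: string, target: string): string | null {
  const [prefix, suffix = ""] = pattern.split("*");
  if (!moduleName.startsWith(prefix) || !moduleName.endsWith(suffix)) return null;
  const matched = moduleName.slice(prefix.length, moduleName.length - suffix.length);
  return target.replace("*", matched);
}

console.log(resolvePath(
  "koishi-plugin-chatluna-openai",   // illustrative module name
  "koishi-plugin-chatluna-*",
  "external/chatluna/packages/*/src",
));
// → external/chatluna/packages/openai/src
```

So an import of a matching `koishi-plugin-chatluna-*` package is compiled against the cloned sources under `external/chatluna` instead of a published npm package.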
Since the project is complex, you must build it once before first use.
```shell
# yarn
yarn workspace @root/chatluna-koishi build
# npm
npm run build -w @root/chatluna-koishi
```
Done! You can now start the template project with `yarn dev` or `npm run dev` in the root project and do secondary development on ChatLuna.
Although Koishi supports hot module replacement (HMR), this project may not be fully compatible with it.
If you encounter a bug while developing this project with HMR, please report it in an issue, then follow the steps above to rebuild the project and restart Koishi as a workaround.
The ChatLuna team currently has very limited development capacity and cannot pursue the following goals on its own:
Feel free to submit a Pull Request or start a discussion; your contributions are very welcome!
This project was developed by ChatLunaLab.
ChatLuna (hereinafter "this project") is a dialogue bot framework based on large language models. We are committed to working with the open-source community to advance large language model technology. We strongly urge developers and other users to comply with the open-source license and to ensure that the framework and code of this project (and other derivatives promoted by the community based on this project) are not used for any purpose that could harm the country or society, nor for services that have not undergone security assessment and filing.
This project does not directly provide any generative artificial intelligence services; users must obtain the algorithm APIs they use from organizations or individuals that provide generative AI services.
If you use this project, please comply with the laws and regulations of your region and use the generative AI service algorithms available there.
This project is not responsible for the results generated by the algorithms; all results and operations are the user's responsibility.
All information storage related to this project is supplied by the user; the project itself does not provide direct information storage.
This project assumes no risk or liability arising from data security issues, public opinion risks, or any model being misled, abused, spread, or improperly used by users.
This project references other open-source projects; special thanks to the following:
koishi-plugin-openai
node-chatgpt-api
poe-api
Bard
chathub
Thanks to JetBrains for providing this project with a free open-source license for IDEs such as WebStorm.