ChatPilot: Chat Agent WebUI that implements AgentChat dialogue, supports Google search, file/URL dialogue (RAG), and a code interpreter, reproduces Kimi Chat (drag in a file, send a URL), and supports the OpenAI/Azure API.
Official Demo: https://chat.mulanai.com
export OPENAI_API_KEY=sk-xxx
export OPENAI_BASE_URL=https://xxx/v1
docker run -it \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -e OPENAI_BASE_URL=$OPENAI_BASE_URL \
  -e RAG_EMBEDDING_MODEL="text-embedding-ada-002" \
  -p 8080:8080 --name chatpilot-$(date +%Y%m%d%H%M%S) shibing624/chatpilot:0.0.1
You'll find ChatPilot running at http://0.0.0.0:8080. Enjoy!
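As a quick sanity check (assuming the default port mapping above), you can hit the web UI from the host:

curl -I http://localhost:8080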
git clone https://github.com/shibing624/ChatPilot.git
cd ChatPilot
pip install -r requirements.txt
# Copy the required .env file and fill in the LLM API key
cp .env.example .env
bash start.sh
Okay, now your application is running: http://0.0.0.0:8080. Enjoy!
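If you prefer to keep the server running in the background and capture its logs (a minimal sketch; start.sh is the start script shipped with the repo):

nohup bash start.sh > chatpilot.log 2>&1 &
tail -f chatpilot.log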
Two ways to build the front end:
git clone https://github.com/shibing624/ChatPilot.git
cd ChatPilot/
# Build the frontend with Node.js >= 20.10
cd web
npm install
npm run build
Output: the project's web directory produces a build folder containing the compiled front-end files.
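To preview the compiled assets on their own (optional; in normal use the ChatPilot backend serves them), any static file server will do, for example:

cd web
python -m http.server 3000 -d build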
export OPENAI_API_KEY=xxx
export OPENAI_BASE_URL=https://api.openai.com/v1
export MODEL_TYPE="openai"
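To verify the key and base URL before starting ChatPilot (a quick check against the standard OpenAI-compatible /v1/models endpoint):

curl "$OPENAI_BASE_URL/models" -H "Authorization: Bearer $OPENAI_API_KEY"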
export AZURE_OPENAI_API_KEY=
export AZURE_OPENAI_API_VERSION=
export AZURE_OPENAI_ENDPOINT=
export MODEL_TYPE="azure"
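For illustration, a filled-in example with placeholder values (the resource name and API version below are assumptions; use the values from your Azure portal):

export AZURE_OPENAI_API_KEY="your-azure-key"
export AZURE_OPENAI_API_VERSION="2024-02-01"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
export MODEL_TYPE="azure"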
Start the Ollama service with ollama serve, then point OLLAMA_API_URL at it (Ollama listens on port 11434 by default):

export OLLAMA_API_URL=http://localhost:11434
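For example, pull a model and confirm the Ollama API is reachable (llama3 here is just an example model name):

ollama pull llama3
curl http://localhost:11434/api/tags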
Install the litellm package: pip install litellm -U
ChatPilot's default litellm config file is ~/.cache/chatpilot/data/litellm/config.yaml. Modify its content as follows:
model_list:
  # - model_name: moonshot-v1-auto # show model name in the UI
  #   litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
  #     model: openai/moonshot-v1-auto # MODEL NAME sent to `litellm.completion()`
  #     api_base: https://api.moonshot.cn/v1
  #     api_key: sk-xx
  #     rpm: 500 # [OPTIONAL] Rate limit for this deployment: in requests per minute (rpm)
  - model_name: deepseek-ai/DeepSeek-Coder # show model name in the UI
    litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
      model: openai/deepseek-coder # MODEL NAME sent to `litellm.completion()`
      api_base: https://api.deepseek.com/v1
      api_key: sk-xx
      rpm: 500
  - model_name: openai/o1-mini # show model name in the UI
    litellm_params: # all params accepted by litellm.completion() - https://docs.litellm.ai/docs/completion/input
      model: o1-mini # MODEL NAME sent to `litellm.completion()`
      api_base: https://api.61798.cn/v1
      api_key: sk-xxx
      rpm: 500
litellm_settings: # module level litellm settings - https://github.com/BerriAI/litellm/blob/main/litellm/__init__.py
  drop_params: True
  set_verbose: False
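After editing, a quick way to catch YAML syntax errors before restarting ChatPilot (a minimal check; it assumes PyYAML is available, pip install pyyaml if not):

python -c "import os, yaml; yaml.safe_load(open(os.path.expanduser('~/.cache/chatpilot/data/litellm/config.yaml'))); print('config OK')"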
If you use ChatPilot in your research, please cite it in the following format:
APA:
Xu, M. ChatPilot: LLM agent toolkit (Version 0.0.2) [Computer software]. https://github.com/shibing624/ChatPilot
BibTeX:
@misc{ChatPilot,
  author = {Ming Xu},
  title = {ChatPilot: LLM Agent},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/shibing624/ChatPilot}},
}
The license is the Apache License 2.0, which is free for commercial use. Please include a link to ChatPilot and the license in your product description.
The project code is still rough. If you have improvements, you are welcome to submit them back to this project. Before submitting, please pay attention to the following two points:

1. Add corresponding unit tests in tests.
2. Run python -m pytest -v to run all unit tests and ensure that they all pass.

You can then submit a PR.
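To iterate faster during development, you can also run a single test file first (the path below is hypothetical; pick the file that matches your change):

python -m pytest tests/test_chat.py -v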