llama api server
v0.3.5
This project is under active development. Breaking changes may be made at any time.

Llama as a Service! This project builds a REST API server compatible with the OpenAI API, using open-source backends such as Llama/Llama2.

With this project, many common GPT tools/frameworks can work with your own model.

Play with it online by following the instructions in this Colab notebook. Thanks to anybutme for building it!
If you have not already quantized a model with llama.cpp, follow its instructions to prepare the model.
If you have not already quantized a model with pyllama, follow its instructions to prepare the model.
Use the following script to install the package from PyPI and generate the model config file config.yml and the security token file tokens.txt.
pip install llama-api-server
# to run with pyllama
pip install llama-api-server[pyllama]
cat > config.yml << EOF
models:
  completions:
    # completions and chat_completions use same model
    text-ada-002:
      type: llama_cpp
      params:
        path: /absolute/path/to/your/7B/ggml-model-q4_0.bin
    text-davinci-002:
      type: pyllama_quant
      params:
        path: /absolute/path/to/your/pyllama-7B4b.pt
    text-davinci-003:
      type: pyllama
      params:
        ckpt_dir: /absolute/path/to/your/7B/
        tokenizer_path: /absolute/path/to/your/tokenizer.model
  embeddings:
    text-embedding-davinci-002:
      type: pyllama_quant
      params:
        path: /absolute/path/to/your/pyllama-7B4b.pt
      # keep to 1 instance to speed up loading of model
      min_instance: 1
      max_instance: 1
      idle_timeout: 3600
    text-embedding-ada-002:
      type: llama_cpp
      params:
        path: /absolute/path/to/your/7B/ggml-model-q4_0.bin
EOF
echo "SOME_TOKEN" > tokens.txt
# start web server
python -m llama_api_server
# or, to make it reachable across the network
python -m llama_api_server --host=0.0.0.0
export OPENAI_API_KEY=SOME_TOKEN
export OPENAI_API_BASE=http://127.0.0.1:5000/v1
openai api completions.create -e text-ada-002 -p "hello?"
# or using chat
openai api chat_completions.create -e text-ada-002 -g user "hello?"
# or calling embedding
curl -X POST http://127.0.0.1:5000/v1/embeddings -H 'Content-Type: application/json' -d '{"model":"text-embedding-ada-002", "input":"It is good."}' -H "Authorization: Bearer SOME_TOKEN"
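The curl call above can also be issued from Python. Below is a minimal sketch using only the standard library, assuming the server started above is listening on 127.0.0.1:5000 and that SOME_TOKEN is one of the tokens in tokens.txt; `build_request` is a hypothetical helper name for illustration, not part of the project.

```python
import json
import urllib.request

API_BASE = "http://127.0.0.1:5000/v1"
TOKEN = "SOME_TOKEN"  # one of the tokens in tokens.txt


def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build an OpenAI-style authenticated POST request for the local server."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )


# Example use (requires the server to be running):
# with urllib.request.urlopen(build_request(
#         "/embeddings",
#         {"model": "text-embedding-ada-002", "input": "It is good."})) as resp:
#     print(json.load(resp))
```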
Supported parameters:
- temperature, top_p and top_k
- max_tokens
- echo
- stop
- stream
- n
- presence_penalty and frequency_penalty
- logit_bias
- performance parameters like n_batch and n_thread
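As a sketch, the parameters listed above are passed in the JSON body of a completion request, following the OpenAI API shape; the values below are illustrative only, not recommended defaults.

```python
# Illustrative completion request body. Every value is an example; top_k,
# n_batch and n_thread are backend extensions not present in the upstream
# OpenAI API.
payload = {
    "model": "text-ada-002",
    "prompt": "hello?",
    "temperature": 0.8,        # sampling temperature
    "top_p": 0.95,             # nucleus sampling
    "top_k": 40,               # top-k sampling (llama backends)
    "max_tokens": 64,
    "echo": False,             # include the prompt in the completion
    "stop": ["\n"],
    "stream": False,
    "n": 1,                    # number of completions to generate
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "logit_bias": {},
}
```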