# Alpaca-LoRA-RLHF-PyTorch

Version 1.0.0

A complete pipeline for fine-tuning the Alpaca LLM with LoRA and RLHF on consumer hardware, tested on a budget GPU (2080Ti, 12G).
## Environment

- torch==2.0.0
- cuda==11.8
## Step 1: Supervised fine-tuning

Before training, check `src/peft/utils/save_and_load.py` and comment out line 52 so that it reads:

```python
# to_return = {k: v for k, v in to_return.items() if (("lora_" in k and adapter_name in k) or ("bias" in k))}
```

Then run:

```bash
python supervised_finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --output_dir './lora-alpaca' \
    --num_epochs 1
```
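For orientation, here is a condensed sketch of what this step does: LoRA adapters on a frozen LLaMA base, trained with the plain causal-LM objective. The prompt template and hyperparameters below are illustrative assumptions, not the exact contents of `supervised_finetune.py`:

```python
# A condensed, illustrative sketch of LoRA supervised fine-tuning
# (not the exact contents of supervised_finetune.py; the prompt template
# and hyperparameters below are assumptions).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (DataCollatorForLanguageModeling, LlamaForCausalLM,
                          LlamaTokenizer, Trainer, TrainingArguments)

base_model = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
tokenizer.pad_token_id = 0  # LLaMA ships without a pad token

model = LlamaForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto")

# Freeze the base weights and train only low-rank adapters on the
# attention projections.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none", task_type="CAUSAL_LM"))

def tokenize(example):
    # Assumes the alpaca-cleaned schema: instruction / input / output.
    prompt = f"{example['instruction']}\n{example['input']}\n{example['output']}"
    return tokenizer(prompt, truncation=True, max_length=256)

train_data = load_dataset("yahma/alpaca-cleaned")["train"].map(tokenize)

trainer = Trainer(
    model=model,
    train_dataset=train_data,
    args=TrainingArguments(
        output_dir="./lora-alpaca",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=32,
        num_train_epochs=1,
        learning_rate=3e-4,
        fp16=True,
        logging_steps=10),
    # mlm=False gives the plain causal-LM objective (labels = inputs).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
model.save_pretrained("./lora-alpaca")  # writes only the adapter weights
```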
## Step 2: Merge the PEFT adapter

Pin peft to 0.2.0 first:

```bash
pip uninstall peft -y
pip install peft==0.2.0  # 0.3.0.dev0 raises many errors
```
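To confirm the pin took effect, check the version that Python actually imports:

```python
# Verify that the pinned peft version is the one being loaded.
import peft
print(peft.__version__)  # expect 0.2.0
```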
Then merge the adapter produced in Step 1 into the base model:

```bash
python merge_peft_adapter.py --model_name ./lora-alpaca
```
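Conceptually, the merge folds the low-rank update back into the dense weights. A minimal sketch, assuming a peft build that exposes `merge_and_unload()` (the repo's `merge_peft_adapter.py` may differ in detail):

```python
# A minimal sketch of what the adapter merge does: fold W + (B @ A) into a
# single dense weight and drop the LoRA modules. Assumes a peft build that
# exposes merge_and_unload(); merge_peft_adapter.py may differ in detail.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "./lora-alpaca")

model = model.merge_and_unload()  # now a plain LlamaForCausalLM again
model.save_pretrained("./lora-alpaca-adapter-merged")
```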
## Step 3: Train the reward model

```bash
python train_reward_model.py \
    --model_name 'decapoda-research/llama-7b-hf' \
    --gradient_accumulation_steps 32 \
    --per_device_train_batch_size 1 \
    --train_subset 100 \
    --eval_subset 10 \
    --local_rank 0 \
    --bf16 False
```
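The reward model is a LLaMA backbone with a scalar head trained on preference pairs. A minimal sketch of the pairwise objective, assuming chosen/rejected comparison data (the field names and setup here are assumptions, not necessarily those used by `train_reward_model.py`):

```python
# An illustrative sketch of the pairwise reward-model objective.
# The chosen/rejected field names and setup are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, LlamaTokenizer

model_name = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
tokenizer.pad_token_id = 0

# A single scalar output acts as the reward head.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
model.config.pad_token_id = 0  # needed for batched classification with LLaMA

def pairwise_loss(chosen_texts, rejected_texts):
    """Score both answers and push r(chosen) above r(rejected)."""
    chosen = tokenizer(chosen_texts, padding=True, truncation=True, return_tensors="pt")
    rejected = tokenizer(rejected_texts, padding=True, truncation=True, return_tensors="pt")
    r_chosen = model(**chosen).logits      # shape (batch, 1)
    r_rejected = model(**rejected).logits
    # Bradley-Terry style ranking loss: -log sigmoid(r_chosen - r_rejected)
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
```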
Then merge the reward-model adapter the same way as in Step 2:

```bash
python merge_peft_adapter.py --model_name ./lora-alpaca-reward-model
```
## Step 4: Tune the LM with RL (PPO)

```bash
python tuning_lm_with_rl.py \
    --model_name './lora-alpaca-adapter-merged' \
    --reward_model_name './lora-alpaca-reward-model-adapter-merged' \
    --adafactor False \
    --tokenizer_name 'decapoda-research/llama-7b-hf' \
    --save_freq 100 \
    --output_max_length 128 \
    --batch_size 1 \
    --gradient_accumulation_steps 1 \
    --batched_gen True \
    --ppo_epochs 1 \
    --seed 0 \
    --learning_rate 1.4e-5 \
    --early_stopping True \
    --output_dir './checkpoints/tuning_llama_rl'
```
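For reference, a minimal sketch of the PPO loop this script runs, written against the trl `PPOTrainer` API of that era; the prompt, generation settings, and reward plumbing below are illustrative assumptions:

```python
# A minimal, illustrative sketch of a PPO step as in tuning_lm_with_rl.py.
# Paths, the prompt, and hyperparameters are assumptions, not exact values.
import torch
from transformers import LlamaTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "./lora-alpaca-adapter-merged"
config = PPOConfig(model_name=model_name, learning_rate=1.4e-5,
                   batch_size=1, ppo_epochs=1)

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer.pad_token_id = 0

# Policy (with a value head for PPO) plus a frozen reference copy for the KL term.
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# The merged reward model is served as a text-classification pipeline.
reward_pipe = pipeline("sentiment-analysis",
                       model="./lora-alpaca-reward-model-adapter-merged",
                       tokenizer=tokenizer)

query = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").input_ids[0]
gen = ppo_trainer.generate(query, max_new_tokens=128)
response = gen.squeeze()[query.shape[0]:]          # keep only the new tokens
text = tokenizer.decode(torch.cat([query, response]))

# The raw (pre-activation) score of the reward head becomes the PPO reward.
score = reward_pipe(text, function_to_apply="none")[0]["score"]
stats = ppo_trainer.step([query], [response], [torch.tensor(score)])
```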
## Notes

- `value` needs to be set to 0; this requires the latest transformers from GitHub, which fixed this issue only 8 hours before this was written.
- `utils` and `templates` are taken from [alpaca-lora](https://github.com/tloen/alpaca-lora).
- The requirements mainly follow [alpaca-lora](https://github.com/tloen/alpaca-lora) for setting up the environment.
## Donation

If this project helps you reduce development time, you can buy me a cup of coffee :)

- Alipay (支付宝)
- WeChat Pay (微信)
## License

MIT © Kun