
A bilingual instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/baichuan-7B

Please follow the baichuan-7B License to use this model.

Usage:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True).cuda()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {}\nAssistant: "
)

inputs = tokenizer([template.format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
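For clarity, the prompt construction used above can be factored into a small standalone helper. This is only an illustrative sketch; `build_prompt` is a hypothetical name and not part of the model's API.

```python
def build_prompt(query: str) -> str:
    """Wrap a user query in the chat template expected by this model."""
    # Fixed system preamble followed by a single Human/Assistant turn,
    # matching the template string in the usage example above.
    template = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
        "Human: {}\nAssistant: "
    )
    return template.format(query)

# The model is expected to continue generating after the trailing "Assistant: ".
prompt = build_prompt("晚上睡不着怎么办")
```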

Alternatively, you can launch a CLI demo using the script in https://github.com/hiyouga/LLaMA-Factory

python src/cli_demo.py --template default --model_name_or_path hiyouga/baichuan-7b-sft

You can reproduce our results with the following script using LLaMA-Factory:

CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path baichuan-inc/baichuan-7B \
    --do_train \
    --dataset alpaca_gpt4_en,alpaca_gpt4_zh,codealpaca \
    --template default \
    --finetuning_type lora \
    --lora_rank 16 \
    --lora_target all \
    --output_dir baichuan_lora \
    --overwrite_cache \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 8 \
    --preprocessing_num_workers 16 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --learning_rate 5e-5 \
    --max_grad_norm 0.5 \
    --num_train_epochs 2.0 \
    --val_size 0.01 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --plot_loss \
    --fp16
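The command above trains with an effective batch size of 64 (8 per device times 8 gradient-accumulation steps) and a cosine learning-rate schedule decaying from `--learning_rate 5e-5`. A minimal sketch of plain cosine decay, assuming no warmup (the scheduler LLaMA-Factory actually builds may differ):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    """Cosine-decayed learning rate: base_lr at step 0, approaching 0 at total_steps."""
    progress = step / total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

Halfway through training the rate has decayed to half of `base_lr`, and it reaches (nearly) zero at the final step.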

Loss curve on the training set: [train loss plot]

Loss curve on the evaluation set: [eval loss plot]


