How much memory is needed for the GPU to run FinGPT-Forecaster? #149

Unanswered
jiahuiLeee asked this question in Q&A

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in float16, letting accelerate place layers automatically
base_model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-chat-hf',
    token=access_token,
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.float16,
    offload_folder="offload/"
)
# Attach the FinGPT-Forecaster LoRA adapter
model = PeftModel.from_pretrained(
    base_model,
    'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora',
    offload_folder="offload/"
)
model = model.eval()

When I run `python FinGPT/fingpt/FinGPT_Forecaster/app.py`, I get this error:
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.68it/s]
/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/transformers/utils/hub.py:373: FutureWarning: The use_auth_token argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
WARNING:root:Some parameters are on the meta device device because they were offloaded to the disk and cpu.
Traceback (most recent call last):
File "/home/ljh/Fin4LLM/FinGPT/fingpt/FinGPT_Forecaster/app.py", line 31, in
model = PeftModel.from_pretrained(
File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/peft/peft_model.py", line 278, in from_pretrained
model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/peft/peft_model.py", line 587, in load_adapter
dispatch_model(
File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/accelerate/big_modeling.py", line 378, in dispatch_model
offload_state_dict(offload_dir, disk_state_dict)
File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/accelerate/utils/offload.py", line 98, in offload_state_dict
index = offload_weight(parameter, name, save_dir, index=index)
File "/home/ljh/.conda/envs/FinGPT_Forecaster_py310/lib/python3.10/site-packages/accelerate/utils/offload.py", line 32, in offload_weight
array = weight.cpu().numpy()
NotImplementedError: Cannot copy out of meta tensor; no data!
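The warning above ("Some parameters ... were offloaded to the disk and cpu") indicates the GPU could not hold the full model, so `device_map="auto"` spilled weights to CPU/disk as meta tensors, which `PeftModel.from_pretrained` then fails to re-dispatch. As a rough back-of-envelope estimate (a sketch, not a measured figure; activations, the KV cache, and the LoRA adapter add further overhead on top):

```python
def fp16_weight_gib(n_params: float) -> float:
    """GiB needed just to hold the weights at 2 bytes per parameter (float16)."""
    return n_params * 2 / 1024**3

# A 7B-parameter model such as Llama-2-7b-chat:
weights_gib = fp16_weight_gib(7e9)
print(f"fp16 weights alone: {weights_gib:.1f} GiB")  # ~13 GiB
```

So a GPU with roughly 16 GB of VRAM or more would typically be needed to keep the whole fp16 model on-device and avoid the offload path that triggers this error.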

Replies: 0 comments

Category: Q&A
Labels: None yet
1 participant: @jiahuiLeee
