Commit 21e611e

Update transformers_gemma3.py

1 parent e102c0f · commit 21e611e

1 file changed: +4 -5 lines changed

llm/transformers_gemma3.py

Lines changed: 4 additions & 5 deletions

@@ -16,12 +16,11 @@ class Gemma3(BaseLM):
     - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size
     """
 
-    def __init__(self, model_name="google/gemma-3-1b-it", temp=0.1, device='cpu',
-                 max_new_tokens=None, api_token=None, use_bf16=False, **kwargs):
-        super(Gemma, self).__init__(name=model_name, support_batching=True, **kwargs)
+    def __init__(self, model_name="google/gemma-3-1b-it", temp=0.1, device='cuda',
+                 max_new_tokens=None, api_token=None, **kwargs):
+        super(Gemma3, self).__init__(name=model_name, support_batching=True, **kwargs)
         self.__device = device
-        self.__model = AutoModelForCausalLM.from_pretrained(
-            model_name, torch_dtype=torch.bfloat16 if use_bf16 else "auto", token=api_token)
+        self.__model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", token=api_token)
         self.__max_new_tokens = max_new_tokens
         self.__model.to(device)
         self.__tokenizer = AutoTokenizer.from_pretrained(model_name, token=api_token)
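For context, a brief usage sketch under the post-commit signature. The class name, parameters, and defaults come from the diff above; the import path, the concrete argument values, and the availability of a CUDA device are assumptions for illustration only.

    from llm.transformers_gemma3 import Gemma3  # assumed import path, based on the changed file name

    # Instantiate with the new defaults: device='cuda' (previously 'cpu') and
    # torch_dtype="auto", which lets transformers pick the checkpoint's native
    # dtype and replaces the removed use_bf16 flag.
    lm = Gemma3(
        model_name="google/gemma-3-1b-it",
        temp=0.1,
        device="cuda",        # requires a CUDA-capable GPU; pass device="cpu" otherwise
        max_new_tokens=256,   # illustrative value
        api_token=None,       # supply a Hugging Face access token for gated checkpoints
    )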

