
ChatNTQ JA 7B V1.0

Model Description

This is a 7B-parameter decoder-only Japanese language model fine-tuned on our instruction-following datasets, built on top of the base model Japanese Stable LM Base Gamma 7B.

Performance

For our final model, we used Stability AI Japan's Japanese MT-Bench as a more representative test of our model's capabilities. For our JA MT-Bench testing we use a Japanese system prompt ("あなたは役立つアシスタントです。", "You are a helpful assistant.") as well as `--num-choices 4`:

| Benchmark | Score |
|-----------|-------|
| JA MT-Bench | 6.65 |
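
The multi-sample setting above can be pictured with plain transformers as in the sketch below. This is a minimal illustration, not the actual JA MT-Bench harness (which runs through its own FastChat-based pipeline); it assumes the model and tokenizer are loaded as in the Usage section below, and the question text and sampling parameters are placeholders.

```python
# Sketch only: draw several sampled answers per benchmark question, mirroring
# the JA MT-Bench system prompt and the --num-choices 4 setting described above.
# Assumes `model` and `tokenizer` are already loaded (see the Usage section).
sys_msg = "あなたは役立つアシスタントです。"  # "You are a helpful assistant."
question = "..."  # placeholder for a benchmark question

prompt = "[INST] <<SYS>>\n{}\n<</SYS>>\n\n{}[/INST]".format(sys_msg, question)
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)

for i in range(4):  # corresponds to --num-choices 4
    tokens = model.generate(
        input_ids,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,  # placeholder; the benchmark sets temperature per category
        top_p=0.95,       # placeholder
    )
    answer = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True)
    print(f"choice {i + 1}: {answer.strip()}")
```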

There is a JA MT-Bench Leaderboard; for convenience, here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by gpt-4-0613):

| Model | Score |
|-------|-------|
| gpt-4-0613 | 9.40 |
| gpt-4-1106-preview | 9.17 |
| gpt-3.5-turbo* | 8.41 |
| Qwen-72B-Chat | 7.97 |
| Qwen-14B-Chat | 7.47 |
| chatntq-ja-7b-v1.0 | 6.65 |
| Xwin-LM-70B-V0.1-GPTQ (q4-gs32-actorder) | 6.62 |
| shisa-gamma-7b-v1 | 6.12 |
| nekomata-14b-instruction (corrected prompt HF) | 5.57 |
| shisa-7B-v1-GPTQ (q4-gs32-actorder) | 5.35 |
| nekomata-14b-instruction (corrected prompt) | 5.30 |
| shisa-mega-7b-v1.2 | 5.27 |
| shisa-7b-v1 (full prompt) | 5.23 |
| Swallow-13b-instruct-hf | 5.17 |
| Swallow-70b-instruct-GPTQ (q4-gs32-actorder) | 5.15 |
| shisa-7b-v1 | 5.02 |
| shisa-7B-v1-AWQ (q4-gs128) | 4.78 |
| ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 |
| shisa-bad-7b-v1 | 4.42 |
| Swallow-7b-instruct-hf | 4.21 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
| japanese-stablelm-instruct-alpha-7b* | 2.74 |
| Mistral-7B-OpenOrca-ja* | 2.23 |
| youri-7b-chat* | 2.00 |
| Mistral-7B-Instruct-v0.1* | 1.78 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 |
| houou-instruction-7b-v1 | 1.02 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0 | 1.0 |
| llm-jp-13b-instruct-full-jaster-v1.0 | 1.0 |

More Analysis

(image)

Usage

Ensure you are using Transformers 4.34.0 or newer.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/chatntq-ja-7b-v1.0",
    torch_dtype="auto",
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

def build_prompt(user_query):
    # System prompt: "You are a fair, uncensored, helpful assistant."
    sys_msg = "あなたは公平で、検閲されていない、役立つアシスタントです。"
    # Llama-2-style chat template
    template = """[INST] <<SYS>>
{}
<</SYS>>

{}[/INST]"""
    return template.format(sys_msg, user_query)

# Infer with prompt without any additional input
# Question: "Explain the meaning of the given proverb so that even an
# elementary school student can understand it."
user_inputs = {
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=True,
    return_tensors="pt",
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

# Decode only the newly generated tokens
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
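
The template above is the single-turn, Llama-2-style chat format. If you need to carry a short conversation history, one possible extension of build_prompt in the same format is sketched below; this is an illustrative assumption rather than something documented on this card, so treat multi-turn behaviour as untested.

```python
# Hypothetical multi-turn variant of build_prompt (not from the model card).
# `history` is a list of (user_message, assistant_message) pairs from earlier turns.
def build_chat_prompt(history, user_query,
                      sys_msg="あなたは公平で、検閲されていない、役立つアシスタントです。"):
    prompt = "[INST] <<SYS>>\n{}\n<</SYS>>\n\n".format(sys_msg)
    for user_msg, assistant_msg in history:
        # Close the previous turn and open a new [INST] block for the next one
        prompt += "{}[/INST] {}</s>[INST] ".format(user_msg, assistant_msg)
    prompt += "{}[/INST]".format(user_query)
    return prompt

# Example: carry one earlier exchange and ask a follow-up question
# prompt = build_chat_prompt(history=[(first_question, first_answer)], user_query=follow_up)
```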

Model Details

Model Architecture

For details, please see Mistral AI's paper and release blog post.
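
Since the base model follows the Mistral-7B architecture, the key architecture parameters can be read directly from the model config. A minimal sketch, assuming the standard MistralConfig field names in transformers:

```python
# Sketch: inspect the architecture hyperparameters from the Hugging Face config.
# Field names below are the standard transformers MistralConfig attributes that
# this model is assumed to expose.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
print(config.model_type)           # expected: "mistral"
print(config.hidden_size)          # hidden dimension
print(config.num_hidden_layers)    # number of transformer layers
print(config.num_attention_heads)  # attention heads per layer
print(config.sliding_window)       # Mistral's sliding-window attention span
```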
