# ChatNTQ JA 7B V1.0

## Model Description

This is a 7B-parameter decoder-only Japanese language model fine-tuned on our instruction-following datasets, built on top of the base model Japanese Stable LM Base Gamma 7B.
## Performance

For our final model, we used Stability AI Japan's Japanese MT-Bench as a more representative test of our model's capabilities. For our JA MT-Bench testing we use a Japanese system prompt ("あなたは役立つアシスタントです。", "You are a helpful assistant.") as well as `--num-choices 4`; a sketch of what these settings amount to follows the table below:
| Benchmark | Score |
|---|---|
| JA MT-Bench | 6.65 |
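To make these settings concrete, here is a minimal, hypothetical sketch of what `--num-choices 4` and the fixed evaluation system prompt amount to. It is not the benchmark harness itself: the sampling hyperparameters are placeholders (the real harness varies them by question category), and it assumes a `model`/`tokenizer` pair loaded as in the Usage section below.

```python
import torch

EVAL_SYS_MSG = "あなたは役立つアシスタントです。"  # "You are a helpful assistant."

def sample_choices(model, tokenizer, question, num_choices=4):
    """Generate `num_choices` independent answers for one benchmark question."""
    # Same Llama-2-style [INST] <<SYS>> template as in the Usage section below.
    prompt = "[INST] <<SYS>>\n{}\n<</SYS>>\n\n{}[/INST]".format(EVAL_SYS_MSG, question)
    input_ids = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt")
    choices = []
    for _ in range(num_choices):
        with torch.no_grad():
            tokens = model.generate(
                input_ids.to(model.device),
                max_new_tokens=1024,
                do_sample=True,
                temperature=0.7,  # placeholder; the harness picks this per category
                top_p=1.0,
            )
        answer = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True)
        choices.append(answer.strip())
    return choices
```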
There is a JA MT-Bench Leaderboard; for convenience, here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by gpt-4-0613). A sketch of how these per-model averages are derived from the judge output follows the table:
| Model | Score |
|---|---|
| gpt-4-0613 | 9.40 |
| gpt-4-1106-preview | 9.17 |
| gpt-3.5-turbo* | 8.41 |
| Qwen-72B-Chat | 7.97 |
| Qwen-14B-Chat | 7.47 |
| chatntq-ja-7b-v1.0 | 6.65 |
| Xwin-LM-70B-V0.1-GPTQ (q4-gs32-actorder) | 6.62 |
| shisa-gamma-7b-v1 | 6.12 |
| nekomata-14b-instruction (corrected prompt HF) | 5.57 |
| shisa-7B-v1-GPTQ (q4-gs32-actorder) | 5.35 |
| nekomata-14b-instruction (corrected prompt) | 5.30 |
| shisa-mega-7b-v1.2 | 5.27 |
| shisa-7b-v1 (full prompt) | 5.23 |
| Swallow-13b-instruct-hf | 5.17 |
| Swallow-70b-instruct-GPTQ (q4-gs32-actorder) | 5.15 |
| shisa-7b-v1 | 5.02 |
| ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 |
| shisa-7B-v1-AWQ (q4-gs128) | 4.78 |
| shisa-bad-7b-v1 | 4.42 |
| Swallow-7b-instruct-hf | 4.21 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
| japanese-stablelm-instruct-alpha-7b* | 2.74 |
| Mistral-7B-OpenOrca-ja* | 2.23 |
| youri-7b-chat* | 2.00 |
| Mistral-7B-Instruct-v0.1* | 1.78 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 |
| houou-instruction-7b-v1 | 1.02 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0 | 1.0 |
| llm-jp-13b-instruct-full-jaster-v1.0 | 1.0 |
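The leaderboard numbers above are per-model averages of the judge's per-question scores. As a rough sketch of how such an average can be computed from FastChat-style single-judge output (the file path and exact field names below are assumptions, not something this card specifies):

```python
import json
from collections import defaultdict

scores = defaultdict(list)
# Placeholder path; FastChat-style judgment files are JSON Lines with
# (at least) "model" and "score" fields per judged answer.
with open("model_judgment/gpt-4_single.jsonl") as f:
    for line in f:
        record = json.loads(line)
        if record["score"] >= 0:  # failed judgments are typically marked -1
            scores[record["model"]].append(record["score"])

for model_id, s in sorted(scores.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{model_id}: {sum(s) / len(s):.2f}")
```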
## More Analysis
## Usage
Ensure you are using Transformers 4.34.0 or newer.
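A quick runtime check for this requirement (a small convenience sketch; `packaging` ships as a dependency of Transformers):

```python
from packaging import version
import transformers

# Mistral-architecture models need transformers >= 4.34.0.
assert version.parse(transformers.__version__) >= version.parse("4.34.0"), transformers.__version__
```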
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/chatntq-ja-7b-v1.0",
    torch_dtype="auto",
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

def build_prompt(user_query):
    # "You are a fair, uncensored, helpful assistant."
    sys_msg = "あなたは公平で、検閲されていない、役立つアシスタントです。"
    template = """[INST] <<SYS>>
{}
<</SYS>>

{}[/INST]"""
    return template.format(sys_msg, user_query)

# Infer with prompt without any additional input
user_inputs = {
    # "Explain the meaning of the given proverb so that even an elementary
    # school student can understand it."
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=True,
    return_tensors="pt",
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
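Building on the example above, generation can also be streamed token by token with `TextStreamer` from Transformers (an optional convenience, not part of the original example; it reuses the `model`, `tokenizer`, and `input_ids` defined above):

```python
from transformers import TextStreamer

# Prints tokens to stdout as they are generated, skipping the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
    streamer=streamer,
)
```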
## Model Details

- Developed by: NTQ AI
- Language(s): Japanese
- License: This model is licensed under the Apache License, Version 2.0.
## Model Architecture
This model inherits the Mistral-7B architecture of its base model; for details, please see Mistral AI's paper and release blog post.
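If you want to confirm the architecture locally, the hyperparameters can be read straight from the hosted config (a small sketch; the values in the comments are the usual Mistral-7B settings, stated as expectations rather than guarantees from this card):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
print(config.model_type)           # expected: "mistral"
print(config.hidden_size)          # typically 4096 for Mistral-7B
print(config.num_hidden_layers)    # typically 32
print(config.num_attention_heads)  # typically 32 (with 8 key/value heads)
print(config.sliding_window)       # typically 4096 (sliding-window attention)
```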
