
AMD-AGI/AMD-LLM


This repository provides the implementation for training AMD-135M models and is based on TinyLlama.

AMD-135M is a language model trained on AMD MI250 GPUs. Because it is based on the LLaMA2 model architecture, it can be loaded directly as LlamaForCausalLM with Hugging Face transformers. Furthermore, it uses the same tokenizer as LLaMA2, enabling it to serve as a draft model for speculative decoding with LLaMA2 and CodeLlama.
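Because the model follows the LLaMA2 architecture and tokenizer, loading it requires no custom code. A minimal sketch, assuming the Hugging Face Hub id `amd/AMD-Llama-135m` (not stated in this README) and network access to download the weights:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Model id is an assumption; adjust to the actual Hub repository.
tokenizer = AutoTokenizer.from_pretrained("amd/AMD-Llama-135m")
model = LlamaForCausalLM.from_pretrained("amd/AMD-Llama-135m")

inputs = tokenizer("AMD is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Since the tokenizer matches LLaMA2, the same model object can also be passed as `assistant_model=` to a LLaMA2 or CodeLlama model's `generate` call to enable assisted (speculative) generation in recent transformers releases.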

Docker image

Please use the following ROCm Docker image from Docker Hub:

```shell
docker pull rocm/pytorch:rocm6.1_ubuntu20.04_py3.9_pytorch_2.3.0_preview
```

Python packages dependency

Please run `pip install -r requirement.txt` to install the extra Python packages required on top of the Docker image above.

Dataset

Step 1: download SlimPajama-627B, project_gutenberg, and StarCoder.

```shell
git clone https://huggingface.co/datasets/cerebras/SlimPajama-627B
git clone https://huggingface.co/datasets/manu/project_gutenberg
git clone https://huggingface.co/datasets/bigcode/starcoderdata
```

Step 2: process the text data into token ids. The processed datasets will be written to ./slim_processed, ./slim_validation_processed, and ./starcoderdata_python_processed.

```shell
# For pretraining
bash ./scripts/prepare_slimpajama_train.sh
bash ./scripts/prepare_project_gutenberg.sh
# For validation
bash ./scripts/prepare_slimpajama_valid.sh
# For code finetuning
bash ./scripts/prepare_starcoder_python.sh
```
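The preparation scripts tokenize the raw text and turn it into fixed-length sequences of token ids. The core idea can be sketched in plain Python (the concatenate-then-chunk strategy and EOS separator are assumptions about the scripts' behavior; the real scripts also handle sharding and binary output formats):

```python
from typing import Callable, List

def pack_into_sequences(docs: List[str], encode: Callable[[str], List[int]],
                        seq_len: int = 2048, eos_id: int = 2) -> List[List[int]]:
    """Tokenize documents, join them with an EOS id, and chunk the resulting
    token stream into fixed-length training sequences."""
    stream: List[int] = []
    for doc in docs:
        stream.extend(encode(doc))
        stream.append(eos_id)  # document separator, LLaMA-style
    # Drop the trailing remainder that does not fill a whole sequence.
    n_full = len(stream) // seq_len
    return [stream[i * seq_len:(i + 1) * seq_len] for i in range(n_full)]

# Toy "tokenizer": one id per character (stand-in for the LLaMA2 tokenizer).
toy_encode = lambda text: [ord(c) for c in text]
seqs = pack_into_sequences(["hello", "world!"], toy_encode, seq_len=4)
print(len(seqs), seqs[0])  # 3 sequences of 4 ids each
```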

Pretraining

To train a TinyLlama model, run the following scripts on 4 nodes, each with 4 MI250 GPUs (8 virtual devices per node).

```shell
# run on node 0.
bash ./cluster/pretrain_node_0.sh
# run on node 1.
bash ./cluster/pretrain_node_1.sh
# run on node 2.
bash ./cluster/pretrain_node_2.sh
# run on node 3.
bash ./cluster/pretrain_node_3.sh
```
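The four scripts presumably differ only in the node rank they hand to the launcher. The cluster arithmetic behind them can be sketched as follows (the variable names follow PyTorch distributed conventions and are assumptions; see the actual scripts under ./cluster/):

```shell
#!/bin/sh
# Hypothetical per-node launch setup: 4 nodes x 4 MI250 GPUs,
# where each MI250 exposes 2 GCDs, i.e. 8 virtual devices per node.
NNODES=4
GPUS_PER_NODE=8
NODE_RANK=${NODE_RANK:-0}   # 0..3; the only value that differs per node
WORLD_SIZE=$((NNODES * GPUS_PER_NODE))
echo "node ${NODE_RANK}/${NNODES}, world size ${WORLD_SIZE}"
```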

Code Finetuning

To finetune the pretrained model on code, run the following script.

```shell
bash ./cluster/finetune.sh
```

Evaluation

We evaluate AMD-Llama-135m using lm-evaluation-harness on popular NLP benchmarks; results are listed below.

| Model | SciQ | WinoGrande | PIQA | WSC | MMLU | Lambada (OpenAI) | ARC-Easy | ARC-Challenge | LogiQA | Hellaswag |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT2-124M (small) | 0.753±0.0136 | 0.5162±0.0140 | 0.6289±0.0113 | 0.4327±0.0488 | 0.2292±0.0383 | 0.3256±0.0065 | 0.4381±0.0102 | 0.1903±0.0115 | 0.2181±0.0162 | 0.2892±0.0045 |
| OPT-125M | 0.751±0.014 | 0.503±0.014 | 0.630±0.011 | 0.365±0.047 | 0.229±0.038 | 0.379±0.007 | 0.436±0.010 | 0.191±0.012 | 0.229±0.016 | 0.292±0.004 |
| JackFram/llama-68m | 0.652±0.0151 | 0.513±0.014 | 0.6197±0.0113 | 0.4038±0.0483 | 0.2302±0.0035 | 0.1351±0.0048 | 0.3864±0.0100 | 0.1792±0.0112 | 0.2273±0.0164 | 0.2790±0.0045 |
| JackFram/llama-160m | 0.724±0.0141 | 0.5012±0.0141 | 0.6605±0.011 | 0.3654±0.0474 | 0.2299±0.0035 | 0.3134±0.0065 | 0.4335±0.0102 | 0.1980±0.0116 | 0.2197±0.0162 | 0.3094±0.0046 |
| AMD-Llama-135m | 0.761±0.0135 | 0.5012±0.0141 | 0.6420±0.0112 | 0.3654±0.0474 | 0.2302±0.0035 | 0.3330±0.0066 | 0.4364±0.0102 | 0.1911±0.0115 | 0.2120±0.0160 | 0.3048±0.0046 |
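These rows can in principle be reproduced with the harness CLI; a sketch, assuming the harness's `lm_eval` command (v0.4+) and the Hub model id `amd/AMD-Llama-135m` (task names and flags may differ across harness versions):

```shell
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=amd/AMD-Llama-135m \
  --tasks sciq,winogrande,piqa,wsc,mmlu,lambada_openai,arc_easy,arc_challenge,logiqa,hellaswag \
  --batch_size 8
```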

Speculative Decoding

To run speculative decoding using AMD-Llama-135m-code as the draft model for CodeLlama-7b on the Humaneval dataset, run the following script.

```shell
# Patch huggingface transformers==4.37.2 to add the logging needed to
# calculate the acceptance rate of speculative decoding.
patch -u /path/to/transformers/generation/utils.py -i ./speculative_decoding/utils.patch
bash ./speculative_decoding/codellama_spec.sh
```

We compare decoding with the target model only against speculative decoding, on an MI250 GPU and on a Ryzen AI CPU (with NPU kernel). All experiments are run on the Humaneval dataset.

| Target Model Device | Draft Model Device | Random Sampling | Target-Only Humaneval Pass@1 | Speculative Decoding Humaneval Pass@1 | Acceptance Rate | Throughput Speedup |
| --- | --- | --- | --- | --- | --- | --- |
| FP32 MI250 | FP32 MI250 | TRUE | 32.31% | 29.27% | 0.650355 | 2.58x |
| FP32 MI250 | FP32 MI250 | FALSE | 31.10% | 31.10% | 0.657839 | 2.80x |
| BF16 MI250 | BF16 MI250 | TRUE | 31.10% | 31.10% | 0.668822 | 1.67x |
| BF16 MI250 | BF16 MI250 | FALSE | 34.15% | 33.54% | 0.665497 | 1.75x |
| INT4 NPU | BF16 CPU | TRUE | 28.05% | 30.49% | 0.722913 | 2.83x |
| INT4 NPU | BF16 CPU | FALSE | 28.66% | 28.66% | 0.738072 | 2.98x |
| BF16 CPU | BF16 CPU | TRUE | 31.10% | 31.71% | 0.723971 | 3.68x |
| BF16 CPU | BF16 CPU | FALSE | 33.54% | 33.54% | 0.727548 | 3.88x |
| FP32 CPU | FP32 CPU | TRUE | 29.87% | 28.05% | 0.727214 | 3.57x |
| FP32 CPU | FP32 CPU | FALSE | 31.10% | 31.10% | 0.738641 | 3.66x |
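The acceptance-rate column bounds the achievable speedup via the standard speculative-decoding analysis (Leviathan et al.): with per-token acceptance rate α and γ draft tokens per round, each target-model forward pass yields an expected (1 − α^(γ+1)) / (1 − α) tokens. A small sketch (γ = 4 is an assumed draft length; the script's actual setting is not stated here):

```python
def expected_tokens_per_target_pass(alpha: float, gamma: int) -> float:
    """Expected tokens emitted per target forward pass when gamma draft
    tokens are proposed and each is accepted with probability alpha."""
    if alpha >= 1.0:
        return gamma + 1.0
    return (1.0 - alpha ** (gamma + 1)) / (1.0 - alpha)

# Acceptance rates close to the MI250 and CPU rows above.
for alpha in (0.65, 0.73):
    print(f"alpha={alpha}: {expected_tokens_per_target_pass(alpha, gamma=4):.2f} tokens/pass")
```

The measured speedup also depends on the relative cost of running the draft model, which is consistent with the CPU rows reaching larger speedups than the MI250 rows at similar acceptance rates.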

Training and finetuning cost

- It takes 6 days to pretrain AMD-Llama-135m on 4 MI250 nodes, each of which has 4 MI250 GPUs (8 virtual GPU cards, 64 GB memory each).
- It takes 4 days to finetune AMD-Llama-135m-code on 4 MI250 GPUs.
- It takes 11 TB of disk space to store the raw and processed SlimPajama, project_gutenberg, and StarCoder datasets.

ROCm

```
Version: 6.1.2.60102-119~20.04
Priority: optional
Section: devel
Maintainer: ROCm Dev Support <rocm-dev.support@amd.com>
Installed-Size: 13.3 kB
Depends: hipblas (= 2.1.0.60102-119~20.04), hipblaslt (= 0.7.0.60102-119~20.04), hipfft (= 1.0.14.60102-119~20.04), hipsolver (= 2.1.1.60102-119~20.04), hipsparse (= 3.0.1.60102-119~20.04), hiptensor (= 1.2.0.60102-119~20.04), miopen-hip (= 3.1.0.60102-119~20.04), half (= 1.12.0.60102-119~20.04), rccl (= 2.18.6.60102-119~20.04), rocalution (= 3.1.1.60102-119~20.04), rocblas (= 4.1.2.60102-119~20.04), rocfft (= 1.0.27.60102-119~20.04), rocrand (= 3.0.1.60102-119~20.04), hiprand (= 2.10.16.60102-119~20.04), rocsolver (= 3.25.0.60102-119~20.04), rocsparse (= 3.1.2.60102-119~20.04), rocm-core (= 6.1.2.60102-119~20.04), hipsparselt (= 0.2.0.60102-119~20.04), composablekernel-dev (= 1.1.0.60102-119~20.04), hipblas-dev (= 2.1.0.60102-119~20.04), hipblaslt-dev (= 0.7.0.60102-119~20.04), hipcub-dev (= 3.1.0.60102-119~20.04), hipfft-dev (= 1.0.14.60102-119~20.04), hipsolver-dev (= 2.1.1.60102-119~20.04), hipsparse-dev (= 3.0.1.60102-119~20.04), hiptensor-dev (= 1.2.0.60102-119~20.04), miopen-hip-dev (= 3.1.0.60102-119~20.04), rccl-dev (= 2.18.6.60102-119~20.04), rocalution-dev (= 3.1.1.60102-119~20.04), rocblas-dev (= 4.1.2.60102-119~20.04), rocfft-dev (= 1.0.27.60102-119~20.04), rocprim-dev (= 3.1.0.60102-119~20.04), rocrand-dev (= 3.0.1.60102-119~20.04), hiprand-dev (= 2.10.16.60102-119~20.04), rocsolver-dev (= 3.25.0.60102-119~20.04), rocsparse-dev (= 3.1.2.60102-119~20.04), rocthrust-dev (= 3.0.1.60102-119~20.04), rocwmma-dev (= 1.4.0.60102-119~20.04), hipsparselt-dev (= 0.2.0.60102-119~20.04)
Homepage: https://github.com/RadeonOpenCompute/ROCm
Download-Size: 1064 B
APT-Manual-Installed: yes
APT-Sources: http://repo.radeon.com/rocm/apt/6.1.2 focal/main amd64 Packages
Description: Radeon Open Compute (ROCm) Runtime software stack
```

System info

```
Ubuntu 22.04.3 LTS
Release:        22.04
Codename:       jammy
Linux version 5.15.0-88-generic (buildd@lcy02-amd64-058) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023
Linux sjc144-canary-node035.dcgpu.amd.com 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
```

License

Copyright (c) 2018-2024 Advanced Micro Devices, Inc. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
