Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models


This is an official PyTorch implementation of Adan. See the paper here. If you find Adan helpful or inspiring for your projects, please cite this paper and also star this repository. Thanks!

@article{xie2024adan,
  title={Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models},
  author={Xie, Xingyu and Zhou, Pan and Li, Huan and Lin, Zhouchen and Yan, Shuicheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2024},
  publisher={IEEE}
}

Supported Projects

News

  • 🔥🔥🔥 Results on large language models, like MoE and GPT2, are released.
  • FusedAdan with a smaller memory footprint is released.

Installation

python3 -m pip install git+https://github.com/sail-sg/Adan.git

FusedAdan is installed by default. If you want to use the original Adan, please install it by:

git clone https://github.com/sail-sg/Adan.git
cd Adan
python3 setup.py install --unfused

Usage

To make Adan easy to use, we first give some brief, intuitive instructions below, then some general experimental tips, and finally more details (e.g., specific commands and hyper-parameters) for each experiment in the paper.

1) Two steps to use Adan

Step 1. Add the Adan-specific hyper-parameters to the config:

parser.add_argument('--max-grad-norm', type=float, default=0.0,
                    help='if the l2 norm is larger than this hyper-parameter, then we clip the gradient (default: 0.0, no gradient clipping)')
parser.add_argument('--weight-decay', type=float, default=0.02,
                    help='weight decay, similar to that used in AdamW (default: 0.02)')
parser.add_argument('--opt-eps', default=None, type=float, metavar='EPSILON',
                    help='optimizer epsilon to avoid the bad case where the second-order moment is zero (default: None, use opt default 1e-8 in Adan)')
parser.add_argument('--opt-betas', default=None, type=float, nargs='+', metavar='BETA',
                    help='optimizer betas in Adan (default: None, use opt default [0.98, 0.92, 0.99] in Adan)')
parser.add_argument('--no-prox', action='store_true', default=False,
                    help='whether to perform weight decay like AdamW (default: False)')

opt-betas: To stay consistent with our usage habits, the $\beta$'s in the paper are actually the $(1-\beta)$'s in the code.

foreach (bool): If True, Adan will use the torch._foreach implementation. It is faster but uses slightly more memory.

no-prox: It determines the update rule of parameters with weight decay. By default, Adan updates the parameters in the way presented in Algorithm 1 in the paper:

$$\boldsymbol{\theta}_{k+1} = ( 1+\lambda \eta)^{-1} \left[\boldsymbol{\theta}_k - \boldsymbol{\eta}_k \circ (\mathbf{m}_k+(1-{\color{blue}\beta_2})\mathbf{v}_k)\right]$$

But one can also update the parameters AdamW-style:

$$\boldsymbol{\theta}_{k+1} = ( 1-\lambda \eta)\boldsymbol{\theta}_k - \boldsymbol{\eta}_k \circ (\mathbf{m}_k+(1-{\color{blue}\beta_2})\mathbf{v}_k).$$
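
For illustration only, here is a minimal sketch of how the two weight-decay variants differ. The function and variable names are ours, not the optimizer's internal identifiers, and `update` stands in for $\boldsymbol{\eta}_k \circ (\mathbf{m}_k+(1-\beta_2)\mathbf{v}_k)$ from the equations above.

```python
import torch

def weight_decay_step(theta, update, lr, weight_decay, no_prox):
    """Sketch of the two update rules selected by `no_prox`.

    theta:  parameter tensor
    update: plays the role of eta_k * (m_k + (1 - beta2) * v_k)
    """
    if no_prox:
        # AdamW-style decoupled decay: shrink the weights, then subtract the update.
        return (1 - lr * weight_decay) * theta - update
    # Default rule (Algorithm 1): subtract the update, then rescale by 1 / (1 + lr * wd).
    return (theta - update) / (1 + lr * weight_decay)
```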

Step 2. Create the Adan optimizer as follows. In this step, we can directly replace the vanilla optimizer by using the following command:

from adan import Adan

optimizer = Adan(param, lr=args.lr, weight_decay=args.weight_decay, betas=args.opt_betas,
                 eps=args.opt_eps, max_grad_norm=args.max_grad_norm, no_prox=args.no_prox)
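
As a concrete drop-in sketch (the toy model, data, and hyper-parameter values below are placeholders, not settings from the paper; the betas use the code defaults [0.98, 0.92, 0.99]), Adan slots into a standard training loop like any other torch.optim optimizer:

```python
import torch
from adan import Adan

# Placeholder model; in practice this comes from your own training pipeline.
model = torch.nn.Linear(10, 2)
optimizer = Adan(model.parameters(), lr=1e-2, betas=(0.98, 0.92, 0.99),
                 weight_decay=0.02, max_grad_norm=0.0)

for _ in range(3):
    # Placeholder batch of inputs and labels.
    x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
```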

2) Tips for Experiments

  • To keep Adan simple, we do not use the restart strategy in any experiment except Table 12 in the paper. Table 12 shows that the restart strategy can further slightly improve Adan's performance.
  • Adan often allows a large peak learning rate that makes other optimizers, e.g., Adam and AdamW, fail. For example, in all experiments except MAE pre-training and LSTM, the learning rate used by Adan is 5-10 times larger than that in Adam/AdamW.
  • Adan is relatively robust to beta1, beta2, and beta3, especially to beta2. If you want better performance, you can first tune beta3 and then beta1.
  • Adan has a slightly higher GPU memory cost than Adam/AdamW on a single node. However, this problem can be solved using the ZeroRedundancyOptimizer, which shares optimizer states across distributed data-parallel processes to reduce the per-process memory footprint. Specifically, when using the ZeroRedundancyOptimizer on more than two GPUs, Adan and Adam consume almost the same amount of memory (see the sketch after this list).
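
A minimal sketch of wrapping Adan in PyTorch's ZeroRedundancyOptimizer, assuming the script is launched with torchrun so that the default process group can be initialized; the toy model is a placeholder for a DDP-wrapped model:

```python
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer
from adan import Adan

# Assumes launch via torchrun, which provides the env vars init_process_group needs.
dist.init_process_group(backend="gloo")

model = torch.nn.Linear(10, 2)  # placeholder; normally a DistributedDataParallel model
# Shard Adan's optimizer states (its moment buffers) across the data-parallel ranks.
optimizer = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=Adan,
    lr=1e-2,
    betas=(0.98, 0.92, 0.99),
    weight_decay=0.02,
)
```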

3) More detailed steps & results

Please refer to the following links for detailed steps. In these detailed steps, we even include the docker images for reproducibility.

Results for Various Tasks

Results on Large Language Models

Mixture of Experts (MoE)

To investigate the efficacy of the Adan optimizer for LLMs, we conducted pre-training experiments using MoE models. The experiments utilized the RedPajama-v2 dataset with three configurations, each consisting of 8 experts: 8x0.1B (totaling 0.5B trainable parameters), 8x0.3B (2B trainable parameters), and 8x0.6B (4B trainable parameters). These models were trained on sampled data comprising 10B, 30B, 100B, and 300B tokens, respectively.

| Model Size | 8x0.1B | 8x0.1B | 8x0.1B | 8x0.3B | 8x0.3B | 8x0.3B | 8x0.6B |
| ---------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Token Size | 10B    | 30B    | 100B   | 30B    | 100B   | 300B   | 300B   |
| AdamW      | 2.722  | 2.550  | 2.427  | 2.362  | 2.218  | 2.070  | 2.023  |
| Adan       | 2.697  | 2.513  | 2.404  | 2.349  | 2.206  | 2.045  | 2.010  |

GPT2-345m

We provide the config and log for GPT2-345m pre-trained on the dataset from BigCode and evaluated on the HumanEval dataset by zero-shot learning. HumanEval is used to measure functional correctness for synthesizing programs from docstrings. It consists of 164 original programming problems, assessing language comprehension, algorithms, and simple mathematics, with some comparable to simple software interview questions. We set Temperature = 0.8 during evaluation.

| Model | Steps | pass@1 | pass@10 | pass@100 | Download |
| ----- | ----- | ------ | ------- | -------- | -------- |
| GPT2-345m (Adam) | 300k | 0.0840 | 0.209 | 0.360 | log&config |
| GPT2-345m (Adan) | 150k | 0.0843 | 0.221 | 0.377 | log&config |

Adan obtains comparable results at only half the cost.

Results on vision tasks

For convenience, we provide the configs and log files for the experiments on ImageNet-1k.

| Model | Epoch | Training Setting | Acc. (%) | Config | Batch Size | Download |
| ----- | ----- | ---------------- | -------- | ------ | ---------- | -------- |
| ViT-S | 150 | I | 80.1 | config | 2048 | log/model |
| ViT-S | 150 | II | 79.6 | config | 2048 | log/model |
| ViT-S | 300 | I | 81.1 | config | 2048 | log/model |
| ViT-S | 300 | II | 80.7 | config | 2048 | log/model |
| ViT-B | 150 | II | 81.7 | config | 2048 | log/model |
| ViT-B | 300 | II | 82.6 | config | 2048 | log/model |
| ResNet-50 | 100 | I | 78.1 | config | 2048 | log/model |
| ResNet-50 | 200 | I | 79.7 | config | 2048 | log/model |
| ResNet-50 | 300 | I | 80.2 | config | 2048 | log/model |
| ResNet-101 | 100 | I | 80.0 | config | 2048 | log/model |
| ResNet-101 | 200 | I | 81.6 | config | 2048 | log/model |
| ResNet-101 | 300 | I | 81.9 | config | 2048 | log/model |
| ConvNext-tiny | 150 | II | 81.7 | config | 2048 | log/model |
| ConvNext-tiny | 300 | II | 82.4 | config | 2048 | log/model |
| MAE-small | 800+100 | --- | 83.8 | config | 4096/2048 | log-pretrain/log-finetune/model |
| MAE-Large | 800+50 | --- | 85.9 | config | 4096/2048 | log-pretrain/log-finetune/model |

Results on NLP tasks

BERT-base

We give the configs and log files of the BERT-base model pre-trained on the Bookcorpus and Wikipedia datasets and fine-tuned on GLUE tasks. Note that we provide the config, log file, and detailed instructions for BERT-base in the folder ./NLP/BERT.

| Pretraining | Config | Batch Size | Log | Model |
| ----------- | ------ | ---------- | --- | ----- |
| Adan | config | 256 | log | model |

| Fine-tuning on GLUE-Task | Metric | Result | Config |
| ------------------------ | ------ | ------ | ------ |
| CoLA | Matthew's corr. | 64.6 | config |
| SST-2 | Accuracy | 93.2 | config |
| STS-B | Pearson corr. | 89.3 | config |
| QQP | Accuracy | 91.2 | config |
| MNLI | Matched acc./Mismatched acc. | 85.7/85.6 | config |
| QNLI | Accuracy | 91.3 | config |
| RTE | Accuracy | 73.3 | config |

For fine-tuning on GLUE tasks, see the total batch size in the corresponding config files.

Transformer-XL-base

We provide the config and log for Transformer-XL-base trained on the WikiText-103 dataset. The total batch size for this experiment is 60*4.

| Model | Steps | Test PPL | Download |
| ----- | ----- | -------- | -------- |
| Baseline (Adam) | 200k | 24.2 | log&config |
| Transformer-XL-base | 50k | 26.2 | log&config |
| Transformer-XL-base | 100k | 24.2 | log&config |
| Transformer-XL-base | 200k | 23.5 | log&config |


Results on Diffusion Models

We show the results of the text-to-3D task supported by the DreamFusion Project. More visualization results can be found here. Examples generated from the text prompt "Sydney opera house, aerial view" with Adam and Adan:

opera-adan.mp4
opera-adam.mp4

Memory and Efficiency

A brief comparison of peak memory and wall-clock duration for the optimizers is given below. The duration is the total time of 200 calls to optimizer.step(). We further compare Adam and FusedAdan in great detail on GPT-2; see more results here. A rough sketch of this measurement protocol follows the table.

| Model | Model Size (MB) | Adam Peak (MB) | Adan Peak (MB) | FusedAdan Peak (MB) | Adam Time (ms) | Adan Time (ms) | FusedAdan Time (ms) |
| ----- | --------------- | -------------- | -------------- | ------------------- | -------------- | -------------- | ------------------- |
| ResNet-50 | 25 | 7142 | 7195 | 7176 | 9.0 | 4.2 | 1.9 |
| ResNet-101 | 44 | 10055 | 10215 | 10160 | 17.5 | 7.0 | 3.4 |
| ViT-B | 86 | 9755 | 9758 | 9758 | 8.9 | 12.3 | 4.3 |
| Swin-B | 87 | 16118 | 16202 | 16173 | 17.9 | 12.8 | 4.9 |
| ConvNext-B | 88 | 17353 | 17389 | 17377 | 19.1 | 15.6 | 5.0 |
| Swin-L | 196 | 24299 | 24316 | 24310 | 17.5 | 28.1 | 10.1 |
| ConvNext-L | 197 | 26025 | 26055 | 26044 | 18.6 | 31.1 | 10.2 |
| ViT-L | 304 | 25652 | 25658 | 25656 | 18.0 | 43.2 | 15.1 |
| GPT-2 | 758 | 25096 | 25406 | 25100 | 49.9 | 107.7 | 37.4 |
| GPT-2 | 1313 | 34357 | 38595 | 34363 | 81.8 | 186.0 | 64.4 |
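
The sketch below illustrates the measurement protocol described above (peak CUDA memory and the total time of 200 optimizer.step() calls). The model and batch are placeholders, not the exact configurations behind the table, so the absolute numbers will differ.

```python
import time
import torch
from adan import Adan

model = torch.nn.Linear(1024, 1024).cuda()   # placeholder model
optimizer = Adan(model.parameters(), lr=1e-2)
x = torch.randn(64, 1024, device="cuda")     # placeholder batch

torch.cuda.reset_peak_memory_stats()
step_time = 0.0
for _ in range(200):
    optimizer.zero_grad()
    model(x).sum().backward()
    torch.cuda.synchronize()
    t0 = time.time()
    optimizer.step()                         # only the optimizer step is timed
    torch.cuda.synchronize()
    step_time += time.time() - t0

print(f"total optimizer.step() time: {step_time * 1e3:.1f} ms")
print(f"peak memory: {torch.cuda.max_memory_allocated() / 2**20:.1f} MB")
```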
