madgrad: 'MADGRAD' Method for Stochastic Optimization

An implementation of the Momentumized, Adaptive, Dual Averaged Gradient (MADGRAD) method for stochastic optimization. MADGRAD is a 'best-of-both-worlds' optimizer with the generalization performance of stochastic gradient descent and convergence at least as fast as that of Adam, often faster. A drop-in optim_madgrad() implementation is provided, based on Defazio et al. (2021) <doi:10.48550/arXiv.2101.11075>.
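As a sketch of the drop-in usage described above, the following fits a small linear least-squares model with torch, swapping optim_madgrad() in where one would otherwise use a stock torch optimizer. The data and hyperparameters (learning rate, iteration count) are illustrative assumptions, not values from the package documentation.

```r
library(torch)
library(madgrad)

# Synthetic regression problem (illustrative data)
x      <- torch_randn(100, 3)
true_w <- torch_tensor(c(1, -2, 0.5))
y      <- x$matmul(true_w)

# Parameter to optimize
w <- torch_zeros(3, requires_grad = TRUE)

# MADGRAD as a drop-in replacement for e.g. optim_sgd()
opt <- optim_madgrad(list(w), lr = 0.1)

for (i in 1:200) {
  opt$zero_grad()
  loss <- torch_mean((x$matmul(w) - y)^2)
  loss$backward()
  opt$step()
}
```

The optimizer follows the standard torch optimizer interface (zero_grad(), step()), so it can replace an existing optimizer without other changes to a training loop.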

Version: 0.1.0
Imports: torch (≥ 0.3.0), rlang
Suggests: testthat (≥ 3.0.0)
Published: 2021-05-10
DOI: 10.32614/CRAN.package.madgrad
Author: Daniel Falbel [aut, cre, cph], RStudio [cph], MADGRAD original implementation authors [cph]
Maintainer: Daniel Falbel <daniel at rstudio.com>
License: MIT + file LICENSE
NeedsCompilation: no
Materials: README
CRAN checks: madgrad results

Documentation:

Reference manual: madgrad.html, madgrad.pdf

Downloads:

Package source: madgrad_0.1.0.tar.gz
Windows binaries: r-devel: madgrad_0.1.0.zip, r-release: madgrad_0.1.0.zip, r-oldrel: madgrad_0.1.0.zip
macOS binaries: r-release (arm64): madgrad_0.1.0.tgz, r-oldrel (arm64): madgrad_0.1.0.tgz, r-release (x86_64): madgrad_0.1.0.tgz, r-oldrel (x86_64): madgrad_0.1.0.tgz

Linking:

Please use the canonical form https://CRAN.R-project.org/package=madgrad to link to this page.

