# luz 0.5.1
- Fixed a bug preventing additional arguments passed to `predict` from being forwarded to `model$predict()`. (#157)
# luz 0.5.0
- Added a mixed precision callback. (#127)
- Added support for torch iterable datasets. (#135)
- Fixed a bug when trying to resume models trained with learning rate schedulers. (#137)
- Added support for learning rate schedulers that take the current loss as an argument. (#140)
- Added French translation of luz messages. (@cregouby #148)
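To illustrate the loss-aware scheduler support above, here is a minimal sketch (not taken from the NEWS itself); `net` and `train_dl` are placeholder stand-ins for a user's `nn_module` and dataloader, and `lr_reduce_on_plateau` is torch's plateau scheduler, which needs the current loss when stepped:

```r
library(torch)
library(luz)

# Sketch: wrap a loss-aware scheduler in the scheduler callback.
# luz now passes the current loss to schedulers that require it.
fitted <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  fit(
    train_dl,
    epochs = 10,
    callbacks = list(
      luz_callback_lr_scheduler(lr_reduce_on_plateau, patience = 2)
    )
  )
```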
# luz 0.4.0
## Breaking changes
- `drop_last = TRUE` is now the default for training dataloaders created by luz (e.g. when you pass a list or a torch dataset as the data input). (#117)
- The default profile callback no longer tracks intra-step timings, as doing so adds non-negligible overhead. (#125)
## New features
- Added support for ARM Macs and the MPS device. (#104)
- Refactored checkpointing in luz: we now also serialize the optimizer state and callback states. (#107)
- Added a `luz_callback_autoresume()` allowing one to easily resume training runs that might have crashed. (#107)
- Added `luz_callback_resume_from_checkpoint()` allowing one to resume a training run from a checkpoint file. (#107)
- Users can now choose whether metrics should be computed on both training and validation, training only, or validation only. See `luz_metric_set()` for more information. (#112)
- Improved how errors raised in user code, e.g. while calling metrics or callbacks, are surfaced. This helps a lot when debugging errors in callbacks and metrics. (#112)
- `loss_fn` is now a field of the context, so callbacks can override it when needed. (#112)
- `luz_callback_mixup` now supports the `run_valid` and `auto_loss` arguments. (#112)
- `ctx` now aliases the default `opt` and `opt_name` when a single optimizer is specified (i.e. most cases). (#114)
- Added a `tfevents` callback for logging the loss and getting weight histograms. (#118)
- You can now specify metrics to be evaluated during `evaluate`. (#123)
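A brief sketch of the per-stage metric selection mentioned above, via `luz_metric_set()`; `net`, `train_dl`, and `valid_dl` are placeholders for a user's module and dataloaders, and the particular metric split is only an illustrative assumption:

```r
library(torch)
library(luz)

# Sketch: MAE is tracked on both stages, accuracy on validation only.
metrics <- luz_metric_set(
  metrics       = list(luz_metric_mae()),      # training and validation
  valid_metrics = list(luz_metric_accuracy())  # validation only
)

fitted <- net %>%
  setup(loss = nn_cross_entropy_loss(), optimizer = optim_adam,
        metrics = metrics) %>%
  fit(train_dl, epochs = 5, valid_data = valid_dl)
```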
## Bug fixes
- Bug fix: the `accelerator`'s `cpu` argument is always respected. (#119)
- Handled `rlang` and `ggplot2` deprecations. (#120)
- Better handling of metrics' environments.
- Faster garbage collection of dataloader iterators, so we use less memory. (#122)
- Much faster loss averaging at every step. This can have a large influence on training times when there are many iterations per epoch. (#124)
# luz 0.3.1
- Re-submission to fix vignette rendering.
# luz 0.3.0
## Breaking changes
- `lr_finder()` now by default divides the range between `start_lr` and `end_lr` into log-spaced intervals, following the fast.ai implementation. Cf. Sylvain Gugger's post: https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html. The previous behavior can be achieved by passing `log_spaced_intervals = FALSE` to the function. (#82, @skeydan)
- `plot.lr_records()` now also plots an exponentially weighted moving average of the loss (again, see Sylvain Gugger's post), with a weighting coefficient of 0.9 (which seems a reasonable value for the default setting of 100 learning-rate-incrementing intervals). (#82, @skeydan)
## Documentation
- Many wording improvements in the getting started guides. (#81, #94, @jonthegeek)
## New features
- Added a MixUp callback along with a helper loss function and functional logic. (#82, @skeydan)
- Added a `luz_callback_gradient_clip` callback inspired by fastai's implementation. (#90)
- Added a `backward` argument to `setup` allowing one to customize how `backward` is called on the loss scalar value. (#93)
- Added `luz_callback_keep_best_model()` to reload the weights from the best model after training is finished. (#95)
# luz 0.2.0
## New features
- Allow users to provide the minimum and maximum number of epochs when calling `fit.luz_module_generator()`. Removed `ctx$epochs` from the context object and replaced it with `ctx$min_epochs` and `ctx$max_epochs`. (#53, @mattwarkentin)
- Early stopping will now only occur if the minimum number of training epochs has been met. (#53, @mattwarkentin)
- Added a `cuda_index` argument to `accelerator` to allow selecting a specific GPU when multiple are present. (#58, @cmcmaster1)
- Implemented `lr_finder`. (#59, @cmcmaster1)
- We now handle different kinds of data arguments passed to `fit` using the `as_dataloader()` method. (#66)
- `valid_data` can now be a scalar value indicating the proportion of `data` that will be used for fitting. This only works if `data` is a torch dataset or a list. (#69)
- You can now supply `dataloader_options` to `fit` to pass additional information to `as_dataloader()`. (#71)
- Implemented the `evaluate` function, allowing users to get metrics from a model on a new dataset. (#73)
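The scalar `valid_data` split and the new `evaluate` function can be sketched together as follows; `net`, `ds`, and `test_ds` are placeholder stand-ins for a user's module and torch datasets:

```r
library(torch)
library(luz)

# Sketch: a scalar valid_data asks luz to split the dataset itself.
fitted <- net %>%
  setup(loss = nn_mse_loss(), optimizer = optim_adam) %>%
  fit(ds, epochs = 10, valid_data = 0.2)  # works since `ds` is a dataset

# Compute metrics on a new dataset with evaluate():
evaluation <- evaluate(fitted, test_ds)
get_metrics(evaluation)
```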
## Bug fixes
- Fixed a bug in the CSV logger callback that was saving the logs as a space-delimited file. (#52, @mattwarkentin)
- Fixed a bug in the length of the progress bar for the validation dataset. (#52, @mattwarkentin)
- Fixed bugs in the early stopping callback related to it not working properly when `patience = 1` and when it is specified before other logging callbacks. (#76)
## Internal changes
- `ctx$data` now refers to the data currently in use instead of always referring to `ctx$train_data`. (#54)
- Refactored the `ctx` object to make it safer and to avoid returning it in the output. (#73)
# luz 0.1.0
- Added a `NEWS.md` file to track changes to the package.