# mlr3torch

Deep learning framework for the mlr3 ecosystem based on torch.
Deep Learning with torch and mlr3.
```r
# Install from CRAN
install.packages("mlr3torch")

# Install the development version from GitHub:
pak::pak("mlr-org/mlr3torch")
```
Afterwards, you also need to run the command below:
```r
torch::install_torch()
```
More information about installing torch can be found here.
mlr3torch is a deep learning framework for the mlr3 ecosystem built on top of torch. It allows you to easily build, train, and evaluate deep learning models in a few lines of code, without needing to worry about low-level details. Off-the-shelf learners are readily available, but custom architectures can be defined by connecting `PipeOpTorch` operators in an `mlr3pipelines::Graph`.
Using predefined learners such as a simple multi-layer perceptron (MLP) works just like any other mlr3 `Learner`.
```r
library(mlr3torch)
learner_mlp = lrn("classif.mlp",
  # defining network parameters
  activation = nn_relu,
  neurons = c(20, 20),
  # training parameters
  batch_size = 16,
  epochs = 50,
  device = "cpu",
  # proportion of data to use for validation
  validate = 0.3,
  # defining the optimizer, loss, and callbacks
  optimizer = t_opt("adam", lr = 0.1),
  loss = t_loss("cross_entropy"),
  callbacks = t_clbk("history"), # this saves the history in the learner
  # measures to track
  measures_valid = msrs(c("classif.logloss", "classif.ce")),
  measures_train = msrs(c("classif.acc")),
  # predict type (required by logloss)
  predict_type = "prob"
)
```
Below, we train this learner on the sonar example task:
```r
learner_mlp$train(tsk("sonar"))
```
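Because the history callback was configured above, the recorded measures can be inspected after training. A minimal sketch, assuming the history is stored under `$model$callbacks$history` and that the final validation scores are exposed via the learner's `$internal_valid_scores` field:

```r
# final validation scores (here: logloss and classification error)
learner_mlp$internal_valid_scores

# per-epoch train/validation measures recorded by the "history" callback
head(learner_mlp$model$callbacks$history)
```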
Next, we construct the same architecture using `PipeOpTorch` objects. The first pipeop, a `PipeOpTorchIngress`, defines the entry point of the network. All subsequent pipeops define the neural network layers.
```r
architecture = po("torch_ingress_num") %>>%
  po("nn_linear", out_features = 20) %>>%
  po("nn_relu") %>>%
  po("nn_head")
```
To turn this into a learner, we configure the loss, optimizer, and callbacks, as well as the training arguments.
```r
graph_mlp = architecture %>>%
  po("torch_loss", loss = t_loss("cross_entropy")) %>>%
  po("torch_optimizer", optimizer = t_opt("adam", lr = 0.1)) %>>%
  po("torch_callbacks", callbacks = t_clbk("history")) %>>%
  po("torch_model_classif", batch_size = 16, epochs = 50, device = "cpu")

graph_lrn = as_learner(graph_mlp)
```
To work with generic tensors, the `lazy_tensor` type can be used. It wraps a `torch::dataset`, but allows the data to be preprocessed (lazily) using `PipeOp` objects. Below, we flatten the MNIST task so that we can then train a multi-layer perceptron on it. Note that this does **not** transform the data in memory; the preprocessing is only applied when the data is actually loaded.
```r
# load the predefined mnist task
mnist = tsk("mnist")
mnist$head(3L)
#>     label           image
#>    <fctr>   <lazy_tensor>
#> 1:      5 <tnsr[1x28x28]>
#> 2:      0 <tnsr[1x28x28]>
#> 3:      4 <tnsr[1x28x28]>

# flatten the images
flattener = po("trafo_reshape", shape = c(-1, 28 * 28))
mnist_flat = flattener$train(list(mnist))[[1L]]
mnist_flat$head(3L)
#>     label         image
#>    <fctr> <lazy_tensor>
#> 1:      5   <tnsr[784]>
#> 2:      0   <tnsr[784]>
#> 3:      4   <tnsr[784]>
```
To actually access the tensors, we can call `materialize()`. We only show a slice of the resulting tensor for readability:
```r
materialize(mnist_flat$data(1:2, cols = "image")[[1L]], rbind = TRUE)[1:2, 1:4]
#> torch_tensor
#>  0  0  0  0
#>  0  0  0  0
#> [ CPUFloatType{2,4} ]
```
Below, we define a more complex architecture that has one single input, which is a `lazy_tensor`. For that, we first define a single residual block:
```r
layer = list(
  po("nop"),
  po("nn_linear", out_features = 50L) %>>%
    po("nn_dropout") %>>%
    po("nn_relu")
) %>>% po("nn_merge_sum")
```
Next, we create a neural network that takes a `lazy_tensor` as input (`po("torch_ingress_ltnsr")`). It first applies a linear layer and then repeats the above layer using the special `PipeOpTorchBlock`, followed by the network's head. After that, we configure the loss, optimizer, and the training parameters. Note that `po("nn_linear_0")` is equivalent to `po("nn_linear", id = "nn_linear_0")`; we need this here to avoid ID clashes with the linear layer from `po("nn_block")`.
```r
deep_network = po("torch_ingress_ltnsr") %>>%
  po("nn_linear_0", out_features = 50L) %>>%
  po("nn_block", layer, n_blocks = 5L) %>>%
  po("nn_head") %>>%
  po("torch_loss", loss = t_loss("cross_entropy")) %>>%
  po("torch_optimizer", optimizer = t_opt("adam")) %>>%
  po("torch_model_classif",
    epochs = 100L,
    batch_size = 32
  )
```
Next, we prepend the preprocessing step that flattens the images, so we can directly apply this learner to the unflattened MNIST task.
```r
deep_learner = as_learner(flattener %>>% deep_network)
deep_learner$id = "deep_network"
```
In order to keep track of the performance during training, we use 20% of the data and evaluate it using classification accuracy.
```r
set_validate(deep_learner, 0.2)
deep_learner$param_set$set_values(
  torch_model_classif.measures_valid = msr("classif.acc")
)
```
All that is left is to train the learner:
```r
deep_learner$train(mnist)
```
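Once trained, the learner behaves like any other mlr3 learner. A sketch of predicting on a subset of the task and scoring the result with accuracy (the row subset is purely illustrative, to keep the example fast):

```r
# predict on the first 100 rows of the task and evaluate the predictions
prediction = deep_learner$predict(mnist, row_ids = 1:100)
prediction$score(msr("classif.acc"))
```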
- Off-the-shelf architectures are readily available as `mlr3::Learner`s.
- Currently, supervised regression and classification are supported.
- Custom learners can be defined using the `Graph` language from mlr3pipelines.
- The package supports tabular data, as well as generic tensors via the `lazy_tensor` type.
- Multi-modal data can be handled conveniently, as `lazy_tensor` objects can be stored alongside tabular data.
- It is possible to customize the training process via (predefined or custom) callbacks.
- The package is fully integrated into the mlr3 ecosystem.
- Neural network architectures, as well as their hyperparameters, can be easily tuned via mlr3tuning and friends.
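The tuning integration can be sketched as follows. This is an illustrative example, not taken from the package documentation; it assumes that the MLP's `epochs` parameter can be marked with `to_tune()` like any other mlr3 hyperparameter:

```r
library(mlr3tuning)
library(mlr3torch)

# mark the number of training epochs as a tuning parameter
learner = lrn("classif.mlp",
  batch_size = 16,
  neurons = c(20, 20),
  epochs = to_tune(10L, 50L)
)

# run a small random search with 3-fold cross-validation
instance = tune(
  tuner = tnr("random_search"),
  task = tsk("sonar"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  term_evals = 5
)
instance$result
```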
- Start by reading one of the vignettes on the package website!
- There is a course on (mlr3)torch.
- You can check out our presentation from UseR! 2024.
- To run the tests, one needs to set the environment variable `TEST_TORCH = 1`, e.g. by adding it to `.Renviron`.
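One way to set this, assuming a Unix-like shell and the user-level startup file, is to append the variable to `~/.Renviron`, which R reads on startup:

```shell
# add TEST_TORCH=1 to the user-level .Renviron (picked up by R on startup)
echo 'TEST_TORCH=1' >> ~/.Renviron
```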
- Without the great R package torch, none of this would have been possible.
- The names for the callback stages are taken from luz, another high-level deep learning framework for R torch.
- Building neural networks using `PipeOpTorch` operators is inspired by keras.
- This R package is developed as part of the Mathematical Research Data Initiative.
mlr3torch is a free and open-source software project that encourages participation and feedback. If you have any issues, questions, suggestions, or feedback, please do not hesitate to open an issue about it on the GitHub page!
In case of problems or bugs, it is often helpful if you provide a "minimum working example" that showcases the behaviour (but don't worry about this if the bug is obvious).
Please understand that the resources of the project are limited: responses may sometimes be delayed by a few days, and some feature suggestions may be rejected if they are deemed too tangential to the vision behind the project.