The fastai library simplifies training fast and accurate neural nets using modern best practices. See the fastai website to get started. The library is based on research into deep learning best practices undertaken at fast.ai, and includes "out of the box" support for vision, text, tabular, and collab (collaborative filtering) models.
Get the dataset and build a `cnn_learner`:
```r
URLs_MNIST_SAMPLE()

tfms = aug_transforms(do_flip = FALSE)
path = 'mnist_sample'
bs = 20

data = ImageDataLoaders_from_folder(path, batch_tfms = tfms, size = 26, bs = bs)

learn = cnn_learner(data, xresnet50_deep(), metrics = accuracy)
```

Modify the number of input channels to 1:
```r
init = learn$model[0][0][0][['in_channels']]
print(init)
# 3

learn$model[0][0][0][['in_channels']] %f% 1L
print(learn$model[0][0][0][['in_channels']])
# 1
```

Here, one can observe a special assignment operator, `%f%`. It allows safe modification of layer parameters.
How can we see and modify other parameters of the layer? First, see the names:

CNN layer parameters:
[1] "add_module" "apply" "bfloat16" [4] "bias" "buffers" "children" [7] "cpu" "cuda" "dilation" [10] "double" "dump_patches" "eval" [13] "extra_repr" "float" "forward" [16] "groups" "half" "has_children" [19] "in_channels" "kernel_size" "load_state_dict" [22] "modules" "named_buffers" "named_children" [25] "named_modules" "named_parameters" "out_channels" [28] "output_padding" "padding" "padding_mode" [31] "parameters" "register_backward_hook" "register_buffer" [34] "register_forward_hook" "register_forward_pre_hook" "register_parameter" [37] "requires_grad_" "reset_parameters" "share_memory" [40] "state_dict" "stride" "T_destination" [43] "to" "train" "training" [46] "transposed" "type" "weight" [49] "zero_grad"Kernel size from(3, 3) to 9.
In addition, one could replace values inside tensors with the same assignment operator.
For single in-place value modification:
```r
x = tensor(c(1, 2), c(3, 4))
# tensor([[1., 2.],
#         [3., 4.]])

print(x[0][0])
# tensor(1.)

# Now change it to 99.
x[0][0] %f% 99
print(x[0][0])
# tensor(99.)

print(x)
# tensor([[99.,  2.],
#         [ 3.,  4.]])
```

Modify 2 or more values:
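By analogy with the single-value case, a sketch of modifying several values at once (whether `%f%` broadcasts a vector over a sliced view like this is an assumption):

```r
# replace the whole first row in place
x[0] %f% c(55, 66)
print(x)
# expected: tensor([[55., 66.],
#                   [ 3.,  4.]])
```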
How does one slice tensors? To avoid confusion, slicing follows the same convention as in Python. The `narrow` function requires a tensor and its dimensions. Let's see an example:
We can extract and play with tensor dimensions:
First, let's understand the tensor:
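The construction of `a` is not shown here; one way to build a tensor with this layout (a sketch, assuming `tensor()` accepts an R array, which is an assumption based on its use with vectors earlier):

```r
# 81 consecutive integers arranged column-major into a 3 x 3 x 3 x 3 array
a = tensor(array(seq(81L), dim = c(3, 3, 3, 3)))
```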
a %>% narrow('[:,:,:,:]')tensor([[[[ 1, 28, 55], [10, 37, 64], [19, 46, 73]], [[ 4, 31, 58], [13, 40, 67], [22, 49, 76]], [[ 7, 34, 61], [16, 43, 70], [25, 52, 79]]], [[[ 2, 29, 56], [11, 38, 65], [20, 47, 74]], [[ 5, 32, 59], [14, 41, 68], [23, 50, 77]], [[ 8, 35, 62], [17, 44, 71], [26, 53, 80]]], [[[ 3, 30, 57], [12, 39, 66], [21, 48, 75]], [[ 6, 33, 60], [15, 42, 69], [24, 51, 78]], [[ 9, 36, 63], [18, 45, 72], [27, 54, 81]]]], dtype=torch.int32)We could imagine that the tensor contains 3 R lists and each listcontain 3 matrices with 3 rows and 3 columns.
A `:` without any indicated value before or after it will not modify the tensor; it simply selects everything along that dimension.
How do we extract the 1st list from the tensor? Very simple:
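The call producing the output below is a slice on the first dimension; a sketch, assuming the same Python-style slice strings as above (the `0:1` range keeps the dimension in place rather than dropping it):

```r
# keep only the 1st list (index 0) along the first dimension
a %>% narrow('[0:1,:,:,:]')
```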
```
tensor([[[[ 1, 28, 55],
          [10, 37, 64],
          [19, 46, 73]],

         [[ 4, 31, 58],
          [13, 40, 67],
          [22, 49, 76]],

         [[ 7, 34, 61],
          [16, 43, 70],
          [25, 52, 79]]]], dtype=torch.int32)
```

Why from 0? Because indexing starts from 0, not from 1.
We could also extract the 1st matrix from each of the 3 lists, as sketched below.
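Sketches of the two calls whose outputs follow (the exact slice strings are assumptions inferred from the printed results):

```r
# 1st matrix (index 0) from each of the 3 lists -> shape (3, 3, 3)
a %>% narrow('[:,0,:,:]')

# additionally take just the 1st row of each of those matrices -> shape (3, 3)
a %>% narrow('[:,0,0,:]')
```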
```
tensor([[[ 1, 28, 55],
         [10, 37, 64],
         [19, 46, 73]],

        [[ 2, 29, 56],
         [11, 38, 65],
         [20, 47, 74]],

        [[ 3, 30, 57],
         [12, 39, 66],
         [21, 48, 75]]], dtype=torch.int32)
```

And, taking additionally just the 1st row of each of those matrices:

```
tensor([[ 1, 28, 55],
        [ 2, 29, 56],
        [ 3, 30, 57]], dtype=torch.int32)
```

How to build a model with the fastai `Module` class? Simple!
Prepare data:
```r
library(magrittr)
library(fastai)
library(zeallot)

if (!file.exists('mnist.pkl.gz')) {
  download.file('http://deeplearning.net/data/mnist/mnist.pkl.gz',
                'mnist.pkl.gz')
  R.utils::gunzip("mnist.pkl.gz", remove = FALSE)
}

c(c(x_train, y_train), c(x_valid, y_valid), res) %<-%
  reticulate::py_load_object('mnist.pkl', encoding = 'latin-1')

x_train = x_train[1:500, 1:784]
x_valid = x_valid[1:500, 1:784]
y_train = as.integer(y_train)[1:500]
y_valid = as.integer(y_valid)[1:500]
```

Plot an example:
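The plotting code is not reproduced here; a minimal sketch using base R graphics (assuming each row is a flattened 28 x 28 greyscale digit):

```r
# reshape the first training row into a 28 x 28 image matrix
img = matrix(x_train[1, ], nrow = 28, byrow = TRUE)

# flip vertically so the digit appears upright, then draw in greyscale
image(t(img[nrow(img):1, ]), col = grey.colors(256), axes = FALSE,
      main = paste('label:', y_train[1]))
```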
Prepare data loaders and build a model:
```r
TensorDataset = torch()$utils$data$TensorDataset

bs = 32
train_ds = TensorDataset(tensor(x_train), tensor(y_train))
valid_ds = TensorDataset(tensor(x_valid), tensor(y_valid))
train_dl = TfmdDL(train_ds, bs = bs, shuffle = TRUE)
valid_dl = TfmdDL(valid_ds, bs = 2 * bs)

dls = Data_Loaders(train_dl, valid_dl)

one = one_batch(dls)
x = one[[1]]
y = one[[2]]
x$shape; y$shape

nn = nn()
Functional = torch()$nn$functional
```

Put your model into the `nn_module` function:
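The `nn_module` definition itself is not reproduced above. As a rough sketch of the idea (a substitute, not the exact `nn_module` code), an equivalent model can be assembled with plain `nn$Sequential` from the `nn` namespace created above and sanity-checked on the batch we just drew:

```r
# a small fully connected net: 784 pixels -> 50 hidden units -> 10 digit classes
model = nn$Sequential(
  nn$Linear(784L, 50L),
  nn$ReLU(),
  nn$Linear(50L, 10L)
)

# forward pass on the batch drawn above; cast to float32 to match the weights
out = model(x$float())
out$shape
# torch.Size([32, 10])
```

From here the model can be wrapped, together with `dls`, in a `Learner` with a cross-entropy loss and trained as usual.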