
GLM with Elastic Net Regularization Classification Learner

Source:R/LearnerClassifGlmnet.R
mlr_learners_classif.glmnet.Rd

Generalized linear models with elastic net regularization. Calls glmnet::glmnet() from package glmnet.

Details

Caution: This learner differs from learners calling glmnet::cv.glmnet() in that it does not use the internal optimization of the parameter lambda. Instead, lambda needs to be tuned by the user (e.g., via mlr3tuning). When lambda is tuned, glmnet will be trained for each tuning iteration. While fitting the whole path of lambdas would be more efficient, as is done by default in glmnet::glmnet(), tuning/selecting the parameter at prediction time (using parameter s) is currently not supported in mlr3 (at least not in an efficient manner). Tuning the s parameter is, therefore, currently discouraged.

When the data are i.i.d. and efficiency is key, we recommend using the respective auto-tuning counterparts, mlr_learners_classif.cv_glmnet() or mlr_learners_regr.cv_glmnet(). However, in some situations this is not applicable, usually when the data are imbalanced or not i.i.d. (longitudinal, time series) and tuning requires custom resampling strategies (blocked design, stratification).
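As a sketch of the external tuning workflow described above, lambda can be tuned with mlr3tuning. The search range, log scale, and grid resolution below are illustrative assumptions, not defaults of the learner:

```r
# Illustrative sketch (assumes mlr3, mlr3learners, mlr3tuning are installed);
# the tuning range and grid resolution are arbitrary choices for demonstration.
library(mlr3)
library(mlr3learners)
library(mlr3tuning)

learner = lrn("classif.glmnet",
  alpha = 1,                                  # pure lasso penalty (illustrative)
  lambda = to_tune(1e-4, 1, logscale = TRUE)  # tune lambda on a log scale
)

instance = tune(
  tuner = tnr("grid_search", resolution = 20),
  task = tsk("sonar"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce")
)

# best lambda found and the tuned parameter values
instance$result_learner_param_vals
```

Note that, as stated above, glmnet is refit in every tuning iteration, so this is less efficient than the cv_glmnet counterparts when those are applicable.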

Dictionary

This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn():

mlr_learners$get("classif.glmnet")
lrn("classif.glmnet")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”

  • Required Packages: mlr3, mlr3learners, glmnet

Parameters

Id                     Type       Default    Levels                    Range
alpha                  numeric    1          -                         \([0, 1]\)
big                    numeric    9.9e+35    -                         \((-\infty, \infty)\)
devmax                 numeric    0.999      -                         \([0, 1]\)
dfmax                  integer    -          -                         \([0, \infty)\)
eps                    numeric    1e-06      -                         \([0, 1]\)
epsnr                  numeric    1e-08      -                         \([0, 1]\)
exact                  logical    FALSE      TRUE, FALSE               -
exclude                integer    -          -                         \([1, \infty)\)
exmx                   numeric    250        -                         \((-\infty, \infty)\)
fdev                   numeric    1e-05      -                         \([0, 1]\)
gamma                  numeric    1          -                         \((-\infty, \infty)\)
intercept              logical    TRUE       TRUE, FALSE               -
lambda                 untyped    -          -                         -
lambda.min.ratio       numeric    -          -                         \([0, 1]\)
lower.limits           untyped    -          -                         -
maxit                  integer    100000     -                         \([1, \infty)\)
mnlam                  integer    5          -                         \([1, \infty)\)
mxit                   integer    100        -                         \([1, \infty)\)
mxitnr                 integer    25         -                         \([1, \infty)\)
nlambda                integer    100        -                         \([1, \infty)\)
use_pred_offset        logical    TRUE       TRUE, FALSE               -
penalty.factor         untyped    -          -                         -
pmax                   integer    -          -                         \([0, \infty)\)
pmin                   numeric    1e-09      -                         \([0, 1]\)
prec                   numeric    1e-10      -                         \((-\infty, \infty)\)
relax                  logical    FALSE      TRUE, FALSE               -
s                      numeric    0.01       -                         \([0, \infty)\)
standardize            logical    TRUE       TRUE, FALSE               -
standardize.response   logical    FALSE      TRUE, FALSE               -
thresh                 numeric    1e-07      -                         \([0, \infty)\)
trace.it               integer    0          -                         \([0, 1]\)
type.gaussian          character  -          covariance, naive         -
type.logistic          character  -          Newton, modified.Newton   -
type.multinomial       character  -          ungrouped, grouped        -
upper.limits           untyped    -          -                         -
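Hyperparameters from the table above can be set at construction via the lrn() sugar function or afterwards through the learner's param_set. The values below (alpha = 0.5, s = 0.1) are arbitrary illustrations, not recommendations:

```r
# Sketch of setting hyperparameters (values are illustrative only)
library(mlr3)
library(mlr3learners)

# set at construction: elastic net mixing parameter and prediction-time lambda
learner = lrn("classif.glmnet", alpha = 0.5, s = 0.1)

# parameters can also be changed after construction via the param_set
learner$param_set$values$standardize = TRUE
```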

Internal Encoding

Starting with mlr3 v0.5.0, the order of class labels is reversed prior to model fitting to comply with the stats::glm() convention that the negative class is provided as the first factor level.

Offset

If a Task contains a column with the offset role, it is automatically incorporated during training via the offset argument in glmnet::glmnet(). During prediction, the offset column from the test set is used only if use_pred_offset = TRUE (default), passed via the newoffset argument in glmnet::predict.glmnet(). Otherwise, if the user sets use_pred_offset = FALSE, a zero offset is applied, effectively disabling the offset adjustment during prediction.
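As a sketch of this mechanism, a numeric column can be assigned the offset role via the task's set_col_roles() method. The data frame my_data and the column name exposure_log below are hypothetical, chosen only for illustration:

```r
# Sketch only: 'my_data' and its numeric column 'exposure_log' are hypothetical.
library(mlr3)
library(mlr3learners)

task = as_task_classif(my_data, target = "y")
task$set_col_roles("exposure_log", roles = "offset")  # passed as glmnet 'offset' in training

# default use_pred_offset = TRUE reuses the test-set offset column at prediction
learner = lrn("classif.glmnet", use_pred_offset = TRUE)
learner$train(task)
```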

References

Friedman J, Hastie T, Tibshirani R (2010). “Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software, 33(1), 1–22. doi:10.18637/jss.v033.i01.

See also

Other Learner: mlr_learners_classif.cv_glmnet, mlr_learners_classif.kknn, mlr_learners_classif.lda, mlr_learners_classif.log_reg, mlr_learners_classif.multinom, mlr_learners_classif.naive_bayes, mlr_learners_classif.nnet, mlr_learners_classif.qda, mlr_learners_classif.ranger, mlr_learners_classif.svm, mlr_learners_classif.xgboost, mlr_learners_regr.cv_glmnet, mlr_learners_regr.glmnet, mlr_learners_regr.kknn, mlr_learners_regr.km, mlr_learners_regr.lm, mlr_learners_regr.nnet, mlr_learners_regr.ranger, mlr_learners_regr.svm, mlr_learners_regr.xgboost

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifGlmnet

Methods

Public methods

Inherited methods


Method new()

Creates a new instance of this R6 class.


Method selected_features()

Returns the set of selected features as reported by glmnet::predict.glmnet() with type set to "nonzero".

Usage

LearnerClassifGlmnet$selected_features(lambda = NULL)

Arguments

lambda

(numeric(1))
Custom lambda, defaults to the active lambda depending on the parameter set.

Returns

(character()) of feature names.
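A minimal usage sketch of selected_features(), assuming a trained learner; the lambda value 0.05 is an arbitrary illustration:

```r
# Sketch: query features with nonzero coefficients at a given lambda (0.05 is illustrative)
library(mlr3)
library(mlr3learners)

learner = lrn("classif.glmnet")
learner$train(tsk("sonar"))
learner$selected_features(lambda = 0.05)  # character vector of feature names
```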


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifGlmnet$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("glmnet", quietly = TRUE)) {
  # Define the Learner and set parameter values
  learner = lrn("classif.glmnet")
  print(learner)

  # Define a Task
  task = tsk("sonar")

  # Create train and test set
  ids = partition(task)

  # Train the learner on the training ids
  learner$train(task, row_ids = ids$train)

  # Print the model
  print(learner$model)

  # Importance method
  if ("importance" %in% learner$properties) print(learner$importance())

  # Make predictions for the test rows
  predictions = learner$predict(task, row_ids = ids$test)

  # Score the predictions
  predictions$score()
}
#> <LearnerClassifGlmnet:classif.glmnet>: GLM with Elastic Net Regularization
#> * Model: -
#> * Parameters: use_pred_offset=TRUE
#> * Packages: mlr3, mlr3learners, glmnet
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric
#> * Properties: multiclass, offset, twoclass, weights
#>
#> Call:  (if (cv) glmnet::cv.glmnet else glmnet::glmnet)(x = data, y = target,
#>      family = "binomial")
#>
#>     Df  %Dev   Lambda
#> 1    0  0.00 0.220000
#> 2    2  2.84 0.200500
#> 3    2  6.12 0.182700
#> 4    2  8.92 0.166400
#> 5    2 11.34 0.151700
#> 6    3 13.57 0.138200
#> 7    4 16.23 0.125900
#> 8    4 18.66 0.114700
#> 9    4 20.79 0.104500
#> 10   6 22.87 0.095250
#> 11   7 24.99 0.086780
#> 12   8 27.05 0.079080
#> 13   8 28.98 0.072050
#> 14   8 30.71 0.065650
#> 15   9 32.81 0.059820
#> 16   9 34.73 0.054500
#> 17   9 36.45 0.049660
#> 18  11 38.18 0.045250
#> 19  13 40.12 0.041230
#> 20  13 41.90 0.037570
#> 21  15 43.69 0.034230
#> 22  15 45.40 0.031190
#> 23  16 46.95 0.028420
#> 24  18 48.54 0.025890
#> 25  20 50.10 0.023590
#> 26  21 51.66 0.021500
#> 27  21 53.09 0.019590
#> 28  22 54.50 0.017850
#> 29  23 55.88 0.016260
#> 30  24 57.15 0.014820
#> 31  26 58.42 0.013500
#> 32  25 59.69 0.012300
#> 33  28 60.88 0.011210
#> 34  31 62.42 0.010210
#> 35  31 64.03 0.009306
#> 36  32 65.54 0.008479
#> 37  33 67.00 0.007726
#> 38  36 68.46 0.007039
#> 39  38 70.09 0.006414
#> 40  39 71.65 0.005844
#> 41  40 73.21 0.005325
#> 42  39 74.71 0.004852
#> 43  40 76.07 0.004421
#> 44  41 77.39 0.004028
#> 45  42 78.67 0.003670
#> 46  42 80.06 0.003344
#> 47  40 81.23 0.003047
#> 48  43 82.42 0.002776
#> 49  43 83.59 0.002530
#> 50  43 84.74 0.002305
#> 51  46 85.87 0.002100
#> 52  46 86.99 0.001914
#> 53  47 88.05 0.001744
#> 54  47 89.05 0.001589
#> 55  48 90.00 0.001448
#> 56  48 90.87 0.001319
#> 57  48 91.68 0.001202
#> 58  48 92.42 0.001095
#> 59  48 93.10 0.000998
#> 60  48 93.72 0.000909
#> 61  48 94.28 0.000828
#> 62  47 94.79 0.000755
#> 63  47 95.25 0.000688
#> 64  47 95.67 0.000627
#> 65  47 96.05 0.000571
#> 66  47 96.40 0.000520
#> 67  47 96.71 0.000474
#> 68  47 97.00 0.000432
#> 69  47 97.27 0.000394
#> 70  48 97.51 0.000359
#> 71  48 97.73 0.000327
#> 72  48 97.93 0.000298
#> 73  48 98.11 0.000271
#> 74  48 98.28 0.000247
#> 75  48 98.43 0.000225
#> 76  48 98.57 0.000205
#> 77  48 98.69 0.000187
#> 78  48 98.81 0.000170
#> 79  48 98.91 0.000155
#> 80  48 99.01 0.000141
#> 81  48 99.10 0.000129
#> 82  48 99.17 0.000117
#> 83  48 99.25 0.000107
#> 84  48 99.31 0.000097
#> 85  48 99.37 0.000089
#> 86  48 99.43 0.000081
#> 87  48 99.48 0.000074
#> 88  48 99.52 0.000067
#> 89  48 99.57 0.000061
#> 90  48 99.60 0.000056
#> 91  48 99.64 0.000051
#> 92  48 99.67 0.000046
#> 93  48 99.70 0.000042
#> 94  48 99.72 0.000038
#> 95  48 99.75 0.000035
#> 96  48 99.77 0.000032
#> 97  49 99.79 0.000029
#> 98  49 99.81 0.000027
#> 99  49 99.82 0.000024
#> 100 49 99.84 0.000022
#> Warning: Multiple lambdas have been fit. Lambda will be set to 0.01 (see parameter 's').
#> classif.ce
#>  0.3333333
