Package:hdme
Type:Package
Title:High-Dimensional Regression with Measurement Error
Version:0.6.0
Encoding:UTF-8
Maintainer:Oystein Sorensen <oystein.sorensen.1985@gmail.com>
Description:Penalized regression for generalized linear models for measurement error problems (a.k.a. errors-in-variables). The package contains a version of the lasso (L1-penalization) which corrects for measurement error (Sorensen et al. (2015) <doi:10.5705/ss.2013.180>). It also contains an implementation of the Generalized Matrix Uncertainty Selector, which is a version of the (Generalized) Dantzig Selector for the case of measurement error (Sorensen et al. (2018) <doi:10.1080/10618600.2018.1425626>).
License:GPL-3
RoxygenNote:7.2.3
Imports:glmnet (≥ 3.0.0), ggplot2 (≥ 2.2.1), Rdpack, Rcpp (≥ 0.12.15), Rglpk (≥ 0.6-1), rlang (≥ 1.0), stats
URL:https://github.com/osorensen/hdme
RdMacros:Rdpack
Suggests:knitr, rmarkdown, testthat, dplyr, tidyr, covr
VignetteBuilder:knitr
LinkingTo:Rcpp, RcppArmadillo
NeedsCompilation:yes
Packaged:2023-05-16 18:52:58 UTC; oyss
Author:Oystein Sorensen [aut, cre]
Repository:CRAN
Date/Publication:2023-05-16 19:10:02 UTC

Extract Coefficients of a Corrected Lasso object

Description

Default coef method for a corrected_lasso object.

Usage

## S3 method for class 'corrected_lasso'
coef(object, ...)

Arguments

object

Fitted model object returned by corrected_lasso.

...

Other arguments (not used).
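Examples

A minimal sketch, reusing the simulation setup from the corrected_lasso examples:

set.seed(1)
n <- 100 # Number of samples
p <- 20 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # True (latent) variables
sigmaUU <- diag(x = 0.2, nrow = p, ncol = p) # Measurement error covariance
W <- X + rnorm(n, sd = sqrt(diag(sigmaUU))) # Observed matrix
y <- X %*% c(rep(1, 5), rep(0, p - 5)) + rnorm(n) # Response
fit <- corrected_lasso(W, y, sigmaUU, family = "gaussian")
coef(fit)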


Extract Coefficients of a Generalized Dantzig Selector Object

Description

Default coef method for a gds object.

Usage

## S3 method for class 'gds'
coef(object, all = FALSE, ...)

Arguments

object

Fitted model object returned by gds.

all

Logical indicating whether to show all coefficient estimates, or only non-zeros.

...

Other arguments (not used).
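Examples

A minimal sketch, following the gds examples; all = TRUE also shows the zero estimates:

set.seed(1)
n <- 500 # Number of samples
p <- 10 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # Design matrix
beta <- c(rep(1, 3), rep(0, p - 3)) # True coefficients
y <- rbinom(n, 1, plogis(X %*% beta)) # Binomial response
fit <- gds(X, y, lambda = 0.1, family = "binomial")
coef(fit) # Non-zero estimates only
coef(fit, all = TRUE) # All p estimates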


Extract Coefficients of a GMU Lasso object

Description

Default coef method for a gmu_lasso object.

Usage

## S3 method for class 'gmu_lasso'
coef(object, all = FALSE, ...)

Arguments

object

Fitted model object returned by gmu_lasso.

all

Logical indicating whether to show all coefficient estimates, or only non-zeros. Only used when delta is a single value.

...

Other arguments (not used).
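Examples

A minimal sketch, following the gmu_lasso examples; all takes effect because delta is a single value:

set.seed(1)
n <- 200 # Number of samples
p <- 50 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # True (latent) variables
W <- X + 0.2 * matrix(rnorm(n * p), nrow = n) # Observed, with error
y <- rbinom(n, 1, plogis(X %*% c(rep(1, 5), rep(0, p - 5)))) # Response
fit <- gmu_lasso(W, y, delta = 0.1, family = "binomial")
coef(fit)
coef(fit, all = TRUE)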


Extract Coefficients of a GMUS object

Description

Default coef method for a gmus object.

Usage

## S3 method for class 'gmus'
coef(object, all = FALSE, ...)

Arguments

object

Fitted model object returned by gmus.

all

Logical indicating whether to show all coefficient estimates, or only non-zeros. Only used when delta is a single value.

...

Other arguments (not used).
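Examples

A minimal sketch, following the gmus examples, with a single delta so that all applies:

set.seed(1)
n <- 100 # Number of samples
p <- 50 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # True (latent) variables
W <- X + matrix(rnorm(n * p, sd = 0.4), nrow = n) # Observed, with error
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
y <- X %*% beta + rnorm(n) # Response
fit <- gmus(W, y, delta = 0.1)
coef(fit)
coef(fit, all = TRUE)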


Corrected Lasso

Description

Lasso (L1-regularization) for generalized linear models with measurement error.

Usage

corrected_lasso(
  W,
  y,
  sigmaUU,
  family = c("gaussian", "binomial", "poisson"),
  radii = NULL,
  no_radii = NULL,
  alpha = 0.1,
  maxits = 5000,
  tol = 1e-12
)

Arguments

W

Design matrix, measured with error. Must be a numeric matrix.

y

Vector of responses.

sigmaUU

Covariance matrix of the measurement error.

family

Response type. Character string of length 1. Possible values are "gaussian", "binomial" and "poisson".

radii

Vector containing the set of radii of the l1-ball onto which the solution is projected. If not provided, the algorithm will select an evenly spaced vector of 20 radii.

no_radii

Length of vector radii, i.e., the number of regularization parameters to fit the corrected lasso for.

alpha

Step size of the projected gradient descent algorithm. Default is 0.1.

maxits

Maximum number of iterations of the projected gradient descent algorithm for each radius. Default is 5000.

tol

Iteration tolerance for change in sum of squares of beta. Defaults to 1e-12.

Details

Corrected version of the lasso for generalized linear models. The method requires an estimate of the measurement error covariance matrix. The Poisson regression option might be sensitive to numerical overflow; please file a GitHub issue in the source repository if you experience this.

Value

An object of class "corrected_lasso".

References

Loh P, Wainwright MJ (2012). “High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity.” Ann. Statist., 40(3), 1637–1664.

Sorensen O, Frigessi A, Thoresen M (2015). “Measurement error in lasso: Impact and likelihood bias correction.” Statistica Sinica, 25(2), 809–829.

Examples

# Example with linear regression
# Number of samples
n <- 100
# Number of covariates
p <- 50
# True (latent) variables
X <- matrix(rnorm(n * p), nrow = n)
# Measurement error covariance matrix
# (typically estimated by replicate measurements)
sigmaUU <- diag(x = 0.2, nrow = p, ncol = p)
# Measurement matrix (this is the one we observe)
W <- X + rnorm(n, sd = sqrt(diag(sigmaUU)))
# Coefficient
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
# Response
y <- X %*% beta + rnorm(n, sd = 1)
# Run the corrected lasso
fit <- corrected_lasso(W, y, sigmaUU, family = "gaussian")
coef(fit)
plot(fit)
plot(fit, type = "path")

# Binomial, logistic regression
# Number of samples
n <- 1000
# Number of covariates
p <- 50
# True (latent) variables
X <- matrix(rnorm(n * p), nrow = n)
# Measurement error covariance matrix
sigmaUU <- diag(x = 0.2, nrow = p, ncol = p)
# Measurement matrix (this is the one we observe)
W <- X + rnorm(n, sd = sqrt(diag(sigmaUU)))
# Response
y <- rbinom(n, size = 1, prob = plogis(X %*% c(rep(5, 5), rep(0, p - 5))))
fit <- corrected_lasso(W, y, sigmaUU, family = "binomial")
plot(fit)
coef(fit)

Cross-validated Corrected lasso

Description

Cross-validated Corrected lasso

Usage

cv_corrected_lasso(
  W,
  y,
  sigmaUU,
  n_folds = 10,
  family = "gaussian",
  radii = NULL,
  no_radii = 100,
  alpha = 0.1,
  maxits = 5000,
  tol = 1e-12
)

Arguments

W

Design matrix, measured with error.

y

Vector of the continuous response value.

sigmaUU

Covariance matrix of the measurement error.

n_folds

Number of folds to use in cross-validation. Default is 10.

family

Only "gaussian" is implemented at the moment.

radii

Optional vector containing the set of radii of the l1-ball onto which the solution is projected.

no_radii

Length of vector radii, i.e., the number of regularization parameters to fit the corrected lasso for.

alpha

Optional step size of the projected gradient descent algorithm. Default is 0.1.

maxits

Optional maximum number of iterations of the projected gradient descent algorithm for each radius. Default is 5000.

tol

Iteration tolerance for change in sum of squares of beta. Defaults to 1e-12.

Details

Corrected version of the lasso for the case of linear regression, estimated using cross-validation. The method requires an estimate of the measurement error covariance matrix.

Value

An object of class "cv_corrected_lasso".

References

Loh P, Wainwright MJ (2012). “High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity.” Ann. Statist., 40(3), 1637–1664.

Sorensen O, Frigessi A, Thoresen M (2015). “Measurement error in lasso: Impact and likelihood bias correction.” Statistica Sinica, 25(2), 809–829.

Examples

# Gaussian
set.seed(100)
n <- 100; p <- 50 # Problem dimensions
# True (latent) variables
X <- matrix(rnorm(n * p), nrow = n)
# Measurement error covariance matrix
# (typically estimated by replicate measurements)
sigmaUU <- diag(x = 0.2, nrow = p, ncol = p)
# Measurement matrix (this is the one we observe)
W <- X + rnorm(n, sd = sqrt(diag(sigmaUU)))
# Coefficient
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
# Response
y <- X %*% beta + rnorm(n, sd = 1)
# Run the cross-validated corrected lasso
cvfit <- cv_corrected_lasso(W, y, sigmaUU, no_radii = 5, n_folds = 3)
plot(cvfit)
print(cvfit)
# Run the corrected lasso using the radius found by cross-validation
fit <- corrected_lasso(W, y, sigmaUU, family = "gaussian",
                       radii = cvfit$radius_min)
coef(fit)
plot(fit)

Cross-Validated Generalized Dantzig Selector

Description

Generalized Dantzig Selector with cross-validation.

Usage

cv_gds(
  X,
  y,
  family = "gaussian",
  no_lambda = 10,
  lambda = NULL,
  n_folds = 5,
  weights = rep(1, length(y))
)

Arguments

X

Design matrix.

y

Vector of the continuous response value.

family

Use "gaussian" for linear regression, "binomial" for logisticregression and "poisson" for Poisson regression.

no_lambda

Length of the vector lambda of regularization parameters. Note that if lambda is not provided, the actual number of values might differ slightly, due to the algorithm used by glmnet::glmnet in finding a grid of lambda values.

lambda

Regularization parameter. If not supplied and if no_lambda > 1, a sequence of no_lambda regularization parameters is computed with glmnet::glmnet. If no_lambda = 1 then the cross-validated optimum for the lasso is computed using glmnet::cv.glmnet.

n_folds

Number of cross-validation folds to use.

weights

A vector of weights for each row of X. Defaults to 1 per observation.

Details

Cross-validation loss is calculated as the deviance of the model divided by the number of observations. For the Gaussian case, this is the mean squared error. Weights supplied through the weights argument are used both in fitting the models and when evaluating the test set deviance.

Value

An object of class cv_gds.

References

Candes E, Tao T (2007). “The Dantzig selector: Statistical estimation when p is much larger than n.” Ann. Statist., 35(6), 2313–2351.

James GM, Radchenko P (2009). “A generalized Dantzig selector with shrinkage tuning.” Biometrika, 96(2), 323–337.

Examples

## Not run:
# Example with logistic regression
n <- 1000 # Number of samples
p <- 10 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # True (latent) variables; design matrix
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5)) # True regression coefficients
y <- rbinom(n, 1, (1 + exp(-X %*% beta))^(-1)) # Binomially distributed response
cv_fit <- cv_gds(X, y, family = "binomial", no_lambda = 50, n_folds = 10)
print(cv_fit)
plot(cv_fit)

# Now fit a single GDS at the optimum lambda value determined by cross-validation
fit <- gds(X, y, lambda = cv_fit$lambda_min, family = "binomial")
plot(fit)

# Compare this to the fit for which lambda is selected by GDS.
# This automatic selection is performed by glmnet::cv.glmnet,
# for the sake of speed
fit2 <- gds(X, y, family = "binomial")

# The following plot compares the two fits
library(ggplot2)
library(tidyr)
df <- data.frame(fit = fit$beta, fit2 = fit2$beta, index = seq(1, p, by = 1))
ggplot(gather(df, key = "Model", value = "Coefficient", -index),
       aes(x = index, y = Coefficient, color = Model)) +
  geom_point() +
  theme(legend.title = element_blank())

## End(Not run)

Generalized Dantzig Selector

Description

Generalized Dantzig Selector

Usage

gds(X, y, lambda = NULL, family = "gaussian", weights = NULL)

Arguments

X

Design matrix.

y

Vector of the continuous response value.

lambda

Regularization parameter. Only a single value is supported.

family

Use "gaussian" for linear regression, "binomial" for logistic regression and "poisson" for Poisson regression.

weights

A vector of weights for each row ofX.

Value

Intercept and coefficients at the value of lambda specified.

References

Candes E, Tao T (2007). “The Dantzig selector: Statistical estimation when p is much larger than n.” Ann. Statist., 35(6), 2313–2351.

James GM, Radchenko P (2009). “A generalized Dantzig selector with shrinkage tuning.” Biometrika, 96(2), 323–337.

Examples

# Example with logistic regression
n <- 1000 # Number of samples
p <- 10 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # True (latent) variables; design matrix
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5)) # True regression coefficients
y <- rbinom(n, 1, (1 + exp(-X %*% beta))^(-1)) # Binomially distributed response
fit <- gds(X, y, family = "binomial")
print(fit)
plot(fit)
coef(fit)

# Try with more penalization
fit <- gds(X, y, family = "binomial", lambda = 0.1)
coef(fit)
coef(fit, all = TRUE)

# Case weighting
# Assume we wish to put more emphasis on predicting the positive cases correctly.
# In this case we give the 1s three times the weight of the 0s.
weights <- (y == 0) * 1 + (y == 1) * 3
fit_w <- gds(X, y, family = "binomial", weights = weights, lambda = 0.1)

# Next we test this on a new dataset, generated with the same parameters
X_new <- matrix(rnorm(n * p), nrow = n)
y_new <- rbinom(n, 1, (1 + exp(-X_new %*% beta))^(-1))

# We use a 50 % threshold as classification rule
# Unweighted classification
classification <- ((1 + exp(- fit$intercept - X_new %*% fit$beta))^(-1) > 0.5) * 1
# Weighted classification
classification_w <- ((1 + exp(- fit_w$intercept - X_new %*% fit_w$beta))^(-1) > 0.5) * 1

# As expected, the weighted classification predicts many more 1s than 0s,
# since these are heavily up-weighted
table(classification, classification_w)

# Here we compare the performance of the weighted and unweighted models.
# The weighted model gets most of the 1s right, while the unweighted model
# has the best overall performance.
table(classification, y_new)
table(classification_w, y_new)

Generalized Matrix Uncertainty Lasso

Description

Generalized Matrix Uncertainty Lasso

Usage

gmu_lasso(
  W,
  y,
  lambda = NULL,
  delta = NULL,
  family = "binomial",
  active_set = TRUE,
  maxit = 1000
)

Arguments

W

Design matrix, measured with error. Must be a numeric matrix.

y

Vector of responses.

lambda

Regularization parameter. If not set, lambda.min from glmnet::cv.glmnet is used.

delta

Additional regularization parameter, bounding the measurement error.

family

Character string. Currently "binomial" and "poisson" are supported.

active_set

Logical. Whether or not to use an active set strategy to speed up the coordinate descent algorithm.

maxit

Maximum number of iterations of the iterative reweighting algorithm.

Value

An object of class "gmu_lasso".

References

Rosenbaum M, Tsybakov AB (2010). “Sparse recovery under matrix uncertainty.” Ann. Statist., 38(5), 2620–2651.

Sorensen O, Hellton KH, Frigessi A, Thoresen M (2018). “Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error.” Journal of Computational and Graphical Statistics, 27(4), 739–749. doi:10.1080/10618600.2018.1425626.

Examples

set.seed(1)
# Number of samples
n <- 200
# Number of covariates
p <- 100
# Number of nonzero features
s <- 10
# True coefficient vector
beta <- c(rep(1, s), rep(0, p - s))
# Standard deviation of measurement error
sdU <- 0.2
# True data, not observed
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
# Measured data, with error
W <- X + sdU * matrix(rnorm(n * p), nrow = n, ncol = p)
# Binomial response
y <- rbinom(n, 1, (1 + exp(-X %*% beta))^(-1))
# Run the GMU Lasso
fit <- gmu_lasso(W, y, delta = NULL)
print(fit)
coef(fit)
# Get an elbow plot, in order to choose delta.
plot(fit)

Generalized Matrix Uncertainty Selector

Description

Generalized Matrix Uncertainty Selector

Usage

gmus(W, y, lambda = NULL, delta = NULL, family = "gaussian", weights = NULL)

Arguments

W

Design matrix, measured with error. Must be a numeric matrix.

y

Vector of responses.

lambda

Regularization parameter.

delta

Additional regularization parameter, bounding the measurement error.

family

"gaussian" for linear regression, "binomial" for logisticregression or "poisson" for Poisson regression. Defaults go "gaussian".

weights

A vector of weights for each row of W.

Value

An object of class "gmus".

References

Rosenbaum M, Tsybakov AB (2010). “Sparse recovery under matrix uncertainty.” Ann. Statist., 38(5), 2620–2651.

Sorensen O, Hellton KH, Frigessi A, Thoresen M (2018). “Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error.” Journal of Computational and Graphical Statistics, 27(4), 739–749. doi:10.1080/10618600.2018.1425626.

Examples

# Example with linear regression
set.seed(1)
n <- 100 # Number of samples
p <- 50 # Number of covariates
# True (latent) variables
X <- matrix(rnorm(n * p), nrow = n)
# Measurement matrix (this is the one we observe)
W <- X + matrix(rnorm(n * p, sd = 1), nrow = n, ncol = p)
# Coefficient vector
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
# Response
y <- X %*% beta + rnorm(n, sd = 1)
# Run the MU Selector
fit1 <- gmus(W, y)
# Draw an elbow plot to select delta
plot(fit1)
coef(fit1)

# Now, according to the "elbow rule", choose the final delta where the
# curve has an "elbow". In this case, the elbow is at about delta = 0.08,
# so we use this to compute the final estimate:
fit2 <- gmus(W, y, delta = 0.08)
# Plot the coefficients
plot(fit2)
coef(fit2)
coef(fit2, all = TRUE)

Matrix Uncertainty Selector

Description

Matrix Uncertainty Selector for linear regression.

Usage

mus(W, y, lambda = NULL, delta = NULL)

Arguments

W

Design matrix, measured with error. Must be a numeric matrix.

y

Vector of responses.

lambda

Regularization parameter.

delta

Additional regularization parameter, bounding the measurement error.

Details

This function is just a wrapper for gmus(W, y, lambda, delta, family = "gaussian").

Value

An object of class "gmus".

References

Rosenbaum M, Tsybakov AB (2010). “Sparse recovery under matrix uncertainty.” Ann. Statist., 38(5), 2620–2651.

Sorensen O, Hellton KH, Frigessi A, Thoresen M (2018). “Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error.” Journal of Computational and Graphical Statistics, 27(4), 739–749. doi:10.1080/10618600.2018.1425626.

Examples

# Example with Gaussian response
set.seed(1)
# Number of samples
n <- 100
# Number of covariates
p <- 50
# True (latent) variables
X <- matrix(rnorm(n * p), nrow = n)
# Measurement matrix (this is the one we observe)
W <- X + matrix(rnorm(n * p, sd = 1), nrow = n, ncol = p)
# Coefficient vector
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
# Response
y <- X %*% beta + rnorm(n, sd = 1)
# Run the MU Selector
fit1 <- mus(W, y)
# Draw an elbow plot to select delta
plot(fit1)
coef(fit1)

# Now, according to the "elbow rule", choose the final delta where the
# curve has an "elbow". In this case, the elbow is at about delta = 0.08,
# so we use this to compute the final estimate:
fit2 <- mus(W, y, delta = 0.08)
plot(fit2) # Plot the coefficients
coef(fit2)
coef(fit2, all = TRUE)

Generalized Matrix Uncertainty Selector for logistic regression

Description

Internal function.

Usage

mus_glm(W, y, lambda, delta, family = c("binomial", "poisson"), weights = NULL)

Arguments

W

Design matrix, measured with error.

y

Vector of the binomial response value.

lambda

Regularization parameter due to model error.

delta

Regularization parameter due to measurement error.

family

"binomial" or "poisson"

weights

Case weights.

Value

Intercept and coefficients at the values of lambda and delta specified.


Algorithm for mus

Description

Algorithm for mus

Usage

musalgorithm(W, y, lambda, delta, weights = NULL)

Arguments

W

Matrix of measurements.

y

Response vector.

lambda

Regularization parameter due to residual.

delta

Regularization parameter due to measurement error.

weights

Optional case weights, as in mus_glm.


plot.corrected_lasso

Description

Plot the output of corrected_lasso

Usage

## S3 method for class 'corrected_lasso'
plot(x, type = "nonzero", label = FALSE, ...)

Arguments

x

Object of class corrected_lasso, returned from calling corrected_lasso().

type

Type of plot. Either "nonzero" or "path". Ignored if length(x$radii) == 1, in which case all coefficient estimates are plotted at the given regularization parameter.

label

Logical specifying whether to add labels to coefficient paths. Only used when type = "path".

...

Other arguments to plot (not used).

Examples

# Example with linear regression
n <- 100 # Number of samples
p <- 50 # Number of covariates
# True (latent) variables
X <- matrix(rnorm(n * p), nrow = n)
# Measurement error covariance matrix
# (typically estimated by replicate measurements)
sigmaUU <- diag(x = 0.2, nrow = p, ncol = p)
# Measurement matrix (this is the one we observe)
W <- X + rnorm(n, sd = sqrt(diag(sigmaUU)))
# Coefficient
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
# Response
y <- X %*% beta + rnorm(n, sd = 1)
# Run the corrected lasso
fit <- corrected_lasso(W, y, sigmaUU, family = "gaussian")
plot(fit)

plot.cv_corrected_lasso

Description

Plot the output of cv_corrected_lasso.

Usage

## S3 method for class 'cv_corrected_lasso'
plot(x, ...)

Arguments

x

The object to be plotted, returned from cv_corrected_lasso.

...

Other arguments to plot (not used).
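Examples

A minimal sketch, reusing the cv_corrected_lasso example setup; the plot shows the cross-validation loss across the radii:

set.seed(100)
n <- 100 # Number of samples
p <- 50 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # True (latent) variables
sigmaUU <- diag(x = 0.2, nrow = p, ncol = p) # Measurement error covariance
W <- X + rnorm(n, sd = sqrt(diag(sigmaUU))) # Observed matrix
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
y <- X %*% beta + rnorm(n, sd = 1) # Response
cvfit <- cv_corrected_lasso(W, y, sigmaUU, no_radii = 5, n_folds = 3)
plot(cvfit)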


plot.cv_gds

Description

Plot the output of cv_gds.

Usage

## S3 method for class 'cv_gds'
plot(x, ...)

Arguments

x

The object to be plotted, returned from cv_gds.

...

Other arguments to plot (not used).
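Examples

A minimal sketch, following the cv_gds examples but with small settings, since cv_gds can be slow; the plot shows the cross-validated deviance across the lambda grid:

set.seed(1)
n <- 200 # Number of samples
p <- 10 # Number of covariates
X <- matrix(rnorm(n * p), nrow = n) # Design matrix
y <- X %*% c(rep(1, 3), rep(0, p - 3)) + rnorm(n) # Gaussian response
cv_fit <- cv_gds(X, y, family = "gaussian", no_lambda = 10, n_folds = 5)
plot(cv_fit)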


Plot the estimates returned by gds

Description

Plot the number of nonzero coefficients at the given lambda.

Usage

## S3 method for class 'gds'
plot(x, ...)

Arguments

x

An object of class gds

...

Other arguments to plot (not used).

Examples

set.seed(1)
# Example with logistic regression
# Number of samples
n <- 1000
# Number of covariates
p <- 10
# True (latent) variables (design matrix)
X <- matrix(rnorm(n * p), nrow = n)
# True regression coefficients
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
# Binomially distributed response
y <- rbinom(n, 1, (1 + exp(-X %*% beta))^(-1))
# Fit the Generalized Dantzig Selector
gds <- gds(X, y, family = "binomial")
# Plot the estimated coefficients at the chosen lambda
plot(gds)

Plot the estimates returned by gmu_lasso

Description

Plot the number of nonzero coefficients along a range of delta values if delta has length larger than 1, or the estimated coefficients if delta has length 1.

Usage

## S3 method for class 'gmu_lasso'
plot(x, ...)

Arguments

x

An object of class gmu_lasso

...

Other arguments to plot (not used).

Examples

set.seed(1)
n <- 200
p <- 50
s <- 10
beta <- c(rep(1, s), rep(0, p - s))
sdU <- 0.2
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
W <- X + sdU * matrix(rnorm(n * p), nrow = n, ncol = p)
y <- rbinom(n, 1, (1 + exp(-X %*% beta))^(-1))
gmu_lasso <- gmu_lasso(W, y)
plot(gmu_lasso)

Plot the estimates returned by gmus and mus

Description

Plot the number of nonzero coefficients along a range of delta values if delta has length larger than 1, or the estimated coefficients if delta has length 1.

Usage

## S3 method for class 'gmus'
plot(x, ...)

Arguments

x

An object of class gmus

...

Other arguments to plot (not used).

Examples

# Example with linear regression
set.seed(1)
# Number of samples
n <- 100
# Number of covariates
p <- 50
# True (latent) variables
X <- matrix(rnorm(n * p), nrow = n)
# Measurement matrix (this is the one we observe)
W <- X + matrix(rnorm(n * p, sd = 0.4), nrow = n, ncol = p)
# Coefficient vector
beta <- c(seq(from = 0.1, to = 1, length.out = 5), rep(0, p - 5))
# Response
y <- X %*% beta + rnorm(n, sd = 1)
# Run the MU Selector
mus1 <- mus(W, y)
# Draw an elbow plot to select delta
plot(mus1)

# Now, according to the "elbow rule", choose the final delta where the
# curve has an "elbow". In this case, the elbow is at about delta = 0.08,
# so we use this to compute the final estimate:
mus2 <- mus(W, y, delta = 0.08)
# Plot the coefficients
plot(mus2)

Print a Corrected Lasso object

Description

Default print method for a corrected_lasso object.

Usage

## S3 method for class 'corrected_lasso'
print(x, ...)

Arguments

x

Fitted model object returned by corrected_lasso.

...

Other arguments (not used).


Print a Cross-Validated Corrected Lasso object

Description

Default print method for a cv_corrected_lasso object.

Usage

## S3 method for class 'cv_corrected_lasso'
print(x, ...)

Arguments

x

Fitted model object returned by cv_corrected_lasso.

...

Other arguments (not used).


Print a Cross-Validated GDS Object

Description

Default print method for a cv_gds object.

Usage

## S3 method for class 'cv_gds'
print(x, ...)

Arguments

x

Fitted model object returned by cv_gds.

...

Other arguments (not used).


Print a Generalized Dantzig Selector Object

Description

Default print method for a gds object.

Usage

## S3 method for class 'gds'
print(x, ...)

Arguments

x

Fitted model object returned by gds.

...

Other arguments (not used).


Print a GMU Lasso object

Description

Default print method for a gmu_lasso object.

Usage

## S3 method for class 'gmu_lasso'
print(x, ...)

Arguments

x

Fitted model object returned by gmu_lasso.

...

Other arguments (not used).


Print a GMUS object

Description

Default print method for a gmus object.

Usage

## S3 method for class 'gmus'
print(x, ...)

Arguments

x

Fitted model object returned by gmus.

...

Other arguments (not used).

