| conquer.reg {conquer} | R Documentation |
Fit sparse quantile regression models in high dimensions via regularized conquer methods with "lasso", "scad" and "mcp" penalties. For "scad" and "mcp", the iteratively reweighted l_1-penalized algorithm is complemented with a local adaptive majorize-minimize (LAMM) algorithm.
conquer.reg(
X,
Y,
lambda = 0.2,
tau = 0.5,
kernel = c("Gaussian", "logistic", "uniform", "parabolic", "triangular"),
h = 0,
penalty = c("lasso", "scad", "mcp"),
para = NULL,
epsilon = 0.001,
iteMax = 500,
phi0 = 0.01,
gamma = 1.2,
iteTight = 3
)
X |
An n by p design matrix. Each row is an observation vector with p covariates. |
Y |
An n-dimensional response vector. |
lambda |
(optional) Regularization parameter. Default is 0.2. |
tau |
(optional) Quantile level (between 0 and 1). Default is 0.5. |
kernel |
(optional) A character string specifying the choice of kernel function. Default is "Gaussian". Choices are "Gaussian", "logistic", "uniform", "parabolic" and "triangular". |
h |
(optional) Bandwidth/smoothing parameter. Default is max{0.5 * (log(p) / n)^(1/4), 0.05}. |
penalty |
(optional) A character string specifying the penalty. Default is "lasso". The other two options are "scad" and "mcp". |
para |
(optional) A constant parameter for "scad" and "mcp". It need not be specified when the penalty is "lasso". The default values are 3.7 for "scad" and 3 for "mcp". |
epsilon |
(optional) A tolerance level for the stopping rule. The iteration stops when the maximum magnitude of the change in coefficient updates is less than epsilon. Default is 0.001. |
iteMax |
(optional) Maximum number of iterations. Default is 500. |
phi0 |
(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01. |
gamma |
(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2. |
iteTight |
(optional) Maximum number of tightening iterations in the iteratively reweighted l_1-penalized algorithm. It need not be specified when the penalty is "lasso". Default is 3. |
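The default bandwidth rule stated for h can be computed directly. This is a sketch of the documented formula, not the package's internal code; the function name default.bandwidth is illustrative:

```r
# Default bandwidth when h = 0: max{0.5 * (log(p) / n)^(1/4), 0.05}
default.bandwidth <- function(n, p) {
  max(0.5 * (log(p) / n)^0.25, 0.05)
}
default.bandwidth(200, 500)  # approximately 0.21 for n = 200, p = 500
```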
An object containing the following items will be returned:
coeff |
A (p + 1)-dimensional vector of estimated coefficients, including the intercept. |
bandwidth |
Bandwidth value. |
tau |
Quantile level. |
kernel |
Kernel function. |
penalty |
Penalty type. |
lambda |
Regularization parameter. |
n |
Sample size. |
p |
Number of covariates. |
Xuming He <xmhe@umich.edu>, Xiaoou Pan <xip024@ucsd.edu>, Kean Ming Tan <keanming@umich.edu>, and Wen-Xin Zhou <wez243@ucsd.edu>
Fan, J., Liu, H., Sun, Q. and Zhang, T. (2018). I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. Ann. Statist. 46 814-841.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica 46 33-50.
Tan, K. M., Wang, L. and Zhou, W.-X. (2021). High-dimensional quantile regression: convolution smoothing and concave regularization. J. Roy. Statist. Soc. Ser. B, to appear.
See conquer.cv.reg for regularized quantile regression with cross-validation.
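In practice lambda is usually chosen by cross-validation rather than fixed at its default; a sketch using the companion function conquer.cv.reg follows. The argument names lambdaSeq and kfolds are assumptions based on that function's help page, and the candidate sequence is illustrative; see ?conquer.cv.reg for the authoritative interface:

```r
library(conquer)

## Simulated sparse quantile regression data
n = 200; p = 500; s = 10
beta = c(rep(1.5, s), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)

## Cross-validated lasso-penalized conquer at tau = 0.8
## (lambdaSeq and kfolds are assumed argument names)
fit.cv = conquer.cv.reg(X, Y, lambdaSeq = seq(0.02, 0.2, length.out = 10),
                        tau = 0.8, penalty = "lasso", kfolds = 5)
beta.cv = fit.cv$coeff
```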
n = 200; p = 500; s = 10
beta = c(rep(1.5, s), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)

## Regularized conquer with lasso penalty at tau = 0.8
fit.lasso = conquer.reg(X, Y, lambda = 0.05, tau = 0.8, kernel = "Gaussian", penalty = "lasso")
beta.lasso = fit.lasso$coeff

## Regularized conquer with scad penalty at tau = 0.8
fit.scad = conquer.reg(X, Y, lambda = 0.13, tau = 0.8, kernel = "Gaussian", penalty = "scad")
beta.scad = fit.scad$coeff
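The third penalty option, "mcp", follows the same pattern. A sketch continuing the simulated data above; the lambda value here is illustrative, not a recommendation:

```r
## Regularized conquer with mcp penalty at tau = 0.8
fit.mcp = conquer.reg(X, Y, lambda = 0.13, tau = 0.8, kernel = "Gaussian", penalty = "mcp")
beta.mcp = fit.mcp$coeff

## Indices of selected covariates; the first entry of coeff is the intercept
selected = which(beta.mcp[-1] != 0)
```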