conquer.cv.reg {conquer}		R Documentation
Description

Fit sparse quantile regression models via regularized conquer methods with "lasso", "scad" and "mcp" penalties. The regularization parameter λ is selected by cross-validation.

Usage
conquer.cv.reg(
X,
Y,
lambdaSeq = NULL,
tau = 0.5,
kernel = c("Gaussian", "logistic", "uniform", "parabolic", "triangular"),
h = 0,
penalty = c("lasso", "scad", "mcp"),
kfolds = 5,
numLambda = 50,
para = NULL,
epsilon = 0.001,
iteMax = 500,
phi0 = 0.01,
gamma = 1.2,
iteTight = 3
)
Arguments

X	An n by p design matrix. Each row is an observation vector with p covariates.

Y	An n-dimensional response vector.

lambdaSeq	(optional) A sequence of candidate regularization parameters. If unspecified, the sequence will be generated by a simulated pivotal quantity approach proposed by Belloni and Chernozhukov (2011).

tau	(optional) Quantile level (between 0 and 1). Default is 0.5.

kernel	(optional) A character string specifying the choice of kernel function. Default is "Gaussian". Choices are "Gaussian", "logistic", "uniform", "parabolic" and "triangular".

h	(optional) The bandwidth parameter for kernel smoothing. Default is max{0.5 * (log(p) / n)^(1/4), 0.05}.

penalty	(optional) A character string specifying the penalty. Default is "lasso". Choices are "lasso", "scad" and "mcp".

kfolds	(optional) Number of folds for cross-validation. Default is 5.

numLambda	(optional) Number of lambda values for cross-validation if lambdaSeq is unspecified. Default is 50.

para	(optional) A constant parameter for "scad" and "mcp". It need not be specified if the penalty is "lasso". The default values are 3.7 for "scad" and 3 for "mcp".

epsilon	(optional) A tolerance level for the stopping rule. The iteration stops when the maximum magnitude of the change in coefficient updates is less than epsilon. Default is 0.001.

iteMax	(optional) Maximum number of iterations. Default is 500.

phi0	(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01.

gamma	(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2.

iteTight	(optional) Maximum number of tightening iterations in the iteratively reweighted l_1-penalized algorithm. It need not be specified if the penalty is "lasso". Default is 3.
Value

An object containing the following items will be returned:

coeff	A (p + 1)-vector of estimated coefficients, including the intercept.

lambda	Regularization parameter selected by cross-validation.

bandwidth	Bandwidth value.

tau	Quantile level.

kernel	Kernel function.

penalty	Penalty type.

n	Sample size.

p	Number of covariates.
Author(s)

Xuming He <xmhe@umich.edu>, Xiaoou Pan <xip024@ucsd.edu>, Kean Ming Tan <keanming@umich.edu>, and Wen-Xin Zhou <wez243@ucsd.edu>
References

Belloni, A. and Chernozhukov, V. (2011). l_1-penalized quantile regression in high-dimensional sparse models. Ann. Statist. 39 82-130.
Fan, J., Liu, H., Sun, Q. and Zhang, T. (2018). I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. Ann. Statist. 46 814-841.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica 46 33-50.
Tan, K. M., Wang, L. and Zhou, W.-X. (2021). High-dimensional quantile regression: convolution smoothing and concave regularization. J. Roy. Statist. Soc. Ser. B, to appear.
See Also

See conquer.reg for regularized quantile regression with a prescribed lambda.
Examples

n = 100; p = 100; s = 3
beta = c(rep(1.5, s), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)

## Cross-validated regularized conquer with lasso penalty at tau = 0.8
fit.lasso = conquer.cv.reg(X, Y, tau = 0.8, kernel = "Gaussian", penalty = "lasso")
beta.lasso = fit.lasso$coeff

## Cross-validated regularized conquer with scad penalty at tau = 0.8
fit.scad = conquer.cv.reg(X, Y, tau = 0.8, kernel = "Gaussian", penalty = "scad")
beta.scad = fit.scad$coeff
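As a further sketch, one might supply a custom lambda grid via lambdaSeq instead of relying on the simulated pivotal quantity default, and use the "mcp" penalty with its para constant. The grid endpoints and para value below are purely illustrative assumptions, not package recommendations:

```r
## Continuing from the simulated X and Y above.
## Hypothetical log-spaced lambda grid; endpoints chosen for illustration only.
lambdaSeq = exp(seq(log(0.01), log(0.2), length.out = 30))

## Cross-validated regularized conquer with mcp penalty over the custom grid
fit.mcp = conquer.cv.reg(X, Y, lambdaSeq = lambdaSeq, tau = 0.8,
                         kernel = "Gaussian", penalty = "mcp", para = 3)

beta.mcp = fit.mcp$coeff   # (p + 1)-vector of estimates, intercept included
fit.mcp$lambda             # lambda value selected by cross-validation
```

Inspecting fit.mcp$lambda against the endpoints of lambdaSeq is a quick check that the grid was wide enough; a selected lambda at either boundary suggests the grid should be extended.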