kernel.pls.fit {plsdof} | R Documentation
Description

This function computes the Partial Least Squares fit. This algorithm scales mainly in the number of observations.
Usage

kernel.pls.fit(X, y, m = ncol(X), compute.jacobian = FALSE,
               DoF.max = min(ncol(X) + 1, nrow(X) - 1))
Arguments

X: matrix of predictor observations.

y: vector of response observations. The length of y must equal the number of rows of X.

m: maximal number of Partial Least Squares components. Default is m = ncol(X).

compute.jacobian: Should the first derivative of the regression coefficients be computed as well? Default is FALSE.

DoF.max: upper bound on the Degrees of Freedom. Default is min(ncol(X) + 1, nrow(X) - 1).
Details

We first standardize X to zero mean and unit variance.
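The standardization step can be reproduced with base R's scale(); the following is a sketch of the preprocessing described above, an assumption about the internal computation rather than the package's actual code.

```r
# Sketch of the standardization step (an assumption about the internal
# preprocessing, not the package's code): center each column of X to
# mean zero and rescale it to unit variance.
set.seed(1)
X  <- matrix(rnorm(20 * 3), ncol = 3)
X0 <- scale(X, center = TRUE, scale = TRUE)

colMeans(X0)       # each column mean is numerically zero
apply(X0, 2, sd)   # each column standard deviation is one
```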
Value

coefficients: matrix of regression coefficients.

intercept: vector of regression intercepts.

DoF: Degrees of Freedom.

sigmahat: vector of estimated model errors.

Yhat: matrix of fitted values.

yhat: vector of squared lengths of the fitted values.

RSS: vector of residual sums of squares.

covariance: if compute.jacobian = TRUE, the array of covariance matrices of the PLS regression coefficients; otherwise NULL.

TT: matrix of normalized PLS components.
Author(s)

Nicole Kraemer, Mikio L. Braun
References

Kraemer, N., Sugiyama, M. (2011). "The Degrees of Freedom of Partial Least Squares Regression". Journal of the American Statistical Association, 106(494). http://pubs.amstat.org/doi/abs/10.1198/jasa.2011.tm10107

Kraemer, N., Braun, M.L. (2007). "Kernelizing PLS, Degrees of Freedom, and Efficient Model Selection". Proceedings of the 24th International Conference on Machine Learning, Omni Press, 441-448.
See Also

linear.pls.fit, pls.cv, pls.model, pls.ic
Examples

n <- 50   # number of observations
p <- 5    # number of variables
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)
pls.object <- kernel.pls.fit(X, y, m = 5, compute.jacobian = TRUE)