mxComputeGradientDescent {OpenMx}    R Documentation
This optimizer does not require analytic derivatives of the fit function. The fully open-source CRAN version of OpenMx offers two choices, CSOLNP and SLSQP (from the NLOPT collection). The OpenMx Team's version of OpenMx offers three optimizers: CSOLNP, SLSQP, and NPSOL.
mxComputeGradientDescent(freeSet = NA_character_, ..., engine = NULL,
  fitfunction = "fitfunction", verbose = 0L, tolerance = NA_real_,
  useGradient = NULL, warmStart = NULL,
  nudgeZeroStarts = mxOption(NULL, "Nudge zero starts"),
  maxMajorIter = NULL,
  gradientAlgo = mxOption(NULL, "Gradient algorithm"),
  gradientIterations = imxAutoOptionValue("Gradient iterations"),
  gradientStepSize = imxAutoOptionValue("Gradient step size"))
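As a quick illustration (not part of the original usage block), the sketch below constructs a compute step with an explicit engine choice; the value "SLSQP" is just an example, and "CSOLNP" or, in the OpenMx Team's build, "NPSOL" could be substituted.

library(OpenMx)
# Request a specific optimizer for this compute step
# ("SLSQP" is an illustrative choice; see the engine argument below).
step <- mxComputeGradientDescent(engine = "SLSQP")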
freeSet: names of matrices containing free parameters.
...: not used; forces the remaining arguments to be specified by name.
engine: which optimizer to use; one of 'CSOLNP', 'SLSQP', or 'NPSOL'.
fitfunction: name of the fit function (defaults to 'fitfunction').
verbose: level of debugging output.
tolerance: how close to the optimum is close enough (also known as the optimality tolerance).
useGradient: whether to use the analytic gradient (if available).
warmStart: a Cholesky-factored Hessian to use as the NPSOL Hessian starting value (preconditioner).
nudgeZeroStarts: whether to nudge any zero starting values prior to optimization (default TRUE).
maxMajorIter: maximum number of major iterations.
gradientAlgo: one of c('forward', 'central').
gradientIterations: number of Richardson iterations to use when estimating the gradient numerically.
gradientStepSize: the step size for the numerical gradient.
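The sketch below (an illustration added here, not from the reference page) combines several of the arguments above; the specific values for tolerance, verbosity, and the iteration cap are arbitrary assumptions.

library(OpenMx)
# Illustrative tuning of several arguments; the particular values are arbitrary.
opt <- mxComputeGradientDescent(engine = "CSOLNP",
                                verbose = 1L,
                                tolerance = 1e-9,
                                maxMajorIter = 500L,
                                nudgeZeroStarts = FALSE)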
One option available to both CSOLNP and SLSQP is gradientAlgo. CSOLNP uses the forward method by default, while SLSQP uses the central method. The forward method requires gradientIterations function evaluations per parameter per gradient, while the central method requires 2 times gradientIterations function evaluations per parameter per gradient. Users can change the default method for either of these optimizers. NPSOL usually uses the forward method, but adaptively switches to the central method under certain circumstances.
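To make the cost concrete with hypothetical numbers: with 10 free parameters and gradientIterations = 2, a forward-difference gradient costs 10 * 2 = 20 fit evaluations, while a central-difference gradient costs 10 * 2 * 2 = 40. The snippet below shows one way to override the default; pairing CSOLNP with the central method is only an example.

library(OpenMx)
# Ask CSOLNP (default: forward) to use central differences instead;
# SLSQP could likewise be switched to "forward".
opt <- mxComputeGradientDescent(engine = "CSOLNP", gradientAlgo = "central")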
CSOLNP uses the value of the gradientStepSize argument as-is, whereas SLSQP internally scales it by a factor of 100. The purpose of this scaling is to obtain roughly the same accuracy despite other differences in the numerical procedures. NPSOL ignores gradientStepSize and instead derives its gradient step size from the mxOption “Function precision”.
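For illustration, the sketch below passes an explicit step size; the value 1e-5 is an arbitrary assumption. Given the scaling described above, the same nominal value means a raw step of 1e-5 for CSOLNP but about 1e-3 internally for SLSQP, and NPSOL would ignore it entirely.

library(OpenMx)
# Explicit numerical-gradient step size (value chosen only for illustration).
opt <- mxComputeGradientDescent(engine = "CSOLNP",
                                gradientStepSize = 1e-5,
                                gradientIterations = 2L)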
All three optimizers can use analytic gradients,
and only NPSOL uses warmStart.
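A hedged sketch of an NPSOL warm start, assuming a previously fitted model (here called prevFit, a hypothetical name) whose output already contains a Hessian, for example from mxComputeNumericDeriv; its Cholesky factor is supplied as the preconditioner. This requires the OpenMx Team's build, since NPSOL is not distributed on CRAN.

# prevFit is assumed to be an already-run MxModel whose output includes a Hessian.
hess <- prevFit$output$hessian
opt  <- mxComputeGradientDescent(engine = "NPSOL",
                                 warmStart = chol(hess))  # Cholesky-factored Hessian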
Luenberger, D. G. & Ye, Y. (2008). Linear and nonlinear programming. Springer.
library(OpenMx)

data(demoOneFactor)
factorModel <- mxModel(
  name = "One Factor",
  # Factor loadings (fixed), factor variance (fixed), residual variances (free)
  mxMatrix(type = "Full", nrow = 5, ncol = 1, free = FALSE, values = 0.2, name = "A"),
  mxMatrix(type = "Symm", nrow = 1, ncol = 1, free = FALSE, values = 1, name = "L"),
  mxMatrix(type = "Diag", nrow = 5, ncol = 5, free = TRUE, values = 1, name = "U"),
  # Model-implied covariance matrix
  mxAlgebra(expression = A %*% L %*% t(A) + U, name = "R"),
  mxExpectationNormal(covariance = "R", dimnames = names(demoOneFactor)),
  mxFitFunctionML(),
  mxData(observed = cov(demoOneFactor), type = "cov", numObs = 500),
  # Custom compute plan: optimize, then numeric derivatives,
  # standard errors, and Hessian quality diagnostics
  mxComputeSequence(steps = list(
    mxComputeGradientDescent(),
    mxComputeNumericDeriv(),
    mxComputeStandardError(),
    mxComputeHessianQuality()
  )))
factorModelFit <- mxRun(factorModel)
factorModelFit$output$conditionNumber # 29.5