KrisLibrary  1.0.0
Optimization::MinimizationProblem Struct Reference

An iterative method for minimizing f over a multidimensional x. It may only find a local minimum, depending on the initial point. More...

#include <Minimization.h>

Public Member Functions

 MinimizationProblem (ScalarFieldFunction *f)
 
ConvergenceResult SolveGD (Real alpha, int &maxIters)
 Gradient descent.
 
void StepGD (Real alpha)
 
ConvergenceResult SolveSD (int &maxIters)
 Steepest descent.
 
ConvergenceResult SolveCG (int &maxIters)
 Conjugate gradient.
 
ConvergenceResult SolveNewton (int &maxIters)
 Newton iteration.
 
ConvergenceResult SolveQuasiNewton_Ident (int &maxIters)
 
ConvergenceResult SolveQuasiNewton_Diff (Real dx, int &maxIters)
 Finite-difference Hessian with difference dx.
 
ConvergenceResult SolveQuasiNewton (int &maxIters)
 Hessian given in H.
 
ConvergenceResult SolveLM (int &maxIters, Real lambda0=1, Real lambdaScale=10)
 Levenberg-Marquardt iteration.
 
ConvergenceResult LineMinimizationStep (const Vector &dx, Real &alpha0)
 

Public Attributes

ScalarFieldFunction * f
 
Vector x
 
Real tolx
 
Real tolf
 
Real tolgrad
 
int verbose
 
std::vector< Vector > * S
 
Real fx
 
Vector grad
 
Vector dx
 
Matrix H
 

Detailed Description

An iterative method for minimizing f over a multidimensional x. It may only find a local minimum, depending on the initial point.

Optionally, bound constraints can be included.

Methods:
Gradient descent (GD): move by alpha*gradient, with a fixed step size alpha.
Steepest descent (SD): move along the negative gradient, with alpha chosen automatically each iteration.
Conjugate gradient (CG): use the nonlinear CG method (Fletcher-Reeves-Polak-Ribiere).
Newton iteration (Newton): use Newton's method (requires Hessians).
Quasi-Newton iteration (QuasiNewton): use a quasi-Newton method (requires gradients).
Levenberg-Marquardt (LM): use the Levenberg-Marquardt method with starting lambda and lambda update scale factor (requires Hessian).
Levenberg-Marquardt (LMTrust): same as above, but uses a trust region search.
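
For orientation, the textbook forms of these step rules are sketched below; the exact step-size, restart, and damping choices made by this implementation are not specified on this page and may differ.

  \begin{aligned}
  \text{GD:}     \quad & x_{k+1} = x_k - \alpha\,\nabla f(x_k)                             && \text{(fixed step size $\alpha$)}\\
  \text{SD:}     \quad & x_{k+1} = x_k - \alpha_k\,\nabla f(x_k)                           && \text{($\alpha_k$ chosen each iteration)}\\
  \text{Newton:} \quad & x_{k+1} = x_k - \alpha_k\,H(x_k)^{-1}\nabla f(x_k)                && \text{($H$ the Hessian of $f$)}\\
  \text{LM:}     \quad & x_{k+1} = x_k - \big(H(x_k) + \lambda_k I\big)^{-1}\nabla f(x_k)  && \text{($\lambda_k$ rescaled by lambdaScale)}
  \end{aligned}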

Each method starts at the initial point x. It then determines a direction d and a step size alpha such that f(x+alpha*d) < f(x), moves x to this new position, and iterates until maxIters is reached or one of the three termination criteria is met:
change in x < tolx
change in f < tolf
||gradient|| < tolgrad.
The appropriate ConvergenceResult is returned (termination via tolf or tolgrad both return ConvergenceF). By default, tolx is 1e-6, tolf is 1e-5, and tolgrad is 1e-8.

maxIters is returned as the number of iterations taken.

If S is not NULL, the sequence of iterates is logged to *S, starting from x1.
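
A minimal usage sketch is given below. The header name follows the #include shown at the top of this page; the ScalarFieldFunction overrides (Eval, Gradient) and the Vector accessors (n, operator(), resize, setZero) follow KrisLibrary's math conventions, but treat those details as assumptions rather than guarantees.

  #include <Minimization.h>
  // ScalarFieldFunction comes from KrisLibrary's math function header; include it
  // explicitly if it is not pulled in transitively (exact path is an assumption).
  using namespace Math;
  using namespace Optimization;

  // Hypothetical objective f(x) = sum_i (x(i)-1)^2, minimized at x = (1,...,1).
  struct QuadraticObjective : public ScalarFieldFunction {
    virtual Real Eval(const Vector& x) {
      Real s = 0;
      for(int i = 0; i < x.n; i++) s += (x(i)-1)*(x(i)-1);
      return s;
    }
    virtual void Gradient(const Vector& x, Vector& grad) {
      grad.resize(x.n);
      for(int i = 0; i < x.n; i++) grad(i) = 2*(x(i)-1);
    }
  };

  int main() {
    QuadraticObjective fobj;
    MinimizationProblem opt(&fobj);
    opt.x.resize(3);
    opt.x.setZero();                 // initial point (0,0,0)
    opt.tolf = 1e-8;                 // tighten the change-in-f tolerance
    int maxIters = 100;
    ConvergenceResult res = opt.SolveSD(maxIters);
    // On return, opt.x holds the final iterate, opt.fx its objective value,
    // and maxIters the number of iterations actually taken.
    return 0;
  }

Any of the other Solve* methods can be substituted for SolveSD, subject to the gradient/Hessian requirements noted above; SolveNewton, for example, additionally needs the objective to supply Hessians.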

Member Function Documentation

ConvergenceResult MinimizationProblem::LineMinimizationStep ( const Vector &  dx, Real &  alpha0 )

Performs a line search to minimize f(x) in the direction dx. fx must be set to the initial f(x) before calling. On return, x is updated to the final point and fx is set to the final f(x). alpha0 is given as the initial step size and is returned as the final step size.

Returns the convergence result for the given tolerances or MaxItersReached to continue.
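
A rough sketch of driving this step from a custom descent loop, continuing from the setup in the earlier sketch (opt with f and x initialized). The PreEval/Eval/Gradient calls on f are assumptions about the ScalarFieldFunction interface; the members used are those listed in the attributes above.

  Real alpha = 1.0;                    // initial step size, refined by each call
  opt.grad.resize(opt.x.n);
  for(int iter = 0; iter < 100; iter++) {
    opt.f->PreEval(opt.x);
    opt.fx = opt.f->Eval(opt.x);       // fx must hold the current f(x) before the call
    opt.f->Gradient(opt.x, opt.grad);
    Vector dir(opt.grad.n);
    for(int i = 0; i < dir.n; i++) dir(i) = -opt.grad(i);   // descend along -grad
    ConvergenceResult res = opt.LineMinimizationStep(dir, alpha);
    if(res != MaxItersReached) break;  // a tolx/tolf convergence criterion was met
  }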

ConvergenceResult MinimizationProblem::SolveQuasiNewton_Ident ( int &  maxIters)

Quasi-Newton iteration, with differing choices of the initial Hessian approximation. This variant starts from the identity matrix.
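
In generic quasi-Newton terms (a textbook sketch; the specific update formula used here is not stated on this page), starting from the identity means the first iteration reduces to a plain gradient step, after which the approximation B_k is refined from observed gradient differences:

  \begin{aligned}
  B_0 &= I,\\
  d_k &= -B_k^{-1}\,\nabla f(x_k), \qquad x_{k+1} = x_k + \alpha_k d_k,\\
  B_{k+1} &= \mathrm{update}\big(B_k,\; x_{k+1}-x_k,\; \nabla f(x_{k+1})-\nabla f(x_k)\big) \quad \text{(e.g. a BFGS-type formula)}.
  \end{aligned}

By contrast, SolveQuasiNewton_Diff seeds the approximation with a finite-difference Hessian (difference dx), and SolveQuasiNewton uses whatever is currently stored in the H attribute.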

