klampt.math.optimize module

Classes to help set up and solve nonlinear, constrained optimization problems.

Supports local and global optimization. Wraps around scipy, pyOpt, or DIRECT (for now).

Works well with the klampt.math.symbolic module.

Classes:

GlobalOptimizer([method])

A wrapper around different global optimization libraries.

LocalOptimizer([method])

A wrapper around different local optimization libraries.

OptimizationObjective(expr, type[, weight])

Describes an optimization cost function or constraint.

OptimizationProblem()

A holder for optimization problem data.

OptimizationProblemBuilder([context])

Defines a generalized optimization problem that can be saved/loaded from a JSON string.

OptimizerParams([numIters, tol, …])

A container for optimizer settings: iteration limits, tolerance, random restarts, timeout, and global/local method selection.

Functions:

sample_range(a, b)

Samples x in the range [a,b].

class klampt.math.optimize.GlobalOptimizer(method='auto')[source]

Bases: object

A wrapper around different global optimization libraries. Only minimization is supported, and the only available back ends are DIRECT, scipy, and pyOpt.

The optimization technique is specified using the method string, which can be:

  • ‘auto’: picks between DIRECT and random-restart

  • ‘random-restart.METHOD’: random restarts using the local optimizer METHOD.

  • ‘DIRECT’: the DIRECT global optimizer

  • ‘scipy’: uses scipy.optimize.minimize with default settings.

  • ‘scipy.METHOD’: uses scipy.optimize.minimize with the argument method=METHOD.

  • ‘pyOpt’: uses pyOpt with SLSQP.

  • ‘pyOpt.METHOD’: uses pyOpt with the given method.

The method attribute can also be a list, in which case a cascade of solvers is run, with each solver’s solution point used as the seed for the next.

Examples:

  • ‘DIRECT’: Run the DIRECT method

  • ‘scipy.differential_evolution’: Runs the scipy differential evolution technique

  • ‘random-restart.scipy’: Runs random restarts using scipy’s default local optimizer

  • ‘random-restart.pyOpt.SLSQP’: Runs random restarts using pyOpt as a local optimizer

  • [‘DIRECT’,’auto’]: Run the DIRECT method then clean it up with the default local optimizer

Random-restart methods pick each component x of the seed state randomly using sample_range(a,b), where [a,b] is the range of x given by problem.bounds.

DIRECT and scipy.differential_evolution require a bounded state space.
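
For example, here is a minimal sketch of a global solve over a bounded quadratic; the objective and bounds are illustrative, not part of the API:

from klampt.math.optimize import OptimizationProblem, GlobalOptimizer

# Illustrative problem: minimize a shifted quadratic over a box.
problem = OptimizationProblem()
problem.setObjective(lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2)
problem.setBounds([-5.0, -5.0], [5.0, 5.0])   # DIRECT needs a bounded space

# Run DIRECT, then refine the result with the default local optimizer.
opt = GlobalOptimizer(method=['DIRECT', 'auto'])
solved, x = opt.solve(problem, numIters=1000, tol=1e-06)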

Methods:

setSeed(x)

solve(problem[, numIters, tol])

Returns a pair (solved,x) where solved is True if the solver found a valid solution, and x is the solution vector.

setSeed(x)[source]
solve(problem, numIters=100, tol=1e-06)[source]

Returns a pair (solved,x) where solved is True if the solver found a valid solution, and x is the solution vector.

class klampt.math.optimize.LocalOptimizer(method='auto')[source]

Bases: object

A wrapper around different local optimization libraries. Only minimization is supported, and the only available back ends are scipy and pyOpt.

The method is specified using the method string, which can be:

  • ‘auto’: picks between scipy and pyOpt, whichever is available.

  • ‘scipy’: uses scipy.optimize.minimize with default settings.

  • ‘scipy.[METHOD]’: uses scipy.optimize.minimize with the argument method=[METHOD].

  • ‘pyOpt’: uses pyOpt with SLSQP.

  • ‘pyOpt.[METHOD]’: uses pyOpt with the given method.
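
A short usage sketch, reusing the problem object from the GlobalOptimizer example above:

from klampt.math.optimize import LocalOptimizer

print(LocalOptimizer.methodsAvailable())           # back ends installed on this system
print(LocalOptimizer.methodsAppropriate(problem))  # back ends suited to this problem

opt = LocalOptimizer(method='scipy.SLSQP')  # scipy.optimize.minimize(method='SLSQP')
opt.setSeed([0.0, 0.0])                     # starting point for the local search
success, result = opt.solve(problem, numIters=100, tol=1e-06)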

Methods:

methodsAppropriate(problem)

Returns a list of available methods that are appropriate to use for the given problem

methodsAvailable()

Returns a list of methods that are available on this system

setSeed(x)

solve(problem[, numIters, tol])

Returns a tuple (success,result)

static methodsAppropriate(problem)[source]

Returns a list of available methods that are appropriate to use for the given problem

static methodsAvailable()[source]

Returns a list of methods that are available on this system

setSeed(x)[source]
solve(problem, numIters=100, tol=1e-06)[source]

Returns a tuple (success,result)

class klampt.math.optimize.OptimizationObjective(expr, type, weight=None)[source]

Bases: object

Describes an optimization cost function or constraint.

expr

the expression object f(x)

Type

symbolic.Expression

type

string describing what the objective does:

  • ‘cost’: added to the cost. Must be scalar.

  • ‘eq’: an equality f(x)=0 that must be met exactly (up to a given equality tolerance)

  • ‘ineq’: an inequality constraint f(x)<=0

  • ‘feas’: a black-box boolean feasibility test f(x) = True

Type

str

soft

if true, this is penalized as part of the cost function. Specifically \(w \|f(x)\|^2\) is the penalty for ‘eq’ types, and \(w I[f(x)\neq \text{True}]\) for ‘feas’ types.

Type

bool

weight

a weight, used only for cost or soft objectives

Type

float, optional

name

a name for this objective.

Type

str, optional

class klampt.math.optimize.OptimizationProblem[source]

Bases: object

A holder for optimization problem data. All attributes are optional, and some solvers can’t handle certain types of constraints and costs.

The objective function must return a float. All equality and inequality functions are required to return a list of floats.

objective

an objective function f(x)

Type

function

objectiveGrad

a function df/dx(x) giving the gradient of f.

Type

function

bounds

a pair (l,u) giving lower and upper bounds on the search space.

Type

tuple

equalities

functions \(g(x)=0\) required of a feasible solution. In practice, \(|g(x)| \leq tol\) is required, where tol is a tolerance parameter for the solver.

Type

list of functions

equalityGrads

gradient/Jacobian functions \(\frac{\partial g}{\partial x}(x)\) of the equality functions.

Type

list of functions

inequalities

inequality functions requiring \(h(x) \leq 0\) for a feasible solution.

Type

list of functions

inequalityGrads

a list of gradient/Jacobian functions \(\frac{\partial h}{\partial x}(x)\) of each inequality function.

Type

list of functions

feasibilityTests

boolean black-box predicates that must be true of the solution

Type

list of functions

Suitable for use with the symbolic module. Once a Context is created and appropriate Variables, Functions, and Expressions are declared, the setSymbolicObjective() and addSymbolicConstraint() methods automatically determine the standard Python function forms via context.makeFlatFunction(f,varorder), where varorder=None uses the default variable ordering.

The OptimizationProblemBuilder class is more closely tied with the symbolic module and is more convenient to use. It performs automatic simplification and differentiation, and can be saved / loaded to disk.
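
For example, a sketch of a small constrained problem set up with plain Python functions (recall that equality and inequality functions must return a list of floats):

from klampt.math.optimize import OptimizationProblem, LocalOptimizer

problem = OptimizationProblem()
# Objective f(x) returns a float; the gradient argument is optional.
problem.setObjective(lambda x: x[0]**2 + x[1]**2,
                     lambda x: [2.0*x[0], 2.0*x[1]])
# Inequality h(x) <= 0, here encoding x[0] + x[1] >= 1; returns a list.
problem.addInequality(lambda x: [1.0 - x[0] - x[1]],
                      lambda x: [[-1.0, -1.0]])   # Jacobian: one row per output
problem.setBounds([-10.0, -10.0], [10.0, 10.0])

opt = LocalOptimizer('scipy.SLSQP')
opt.setSeed([2.0, 2.0])
success, x = opt.solve(problem)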

Methods:

addEquality(func[, funcGrad])

addFeasibilityTest(test)

addInequality(func[, funcGrad])

addSymbolicConstraint(func, context[, …])

Adds a constraint from a symbolic Function or symbolic.Expression (see symbolic module).

equalityResidual(x)

Returns the stacked vector g(x) where g(x)=0 is the equality constraint.

feasible(x[, equalityTol])

Returns true if x is a feasible point.

inequalityResidual(x)

Returns the stacked vector h(x) where h(x)<=0 is the inequality constraint.

makeUnconstrained(objective_scale[, keep_bounds])

If this problem is constrained, returns a new problem in which the objective function is a scoring function that sums all of the equality / inequality errors at x plus objective_scale*f(x).

objectiveValue(x)

Returns the objective function value f(x).

setBounds(xmin, xmax)

setFeasibilityTest(test)

setObjective(func[, funcGrad])

setSymbolicObjective(func, context[, varorder])

Sets an objective function from a symbolic Function or Expression (see symbolic module).

addEquality(func, funcGrad=None)[source]
addFeasibilityTest(test)[source]
addInequality(func, funcGrad=None)[source]
addSymbolicConstraint(func, context, varorder=None, blackbox=False)[source]

Adds a constraint from a symbolic Function or symbolic.Expression (see symbolic module). This will be “smart” in that conjunctions (“and” Expressions) will be split into multiple constraints, inequalities will be converted to inequality constraints, and bounds will be converted to bound constraints. All other constraints will be treated as feasibility constraints.

equalityResidual(x)[source]

Returns the stacked vector g(x) where g(x)=0 is the equality constraint.

feasible(x, equalityTol=1e-06)[source]

Returns true if x is a feasible point.

inequalityResidual(x)[source]

Returns the stacked vector h(x) where h(x)<=0 is the inequality constraint.

makeUnconstrained(objective_scale, keep_bounds=True)[source]

If this problem is constrained, returns a new problem in which the objective function is a scoring function that sums all of the equality / inequality errors at x plus objective_scale*f(x). If objective_scale is small, then the scoring function is approximately minimized at a feasible minimum.

If the problem is unconstrained, this just returns self.

If keep_bounds=True, the bound constraints are not added to the inequality errors.
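
A brief sketch of the penalty-form usage, continuing the constrained example above (the small objective_scale keeps the constraint errors dominant):

# Convert to a penalty ("scoring") problem and solve without constraints.
unc = problem.makeUnconstrained(objective_scale=1e-3)
opt = LocalOptimizer('scipy.Nelder-Mead')   # a derivative-free scipy method
opt.setSeed([2.0, 2.0])
success, x = opt.solve(unc)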

objectiveValue(x)[source]

Returns the objective function value f(x).

setBounds(xmin, xmax)[source]
setFeasibilityTest(test)[source]
setObjective(func, funcGrad=None)[source]
setSymbolicObjective(func, context, varorder=None)[source]

Sets an objective function from a symbolic Function or Expression (see symbolic module).

Note

The optimization parameters will be a flattened version of each Variable appearing in func.

class klampt.math.optimize.OptimizationProblemBuilder(context=None)[source]

Bases: object

Defines a generalized optimization problem that can be saved/loaded from a JSON string. Allows custom lists of objectives, feasibility tests, and cost functions. Multiple variables can be optimized at once.

context

a context that stores the optimization variables and any user data.

Type

symbolic.Context

objectives

all objectives or constraints used in the optimization.

Type

list of OptimizationObjective

optimizationVariables

A list of Variables used for optimization. If not set, this will try to find the variable ‘x’. If not found, this will use all unbound variables in the objectives.

Type

list of Variable

Note that objectives must be created from symbolic.Function objects, so that they are savable/loadable. See the documentation of the symbolic module for more detail.
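
For example, a heavily hedged sketch; the Context.addVar signature, the use of operator-overloaded Expressions as arguments to addCost/addEquality, and the Variable.value attribute are assumptions drawn from the symbolic module's conventions:

from klampt.math import symbolic
from klampt.math.optimize import OptimizationProblemBuilder, OptimizerParams

ctx = symbolic.Context()
x = ctx.addVar("x", "V", 2)             # assumed: declares a 2-D vector variable

builder = OptimizationProblemBuilder(ctx)
builder.addCost(x[0]**2 + x[1]**2)      # assumed: Expressions accepted as costs
builder.addEquality(x[0] + x[1] - 1.0)  # hard constraint x0 + x1 = 1
builder.setBounds(x, [-5.0, -5.0], [5.0, 5.0])

builder.solve(OptimizerParams(localMethod='scipy'))
print(x.value)                          # assumed: solution stored in the bound variable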

Methods:

addCost(f[, weight])

Adds a cost function f(q).

addEquality(f[, weight])

If f is a symbolic.Function, it’s a function f(x) that evaluates to 0 for a feasible solution.

addFeasibilityTest(f[, weight])

Adds an additional feasibility test.

addInequality(f[, weight])

Adds an inequality f(x) <= 0.

bind(**kwargs)

Binds the variables specified by the keyword arguments

bindVars(*args)

cost()

Evaluates the cost function with the variables already bound.

costSymbolic()

Returns a symbolic.Expression, over variables in self.context, that evaluates to the cost

equalityResidual([soft])

Evaluates the equality + IK functions at the currently bound state x, stacking the results into a single vector.

equalityResidualSymbolic([soft])

Returns a symbolic.Expression, over variables in self.context, that evaluates to the equality residual

equalitySatisfiedSymbolic([tol, soft])

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the equality constraint is met with tolerance tol

feasibilityTestsPass([soft])

Returns True if the currently bound state passes all black-box feasibility tests.

feasibilityTestsPassSymbolic([soft])

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the black-box feasibility constraints are met

fromJson(object[, context])

Sets this problem from a JSON object representing it.

getBounds()

Returns optimization variable bounds as a list of (xmin,xmax) pairs.

getProblem()

Returns an OptimizationProblem instance over the optimization variables.

getVarValues()

Saves the bindings for optimization variables in the current context into a list.

getVarVector()

Flattens the bindings for optimization variables in the current context into a vector x.

inBounds()

Returns True if all bounded variables are within their ranges at the currently bound state x

inBoundsSymbolic()

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the configuration meets bound constraints

inequalityResidual([soft])

Evaluates the inequality functions at the currently bound state x, stacking the results into a single vector.

inequalityResidualSymbolic([soft])

Returns a symbolic.Expression, over variables in self.context, that evaluates to the inequality residual

inequalitySatisfiedSymbolic([soft])

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the inequality constraint is met

isFeasible([eqTol])

Returns True if the currently bound state passes all equality, inequality, joint limit, and black-box feasibility tests.

isFeasibleSymbolic([eqTol])

Returns a symbolic.Expression, over $q and other user data variables, that evaluates to True if the configuration meets all feasibility tests

pprint([indent])

preprocess([steps])

Preprocesses the problem to make solving more efficient

randomVarBinding()

Samples values for all optimization variables, sampling uniformly according to their bounds

satisfiesEqualities([tol])

Returns True if every entry of the (hard) equality + IK residual equals 0 (to the tolerance tol).

satisfiesInequalities([margin])

Returns True if, for the currently bound state x, every entry of the (hard) inequality residuals is <= -margin (default 0).

score([eqWeight, ineqWeight, infeasWeight])

Returns an error score that is equal to the optimum at a feasible solution.

setBounds(var[, xmin, xmax])

Bounds the optimization variable var

setVarValues(s)

Converts a state into bindings for the optimization variables in the current context.

setVarVector(x)

Turns a vector x into bindings for the optimization variables in the current context.

solve([params, preprocess, cache])

Solves the optimization problem.

toJson([saveContextFunctions, prettyPrintExprs])

Returns a JSON object representing this optimization problem.

unbind(**kwargs)

Unbinds the variables specified by the keyword arguments

unbindVars()

addCost(f, weight=1)[source]

Adds a cost function f(q).

addEquality(f, weight=None)[source]

If f is a symbolic.Function, it’s a function f(x) that evaluates to 0 for a feasible solution. If it is a symbolic.Expression, it’s an expression over the optimization variables.

If weight=None, this is a hard equality constraint. Otherwise, weight*||f(x)||^2 is added to the objective.

addFeasibilityTest(f, weight=None)[source]

Adds an additional feasibility test.

addInequality(f, weight=None)[source]

Adds an inequality f(x) <= 0.

bind(**kwargs)[source]

Binds the variables specified by the keyword arguments

bindVars(*args)[source]
cost()[source]

Evaluates the cost function with the variables already bound.

costSymbolic()[source]

Returns a symbolic.Expression, over variables in self.context, that evaluates to the cost

equalityResidual(soft=True)[source]

Evaluates the equality + IK functions at the currently bound state x, stacking the results into a single vector. The residual should equal 0 (to a small tolerance) at a feasible solution.

If soft=True, also stacks the soft equalities.

equalityResidualSymbolic(soft=False)[source]

Returns a symbolic.Expression, over variables in self.context, that evaluates to the equality residual

equalitySatisfiedSymbolic(tol=0.001, soft=False)[source]

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the equality constraint is met with tolerance tol

feasibilityTestsPass(soft=False)[source]

Returns True if the currently bound state passes all black-box feasibility tests.

feasibilityTestsPassSymbolic(soft=False)[source]

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the black-box feasibility constraints are met

fromJson(object, context=None)[source]

Sets this problem from a JSON object representing it. A ValueError is raised if it is not the correct type.

getBounds()[source]

Returns optimization variable bounds as a list of (xmin,xmax) pairs. None is returned if the problem is unconstrained.

getProblem()[source]

Returns an OptimizationProblem instance over the optimization variables.

getVarValues()[source]

Saves the bindings for optimization variables in the current context into a list.

getVarVector()[source]

Flattens the bindings for optimization variables in the current context into a vector x.

inBounds()[source]

Returns True if all bounded variables are within their ranges at the currently bound state x

inBoundsSymbolic()[source]

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the configuration meets bound constraints

inequalityResidual(soft=False)[source]

Evaluates the inequality functions at the currently bound state x, stacking the results into a single vector. The residual should be <= 0 at a feasible solution.

If soft=True then this includes the soft inequality residuals.

inequalityResidualSymbolic(soft=False)[source]

Returns a symbolic.Expression, over variables in self.context, that evaluates to the inequality residual

inequalitySatisfiedSymbolic(soft=False)[source]

Returns a symbolic.Expression, over variables in self.context, that evaluates to True if the inequality constraint is met

isFeasible(eqTol=0.001)[source]

Returns True if the currently bound state passes all equality, inequality, joint limit, and black-box feasibility tests. Equality and IK constraints must be met with equality tolerance eqTol.

isFeasibleSymbolic(eqTol=0.001)[source]

Returns a symbolic.Expression, over $q and other user data variables, that evaluates to True if the configuration meets all feasibility tests

pprint(indent=0)[source]
preprocess(steps='all')[source]

Preprocesses the problem to make solving more efficient

Returns

(opt,optToSelf,selfToOpt) giving:

  • opt: a simplified version of this optimization problem. If no simplification can be performed, opt = self

  • optToSelf: a map of opt’s variables to self’s variables. If no simplification can be performed, optToSelf = None

  • selfToOpt: a map of self’s variables to opt’s variables. If no simplification can be performed, selfToOpt = None

Return type

tuple

Specific steps include:

  1. delete any objectives with 0 weight

  2. delete any optimization variables not appearing in expressions

  3. fixed-bound (x in [a,b], with a=b) variables are replaced with fixed values.

  4. simplify objectives

  5. TODO: replace equalities of the form var = expr by matching var to expr?

If optToSelf is not None, then it is a list of Expressions that, when eval’ed, produce the values of the corresponding optimizationVariables in the original optimization problem. selfToOpt performs the converse mapping. In other words, if opt has bound values to all of its optimizationVariables, the code:

for var,expr in zip(self.optimizationVariables,optToSelf):
    var.bind(expr.eval(opt.context))

binds all optimization variables in self appropriately.

randomVarBinding()[source]

Samples values for all optimization variables, sampling uniformly according to their bounds

satisfiesEqualities(tol=0.001)[source]

Returns True if every entry of the (hard) equality + IK residual equals 0 (to the tolerance tol).

satisfiesInequalities(margin=0)[source]

Returns True if, for the currently bound state x, every entry of the (hard) inequality residuals is <= -margin (default 0).

score(eqWeight=1.0, ineqWeight=1.0, infeasWeight=1.0)[source]

Returns an error score that is equal to the optimum at a feasible solution. Evaluated at the currently bound state x.

setBounds(var, xmin=None, xmax=None)[source]

Bounds the optimization variable var

setVarValues(s)[source]

Converts a state into bindings for the optimization variables in the current context.

setVarVector(x)[source]

Turns a vector x into bindings for the optimization variables in the current context.

solve(params=OptimizerParams(), preprocess=True, cache=False)[source]

Solves the optimization problem. The result is stored in the bound optimizationVariables.

If you will be solving the problem several times without modification (except for user data and initial values of optimizationVariables), you may set cache=True to eliminate some overhead. Note that caching does not work properly if you change constraints or non-optimization variables.
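
A sketch of the caching pattern, continuing the builder example above; the seed values are illustrative:

params = OptimizerParams(numIters=200, localMethod='scipy')
for seed in ([0.0, 0.0], [1.0, 1.0]):   # repeated solves of the same problem
    builder.setVarVector(seed)          # set initial variable values
    builder.solve(params, cache=True)   # reuses the preprocessed problem
    print(builder.getVarVector())       # read back the solution vector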

toJson(saveContextFunctions=False, prettyPrintExprs=False)[source]

Returns a JSON object representing this optimization problem.

Parameters
  • saveContextFunctions (bool, optional) – if True, saves all custom functions in self.context. If they are saved, then the current context is required to be the same context in which the problem is loaded.

  • prettyPrintExprs (bool, optional) – if True, prints expressions more nicely as more human-readable strings rather than JSON objects. These strings are parsed on load, which is a little slower than pure JSON.
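
A save/load roundtrip might look like the following sketch (the file name is illustrative):

import json

with open('problem.json', 'w') as f:
    json.dump(builder.toJson(prettyPrintExprs=True), f)

builder2 = OptimizationProblemBuilder(ctx)
with open('problem.json') as f:
    builder2.fromJson(json.load(f))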

unbind(**kwargs)[source]

Unbinds the variables specified by the keyword arguments

unbindVars()[source]
class klampt.math.optimize.OptimizerParams(numIters=50, tol=0.001, startRandom=False, numRestarts=1, timeout=10, globalMethod=None, localMethod=None)[source]

Bases: object

Methods:

fromJson(obj)

solve(optProblem[, seed])

Globally or locally solves an OptimizationProblem instance with the given parameters.

toJson()

fromJson(obj)[source]
solve(optProblem, seed=None)[source]

Globally or locally solves an OptimizationProblem instance with the given parameters. Optionally takes a seed as well.

Basically, this is a thin wrapper around GlobalOptimizer that converts the OptimizerParams to the appropriate format.

Returns

(success,x) where success is True or False and x is the solution.

Return type

tuple

toJson()[source]
klampt.math.optimize.sample_range(a, b)[source]

Samples x in the range [a,b].

  • If the range is bounded, the uniform distribution x~U(a,b) is used.

  • If the range is unbounded, a log transform is used to sample it.

Specifically, if a=-inf and b is finite, then \(x \sim b + \log(y)\) where \(y \sim U(0,1)\). A similar formula holds for a finite and \(b=\infty\).

If a=-inf and b=inf, then \(x \sim s*\log(y)\), where \(y \sim U(0,1)\) and the sign s takes on either of {-1,1} each with probability 0.5.
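
A sketch of this sampling rule, re-implemented for illustration (not the library's actual code):

import math, random

def sample_range_sketch(a, b):
    y = 1.0 - random.random()              # y ~ U(0,1]; avoids log(0)
    if a > -math.inf and b < math.inf:
        return random.uniform(a, b)        # bounded: x ~ U(a,b)
    if a == -math.inf and b == math.inf:
        s = 1 if random.random() < 0.5 else -1
        return s * math.log(y)             # doubly unbounded
    if a == -math.inf:
        return b + math.log(y)             # log(y) <= 0, so x <= b
    return a - math.log(y)                 # symmetric case: x >= a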