File Exchange

## Optimal Inverse Function Creation

version 1.0 (181 KB)

Creates continuous, optimal inverse-functions: given a desired output, finds the optimal input

Given an output function f(x) and a cost function J(x) (and their gradients), the code finds optimal inverse functions that produce the input x* with the lowest cost for the current desired output. Both the output and cost functions return a scalar. The method is demonstrated on 2D problems for convenient visualization; with the 2D plots removed (all but Figure 41), the code can also run on higher-dimensional systems.
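A minimal sketch of how such a 2D problem might be specified as a structure of function handles (the field names here are hypothetical, not necessarily the submission's actual interface):

```matlab
% Hypothetical 2D example: scalar output f(x) and scalar cost J(x),
% each with its gradient supplied as a function handle.
problem.f     = @(x) x(1)^2 + x(2);   % output function f(x)
problem.gradf = @(x) [2*x(1); 1];     % gradient of f
problem.J     = @(x) x.'*x;           % cost function J(x)
problem.gradJ = @(x) 2*x;             % gradient of J
```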

Inverse functions are guaranteed to be continuous, so inputs will be continuous if the desired output changes continuously. Continuity of the input is typically a required condition for real-time systems, so an inverse function can be paired with a sensor for online optimization of multiple-input systems. The inverse function significantly reduces an operator’s cognitive load (or the complexity of a higher-level planner) from managing many inputs to simply setting the desired output, with no loss of optimality.

Because each point on an inverse function is locally optimal, multiple inverse functions can be created, possibly with overlapping domains. This allows selection among fundamentally different solutions, which can be chosen based on situational constraints (such as temporary blockades) or better expected performance over an anticipated distribution of outputs. Since there may not be one ‘best’ inverse function, all of them are found and the user can apply their God-given judgment.

The algorithm is based on constrained gradient descent with a population of agents. Once a constrained optimal point is found, the desired output is changed and a new optimal point is found. Each cluster of connected optimal points is then interpolated in the output to produce intermediate output values. The method is similar to “Population based optimization for variable operating points,” IEEE CEC 2011, http://dx.doi.org/10.1109/CEC.2011.5949611, but adds the Armijo rule to ensure convergence.
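The Armijo (backtracking) step-size rule mentioned above can be sketched as follows for plain, unconstrained gradient descent; this is illustrative only, since the submission's implementation additionally handles the output constraint and a population of agents:

```matlab
% Gradient descent with the Armijo backtracking rule on an example cost.
J     = @(x) x.'*x;          % example cost (made up for illustration)
gradJ = @(x) 2*x;            % its gradient
x     = [3; -2];             % starting point
beta  = 0.5;                 % step-shrink factor
sigma = 1e-4;                % sufficient-decrease parameter
for iter = 1:100
    g = gradJ(x);
    if norm(g) < 1e-8, break, end
    t = 1;                   % trial step size
    % Shrink t until the Armijo sufficient-decrease condition holds:
    % J(x - t*g) <= J(x) - sigma*t*||g||^2
    while J(x - t*g) > J(x) - sigma*t*(g.'*g)
        t = beta*t;
    end
    x = x - t*g;             % accept the step
end
```

The sufficient-decrease test is what guarantees convergence: each accepted step is forced to reduce the cost by an amount proportional to the squared gradient norm.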

A problem is specified in a structure that is passed to the optimization function along with an initial population. Inverse functions are returned in a cell array:

```matlab
InverseFunctions = InvFun(InitialPopulation, ProblemStructure)
```

An inverse function is evaluated by 1-dimensional, shape-preserving spline interpolation:

```matlab
xStar = pchip(InverseFunctions{k}.y, InverseFunctions{k}.x, DesiredY)
```

This is demonstrated in InvFun_Test.m, as shown by the published results accompanying the submission.
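For concreteness, here is how a stored inverse function might be evaluated over a sweep of desired outputs with pchip; the sampled data below is made up purely for illustration:

```matlab
% One inverse function, sampled at optimal (output, input) pairs.
inv.y = [0 1 2 3];            % outputs along the inverse function
inv.x = [0 0.5 1.4 3.1];      % corresponding locally optimal inputs
% Shape-preserving interpolation gives the optimal input for any
% desired output in the sampled range.
yDesired = linspace(0, 3, 50);
xStar = pchip(inv.y, inv.x, yDesired);
```

Because pchip is shape-preserving, the interpolated inputs stay monotone wherever the sampled data is monotone, which helps keep the commanded input well-behaved as the desired output sweeps through its range.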

According to the dependency report, no toolboxes are needed. I am not aware of any backward-compatibility issues (I am not sure when subfunctions were first allowed, but that would be my first guess at a problem).