My research in recent years has included papers on

control with multiple objectives, control with partial

differential equations, and computing optimal control

(with Islam).

Some of the ideas are, briefly, as follows. The

minimization of a function f(x), subject to constraints

g(x) <= 0, can be described using a Lagrangian function

f(x) + vg(x), where v is a Lagrange multiplier. If f is

vector valued, then "minimum" at a point p must be

redefined, replacing f(x) >= f(p) by NOT [f(x) < f(p)],

with some ordering of the vectors. The Lagrangian L is

replaced by tf(x) + vg(x), with an additional multiplier

t. If the functions are differentiable, then L has zero

gradient at the minimum, for some t and v. There are

analogs of this result when the functions are not

differentiable, or their values are sets instead of

points. One approach is to smooth f and g by averaging

their values over nearby points; this gives a smooth

problem, approximating the given one.
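As a toy illustration of this smoothing idea (my own construction, not from the text): averaging the nonsmooth function f(x) = |x| over a small interval around each point gives a smooth function that agrees with |x| away from the kink.

```python
# Smooth a nonsmooth function by averaging its values over nearby points.
# Toy example: f(x) = |x|, averaged over [x - eps, x + eps].

def f(x):
    return abs(x)

def smoothed(x, eps=0.1, n=1000):
    # Numerical average of f over [x - eps, x + eps] (midpoint rule).
    h = 2 * eps / n
    return sum(f(x - eps + (i + 0.5) * h) for i in range(n)) * h / (2 * eps)

# Closed form of the average for f = |x|:
#   (x**2 + eps**2) / (2*eps)  if |x| <= eps,  else |x|.
def smoothed_exact(x, eps=0.1):
    return (x * x + eps * eps) / (2 * eps) if abs(x) <= eps else abs(x)

for x in [-0.5, -0.05, 0.0, 0.05, 0.5]:
    assert abs(smoothed(x) - smoothed_exact(x)) < 1e-6
# The averaged function is differentiable at 0 (value eps/2 > 0 = f(0))
# and equals f away from the kink: a smooth approximating problem.
```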

When do the Lagrangian conditions, in turn, imply a

minimum? This is true if the vector function F = (f,g) is convex,

that is, if F(x) - F(p) >= F'(p)(x-p). But we can replace x-p by

some "scale function" of x and p, and the sufficiency still holds.

This makes F an "invex" vector function. This extends to

functions F that are not differentiable, in two ways.

One can replace derivatives by tangent cones to the graph

of F. Or F may satisfy an inequality

tF(x) + (1-t)F(p) >= F(z) for some suitable z,

depending on x,p,t. We still get sufficient conditions.
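A minimal numerical check of the "invex" idea (my own toy example, not from the text): f(x) = x³ + x is not convex, but its derivative never vanishes, so the scale function eta(x,p) = (f(x) - f(p)) / f'(p) makes the invex inequality hold everywhere.

```python
# Toy invexity check (my example): f(x) = x**3 + x is not convex,
# yet f'(x) = 3x**2 + 1 > 0 everywhere, so f is invex with the
# "scale function" eta(x, p) = (f(x) - f(p)) / f'(p):
#   f(x) - f(p) >= f'(p) * eta(x, p)   (here, with equality).

def f(x):
    return x**3 + x

def df(x):
    return 3 * x**2 + 1

def eta(x, p):
    return (f(x) - f(p)) / df(p)

# f is not convex: the chord condition fails between -1 and 0.
assert f(-0.5) > 0.5 * (f(-1.0) + f(0.0))   # -0.625 > -1

# The convexity inequality with x - p fails for some points ...
p, x = -0.5, -1.0
assert f(x) - f(p) < df(p) * (x - p)
# ... but the invex inequality with eta(x, p) holds on a whole grid.
pts = [i * 0.25 - 2.0 for i in range(17)]   # grid on [-2, 2]
for p in pts:
    for x in pts:
        assert f(x) - f(p) >= df(p) * eta(x, p) - 1e-12
```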

If there is an equality constraint h(x) = 0, then this

makes x fall on some curved surface. More generally,

what happens to "invex" when x lies on a manifold? A

somewhat restricted "invex" corresponds to

"convexifiable", thus F can be made convex by

transforming the underlying space. This idea works for

manifolds, if we assume singular points are well behaved,

and then "invex" corresponds to a topological property

("zero index") at singular points. Extensions to vector

functions are being sought.
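A toy illustration of "convexifiable" (my own construction, hedged): the non-convex but strictly increasing f(x) = x³ + x becomes convex, in fact linear, after transforming the underlying line by its own inverse.

```python
# Toy "convexifiable" example (my construction): f(x) = x**3 + x is not
# convex, but transforming the underlying space by the homeomorphism
# phi = f^{-1} makes the composed function h(y) = f(phi(y)) = y convex.

def f(x):
    return x**3 + x

def f_inv(y, lo=-100.0, hi=100.0):
    # f is strictly increasing, so invert it by bisection.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def h(y):
    # The "convexified" function; equals y up to bisection tolerance.
    return f(f_inv(y))

# f itself is not convex (chord condition fails between -1 and 0):
assert f(-0.5) > 0.5 * (f(-1.0) + f(0.0))
# h satisfies the midpoint convexity inequality on a grid:
ys = [i * 0.5 - 4.0 for i in range(17)]
for a in ys:
    for b in ys:
        assert h(0.5 * (a + b)) <= 0.5 * (h(a) + h(b)) + 1e-6
```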

Lagrange multipliers, like shadow prices in linear

programming, measure the sensitivity of a minimum value

to small perturbations of the problem. It is harder to

find what happens to a minimum point when the problem is

perturbed. This requires the problem to have a stability

property, and this happens when "invex" is strengthened a

bit. All this works also with optimal control problems,

where x is replaced by a "state function" and a "control

function".

Optimal control questions are equivalent to mathematical

programs in infinite-dimensional spaces, and Lagrangian

conditions apply. The Pontryagin theory can be deduced

from this viewpoint (see my 1995 book), and this has led to

the development of two computer packages for optimal control

(OCIM and SCOM); the latter, using MATLAB, runs on a

desktop computer, and facilitates investigations, such as

sensitivity analysis, with a minimum of programming. A

number of optimal control models for economics (especially

growth models) and finance have been developed (with Islam),

and some of them computed.
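The equivalence between optimal control and mathematical programming can be sketched numerically (my own minimal sketch, not the OCIM or SCOM packages): discretizing a small control problem turns it into a finite-dimensional program, and the gradient is obtained from backward costates, echoing the Pontryagin adjoint.

```python
# Sketch (my construction, not OCIM/SCOM): a discretized optimal control
# problem solved as an ordinary mathematical program.
# Minimize J = sum_k (x_k**2 + u_k**2) * dt, subject to the state
# equation x_{k+1} = x_k + u_k * dt with x_0 = 1, by gradient descent,
# computing the gradient via backward costates (adjoint variables).

N, dt = 20, 0.1
u = [0.0] * N                        # discretized control function

def rollout(u):
    x = [1.0]
    for k in range(N):
        x.append(x[k] + u[k] * dt)   # state equation
    return x

def cost(u):
    x = rollout(u)
    return sum((x[k] ** 2 + u[k] ** 2) * dt for k in range(N))

def gradient(u):
    x = rollout(u)
    lam = [0.0] * (N + 1)            # costates, lam_N = 0
    for k in range(N - 1, -1, -1):
        lam[k] = lam[k + 1] + 2 * x[k] * dt
    # dJ/du_k combines the running cost and the costate, as in the
    # stationarity condition of the (discrete) Hamiltonian.
    return [(2 * u[k] + lam[k + 1]) * dt for k in range(N)]

J0 = cost(u)
for _ in range(2000):                # simple gradient descent
    g = gradient(u)
    u = [u[k] - 0.5 * g[k] for k in range(N)]

assert cost(u) < J0                              # the control improved
assert max(abs(g) for g in gradient(u)) < 1e-8   # stationary to tolerance
```

Packages such as SCOM automate this reformulation, the integration, and the sensitivity analysis; the point here is only that the Lagrangian/costate machinery of the finite-dimensional program carries over directly.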

For graduate students supervised, see: Graduate Students