ABSTRACTS:


Nick Barron
Mathematics & Statistics, Loyola University Chicago

Applications of Quasiconvex Duality to Hamilton-Jacobi Equations and Optimal Transport

Quasiconvex functions are functions with convex sublevel sets. Hamilton-Jacobi equations that are quasiconvex in the gradient may be written, using quasiconvex duality, as Bellman equations of appropriate variational problems in $L^\infty$. This results in representation formulas for the solutions of a large class of Hamilton-Jacobi equations. Furthermore, using quasiconvex duality we may derive the dual of an optimal transport problem with $L^\infty$ cost functional.
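
As a reminder (with notation of our choosing, not taken from the abstract), a function $f$ is quasiconvex precisely when
\[
f(\lambda x + (1-\lambda)y) \le \max\{f(x), f(y)\} \qquad \text{for all } x, y \text{ and } \lambda \in [0,1],
\]
or, equivalently, when every sublevel set $\{x : f(x) \le \alpha\}$ is convex.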

The optimal transport result is joint with M. Bocea and R. Jensen.


Olga Brezhneva
Mathematics, Miami University

Optimality Conditions for Nonregular Inequality-Constrained Optimization Problems

In this talk, we present necessary and sufficient optimality conditions for some classes of nonregular inequality-constrained optimization problems. First, we analyze cases in which optimality conditions of the Karush-Kuhn-Tucker (KKT) type hold for nonregular problems, and we prove geometric necessary conditions and KKT-type optimality conditions under some new regularity assumptions. We then turn to nonregular problems for which KKT-type conditions do not hold and propose new necessary and sufficient optimality conditions.
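
For orientation (notation ours, not from the abstract), for the problem of minimizing $f(x)$ subject to $g_i(x) \le 0$, $i = 1, \dots, m$, the classical KKT conditions at a local minimizer $x^*$ assert the existence of multipliers $\lambda_i \ge 0$ such that
\[
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0, \qquad \lambda_i\, g_i(x^*) = 0, \quad i = 1, \dots, m,
\]
provided a constraint qualification holds; ``nonregular'' problems are those for which such qualifications fail, so that these conditions may break down.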


Asen L. Dontchev
Mathematical Reviews (AMS) and Mathematics, University of Michigan

Around the Inverse Function Theorem

The classical inverse/implicit function theorems revolve around solving an equation in terms of a parameter and tell us when the solution mapping associated with this equation is a differentiable function of the parameter. Already in 1927, Hildebrandt and Graves observed that one can put differentiability aside and focus on Lipschitz continuity only. More sophisticated results may be obtained by employing various concepts of generalized differentiability. As an illustration, I will present an unconventional implicit function theorem for an optimal control problem.
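
To fix notation (ours, not the speaker's), the basic setting is an equation $f(p, x) = 0$ in the unknown $x$ depending on a parameter $p$, together with its solution mapping
\[
S(p) = \{\, x : f(p, x) = 0 \,\}.
\]
The classical theorems give conditions under which $S$ is locally a single-valued differentiable function of $p$; the results described above relax what is assumed and concluded about $S$, for instance replacing differentiability by Lipschitz continuity.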


Yuri Ledyaev
Mathematics, Western Michigan University
Robert Kipka
Mathematics, Queens University

Optimal Control on Infinite-dimensional Manifolds

Optimal control problems for systems described by partial differential equations have been studied intensively during the last few decades. But for systems with internal symmetries it is natural to consider such optimal control problems on infinite-dimensional manifolds. Surprisingly, there is no literature on such infinite-dimensional optimal control problems on manifolds.

In this talk we discuss a mathematical framework for the analysis of optimal control problems on infinite-dimensional manifolds. In particular, we demonstrate nonsmooth analysis methods and Lagrangian chart techniques that can be used to study global variations of optimal trajectories of such control systems and to derive the Pontryagin maximum principle for them.
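
For orientation, here is the classical finite-dimensional form of the principle (notation ours); the talk concerns its extension to infinite-dimensional manifolds. For the problem of minimizing $\int_0^T L(x,u)\,dt$ subject to $\dot x = f(x,u)$ and $u(t) \in U$, define the Hamiltonian $H(x,p,u) = \langle p, f(x,u)\rangle - L(x,u)$. Along an optimal pair $(x^*, u^*)$ there exists an adjoint arc $p(\cdot)$ such that
\[
\dot p(t) = -\frac{\partial H}{\partial x}\big(x^*(t), p(t), u^*(t)\big), \qquad H\big(x^*(t), p(t), u^*(t)\big) = \max_{u \in U} H\big(x^*(t), p(t), u\big) \quad \text{for a.e. } t.
\]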


Daniel Liberzon
Electrical and Computer Engineering and Coordinated Science Lab, University of Illinois Urbana-Champaign

On Almost Lyapunov Functions

In this talk we will discuss asymptotic stability properties of nonlinear systems in the presence of ``almost Lyapunov'' functions, which decrease along solutions in a given region not everywhere but rather on the complement of a set of small volume. Nothing specific about the structure of this set is assumed besides an upper bound on its volume. We will show that solutions starting inside the region approach a small set around the origin whose volume depends on the volume of the set where the Lyapunov function does not decrease, as well as on other system parameters. The result is established by a perturbation argument which compares a given system trajectory with nearby trajectories that lie entirely in the set where the Lyapunov function is known to decrease, and trades off the convergence speed of these trajectories against the expansion rate of the distance from the given trajectory to them.
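
Schematically (notation ours, not from the abstract), the hypothesis can be pictured as follows: for a system $\dot x = f(x)$ and a candidate function $V$, one assumes a decrease condition only off an exceptional set,
\[
\nabla V(x) \cdot f(x) \le -W(x) < 0 \quad \text{for all } x \in R \setminus \Omega, \qquad \operatorname{vol}(\Omega) \le \varepsilon,
\]
where $R$ is the region of interest and nothing is assumed about $\Omega$ beyond the volume bound $\varepsilon$.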
 
This is joint work with Charles Ying and Vadim Zharnitsky (UIUC Math).


Mau Nam Nguyen
Mathematics, Portland State University

Nonsmooth Optimization Algorithms and Applications to Location Problems Involving Sets

Traditional facility location theory studies problems in which the facilities have negligible size (points), but it is natural to consider problems in which the facilities are sets of larger size. Besides the geometric beauty reflected by their connections to many well-known problems in computational geometry, these problems have promising applications. In this talk we present a number of nonsmooth optimization algorithms for solving continuous location problems involving sets. We also discuss further applications of these algorithms to other nonsmooth optimization problems in computational geometry and machine learning.
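
As one illustrative model (our choice of example, not necessarily the one emphasized in the talk), a set version of the classical Fermat-Torricelli problem seeks a point minimizing the sum of distances to given nonempty closed sets $\Omega_1, \dots, \Omega_n$:
\[
\min_{x} \ \sum_{i=1}^{n} d(x; \Omega_i), \qquad d(x; \Omega) = \inf_{w \in \Omega} \|x - w\|,
\]
a nonsmooth problem (convex when the sets are convex) to which subgradient-type and smoothing algorithms naturally apply.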


Aleksander Olshevsky
Industrial and Enterprise Systems Engineering, University of Illinois Urbana-Champaign

Convergence Rates in Decentralized Optimization

The widespread availability of copious amounts of data has created a pressing need to develop optimization algorithms that can work in parallel when the input data is not available in a single place but is instead spread over multiple locations. In this talk, we consider the problem of optimizing a sum of convex functions in a network where each node knows only one of the functions; this is a common model that includes as particular cases a number of distributed regression and classification problems. We develop a stochastic gradient method that is fully decentralized and robust to unpredictable node and link failures. Our main results yield convergence time bounds that simultaneously achieve the currently best scalings with time and network size for this problem.
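
A prototypical iteration of this kind (shown for orientation; the method of the talk differs in its handling of stochastic gradients and failures) is the consensus-based decentralized (sub)gradient update
\[
x_i^{k+1} \;=\; \sum_{j=1}^{n} w_{ij}\, x_j^{k} \;-\; \alpha_k\, g_i\big(x_i^{k}\big), \qquad i = 1, \dots, n,
\]
in which node $i$ keeps a local estimate $x_i^k$, averages it with the estimates of its neighbors through the weights $w_{ij}$ of a mixing matrix, and then takes a step along a (stochastic) gradient $g_i$ of its own function $f_i$ only.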


Ebrahim Sarabi
Boris Mordukhovich
Mathematics, Wayne State University

Full Stability of Optimal Solutions to Constrained and Minimax Problems

This talk presents new developments and applications of advanced tools of second-order variational analysis and generalized differentiation to the fundamental notion of full stability of local minimizers in general classes of constrained optimization and minimax problems. In particular, we derive second-order characterizations of full stability and investigate its relationships with other notions of stability for parameterized conic programs and minimax problems.
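
Roughly speaking (our paraphrase, with notation not taken from the abstract), full stability concerns the parametric family
\[
\min_x \ \varphi(x, w) - \langle v, x \rangle,
\]
and declares a point $\bar x$ a fully stable local minimizer for nominal parameters $(\bar v, \bar w)$ when, for some $\gamma > 0$, the localized solution map $(v, w) \mapsto \operatorname{argmin}\{\varphi(x, w) - \langle v, x\rangle : \|x - \bar x\| \le \gamma\}$ is single-valued and Lipschitz continuous near $(\bar v, \bar w)$ with value $\bar x$ there; tilt stability corresponds to perturbations in $v$ alone.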


Victor Zavala
Mathematics and Computer Science Division, Argonne National Laboratory

An Interior Point Framework for Structured Nonconvex Optimization

We present a filter line-search interior point framework for nonconvex NLPs (PIPS-NLP) capable of exploiting embedded structures in the linear algebra system without requiring inertia information. We focus on the issue of inertia because, we argue, requiring this information hinders the use of modular linear algebra implementations. The proposed framework instead relies on a test of the curvature along the tangential direction, which is sufficient to guarantee global convergence. We provide numerical evidence that the inertia-free test is as effective as an inertia detection strategy based on $LBL^T$ factorizations. We also provide scalability results for our linear algebra implementation using NLPs arising from the stochastic optimal control of natural gas networks.
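
Schematically (notation ours; the precise test in PIPS-NLP may differ in its details), an inertia-free acceptance rule checks a curvature condition of the form
\[
d^{\top} W d \;\ge\; \theta\, \|d\|^{2}, \qquad \theta > 0,
\]
for the computed (tangential) direction $d$ and the Hessian-like matrix $W$ of the barrier problem, and applies a regularization such as $W \leftarrow W + \delta I$ when the test fails, in place of computing the inertia of the full KKT matrix.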


Chao Zhu
Mathematical Sciences, University of Wisconsin Milwaukee

A Measure Approach for Continuous Inventory Models: Discounted Cost Criterion

This work develops a new approach to the solution of impulse control problems for continuous inventory models under a discounted cost criterion.  The analysis imbeds this stochastic problem in two different infinite-dimensional linear programs, parametrized by the initial inventory level $x_0$, by concentrating on particular functions and capturing the expected (discounted) behavior of the inventory level process and ordering decisions as measures.  The first imbedding then naturally leads to the minimization of a nonlinear function representing the cost associated with an $(s,S)$ ordering policy and an optimizing pair determines optimal levels $(s^*,S^*)$.  The lower bound arising from this imbedding is tight when $x_0 \geq s^*$ but is a strict lower bound when $x_0 < s^*$.  Solving the first linear program determines the value function in the ``no order'' region and is critical to the formulation of the second linear program.  The dual of the second linear program is then solved to provide a tight lower bound for all $x_0$, in particular for $x_0 < s^*$, and thereby completely determines the value function.  Existence of an optimal $(s,S)$ policy in the general class of ordering policies (and its characterization) is a consequence of the method, not an a priori assumption. No smoothness of the value function is required; instead, the level of smoothness results from its construction using the particular functions from which the linear programs are derived. This work places minimal assumptions on a general stochastic differential equation model for the inventory level.
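
For orientation (notation ours; the exact model in the talk may differ), a typical discounted-cost impulse control formulation of this type minimizes, over ordering times $\tau_k$ and order sizes $\xi_k > 0$, a cost of the form
\[
J(x_0) = \mathbb{E}\left[ \int_0^\infty e^{-\alpha t}\, c(X_t)\, dt \;+\; \sum_{k} e^{-\alpha \tau_k}\big( k_1 + k_2\, \xi_k \big) \right],
\]
where $X$ is the inventory level process started at $x_0$, $c$ is a holding/backorder cost, $\alpha > 0$ is the discount rate, and $k_1, k_2$ are fixed and proportional ordering costs. An $(s,S)$ policy orders up to the level $S$ whenever the inventory falls to the level $s$; the pair $(s^*, S^*)$ above is the optimal choice of these two levels.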

This is joint work with Kurt Helmes (Humboldt University of Berlin) and Richard H. Stockbridge (University of Wisconsin-Milwaukee).


Jim Zhu
Mathematics, Western Michigan University

Variational Approach to Lagrange Multipliers

Despite an extensive literature on various Lagrange multiplier rules, several fine points related to this important result are still worthy of further attention. First, Lagrange multipliers are intrinsically related to the derivative, or derivative-like objects, of the optimal value function. Moreover, complementary slackness conditions are nice additional information to have when an optimal solution exists, but they are not intrinsic to the existence and application of the Lagrange multiplier rule. Finally, computing Lagrange multipliers often relies on decoupling information in terms of each individual constraint. Sufficient conditions are often needed for this purpose, but they are often mixed with the core condition. We approach the Lagrange multiplier rule from a variational perspective and, time permitting, will illustrate the issues alluded to above with examples.
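
As a simple instance of the first point (convex case, notation ours), let $v(y) = \inf\{ f(x) : g(x) \le y \}$ with $f$ and $g$ convex. If $\lambda \ge 0$ is a Lagrange multiplier at $y = 0$, so that $\inf_x \{ f(x) + \langle \lambda, g(x) \rangle \} = v(0)$, then
\[
v(y) \;\ge\; v(0) - \langle \lambda, y \rangle \qquad \text{for all } y,
\]
that is, $-\lambda$ is a subgradient of the optimal value function $v$ at $0$; when $v$ is differentiable this reads $\lambda = -\nabla v(0)$, which is the precise sense in which multipliers are derivative-like objects attached to the value function.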

This talk is based on a collaborative survey project with Jon Borwein.