Stochastic variants of classical optimization methods, with complexity guarantees

With Professor Coralia Cartis

Optimization is a key component of machine learning applications, underpinning the training of (nonconvex, neural-network) models and the tuning of their parameters. Classical optimization methods are challenged by the scale of machine learning applications, by the cost or unavailability of full derivatives, and by the stochastic nature of the problems. On the other hand, the simple approaches commonly used in the machine learning community leave room for improvement. Here we aim to merge the two perspectives, adapting the strengths of classical optimization techniques to meet the challenges of data science applications: from deterministic to stochastic problems, from typical to large scale. We propose a general algorithmic framework, with an accompanying complexity analysis, that allows classical methods for nonconvex optimization to use inexact, stochastic, and possibly even biased problem information. This work is joint with Katya Scheinberg (Cornell), Jose Blanchet (Columbia) and Matt Menickelly (Argonne).
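To make the idea concrete, the Python sketch below (our own illustrative assumption, not the speakers' framework) runs a trust-region-style loop in which exact gradients and function values are replaced by minibatch estimates on a toy nonconvex finite-sum problem; the problem, loss, and all parameter choices are hypothetical stand-ins.

import numpy as np

rng = np.random.default_rng(0)

# Toy nonconvex finite-sum problem: f(x) = (1/n) * sum_i rho(a_i @ x - b_i)
# with the nonconvex robust loss rho(r) = r^2 / (1 + r^2).
n, d = 1000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def loss_and_grad(x, idx):
    """Minibatch estimates of f and grad f over the sampled indices idx."""
    r = A[idx] @ x - b[idx]
    w = 1.0 + r * r
    f = np.mean(r * r / w)
    g = A[idx].T @ (2.0 * r / (w * w)) / len(idx)
    return f, g

x = np.zeros(d)
delta = 1.0        # trust-region radius
batch = 64         # minibatch size (hypothetical choice)
for k in range(200):
    # Stochastic model: gradient estimated from a minibatch.
    idx = rng.choice(n, size=batch, replace=False)
    _, g_k = loss_and_grad(x, idx)
    # Cauchy step for the linear model m(s) = f(x) + g_k @ s, ||s|| <= delta.
    s = -delta * g_k / (np.linalg.norm(g_k) + 1e-12)
    # Classical ratio test, but with function values estimated on a
    # fresh sample rather than computed exactly.
    idx2 = rng.choice(n, size=batch, replace=False)
    f0, _ = loss_and_grad(x, idx2)
    f1, _ = loss_and_grad(x + s, idx2)
    pred = -g_k @ s
    rho = (f0 - f1) / (pred + 1e-12)
    if rho >= 0.1:                     # accept step, expand radius
        x = x + s
        delta = min(2.0 * delta, 10.0)
    else:                              # reject step, shrink radius
        delta *= 0.5

print("final full-sample objective:", loss_and_grad(x, np.arange(n))[0])

The accept/reject mechanism is the classical trust-region ratio test; the only change is that the model and the reduction estimates are built from sampled data, which is the kind of inexact, stochastic information such a framework is designed to tolerate.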
