Center for Applied Mathematics Colloquium

Damek Davis, Cornell University
A SMART stochastic algorithm for nonconvex optimization with applications to robust machine learning

Friday, February 10, 2017 - 3:30pm
Rhodes 655

In this talk, we show how to transform any optimization problem that arises from fitting a machine learning model into one that simultaneously (1) detects and removes contaminated data from the training set and (2) fits the trimmed model to the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic gradient algorithm. For datasets of size $n$, our approach requires $O(n^{2/3}/\varepsilon)$ gradient evaluations to reach $\varepsilon$-accuracy and, when a certain error bound holds, the complexity improves to $O(\kappa n^{2/3}\log(1/\varepsilon))$. These rates are $n^{1/3}$ times better than those achieved by typical full-gradient methods.
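To give a concrete feel for the trimming formulation in (1)-(2), here is a minimal Python sketch that alternates between stochastic gradient steps on the currently trusted samples and re-selecting the best-fitting fraction of the data. This is a toy alternating heuristic for trimmed least squares, written for illustration only; it is not the SMART algorithm from the talk, and all names and parameters (trimmed_sgd, keep_frac, lr) are hypothetical.

    import numpy as np

    def trimmed_sgd(X, y, keep_frac=0.8, lr=0.01, epochs=50, seed=0):
        # Toy trimmed least squares: alternately (a) run SGD on the
        # samples currently believed uncontaminated and (b) re-select
        # the keep_frac fraction of samples with the smallest residuals.
        # Illustrative sketch only -- not the SMART method of the talk.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        h = int(keep_frac * n)          # number of samples to trust
        theta = np.zeros(d)
        kept = np.arange(n)             # start by trusting every sample
        for _ in range(epochs):
            for i in rng.permutation(kept):
                resid = X[i] @ theta - y[i]
                theta -= lr * resid * X[i]   # SGD step on one kept sample
            losses = (X @ theta - y) ** 2
            kept = np.argsort(losses)[:h]    # keep the h best-fitting samples
        return theta, kept

    # Example with 10% contaminated labels (synthetic data for illustration):
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    theta_true = np.array([1.0, -2.0, 0.5])
    y = X @ theta_true + 0.1 * rng.normal(size=200)
    y[:20] += 10.0                      # contaminate 10% of the labels
    theta, kept = trimmed_sgd(X, y)

The point of the sketch is the coupling the abstract describes: the set of trusted samples and the model parameters are updated together, so the nonconvexity comes from the discrete choice of which data to trim, even when the underlying fitting loss is convex.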

This talk will assume minimal background in optimization beyond knowledge of the gradient descent algorithm.