ORIE Colloquium

Tuo Zhao, Johns Hopkins University
Compute faster and learn better: machine learning via nonconvex model-based optimization

Tuesday, March 22, 2016 - 4:15pm
Rhodes 253

Nonconvex optimization naturally arises in many machine learning problems (e.g., sparse learning, matrix factorization, and tensor decomposition). Machine learning researchers exploit various nonconvex formulations to gain modeling flexibility, estimation robustness, adaptivity, and computational scalability. Although classical computational complexity theory has shown that solving nonconvex optimization problems is NP-hard in the worst case, practitioners have proposed numerous heuristic optimization algorithms that achieve outstanding empirical performance in real-world applications.
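As one concrete illustration of such a heuristic (not drawn from the talk itself), consider plain gradient descent on the low-rank matrix factorization objective f(U, V) = 0.5 ||M - U Vᵀ||²_F, which is nonconvex in (U, V) yet works well in practice. The sketch below is illustrative only; the function name, step size, and iteration count are assumptions chosen for the example.

```python
import numpy as np

def factorize(M, rank, step=0.005, iters=1000, seed=0):
    """Heuristic gradient descent on the nonconvex objective
    f(U, V) = 0.5 * ||M - U V^T||_F^2 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank)) / np.sqrt(rank)
    V = rng.standard_normal((n, rank)) / np.sqrt(rank)
    for _ in range(iters):
        R = U @ V.T - M                 # residual
        gU = R @ V                      # gradient of f w.r.t. U
        gV = R.T @ U                    # gradient of f w.r.t. V
        U -= step * gU
        V -= step * gV
    return U, V

# Usage: recover a rank-2 matrix from its exact entries.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
U, V = factorize(M, rank=2)
print(np.linalg.norm(M - U @ V.T) / np.linalg.norm(M))  # small relative error
```

Despite the nonconvexity, random initialization followed by first-order updates typically recovers the low-rank factors, matching the empirical success the abstract describes.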

To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory, which incorporate statistical thinking into modern optimization. In particular, when designing practical computational algorithms, we take the underlying statistical models into consideration (e.g., sparsity, low-rankness). Our novel algorithms exploit hidden geometric structures behind many nonconvex optimization problems, and can obtain global optima with the desired statistical properties in polynomial time with high probability.
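To make the "model-based" idea concrete, a classical instance is iterative hard thresholding for sparse linear regression: each gradient step is followed by projection onto the nonconvex set of s-sparse vectors, so the algorithm explicitly uses the sparsity model. This is a standard textbook example chosen for illustration, not necessarily the speaker's algorithm; all parameter names and values below are assumptions.

```python
import numpy as np

def iht(X, y, s, step=None, iters=300):
    """Iterative hard thresholding: gradient descent on the least-squares
    loss, followed by projection onto the nonconvex set {b : ||b||_0 <= s}.
    The sparsity model makes the nonconvex problem tractable in practice."""
    n, d = X.shape
    if step is None:
        # conservative step size from the largest singular value of X
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    b = np.zeros(d)
    for _ in range(iters):
        b = b + step * X.T @ (y - X @ b)   # gradient step on 0.5*||y - Xb||^2
        keep = np.argsort(np.abs(b))[-s:]  # indices of the s largest magnitudes
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        b[~mask] = 0.0                     # hard-threshold to an s-sparse vector
    return b

# Usage: recover a 5-sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
n, d, s = 200, 500, 5
X = rng.standard_normal((n, d)) / np.sqrt(n)  # well-conditioned design
beta = np.zeros(d)
beta[:s] = rng.standard_normal(s)
y = X @ beta + 0.01 * rng.standard_normal(n)
print(np.linalg.norm(iht(X, y, s) - beta))    # small estimation error
```

Under standard design conditions, analyses of this kind of algorithm show geometric convergence to an estimate with near-optimal statistical error, which is the flavor of guarantee the abstract refers to.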