Statistics Seminar

Ambuj Tewari, University of Michigan
Random perturbations in machine learning

Wednesday, April 26, 2017 - 4:15pm
Biotech G01

Hannan proved a fundamental result in online learning that led to a notion now known as Hannan consistency. Breiman coined the term "bagging" to denote bootstrap aggregating. Current algorithms for training deep neural networks use a technique called dropout. What do these three ideas have in common?

All three ideas rely on random perturbations of some sort to enable learning. With help from collaborators, I have been trying to better understand the mathematical properties of random perturbation methods in machine learning, especially in online learning. In this talk, I will briefly describe what we have learned. I will also discuss fascinating questions that remain open. No prior knowledge of online learning will be assumed.