From classical statistics to modern machine learning, by Mikhail Belkin (UCSD), June 28, 2021

 Summary

GdR ISIS Théorie du deep learning - June 28, 2021

From classical statistics to modern machine learning (invited talk)

By Mikhail Belkin (UCSD)

"A model with zero training error is overfit to the training data and will typically generalize poorly" goes statistical textbook wisdom. Yet, in modern practice, over-parametrized deep networks with near perfect fit on training data still show excellent test performance.
As I will discuss in my talk, this apparent contradiction is key to understanding modern machine learning. While classical methods rely on the bias-variance trade-off where the complexity of a predictor is balanced with the training error, "modern" models are best described by interpolation, where a predictor is chosen among functions that fit the training data exactly, according to a certain inductive bias. Furthermore, classical and modern models can be unified within a single "double descent" risk curve, which extends the usual U-shaped bias-variance trade-off curve beyond the point of interpolation. This understanding of model performance delineates the limits of classical analyses and opens new lines of enquiry into computational, statistical, and mathematical properties of models. A number of implications for model selection with respect to generalization and optimization will be discussed.
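As a concrete illustration of the double-descent curve mentioned in the abstract, the sketch below fits minimum-norm least squares on random ReLU features and sweeps the number of features past the interpolation threshold (here, 20 training points). This example is not taken from the talk; the target function, noise level, and feature counts are illustrative assumptions chosen only to make the phenomenon visible.

```python
# Minimal double-descent sketch (illustrative, not from the talk):
# minimum-norm least squares on random ReLU features, sweeping the
# number of features past the interpolation threshold.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Assumed ground-truth function for this toy experiment.
    return np.sin(2 * np.pi * x)

n_train, noise = 20, 0.1
x_train = rng.uniform(-1, 1, n_train)
y_train = target(x_train) + noise * rng.standard_normal(n_train)
x_test = rng.uniform(-1, 1, 500)
y_test = target(x_test)

def relu_features(x, w, b):
    # Random ReLU feature map: phi_j(x) = max(0, w_j * x + b_j).
    return np.maximum(0.0, np.outer(x, w) + b)

for n_feat in [2, 5, 10, 15, 20, 25, 50, 100, 500, 2000]:
    w = rng.standard_normal(n_feat)
    b = rng.standard_normal(n_feat)
    Phi_train = relu_features(x_train, w, b)
    Phi_test = relu_features(x_test, w, b)
    # Minimum-norm least-squares solution; it interpolates the
    # training data once n_feat >= n_train.
    coef = np.linalg.pinv(Phi_train) @ y_train
    train_mse = np.mean((Phi_train @ coef - y_train) ** 2)
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"features={n_feat:5d}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```

Running this typically shows the training error dropping to (numerical) zero once the feature count reaches the number of training samples, while the test error peaks near that interpolation threshold and then decreases again in the over-parametrized regime, tracing the two descents of the risk curve.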
