Creative Commons license — From classical statistics to modern machine learning, by Mikhail Belkin (UCSD) [interrupted] (28 June 2021)

 Description

GdR ISIS Théorie du deep learning - June 28, 2021

From classical statistics to modern machine learning (invited talk)

By Mikhail Belkin (UCSD)

"A model with zero training error is overfit to the training data and will typically generalize poorly" goes statistical textbook wisdom. Yet, in modern practice, over-parametrized deep networks with near perfect fit on training data still show excellent test performance.
As I will discuss in my talk, this apparent contradiction is key to understanding modern machine learning. While classical methods rely on the bias-variance trade-off where the complexity of a predictor is balanced with the training error, "modern" models are best described by interpolation, where a predictor is chosen among functions that fit the training data exactly, according to a certain inductive bias. Furthermore, classical and modern models can be unified within a single "double descent" risk curve, which extends the usual U-shaped bias-variance trade-off curve beyond the point of interpolation. This understanding of model performance delineates the limits of classical analyses and opens new lines of enquiry into computational, statistical, and mathematical properties of models. A number of implications for model selection with respect to generalization and optimization will be discussed.
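The interpolation regime described above can be illustrated with a minimal NumPy sketch (an assumed toy setup, not taken from the talk): fitting a random Fourier feature model by minimum-norm least squares, the training error drops to numerical zero once the number of features exceeds the number of training points — the model interpolates, yet the minimum-norm inductive bias selects a particular interpolant.

```python
import numpy as np

# Toy 1-D regression data (hypothetical example, not from the talk).
rng = np.random.default_rng(0)
n = 20
x = rng.uniform(-1.0, 1.0, size=n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

def random_fourier_features(x, p, rng):
    """Map scalars x to p random cosine features: cos(w_j * x + b_j)."""
    w = rng.normal(scale=5.0, size=p)
    b = rng.uniform(0.0, 2 * np.pi, size=p)
    return np.cos(np.outer(x, w) + b)

train_errors = {}
for p in (5, 20, 200):
    Phi = random_fourier_features(x, p, np.random.default_rng(1))
    # Minimum-norm least-squares fit via the pseudoinverse: when p >= n,
    # this picks, among all exact interpolants, the one with smallest l2 norm.
    coef = np.linalg.pinv(Phi) @ y
    train_errors[p] = float(np.mean((Phi @ coef - y) ** 2))
    print(f"p={p:4d}  train MSE = {train_errors[p]:.2e}")
```

With p = 200 features and n = 20 points the fit interpolates (training MSE at machine precision); tracking *test* error across p is what traces out the double descent curve the abstract refers to.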

 Information

  • Updated on: 29 June 2021 20:16
  • Duration: 00:11:13
  • Number of views: 21
  • Main language: English
  • Audience: Other
