

GdR ISIS Théorie du deep learning - June 28, 2021

Some Theoretical Insights into Wasserstein GANs (invited talk)

By Gérard Biau (Sorbonne Université)

Generative Adversarial Networks (GANs) have produced outstanding results in areas as diverse as image, video, and text generation. Building on these successes, a large number of empirical studies have validated the benefits of the cousin approach called Wasserstein GANs (WGANs), which stabilizes the training process. In the present contribution, we add a new stone to the edifice by offering some theoretical advances on the properties of WGANs. First, we properly define the architecture of WGANs in the context of integral probability metrics parameterized by neural networks and highlight some of their basic mathematical features. We stress in particular the interesting optimization properties arising from the use of a parametric 1-Lipschitz discriminator. Then, in a statistically driven approach, we study the convergence of empirical WGANs as the sample size tends to infinity, and clarify the adversarial effects of the generator and the discriminator by underlining some trade-off properties. These features are finally illustrated with experiments using both synthetic and real-world datasets.
This is joint work with Maxime Sangnier (Sorbonne Université) and Ugo Tanielian (Sorbonne Université & Criteo).
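For readers unfamiliar with the setting evoked in the abstract, the integral probability metric (IPM) framework and the WGAN objective can be sketched as follows. This is a standard textbook formulation, not an extract from the talk itself:

```latex
% Integral probability metric indexed by a class F of critic functions:
d_{\mathcal{F}}(\mu, \nu)
  = \sup_{f \in \mathcal{F}}
    \Big| \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \Big|.

% When F is the class of all 1-Lipschitz functions, d_F coincides with the
% Wasserstein-1 distance (Kantorovich-Rubinstein duality). WGANs restrict F
% to a parametric family of 1-Lipschitz neural-network discriminators
% D_alpha, and the generator G_theta (fed with noise Z) is trained via
\inf_{\theta} \, \sup_{\alpha} \;
  \mathbb{E}_{X \sim \mu^{\star}}[D_{\alpha}(X)]
  - \mathbb{E}_{Z}[D_{\alpha}(G_{\theta}(Z))],
% where mu* denotes the target (data) distribution.
```

Replacing the full 1-Lipschitz class by a parametric neural family is precisely what makes the discriminator's optimization properties, and the resulting generator/discriminator trade-offs, nontrivial to analyze.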


  • Contributor(s): Gérard Biau (Sorbonne Université) (author)
  • Updated on: June 29, 2021, 4:58 p.m.
