ICTP Online Colloquium 25 November

Professor Stéphane Mallat on "Mathematical Mysteries of Deep Neural Networks"

20/11/2020 - Trieste

ICTP is pleased to welcome Professor Stéphane Mallat from the Collège de France for an online Colloquium on November 25 at 16:00 CET, on "Mathematical Mysteries of Deep Neural Networks".

All are welcome to attend the Colloquium, which will be held via Zoom, with pre-registration required. A Q&A session will follow, and the talk will also be live-streamed on ICTP’s YouTube channel, with a recording posted at the end of the session.

Stéphane Mallat is an applied mathematician and Professor at the Collège de France, where he holds the chair in Data Sciences. He is a member of the French Academy of Sciences and the French Academy of Technologies, and a foreign member of the US National Academy of Engineering. He was a professor at the Courant Institute of NYU in New York for ten years, then at the École Polytechnique and the École Normale Supérieure in Paris. He was also the co-founder and CEO of a semiconductor start-up company.

Mallat's research focuses on machine learning, signal processing and harmonic analysis. He developed multiresolution wavelet theory, with applications to image processing and sparse representations. He now works on the mathematical understanding of deep neural networks and their applications.


ABSTRACT: Deep neural networks obtain impressive results in image, sound and language recognition, and in addressing complex problems in physics. They are partly responsible for the renewal of artificial intelligence. Yet we do not understand why they work so well, or why they sometimes fail, which raises many problems of robustness and explainability.
Recognizing or classifying data amounts to approximating phenomena that depend on a very large number of variables. The resulting combinatorial explosion of possibilities can make the problem impossible to solve directly. One can learn from data only if the problem is highly structured, and deep neural networks appear to take advantage of these unknown structures. Understanding this "architecture of complexity" involves many branches of mathematics and is related to open questions in physics. I will discuss some approaches and show applications.
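The "combinatorial explosion" the abstract refers to is often called the curse of dimensionality. A minimal sketch (our own illustration, not taken from the talk) of how the cost of naively sampling a function of many variables grows with the number of variables:

```python
# Toy illustration of the combinatorial explosion mentioned in the abstract
# (not part of the talk): covering the unit cube [0, 1]^d with a grid of
# 10 points per axis requires 10^d samples, so direct approximation of a
# function of d variables quickly becomes infeasible as d grows.

def grid_points(d: int, points_per_axis: int = 10) -> int:
    """Number of grid points needed to sample [0, 1]^d at a fixed resolution."""
    return points_per_axis ** d

if __name__ == "__main__":
    for d in (1, 2, 10, 100):
        # An image with 100 variables (pixels) already needs ~1e100 samples.
        print(f"d = {d:>3}: {grid_points(d):.3e} grid points")
```

Learning from data is possible only because real problems have structure that sidesteps this exponential cost, which is the structure the talk proposes to examine.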