11–26 Nov 2021
Europe/Budapest timezone

Taming neural networks with TUSLA: Non-convex learning via adaptive stochastic gradient Langevin algorithms

Not scheduled
20m
Online lecture

Speaker

Dr Attila Lovas (Alfréd Rényi Institute of Mathematics)

Description

Artificial neural networks (ANNs) are typically highly nonlinear systems that are finely tuned via the optimization of their associated, non-convex loss functions. In most cases, the gradient of such a loss function fails to be dissipative, which makes the use of widely accepted (stochastic) gradient descent methods problematic. We offer a new learning algorithm based on an appropriately constructed variant of the popular stochastic gradient Langevin dynamics (SGLD), called the tamed unadjusted stochastic Langevin algorithm (TUSLA). We also provide a non-asymptotic analysis of the new algorithm's convergence properties in the context of non-convex learning problems with ANNs, yielding finite-time guarantees for TUSLA to find approximate minimizers of both empirical and population risks. The TUSLA algorithm is rooted in the taming technology for diffusion processes with superlinear coefficients developed in Sabanis (2013, 2016) and, for MCMC algorithms, in Brosse et al. (2019). Numerical experiments are presented which confirm the theoretical findings and illustrate the need for the new algorithm, in comparison to vanilla SGLD, within the framework of ANNs.
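As a rough illustration of the taming idea described above, the following sketch implements a single tamed Langevin update: the stochastic gradient is divided by a factor that grows with the parameter norm, so that superlinearly growing gradients cannot cause the iterates to explode, and Gaussian Langevin noise is then added. This is not the authors' implementation; the step size `lam`, inverse temperature `beta`, taming exponent `r`, and the toy double-well loss are illustrative assumptions.

```python
import numpy as np

def tusla_step(theta, grad, lam=1e-2, beta=1e8, r=1, rng=None):
    """One tamed stochastic Langevin update (illustrative sketch).

    The gradient is scaled by 1 / (1 + sqrt(lam) * ||theta||^(2r)),
    which keeps the effective step bounded even when the gradient
    grows superlinearly in theta; Langevin noise of scale
    sqrt(2 * lam / beta) is then added.
    """
    rng = np.random.default_rng() if rng is None else rng
    taming = 1.0 + np.sqrt(lam) * np.linalg.norm(theta) ** (2 * r)
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal(theta.shape)
    return theta - lam * grad / taming + noise

# Toy usage on a non-convex double-well loss f(t) = (t^2 - 1)^2,
# whose gradient 4 t (t^2 - 1) grows superlinearly.
rng = np.random.default_rng(0)
theta = np.array([3.0])
for _ in range(2000):
    grad = 4.0 * theta * (theta ** 2 - 1.0)
    theta = tusla_step(theta, grad, rng=rng)
```

With a large inverse temperature the noise is nearly negligible, and the tamed iterates settle near one of the minimizers at theta = ±1, whereas an untamed step of the same size can overshoot badly from a far-out initial point.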

Title

Taming neural networks with TUSLA

affiliation Alfréd Rényi Institute of Mathematics, The Alan Turing Institute
authors Attila Lovas, Miklós Rásonyi, Iosif Lytras, Sotirios Sabanis

Primary authors

Dr Attila Lovas (Alfréd Rényi Institute of Mathematics)
Dr Miklós Rásonyi (Alfréd Rényi Institute of Mathematics)
Mr Iosif Lytras (The Alan Turing Institute)
Dr Sotirios Sabanis (The Alan Turing Institute)

Presentation materials

There are no materials yet.