Neural Networks

External Lecturers: 
Sebastian Goldt
Alessandro Treves
Antonio Celani
Course Type: 
PhD Course
Academic Year: 
2021-2022
Period: 
Second term
Duration: 
36 h
Description: 

The goals of this course are twofold: to introduce various approaches to learning with neural networks, and to develop a scientific understanding of the power and limitations of these approaches. We cover supervised learning and generative modelling with feed-forward and recurrent architectures. On the theoretical side, we discuss the key questions surrounding neural networks (approximation, optimisation, generalisation, and representation learning) and review current approaches to tackling them. The accompanying labs provide hands-on experience with the application of neural networks.

Syllabus:

1. Introduction: from a single neuron to the transformer. Surprises with neural nets in high dimensions.

2. Optimisation dynamics in linear regression. Backpropagation; local and global optima in neural networks.

3. From optimisation to learning in linear regression. Random matrix theory and double descent.

4. Convolutional networks for computer vision: historical motivation, modern architectures.

5. Theory of non-linear networks I: committees, online learning. Simplicity bias in neural networks.

6. Theory of non-linear networks II: mean-field limit, NTK, and implicit bias.

7. Recurrent neural networks (with Alessandro Treves).

8. Analysing recurrent neural networks.

9. Unsupervised learning: variational auto-encoders, GANs and normalising flows.

10. Introduction to reinforcement learning (with Antonio Celani).

11. NLP: emergence of meaning in word embeddings, self-attention and transformers.

12. Neural Networks for Science Labs:

i) Implementing stochastic gradient descent: from linear regression to autograd (see the sketch after this list)

ii) Computer vision

iii) Natural language processing with recurrent neural networks

iv) Reinforcement learning
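
As a taste of lab (i), here is a minimal sketch of its first exercise: stochastic gradient descent for linear regression, with the gradient computed by autograd. It assumes the labs use PyTorch; the data, variable names, and hyperparameters below are illustrative choices of ours, not the official lab material.

    # Sketch of lab (i): SGD for linear regression via PyTorch autograd.
    import torch

    torch.manual_seed(0)

    # Synthetic data: y = X @ w_true + noise.
    n, d = 200, 5
    X = torch.randn(n, d)
    w_true = torch.randn(d)
    y = X @ w_true + 0.1 * torch.randn(n)

    # Learnable weights; requires_grad makes autograd track them.
    w = torch.zeros(d, requires_grad=True)
    lr, batch_size = 0.1, 20

    for step in range(500):
        idx = torch.randint(0, n, (batch_size,))    # sample a mini-batch
        loss = ((X[idx] @ w - y[idx]) ** 2).mean()  # mean squared error
        loss.backward()                             # autograd: d(loss)/d(w)
        with torch.no_grad():
            w -= lr * w.grad                        # SGD update
            w.grad.zero_()                          # clear accumulated gradient

    print("distance to true weights:", torch.norm(w - w_true).item())

The same training loop carries over unchanged to deeper models: only the set of parameters being updated and the function mapping inputs to predictions change.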
