A density theorem for Feed-forward Neural Networks

Speaker: Luca Benatti
Institution: SISSA
Schedule: Wednesday, May 30, 2018 - 16:00
Location: A-136
Abstract: 

Machine Learning and Neural Networks have seen considerable development in recent years. In 1989, K. Hornik, M. Stinchcombe and H. White published a paper entitled “Multilayer Feed-forward Networks are Universal Approximators”, in which they proved a density theorem for feedforward neural networks. They showed that the space of functions of the form $f(x) = \sum_{i=1}^q \beta_i G(A_i(x))$, where the $A_i$ are affine maps and $G : \mathbb{R} \to \mathbb{R}$ is a suitable activation function, is uniformly dense on compacta in the space of continuous functions and dense in measure in the space of measurable functions. In this talk I would like to retrace their work. I will give some basic concepts about Machine Learning and Neural Networks, and I will walk through all the proofs the authors presented to obtain this density result. In the final part I will show some consequences of this theorem.
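To make the approximating class concrete, the following is a minimal numerical sketch (not part of the talk) of the single-hidden-layer form $f(x) = \sum_{i=1}^q \beta_i G(A_i(x))$, with affine maps $A_i(x) = w_i \cdot x + b_i$ and a sigmoid squashing function $G$. The choice $q = 20$, the random hidden weights, the least-squares fit of the output weights, and the target function $\cos$ are illustrative assumptions only.

```python
import numpy as np

# Sigmoid squashing function G : R -> R (one admissible choice of activation).
def G(z):
    return 1.0 / (1.0 + np.exp(-z))

def network(x, w, b, beta):
    """Single-hidden-layer network f(x) = sum_i beta_i * G(w_i . x + b_i).

    x    : array of shape (n_samples, d)
    w    : array of shape (q, d)   -- linear parts of the affine maps A_i
    b    : array of shape (q,)     -- offsets of the affine maps A_i
    beta : array of shape (q,)     -- output weights
    """
    return G(x @ w.T + b) @ beta

# Illustrative setup (assumed): random hidden units, target cos on [-3, 3],
# output weights beta_i chosen by least squares.
rng = np.random.default_rng(0)
q, d = 20, 1
w = rng.normal(size=(q, d))
b = rng.normal(size=q)

x = np.linspace(-3, 3, 200).reshape(-1, 1)
target = np.cos(x).ravel()

H = G(x @ w.T + b)                                  # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, target, rcond=None)   # fit beta by least squares

approx = network(x, w, b, beta)
print("max |f - cos| on the grid:", np.max(np.abs(approx - target)))
```

This only illustrates the representable class; the theorem itself concerns density of such sums as $q$ grows, not any particular fitting procedure.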
