gaussian-equiv-2layer
Code and resources for "The Gaussian equivalence of generative models for learning with two-layer neural networks" [https://arxiv.org/abs/2006.14709]
Understanding the impact of data structure on learning in neural networks remains a key challenge for the theory of neural networks. Many theoretical works on neural networks do not explicitly model training data, or assume that inputs are drawn independently from some factorised probability distribution. Here, we go beyond the simple i.i.d. modelling paradigm by studying neural networks trained on data drawn from structured generative models. We make three contributions. First, we establish rigorous conditions under which a class of generative models shares key statistical properties with an appropriately chosen Gaussian feature model. Second, we use this Gaussian equivalence theorem (GET) to derive a closed set of equations describing the dynamics of two-layer neural networks trained with one-pass stochastic gradient descent on data drawn from a large class of generators. Third, we complement our theoretical results with experiments demonstrating how the theory applies to deep, pre-trained generative models.
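The core idea behind the Gaussian equivalence can be illustrated numerically. The sketch below (an illustration, not the paper's code; the one-layer generator, its dimensions, and the `tanh` nonlinearity are assumptions for the example) draws structured inputs from a simple random generator and builds the corresponding Gaussian model with matched first and second moments:

```python
import numpy as np

# Illustrative sketch of the Gaussian equivalence idea (assumed setup):
# a single-layer generator x = tanh(A z) with Gaussian latent z, compared
# against Gaussian features with the same mean and covariance.
rng = np.random.default_rng(0)
d, n, samples = 50, 100, 20_000   # latent dim, input dim, sample count

A = rng.standard_normal((n, d)) / np.sqrt(d)   # random generator weights
z = rng.standard_normal((samples, d))          # i.i.d. Gaussian latents
x = np.tanh(z @ A.T)                           # structured, non-Gaussian inputs

# Gaussian-equivalent features: match the mean and covariance of x.
mu = x.mean(axis=0)
cov = np.cov(x, rowvar=False)
x_gauss = rng.multivariate_normal(mu, cov, size=samples)

# By construction the two datasets share their first two moments; the GET
# states that, in the appropriate limit, a two-layer network trained with
# one-pass SGD behaves the same on x and on x_gauss.
moment_gap = np.max(np.abs(x_gauss.mean(axis=0) - mu))
```

Training the same two-layer network on `x` and on `x_gauss` and comparing learning curves is then a direct empirical check of the equivalence.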