Generative Models and Adversarial Training of Networks - BSQ Research

The idea of generative models and their adversarial training has attracted a great deal of interest in the Machine Learning and Artificial Intelligence community. It was first proposed by Ian Goodfellow and his colleagues at the University of Montreal in 2014. For a thorough introduction to these ideas, see Goodfellow’s tutorial at the Thirtieth Annual Conference on Neural Information Processing Systems (NIPS 2016). For a less technical introduction, see this post, which also includes code in Google’s TensorFlow. Research on Generative Adversarial Networks has become one of the hottest topics in Machine Learning in the past year. As leading researcher Yann LeCun has put it:

There are many interesting recent development in deep learning…The most important one, in my opinion, is adversarial training (also called GAN for Generative Adversarial Networks). This, and the variations that are now being proposed is the most interesting idea in the last 10 years in ML…

A Generative Model is a high-dimensional probability distribution whose parameters are learned from training data. The distribution may describe an image, a text segment, a video, a time series, or some other complex object. Once the distribution has been learned, the model can be used to “generate” new instances of the object by drawing samples from it. Thus, a generative model of an image class can be used to generate synthetic images of the same class, say faces or scenes.
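The learn-then-sample idea can be sketched in a few lines. As a stand-in for the high-dimensional distributions the text describes, this toy example (the data source and all parameter values are illustrative assumptions, not from the original post) fits a multivariate Gaussian to 2-D points and then draws fresh “synthetic” samples from the learned distribution:

```python
import numpy as np

# Toy "training data": 500 two-dimensional points from an unknown source.
rng = np.random.default_rng(0)
data = rng.normal(loc=[2.0, -1.0], scale=[0.5, 1.5], size=(500, 2))

# Learn the parameters of a simple generative model -- here a multivariate
# Gaussian, fit by estimating the sample mean and covariance.
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

# "Generate" new instances by drawing samples from the learned distribution.
synthetic = rng.multivariate_normal(mu, cov, size=10)
print(synthetic.shape)  # (10, 2)
```

Real generative models for images or text replace the Gaussian with a deep network, but the two steps are the same: estimate a distribution from data, then sample from it.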

A Generative Adversarial Network (GAN) combines a generative model with a game-like training process to improve the quality of learning. A GAN consists of two networks, a generator and a discriminator. The discriminator is a deep neural network trained to distinguish natural objects from synthetic ones. The generator is a generative model that synthesizes artificial objects of the same class as the natural objects, starting from random noise as input. Training is an adversarial game in which the generator attempts to defeat the discriminator’s efforts to distinguish natural from synthetic objects.
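A minimal sketch of this adversarial game, under simplifying assumptions of my own (1-D data, a linear generator, and a logistic-regression discriminator in place of deep networks, with gradients written out by hand), alternates the two updates the paragraph describes: the discriminator learns to tell real samples from fakes, and the generator adjusts its output to fool the discriminator:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Discriminator D(x) = sigmoid(w*x + c): probability that x is "natural".
w, c = 0.1, 0.0
# Generator G(z) = a*z + b: maps noise z ~ N(0, 1) to a synthetic sample.
a, b = 1.0, 0.0
lr, batch = 0.03, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)   # "natural" objects: samples of N(4, 1)
    fake = a * rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: minimize -log D(fake), i.e. try to fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1.0 - d_fake) * w       # gradient w.r.t. each fake sample
    a -= lr * np.mean(grad_out * (fake - b) / max(a, 1e-8))
    b -= lr * np.mean(grad_out)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(samples.mean())  # should drift toward the real mean of 4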

Much of the interest in adversarial networks stems from the possibility that they offer a path to unsupervised learning, as explained by Soumith Chintala and Yann LeCun of Facebook in this post. They explain that generative models, and GANs in particular, may provide a way to build internal models of the environment, a capability that is essential for developing higher-order functions that go beyond text and image recognition.

We can imagine a not-too-distant future where a complete artificial intelligence system is capable of not only text and image recognition but also higher-order functions like reasoning, prediction, and planning, rivaling the way humans think and behave. For machines to have this type of common sense, they need an internal model of how the world works, which requires the ability to predict. What we’re missing is the ability for a machine to build such a model itself without requiring a huge amount of effort by humans to train it.

… Adversarial networks have recently surfaced as a new way to train machines to predict what’s going to happen in the world simply by observing it.

The GAN framework was originally proposed for image analysis tasks, but its use has since expanded to many other areas, such as natural language processing, voice and music synthesis, cryptography, and others. Leading technology companies, including Google, Facebook, and Apple, as well as new startups, are working on this topic, which is turning out to be one of the frontiers of Machine Learning and Artificial Intelligence research.
