Generative Adversarial Networks (GANs) have become a popular method for learning a probability model from data. In this talk, I will aim to provide an understanding of some of the basic issues surrounding GANs. First, we propose a natural way of specifying the loss function for GANs by drawing a connection with supervised learning. Second, we shed light on the generalization performance of GANs and on the convergence of alternating gradient descent for their training through the analysis of an LQG setting: the generator is Linear, the loss function is Quadratic, and the data is drawn from a Gaussian distribution.
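
To make the LQG setting concrete, here is a minimal sketch in Python/NumPy; it is an illustration under stated assumptions, not the exact formulation from the talk. The data is Gaussian with an assumed covariance Sigma, the generator is linear, G(z) = A z, and the discriminator is taken to be quadratic, D(x) = x^T B x, so that the minimax objective compares second moments; the two players are updated by alternating gradient steps. All dimensions, learning rates, and initializations below are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
d = 3                                      # data/latent dimension (illustrative)
Sigma = np.diag([4.0, 2.0, 1.0])           # assumed Gaussian data covariance
L = np.linalg.cholesky(Sigma)

A = np.eye(d)                              # linear generator: G(z) = A z
B = np.zeros((d, d))                       # quadratic discriminator: D(x) = x^T B x
lr_g, lr_d, n = 0.01, 0.01, 512

for step in range(2001):
    x = rng.standard_normal((n, d)) @ L.T  # real samples ~ N(0, Sigma)
    z = rng.standard_normal((n, d))        # latent samples ~ N(0, I)
    g = z @ A.T                            # generated samples G(z) = A z

    # Discriminator: ascent on E[D(x)] - E[D(G(z))]; for quadratic D the
    # gradient in B is the gap between empirical second-moment matrices.
    B += lr_d * ((x.T @ x) - (g.T @ g)) / n

    # Generator: descent on -E[D(G(z))]; since E[z z^T] = I, the gradient
    # of E[(A z)^T B (A z)] with respect to A is (B + B^T) A.
    A += lr_g * (B + B.T) @ A

    if step % 500 == 0:                    # track how far A A^T is from Sigma
        gap = np.linalg.norm(A @ A.T - Sigma)
        print(f"step {step:4d}  ||A A^T - Sigma||_F = {gap:.3f}")

Note the fixed point: the discriminator's gradient vanishes exactly when the generated covariance A A^T matches Sigma, yet the printed gap typically oscillates rather than shrinking monotonically; whether and when such alternating updates converge is precisely the kind of question the LQG analysis is meant to answer.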