In recent years, the analysis of the complexity of learning Gaussian mixture models from sampled data has received significant attention in the computational machine learning and theory communities. However, all existing results on learning the parameters of mixture distributions require a certain minimum separation between the components (component means). This minimum required separation is typically an increasing function of the dimension n and/or the number of components k. In this talk I will discuss the first result on polynomial learning of high-dimensional Gaussian mixture distributions with arbitrarily small separation between the components. Specifically, I will present an algorithm for learning the parameters of an arbitrary mixture of k identical spherical Gaussians in n-dimensional space, whose complexity is polynomial in the dimension, the component separation, and the other input parameters for any fixed number of components k. The algorithm relies on a lower bound showing that two mixture distributions with similar density functions must have similar parameters. This bound is obtained by a reduction to one dimension and an analysis of the Fourier transform of the mixture near the origin.
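To illustrate the one-dimensional object behind the last step, the sketch below (a minimal illustration, not the talk's algorithm; all numerical values are hypothetical) evaluates the Fourier transform (characteristic function) of a 1-d mixture of identical spherical Gaussians on a small window around the origin, and shows that two mixtures with nearby parameters have nearby transforms there:

```python
import numpy as np

def mixture_cf(t, weights, means, sigma):
    """Characteristic function (Fourier transform) of a 1-d mixture of
    identical Gaussians with common variance sigma^2:
        phi(t) = exp(-sigma^2 t^2 / 2) * sum_j w_j exp(i mu_j t)."""
    t = np.asarray(t, dtype=float)
    phase = np.exp(1j * np.outer(t, means)) @ np.asarray(weights, dtype=float)
    return np.exp(-0.5 * sigma**2 * t**2) * phase

# Two 2-component mixtures with slightly perturbed means (hypothetical values).
t = np.linspace(-1.0, 1.0, 201)  # window of frequencies near the origin
phi_a = mixture_cf(t, [0.5, 0.5], [-0.30, 0.30], sigma=1.0)
phi_b = mixture_cf(t, [0.5, 0.5], [-0.31, 0.29], sigma=1.0)

# Maximum gap between the two transforms on this window: small because the
# parameters are close, consistent with the stability bound described above.
gap = np.max(np.abs(phi_a - phi_b))
```

Here the common factor exp(-sigma^2 t^2 / 2) is bounded away from zero near the origin, which is what makes that frequency window informative about the component means.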