We show that very large mixtures of Gaussians with a known, identical covariance matrix are efficiently learnable in high dimension. More precisely, a mixture whose number of components is polynomial of any fixed degree in the dimension is polynomially learnable, as long as a certain non-degeneracy condition on the means is satisfied. This condition is generic in the sense of smoothed complexity, as soon as the dimensionality of the space is high enough. Moreover, we prove that no such condition can exist in low dimension. Our main result on mixture recovery transforms the mixture of Gaussians into a projection of a product distribution; this projection can be learned efficiently using recent results on tensor decompositions, which yields an efficient algorithm for learning the mixture.