In many situations, given a sequence of $n$ random variables (or sensors), the covariance matrix is not known and has to be estimated. If we have $m$ observations with $m \gg n$, then the sample covariance matrix is a good approximation of the true covariance. More specifically, for a fixed number of variables the sample covariance matrix converges to the true covariance matrix as $m \to \infty$. However, in applications such as weather forecasting, wireless communications (MIMO channels with a large number of antennas), linear estimation, and military applications, the number of observations is limited and one usually has $m \leq n$, in which case the sample covariance matrix is singular. The traditional remedy is diagonal loading: adding a small multiple of the identity matrix to make the covariance estimate invertible. In this talk we will discuss an alternative approach, in which we reduce the dimension of the data through an ensemble of isotropically random (Haar measure) unitary matrices. For every member of the unitary ensemble, the shortened data vectors yield a statistically meaningful, invertible covariance estimate, from which we compute an estimate of the ultimately desired quantity. Finally, we take the expectation of this estimate with respect to the unitary ensemble. Preliminary numerical results indicate considerable promise for this approach. This is based on a project carried out at Bell Labs with Steven H. Simon and Gabriel H. Tucci, where we used techniques from random matrices, free probability, and representation theory.
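To make the two ideas concrete, here is a minimal numerical sketch, not the method of the talk itself: it contrasts diagonal loading with an ensemble-averaged estimate built from Haar-random unitaries. The choice of the inverse covariance as the "desired quantity", the truncation to the first $k$ rows, and all function names and parameters are illustrative assumptions on my part.

```python
import numpy as np

def haar_unitary(n, rng):
    # QR decomposition of a complex Gaussian matrix, with a phase
    # correction on the diagonal of R, yields a Haar-distributed unitary.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def diagonal_loading(X, eps):
    # Traditional remedy: add a small multiple of the identity to the
    # (possibly singular) n x n sample covariance of the n x m data X.
    n, m = X.shape
    S = X @ X.conj().T / m
    return S + eps * np.eye(n)

def ensemble_inverse_estimate(X, k, trials, rng):
    # Illustrative ensemble approach (assumed details): for each Haar
    # unitary U, keep the first k rows of U X ("shortened" data vectors,
    # with k < m so the k x k sample covariance is invertible), compute
    # the inverse of that small covariance, map it back to n x n via U,
    # and average the result over the unitary ensemble.
    n, m = X.shape
    acc = np.zeros((n, n), dtype=complex)
    for _ in range(trials):
        U = haar_unitary(n, rng)
        Y = (U @ X)[:k, :]                  # k x m shortened data
        S = Y @ Y.conj().T / m              # invertible k x k covariance
        acc += U.conj().T[:, :k] @ np.linalg.inv(S) @ U[:k, :]
    return acc / trials
```

For example, with $n = 8$ variables and only $m = 5$ observations, `diagonal_loading(X, 1e-3)` is invertible even though the raw sample covariance is rank-deficient, and `ensemble_inverse_estimate(X, 3, 200, rng)` returns a Hermitian $n \times n$ matrix averaged over 200 Haar unitaries.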