Suppose you want to recover an unknown signal of length n from a system of m linearly independent equations. In the absence of any prior information about the signal, linear algebra says that your remaining uncertainty is characterized by an (n-m)-dimensional linear subspace. In this talk, we show that if these equations are generated randomly, then it is, in fact, possible to say quite a bit more. For example, the signal can be recovered with a mean squared error proportional to the quantization error of the optimal scalar quantizer using only (m-3) values. We also discuss connections with the problem of recovering a signal corrupted by additive Gaussian noise.
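
As a minimal illustration of the classical linear-algebra picture described above (not of the talk's random-measurement result), the following NumPy sketch shows that m random measurements of a length-n signal pin it down only up to an (n-m)-dimensional null space; the variable names (`A`, `x_hat`, etc.) are mine, chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 12                      # signal length and number of equations
x = rng.standard_normal(n)         # unknown signal (known only to the simulation)
A = rng.standard_normal((m, n))    # random measurement matrix; full row rank a.s.
y = A @ x                          # the m observed linear measurements

# Minimum-norm solution consistent with the measurements.
x_hat = np.linalg.pinv(A) @ y

# The recovery error x - x_hat lies in the null space of A, which has
# dimension n - m: that component of the signal is invisible to the equations.
null_dim = n - np.linalg.matrix_rank(A)
print(null_dim)                          # n - m = 8
print(np.allclose(A @ (x - x_hat), 0))   # error is in the null space: True
```

Any other vector x_hat + v with v in the null space of A fits the measurements equally well, which is exactly the (n-m)-dimensional residual uncertainty the abstract refers to.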