Suppose that we have $p$ noisy sensors that collect $n$ samples of data, and we would like to learn a model for the sensor data. If $n$ is fixed, is there an advantage to adding more (non-i.i.d.) noisy sensors and thereby increasing the dimensionality $p$? The curse of dimensionality usually prevents substantial gains from this strategy. If the underlying system is described by a fixed number of latent factors, then the answer is less clear. Recent results show that a curse of dimensionality still applies in general, with a sample complexity for model structure recovery that grows like $\log p$. However, under the additional restriction of non-overlapping latent factors, we present a lower bound on sample complexity that decreases with dimensionality. Furthermore, we devise a method that exhibits this blessing of dimensionality for high-dimensional structure recovery using few samples. We conclude by discussing possible applications and generalizations of these results.
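To make the intuition behind a blessing of dimensionality concrete, the following is a minimal simulation sketch. It assumes a toy model that is not the setup analyzed in this work: a single latent factor, unit loadings, and independent Gaussian sensor noise. With the number of samples $n$ held fixed, increasing the number of sensors $p$ improves how well the leading principal component recovers the latent signal.

```python
import numpy as np

# Toy illustration only: one latent factor z drives p noisy sensors.
# With n samples fixed, increasing p improves recovery of the latent
# signal direction. The model choices below (single factor, unit
# loadings, i.i.d. Gaussian noise) are illustrative assumptions.

rng = np.random.default_rng(0)
n = 50        # fixed number of samples
sigma = 1.0   # sensor noise level

for p in [10, 100, 1000]:
    z = rng.standard_normal(n)                  # latent factor, n samples
    noise = sigma * rng.standard_normal((n, p))
    X = np.outer(z, np.ones(p)) + noise         # n x p sensor data matrix

    # Estimate the latent signal via the top principal component of X.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    z_hat = X @ vt[0]                           # projection onto leading direction
    corr = abs(np.corrcoef(z, z_hat)[0, 1])     # alignment with the true factor
    print(f"p = {p:5d}   |corr(z, z_hat)| = {corr:.3f}")
```

In this toy setting the correlation between the estimated and true factor increases with $p$ even though $n$ never changes, since the extra sensors average out the noise; the abstract's claims concern the harder problem of recovering model structure under non-i.i.d. noise and non-overlapping latent factors.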