High-dimensional sparse linear regression is a basic problem in machine learning and statistics. Consider a linear model y = X*theta + w, where y is the vector of observations, X is the covariate matrix, and w is an unknown noise vector. In many applications, the linear regression model is high-dimensional in nature, meaning that the number of observations (n) may be substantially smaller than the number of covariates (d). In these cases, it is common to assume that theta is sparse, and the goal of sparse linear regression is to estimate this sparse theta given (X, y). In this paper, we study a variant of the traditional sparse linear regression problem in which each of the n covariate vectors is individually projected by a random linear transformation to a lower-dimensional space. Such transformations are commonly applied in practice to save computational resources such as storage space, transmission bandwidth, and processing time. Our main result shows that, under some mild assumptions on the problem instance, one can estimate theta even with access only to these projected covariate vectors. The main technical ingredient of our result is a bound on the restricted eigenvalue of certain projections of a deterministic matrix satisfying a stable rank condition.

Joint work with Mark Rudelson.
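For reference, here are standard formulations of the two notions named in the last sentence (the stable rank, and one common version of the restricted eigenvalue, following Bickel, Ritov, and Tsybakov); the exact variants used in the paper may differ.

```latex
% Stable rank of a matrix A (always at most rank(A)):
\[
  \operatorname{srank}(A) \;=\; \frac{\|A\|_F^2}{\|A\|_2^2}.
\]
% Restricted eigenvalue of an n-row design X over s-sparse directions:
\[
  \kappa(s,\alpha) \;=\;
  \min_{\substack{S \subseteq \{1,\dots,d\} \\ |S| \le s}}\;
  \min_{\substack{v \ne 0 \\ \|v_{S^c}\|_1 \le \alpha \|v_S\|_1}}
  \frac{\|Xv\|_2}{\sqrt{n}\,\|v_S\|_2}.
\]
```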
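As a concrete illustration of the setting (not the estimator analyzed in the paper), the following minimal sketch simulates the model, projects each covariate vector with its own Gaussian matrix R_i, and runs a plain Lasso on the back-projected covariates R_i^T z_i. The choice of Gaussian projections known to the estimator, the back-projection step, and the Lasso penalty are all assumptions made here for illustration only.

```python
# Minimal simulation sketch: sparse regression from individually
# projected covariates. All modeling choices below (Gaussian R_i,
# back-projection, Lasso with alpha=0.1) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, m, s = 200, 400, 100, 5    # observations, covariates, projected dim, sparsity

theta = np.zeros(d)              # s-sparse ground truth
support = rng.choice(d, size=s, replace=False)
theta[support] = rng.standard_normal(s)

X = rng.standard_normal((n, d))  # covariate matrix
y = X @ theta + 0.1 * rng.standard_normal(n)

# Each covariate vector x_i is individually projected: z_i = R_i x_i in R^m.
# With entries scaled by 1/sqrt(m), E[R_i^T R_i] = I_d, so R_i^T z_i is an
# unbiased proxy for x_i.
X_tilde = np.zeros((n, d))
for i in range(n):
    R_i = rng.standard_normal((m, d)) / np.sqrt(m)
    z_i = R_i @ X[i]             # the only view of x_i the estimator sees
    X_tilde[i] = R_i.T @ z_i

lasso = Lasso(alpha=0.1).fit(X_tilde, y)
rel_err = np.linalg.norm(lasso.coef_ - theta) / np.linalg.norm(theta)
print(f"relative l2 error of recovered theta: {rel_err:.3f}")
```

Since R_i^T R_i only concentrates around the identity for moderately large m, the back-projection adds multiplicative noise to the design; controlling the restricted eigenvalue of such projected designs is what recovery guarantees of the kind stated above must address.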