Projected gradient descent (PGD) is one of the main algorithms for constrained optimization. In many cases, the projection step is the computational bottleneck of the overall algorithm, often rendering PGD infeasible for large-scale problems. This is no coincidence: for common constraint sets such as those arising in group sparsity or tree sparsity, computing an exact projection is subject to known computational hardness results. We introduce a framework for using approximate projections in PGD that overcomes this barrier. Our framework yields faster algorithms for a wide variety of structured estimation problems, such as structured sparse regression and low-rank matrix recovery. Although it relies only on approximate projections, it achieves the same statistical guarantees as exact projections or convex relaxations.
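To make the setting concrete, the following is a minimal sketch of PGD for group-sparse linear regression, with a simple greedy group-selection step standing in for the projection. This is an illustrative assumption, not the paper's actual framework: for disjoint groups the greedy selection is in fact the exact projection, while for overlapping groups it is only an approximation. The function names `approx_group_projection` and `pgd` are hypothetical.

```python
import numpy as np

def approx_group_projection(x, groups, k):
    """Keep the k groups of coordinates with largest l2 norm, zero the rest.

    A simple stand-in for the projection onto k-group-sparse vectors:
    exact when the groups are disjoint, greedy/approximate otherwise.
    """
    norms = [np.linalg.norm(x[g]) for g in groups]
    keep = np.argsort(norms)[-k:]
    out = np.zeros_like(x)
    for i in keep:
        out[groups[i]] = x[groups[i]]
    return out

def pgd(A, y, groups, k, step=None, iters=100):
    """PGD for min ||Ax - y||^2 subject to x being k-group-sparse."""
    n = A.shape[1]
    if step is None:
        # Conservative step size: inverse squared spectral norm of A.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)          # gradient of the least-squares loss
        x = approx_group_projection(x - step * grad, groups, k)
    return x
```

In a large-scale setting, the point of the framework is that the projection call above may be replaced by a cheap approximate routine without losing the statistical guarantees of the exact version.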