Compressed sensing (CS) is a new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (e.g., dot product) measurements. In this paper, we analyze CS using tools from coding theory because CS can also be viewed as syndrome-based source coding of sparse vectors using linear codes over the real numbers. While coding theory does not typically deal with codes over the real numbers, many results can be modified to apply in this case. In particular, there is a very close relationship between CS and error-correcting codes over large discrete alphabets. Exploiting recent advances in capacity-approaching codes, we propose new approaches for CS of exactly sparse vectors. These methods are based on irregular low-density parity-check (LDPC) codes and achieve provable oversampling thresholds with reconstruction complexity that is linear in the signal dimension $n$. Reconstruction succeeds with high probability (as $n \rightarrow \infty$) under a random signal and measurement model. We also discuss extensions to deterministic signal models and "approximately sparse" signals.
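As a minimal sketch of the measurement model and its syndrome-coding interpretation described above (the symbols $y$, $\Phi$, $m$, and $k$ are our notation and are not fixed by this abstract), the CS recovery problem can be written as
\[
y = \Phi x, \qquad \Phi \in \mathbb{R}^{m \times n}, \quad x \in \mathbb{R}^{n}, \quad \|x\|_0 \le k, \quad m \ll n,
\]
where the measurement vector $y$ plays the role of a syndrome: if $\Phi$ is viewed as the parity-check matrix of a linear code over the real numbers, then recovering the $k$-sparse vector $x$ from $y$ is analogous to recovering a sparse error pattern from its syndrome in syndrome decoding.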