An algorithm runs in input-sparsity time if its running time is proportional to the number of nonzero entries of the input, for worst-case input. Somewhat remarkably, recent work in Randomized Numerical Linear Algebra has shown that problems such as least-squares and low-rank approximation can be solved to epsilon-accuracy in input-sparsity time. I will describe several related results on constructing low-distortion embeddings for Lp spaces in input-sparsity time, as well as the application of these results to solving problems such as least-squares, least absolute deviations, and quantile regression. Of particular importance in making these theoretical ideas practical is understanding the tradeoff between the time required to construct the embedding and the distortion quality of the embedding it produces.
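To make the sketch-and-solve idea behind such results concrete, the following is a minimal Python sketch of one standard input-sparsity-time construction, a CountSketch-style sparse subspace embedding applied to overdetermined least-squares. The function name, dimensions, and sketch size below are illustrative assumptions, not details from the talk, and the Lp-specific and quantile-regression constructions are not shown.

```python
import numpy as np

def countsketch_embed(A, b, sketch_rows, seed=None):
    """Apply a CountSketch-style sparse embedding S to [A | b].

    Each input row is hashed to one of `sketch_rows` output rows and
    multiplied by a random sign, so the work is proportional to the
    number of nonzero entries of the input (input-sparsity time).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    rows = rng.integers(0, sketch_rows, size=n)   # hash bucket per input row
    signs = rng.choice([-1.0, 1.0], size=n)       # random sign per input row
    SA = np.zeros((sketch_rows, A.shape[1]))
    Sb = np.zeros(sketch_rows)
    # Accumulate signed rows into their hash buckets (unbuffered adds).
    np.add.at(SA, rows, signs[:, None] * A)
    np.add.at(Sb, rows, signs * b)
    return SA, Sb

# Sketch-and-solve: solve the much smaller sketched least-squares problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50_000, 20))
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(50_000)
SA, Sb = countsketch_embed(A, b, sketch_rows=2_000, seed=1)
x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact))  # small when the embedding has low distortion
```

The example uses a dense array for simplicity; the same hashing scheme applied to a sparse matrix touches each stored nonzero exactly once, which is where the input-sparsity running time comes from, and the choice of `sketch_rows` is one instance of the tradeoff between construction cost and distortion mentioned above.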