Randomized algorithms for matrices exploit novel random sampling and projection methods, and implementations of these algorithms have already proven superior to state-of-the-art algorithms, such as those in LAPACK, for problems stored in RAM on a single machine. Here, we describe the extension of these methods to computing solutions in the parallel and distributed environments that are more common in very large-scale data applications. Our iterative least-squares solver, LSRN, scales well on clusters with high communication costs, e.g., an Amazon EC2 cluster. Our iterative least-absolute-deviations solver is based on fast ellipsoidal rounding, random sampling, and interior-point cutting-plane methods, and it shows improvements over traditional algorithms on MapReduce.
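The preconditioned least-squares approach can be illustrated with a minimal single-machine sketch (not the paper's parallel implementation): apply a Gaussian sketch to a tall matrix A, use the SVD of the small sketched matrix to build a right preconditioner, and then run LSQR on the well-conditioned system. The function name `lsrn_solve` and the parameter choices here are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsrn_solve(A, b, oversampling=2.0, seed=0):
    """Solve min ||A x - b||_2 for strongly over-determined A (m >> n)
    via a Gaussian sketch, an SVD-based preconditioner, and LSQR.
    A simplified sequential sketch of the LSRN idea; parameter names
    and defaults are illustrative."""
    m, n = A.shape
    s = int(np.ceil(oversampling * n))            # sketch size, s >= n
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((s, m)) / np.sqrt(s)  # Gaussian sketching matrix
    # SVD of the small s-by-n matrix G A yields the right preconditioner
    # N = V Sigma^{-1}, so that A N is well conditioned with high probability.
    _, sigma, Vt = np.linalg.svd(G @ A, full_matrices=False)
    N = Vt.T / sigma
    # Solve the preconditioned problem min ||(A N) y - b||, then map back.
    y = lsqr(A @ N, b, atol=1e-12, btol=1e-12)[0]
    return N @ y

# Usage: a tall least-squares problem with badly scaled columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 20)) * np.logspace(0, 6, 20)
x_true = rng.standard_normal(20)
b = A @ x_true
x = lsrn_solve(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small relative error
```

Because the preconditioned matrix has a small condition number regardless of the conditioning of A, the LSQR iteration count is essentially problem-independent, which is what makes the method attractive when each iteration incurs high communication cost on a cluster.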