A central challenge in modern machine learning is to develop techniques that perform well in high-dimensional settings. Learning algorithms are frequently run on sensitive data, and the results of such analyses can leak private information. In this talk, we consider the problem of designing differentially private mechanisms for convex empirical risk minimization in high dimensions. We will show how ideas from random projection and high-dimensional estimation can be combined to obtain better (in many cases, dimension-independent) excess risk bounds for empirical risk minimization than were previously known. Using similar ideas, we will also present private mechanisms that perform well when the data arrives as a stream.
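To make the combination of random projection and private ERM concrete, here is a minimal illustrative sketch (not the mechanism from the talk): the data is first reduced to k dimensions with a Gaussian Johnson-Lindenstrauss projection, regularized logistic regression is fit in the projected space, and the solution is released via standard output perturbation with the Gaussian mechanism. The function name, parameters, and the sensitivity bound 2/(n*lambda) for strongly convex ERM with a 1-Lipschitz loss are all assumptions made for exposition.

```python
import numpy as np

def dp_erm_with_projection(X, y, k=20, lam=0.1, eps=1.0, delta=1e-5,
                           steps=500, lr=0.1, seed=0):
    """Illustrative sketch: random projection + output-perturbation DP ERM.

    Projects the n x d data X to k dimensions with a Gaussian JL sketch,
    fits L2-regularized logistic regression by gradient descent, then adds
    Gaussian noise to the solution. All names and parameters here are
    hypothetical; this is not the mechanism presented in the talk.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Normalize rows so each example has norm <= 1; the sensitivity
    # bound used below assumes this.
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
    # Data-independent Johnson-Lindenstrauss projection to k dimensions.
    P = rng.normal(size=(d, k)) / np.sqrt(k)
    Z = X @ P  # n x k projected data
    # Gradient descent on mean logistic loss + (lam/2)||w||^2.
    w = np.zeros(k)
    for _ in range(steps):
        margins = y * (Z @ w)
        grad = -(Z * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    # Output perturbation: for a 1-Lipschitz loss and unit-norm examples,
    # the L2-sensitivity of the regularized minimizer is at most 2/(n*lam)
    # (a standard bound for strongly convex ERM). The Gaussian mechanism
    # with the noise scale below gives (eps, delta)-differential privacy.
    sensitivity = 2.0 / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    w_priv = w + rng.normal(scale=sigma, size=k)
    return w_priv, P
```

Since the projection matrix P is drawn independently of the data, it can be released alongside the noisy solution without affecting the privacy guarantee.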