Principal components analysis (PCA) is an important technique for reducing the dimensionality of data, and is often used as a preprocessing step in machine learning algorithms. We propose an algorithm that produces an approximation to the top-k PCA subspace of a collection of data points while guaranteeing differential privacy. For the case k = 1, we provide theoretical upper and lower bounds on the number of points needed to guarantee a given privacy level and approximation error. We show that our algorithm is nearly optimal in terms of scaling with the data dimension. We demonstrate our method on a number of data sets and indicate several promising future directions. This is joint work with Kamalika Chaudhuri and Kaushik Sinha.
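
To make the problem setting concrete, the sketch below computes a differentially private approximation to the top PCA direction (the k = 1 case) by input perturbation: adding symmetric Gaussian noise to the empirical second-moment matrix before taking its leading eigenvector. This is a simple baseline for intuition only, not the algorithm the talk describes; the function name, the (epsilon, delta) privacy parameters, and the unit-norm assumption on the data rows are illustrative assumptions.

    import numpy as np

    def private_top_pca_direction(X, epsilon, delta, rng=None):
        """Illustrative (epsilon, delta)-DP top-1 PCA via input perturbation.

        This is a simple Gaussian-mechanism baseline, not the talk's
        algorithm. Assumes each row of X has L2 norm at most 1.
        """
        if rng is None:
            rng = np.random.default_rng()
        n, d = X.shape
        A = (X.T @ X) / n
        # Replacing one unit-norm row changes A by at most sqrt(2)/n
        # in Frobenius norm, which bounds the L2 sensitivity.
        sensitivity = np.sqrt(2.0) / n
        # Standard Gaussian-mechanism calibration for (epsilon, delta)-DP.
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        # Draw the upper triangle i.i.d. and mirror it so the noise is
        # symmetric and the perturbed matrix has real eigenvectors.
        G = rng.normal(scale=sigma, size=(d, d))
        E = np.triu(G) + np.triu(G, 1).T
        # The leading eigenvector of the noisy second-moment matrix
        # approximates the top PCA direction.
        _, eigvecs = np.linalg.eigh(A + E)
        return eigvecs[:, -1]

With more data (larger n) the sensitivity, and hence the noise scale, shrinks, which is exactly the trade-off the sample-complexity bounds in the abstract quantify: how many points are needed to achieve a given privacy level and approximation error.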