In high-dimensional data analysis, sparse PCA offers simultaneous dimension reduction and variable selection. This talk will present some recent developments in convex relaxations of sparse PCA, with a special focus on subspace estimation. The results include a new method based on the convex hull of rank-$k$ projection matrices (the Fantope) that can be solved efficiently by the alternating direction method of multipliers (ADMM), together with statistical properties of this new method such as subspace convergence and variable selection consistency. The results hold for general covariance models, do not require rank assumptions, and can be applied to a wide array of settings beyond PCA. In the special case $k = 1$, the results imply near-optimality of an earlier method due to d'Aspremont et al. (2007).
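For concreteness, here is a hedged sketch of the kind of convex program such a relaxation leads to; the symbols $S$ (an input matrix, e.g. a sample covariance) and $\lambda$ (a sparsity tuning parameter) are illustrative notation, not taken from the abstract:
$$
\widehat{X} \in \operatorname*{arg\,max}_{X \in \mathcal{F}^k} \; \langle S, X \rangle \;-\; \lambda \sum_{i,j} |X_{ij}|,
\qquad
\mathcal{F}^k := \bigl\{ X : 0 \preceq X \preceq I,\; \operatorname{tr}(X) = k \bigr\},
$$
where $\mathcal{F}^k$ is the Fantope, i.e. the convex hull of the rank-$k$ projection matrices, and the elementwise $\ell_1$ penalty encourages sparsity. One natural ADMM scheme for a problem of this form alternates a Euclidean projection onto $\mathcal{F}^k$ (computable from an eigendecomposition by thresholding eigenvalues into $[0,1]$ so they sum to $k$) with the elementwise soft-thresholding step induced by the $\ell_1$ penalty.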