The L1-regularized Gaussian maximum likelihood estimator has strong statistical guarantees for recovering a sparse inverse covariance matrix. However, it requires solving a difficult non-smooth log-determinant program whose number of parameters scales quadratically with the number of variables. State-of-the-art methods therefore do not scale to problems with more than 20,000 variables. In this talk, I will describe a quadratic approximation method that can solve one-million-dimensional L1-regularized log-determinant problems on a single computer. To do so, we employ (i) a second-order, Newton-like method; (ii) a division of the variables into free and fixed sets; (iii) block coordinate descent; and (iv) a memory-efficient scheme that approximates key quantities within the algorithm.
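As a minimal sketch of the free/fixed-set idea, assuming the standard L1-regularized log-determinant objective f(X) = -log det(X) + tr(SX) + lam * ||X||_1: a zero entry X[i, j] can be fixed (skipped by the inner coordinate-descent sweep) when the smooth-part gradient satisfies |G[i, j]| <= lam, which is the subgradient optimality condition for the L1 term at zero. The function name and data below are illustrative, not part of the talk.

```python
import numpy as np

def split_free_fixed(X, S, lam):
    """Partition the entries of X into free and fixed sets.

    G = S - inv(X) is the gradient of the smooth part of
    f(X) = -log det(X) + tr(S @ X) + lam * ||X||_1.
    An entry is fixed at zero when X[i, j] == 0 and |G[i, j]| <= lam;
    only the free entries need to be updated in the inner sweep.
    """
    G = S - np.linalg.inv(X)                # gradient of the smooth part
    fixed = (X == 0) & (np.abs(G) <= lam)   # provably zero at this iterate
    free = ~fixed
    return free, fixed

# Tiny illustration on a 3-variable problem with synthetic data:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
S = np.cov(A, rowvar=False)                 # empirical covariance
X = np.eye(3)                               # a typical initial iterate
free, fixed = split_free_fixed(X, S, lam=0.5)
```

The point of the split is that for a strongly regularized problem most entries land in the fixed set, so each Newton-like inner step operates on a much smaller set of coordinates.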