I will present two state-of-the-art algorithms that achieve significantly faster convergence than previous approaches for solving L1 regularization problems under weak forms of strong convexity. In the first part, I show that for L1-regularized least-squares problems in compressed sensing, a proximal gradient homotopy method achieves global geometric convergence. In the second part, I show that for general L1-regularized smooth loss minimization problems in machine learning, a proximal stochastic dual coordinate ascent method achieves fast global geometric convergence, provided a weak L2 regularizer whose strength is inversely proportional to the training sample size is included. The relationship between the two results will be discussed.

Collaborators: Lin Xiao and Shai Shalev-Shwartz
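
For context, below is a minimal sketch of the idea behind a proximal-gradient homotopy scheme for the L1-regularized least-squares (lasso) objective 0.5*||Ax - b||^2 + lam*||x||_1: the regularization parameter is decreased geometrically from a large starting value, warm-starting each stage with the previous solution. It is only an illustration of the setting, not the exact method or stopping rules of the talk; the function names, shrink factor, and iteration counts are illustrative choices.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_grad(A, b, lam, x0, step, n_iters=50):
    # Plain proximal gradient (ISTA) steps on the lasso objective.
    x = x0.copy()
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

def prox_grad_homotopy(A, b, lam_tgt, shrink=0.7, inner_iters=50):
    # Homotopy: start from a lam large enough that x = 0 is optimal,
    # shrink it geometrically toward lam_tgt, warm-starting each stage.
    lam = np.max(np.abs(A.T @ b))
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    while lam > lam_tgt:
        lam = max(shrink * lam, lam_tgt)
        x = prox_grad(A, b, lam, x, step, inner_iters)
    return x

# Toy usage on a random sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = prox_grad_homotopy(A, b, lam_tgt=0.1)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))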