We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an $\mathcal{O}(d/T)$ convergence rate for strongly convex objectives in $d$ dimensions, and an $\mathcal{O}(\sqrt{(s \log d)/T})$ convergence rate when the optimum is $s$-sparse. Our algorithm is based on successively solving a series of $\ell_1$-regularized optimization problems using Nesterov's dual averaging algorithm. We establish that the error of our solution after $T$ iterations is at most $\mathcal{O}((s \log d)/T)$. Our results apply to locally Lipschitz losses.
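As a sketch of the inner building block, one epoch of $\ell_1$-regularized dual averaging takes the following standard form (the notation here is illustrative: $g_\tau$ denotes a stochastic subgradient of the loss at iterate $w_\tau$, $\lambda$ an $\ell_1$ penalty level, and $\gamma$ a step-size constant; these are not the specific parameter schedules of our analysis):
\[
w_{t+1} \;=\; \arg\min_{w \in \mathbb{R}^d} \Big\{ \langle \bar g_t, w \rangle \;+\; \lambda \|w\|_1 \;+\; \frac{\gamma}{\sqrt{t}}\,\frac{1}{2}\|w\|_2^2 \Big\},
\qquad
\bar g_t \;=\; \frac{1}{t}\sum_{\tau=1}^{t} g_\tau .
\]
Coordinate-wise this update reduces to soft-thresholding the averaged subgradient, $w_{t+1,i} = -\tfrac{\sqrt{t}}{\gamma}\,\operatorname{sign}(\bar g_{t,i})\,\max\{|\bar g_{t,i}| - \lambda,\, 0\}$, which is what makes the iterates sparse; the multi-stage algorithm described above wraps successive epochs of this type with updated regularization.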