We derive optimal first-order stochastic optimization rates for convex functions whose growth around their minimizer is quantified by a Tsybakov noise condition (TNC) with exponent $k$. This includes as special cases the classical rates for convex ($k \to \infty$), strongly convex ($k = 2$), and uniformly convex (${k \ge 2}$) optimization. Even faster (nearly exponential) rates are possible when ${1 \le k < 2}$. Our analysis relies on a reduction of convex optimization to active learning: both use feedback to minimize the number of queries needed to identify the minimizer or the decision boundary, respectively. As a consequence, the complexity of $d$-dimensional convex optimization is the same as that of one-dimensional active learning. It is also possible to adapt to an unknown TNC exponent, and hence to the degree of convexity.
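For concreteness, a common way to formalize such a growth condition is the following sketch; the constant $\lambda > 0$ and the Euclidean norm are illustrative assumptions here, not necessarily the paper's exact notation:
\[
  f(x) - f(x^*) \;\ge\; \lambda \, \lVert x - x^* \rVert^{k} \qquad \text{for all feasible } x,
\]
so that $k = 2$ corresponds to (strong-convexity-like) quadratic growth, larger $k$ to flatter growth near $x^*$ (approaching general convexity as $k \to \infty$), and $k < 2$ to sharper growth, which enables the faster rates mentioned above.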