We show how the fat-shattering dimension can be used to obtain lower bounds for a generic class of convex optimization problems in the local access model. This in turn implies that the sample complexity of learning is upper bounded by the runtime of any algorithm that solves the corresponding empirical optimization problem using only local black-box queries.