Classical algorithms for stochastic optimization involve exact evaluation of the gradient and can be prohibitively computation-intensive in high dimensions. Thus, sample gradients are often used instead, which introduce their own problems, such as high variance and slow convergence. In this paper, we introduce a class of 'empirical' algorithms that provide a better trade-off between computation and convergence. We use multiple samples in each iteration, which remains computationally practical while improving convergence. We first consider unconstrained stochastic optimization and propose the empirical gradient descent algorithm for it. We also propose a streaming or online variant, likely of use in streaming big data applications. We then consider constrained stochastic optimization and propose empirical and streaming variants of the classical primal-dual algorithm. We assume convexity (but not strong convexity) and smoothness of the functions. We establish convergence in probability for the empirical algorithms, and in expectation for the streaming algorithms. Our analysis is based on a novel convergence analysis technique and a refinement of the classical Azuma-Hoeffding inequality, likely of independent interest.
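The multi-sample idea behind the empirical algorithms can be illustrated with a minimal sketch: at each iteration, average several independent sample gradients to reduce variance, then take a descent step. The function names, interface, and toy problem below are illustrative assumptions, not the paper's notation or its exact algorithm.

```python
import numpy as np

def empirical_gradient_descent(sample_grad, x0, step, n_samples, n_iters, rng):
    """Gradient descent using an empirical (multi-sample) gradient estimate.

    sample_grad(x, rng) returns one noisy, unbiased sample of the gradient
    at x; the empirical gradient averages n_samples such draws per iteration
    to reduce variance. (Names and interface are hypothetical.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        # Empirical gradient: average of several independent sample gradients.
        g = np.mean([sample_grad(x, rng) for _ in range(n_samples)], axis=0)
        x = x - step * g
    return x

# Toy problem: minimize E[0.5 * ||x - a||^2], whose sample gradient at x is
# (x - a) plus Gaussian noise.
rng = np.random.default_rng(0)
a = np.array([1.0, -2.0])
noisy_grad = lambda x, rng: x - a + rng.normal(scale=1.0, size=x.shape)
x_hat = empirical_gradient_descent(noisy_grad, np.zeros(2), step=0.1,
                                   n_samples=32, n_iters=200, rng=rng)
```

Averaging 32 sample gradients shrinks the per-step noise standard deviation by a factor of sqrt(32), so the iterate settles much closer to the minimizer `a` than single-sample stochastic gradient descent with the same step size would.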