In this talk, we study empirical risk minimization (ERM) problems for large-scale datasets and incorporate the idea of adaptive sample size methods to improve the guaranteed convergence bounds of first-order and second-order methods. In contrast to traditional methods, which attempt to solve the ERM problem corresponding to the full dataset directly, adaptive sample size schemes start with a small number of samples and solve the corresponding ERM problem to within its statistical accuracy. The sample size is then grown geometrically, and the solution of the previous ERM problem is used as a warm start for the next one. Theoretical analyses show that adaptive sample size methods significantly reduce the overall computational cost of reaching the statistical accuracy of the full dataset for a broad range of deterministic and stochastic methods.
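To make the scheme concrete, here is a minimal Python sketch of the adaptive sample size loop described above. The abstract leaves the loss, the growth factor, and the notion of statistical accuracy unspecified; this sketch assumes an L2-regularized logistic loss, a doubling schedule, a gradient-norm threshold V_n = 1/n as a stand-in for the statistical accuracy, and plain gradient descent as the inner solver. All of these are illustrative choices, not the specific methods analyzed in the talk.

```python
import numpy as np

def adaptive_sample_size_erm(X, y, lam=0.1, growth=2, n0=64,
                             lr=0.1, tol_scale=1.0):
    """Sketch of an adaptive sample size scheme for regularized ERM.

    Solves the ERM problem on a small subsample to (a proxy for) its
    statistical accuracy, then doubles the sample size and warm-starts
    the next ERM from the previous solution, until the full dataset
    is reached. Labels y are assumed to be in {-1, +1}.
    """
    N, d = X.shape
    w = np.zeros(d)          # iterate; carried across stages as the warm start
    n = min(n0, N)
    while True:
        Xn, yn = X[:n], y[:n]    # ERM over the first n samples

        def grad(v):
            # Gradient of the L2-regularized logistic loss on n samples.
            p = 1.0 / (1.0 + np.exp(-yn * (Xn @ v)))
            return -Xn.T @ (yn * (1.0 - p)) / n + lam * v

        # Inner solver: gradient descent until the gradient norm falls
        # below the assumed statistical-accuracy proxy V_n ~ 1/n.
        V_n = tol_scale / n
        g = grad(w)
        while np.linalg.norm(g) > V_n:
            w = w - lr * g
            g = grad(w)

        if n == N:               # full dataset solved to its accuracy
            return w
        n = min(growth * n, N)   # grow the sample size geometrically
```

Note that the warm start is implicit: `w` is never reset between stages, so each ERM subproblem starts from the (statistically accurate) solution of the previous, smaller one, which is what keeps the per-stage work small.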