We propose empirical dynamic programming (EDP) algorithms for Markov decision processes (MDPs). Replacing the exact expectation in the Bellman operator with an empirical estimate yields `empirical value iteration' (EVI); replacing the policy evaluation and policy improvement steps of classical policy iteration with their simulation-based counterparts yields `empirical policy iteration' (EPI). These EDP algorithms iterate a random operator, the empirical Bellman operator. We introduce notions of probabilistic fixed points for such random monotone operators and develop a stochastic dominance framework for their convergence analysis. We then use this framework to give sample complexity bounds for both EVI and EPI. Preliminary experimental results suggest a faster rate of convergence than stochastic approximation methods.
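As a concrete sketch of the key idea (the symbols $c$, $\alpha$, $\psi$, and the sample size $n$ below are illustrative notation, not necessarily the paper's): given a simulation model $s' = \psi(s,a,\xi)$ driven by i.i.d. noise samples $\xi_1,\dots,\xi_n$, the empirical Bellman operator replaces the expectation in the Bellman operator with a sample average,
\[
\widehat{T}_n v(s) \;=\; \min_{a \in A} \Big\{ c(s,a) \;+\; \alpha\, \frac{1}{n} \sum_{i=1}^{n} v\big(\psi(s,a,\xi_i)\big) \Big\},
\]
and EVI iterates $v_{k+1} = \widehat{T}_n v_k$ with fresh samples drawn at each iteration, so the algorithm is the iteration of a random operator rather than of the deterministic Bellman operator.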