We study stochastic games with a large number of players, where the players are coupled through their payoff functions. A standard solution concept for such games is Markov perfect equilibrium (MPE). It is well known that the computation of MPE suffers from the “curse of dimensionality.” Recently, Weintraub et al. developed an approximate solution concept called “oblivious equilibrium” (OE), in which each player reacts only to the average behavior of the other players. In this paper, we develop a unified framework for studying mean field equilibrium behavior in large-scale stochastic games. In particular, we prove that under a set of simple assumptions on the model, a mean field equilibrium always exists. Furthermore, as a simple consequence of this existence theorem, we show that from the viewpoint of a single agent, a near-optimal decision-making policy is one that reacts only to the average behavior of its environment. This result unifies previously known results on mean field equilibria in large-scale games. In developing this unified framework, we isolate and highlight the key modeling parameters that make the mean field approach feasible.
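For concreteness, the display below sketches the standard fixed-point characterization of a stationary mean field equilibrium that the abstract alludes to: a policy that is a best response to the population's average behavior, paired with a population distribution that is consistent with that policy. The notation ($\mu$ for the population distribution, $\pi$ for a stationary policy, $r$, $P$, and $\beta$ for the reward, transition kernel, and discount factor) is generic and is not taken from this paper's model.
% Illustrative sketch only: a generic stationary mean field equilibrium is a pair
% (\pi, \mu) satisfying an optimality condition and a consistency condition.
% The symbols are placeholders, not this paper's notation.
\begin{align*}
  V(x \mid \mu) &= \max_{a \in \mathcal{A}} \Big\{ r(x, a, \mu)
      + \beta \sum_{x' \in \mathcal{X}} P(x' \mid x, a, \mu)\, V(x' \mid \mu) \Big\},
      && \text{(best response to } \mu\text{)} \\
  \mu(x') &= \sum_{x \in \mathcal{X}} \mu(x)\, P\big(x' \mid x, \pi(x), \mu\big),
      && \text{(consistency: } \mu \text{ invariant under } \pi\text{)}
\end{align*}
where $\pi(x)$ attains the maximum in the first equation for every state $x$.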