Much of the classical work on algorithms for multiarmed bandits studies rewards that are stationary over time. By contrast, we study multiarmed bandit (MAB) games, where the rewards obtained by an agent also depend on how many other agents choose the same arm (as might be the case in many competitive or cooperative scenarios). Such systems are naturally nonstationary due to the interdependent evolution of agents, and in general MAB games can be intractable to analyze using standard equilibrium concepts (such as perfect Bayesian equilibrium). We introduce a general model of multiarmed bandit games and study a notion of equilibrium inspired by a large-system approximation known as mean field equilibrium. In such an equilibrium, the proportion of agents playing the various arms, called the