We study the problem of dynamic learning by a social network of agents. Each agent receives a signal about an underlying state and communicates with a subset of agents (its neighbors) in each period. The network is connected. In contrast to the majority of existing learning models, we focus on the case where the underlying state is time-varying. We consider the following class of rule-of-thumb learning rules: in each period, each agent constructs its posterior as a weighted average of its prior, its signal, and the information it receives from its neighbors. The weights given to signals can vary over time, and the weights given to neighbors can vary across agents. We distinguish between two subclasses: (1) constant-weight rules and (2) diminishing-weight rules; the latter asymptotically drive the weights given to signals to zero. Our main results characterize the asymptotic behavior of beliefs.
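
For concreteness, a minimal sketch of such an update rule, in illustrative notation (the belief $x_i(t)$, signal $s_i(t)$, signal weight $\alpha_i(t)$, and neighbor weights $w_{ij}$ are labels introduced here, not necessarily the paper's own), is
\[
x_i(t+1) \;=\; \alpha_i(t)\, s_i(t) \;+\; \bigl(1-\alpha_i(t)\bigr) \sum_{j \in N_i \cup \{i\}} w_{ij}\, x_j(t),
\qquad \sum_{j \in N_i \cup \{i\}} w_{ij} = 1,
\]
where $N_i$ denotes agent $i$'s set of neighbors. In this notation, constant-weight rules fix $\alpha_i(t) \equiv \alpha > 0$ for all $t$, while diminishing-weight rules take $\alpha_i(t) \to 0$ as $t \to \infty$.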