Probabilistic graphical models are widely used to represent complex, stochastic systems in application domains such as coding, signal processing, computer vision, and biology. In many of these applications, the underlying model is used repeatedly with minor variations each time: for example, within iterative algorithms for estimation or learning from data, in response to user interaction, or to evaluate data sequences that change slowly over time. Traditional inference algorithms are designed with only a single computation in mind and are simply re-run from scratch after each change, at an unnecessarily high computational cost. However, a minor change to the model often requires updating only a small fraction of the solution, and we present an adaptive inference algorithm that exploits this observation to update the solution efficiently after each change. We show that on optimization tasks, our algorithm is within a logarithmic factor of optimal and is asymptotically never slower than recomputing from scratch: if a modification to the model requires $m$ updates to the MAP configuration of $n$ random variables, then our algorithm requires $O(m \log(n/m))$ time, compared to $O(n)$ for recomputation. We illustrate its effectiveness on several problem domains, including computer vision and computational mutagenesis.
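As a rough numerical illustration of the stated bounds (hypothetical values, constant factors ignored, logarithm taken base 2 for concreteness): with $n = 10^6$ random variables and a modification touching $m = 100$ of them, the adaptive bound gives on the order of
\[
m \log_2(n/m) \;=\; 100 \cdot \log_2(10^4) \;\approx\; 1.3 \times 10^3
\]
elementary operations, versus on the order of $n = 10^6$ for recomputation from scratch, a speedup of nearly three orders of magnitude.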