Adaptive Markov chain Monte Carlo (AMCMC) is a new class of MCMC algorithms in which the transition kernel is continuously modified "on the fly" using the information contained in the samples produced by the chain itself. Regional adaptation is useful in situations in which one would like to use different transition kernels across a partition of the sample space. We focus on Random Walk Metropolis algorithms and, assuming that the target distribution can be approximated by a mixture of Gaussians, propose a regional adaptive algorithm with online recursion (RAPTOR) in which both the partition and the proposal parameters are modified as the simulation proceeds. In the second part of the talk we will discuss a regime-change algorithm that performs adaptation only within a region of the sample space. The theoretical results presented close some of the gaps between the theory and practice of AMCMC.
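To make the regional-adaptation idea concrete, below is a minimal, hypothetical Python sketch, not the RAPTOR algorithm as presented in the talk: the one-dimensional bimodal target, the nearest-mean rule defining the two regions, the 2.38-scaling of the proposal, and the Welford-style online recursions for the per-region mean and variance are all illustrative assumptions. It only shows the general mechanism of a partition and region-specific proposal scales being updated as the simulation proceeds, with diminishing (1/n) adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bimodal target: equal-weight mixture of N(-3, 1) and N(+3, 1).
def log_target(x):
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

# Running per-region estimates (mean, variance) used to tune the proposal.
# Region k = {x : x is closest to mu[k]}; both the region boundary and the
# proposal scales move as these estimates are updated online.
mu = np.array([-1.0, 1.0])      # initial guesses for the two modes
var = np.array([1.0, 1.0])      # initial per-region proposal variances
counts = np.array([1.0, 1.0])   # samples assigned to each region so far

n_iter, x = 20000, 0.0
chain = np.empty(n_iter)
for t in range(n_iter):
    k = int(np.argmin(np.abs(x - mu)))     # region of the current state
    scale = 2.38 * np.sqrt(var[k])         # standard RWM scaling in 1D
    y = x + scale * rng.standard_normal()  # region-specific random-walk proposal
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x = y
    chain[t] = x
    # Online recursion with 1/n step size (diminishing adaptation):
    # update the mean and variance of the region the new state falls in.
    k = int(np.argmin(np.abs(x - mu)))
    counts[k] += 1.0
    w = 1.0 / counts[k]
    d = x - mu[k]
    mu[k] += w * d
    var[k] = (1.0 - w) * var[k] + w * d * (x - mu[k])

print("estimated region centers:", mu)
print("adapted proposal scales:", 2.38 * np.sqrt(var))
```

In this toy version the partition adapts because the region centers `mu` drift toward the two modes, while the proposal variance in each region adapts to the local spread of the target; a full treatment would fit a Gaussian mixture and combine the regional proposal with a whole-space one, as discussed in the talk.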