Markov chain Monte Carlo (MCMC) methods, which originated in computational physics more than half a century ago, have seen an enormous range of applications in quantitative scientific investigations. This is mainly due to their ability to simulate from the very complex distributions required by all kinds of statistical models, in fields ranging from bioinformatics to financial engineering to astronomy. The first part of this talk provides an introductory tutorial on the two most frequently used MCMC algorithms: the Gibbs sampler and the Metropolis-Hastings algorithm. Using simple yet non-trivial examples, we demonstrate, via live performance, the good, the bad, and the ugly implementations. Along the way, we reveal both the mathematical challenges in establishing their convergence rates and the statistical thinking underlying their designs, including the secret behind the greatest statistical magic. Audience participation is required, though no prior experience is needed.

The second part of the talk presents the Ancillarity-Sufficiency Interweaving Strategy (ASIS), a surprisingly simple and effective boosting method for combating some of the serious problems revealed in the first part. ASIS was discovered almost by accident during the struggle of a Ph.D. student, Yaming Yu, to fit a Cox process model for detecting changes in the source intensity of photon counts observed by the Chandra X-ray Telescope from a (candidate) neutron/quark star. Yu's method for solving that particular problem turned out to be of considerable generality, and it ultimately led to the full formulation of ASIS. The method achieves fast convergence by exploiting the "beauty and beast" discordant nature of two MCMC schemes to break the usual "stickiness" of a Markov chain, i.e., its high autodependence and hence slow exploration of the state space. This part of the talk is based on an upcoming discussion article, Yu and Meng (2011), in the Journal of Computational and Graphical Statistics.
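
To make the interweaving idea concrete, here is a minimal sketch on a toy Gaussian hierarchy; the model, the values y = 3 and V = 10, and all variable names are illustrative assumptions for this note, not material from the talk or from Yu and Meng (2011). One scan draws the missing data under the sufficient (centered) augmentation, then re-expresses it in the ancillary (non-centered) augmentation before redrawing the parameter, so the two discordant schemes are interwoven within a single iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for illustration):
#   y | theta ~ N(theta, 1),  theta | mu ~ N(mu, V),  flat prior on mu.
y, V = 3.0, 10.0
n_iter = 5000

def asis_chain(mu0):
    """Sketch of an ASIS-style interwoven sampler for mu."""
    mu = mu0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # Sufficient (centered) augmentation step:
        # theta | mu, y is Normal with precision 1 + 1/V.
        prec = 1.0 + 1.0 / V
        theta = rng.normal((y + mu / V) / prec, np.sqrt(1.0 / prec))
        # mu | theta ~ N(theta, V); y drops out once theta is known.
        mu_sa = rng.normal(theta, np.sqrt(V))

        # Interweave: switch to the ancillary (non-centered) augmentation,
        # whose distribution is free of mu, then redraw mu there.
        theta_tilde = theta - mu_sa
        # mu | theta_tilde, y ~ N(y - theta_tilde, 1) under the flat prior.
        mu = rng.normal(y - theta_tilde, 1.0)
        draws[t] = mu
    return draws

draws = asis_chain(mu0=0.0)
# The posterior of mu here is N(y, 1 + V), i.e., mean 3 and variance 11;
# the interwoven chain explores it with essentially no stickiness.
print(draws.mean(), draws.var())
```

Either augmentation used alone yields a Gibbs sampler that can mix slowly (very slowly when V is extreme); the point of the sketch is that composing the two through the deterministic re-expression theta_tilde = theta - mu breaks the chain's autodependence.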