We introduce a new algorithm for Maximum Likelihood (ML) decoding. The algorithm is based on the principle that the receiver rank orders noise sequences from most likely to least likely. Noise is subtracted from the received signal in that order, and the first instance that results in an element of the code-book is the ML decoding. For channels where the noise is determinable from the input and output, and where the noise is independent of the input, such as common additive noise channels, we establish that the algorithm is capacity achieving for uniformly selected code-books. When the code-book rate is less than capacity, we identify exact asymptotic error exponents as the block-length becomes large. When the code-book rate is beyond capacity, we determine exact asymptotic success exponents, i.e. exponents for the probability that the ML decoding is the transmitted code-word. We illustrate the practical usefulness of this new approach with several examples, making use of the fact that guessing noise can, on average, be quite rapid.
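
The noise-guessing principle above can be sketched as follows. This is an illustrative toy, not the paper's general formulation: it assumes a binary symmetric channel with flip probability below 1/2, so noise sequences are most likely when they flip the fewest bits, and guesses can be enumerated in order of increasing Hamming weight. The function name `grand_decode` and the small codebook are hypothetical choices for the example.

```python
from itertools import combinations

def grand_decode(received, codebook, n):
    # Enumerate noise guesses from most to least likely: for a BSC
    # with flip probability < 1/2, likelihood decreases with the
    # number of flipped bits (Hamming weight).
    for weight in range(n + 1):
        for flips in combinations(range(n), weight):
            # "Subtract" the guessed noise from the received word
            # (subtraction is XOR over the binary field).
            guess = list(received)
            for i in flips:
                guess[i] ^= 1
            # The first guess that lands in the code-book is the
            # ML decoding.
            if tuple(guess) in codebook:
                return tuple(guess)
    return None  # unreachable for a non-empty code-book

# Toy example: a length-4 code-book with two words (illustrative only).
codebook = {(0, 0, 0, 0), (1, 1, 1, 1)}
received = (0, 1, 0, 0)  # one bit flipped by the channel
print(grand_decode(received, codebook, 4))  # -> (0, 0, 0, 0)
```

Note that the decoder's work scales with the number of noise guesses rather than the code-book size, which is the source of the observation that guessing noise can, on average, be quite rapid when the channel is not too noisy.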