In this talk, we present a new methodology for the analysis of algorithms. "Informativeness" is proposed as a measure of algorithmic performance complementary to time and space requirements. We introduce the generalization capability of algorithms with respect to their task of processing input data into meaningful outputs. The trade-off between the informativeness of algorithmic procedures and their stability against uncertainty is investigated by generalizing Shannon's channel capacity to arbitrary learning problems. We exemplify the principle with sorting algorithms: different algorithms for the same data-processing task, e.g. the various sorting methods, can then be objectively compared on the basis of the bit rates of their respective capacities.
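As background, the classical quantity being generalized here is Shannon's channel capacity; for the binary symmetric channel it has the closed form C = 1 - H(p), where H is the binary entropy and p the crossover probability. The sketch below only illustrates this textbook baseline, not the talk's generalization to learning problems; the function names are our own.

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy H(p) of a Bernoulli(p) source, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity C = 1 - H(p), in bits per channel use, of a binary
    symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

# A noiseless channel (p = 0) transmits 1 bit per use; at p = 0.5 the
# output carries no information about the input and the capacity is 0.
print(bsc_capacity(0.0))   # 1.0
print(bsc_capacity(0.5))   # 0.0
```

The informativeness/stability trade-off in the abstract plays the analogous role: more noise (uncertainty) lowers the achievable bit rate, just as a larger p lowers C here.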