The so-called No-Free-Lunch principle is a basic insight of machine learning. It may be viewed as stating that, in the absence of prior knowledge, any learning algorithm fails on some learnable task. In recent years, several paradigms for "universal learning" have been proposed and advocated. These range from paradigms of an almost science-fictional nature, like the "Automation of science", through practically oriented Deep Belief Networks, to theoretical constructs like Universal Kernels and Universal Coding for MDL-based learning. In this talk I investigate this apparent contradiction by examining possible definitions of universal learning, proving a basic No-Free-Lunch theorem for such notions, and discussing how these results apply to the above-mentioned universal learning paradigms.
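
For concreteness, one standard formalization of the principle for binary classification with the 0-1 loss reads as follows (this is the familiar textbook variant, with its particular constants; the theorem proved in the talk concerns notions of universal learning and may differ):

\[
\begin{aligned}
&\text{Let } A \text{ be any learning algorithm over a domain } X, \text{ and let the sample size satisfy } m \le |X|/2. \\
&\text{Then there exist a distribution } D \text{ over } X \times \{0,1\} \text{ and a labeling } f : X \to \{0,1\} \text{ with } L_D(f) = 0, \\
&\text{such that } \Pr_{S \sim D^m}\!\left[\, L_D\big(A(S)\big) \ge \tfrac{1}{8} \,\right] \ \ge\ \tfrac{1}{7}.
\end{aligned}
\]

That is, although the task is perfectly learnable given prior knowledge of \( f \), the algorithm \( A \), run without such knowledge, fails on it with constant probability.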