We consider the problem of batch learning with log-loss and propose a minimax universal learning solution that minimizes the worst-case log-loss regret. Using the minimax theorem and information-theoretic tools, we derive the minimax universal learning solution, a redundancy-capacity theorem, and an upper bound on the performance of the optimal solution. The resulting universal learning solution is a mixture over the models in the considered class, and it outperforms the empirical risk minimization (ERM) solution commonly used in statistical learning theory. Furthermore, we obtain a tighter bound on the generalization error that decays as $O(\log N/N)$, where $N$ is the sample size, instead of $O(\sqrt{\log N/N})$.
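To fix notation, the following display is a minimal sketch of the minimax regret criterion and the resulting mixture predictor, under the assumption of a parametric model class $\{p_\theta(y\mid x)\}_{\theta\in\Theta}$, a training sample $z^N=\{(x_i,y_i)\}_{i=1}^N$, and a prior $w(\theta)$; the symbols $q$, $w$, and $z^N$ are illustrative and the exact formulation in the paper may differ:
\[
q^{*} \;=\; \arg\min_{q}\,\max_{\theta\in\Theta}\;
\mathbb{E}_{p_\theta}\!\left[\log\frac{p_\theta(y\mid x)}{q(y\mid x, z^N)}\right],
\qquad
q^{*}(y\mid x, z^N) \;=\; \int_{\Theta} p_\theta(y\mid x)\,
\frac{w(\theta)\,p_\theta(z^N)}{\int_{\Theta} w(\theta')\,p_{\theta'}(z^N)\,d\theta'}\,d\theta,
\]
where, consistent with the redundancy-capacity theorem mentioned above, $w$ would be the capacity-achieving prior of the associated channel; unlike ERM, which predicts with the single empirical-risk-minimizing model, the predictor $q^{*}$ mixes all models in the class weighted by their posterior given $z^N$.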