We consider the problem of aggregating pairwise comparisons to obtain a consensus ranking over a collection of objects. We adopt the Bayesian Bradley-Terry-Luce (BTL) model, which allows meaningful prior assumptions to be incorporated and copes with situations where the number of objects is large and the number of comparisons between some pairs of objects is small or even zero. For the conventional Bayesian BTL model, we derive information-theoretic lower bounds on the Bayes risk of estimators under norm-based distortion functions. We compare the information-theoretic lower bound with the Bayesian Cramér-Rao lower bound that we derive for the case where the Bayes risk is the mean squared error. We illustrate the utility of the bounds through simulations, comparing them with the error performance of an expectation-maximization based inference algorithm proposed for the Bayesian BTL model. We draw parallels between pairwise comparisons in the BTL model and inter-player games represented as edges in a comparison graph, and analyze the effect of various graph structures on the lower bounds.
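For reference, a minimal sketch of the standard BTL comparison probability, with notation assumed here rather than taken from the abstract: each object $i$ carries a latent skill parameter $\lambda_i > 0$, and the outcome of a single comparison between objects $i$ and $j$ is modeled as
\[
  \Pr(i \text{ beats } j) \;=\; \frac{\lambda_i}{\lambda_i + \lambda_j},
\]
where the Bayesian variant additionally places a prior (commonly a Gamma distribution) on the skills $\lambda_i$, which is what makes inference possible even when some pairs are never compared.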