We address the basic tradeoff between blocklength, dimension, and performance (error probabilities) in composite binary hypothesis testing. We define universal optimality criteria with and without training data, and present tests that are optimal in the limit of large blocklength for a large class of parametric families. We further explore the dependence of performance on the dimension of the composite class, characterizing how the other parameters (e.g., blocklength) must grow to compensate for the growth of dimension and keep the performance fixed.
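To make the setting concrete, the following is a minimal illustrative sketch of a composite binary hypothesis test, using a generalized likelihood-ratio test (GLRT) on a simple Gaussian family. This is an assumed toy example, not the universal tests presented in the paper: the specific family (unit-variance Gaussians), the separation parameter `delta`, and the threshold are all hypothetical choices for illustration.

```python
import numpy as np

# Toy composite binary hypothesis test on n i.i.d. samples X ~ N(theta, 1):
#   H0: theta = 0    vs.   H1: |theta| >= delta  (a composite alternative)
# The GLRT substitutes the constrained MLE of theta under H1.

def glrt_statistic(x, delta=0.5):
    """Log GLRT statistic: sup_{|theta| >= delta} log L(theta) - log L(0)."""
    n = len(x)
    xbar = np.mean(x)
    # Constrained MLE under H1: project the sample mean onto {|theta| >= delta}.
    theta_hat = xbar if abs(xbar) >= delta else np.sign(xbar or 1.0) * delta
    # Log-likelihood difference for N(theta, 1): n * (theta*xbar - theta^2 / 2).
    return n * (theta_hat * xbar - 0.5 * theta_hat**2)

def glrt_decide(x, threshold, delta=0.5):
    """Decide H1 iff the GLRT statistic exceeds the threshold."""
    return glrt_statistic(x, delta) > threshold

rng = np.random.default_rng(0)
n = 200                          # blocklength: larger n shrinks both error probabilities
x0 = rng.normal(0.0, 1.0, n)     # data generated under H0 (theta = 0)
x1 = rng.normal(0.8, 1.0, n)     # data generated under H1 (theta = 0.8)
print(glrt_decide(x0, threshold=5.0), glrt_decide(x1, threshold=5.0))
```

The blocklength/performance tradeoff shows up directly here: the statistic scales linearly in n, so at a fixed threshold both error probabilities decay as n grows; a higher-dimensional composite class (e.g., a vector-valued theta) would require a larger n to hold the same error levels.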