Enumerative coding for tree sources is not straightforward, since the size of the type class of a sequence $x^n$ with respect to a given tree may deviate significantly from $\exp(n \hat{H}(x^n))$, where $\hat{H}(x^n)$ is the empirical entropy of $x^n$ with respect to the model. The latter asymptotic behavior is key to twice-universal enumerative schemes for FSM sources, including fully parametrized $k$-th order Markov models. We survey recent work on the characterization of type classes of tree models, and describe an enumerative coding scheme that is twice-universal in the class of tree sources in the sense that, for a string emitted by an unknown tree source, the expected normalized codelength of the encoding approaches the entropy rate of the source with a convergence rate of $(K/2)(\log n)/n$, where $K$ is the number of free parameters of the tree. The encoding consists of the index of the sequence in a type class related to the tree, and an efficient description of the class itself. It is also shown that there is some freedom in the choice of the class partition used in the encoding, and that variations in the size of the class are compensated by offsetting variations in the length of the description of the class, maintaining the universality result.
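As a point of reference, the sketch below illustrates the classical (Cover-style) enumerative scheme in the simplest setting, a binary memoryless model, where the type class of $x^n$ is the set of sequences with the same number of ones and its size $\binom{n}{k}$ does behave as $\exp(n \hat{H}(x^n))$. The code is illustrative only (the function names and the choice of Python are not from the paper); the tree-model scheme discussed above replaces this composition class with a type class adapted to the tree, but the two-part structure, an index within the class plus a description of the class, is the same.

from math import comb, ceil, log2

def type_class_index(x):
    """Lexicographic rank of the binary sequence x within its memoryless
    type class (all sequences of the same length and weight), computed
    with Cover's enumerative formula."""
    ones_left = sum(x)  # ones in x not yet accounted for
    index = 0
    for i, bit in enumerate(x):
        if bit == 1:
            # Sequences that agree with x so far but place a 0 here:
            # all ones_left ones must go in the remaining positions.
            index += comb(len(x) - i - 1, ones_left)
            ones_left -= 1
    return index

def enumerative_codelength(x):
    """Two-part codelength: bits to describe the class (the weight k)
    plus bits for the index of x within the class."""
    n, k = len(x), sum(x)
    class_bits = ceil(log2(n + 1))      # describe k in {0, ..., n}
    index_bits = ceil(log2(comb(n, k))) # index among binom(n, k) members
    return class_bits + index_bits

x = [0, 1, 1, 0, 1, 0, 0, 1]
print(type_class_index(x), enumerative_codelength(x))

Here the class description costs only $O(\log n)$ bits, so the normalized codelength is dominated by $\log \binom{n}{k} \approx n \hat{H}(x^n)$; the difficulty addressed in the paper is that for tree models the class size no longer admits this simple behavior.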