The type class of a sequence $x^n$ with respect to a given model class (e.g., Markov sources of a given order) is defined as the set of sequences that have the same probability as $x^n$ under every model in the class. The requirement that this property hold universally over the model class is analogous to the setting of universal data compression (and other problems in information theory), where the code length of a universal code is required to approach the model entropy universally over a model class. In data compression, the concept is further refined in the twice-universal setting, where a nested family of model classes is assumed (e.g., Markov models of different orders) and the goal is to design a single code that approaches the model entropy at the optimal convergence rate corresponding to each model class in the nest, universally for every model in that class. Can the set of $n$-vectors be partitioned into type classes that are ``twice-universal'' in an analogous sense? In this talk, such a partition is proposed for the Markov case. The (twice-universal) type class of $x^n$ consists of the sequences for which the estimated Markov order is the same, say $i$, as for $x^n$, and whose type of order $i$ coincides with that of $x^n$. While sequences in the same type class cannot be required to have the same probability under every Markov model of every order (unless the type class is a singleton), we show that, for a suitable choice of the Markov order estimator, the probabilities are close in a well-defined sense, and that the proposed type classes are essentially the largest ones with such a property. This work extends previous work in which type classes are based on the Lempel-Ziv parsing rule and, as in that work, the results are applied to simulation in the individual-sequence setting. We show that the design of the Markov order estimator controls the tension between faithfulness and type size, a property not shared by the LZ-based types. When the simulation problem is viewed in its more traditional stochastic setting, the label of double universality is justified by an optimal convergence rate. We discuss possible extensions to other model classes, such as tree models.
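To make the definition concrete, the sketch below (in Python) tests whether two sequences over a finite alphabet fall in the same twice-universal type class: both must estimate the same Markov order $i$ and have the same order-$i$ type. Everything in it is an illustrative assumption rather than the construction of the talk: the helper names (`markov_counts`, `estimate_order`, `same_twice_universal_type`), the convention of conditioning the order-$i$ type on the initial context $x_1^i$, and in particular the BIC-style penalized maximum-likelihood order estimator, which stands in for the specific estimator whose design the talk's results depend on.

```python
from collections import Counter
from math import log

def markov_counts(x, k):
    """Order-k type of x: counts of each (context, symbol) pair,
    scanning left to right and conditioning on the first k symbols."""
    return Counter((tuple(x[t - k:t]), x[t]) for t in range(k, len(x)))

def max_log_likelihood(x, k):
    """Maximum log-likelihood of x over all order-k Markov models,
    conditioned on the initial context x[:k]."""
    counts = markov_counts(x, k)
    totals = Counter()
    for (ctx, _), n in counts.items():
        totals[ctx] += n
    return sum(n * log(n / totals[ctx]) for (ctx, _), n in counts.items())

def estimate_order(x, alphabet_size, k_max):
    """BIC-style penalized-ML Markov order estimator -- a placeholder
    for the estimator actually designed in the work."""
    n = len(x)
    def cost(k):
        # free parameters of an order-k model: |A|^k * (|A| - 1)
        penalty = 0.5 * alphabet_size ** k * (alphabet_size - 1) * log(n)
        return -max_log_likelihood(x, k) + penalty
    return min(range(k_max + 1), key=cost)

def same_twice_universal_type(x, y, alphabet_size, k_max):
    """True iff x and y estimate the same Markov order i and have the
    same order-i type (same initial context and transition counts)."""
    i = estimate_order(x, alphabet_size, k_max)
    return (estimate_order(y, alphabet_size, k_max) == i
            and tuple(x[:i]) == tuple(y[:i])
            and markov_counts(x, i) == markov_counts(y, i))
```

The one real degree of freedom in the sketch is the order estimator: per the abstract, its design is what controls the tension between faithfulness and type size, so the BIC penalty above should be read as a placeholder, not as the choice that yields the optimality results stated in the talk.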