Compression schemes for advanced data structures have become a central modern challenge. Information theory has traditionally dealt with conventional data such as text, images, or video. In contrast, most data available today is multitype and context-dependent. To meet this challenge, we have recently initiated a systematic study of advanced data structures, starting with unlabeled graphs generated by the Erdős–Rényi model. Extending these results to preferential attachment graphs turned out to be challenging; however, we recently solved this problem, too. In this talk we present a new, precise formula for the entropy of the preferential attachment graph and for its structural entropy (the entropy of the corresponding unlabeled graph).

We then continue this program by considering trees with statistically correlated vertex names. Trees come in many forms, but here we first deal with binary plane trees (in which the order of subtrees matters) and their non-plane counterparts (in which it does not). Furthermore, we assume that each name is generated by a known memoryless source (horizontal independence), while each symbol of a vertex name depends, in a Markovian sense, on the corresponding symbol of the parent's name (vertical Markovian dependency). We derive the entropy for both types of trees. Finally, we consider d-ary trees and trees of unrestricted degree, for which we also compute the entropy (the first step toward designing optimal compression). As it turns out, extending from binary trees to general trees is mathematically quite challenging and leads to new recurrences that find ample applications in information theory.

This is joint work with Y. Choi, Z. Golebiewski, T. Luczak, A. Magner, and K. Turowski.
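
To make the name-generation model concrete, below is a minimal Python sketch (our own illustration, not the authors' code). It generates a binary plane tree whose root name is drawn symbol-by-symbol from a memoryless source, and whose children's names are obtained by passing each parent symbol through a Markov transition kernel. The binary alphabet, the source p, the kernel P, and the fixed tree depth are all illustrative assumptions.

import random

ALPHABET = ['0', '1']
p = {'0': 0.5, '1': 0.5}            # memoryless source for the root name (horizontal independence)
P = {'0': {'0': 0.9, '1': 0.1},     # Markov kernel: P[parent symbol][child symbol]
     '1': {'0': 0.2, '1': 0.8}}     # (vertical Markovian dependency)

def draw(dist):
    # Sample one symbol from a {symbol: probability} distribution.
    r, acc = random.random(), 0.0
    for sym, prob in dist.items():
        acc += prob
        if r < acc:
            return sym
    return sym  # guard against floating-point rounding

def root_name(n):
    # Root symbols are i.i.d. from the memoryless source p.
    return ''.join(draw(p) for _ in range(n))

def child_name(parent):
    # Symbol j of a child's name depends only on symbol j of the parent's name.
    return ''.join(draw(P[s]) for s in parent)

def random_tree(depth, name):
    # A small binary plane tree as nested tuples (name, left, right);
    # subtree order is kept, matching the plane-tree setting.
    if depth == 0:
        return (name, None, None)
    return (name,
            random_tree(depth - 1, child_name(name)),
            random_tree(depth - 1, child_name(name)))

tree = random_tree(3, root_name(8))

The entropy discussed in the talk accounts jointly for the randomness of the tree shape and for these correlated names; the sketch above only illustrates the source model, not the entropy computation itself.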