Consider the setting of sparse graphs on $N$ vertices, where the vertices have distinct ``names'', which are strings of length $O(\log N)$ from a fixed finite alphabet. For many natural probability models, the entropy grows as $cN \log N$ for some model-dependent rate constant $c$. In adaptations of familiar ``complex network'' models to this setting, it is usually easy to calculate $c$; we give one non-trivial example. One cannot hope for universal data compression algorithms in this context, but we speculate that there is a notion of ``local entropy rate'' that specifies the best compression that can be guaranteed by a universal algorithm.
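
As a minimal illustrative calculation of such a rate constant (the parameters $\lambda$, $\beta$, and alphabet size $A$ below are chosen for illustration and are not the non-trivial example referred to above), consider the Erd\H{o}s--R\'enyi graph $G(N, \lambda/N)$ whose vertices receive distinct names drawn uniformly without replacement from the $A^L$ strings of length $L = \beta \log_A N$, with $\beta > 1$ so that $A^L = N^\beta \gg N$ names are available. The entropy splits into a structure term and a naming term:
\[
H \;=\; \binom{N}{2}\, h(\lambda/N) \;+\; \log\!\bigl(A^L (A^L - 1) \cdots (A^L - N + 1)\bigr)
\;\sim\; \tfrac{\lambda}{2}\, N \log N \;+\; \beta\, N \log N ,
\]
where $h(p) = -p \log p - (1-p)\log(1-p) \sim p \log(1/p)$ as $p \to 0$. So this baseline model has rate constant $c = \beta + \lambda/2$, the structure and the names each contributing at the same $N \log N$ order.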