In fields across science and engineering, we increasingly face high-dimensional problems in which statistically consistent estimation is possible only by leveraging structure in the problem, such as sparsity, group sparsity, low-rank structure, or sparse graphical model structure. In many problems, however, no single structure adequately captures the data, whereas a superposition of structural classes might. We therefore study, under a unified framework, a class of M-estimators that learn a superposition of parameters with different structures. We provide consistency and convergence rates for such regularized M-estimators under high-dimensional scaling.
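As a concrete illustration of the superposition idea (a sketch of my own, not code from the paper), consider the canonical sparse-plus-low-rank instance: the parameter decomposes as theta = S + L, with S entrywise sparse (l1 penalty) and L low rank (nuclear-norm penalty), fit by proximal gradient descent on a squared loss. All function names and the toy setup below are assumptions for illustration only.

```python
import numpy as np

def soft_threshold(A, t):
    # Proximal operator of t * ||.||_1: entrywise shrinkage toward zero.
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svd_threshold(A, t):
    # Proximal operator of t * nuclear norm: shrink singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def superposition_estimate(Y, lam_sparse, lam_lowrank, step=0.5, iters=500):
    # Minimize 0.5*||Y - S - L||_F^2 + lam_sparse*||S||_1 + lam_lowrank*||L||_*
    # by proximal gradient on the two structured blocks jointly.
    # The smooth part has Lipschitz constant 2 over (S, L), so step <= 0.5.
    S = np.zeros_like(Y)
    L = np.zeros_like(Y)
    for _ in range(iters):
        R = S + L - Y  # gradient of the smooth loss w.r.t. each block
        S = soft_threshold(S - step * R, step * lam_sparse)
        L = svd_threshold(L - step * R, step * lam_lowrank)
    return S, L
```

On a toy observation built as a rank-one matrix plus a few large sparse corruptions, the two proximal operators steer each block toward its own structure, which is the mechanism the superposition framework formalizes and analyzes.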