We discuss the recovery of models with simultaneous structure, for example a matrix that is simultaneously sparse and low-rank, from generic linear observations. Often, penalties that promote each individual structure are known and yield an order-wise optimal number of measurements (e.g., the $\ell_1$ norm for sparsity, the nuclear norm for low matrix rank), so it is natural to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with the individual norms, then we can do no better, order-wise, than an algorithm that exploits only one of the structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, not one that is a function of the convex relaxations used for each structure.
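To make the combined-norm objective concrete, the following is a minimal sketch of the penalty $\lambda_1 \|X\|_1 + \lambda_2 \|X\|_*$ for a simultaneously sparse and low-rank matrix, where $\|X\|_1$ is the entrywise $\ell_1$ norm and $\|X\|_*$ is the nuclear norm. The function name, weights, and example matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def combined_penalty(X, lam1=1.0, lam2=1.0):
    """Weighted combination of sparsity- and rank-promoting norms.

    lam1, lam2 are illustrative trade-off weights, not values from the paper.
    """
    l1 = np.abs(X).sum()                                 # entrywise l1 norm, promotes sparsity
    nuclear = np.linalg.svd(X, compute_uv=False).sum()   # nuclear norm, promotes low rank
    return lam1 * l1 + lam2 * nuclear

# Example: a rank-1 matrix built from two sparse vectors, so it is
# simultaneously sparse and low-rank.
u = np.array([1.0, 0.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 0.0, 0.0])
X = np.outer(u, v)
print(combined_penalty(X))
```

For a rank-1 matrix $X = uv^\top$, the nuclear norm reduces to $\|u\|_2 \|v\|_2$, so both terms are easy to check by hand in this toy example. The paper's result concerns minimizing such combinations subject to linear measurement constraints, which this sketch does not attempt.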