A line of recent work on high-dimensional statistical inference has studied models with various types of structure (e.g., sparse vectors, block-structured matrices, low-rank matrices, and Markov assumptions). A general approach to estimation in such settings is to use a regularized M-estimator, which combines a loss function (measuring the goodness of fit of the model to the data) with a regularization function that encourages the assumed structure. Our goal is to provide a unified framework for establishing consistency and convergence rates for such regularized M-estimation procedures under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive several existing results, and also to obtain several new results on consistency and convergence rates.
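To make the loss-plus-regularizer template concrete, a canonical instance is the Lasso, which pairs a least-squares loss with an l1 penalty promoting sparse vectors. Below is a minimal sketch solving it by proximal gradient descent (ISTA); the function name, step-size choice, and iteration count are illustrative assumptions, not details from this work.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by proximal gradient (ISTA).

    The smooth least-squares loss is handled with a gradient step; the
    nonsmooth l1 regularizer is handled with its proximal operator,
    which is coordinatewise soft-thresholding.
    """
    n, p = X.shape
    # Step size 1/L, where L = ||X||_2^2 / n bounds the gradient's Lipschitz constant
    step = n / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n                 # gradient of the loss
        z = w - step * grad                          # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w

# Illustrative usage: recover a sparse vector from noisy linear measurements
rng = np.random.default_rng(0)
n, p = 100, 20
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.5, 1.0]
X = rng.standard_normal((n, p))
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = lasso_ista(X, y, lam=0.05)
```

The split between smooth loss and structured regularizer in the code mirrors the decomposition the framework analyzes; swapping the soft-thresholding step for another proximal operator yields estimators for other structures (e.g., low-rank matrices via singular-value thresholding).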