We propose an image representation scheme combining the local and nonlocal characterization of patches in an image. Our representation scheme is shown to be equivalent to a tight frame constructed from convolving local bases (e.g. wavelet frames, discrete cosine transforms) with nonlocal bases (e.g. a spectral basis derived from a nonlinear embedding of patches), and we call the resulting frame elements convolution framelets. Insight gained from analyzing the proposed representation leads to a novel interpretation of a recent high-performance patch-based image inpainting algorithm, the Low Dimensional Manifold Model (LDMM). In particular, we show that LDMM is a weighted $\ell_2$-regularization on the coefficients obtained by decomposing images into linear combinations of convolution framelets; we extend the original LDMM to a reweighted version that further improves inpainting results. Our framework can potentially be generalized to interpret more complex image processing algorithms.
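For illustration only (the abstract fixes no notation, so the symbols below are placeholders rather than the paper's definitions), the decomposition and regularization described above can be sketched as follows: with $\{u_i\}$ playing the role of a nonlocal basis and $\{v_j\}$ a local basis, an image $f$ is expanded in convolution framelets $u_i \circledast v_j$,
\[
f \;=\; \sum_{i,j} \alpha_{ij}\,\big(u_i \circledast v_j\big),
\qquad
R(f) \;=\; \sum_{i,j} \gamma_{ij}\,\alpha_{ij}^2,
\]
where $R(f)$ is an LDMM-type regularizer read as a weighted $\ell_2$-penalty on the framelet coefficients $\alpha_{ij}$ with nonnegative weights $\gamma_{ij}$; the reweighted variant can be understood, in this schematic picture, as updating the weights $\gamma_{ij}$ across iterations.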