We will describe a primal-dual method based on the proximal augmented Lagrangian for nonsmooth composite optimization problems. After introducing an auxiliary variable, we utilize the proximal operator of the nonsmooth regularizer to transform the associated augmented Lagrangian into a function that is once, but not twice, continuously differentiable. Saddle points of this function, which we call the proximal augmented Lagrangian, correspond to solutions of the original optimization problem. This function is used to develop a customized algorithm based on the primal-dual gradient flow dynamics. When the differentiable component of the objective function is strongly convex with a Lipschitz continuous gradient, we employ the theory of integral quadratic constraints to prove global exponential stability of these dynamics. We close the talk by discussing the classes of problems that are amenable to distributed optimization and by comparing the developed method to state-of-the-art alternatives.
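To make the construction concrete, the following is a minimal sketch of the primal-dual gradient flow on a proximal augmented Lagrangian, under illustrative assumptions not fixed by the abstract: the composite problem is taken to be the lasso, minimize (1/2)||Ax - b||^2 + lam*||x||_1, the auxiliary variable equals x itself (identity coupling), and the continuous-time dynamics are discretized by a simple forward-Euler step. The function names (`soft_threshold`, `pal_flow`) and parameter choices (`mu`, `step`, `iters`) are hypothetical, not taken from the talk.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (the nonsmooth regularizer's prox).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pal_flow(A, b, lam, mu=1.0, step=2e-3, iters=30000):
    """Forward-Euler discretization of primal-dual gradient flow on a
    proximal augmented Lagrangian for
        minimize (1/2)||Ax - b||^2 + lam*||x||_1,
    with identity coupling between x and the auxiliary variable.
    The Moreau envelope of g = lam*||.||_1 makes the Lagrangian once
    (but not twice) continuously differentiable in (x, y).
    """
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)
    for _ in range(iters):
        v = x + mu * y
        p = soft_threshold(v, mu * lam)       # prox_{mu*g}(v)
        grad_M = (v - p) / mu                 # gradient of the Moreau envelope
        grad_x = A.T @ (A @ x - b) + grad_M   # partial derivative in x
        grad_y = mu * (grad_M - y)            # partial derivative in y
        x = x - step * grad_x                 # gradient descent on the primal
        y = y + step * grad_y                 # gradient ascent on the dual
    return x, y
```

When A has full column rank the smooth term is strongly convex with Lipschitz gradient, which is the regime in which the abstract's exponential-stability result applies; at a saddle point, x is a minimizer of the original composite problem and y recovers a subgradient of the regularizer.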