We study the optimal convergence rates for distributed convex optimization problems over networks, where the objective is to minimize the sum of the network agents' local objective functions. We model the communication restrictions imposed by the network as affine constraints and provide optimal complexity bounds for several cases, depending on the properties of the agents' objective functions. Our approach uses the dual of an appropriately formulated primal problem. We show that Nesterov's accelerated gradient method for the dual problem can be applied in a distributed manner and that it enjoys the same optimal rates as in the centralized case, with an additional cost related to the spectral gap of the interaction matrix.
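To make the mechanism concrete, the following is a minimal sketch, not the paper's implementation, of accelerated gradient ascent on the dual of a consensus-constrained problem. It assumes quadratic local objectives f_i(x) = ½‖x − b_i‖² (so each local primal update is closed form) and a ring-graph Laplacian as the interaction matrix W; the names `primal_solve`, `L_dual`, and the momentum schedule are illustrative choices, not taken from the source.

```python
import numpy as np

# Illustrative setup: agent i holds f_i(x) = 0.5 * ||x - b_i||^2, so the
# global minimizer is the average of the b_i. Consensus is enforced through
# the affine constraint W x = 0, with W the Laplacian of the network graph.

n, d = 5, 3                      # number of agents, decision-variable dimension
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))      # local data held by each agent

# Laplacian of a ring graph: its nonzero pattern matches the communication links.
W = 2 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] -= 1
    W[i, (i - 1) % n] -= 1

mu = 1.0                                        # strong-convexity constant of each f_i
L_dual = np.linalg.eigvalsh(W).max() ** 2 / mu  # smoothness constant of the dual

def primal_solve(lam):
    """Each agent solves argmin_x f_i(x) + <(W lam)_i, x> locally.

    For the quadratic f_i this is closed form: x_i = b_i - (W lam)_i.
    Multiplying by W only mixes neighbors' dual variables, so the step
    needs just one round of communication over the network links.
    """
    return b - W @ lam

# Nesterov's accelerated gradient ascent on the (concave) dual function.
lam = np.zeros((n, d))
lam_prev = lam.copy()
for k in range(200):
    y = lam + (k / (k + 3)) * (lam - lam_prev)  # momentum extrapolation
    x = primal_solve(y)                         # local primal updates
    grad = W @ x                                # dual gradient = constraint residual
    lam_prev, lam = lam, y + grad / L_dual      # ascent step on the dual

print("consensus error:", np.abs(primal_solve(lam) - b.mean(axis=0)).max())
```

Each iteration costs one local minimization per agent plus neighbor-to-neighbor exchanges through W, which is why the dual smoothness constant, and hence the rate, picks up a factor tied to the spectrum of the interaction matrix.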