We consider the problem of estimating high-dimensional and nonparametric distributions in distributed networks, where each node observes an independent sample from the underlying distribution and has $k$ bits to communicate its observation to a centralized estimator. We obtain matching upper and lower bounds on the minimax risk of estimating the underlying distribution under $L_1$ loss. Our results reveal that the minimax risk decreases exponentially in $k$. Instead of relying on strong data processing inequalities for the converse, as is commonly done in the literature, we propose a new representation of the communication constraint. This approach circumvents the need for strong data processing inequalities and leads to a stronger lower bound, yielding a tight characterization of the problem.