The problem addressed in this talk is that of learning an unknown network from nodal observations, which are modeled as graph signals generated by linear diffusion dynamics that depend on the topology of the sought graph. We assume that the observed diffused signals can be modeled as the output of a linear graph filter applied to a set of independent input graph signals with arbitrarily correlated components. In this context, we first rely on observations of the output signals, along with prior statistical information on the inputs, to identify the diffusion filter. Critical to this end is the formulation of the problem as the solution of a system of quadratic equations. Then, we leverage the fact that linear graph filters are polynomials of the so-called graph-shift operator (a matrix representation of the network topology) to recover the graph via a convex optimization problem. We address the case of undirected networks before extending the approach to directed networks. The resultant problem can be recast as a sparse recovery problem with either Boolean constraints (for the undirected case) or manifold constraints (for the directed case). Numerical tests corroborating the effectiveness of the proposed algorithms in recovering synthetic and real-world directed graphs are provided.
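To make the setup concrete, the following is a minimal sketch (not the talk's actual algorithm) of the generative model described above: a symmetric adjacency matrix stands in for the graph-shift operator S, a linear graph filter is built as a polynomial of S with hypothetical coefficients h, and a diffused signal is obtained by applying the filter to an input signal. The key structural fact exploited for recovery, that any polynomial of S commutes with S and therefore shares its eigenvectors, is checked at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Hypothetical undirected graph: a random symmetric adjacency matrix
# serves as the graph-shift operator S.
A = (rng.random((n, n)) < 0.3).astype(float)
S = np.triu(A, 1)
S = S + S.T

# Linear graph filter: a polynomial of the shift operator,
# H = h_0 * I + h_1 * S + h_2 * S^2, with illustrative coefficients.
h = [1.0, 0.5, 0.2]
H = sum(c * np.linalg.matrix_power(S, l) for l, c in enumerate(h))

# Diffused observation: the filter applied to an input graph signal.
x = rng.standard_normal(n)
y = H @ x

# Because H is a polynomial of S, the two matrices commute,
# so they are simultaneously diagonalizable (shared eigenvectors).
print(np.allclose(H @ S, S @ H))
```

Under this model, second-order statistics of the diffused signals reveal the eigenvectors of S, which is what allows the topology-recovery step to be posed as a convex (sparse recovery) problem over the remaining degrees of freedom.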