In this paper we study the following mechanism design problem: in a system of networks, each network holds beliefs about the others, while the ground truth about any given network is at best known only to that network itself. The question is how to construct incentive mechanisms under which these networks will participate in a collective effort, contributing their information without violating self-interest, so that a more accurate assessment can be obtained. We introduce a number of utility models that capture a network's desire to maintain security and/or reachability, and examine whether networks can be incentivized to provide data in this context. For each model, we either design a mechanism that achieves optimal performance (the solution to the corresponding centralized problem), or present incentive-compatible sub-optimal solutions.