Decentralized Detection

In decentralized detection systems, decentralized processing of information is necessary because of the cost of transmitting large amounts of data and the limited battery life of the sensors. A distributed detection network consists of a number of sensors that perform a certain transformation on their observations and transmit the output to a fusion center, at which the final decision about the hypothesis is declared. Research in decentralized detection focuses on optimizing the local sensors' decision rules as well as the fusion center's decision rule in order to optimize a certain performance criterion (e.g., probability of error, probability of missed detection) subject to certain constraints (e.g., energy, communication rate, probability of false alarm).

Parallel Configuration
There are $M$ hypotheses $H_0,...,H_{M-1}$, with positive prior probabilities $\pi_0,...,\pi_{M-1}$, respectively. We view the true hypothesis as a random variable $H$ which takes the value $H_j$ with probability $\pi_j, \,\ j=0,...,M-1$. There are $N$ sensors $S_1,...,S_N$, and a fusion center $S_0$. Each sensor $S_i$ receives an observation $Y_i$ which is a random variable taking values in a set $\mathcal{Y}_i$. We assume that the joint probability distribution of $(Y_1,..., Y_N)$ conditioned on $H_j$ is known for each $j$.

Each sensor $S_i$, $i \neq 0$, upon receiving a realization $y_i$ of the random variable $Y_i$, evaluates a message $u_i=\gamma_i(y_i) \in \{1,...,D\}$, and sends it to the fusion center. Here, $\gamma_i : \mathcal{Y}_i \rightarrow \{1,...,D\}$ is a function that will be referred to as the {\itshape decision rule} of sensor $S_i$, $i \neq 0$. The fusion center receives the messages $u_1,...,u_N$, and makes a final decision $u_0 = \gamma_0 (u_1,...,u_N) \in \{0,...,M-1\}$. Here, $\gamma_0: \{1,...,D \}^N \rightarrow \{0,...,M-1\}$ is a function that will be referred to as the {\itshape decision rule} of the fusion center or, alternatively, as the {\itshape fusion rule}. \smallbreak
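The parallel configuration described above can be sketched in code. The following is a minimal illustration, not a construction from the text: it assumes a binary hypothesis ($M = 2$), $N = 5$ sensors, binary messages ($D = 2$), Gaussian observations that are unit-variance with mean $0$ under $H_0$ and mean $1$ under $H_1$, threshold decision rules at the sensors, and a majority-vote fusion rule. All parameter values here are arbitrary choices for illustration.

```python
import random

# Illustrative parallel configuration (assumed, not from the text):
# M = 2 hypotheses, N = 5 sensors, D = 2 messages, and observations
# Y_i ~ N(0, 1) under H_0 and Y_i ~ N(1, 1) under H_1.

N = 5

def gamma_i(y_i):
    """Local decision rule of sensor S_i: a simple threshold quantizer."""
    return 1 if y_i > 0.5 else 0

def gamma_0(messages):
    """Fusion rule gamma_0: majority vote over the messages u_1,...,u_N."""
    return 1 if sum(messages) > len(messages) / 2 else 0

def run_once(h, rng):
    """Draw observations under hypothesis H_h, form the messages, and fuse."""
    mean = 0.0 if h == 0 else 1.0
    ys = [rng.gauss(mean, 1.0) for _ in range(N)]  # realizations y_1,...,y_N
    us = [gamma_i(y) for y in ys]                  # messages u_1,...,u_N
    return gamma_0(us)                             # final decision u_0

# Example: one trial under H_1.
u0 = run_once(1, random.Random(0))
```

Note that the threshold and the majority rule are placeholders: optimizing $\gamma_1,...,\gamma_N$ and $\gamma_0$ jointly is precisely the design problem described in this section.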

In the Bayesian formulation, we are given a cost function $C\,\,: \,\ \{0,...,M-1\} \times \{1,...,D\}^N \times \{H_0,...,H_{M-1} \} \rightarrow \Re$, with $C(u_0,u_1,...,u_N,H_j)$ representing the cost associated with a fusion center decision $u_0$ and messages $u_1,..., u_N$ when $H_j$ is the true hypothesis. A collection $\gamma = (\gamma_0,\gamma_1,...,\gamma_N)$ of decision rules is called a {\itshape strategy}, and we let $\Gamma$ denote the set of all strategies. For any strategy $\gamma \in \Gamma$, its cost $J(\gamma)$ is defined by \begin{equation*} J(\gamma) = E [C(U_0,U_1,...,U_N,H)] \end{equation*} where $U_0,...,U_N$ are the random variables defined by $U_i = \gamma_i(Y_i)$, $i\neq 0$, and $U_0 = \gamma_0(U_1,...,U_N)$. An equivalent expression for $J(\gamma)$, in which the dependence on $\gamma$ is more apparent, is \begin{equation*} J(\gamma)= \sum_{j=0}^{M-1} \pi_j E_j[C \bigl(\gamma_0(\gamma_1(Y_1),...,\gamma_N(Y_N)),\gamma_1(Y_1),...,\gamma_N(Y_N),H_j \bigr)] \end{equation*} where $E_j$ denotes expectation conditioned on $H_j$ being the true hypothesis. The most common choice of cost function is the probability of error, in which $C(u_0,u_1,...,u_N,H_j)$ is equal to one when $u_0 \neq j$, and zero otherwise. Thus, we incur a unit penalty when the fusion center chooses an incorrect hypothesis and, therefore, the objective is the minimization of the error probability.\smallbreak
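For a small discrete example, the cost $J(\gamma)$ under the 0-1 cost (probability of error) can be computed exactly by summing over hypotheses and observation tuples, as in the second expression above. The sketch below assumes $M = 2$ hypotheses with uniform priors, $N = 2$ sensors with a three-letter observation alphabet, conditionally independent observations given $H$, and specific threshold and AND-type rules; all of these choices are illustrative assumptions, not taken from the text.

```python
from itertools import product

# Exact evaluation of J(gamma) under the 0-1 cost, for an assumed toy
# model: M = 2, N = 2, observations in {0, 1, 2}, conditionally
# independent given H. Priors and pmfs below are illustrative.

priors = [0.5, 0.5]                      # pi_0, pi_1

# P(Y_i = y | H_j) for y in {0, 1, 2}.
pmf = {0: [0.6, 0.3, 0.1],               # under H_0
       1: [0.1, 0.3, 0.6]}               # under H_1

def gamma_i(y):
    """Local rule: send message 1 only for the largest observation."""
    return 1 if y >= 2 else 0

def gamma_0(u1, u2):
    """Fusion rule: declare H_1 only if both messages equal 1 (AND rule)."""
    return 1 if u1 == 1 and u2 == 1 else 0

def J():
    """J(gamma) = sum_j pi_j * P(u_0 != j | H_j), computed by enumeration."""
    err = 0.0
    for j in (0, 1):
        for y1, y2 in product(range(3), repeat=2):
            p = priors[j] * pmf[j][y1] * pmf[j][y2]
            u0 = gamma_0(gamma_i(y1), gamma_i(y2))
            if u0 != j:                  # 0-1 cost: C = 1 iff u_0 != j
                err += p
    return err
```

For this particular strategy, $P(u_i = 1 \mid H_0) = 0.1$ and $P(u_i = 1 \mid H_1) = 0.6$, so the AND rule yields $J(\gamma) = 0.5 \cdot 0.01 + 0.5 \cdot 0.64 = 0.325$; searching over the (finitely many) local rules and fusion rules would recover the optimal strategy for this toy model.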