Neural networks and coding theory

Coding theory in neuroscience can be viewed from different perspectives:

Channel Coding and Neural Networks
There are a few different types of neural networks that accomplish the same task as error-correcting codes, namely, retrieving correct information from noisy or corrupted input data. The most widely known such neural network is the neural associative memory (also known as content-addressable memory).

While in channel coding it is the receiver that attempts to decode what was sent by the transmitter over a noisy channel, in neural and neuronal networks it is usually the network itself that has memorized a number of patterns in advance and now tries to recover the correct memorized pattern from a given noisy query.
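
This retrieval process can be made concrete with a Hopfield network, the classical example of a neural associative memory. The sketch below is illustrative only; the pattern length, number of stored patterns, and noise level are assumptions, not values from the text.

```python
import numpy as np

def train_hopfield(patterns):
    """Memorize +/-1 patterns (one per row) via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, query, max_steps=20):
    """Recover a memorized pattern from a noisy query by repeatedly
    integrating inputs and applying a threshold (sign) function."""
    state = query.copy().astype(float)
    for _ in range(max_steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1.0  # break ties
        if np.array_equal(new_state, state):
            break  # reached a fixed point
        state = new_state
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 64))  # memorize two random patterns
W = train_hopfield(patterns)

noisy = patterns[0].copy()
noisy[rng.choice(64, size=8, replace=False)] *= -1  # corrupt 8 of 64 bits

recovered = recall(W, noisy)
print("bits still wrong:", int(np.sum(recovered != patterns[0])))  # usually 0
```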

Although they share the same goal, channel coding methods and neural associative memories accomplish information retrieval in radically different ways. The difference stems from several factors, including:
 * The simplistic nature of neurons: In contrast to decoding nodes, such as those in the decoding graph of low-density parity-check (LDPC) codes, neurons are incapable of performing complex operations. All a neuron can do is integrate the messages it receives over its input links and apply a threshold function to the result before sending feedback to its neighbors (see the sketches after this list).
 * Real-field operations: In most channel coding techniques (lattice codes being an exception), operations are performed in $GF(2)$. In contrast, neurons operate over the real field (as expected!).
 * Broadcast nature of neural communication: In codes on graphs, one often uses belief propagation and message passing to retrieve the correct transmitted codeword, and in such cases the variable and parity-check nodes can transmit a different message over each of their output links. In a neural network, by contrast, whatever a neuron transmits goes to all of its neighbors.
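
To make these contrasts concrete, the sketch below places a neuron's entire repertoire (a real-valued weighted sum followed by a threshold, broadcast to all neighbors) next to a parity-check node's $GF(2)$ sum. The weights, threshold, and bit values are illustrative assumptions, not taken from the text.

```python
import numpy as np

def neuron_update(inputs, weights, threshold=0.0):
    """A neuron's whole repertoire: integrate real-valued messages over
    its input links, then apply a threshold. The single resulting value
    is broadcast identically to every neighbor."""
    return 1 if np.dot(weights, inputs) >= threshold else -1

def check_node(bits):
    """A parity-check node in an LDPC-style decoder: a GF(2) sum (XOR)
    of its neighboring variable nodes. In belief propagation, such a
    node can also send a different message over each of its edges."""
    return int(np.sum(bits) % 2)

print(neuron_update(np.array([0.4, -1.2, 0.7]), np.array([1.0, 0.5, -2.0])))  # -1
print(check_node(np.array([1, 0, 1, 1])))  # 1: parity check unsatisfied
```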
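For comparison with the associative recall shown earlier, the following sketches how a channel decoder retrieves a codeword over $GF(2)$: a Gallager-style bit-flipping decoder in which check nodes compute parities and bits flip when most of their checks fail. The specific code (the (7,4) Hamming code) and the single-bit error are illustrative choices, not taken from the text.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (an illustrative choice).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(received, H, max_iters=10):
    """Bit flipping over GF(2): each check node computes the parity of
    its neighbors; the bit participating in the most unsatisfied checks
    is flipped, until all parity checks are satisfied."""
    word = received.copy()
    for _ in range(max_iters):
        syndrome = (H @ word) % 2          # unsatisfied checks (GF(2) sums)
        if not syndrome.any():
            return word                    # all parity checks satisfied
        unsatisfied = H.T @ syndrome       # per-bit count of failed checks
        word[np.argmax(unsatisfied)] ^= 1  # flip the worst offender
    return word

codeword = np.zeros(7, dtype=int)  # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                   # the channel flips one bit
print(bit_flip_decode(received, H))  # [0 0 0 0 0 0 0]: codeword recovered
```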