Sparsity in nature

Most signals in nature exhibit a high level of sparsity in the information they carry. In other words, because of clear or hidden patterns, a signal typically contains much less information than its raw representation suggests.

Conventional signal sampling in traditional systems follows the Nyquist rate and Shannon's celebrated source coding theorem. In the first step, the signal is sampled at a rate satisfying the Nyquist criterion to avoid aliasing: the sampling rate must be at least twice the highest frequency contained in the signal,

$f_N = 2B$

where $f_N$ is the sampling rate and $B$ is the highest frequency at which the signal can have nonzero energy. For baseband signals, $B$ is the highest positive frequency; for bandpass signals, it is the difference between the lowest and highest frequencies.
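The effect of violating the criterion can be sketched with a small helper (the function name and the example tone frequencies are illustrative, not from the original text): a tone sampled below the Nyquist rate folds into the interval $[0, f_s/2]$ and appears at a lower, aliased frequency.

```python
def alias_frequency(f_signal, f_s):
    """Apparent frequency of a pure tone of frequency f_signal after
    sampling at rate f_s: the spectrum repeats every f_s, and real
    sampling folds it into the Nyquist interval [0, f_s/2]."""
    f = f_signal % f_s          # periodic repetition of the spectrum
    return min(f, f_s - f)      # folding into [0, f_s/2]

# A 3 kHz tone sampled at 8 kHz (above the 6 kHz Nyquist rate) is preserved:
assert alias_frequency(3000, 8000) == 3000
# The same tone sampled at 4 kHz (below Nyquist) aliases down to 1 kHz:
assert alias_frequency(3000, 4000) == 1000
```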

In the second step, the signal is compressed with a source coding scheme at a rate governed by Shannon's theorem before transmission or storage. For i.i.d. samples, the compression rate satisfies

$\frac{H(X)}{\log_2 a} \leq \mathrm{E}[S] \leq \frac{H(X)}{\log_2 a} + 1$

where each sample $X$ is a random variable taking values in a finite alphabet of size $a$, $H(X)$ is the entropy of $X$, and $\mathrm{E}[S]$ is the coding rate (expected codeword length per sample).
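The bound can be checked numerically. Below is a minimal sketch, assuming a hypothetical four-symbol source and a binary code alphabet ($a = 2$); for this dyadic distribution, Huffman codeword lengths of 1, 2, 3, 3 bits meet the entropy lower bound exactly.

```python
import math

def entropy(probs, base=2):
    """Shannon entropy H(X) of a distribution, in log-base `base` units."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Hypothetical i.i.d. source over 4 symbols, binary code alphabet (a = 2):
probs = [0.5, 0.25, 0.125, 0.125]
H = entropy(probs)                                   # 1.75 bits/sample

# Huffman lengths for this dyadic distribution are 1, 2, 3, 3 bits:
expected_length = sum(p * l for p, l in zip(probs, [1, 2, 3, 3]))

assert abs(H - 1.75) < 1e-12
assert H <= expected_length <= H + 1                 # Shannon's bound holds
```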

Since most signals in nature contain repeated hidden patterns, the assumption of i.i.d. samples is not realistic. Many different techniques have been developed to exploit this sparsity in speech, image, and video signals and to compress the data as much as possible. More recently, compressed sensing was introduced to perform the compression at the sensing stage. The main idea rests on the fact that a small collection of linear projections of a sparse signal contains enough information for reconstruction.
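This idea can be illustrated with a small NumPy sketch (dimensions, the random projection matrix, and the choice of recovery algorithm are all illustrative assumptions, not from the original text): a 3-sparse signal of length 128 is recovered from only 40 random linear measurements using orthogonal matching pursuit, one standard greedy reconstruction method.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 128, 40, 3                    # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random projection matrix
y = A @ x                                     # m << n linear measurements

# Orthogonal matching pursuit: greedily add the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
assert np.linalg.norm(x_hat - x) < 1e-8   # recovered from 40 of 128 samples
```

The measurement count here is far below the 128 samples a direct representation would need, which is the essential promise of compressed sensing for sparse signals.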

The following are some examples of sparse signals in nature.

1. Sparsity in speech signals

2. Sparsity in still-image and video signals

3. Sparsity in earthquake signals

4. Sparsity in solar waves

5. Sparsity in bats' ultrasound navigation signals