# Sonar signal processing

Whether the sonar is active or passive, the information contained in a received acoustic signal cannot be collected and used directly: the raw data must first pass through technical signal processing. To extract useful information from the mixed signal, several steps carry the sonar data from raw acoustic reception to a detection output. For an active sonar, the signal processing system involves six such steps.

## Signal generation

To generate a signal pulse in hardware, typical analog implementations are oscillators and voltage-controlled oscillators (VCOs) followed by modulators. Amplitude modulation is used to weight the pulse envelope and to translate the signal spectrum up to a carrier frequency suitable for transmission.
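A minimal digital sketch of such pulse generation, assuming a Hann-shaped envelope and illustrative parameter values (a 50 kHz carrier and a 2 ms pulse):

```python
import math

def cw_pulse(f_carrier, duration, fs):
    """Generate a Hann-weighted continuous-wave (CW) pulse.

    The Hann window shapes the pulse envelope (amplitude modulation),
    and the cosine carrier translates the spectrum up to f_carrier.
    """
    n_samples = round(duration * fs)
    pulse = []
    for n in range(n_samples):
        t = n / fs
        # Hann envelope: zero at the pulse edges, unity at the centre.
        envelope = 0.5 * (1.0 - math.cos(2.0 * math.pi * n / (n_samples - 1)))
        pulse.append(envelope * math.cos(2.0 * math.pi * f_carrier * t))
    return pulse

# Example: a 2 ms pulse at a 50 kHz carrier, sampled at 500 kHz.
p = cw_pulse(50e3, 2e-3, 500e3)
```

The envelope choice is only one option; real systems pick the weighting to trade off peak power against spectral sidelobes.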

In a sonar system, the acoustic pressure field can be represented as ${\displaystyle s(t,{\vec {r}})}$, a function of four variables: time ${\displaystyle t}$ and the spatial coordinates ${\displaystyle {\vec {r}}=(x,y,z)}$. By the Fourier transform, in the frequency domain

${\displaystyle {}s(w,{\vec {k}})=\iiiint \limits \,s(t,{\vec {r}})\cdot e^{-j(wt-{\vec {k}}{\vec {r}})}d{\vec {r}}dt,}$

${\displaystyle {\vec {k}}=(k_{x},k_{y},k_{z}),}$

${\displaystyle {}s(t,{\vec {r}})=\iiiint \limits \,s(w,{\vec {k}})\cdot e^{j(wt-{\vec {k}}{\vec {r}})}d{\vec {k}}dw,}$

In these formulas ${\displaystyle w}$ is the temporal frequency and ${\displaystyle {\vec {k}}}$ the spatial frequency. The function ${\displaystyle s(t,{\vec {r}})=e^{j(wt-{\vec {k}}{\vec {r}})}}$ is often called an elemental signal, because any 4-D signal can be generated by taking a linear combination of elemental signals, as the inverse transform above shows. The direction of ${\displaystyle {\vec {k}}}$ gives the direction of propagation of the waves, and the speed of the waves is

${\displaystyle v={\frac {w}{|{\vec {k}}|}}}$

The wavelength is

${\displaystyle \lambda ={\frac {2\pi }{|{\vec {k}}|}}}$
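These two relations can be checked numerically; a short sketch assuming a nominal seawater sound speed of about 1500 m/s (an illustrative value, not from the text above):

```python
import math

def wave_speed(w, k):
    """Propagation speed v = w / |k| of an elemental plane wave."""
    k_mag = math.sqrt(sum(c * c for c in k))
    return w / k_mag

def wavelength(k):
    """Wavelength lambda = 2*pi / |k|."""
    k_mag = math.sqrt(sum(c * c for c in k))
    return 2.0 * math.pi / k_mag

# Example: a 50 kHz wave travelling in the x direction at roughly the
# speed of sound in seawater (~1500 m/s), so |k| = w / v.
w = 2.0 * math.pi * 50e3
k = (w / 1500.0, 0.0, 0.0)
```

Here `wave_speed(w, k)` recovers 1500 m/s and `wavelength(k)` gives 1500/50000 = 0.03 m.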

## Temporal sampling

In the modern world, digital computers contribute greatly to the speed and efficiency of data analysis. It is therefore necessary to convert the analog signal into a digital one by sampling it in the time domain. The operation can be realized by devices such as a dynamic range controller and an analog-to-digital conversion device.

For simplicity, the sampling is done at equal time intervals. To prevent distortion (that is, aliasing in the frequency domain) when reconstructing the signal from its samples, one must sample at a sufficiently fast rate. The sampling rate that preserves the information content of an analog signal ${\displaystyle s(t,{\vec {r}})}$ is governed by the Nyquist–Shannon sampling theorem. Assuming the sampling period is T, after temporal sampling the signal is

${\displaystyle r(nT)=s({\vec {r}},nT),}$ where ${\displaystyle n}$ is an integer.
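A minimal sketch of uniform temporal sampling; the 2 kHz tone and 16 kHz sampling rate are illustrative values chosen to satisfy the Nyquist criterion:

```python
import math

def sample(signal, fs, duration):
    """Sample a continuous-time signal at equal intervals T = 1/fs."""
    T = 1.0 / fs
    return [signal(n * T) for n in range(round(duration * fs))]

# A 2 kHz tone must be sampled above the Nyquist rate of 4 kHz;
# here fs = 16 kHz gives 8 samples per cycle.
tone = lambda t: math.sin(2.0 * math.pi * 2000.0 * t)
r = sample(tone, 16000.0, 0.01)
```

Sampling below 4 kHz would alias the tone to a lower apparent frequency, which is the distortion the theorem guards against.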

## Spatial sampling and beamforming

An appropriate sensor array and beamformer are important to good sonar system performance. To infer information about the acoustic field, it is necessary to sample the field in both space and time; temporal sampling has already been discussed in the previous section. The sensor array samples the spatial domain, while the beamformer combines the sensors' outputs in a way that enhances the detection and estimation performance of the system. The input to the beamformer is a set of time series, and its output is another set of time series or a set of Fourier coefficients.

${\displaystyle r_{i}(t)=s({\vec {x}}_{i},t)}$

${\displaystyle {\vec {x}}_{i}=(x_{i},0,0)=(iD,0,0)}$

For a desired direction ${\displaystyle {\vec {k}}={\vec {k}}_{0}}$, set ${\displaystyle t_{i}={\frac {{\vec {k}}_{0}\cdot {\vec {x}}_{i}}{w}}.}$

Beamforming is a kind of filtering that can be applied to isolate signal components propagating in a particular direction. The simplest beamformer is the weighted delay-and-sum beamformer, which is realized with an array of receivers or sensors, each sampling the field in the spatial domain. After spatial sampling, each sampled signal is delayed and weighted, and the weighted signals are summed. Assume an array of M sensors distributed in space, such that the ${\displaystyle i}$th sensor is located at position ${\displaystyle x_{i}\;(i=0,1,...,M-1)}$ and the signal it receives is denoted ${\displaystyle r_{i}(t)}$. After beamforming, the signal is

${\displaystyle b(t)={\frac {1}{M}}\sum _{i=0}^{i=M-1}{w_{i}r_{i}(t-t_{i})}}$
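The weighted delay-and-sum operation above can be sketched in plain Python, with the steering delays quantized to whole samples (a real implementation would interpolate fractional delays):

```python
def delay_and_sum(channels, weights, delays):
    """Weighted delay-and-sum beamformer.

    channels[i] is the sampled time series r_i(n) from sensor i,
    delays[i] is the steering delay t_i for sensor i in whole samples,
    weights[i] is the shading weight w_i.  The output is
    b(n) = (1/M) * sum_i w_i * r_i(n - delays[i]).
    """
    M = len(channels)
    out = []
    for n in range(len(channels[0])):
        acc = 0.0
        for i in range(M):
            m = n - delays[i]
            if 0 <= m < len(channels[i]):  # zero outside the record
                acc += weights[i] * channels[i][m]
        out.append(acc / M)
    return out

# Example: a wavefront hits three sensors with a one-sample stagger;
# steering delays of [2, 1, 0] samples re-align the three arrivals.
channels = [[1.0, 2.0, 3.0, 0.0, 0.0],
            [0.0, 1.0, 2.0, 3.0, 0.0],
            [0.0, 0.0, 1.0, 2.0, 3.0]]
b = delay_and_sum(channels, [1.0, 1.0, 1.0], [2, 1, 0])
```

With the arrivals aligned, the beam output reproduces the common waveform; signals from other directions would add incoherently and be attenuated.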

## Bandshifting

Bandshifting is employed in active and passive sonar to reduce the complexity of the hardware and software required for subsequent processing. For example, in active sonars the received signal is contained in a very narrow band of frequencies, typically about 2 kHz wide, centered at some high frequency, typically about 50 kHz. To avoid having to sample the received process at the Nyquist rate of 100 kHz, it is more efficient to demodulate the process to baseband and then sample the complex envelope at only 2 kHz.
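A sketch of this demodulation in plain Python; the 8 kHz rate, 1 kHz tone, and crude moving-average lowpass are illustrative stand-ins, not a real anti-aliasing design:

```python
import cmath
import math

def baseband(signal, f0, fs, decimate):
    """Shift a narrowband signal centred at f0 down to baseband.

    Multiplying by exp(-j*2*pi*f0*t) translates the band to 0 Hz; a
    moving-average lowpass then suppresses the image at -2*f0 before
    the complex envelope is decimated to the lower rate.
    """
    mixed = [s * cmath.exp(-2j * cmath.pi * f0 * n / fs)
             for n, s in enumerate(signal)]
    smoothed = []
    for n in range(len(mixed)):
        window = mixed[max(0, n - decimate + 1):n + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed[::decimate]

# Example: a 1 kHz cosine sampled at 8 kHz demodulates to a constant
# complex envelope of 0.5 (the other 0.5 sits in the rejected image).
tone = [math.cos(2.0 * math.pi * 1000.0 * n / 8000.0) for n in range(64)]
env = baseband(tone, 1000.0, 8000.0, 4)
```

The payoff is the one described above: after basebanding, the envelope can be processed at the signal's bandwidth rather than at twice its carrier frequency.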

## Filtering and smoothing

Filters and smoothers are used extensively in modern sonar systems. After sampling, the signal has been converted from an analog signal into a discrete-time signal, so only digital filters need be considered. Moreover, although some filters are time-varying or adaptive, most are linear and shift-invariant. Digital filters used in sonar signal processors perform two major functions: filtering waveforms to modify their frequency content, and smoothing waveforms to reduce the effects of noise. The two generic types of digital filter are finite impulse response (FIR) and infinite impulse response (IIR) filters. The input-output relationship of an FIR filter is

${\displaystyle y(n)=\sum _{k=0}^{N-1}{h(k)x(n-k)}}$ (1-D)

${\displaystyle y(n_{1},n_{2})=\sum _{k_{1}=0}^{M_{1}-1}\sum _{k_{2}=0}^{M_{2}-1}{h(k_{1},k_{2})x(n_{1}-k_{1},n_{2}-k_{2})}}$ (2-D)
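A minimal sketch of the 1-D FIR difference equation, assuming zero initial conditions; the 3-tap moving average is just an illustrative choice of ${\displaystyle h}$:

```python
def fir_filter(h, x):
    """1-D FIR filter: y(n) = sum_k h(k) * x(n-k), zero initial state."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:  # samples before the record start are zero
                acc += hk * x[n - k]
        y.append(acc)
    return y

# Example: a 3-tap moving-average smoother applied to a constant input;
# the output ramps up over the first taps, then settles at the input level.
y = fir_filter([1 / 3, 1 / 3, 1 / 3], [3.0, 3.0, 3.0, 3.0])
```

Because the impulse response has finite length, the output depends only on the last ${\displaystyle N}$ inputs, which is what makes the FIR structure unconditionally stable.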

Input-output relationship of an IIR filter is

${\displaystyle y(n)=\sum _{k=0}^{N-1}{a_{k}x(n-k)}+\sum _{k=1}^{M}{b_{k}y(n-k)}}$ (1-D)

${\displaystyle y(n_{1},n_{2})=\sum _{r_{1}=0}^{N_{1}-1}\sum _{r_{2}=0}^{N_{2}-1}{a(r_{1},r_{2})x(n_{1}-r_{1},n_{2}-r_{2})}-\sum _{l_{1}=0}^{M_{1}-1}\sum _{l_{2}=0}^{M_{2}-1}{b(l_{1},l_{2})y(n_{1}-l_{1},n_{2}-l_{2})}}$ (2-D), where the ${\displaystyle (l_{1},l_{2})=(0,0)}$ term is excluded from the feedback sum.
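A minimal sketch of the 1-D IIR difference equation, with feed-forward coefficients `a` and feedback coefficients `b`; the one-pole smoother is an illustrative choice:

```python
def iir_filter(a, b, x):
    """1-D IIR filter: y(n) = sum_k a[k]*x(n-k) + sum_{k>=1} b[k]*y(n-k).

    a holds the feed-forward coefficients, b the feedback coefficients
    (b[0] is unused so the indices match the difference equation).
    """
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, ak in enumerate(a):
            if n - k >= 0:
                acc += ak * x[n - k]
        for k in range(1, len(b)):  # feedback on past outputs only
            if n - k >= 0:
                acc += b[k] * y[n - k]
        y.append(acc)
    return y

# Example: a one-pole smoother y(n) = 0.5*x(n) + 0.5*y(n-1).
# Its impulse response 0.5, 0.25, 0.125, ... decays but never ends,
# which is what "infinite impulse response" refers to.
y = iir_filter([0.5], [0.0, 0.5], [1.0, 0.0, 0.0, 0.0])
```

The feedback on past outputs is what gives the filter its infinitely long impulse response, and also what makes stability depend on the coefficient values.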

Both FIR filters and IIR filters have their advantages and disadvantages. First, the computational requirements of a sonar processor are more severe when implementing FIR filters. Second, linear phase is difficult to obtain with an IIR filter, whereas an FIR filter can have exactly linear phase; in addition, an FIR filter is always stable, while an IIR filter may not be. Finally, FIR filters are more easily designed, for example using the windowing technique.

## Decision processing

In short, the goal of sonar is to extract information from the acoustic space-time field and feed it into a prescribed decision process, so that different cases can be handled by one fixed pattern. To realize this goal, the final stage of the sonar system consists of the following functions:

1. Detection: Sonar detection determines whether a target signal is present in the received noise.
2. Classification: Sonar classification identifies the type of a detected target signal.
3. Parameter estimation and tracking: Estimation in sonar is often associated with the localization of a target that has already been detected.
4. Normalization: Normalization makes the noise-only response of the detection statistic as uniform as possible.
5. Display processing: Display processing addresses the operability and data-management problems of the sonar system.
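As an illustration of normalization followed by detection, here is a sketch in the spirit of a cell-averaging (split-window) normalizer; the window size, threshold, and data are hypothetical:

```python
def normalize_and_detect(statistic, window, threshold):
    """Cell-averaging normalizer followed by threshold detection.

    Each cell of the detection statistic is divided by the average of
    its neighbours (excluding itself), so the noise-only response is
    roughly uniform; cells whose normalized value exceeds the
    threshold are declared detections.
    """
    hits = []
    for n, cell in enumerate(statistic):
        lo, hi = max(0, n - window), min(len(statistic), n + window + 1)
        neighbours = [statistic[m] for m in range(lo, hi) if m != n]
        background = sum(neighbours) / len(neighbours)
        if cell / background > threshold:
            hits.append(n)
    return hits

# Example: uniform background noise with one strong echo in cell 5.
stat = [1.0] * 10
stat[5] = 12.0
hits = normalize_and_detect(stat, 3, 5.0)
```

Because the threshold is applied to the normalized statistic, the false-alarm behaviour stays roughly constant even when the absolute background level drifts.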