# Sensor array

A sensor array is a group of sensors deployed in a certain geometric pattern. The advantage of using a sensor array over a single sensor is that an array can increase the gain in the direction of the signal while decreasing the gain in the directions of noise and interference. In other words, a sensor array can increase the signal-to-noise ratio (SNR): magnify the signal while suppressing the noise. A sensor array can also detect the direction and distance of impinging signal sources. The technology used to achieve this is called array signal processing. Application examples of array signal processing include radar/sonar, wireless communications, seismology, machine condition monitoring and fault diagnosis, etc.

Using array signal processing, the temporal and spatial properties (or parameters) of the impinging signals contaminated with noise and hidden in the data collected by the sensor array can be estimated and revealed. This is known as parameter estimation.

Figure 1: Linear array and incident angle

Figure 1 illustrates a six-element uniform linear array. In this example, the impinging signal is assumed to originate in the far field, so it can be treated as a plane wave.

Parameter estimation takes advantage of the fact that the distance from the source to each microphone in the array is different, which means that the signals recorded by the microphones are phase-shifted replicas of each other. Eq. (1) gives the extra time the signal takes to reach each microphone in the array relative to the first microphone, where c is the speed of sound.

$\Delta t_i = \frac{(i-1)d \cos \theta}{c}, i = 1, 2, ..., M......(1)$
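Eq. (1) can be evaluated directly. The following sketch computes the per-element delays for a uniform linear array; the spacing, speed of sound, and incident angle are illustrative assumptions, not values from the text.

```python
import numpy as np

# Sketch of Eq. (1): extra travel time to each element of a uniform
# linear array, relative to the first element. All values are assumed.
M = 6                      # number of sensors
d = 0.05                   # inter-element spacing in metres (assumed)
c = 343.0                  # speed of sound in air, m/s
theta = np.deg2rad(60.0)   # incident angle (assumed)

i = np.arange(1, M + 1)
delta_t = (i - 1) * d * np.cos(theta) / c   # Eq. (1)
```

The first element has zero delay by construction; the others grow linearly with the element index.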

Each sensor is associated with a different delay. The delays are small but not trivial. In the frequency domain, the delays appear as phase shifts among the signals received by the sensors. The delays are closely related to the incident angle and the geometry of the sensor array: given the geometry of the array, the delays or phase differences can be used to estimate the incident angle. This is the mathematical basis of array signal processing. Simply summing the signals received by the sensors and calculating the mean value gives the following result:

$y = \frac{1}{M}\sum_{i=1}^{M}[x(t-\Delta t_i)]......(2)$

Because the received signals are out of phase, this mean value does not give an enhanced signal compared with the original source. Heuristically, if we can find weights that bring the received signals into phase before summing them, the mean value will give an enhanced signal:

$y = \frac{1}{M}\sum_{i=1}^{M}[w_i*x(t-\Delta t_i)]......(3)$

The process of multiplying the signals received by the sensor array by a well-chosen set of weights, so that the signal adds constructively while the noise is suppressed, is called beamforming. There is a variety of beamforming algorithms for sensor arrays, such as the delay-and-sum approach, spectrum-based (non-parametric) approaches and parametric approaches. These beamforming algorithms are briefly described as follows.

## Delay-and-Sum Beamforming

If a time delay is added to the recorded signal from each microphone that is equal and opposite of the delay caused by the extra travel time, it will result in signals that are perfectly in-phase with each other. Summing these in-phase signals will result in constructive interference that will amplify the result by the number of microphones in the array. This is known as delay-and-sum beamforming. For DOA (direction of arrival) estimation, one can iteratively test time delays for all possible directions. If the guess is wrong, the signal will destructively interfere, resulting in a diminished output signal, but the correct guess will result in the signal amplification described above.

The problem is: before the incident angle has been estimated, how can one add a time delay that is exactly equal and opposite to the delay caused by the extra travel time? It cannot be done directly. The solution is to try a series of angles $\hat{\theta} \in [0, \pi]$ at sufficiently fine resolution and calculate the resulting mean output signal of the array using Eq. (3). The trial angle that maximizes the mean output is the delay-and-sum beamformer's estimate of the DOA. Adding opposite delays to the input signals is equivalent to physically steering the sensor array; the technique is therefore also known as beam steering.
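The angle scan described above can be sketched for a narrowband source, where the delays of Eq. (1) reduce to phase shifts and the weights of Eq. (3) become steering phases. The array geometry, frequency, and source angle below are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative delay-and-sum DOA scan for a narrowband source on a
# six-element uniform linear array; all parameter values are assumed.
M, d, c, f = 6, 0.05, 343.0, 1000.0
omega = 2 * np.pi * f
true_theta = np.deg2rad(40.0)      # unknown in practice
t = np.arange(0, 0.1, 1e-4)

# Received signals: phase-shifted replicas of a complex tone (Eq. 1).
delays = np.arange(M)[:, None] * d * np.cos(true_theta) / c
x = np.exp(1j * omega * (t[None, :] - delays))

# Scan candidate angles; the steering phases cancel the assumed delays.
thetas = np.linspace(0, np.pi, 181)
power = np.empty_like(thetas)
for k, th in enumerate(thetas):
    steer = np.exp(1j * omega * np.arange(M) * d * np.cos(th) / c)
    y = (steer[:, None] * x).mean(axis=0)   # Eq. (3) with phase weights
    power[k] = np.mean(np.abs(y) ** 2)

theta_hat = thetas[np.argmax(power)]        # peak output = DOA estimate
```

When the trial angle matches the true angle, the steered signals add coherently and the output power peaks.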

## Spectrum-Based Beamforming

Delay-and-sum beamforming is a time-domain approach. It is simple to implement, but it may estimate the direction of arrival (DOA) poorly: if the signal is contaminated with strong noise, the algorithm may be difficult to apply in practice. The solution is a frequency-domain approach. The Fourier transform converts the time-domain signal to the frequency domain, turning the time delays between adjacent sensors into phase shifts. The array output vector at any time t can then be denoted as $x(t) = x_1(t)\begin{bmatrix} 1 & e^{-j\omega\Delta t} & \cdots & e^{-j\omega(M-1)\Delta t} \end{bmatrix}^T$, where $x_1(t)$ stands for the signal received by the first sensor. Frequency-domain beamforming algorithms use the spatial covariance matrix, represented by $R=E\{x(t)x^H(t)\}$. This M × M matrix carries the spatial and spectral information of the incoming signals. Assuming zero-mean Gaussian white noise, the signal-plus-noise snapshot model of the spatial covariance matrix is given by

$R = VSV^H + \sigma^2I ......(4)$

where $\sigma^2$ is the variance of the white noise, I is the identity matrix and V is the array manifold vector: $V = \begin{bmatrix} 1 & e^{-j\omega\Delta t} & \cdots & e^{-j\omega(M-1)\Delta t} \end{bmatrix}^T$. This model is of central use in frequency domain beamforming algorithms.
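As a concrete check of Eq. (4), the model covariance can be assembled numerically for a single source; the source power, noise variance, and array parameters below are assumptions for illustration.

```python
import numpy as np

# Sketch of the snapshot covariance model in Eq. (4); values assumed.
M, d, c, f = 6, 0.05, 343.0, 1000.0
omega = 2 * np.pi * f
theta = np.deg2rad(40.0)
sigma2 = 0.1                    # white-noise variance (assumed)

dt = d * np.cos(theta) / c
V = np.exp(-1j * omega * dt * np.arange(M))[:, None]  # manifold vector
S = np.array([[2.0]])                                 # source power (assumed)

R = V @ S @ V.conj().T + sigma2 * np.eye(M)           # Eq. (4)
```

R is Hermitian with one large eigenvalue (the signal) and M − 1 small eigenvalues equal to the noise variance, which is the structure the subspace methods below exploit.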

Some spectrum-based beamforming approaches are listed below.

### Conventional (Bartlett) Beamformer

The Bartlett beamformer is a natural extension of conventional spectral analysis (the spectrogram) to the sensor array. Its spectral power is represented by

$\hat{P}_{Bartlett}(\theta)=V^HRV ......(5)$

The angle that maximizes this power is an estimate of the angle of arrival.
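A minimal sketch of the Bartlett scan, evaluating Eq. (5) over candidate angles for a model covariance built from Eq. (4); the geometry, frequency, and powers are assumptions.

```python
import numpy as np

# Bartlett spectrum (Eq. 5) scanned over candidate angles. All
# parameter values are illustrative assumptions.
M, d, c, f = 6, 0.05, 343.0, 1000.0
omega = 2 * np.pi * f

def manifold(theta):
    # Array manifold vector V for a uniform linear array.
    return np.exp(-1j * omega * d * np.cos(theta) / c * np.arange(M))

true_theta = np.deg2rad(70.0)
v0 = manifold(true_theta)
R = 2.0 * np.outer(v0, v0.conj()) + 0.1 * np.eye(M)   # Eq. (4) model

thetas = np.linspace(0, np.pi, 181)
P = np.array([(manifold(th).conj() @ R @ manifold(th)).real
              for th in thetas])                       # Eq. (5)
theta_hat = thetas[np.argmax(P)]
```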

### MVDR (Capon) Beamformer

The minimum variance distortionless response (MVDR) beamformer, also known as the Capon beamforming algorithm, has power given by

$\hat{P}_{Capon}(\theta)=\frac{1}{V^HR^{-1}V} ......(6)$

Though the MVDR/Capon beamformer can achieve better resolution than the conventional/Bartlett approach, its algorithm is much more computationally intensive due to the full-rank matrix inversion. This said, advancements in GPU computing have begun to narrow this gap and make real-time Capon beamforming possible.[1]
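The Capon scan differs from the Bartlett scan only in using the inverse covariance, as in Eq. (6); the explicit full-rank inverse below is the step the text identifies as computationally costly. Parameter values are again assumptions.

```python
import numpy as np

# MVDR/Capon spectrum (Eq. 6) for a model covariance; values assumed.
M, d, c, f = 6, 0.05, 343.0, 1000.0
omega = 2 * np.pi * f

def manifold(theta):
    return np.exp(-1j * omega * d * np.cos(theta) / c * np.arange(M))

v0 = manifold(np.deg2rad(110.0))
R = 2.0 * np.outer(v0, v0.conj()) + 0.1 * np.eye(M)
Rinv = np.linalg.inv(R)          # the full-rank inversion

thetas = np.linspace(0, np.pi, 181)
P = np.array([1.0 / (manifold(th).conj() @ Rinv @ manifold(th)).real
              for th in thetas])                      # Eq. (6)
theta_hat = thetas[np.argmax(P)]
```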

### MUSIC Beamformer

The MUSIC (MUltiple SIgnal Classification) beamforming algorithm is derived from the Capon algorithm by decomposing the covariance matrix of Eq. (4) into a signal part and a noise part. The eigen-decomposition of R is represented by

$R = U_s\Lambda_s{U_s}{^H} + U_n\Lambda_n{U_n}^H ......(7)$

MUSIC uses the noise subspace of the spatial covariance matrix in the denominator of the Capon formula:

$\hat{P}_{MUSIC}(\theta)=\frac{1}{V^H(U_n{U_n}^H)V} ......(8)$

The MUSIC beamformer is therefore also known as a subspace beamformer. Compared with the Capon beamformer, it gives much better DOA estimation while avoiding the matrix inversion; the computational load is reduced significantly when the number of sensors M is large.
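The subspace projection of Eqs. (7)–(8) can be sketched as follows: eigen-decompose the model covariance, take the M − 1 eigenvectors with the smallest eigenvalues as the noise subspace (one source is assumed), and scan Eq. (8). Parameter values are illustrative assumptions.

```python
import numpy as np

# MUSIC pseudospectrum (Eq. 8): project candidate steering vectors
# onto the noise subspace of R; all parameter values are assumed.
M, d, c, f = 6, 0.05, 343.0, 1000.0
omega = 2 * np.pi * f

def manifold(theta):
    return np.exp(-1j * omega * d * np.cos(theta) / c * np.arange(M))

v0 = manifold(np.deg2rad(55.0))
R = 2.0 * np.outer(v0, v0.conj()) + 0.1 * np.eye(M)

w, U = np.linalg.eigh(R)     # eigenvalues in ascending order (Eq. 7)
Un = U[:, :M - 1]            # noise subspace (one source assumed)

thetas = np.linspace(0, np.pi, 181)
P = np.array([1.0 / np.linalg.norm(Un.conj().T @ manifold(th)) ** 2
              for th in thetas])                      # Eq. (8)
theta_hat = thetas[np.argmax(P)]
```

At the true angle the steering vector is (near-)orthogonal to the noise subspace, so the denominator collapses and the pseudospectrum peaks sharply.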

## Parametric Beamformers

One of the major advantages of the spectrum-based beamformers is their relatively light computational complexity, but they may not give accurate DOA estimates if the signals are correlated or coherent. An alternative approach is the parametric beamformers, also known as maximum likelihood (ML) beamformers. A familiar example of a maximum likelihood method in engineering is the least squares method, in which a quadratic penalty function is used. To find the minimum (the least squares solution) of the quadratic penalty function (or objective function), take its derivative (which is linear), set it equal to zero, and solve the resulting linear equations.

In ML beamformers, a quadratic penalty function is applied to the spatial covariance matrix and the signal/noise model. One example of an ML beamformer penalty function is

$L_{ML}(\theta)=\sum_{i=1}^{N} \|\hat{R}-(VSV^H)\|^2 ......(9)$

where N is the number of snapshots used to form the sample covariance matrix $\hat{R}$ and $\| \cdot \|^2$ is the squared Euclidean norm. It can be seen from Eq. (4) that minimizing the penalty function of Eq. (9) makes the noise term as small as possible, or equivalently brings the signal model as close as possible to the sample covariance matrix. In other words, the maximum likelihood beamformer finds the DOA $\theta$, the independent variable of the vector V, that minimizes a penalty function such as Eq. (9). In practice, the penalty functions used look different depending on the signal/noise model employed, but they are the same in essence. Accordingly, there are two major categories of maximum likelihood beamformers: deterministic ML beamformers and stochastic ML beamformers, corresponding to a deterministic signal model and a stochastic signal model, respectively.
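The fitting idea behind Eq. (9) can be illustrated with a toy scan: for each candidate angle, build the model covariance of Eq. (4) and measure its distance from the sample covariance. The source power and noise variance are assumed known here, which a real ML beamformer would estimate jointly; all numeric values are assumptions.

```python
import numpy as np

# Toy illustration of the Eq. (9) fitting criterion; values assumed.
M, d, c, f = 6, 0.05, 343.0, 1000.0
omega = 2 * np.pi * f
S, sigma2 = 2.0, 0.1            # assumed known for this sketch

def manifold(theta):
    return np.exp(-1j * omega * d * np.cos(theta) / c * np.arange(M))

v0 = manifold(np.deg2rad(35.0))
R_hat = S * np.outer(v0, v0.conj()) + sigma2 * np.eye(M)

def penalty(th):
    # Squared Frobenius distance between sample and model covariance.
    v = manifold(th)
    model = S * np.outer(v, v.conj()) + sigma2 * np.eye(M)
    return np.linalg.norm(R_hat - model) ** 2

thetas = np.linspace(0, np.pi, 181)
theta_hat = thetas[np.argmin([penalty(th) for th in thetas])]
```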

Another factor that changes the form of the penalty function is the desire to simplify its minimization. Although the derivative of a quadratic function is linear, the matrix and trigonometric operations involved make it highly non-linear. To simplify the optimization algorithm, logarithmic operations and the probability density function (PDF) of the observations may be used in some ML beamformers.

In an ML beamformer, the optimization problem becomes finding the roots of the equation obtained by setting the derivative of the penalty function to zero. Because the equation is highly non-linear, a numerical search approach such as the Newton–Raphson method is usually used. The Newton–Raphson method is an iterative root-finding method given by

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}......(10)$

The search starts from an initial guess $x_0$. If the Newton–Raphson search method is employed to minimize the penalty function in beamforming, the resulting beamformer is called a Newton ML beamformer. Several of the best-known ML beamformers are described below without formulas, due to the complexity of their expressions.
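The iteration of Eq. (10) is easy to state in code. In an ML beamformer, f would be the derivative of the penalty function; here a simple scalar example (finding the root of x² − 2) stands in for it.

```python
# Minimal Newton-Raphson iteration (Eq. 10) on a scalar example.
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of f(x) = x^2 - 2 starting from x0 = 1, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```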

• Deterministic Maximum Likelihood Beamformer

In the deterministic maximum likelihood beamformer (DML), the noise is modeled as a stationary Gaussian white random process, while the signal waveform is modeled as deterministic (but arbitrary) and unknown.

• Stochastic Maximum Likelihood Beamformer

In the stochastic maximum likelihood beamformer (SML), the noise is modeled as a stationary Gaussian white random process (the same as in DML), whereas the signal waveform is modeled as a Gaussian random process.

• Method of Direction Estimation

The method of direction estimation (MODE) is a subspace maximum likelihood beamformer, just as MUSIC is the subspace spectrum-based beamformer. Subspace ML beamforming is obtained by eigen-decomposition of the sample covariance matrix.

## References

• H. L. Van Trees, “Optimum array processing – Part IV of detection, estimation, and modulation theory”, John Wiley, 2002
• H. Krim and M. Viberg, “Two decades of array signal processing research”, IEEE Transactions on Signal Processing Magazine, July 1996
• S. Haykin, Ed., “Array Signal Processing”, Englewood Cliffs, NJ: Prentice-Hall, 1985
• S. U. Pillai, “Array Signal Processing”, New York: Springer-Verlag, 1989
• P. Stoica and R. Moses, “Introduction to Spectral Analysis”, Prentice-Hall, Englewood Cliffs, NJ, 1997
• J. Li and P. Stoica, “Robust Adaptive Beamforming", John Wiley, 2006.
• J. Cadzow, “Multiple Source Location—The Signal Subspace Approach”, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 38, No. 7, July 1990
• G. Bienvenu and L. Kopp, “Optimality of high resolution array processing using the eigensystem approach”, IEEE Transactions on Acoustics, Speech and Signal Process, Vol. ASSP-31, pp. 1234–1248, October 1983
• I. Ziskind and M. Wax, “Maximum likelihood localization of multiple sources by alternating projection”, IEEE Transactions on Acoustics, Speech and Signal Process, Vol. ASSP-36, pp. 1553–1560, October 1988
• B. Ottersten, M. Verberg, P. Stoica, and A. Nehorai, “Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing”, Radar Array Processing, Springer-Verlag, Berlin, pp. 99–151, 1993
• M. Viberg, B. Ottersten, and T. Kailath, “Detection and estimation in sensor arrays using weighted subspace fitting”, IEEE Transactions on Signal Processing, vol. SP-39, pp. 2346–2449, November 1991
• M. Feder and E. Weinstein, “Parameter estimation of superimposed signals using the EM algorithm”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-36, pp. 447–489, April 1988
• Y. Bresler and A. Macovski, “Exact maximum likelihood parameter estimation of superimposed exponential signals in noise”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-34, pp. 1081–1089, October 1986
• R. O. Schmidt, “New mathematical tools in direction finding and spectral analysis”, Proceedings of SPIE 27th Annual Symposium, San Diego, California, August 1983