# Location estimation in sensor networks


Location estimation in wireless sensor networks is the problem of estimating the location of an object from a set of noisy measurements. These measurements are acquired in a distributed manner by a set of sensors.

## Use

Many civilian and military applications require monitoring that identifies objects in a specific area, such as monitoring the front entrance of a private house with a single camera. Monitored areas that are large relative to the objects of interest often require multiple sensors (e.g., infrared detectors) at multiple locations, monitored by a centralized observer or computer application. Power and bandwidth constraints call for efficient design of the sensing, transmission, and processing stages.

The CodeBlue system of Harvard University is an example in which a vast number of sensors distributed among hospital facilities allow staff to locate a patient in distress. In addition, the sensor array enables online recording of medical information while allowing the patient to move around. Military applications (e.g., locating an intruder in a secured area) are also good candidates for deploying a wireless sensor network.

## Setting

Let $\theta$ denote the position of interest. A set of $N$ sensors acquires measurements $x_{n}=\theta +w_{n}$ contaminated by an additive noise $w_{n}$ obeying some known or unknown probability density function (PDF). The sensors transmit their measurements to a central processor. The $n$ th sensor encodes $x_{n}$ by a function $m_{n}(x_{n})$ . The application processing the data applies a pre-defined estimation rule ${\hat {\theta }}=f(m_{1}(x_{1}),\dots ,m_{N}(x_{N}))$ . The set of message functions $m_{n},\,1\leq n\leq N$ and the fusion rule $f(m_{1}(x_{1}),\dots ,m_{N}(x_{N}))$ are designed to minimize estimation error, for example the mean squared error (MSE), $\mathbb {E} \|\theta -{\hat {\theta }}\|^{2}$ .

Ideally, sensors transmit their measurements $x_{n}$ directly to the processing center, that is $m_{n}(x_{n})=x_{n}$ . In this setting, the maximum likelihood estimator (MLE) ${\hat {\theta }}={\frac {1}{N}}\sum _{n=1}^{N}x_{n}$ is an unbiased estimator whose MSE is $\mathbb {E} \|\theta -{\hat {\theta }}\|^{2}={\text{var}}({\hat {\theta }})={\frac {\sigma ^{2}}{N}}$ assuming white Gaussian noise $w_{n}\sim {\mathcal {N}}(0,\sigma ^{2})$ . The following sections present alternative designs for sensors that are bandwidth-constrained to a 1-bit transmission, that is $m_{n}(x_{n})\in \{0,1\}$ .
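As a baseline, the unconstrained MLE above can be checked by simulation. This is a minimal sketch; the values of $\theta$, $\sigma$, $N$ and the trial count are illustrative choices, not taken from the text.

```python
import random
import statistics

# Sketch: unconstrained MLE (sample mean) under white Gaussian noise.
# theta, sigma, N and trials are illustrative values, not from the text.
random.seed(0)
theta, sigma, N, trials = 1.5, 1.0, 100, 2000

sq_errors = []
for _ in range(trials):
    x = [theta + random.gauss(0.0, sigma) for _ in range(N)]
    theta_hat = sum(x) / N          # MLE: average of the raw measurements
    sq_errors.append((theta - theta_hat) ** 2)

empirical_mse = statistics.mean(sq_errors)
print(empirical_mse, sigma**2 / N)  # empirical MSE should be close to sigma^2 / N
```

The empirical MSE concentrates around $\sigma^2/N$, the benchmark against which the bandwidth-constrained designs below are measured.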

## Known noise PDF

A Gaussian noise $w_{n}\sim {\mathcal {N}}(0,\sigma ^{2})$ system can be designed as follows:


$m_{n}(x_{n})=I(x_{n}-\tau )={\begin{cases}1&x_{n}>\tau \\0&x_{n}\leq \tau \end{cases}}$

${\hat {\theta }}=\tau -F^{-1}\left({\frac {1}{N}}\sum \limits _{n=1}^{N}m_{n}(x_{n})\right),\quad F(x)={\frac {1}{{\sqrt {2\pi }}\sigma }}\int \limits _{x}^{\infty }e^{-w^{2}/2\sigma ^{2}}\,dw$

Here $\tau$ is a parameter leveraging prior knowledge of the approximate location of $\theta$ . In this design, the random value of $m_{n}(x_{n})$ is distributed Bernoulli~$(q=F(\tau -\theta ))$ . The processing center averages the received bits to form an estimate ${\hat {q}}$ of $q$ , which is then used to find an estimate of $\theta$ . It can be verified that for the optimal (and infeasible) choice $\tau =\theta$ , the variance of this estimator is ${\frac {\pi \sigma ^{2}}{2N}}$ , which is only $\pi /2$ times the variance of the MLE without a bandwidth constraint. The variance increases as $\tau$ deviates from the true value of $\theta$ , but it can be shown that as long as $|\tau -\theta |\sim \sigma$ the factor in the MSE remains approximately 2. Choosing a suitable value for $\tau$ is a major disadvantage of this method, since the model does not assume prior knowledge of the approximate location of $\theta$ . A coarse preliminary estimation can be used to overcome this limitation; however, it requires additional hardware in each of the sensors.
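The 1-bit design above can be sketched as follows, using the standard-library `statistics.NormalDist` for the inverse Gaussian CDF. The values of $\theta$, $\sigma$, $\tau$ and $N$ are illustrative assumptions, not taken from the text.

```python
import random
from statistics import NormalDist

# Sketch of the 1-bit threshold design for known Gaussian noise.
# theta, sigma, tau and N are illustrative values, not from the text.
random.seed(1)
theta, sigma, N = 0.4, 1.0, 200_000
tau = 0.5                          # prior guess with |tau - theta| on the order of sigma

nd = NormalDist(0.0, 1.0)

def F_inv(p):
    # Inverse of the upper-tail CDF F(x) = P(w > x) = 1 - Phi(x / sigma)
    return sigma * nd.inv_cdf(1.0 - p)

# Each sensor transmits one bit: the indicator of x_n > tau
bits = [1 if theta + random.gauss(0.0, sigma) > tau else 0 for _ in range(N)]
q_hat = sum(bits) / N              # estimate of q = F(tau - theta)
theta_hat = tau - F_inv(q_hat)     # fusion rule: theta_hat = tau - F^{-1}(q_hat)
print(theta_hat)                   # close to theta
```

Since $q = F(\tau-\theta)$, inverting $F$ at $\hat q$ and subtracting from $\tau$ recovers $\theta$ up to the Bernoulli sampling error in $\hat q$.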

A system design that handles an arbitrary (but known) noise PDF has also been proposed. In this setting it is assumed that both $\theta$ and the noise $w_{n}$ are confined to some known interval $[-U,U]$ . This estimator also attains an MSE that is a constant factor times ${\frac {\sigma ^{2}}{N}}$ . In this method, the prior knowledge of $U$ replaces the parameter $\tau$ of the previous approach.

## Unknown noise parameters

A noise model may sometimes be available while the exact PDF parameters are unknown (e.g., a Gaussian PDF with unknown $\sigma$ ). One idea proposed for this setting is to use two thresholds $\tau _{1},\tau _{2}$ , such that $N/2$ sensors are designed with $m_{A}(x)=I(x-\tau _{1})$ and the other $N/2$ sensors use $m_{B}(x)=I(x-\tau _{2})$ . The processing center estimation rule is as follows:

${\hat {q}}_{1}={\frac {2}{N}}\sum \limits _{n=1}^{N/2}m_{A}(x_{n}),\quad {\hat {q}}_{2}={\frac {2}{N}}\sum \limits _{n=1+N/2}^{N}m_{B}(x_{n})$

${\hat {\theta }}={\frac {F^{-1}({\hat {q}}_{2})\tau _{1}-F^{-1}({\hat {q}}_{1})\tau _{2}}{F^{-1}({\hat {q}}_{2})-F^{-1}({\hat {q}}_{1})}},\quad F(x)={\frac {1}{\sqrt {2\pi }}}\int \limits _{x}^{\infty }e^{-v^{2}/2}\,dv$

As before, prior knowledge is necessary to set values for $\tau _{1},\tau _{2}$ so that the MSE is within a reasonable factor of the unconstrained MLE variance.
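The two-threshold fusion rule can be sketched as below. Note that $F$ is now the standard-normal upper tail (the unknown $\sigma$ cancels in the ratio). The values of $\theta$, $\sigma$, $\tau_1$, $\tau_2$ and $N$ are illustrative assumptions, not taken from the text.

```python
import random
from statistics import NormalDist

# Sketch of the two-threshold design when sigma is unknown.
# theta, sigma, tau1, tau2 and N are illustrative values, not from the text.
random.seed(2)
theta, sigma, N = 0.3, 1.5, 400_000   # sigma is unknown to the fusion center
tau1, tau2 = -0.5, 1.0

nd = NormalDist(0.0, 1.0)

def F_inv(p):
    # Inverse of the standard-normal upper tail F(x) = P(v > x)
    return nd.inv_cdf(1.0 - p)

x = [theta + random.gauss(0.0, sigma) for _ in range(N)]
bits_a = [1 if xn > tau1 else 0 for xn in x[: N // 2]]   # m_A group
bits_b = [1 if xn > tau2 else 0 for xn in x[N // 2 :]]   # m_B group
q1_hat = sum(bits_a) / (N / 2)
q2_hat = sum(bits_b) / (N / 2)

# Fusion rule: sigma cancels between numerator and denominator
theta_hat = (F_inv(q2_hat) * tau1 - F_inv(q1_hat) * tau2) / (
    F_inv(q2_hat) - F_inv(q1_hat)
)
print(theta_hat)  # close to theta, with no knowledge of sigma
```

Since $F^{-1}(q_1)=(\tau_1-\theta)/\sigma$ and $F^{-1}(q_2)=(\tau_2-\theta)/\sigma$, the ratio evaluates to $\theta$ exactly at the true $q_1,q_2$, which is why the unknown $\sigma$ drops out.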

## Unknown noise PDF

A system design also exists for the case in which the structure of the noise PDF is unknown. The following model is considered for this scenario:

$x_{n}=\theta +w_{n},\quad n=1,\dots ,N$

$\theta \in [-U,U]$

$w_{n}\in {\mathcal {P}},{\text{ that is: }}w_{n}{\text{ is bounded to }}[-U,U],\ \mathbb {E} (w_{n})=0$

In addition, the message functions are limited to the form

$m_{n}(x_{n})={\begin{cases}1&x_{n}\in S_{n}\\0&x_{n}\notin S_{n}\end{cases}}$

where each $S_{n}$ is a subset of $[-2U,2U]$ . The fusion estimator is also restricted to be linear, i.e. ${\hat {\theta }}=\sum \limits _{n=1}^{N}\alpha _{n}m_{n}(x_{n})$ .

The design should set the decision intervals $S_{n}$ and the coefficients $\alpha _{n}$ . Intuitively, one would allocate $N/2$ sensors to encode the first bit of $\theta$ by setting their decision interval to $[0,2U]$ , then $N/4$ sensors to encode the second bit with decision interval $[-U,0]\cup [U,2U]$ , and so on. It can be shown that these decision intervals, together with a corresponding set of coefficients $\alpha _{n}$ , produce a universal $\delta$ -unbiased estimator, i.e. an estimator satisfying $|\mathbb {E} (\theta -{\hat {\theta }})|<\delta$ for every possible value of $\theta \in [-U,U]$ and for every realization of $w_{n}\in {\mathcal {P}}$ . This intuitive design of the decision intervals is in fact near-optimal in the following sense: it requires $N\geq \lceil \log {\frac {8U}{\delta }}\rceil$ sensors to satisfy the universal $\delta$ -unbiased property, while theoretical arguments show that an optimal (and more complex) design of the decision intervals would require $N\geq \lceil \log {\frac {2U}{\delta }}\rceil$ ; that is, the number of sensors is nearly optimal. It is also argued that if the targeted MSE $\mathbb {E} \|\theta -{\hat {\theta }}\|^{2}\leq \epsilon ^{2}$ uses a small enough $\epsilon$ , then this design requires a factor of 4 in the number of sensors to achieve the same variance as the MLE in the unconstrained-bandwidth setting.
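The bit-allocation intuition above can be illustrated with a simplified, noiseless sketch: the $k$-th decision set of $[-2U,2U]$ is exactly the set where the $k$-th binary digit of $(x_{n}+2U)/(4U)$ equals 1, and the linear fusion weights are $\alpha =4U\cdot 2^{-k}$ (minus the shift). This is only an illustration of the decision intervals, not the exact universal $\delta$-unbiased construction; $U$, $\theta$, $K$ and the noiseless assumption are illustrative choices.

```python
# Simplified, noiseless sketch of the bit-allocation idea: group k encodes the
# k-th binary digit of (x_n + 2U) / (4U), and the fusion center reassembles
# theta linearly. Not the exact universal delta-unbiased construction;
# U, theta and K are illustrative values, not from the text.
U, theta, K = 1.0, 0.31, 12        # K bits of resolution

def bit_k(x, k):
    # k-th binary digit (k = 1, 2, ...) of t = (x + 2U)/(4U) in [0, 1];
    # equivalently: is x inside the k-th dyadic decision set of [-2U, 2U]?
    # k = 1 gives [0, 2U]; k = 2 gives [-U, 0) union [U, 2U); and so on.
    t = (x + 2 * U) / (4 * U)
    return int(t * 2**k) % 2

# Noiseless measurements: every sensor in group k reports the same bit.
bits = [bit_k(theta, k) for k in range(1, K + 1)]

# Linear fusion: theta_hat = sum_k (4U * 2^{-k}) * b_k, minus the 2U shift.
theta_hat = sum(4 * U * b * 2.0 ** (-k) for k, b in enumerate(bits, start=1)) - 2 * U
print(theta_hat)  # within 4U * 2^{-K} of theta
```

With $K$ bit groups, the reconstruction error is at most $4U\cdot 2^{-K}$ ; the role of the many sensors per group in the actual design is to make each bit reliable despite the noise $w_{n}$ .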

## Additional information

The design of the sensor array requires optimizing the power allocation as well as minimizing the communication traffic of the entire system. One suggested design incorporates probabilistic quantization in the sensors and a simple optimization program that is solved in the fusion center only once. The fusion center then broadcasts a set of parameters to the sensors that allows them to finalize the design of their message functions $m_{n}(\cdot )$ so as to meet the energy constraints. Another work employs a similar approach to address distributed detection in wireless sensor arrays.