# Siamese neural network


A Siamese neural network is an artificial neural network that uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. Often one of the output vectors is precomputed, forming a baseline against which the other output vector is compared. This is similar to comparing fingerprints, or, more technically, to a distance function for locality-sensitive hashing.

It is possible to build structures that are functionally similar to a Siamese network but implement a slightly different function. These are typically used for comparing similar instances across different type sets.

Similarity measures of this kind are used for tasks such as recognizing handwritten checks, automatically detecting faces in camera images, and matching queries against indexed documents. Perhaps the best-known application of Siamese networks is face recognition, where known images of people are precomputed and compared against an image from a turnstile or similar source. It is not obvious at first, but this covers two slightly different problems. One is recognizing a person among a large number of other persons, that is, the facial recognition problem; DeepFace is an example of such a system. In its most extreme form this is recognizing a single person at a train station or airport. The other is face verification, that is, verifying whether the photo in a passport matches the person claiming to be its holder. The Siamese network might be the same, but the implementation can be quite different.

## Learning

Learning in Siamese networks can be done with triplet loss or contrastive loss. For learning by triplet loss, a baseline vector (anchor image) is compared against a positive vector (truthy image) and a negative vector (falsy image). The negative vector forces learning in the network, while the positive vector acts like a regularizer. For learning by contrastive loss, there must be weight decay to regularize the weights, or some similar operation such as normalization.
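As a minimal sketch (the embedding values below are made up for illustration), triplet loss with a squared Euclidean distance and a margin can be computed like this:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss with squared Euclidean distance.

    Pulls the anchor embedding toward the positive embedding and
    pushes it away from the negative embedding by at least `margin`.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings; in practice these are outputs of the shared network f
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])  # same identity as the anchor
n = np.array([0.0, 1.0])  # different identity

print(triplet_loss(a, p, n))  # gap already exceeds the margin -> 0.0
```

When the negative is already farther from the anchor than the positive by more than the margin, the loss is zero and no gradient flows; otherwise the loss grows with the violation.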

A distance metric for a loss function must have the following properties:

• Non-negativity: $\delta (x,y)\geq 0$
• Identity of indiscernibles: $\delta (x,y)=0\iff x=y$
• Symmetry: $\delta (x,y)=\delta (y,x)$
• Triangle inequality: $\delta (x,z)\leq \delta (x,y)+\delta (y,z)$

In particular, the triplet loss algorithm is often defined with squared Euclidean distance at its core.
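As a quick numerical illustration (not part of the original text), the four axioms can be checked for the Euclidean distance on random sample vectors:

```python
import numpy as np

def euclidean(x, y):
    """Plain Euclidean distance between two vectors."""
    return float(np.linalg.norm(x - y))

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 4))  # three random 4-d vectors

assert euclidean(x, y) >= 0                                  # non-negativity
assert euclidean(x, x) == 0                                  # identity of indiscernibles
assert np.isclose(euclidean(x, y), euclidean(y, x))          # symmetry
assert euclidean(x, z) <= euclidean(x, y) + euclidean(y, z)  # triangle inequality
print("all four metric axioms hold for this sample")
```

Note that the *squared* Euclidean distance satisfies the first three axioms but not, in general, the triangle inequality, which is why it is strictly a similarity measure rather than a metric.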

### Predefined metrics, Euclidean distance metric

The common learning goal is to minimize a distance metric. This gives a loss function like

$$\begin{aligned}{\text{if }}i=j{\text{ then }}&\,\left\|\operatorname{f}\left(x^{(i)}\right)-\operatorname{f}\left(x^{(j)}\right)\right\|{\text{ is small}}\\{\text{otherwise }}&\,\left\|\operatorname{f}\left(x^{(i)}\right)-\operatorname{f}\left(x^{(j)}\right)\right\|{\text{ is large}}\end{aligned}$$

where $i,j$ are indexes into a set of vectors and $\operatorname{f}(\cdot)$ is the function implemented by the Siamese network.

This is the most common case, but it is also a special case: it implements a Euclidean distance metric.

In matrix form the previous is often expressed as

$$\operatorname{\delta}(\mathbf{x}^{(i)},\mathbf{x}^{(j)})\approx (\mathbf{x}^{(i)}-\mathbf{x}^{(j)})^{T}(\mathbf{x}^{(i)}-\mathbf{x}^{(j)})$$

Note that this is not the Euclidean distance itself but its square, the squared Euclidean distance.
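A small worked example (with made-up vectors) showing that the matrix form $(\mathbf{x}^{(i)}-\mathbf{x}^{(j)})^{T}(\mathbf{x}^{(i)}-\mathbf{x}^{(j)})$ equals the squared norm of the difference:

```python
import numpy as np

x_i = np.array([1.0, 2.0, 3.0])
x_j = np.array([2.0, 0.0, 3.0])

diff = x_i - x_j
squared = diff.T @ diff  # matrix form (x_i - x_j)^T (x_i - x_j)
print(squared)           # 5.0: the squared Euclidean distance

# Same value obtained by squaring the Euclidean norm
print(np.isclose(np.linalg.norm(diff) ** 2, squared))  # True
```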

### Learned metrics, nonlinear distance metric

A more general case is where the output vector from the siamese network is passed through additional network layers implementing non-linear distance metrics.

$$\begin{aligned}{\text{if }}i=j{\text{ then }}&\,\operatorname{\delta}\left[\operatorname{f}\left(x^{(i)}\right),\,\operatorname{f}\left(x^{(j)}\right)\right]{\text{ is small}}\\{\text{otherwise }}&\,\operatorname{\delta}\left[\operatorname{f}\left(x^{(i)}\right),\,\operatorname{f}\left(x^{(j)}\right)\right]{\text{ is large}}\end{aligned}$$

where $i,j$ are indexes into a set of vectors, $\operatorname{f}(\cdot)$ is the function implemented by the Siamese network, and $\operatorname{\delta}(\cdot)$ is the function implemented by the network joining the outputs of the Siamese network.

In matrix form the previous is often approximated as a Mahalanobis distance for a linear space:

$$\operatorname{\delta}(\mathbf{x}^{(i)},\mathbf{x}^{(j)})\approx (\mathbf{x}^{(i)}-\mathbf{x}^{(j)})^{T}\mathbf{M}(\mathbf{x}^{(i)}-\mathbf{x}^{(j)})$$

This can be further subdivided into at least unsupervised learning and supervised learning.
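A short sketch (with illustrative vectors and matrices) of the Mahalanobis-style form $(\mathbf{x}^{(i)}-\mathbf{x}^{(j)})^{T}\mathbf{M}(\mathbf{x}^{(i)}-\mathbf{x}^{(j)})$, showing how the choice of $\mathbf{M}$ reweights the comparison:

```python
import numpy as np

def mahalanobis_sq(x_i, x_j, M):
    """Squared Mahalanobis-style distance (x_i - x_j)^T M (x_i - x_j).

    M should be positive semi-definite for a valid (pseudo-)metric;
    M = I recovers the squared Euclidean distance.
    """
    d = x_i - x_j
    return float(d.T @ M @ d)

x_i = np.array([1.0, 0.0])
x_j = np.array([0.0, 1.0])

M = np.eye(2)                       # identity -> squared Euclidean distance
print(mahalanobis_sq(x_i, x_j, M))  # 2.0

M = np.diag([4.0, 1.0])             # weight the first dimension more heavily
print(mahalanobis_sq(x_i, x_j, M))  # 5.0
```

In metric learning, $\mathbf{M}$ is what gets learned from data rather than fixed in advance.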

### Learned metrics, half-twin networks

This form also allows the Siamese network to be more of a half-twin, implementing slightly different functions:

$$\begin{aligned}{\text{if }}i=j{\text{ then }}&\,\operatorname{\delta}\left[\operatorname{f}\left(x^{(i)}\right),\,\operatorname{g}\left(x^{(j)}\right)\right]{\text{ is small}}\\{\text{otherwise }}&\,\operatorname{\delta}\left[\operatorname{f}\left(x^{(i)}\right),\,\operatorname{g}\left(x^{(j)}\right)\right]{\text{ is large}}\end{aligned}$$

where $i,j$ are indexes into a set of vectors, $\operatorname{f}(\cdot)$ and $\operatorname{g}(\cdot)$ are the functions implemented by the half-twin network, and $\operatorname{\delta}(\cdot)$ is the function implemented by the network joining the outputs of the half-twin network.
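A hypothetical sketch of the half-twin idea (all names and shapes here are illustrative assumptions, not from the original text): two different branches $f$ and $g$ embed inputs of different types into a common space, where a joining function $\delta$ compares them.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical half-twin: f embeds 3-d inputs of one type and g embeds
# 5-d inputs of another type into the same 2-d comparison space.
W_f = rng.standard_normal((2, 3))
W_g = rng.standard_normal((2, 5))

def f(x):
    """Branch for the first input type (e.g. a query)."""
    return W_f @ x

def g(y):
    """Branch for the second input type (e.g. an indexed document)."""
    return W_g @ y

def delta(u, v):
    """Joining function: here, squared Euclidean distance."""
    d = u - v
    return float(d @ d)

x = rng.standard_normal(3)
y = rng.standard_normal(5)
print(delta(f(x), g(y)))  # a comparable scalar despite differently shaped inputs
```

This mirrors the query-document matching use case mentioned earlier: the two sides of the comparison need not share weights, only a common output space.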