Maximally stable extremal regions

In computer vision, maximally stable extremal regions (MSER) are used as a method of blob detection in images. This technique was proposed by Matas et al.[1] to find correspondences between image elements from two images with different viewpoints. This method of extracting a comprehensive number of corresponding image elements contributes to the wide-baseline matching, and it has led to better stereo matching and object recognition algorithms.

Terms and Definitions

Image $I$ is a mapping $I : D \subset \mathbb{Z}^2 \to S$. Extremal regions are well defined on images if:

1. $S$ is totally ordered, i.e. a reflexive, antisymmetric, transitive and total binary relation $\le$ exists.
2. An adjacency relation $A \subset D \times D$ is defined.

Region $Q$ is a contiguous subset of $D$. (For each $p, q \in Q$ there is a sequence $p, a_1, a_2, \dots, a_n, q$ such that $pAa_1$, $a_iAa_{i+1}$, and $a_nAq$.)

(Outer) Region Boundary $\partial Q = \{ q \in D \setminus Q: \exists p \in Q : qAp \}$, which means the boundary $\partial Q$ of $Q$ is the set of pixels adjacent to at least one pixel of $Q$ but not belonging to $Q$.

Extremal Region $Q \subset D$ is a region such that either for all $p \in Q, q \in \partial Q : I(p) > I(q)$ (maximum intensity region) or for all $p \in Q, q \in \partial Q : I(p) < I(q)$ (minimum intensity region).
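To make these definitions concrete, the following Python sketch (purely illustrative; the names and the dictionary image representation are assumptions, not from the paper) checks the extremal-region property of a pixel set under 4-adjacency:

```python
# Illustrative sketch: testing the extremal-region property on a tiny image.
# The image is a dict mapping (row, col) -> intensity; 4-adjacency is assumed.

def neighbors(p):
    r, c = p
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def outer_boundary(region, domain):
    """Pixels of the domain adjacent to the region but not inside it."""
    return {q for p in region for q in neighbors(p)
            if q in domain and q not in region}

def is_extremal(image, region):
    """True if region is a maximum- or minimum-intensity extremal region."""
    boundary = outer_boundary(region, set(image))
    inside = [image[p] for p in region]
    outside = [image[q] for q in boundary]
    if not outside:                      # region has no outer boundary
        return True
    return max(outside) < min(inside) or min(outside) > max(inside)

# A 3x3 image with a bright 2x2 block in the top-left corner.
img = {(r, c): 9 if r < 2 and c < 2 else 1 for r in range(3) for c in range(3)}
bright_block = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(is_extremal(img, bright_block))   # the block is a maximum-intensity region
```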

Maximally Stable Extremal Region Let $Q_1, \dots, Q_{i-1}, Q_i, \dots$ be a sequence of nested extremal regions ($Q_i \subset Q_{i+1}$). Extremal region $Q_{i^*}$ is maximally stable if and only if $q(i) = |Q_{i+\Delta} \setminus Q_{i-\Delta}| / |Q_i|$ has a local minimum at $i^*$. (Here $|\cdot|$ denotes cardinality.) $\Delta \in S$ is a parameter of the method.

The equation checks for regions that remain stable over a certain number of thresholds. If a region $Q_{i+\Delta}$ is not significantly larger than a region $Q_{i-\Delta}$, region $Q_i$ is taken as a maximally stable region.
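Since the regions are nested, $|Q_{i+\Delta} \setminus Q_{i-\Delta}| = |Q_{i+\Delta}| - |Q_{i-\Delta}|$, so the selection rule can be expressed directly on the areas of the sequence. A minimal sketch (illustrative only, with a made-up boundary convention at the ends of the sequence):

```python
def msers_from_areas(areas, delta):
    """Indices i where q(i) = (|Q_{i+delta}| - |Q_{i-delta}|) / |Q_i|
    has a local minimum; 'areas' lists |Q_i| for a nested region sequence."""
    n = len(areas)
    # Clamp indices at the ends of the sequence (a simplifying assumption).
    q = [(areas[min(i + delta, n - 1)] - areas[max(i - delta, 0)]) / areas[i]
         for i in range(n)]
    return [i for i in range(1, n - 1) if q[i] <= q[i - 1] and q[i] <= q[i + 1]]

# Toy nested areas: the region is stable around index 3 (area barely grows).
areas = [4, 10, 20, 21, 22, 60, 200]
print(msers_from_areas(areas, delta=1))  # [3]
```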

The concept can be explained more simply by thresholding: all the pixels below a given threshold are 'black' and all those above or equal to it are 'white'. Given a sequence of thresholded images $I_t$, with frame $t$ corresponding to threshold $t$, we would first see a white image; then 'black' spots corresponding to local intensity minima appear and grow larger; these spots eventually merge, until the whole image is black. The set of all connected components across the sequence is the set of all extremal regions. In that sense, the concept of MSER is linked to that of the component tree of the image.[2] The component tree indeed provides an easy way to implement MSER.[3]
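The threshold sweep just described can be sketched in a few lines of Python. This is an illustrative toy, not an efficient implementation (an efficient one maintains components incrementally with union-find, as discussed under Implementation):

```python
from collections import deque

def connected_components(pixels):
    """4-connected components of a set of (row, col) pixels, via BFS."""
    pixels, comps = set(pixels), []
    while pixels:
        seed = pixels.pop()
        comp, frontier = {seed}, deque([seed])
        while frontier:
            r, c = frontier.popleft()
            for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if q in pixels:
                    pixels.remove(q)
                    comp.add(q)
                    frontier.append(q)
        comps.append(comp)
    return comps

def extremal_regions_by_threshold(image):
    """For each threshold t, the 'black' components {p : I(p) < t}."""
    levels = sorted(set(image.values()))
    return {t: connected_components([p for p, v in image.items() if v < t])
            for t in levels + [max(levels) + 1]}

# Toy image: two dark spots that appear separately, then merge.
img = {(0, 0): 1, (0, 1): 5, (0, 2): 2,
       (1, 0): 5, (1, 1): 5, (1, 2): 5}
for t, comps in sorted(extremal_regions_by_threshold(img).items()):
    print(t, [sorted(c) for c in comps])
```

At the lowest threshold no pixel is black; one spot appears, then a second; at the highest threshold everything has merged into a single component.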

Extremal regions

Extremal regions in this context have two important properties: the set is closed under

1. continuous transformations of image coordinates. This means it is affine invariant, and it does not matter if the image is warped or skewed.
2. monotonic transformations of image intensities. The approach is, of course, still sensitive to natural lighting effects such as changes of daylight or moving shadows, which are not monotonic across the image.

Because the regions are defined exclusively by the intensity function in the region and on its outer border, they have many key characteristics that make them useful. Over a large range of thresholds, the local binarization is stable in certain regions, which have the properties listed below.

• Invariance to affine transformations of image intensities
• Covariance to adjacency-preserving (continuous) transformations $T : D \to D$ of the image domain
• Stability: only regions whose support is nearly the same over a range of thresholds are selected.
• Multi-scale detection without any smoothing involved, so both fine and large structures are detected.
Note, however, that detecting MSERs in a scale pyramid improves repeatability and the number of correspondences across scale changes.[4]
• The set of all extremal regions can be enumerated in worst-case $O(n)$, where $n$ is the number of pixels in the image.[5]

Comparison to other region detectors

In Mikolajczyk et al.,[6] six region detectors are studied (Harris-affine, Hessian-affine, MSER, edge-based regions, intensity extrema, and salient regions). A summary of MSER performance in comparison to the other five follows.

• Region density - in comparison to the others, MSER offers the most variety, detecting about 2600 regions for a textured blur scene and 230 for a light-changed scene; variety is generally considered to be good. MSER also had a repeatability of 92% for this test.
• Region size - MSER tended to detect many small regions, versus large regions, which are more likely to be occluded or not to cover a planar part of the scene (though large regions may be slightly easier to match).
• Viewpoint change - MSER outperforms the five other region detectors in both the original images and those with repeated texture motifs.
• Scale change - MSER comes in second under scale change and in-plane rotation, behind the Hessian-affine detector.
• Blur - MSER proved to be the most sensitive to this type of change in the image, the only area in which this type of detection is lacking.
Note, however, that this evaluation did not use multi-resolution detection, which has been shown to improve repeatability under blur.[4]
• Light change - MSER showed the highest repeatability score for this type of scene, with all the others showing good robustness as well.

MSER consistently achieved the highest scores across many of these tests, proving it to be a reliable region detector.[6]

Implementation

The original algorithm of Matas et al.[1] is $O(n\,\log(\log(n)))$ in the number $n\,$ of pixels. It proceeds by first sorting the pixels by intensity, which takes $O(n)\,$ time using bin sort. After sorting, pixels are marked in the image, and the list of growing and merging connected components and their areas is maintained using the union-find algorithm, which takes $O(n\,\log(\log(n)))$ time. In practice these steps are very fast. During this process, the area of each connected component as a function of intensity is stored, producing a data structure; a merge of two components is viewed as the termination of the smaller component and the insertion of all its pixels into the larger one. Among the extremal regions, the 'maximally stable' ones are those corresponding to thresholds where the relative area change as a function of the relative change of threshold is at a local minimum, i.e. the MSERs are the parts of the image where local binarization is stable over a large range of thresholds.[1][6]
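The sort-then-union-find process can be sketched as follows. This is a hedged, minimal illustration: the real algorithm also records each component's full area-versus-intensity history for the stability test, which this toy omits.

```python
def mser_component_areas(image):
    """Process pixels in increasing intensity; merge 4-adjacent components
    with union-find, recording the current component's area at each step."""
    parent, size = {}, {}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    history = []                            # (intensity, area after merges)
    for p in sorted(image, key=image.get):  # bin sort in the original paper
        parent[p], size[p] = p, 1
        r, c = p
        for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if q in parent:                 # neighbour already processed
                rp, rq = find(p), find(q)
                if rp != rq:                # merge smaller into larger
                    if size[rp] < size[rq]:
                        rp, rq = rq, rp
                    parent[rq] = rp
                    size[rp] += size[rq]
        history.append((image[p], size[find(p)]))
    return history

img = {(0, 0): 1, (0, 1): 2, (0, 2): 3}
print(mser_component_areas(img))  # [(1, 1), (2, 2), (3, 3)]
```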

The component tree is the set of all connected components of the thresholdings of the image, ordered by inclusion. Efficient algorithms for computing it exist (quasi-linear whatever the range of the weights),[2] so this structure offers an easy way to implement MSER.[3]

More recently, Nistér and Stewénius have proposed a truly worst-case $O(n)\,$ method (when the weights are small integers),[5] which is also much faster in practice. This algorithm is similar to that of Salembier et al.[7]

Robust wide-baseline algorithm

The purpose of this algorithm is to match MSERs to establish correspondence points between images. First, MSER regions are computed on the intensity image (MSER+) and on the inverted image (MSER-). Measurement regions are selected at multiple scales: the actual region, and its convex hull scaled by factors of 1.5, 2 and 3. Matching is accomplished in a robust manner, so it is better to increase the distinctiveness of large regions without being severely affected by clutter or by non-planarity of the region's pre-image. A measurement taken from an almost planar patch of the scene with a stable invariant description is called a 'good measurement'; unstable ones, or those lying on non-planar surfaces or discontinuities, are called 'corrupted measurements'.

The robust similarity is computed as follows: for each measurement $M_A^i$ on region $A$, the $k$ regions $B_1, \dots, B_k$ from the other image whose corresponding $i$-th measurements $M_{B_1}^i, \dots, M_{B_k}^i$ are nearest to $M_A^i$ are found, and a vote is cast suggesting correspondence of $A$ with each of $B_1, \dots, B_k$. Votes are summed over all measurements and, using probability analysis, the 'good measurements' are picked out, since 'corrupted measurements' will likely spread their votes randomly. By applying RANSAC to the centers of gravity of the regions, a rough epipolar geometry can be computed. An affine transformation between pairs of potentially corresponding regions is computed; the correspondences define it up to a rotation, which is then determined from the epipolar lines. The regions are then filtered, keeping those whose transformed images correlate above a threshold. RANSAC is applied again with a narrower threshold, and the final epipolar geometry is estimated by the eight-point algorithm.
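The voting step described above can be sketched as follows. This is a toy with made-up two-dimensional descriptor vectors; the actual algorithm uses invariant measurements computed on the measurement regions, and the region names and data here are purely hypothetical.

```python
from collections import Counter
import math

def vote_correspondences(desc_a, desc_b, k=2):
    """For each i-th measurement of each region in image A, vote for the k
    regions in image B whose i-th measurement is nearest (Euclidean)."""
    votes = Counter()
    for a_id, a_measurements in desc_a.items():
        for i, m_a in enumerate(a_measurements):
            ranked = sorted(desc_b,
                            key=lambda b_id: math.dist(m_a, desc_b[b_id][i]))
            for b_id in ranked[:k]:
                votes[(a_id, b_id)] += 1   # suggest A <-> B correspondence
    return votes

# Two regions per image, two measurements (e.g. two scales) per region.
desc_a = {"A1": [(0.0, 0.0), (1.0, 1.0)], "A2": [(5.0, 5.0), (6.0, 6.0)]}
desc_b = {"B1": [(0.1, 0.1), (1.1, 0.9)], "B2": [(5.1, 4.9), (6.0, 6.1)]}
print(vote_correspondences(desc_a, desc_b, k=1).most_common())
```

In this toy example, each region in A votes consistently for its true counterpart in B; a corrupted measurement would instead scatter its votes across unrelated regions.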

This algorithm can be tested online (with epipolar- or homography-geometry-constrained matches) using the WBS Image Matcher.