# Biconnected component

*(Figure: each color corresponds to a biconnected component; multi-colored vertices are cut vertices, and thus belong to multiple biconnected components.)*

In graph theory, a biconnected component (or 2-connected component) is a maximal biconnected subgraph. Any connected graph decomposes into a tree of biconnected components called the block tree of the graph. The blocks are attached to each other at shared vertices called cut vertices or articulation points. Specifically, a cut vertex is any vertex whose removal increases the number of connected components.

## Algorithm

The classic sequential algorithm for computing biconnected components in a connected undirected graph is due to John Hopcroft and Robert Tarjan (1973).[1] It runs in linear time, and is based on depth-first search. This algorithm is also outlined as Problem 22-2 of Introduction to Algorithms (both 2nd and 3rd editions).

The idea is to run a depth-first search while maintaining the following information:

1. the depth of each vertex in the depth-first-search tree (once it gets visited), and
2. for each vertex v, the lowest depth of neighbors of all descendants of v (including v itself) in the depth-first-search tree, called the lowpoint.

The depth is standard to maintain during a depth-first search. The lowpoint of v can be computed after visiting all descendants of v (i.e., just before v gets popped off the depth-first-search stack) as the minimum of the depth of v, the depth of all neighbors of v (other than the parent of v in the depth-first-search tree) and the lowpoint of all children of v in the depth-first-search tree.

The key fact is that a nonroot vertex v is a cut vertex (or articulation point) separating two biconnected components if and only if there is a child y of v such that lowpoint(y) ≥ depth(v). This property can be tested once the depth-first search has returned from every child of v (i.e., just before v gets popped off the depth-first-search stack), and if true, v separates the graph into different biconnected components. This can be realized by computing one biconnected component out of each such y (a component which contains y will contain the subtree of y, plus v), and then erasing the subtree of y from the tree.

The root vertex must be handled separately: it is a cut vertex if and only if it has at least two children. Thus, it suffices to simply build one component out of each child subtree of the root (including the root).

### Pseudocode

```
GetArticulationPoints(i, d)
    visited[i] = true
    depth[i] = d
    low[i] = d
    childCount = 0
    isArticulation = false
    for each ni in adj[i]
        if not visited[ni]
            parent[ni] = i
            GetArticulationPoints(ni, d + 1)
            childCount = childCount + 1
            if low[ni] >= depth[i]
                isArticulation = true
            low[i] = Min(low[i], low[ni])
        else if ni <> parent[i]
            low[i] = Min(low[i], depth[ni])
    if (parent[i] <> null and isArticulation) or (parent[i] == null and childCount > 1)
        Output i as articulation point
```
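The pseudocode reports only the articulation points. The following Python sketch is a direct translation of it, extended with an edge stack (a standard technique not shown in the pseudocode) so that the edge sets of the biconnected components are collected as well; the adjacency-list representation is an assumption for illustration.

```python
def biconnected_components(adj):
    """DFS as in the pseudocode above, extended with an edge stack to also
    collect the biconnected components.  `adj` maps each vertex of a simple
    undirected graph to a list of its neighbours."""
    depth, low, parent = {}, {}, {}
    points, components, edge_stack = set(), [], []

    def dfs(v, d):
        depth[v] = low[v] = d
        child_count = 0
        is_articulation = False
        for w in adj[v]:
            if w not in depth:                    # tree edge v-w
                parent[w] = v
                edge_stack.append((v, w))
                dfs(w, d + 1)
                child_count += 1
                if low[w] >= depth[v]:
                    is_articulation = True
                    # v closes a component: pop every edge pushed since
                    # the tree edge (v, w) was pushed.
                    comp = set()
                    while True:
                        e = edge_stack.pop()
                        comp.add(e)
                        if e == (v, w):
                            break
                    components.append(comp)
                low[v] = min(low[v], low[w])
            elif w != parent.get(v) and depth[w] < depth[v]:
                # Back edge to a proper ancestor.  Edges to already-visited
                # descendants were pushed from the other endpoint and
                # cannot lower low[v], so they are skipped here.
                edge_stack.append((v, w))
                low[v] = min(low[v], depth[w])
        # Root (no parent entry): cut vertex iff it has >= 2 children.
        if (v in parent and is_articulation) or (v not in parent and child_count > 1):
            points.add(v)

    for v in adj:                                 # covers disconnected graphs
        if v not in depth:
            dfs(v, 0)
    return points, components
```

For example, two triangles {0, 1, 2} and {3, 4, 5} joined by the bridge (2, 3) yield cut vertices {2, 3} and three components: the two triangles and the single bridge edge.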

## Other algorithms

A simple alternative to the above algorithm uses chain decompositions, which are special ear decompositions depending on DFS trees.[2] Chain decompositions can be computed in linear time by a simple traversal rule. Let C be a chain decomposition of G. Then G is 2-vertex-connected if and only if G has minimum degree 2 and C1 is the only cycle in C. This immediately gives a linear-time 2-connectivity test, and it can be extended to list all cut vertices of G in linear time using the following statement: a vertex v in a connected graph G (with minimum degree 2) is a cut vertex if and only if v is incident to a bridge or v is the first vertex of a cycle in C − C1. The list of cut vertices can be used to create the block-cut tree of G in linear time.
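The cycle-counting criterion above can be sketched as follows. This is a minimal Python illustration, not Schmidt's published formulation: it uses a variant that marks each chain's start vertex, so a chain whose start was still unreached shows up as an extra cycle, and the graph representation (a dict adjacency list) is an assumption.

```python
def is_biconnected(adj):
    """2-vertex-connectivity test via a chain decomposition.
    `adj` maps each vertex of a simple undirected graph to its neighbours."""
    if len(adj) < 3:
        return False                      # by convention, 2-connected needs >= 3 vertices
    if any(len(ns) < 2 for ns in adj.values()):
        return False                      # minimum degree 2 is necessary

    # Iterative DFS assigning DFS numbers and tree parents.
    root = next(iter(adj))
    dfn, parent, order = {}, {}, []
    stack = [(None, root)]
    while stack:
        p, v = stack.pop()
        if v in dfn:
            continue
        parent[v], dfn[v] = p, len(order)
        order.append(v)
        for w in adj[v]:
            if w not in dfn:
                stack.append((v, w))
    if len(order) != len(adj):
        return False                      # graph is not even connected

    # Build chains: for each back edge (v, w) with v the ancestor endpoint,
    # walk from w toward the root until a visited vertex is met.  A chain
    # is a cycle exactly when the walk ends at its own start vertex v.
    visited, cycles = set(), 0
    for v in order:                       # ascending DFS number
        for w in adj[v]:
            if dfn[w] > dfn[v] and parent[w] != v:   # back edge, v = ancestor
                visited.add(v)
                u = w
                while u not in visited:
                    visited.add(u)
                    u = parent[u]
                if u == v:
                    cycles += 1
    # 2-connected iff exactly one chain (C1) is a cycle and every vertex
    # was reached by some chain.
    return cycles == 1 and len(visited) == len(adj)
```

On K4 this finds one cycle and two path chains, so it answers true; on two triangles sharing a single vertex (a cut vertex), both triangles become cycles, so it answers false.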

In the online version of the problem, vertices and edges are added (but not removed) dynamically, and a data structure must maintain the biconnected components. Jeffery Westbrook and Robert Tarjan (1992)[3] developed an efficient data structure for this problem based on disjoint-set data structures. Specifically, it processes n vertex additions and m edge additions in O(m α(m, n)) total time, where α is the inverse Ackermann function. This time bound is proved to be optimal.

Uzi Vishkin and Robert Tarjan (1985)[4] designed a parallel algorithm on CRCW PRAM that runs in O(log n) time with n + m processors. Guojing Cong and David A. Bader (2005)[5] developed an algorithm that achieves a speedup of 5 with 12 processors on SMPs. Speedups exceeding 30 based on the original Tarjan–Vishkin algorithm were reported by James A. Edwards and Uzi Vishkin (2012).[6]