# Heuristic function

A heuristic function, or simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow.

## Shortest paths

For example, for shortest-path problems, a heuristic is a function $h(n)$, defined on the nodes of a search tree, that estimates the cost of the cheapest path from that node to the goal node. Heuristics are used by informed search algorithms such as greedy best-first search and A* to choose the best node to explore. Greedy best-first search chooses the node that has the lowest value of the heuristic function. A* search expands nodes that have the lowest value of $g(n)+h(n)$, where $g(n)$ is the (exact) cost of the path from the initial state to the current node. If $h(n)$ is admissible, i.e., if $h(n)$ never overestimates the cost of reaching the goal, then A* will always find an optimal solution.
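
The A* expansion rule described above can be sketched as follows. The grid world, its `neighbors` function, and the goal coordinates are illustrative assumptions, not part of the article; Manhattan distance serves as the admissible heuristic on a grid.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: repeatedly expand the node with the lowest g(n) + h(n).

    `neighbors(n)` yields (successor, step_cost) pairs; `h(n)` is the
    heuristic estimate of the cost from n to the goal.
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Toy example (an assumption for illustration): unit-cost moves on an
# integer grid, with Manhattan distance to the goal as h(n).
def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (x + dx, y + dy), 1

h = lambda p: abs(p[0] - 3) + abs(p[1] - 2)   # goal at (3, 2)
cost, path = a_star((0, 0), (3, 2), neighbors, h)
print(cost)  # 5 -- Manhattan distance never overestimates on a grid
```

Because the heuristic here is admissible (it never overestimates the remaining grid distance), the returned cost of 5 is optimal, matching the guarantee stated above.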

The classical problem involving heuristics is the n-puzzle. Commonly used heuristics for this problem include counting the number of misplaced tiles and finding the sum of the Manhattan distances between each block and its position in the goal configuration. Note that both are admissible.

### Effect of heuristics on computational performance

In any search problem with $b$ choices at each node and a goal at depth $d$, a naive search algorithm may have to examine on the order of $b^d$ nodes before finding a solution. Heuristics improve the efficiency of search algorithms by reducing the effective branching factor from $b$ to a lower constant $b'$, using a cutoff mechanism. The branching factor can be used to define a partial order on heuristics: $h_1 < h_2$ if $h_1$ yields a lower branching factor than $h_2$ at a given node $n$ of the search tree. Heuristics that give lower branching factors at every node in the search tree are preferred for solving a particular problem, as they are more computationally efficient.
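
The constant $b'$ can be measured empirically as the effective branching factor: the branching factor a uniform tree of depth $d$ would need in order to contain the number of nodes a search actually expanded. A minimal sketch, assuming that standard definition (the function name and the bisection approach are choices made here for illustration):

```python
def effective_branching_factor(nodes, depth, tol=1e-6):
    """Solve for b' in  nodes = b' + b'^2 + ... + b'^depth  by bisection.

    `nodes` is the number of nodes the search generated (excluding the
    root); a better heuristic pushes b' closer to 1.
    """
    def total(b):
        return sum(b ** i for i in range(1, depth + 1))
    lo, hi = 1.0, float(nodes)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < nodes:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A blind search generating 363 nodes to reach depth 5 behaves like a
# uniform tree with branching factor 3 (3 + 9 + 27 + 81 + 243 = 363).
print(round(effective_branching_factor(363, 5), 2))  # 3.0
```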

### Finding heuristics

The problem of finding an admissible heuristic with a low branching factor for common search tasks has been extensively researched in the artificial intelligence community. Several common techniques are used:

• Solution costs of sub-problems often serve as useful estimates of the overall solution cost. These are always admissible. For example, a heuristic for a 10-puzzle might be the cost of moving tiles 1-5 into their correct places. A common idea is to use a pattern database that stores the exact solution cost of every subproblem instance.
• The solution of a relaxed problem often serves as a useful admissible estimate of the original. For example, Manhattan distance is a relaxed version of the n-puzzle problem, because we assume we can move each tile to its position independently of moving the other tiles.
• Given a set of admissible heuristic functions $h_1(n), h_2(n), ..., h_i(n)$, the function $h(n) = \max\{h_1(n), h_2(n), ..., h_i(n)\}$ is an admissible heuristic that dominates all of them.
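The last combination rule can be sketched directly. The numeric values below are hypothetical placeholders for two admissible estimates at a single node; since each $h_i(n)$ is a lower bound on the true cost, their pointwise maximum is too.

```python
def max_heuristic(*heuristics):
    """Combine admissible heuristics by taking the pointwise maximum.

    If every h_i(n) <= true cost to the goal, then max_i h_i(n) <= true
    cost as well, so the result stays admissible and dominates each h_i.
    """
    return lambda n: max(h(n) for h in heuristics)

# Hypothetical per-node estimates, e.g. misplaced tiles vs. Manhattan
# distance for some puzzle state (values assumed for illustration).
h1 = lambda n: 8
h2 = lambda n: 18
h = max_heuristic(h1, h2)
print(h("some state"))  # 18 -- the tightest of the admissible bounds
```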

Using these techniques, a program called ABSOLVER was written in 1993 by A.E. Prieditis to automatically generate heuristics for a given problem. ABSOLVER generated a new heuristic for the 8-puzzle better than any pre-existing heuristic and found the first useful heuristic for solving the Rubik's Cube.

A heuristic function that never overestimates the cost of reaching the goal is called an admissible heuristic.

A heuristic $h(n)$ is consistent if $h(n) \le c(n, n') + h(n')$ for every node $n$ and every successor $n'$ reached with step cost $c(n, n')$; equivalently, the values of $f(n) = g(n) + h(n)$ are non-decreasing along any path to the goal. Every consistent heuristic is also admissible.

### Example

One might be interested in finding a heuristic to estimate the number of steps required to solve an 8-puzzle from a given state. Two simple heuristic functions are:

$h_1$ = the number of misplaced tiles. This is also known as the Hamming distance. In the pictured example, the start state has $h_1$ = 8. Clearly, $h_1$ is an admissible heuristic because any tile that is out of place will have to be moved at least once.

$h_2$ = the sum of the distances of the tiles from their goal positions. Because tiles cannot be moved diagonally, the distance counted is the sum of horizontal and vertical distances. This is also known as the Manhattan distance. In the pictured example, the start state has $h_2$ = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18. Clearly, $h_2$ is also an admissible heuristic because any move can, at best, move one tile one step closer to the goal.

As expected, neither heuristic overestimates the true number of moves required to solve the puzzle, which is 26. Additionally, it is easy to see from the definitions of the heuristic functions that for any given state, $h_2$ will always be greater than or equal to $h_1$. Thus, we say that $h_2$ dominates $h_1$.
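
Both heuristics can be sketched in a few lines, using the Russell and Norvig start state that produces the values quoted above (the flat-tuple board encoding is a representation choice made here, with 0 marking the blank):

```python
# Start state (Russell and Norvig):   Goal state:
#   7 2 4                               0 1 2
#   5 . 6                               3 4 5
#   8 3 1                               6 7 8
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Misplaced tiles (Hamming distance), ignoring the blank."""
    return sum(1 for tile, g in zip(state, goal) if tile != 0 and tile != g)

def h2(state):
    """Sum of the Manhattan distances of each tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gi = goal.index(tile)  # goal square of this tile
        total += abs(idx // 3 - gi // 3) + abs(idx % 3 - gi % 3)
    return total

print(h1(start), h2(start))  # 8 18
```

Running this reproduces $h_1 = 8$ and $h_2 = 18$ for the start state, and $h_2 \ge h_1$ holds because every misplaced tile contributes 1 to $h_1$ but at least 1 to $h_2$.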

(example taken from Russell and Norvig)