Principal variation search

Principal variation search (sometimes equated with the practically identical NegaScout) is a negamax algorithm that can be faster than alpha-beta pruning. Like alpha-beta pruning, NegaScout is a directional search algorithm for computing the minimax value of a node in a tree. It dominates alpha-beta pruning in the sense that it will never examine a node that can be pruned by alpha-beta; however, it relies on accurate node ordering to capitalize on this advantage.

NegaScout works best when there is a good move ordering; in practice, the move ordering is often determined by previous, shallower searches. It produces more cutoffs than alpha-beta by assuming that the first explored node is the best; in other words, it supposes the first node is in the principal variation. It then checks whether that is true by searching the remaining nodes with a null window (also known as a scout window; in a negamax setting, a window with β = α + 1), which is faster than searching with the regular alpha-beta window. If the check fails, then the first node was not in the principal variation, and the search continues as normal alpha-beta. With a random move ordering, however, NegaScout will take more time than regular alpha-beta; although it will not explore any nodes that alpha-beta did not, it will have to re-search many nodes.

Alexander Reinefeld invented NegaScout several decades after the invention of alpha-beta pruning. He gives a proof of correctness of NegaScout in his book.[1]

Another search algorithm called SSS* can theoretically result in fewer nodes searched. However, its original formulation has practical issues (in particular, it relies heavily on an OPEN list for storage), and nowadays most chess engines still use a form of NegaScout in their search. Most chess engines use a transposition table in which the relevant part of the search tree is stored; this part of the tree has the same size as SSS*'s OPEN list would have.[2] A reformulation called MT-SSS* allowed it to be implemented as a series of null-window calls to alpha-beta (or NegaScout) that use a transposition table, so that direct comparisons using game-playing programs could be made. It did not outperform NegaScout in practice. Yet another search algorithm, which does tend to do better than NegaScout in practice, is the best-first algorithm MTD(f), although neither algorithm dominates the other: there are trees in which NegaScout searches fewer nodes than SSS* or MTD(f), and vice versa.

NegaScout takes after SCOUT, invented by Judea Pearl in 1980, which was the first algorithm to outperform alpha-beta and to be proven asymptotically optimal.[3][4] Null windows, with β=α+1 in a negamax setting, were invented independently by J.P. Fishburn and used in an algorithm similar to SCOUT in an appendix to his Ph.D. thesis,[5] in a parallel alpha-beta algorithm,[6] and on the last subtree of a search tree root node.[7]

The Idea

Most moves are not acceptable for both players, so we do not need to fully search every node to get the exact score. The exact score is only needed for nodes on the principal variation (a sequence of moves acceptable for both players), which is expected to propagate up to the root. In iterative deepening search, the previous iteration has already established a candidate for such a sequence. At a node whose score ends up inside the window, called a PV-node, we search the first move, which is deemed best, with the full window to establish a bound; that bound is needed to prove the other moves unacceptable. We then conduct a zero-window search on each remaining move to test whether it can be better. Since a zero-window search is much cheaper, this can save a lot of effort. If we find that a move can raise alpha, we re-search it with the full window to get its exact score.[8][9]
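
To make the zero-window test concrete, the following short sketch (an illustration in Python, not part of the original formulations) runs a plain alpha-beta negamax over a made-up toy tree of nested lists whose leaves are heuristic values for the color = +1 player. The first move is searched with the full window; a remaining move is then probed with the zero window (α, α + 1), which only answers whether that move can exceed α, and a full re-search is performed only if the probe fails high:

def alphabeta(node, depth, alpha, beta, color):
    # Plain negamax alpha-beta; a leaf is a number: the heuristic value
    # from the point of view of the color = +1 player.
    if depth == 0 or not isinstance(node, list):
        return color * node
    for child in node:
        alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha, -color))
        if alpha >= beta:
            break  # cut-off
    return alpha

# Toy position: three moves for the side to move, each with a few replies.
tree = [[3, 17], [2, 12, 5], [15, 30, 2]]
INF = float("inf")

# Full-window search of the first (presumed best) move establishes alpha = 3.
alpha = -alphabeta(tree[0], 1, -INF, INF, -1)

# Zero-window probe of the second move: can it do better than alpha?
score = -alphabeta(tree[1], 1, -alpha - 1, -alpha, -1)
if score > alpha:
    # Fail high: the move may really be better, so re-search with the full window.
    score = -alphabeta(tree[1], 1, -INF, -alpha, -1)

print(alpha, score)  # 3 2: the probe refutes the second move, so no re-search is needed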

Pseudocode

function pvs(node, depth, α, β, color) is
    if depth = 0 or node is a terminal node then
        return color × the heuristic value of node
    for each child of node do
        if child is first child then
            score := −pvs(child, depth − 1, −β, −α, −color) (* full-window search of the presumed best move *)
        else
            score := −pvs(child, depth − 1, −α − 1, −α, −color) (* search with a null window *)
            if α < score < β then
                score := −pvs(child, depth − 1, −β, −score, −color) (* if it failed high, do a full re-search *)
        α := max(α, score)
        if α ≥ β then
            break (* beta cut-off *)
    return α
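
The pseudocode above translates almost directly into an executable form. The following is a minimal sketch in Python (an illustration, not part of the original description), using the same toy representation as in the earlier sketch: a game tree of nested lists whose leaves are heuristic values for the color = +1 player, with the search depth chosen to coincide with the leaves:

def pvs(node, depth, alpha, beta, color):
    # Depth limit or leaf: return the heuristic value from the
    # side-to-move's point of view (negamax convention). A leaf here is
    # simply a number scored for the color = +1 player.
    if depth == 0 or not isinstance(node, list):
        return color * node
    for i, child in enumerate(node):
        if i == 0:
            # Full-window search of the first (presumed best) move.
            score = -pvs(child, depth - 1, -beta, -alpha, -color)
        else:
            # Null-window "scout" search of the remaining moves.
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, -color)
            if alpha < score < beta:
                # Failed high inside the window: re-search for the exact score.
                score = -pvs(child, depth - 1, -beta, -score, -color)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cut-off
    return alpha

tree = [[3, 17], [2, 12, 5], [15, 30, 2]]
INF = float("inf")
print(pvs(tree, 2, -INF, INF, 1))  # 3, the minimax value of the root

With the moves already well ordered, every sibling after the first is disposed of by a single zero-window probe; with a poor ordering, the re-searches make the method slower than plain alpha-beta, as noted above.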

References

  1. ^ A. Reinefeld. Spielbaum-Suchverfahren. Informatik-Fachbericht 200, Springer-Verlag, Berlin (1989), ISBN 3-540-50742-6
  2. ^ Plaat, Aske; Jonathan Schaeffer; Wim Pijls; Arie de Bruin (November 1996). "Best-first Fixed-depth Minimax Algorithms". Artificial Intelligence. 87 (1–2): 255–293. doi:10.1016/0004-3702(95)00126-3.
  3. ^ Pearl, J., "SCOUT: A Simple Game-Searching Algorithm With Proven Optimal Properties", Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford University, August 18–21, 1980, pp. 143–145.
  4. ^ Pearl, J., "Asymptotic Properties of Minimax Trees and Game-Searching Procedures", Artificial Intelligence, Vol. 14, No. 2, pp. 113–138, September 1980.
  5. ^ Fishburn, J.P., "Analysis of Speedup in Distributed Algorithms", UMI Research Press, ISBN 0-8357-1527-2, 1981, 1984.
  6. ^ Fishburn, J.P., Finkel, R.A., and Lawless, S.A., "Parallel Alpha-Beta Search on Arachne", Proceedings 1980 Int. Conf. Parallel Processing, IEEE, August 26–29, 1980, pp. 235–243.
  7. ^ Fishburn, J.P., "An Optimization of Alpha-Beta Search", ACM SIGART Bulletin, issue 72, July 1980, pp. 29–31.
  8. ^ Judea Pearl (1980). "Asymptotic Properties of Minimax Trees and Game-Searching Procedures". Artificial Intelligence, Vol. 14, No. 2.
  9. ^ Murray Campbell, Tony Marsland (1983). "A Comparison of Minimax Tree Search Algorithms". Artificial Intelligence, Vol. 20, No. 4, pp. 347–367. ISSN 0004-3702.
