Flow chart of an algorithm (Euclid's algorithm) for calculating the greatest common divisor (g.c.d.) of two numbers a and b in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields "yes" (or true) (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B - A (meaning the number b - a replaces the old b). Similarly IF A > B THEN A ← A - B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. (Algorithm derived from Scott 2009:13; symbols and drawing style from Tausworthe 1977).

In mathematics and computer science, an algorithm is an effective method expressed as a finite list[1] of well-defined instructions[2] for calculating a function[3]. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm captures a mathematical idea in a simplified, mechanical form for solving complicated problems; for example, ordering the processing of customers or calculating a company's income requires an algorithm.

Starting from an initial state and initial input (perhaps null)[4], the instructions describe a computation that, when executed, will proceed through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input[7].

A partial formalization of the concept began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability"[8] or "effective method";[9] those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939.

Why algorithms are necessary: an informal definition

For a detailed presentation of the various points of view around the definition of "algorithm" see Algorithm characterizations. For examples of simple addition algorithms specified in the detailed manner described in Algorithm characterizations, see Algorithm examples.

While there is no generally accepted formal definition of "algorithm," an informal definition could be "a set of rules that precisely defines a sequence of operations."[10] For some people, a program is only an algorithm if it stops eventually; for others, a program is only an algorithm if it stops before a given number of calculation steps.[11]

A prototypical example of an algorithm is Euclid's algorithm to determine the greatest common divisor of two integers; an example (there are others) is described by the flow chart above and as an example in a later section.

Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation:

No human being can write fast enough, or long enough, or small enough† ( †"smaller and smaller without limit ...you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.[12]

The term "enumerably infinite" means "countable using integers perhaps extending to infinity." Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be chosen from 0 to infinity. Thus an algorithm can be an algebraic equation such as y = m + n — two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion (see more at Algorithm characterizations) indicate that the word implies much more than this, something on the order of (for the addition example):

Precise instructions (in language understood by "the computer")[13] for a fast, efficient, "good"[14] process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities)[15] to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively"[16] produce, in a "reasonable" time[17], output-integer y at a specified place and in a specified format.

The concept of algorithm is also used to define the notion of decidability. That notion is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimensions. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.

Formalization

Algorithms are essential to the way computers process information. Many computer programs contain algorithms that specify the specific instructions a computer should perform (in a specific order) to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987) and Gurevich (2000):

Minsky: "But we will also maintain, with Turing . . . that any procedure which could "naturally" be called effective, can in fact be realized by a (simple) machine. Although this may seem extreme, the arguments . . . in its favor are hard to refute".[18]

Gurevich: "...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine ... according to Savage [1987], an algorithm is a computational process defined by a Turing machine".[19]

Typically, when an algorithm is associated with processing information, data is read from an input source, written to an output device, and/or stored for further processing. Stored data is regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures.

For any such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case-by-case; the criteria for each case must be clear (and computable).

Because an algorithm is a precise list of precise steps, the order of computation will always be critical to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom", an idea that is described more formally by flow of control.

So far, this discussion of the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception, and it attempts to describe a task by discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, setting the value of a variable. It derives from the intuition of "memory" as a scratchpad. An example of such an assignment appears below.

For some alternate conceptions of what constitutes an algorithm see functional programming and logic programming.

Termination

Some writers restrict the definition of algorithm to procedures that eventually finish. In such a category Kleene places the "decision procedure or decision method or algorithm for the question".[20] Others, including Kleene, include procedures that could run forever without stopping; such a procedure has been called a "computational method"[21] or "calculation procedure or algorithm (and hence a calculation problem) in relation to a general question which requires for an answer, not yes or no, but the exhibiting of some object".[22]

Minsky makes the pertinent observation, in regards to determining whether an algorithm will eventually terminate (from a particular starting state):

But if the length of the process isn't known in advance, then "trying" it may not be decisive, because if the process does go on forever—then at no time will we ever be sure of the answer.[18]

As it happens, no other method can do any better, as was shown by Alan Turing with his celebrated result on the undecidability of the so-called halting problem. There is no algorithmic procedure for determining whether or not arbitrary algorithms terminate from given starting states. The analysis of algorithms for their likelihood of termination is called termination analysis.

See the examples of (im-)"proper" subtraction at partial function for more about what can happen when an algorithm fails for certain of its input numbers—e.g., (i) non-termination, (ii) production of "junk" (output in the wrong format to be considered a number) or no number(s) at all (halt ends the computation with no output), (iii) wrong number(s), or (iv) a combination of these. Kleene proposed that the production of "junk" or failure to produce a number is solved by having the algorithm detect these instances and produce e.g., an error message (he suggested "0"), or preferably, force the algorithm into an endless loop.[23] Davis (1958) does this to his subtraction algorithm—he fixes his algorithm in a second example so that it is proper subtraction and it terminates.[24] Along with the logical outcomes "true" and "false" Kleene (1952) also proposes the use of a third logical symbol "u" — undecided[25] — thus an algorithm will always produce something when confronted with a "proposition". The problem of wrong answers must be solved with an independent "proof" of the algorithm e.g., using induction:

We normally require auxiliary evidence for this [that the algorithm correctly defines a mu recursive function], e.g., in the form of an inductive proof that, for each argument value, the computation terminates with a unique value.[26]
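As a concrete illustration of these failure modes and of Kleene's fix, the following Python sketch (an illustration written for this article, not drawn from Kleene or Davis; the function names are invented here) contrasts an "improper" subtraction, which produces "junk" (a negative value) when the inputs fall outside the intended domain of natural numbers, with a "proper" subtraction that detects the bad case and produces a designated value instead:

  # Sketch only: "improper" vs. "proper" subtraction on the natural numbers.
  def improper_subtract(a, b):
      # Partial function on the naturals: when b > a the result is negative
      # "junk" -- output in the wrong format to be a natural number.
      return a - b

  def proper_subtract(a, b):
      # Total function: detect the offending case and return a designated
      # value (here 0, as in "truncated" subtraction).
      return a - b if a >= b else 0

For example, improper_subtract(3, 5) yields the junk value -2, while proper_subtract(3, 5) yields 0.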

Expressing algorithms

Pseudocode of a simple algorithm

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in natural language statements, while remaining independent of a particular implementation language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are often used as a way to define or document algorithms.

There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see more at finite state machine and state transition table), as flowcharts (see more at state diagram), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see more at Turing machine).

Sometimes it is helpful in the description of an algorithm to supplement small "flow charts" (state diagrams) with natural-language and/or arithmetic expressions written inside "block diagrams" to summarize what the "flow charts" are accomplishing.

Representations of algorithms are generally classed into three accepted levels of Turing machine description:[27]

  • 1 High-level description:
"...prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."
  • 2 Implementation description:
"...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."
  • 3 Formal description:
Most detailed, "lowest level", gives the Turing machine's "state table".
For an example of the simple algorithm "Add m+n" described in all three levels see Algorithm examples.

Implementation

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

Computer algorithms

Flowchart examples of the canonical Böhm-Jacopini structures: the SEQUENCE (rectangles descending the page), the WHILE-DO and the IF-THEN-ELSE. The three structures are made of the primitive conditional GOTO (IF test=true THEN GOTO step xxx) (a diamond), the unconditional GOTO (rectangle), various assignment operators (rectangle), and HALT (rectangle). Nesting of these structures inside assignment-blocks results in complex diagrams (cf. Tausworthe 1977:100,114).

In computer systems, an algorithm is basically an instance of logic written in software by software developers to be effective for the intended "target" computer(s), in order for the target machines to produce output from given input (perhaps null).

"Elegant" (compact) programs, "good" (fast) programs : The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin:

Knuth: ". . .we want good algorithms in some loosely defined aesthetic sense. One criterion . . . is the length of time taken to perform the algorithm . . .. Other criteria are adaptability of the algorithm to computers, its simplicity and elegance, etc"[28]
Chaitin: " . . . a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does"[29]

Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'" -- such a proof would solve the Halting problem (ibid).

Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist. This will be true even without expanding the instruction set available to the programmer. Rogers observes that "It is . . . important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms"[30].

Unfortunately there may be a tradeoff between goodness (speed) and elegance (compactness) -- an elegant program may take more steps to complete a computation than one less elegant. An example of using Euclid's algorithm will be shown below.

Computers (and computors), models of computation: A computer (or human "computor"[31]) is a restricted type of machine, a "discrete deterministic mechanical device"[32] that blindly follows its instructions[33]. Melzak's and Lambek's primitive models[34] reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters,[35] (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent[36].

Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability" [37]. Minsky's machine proceeds sequentially through its five (or six depending on how one counts) instructions unless either a conditional IF - THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution)[38] operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L - 1)[39]. Rarely will a programmer have to write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT[40].
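The following Python sketch (an illustrative interpreter written for this article, not Minsky's own formulation; the instruction names and the function run are invented here) shows how small such an instruction set can be: a register machine with only ZERO, SUCCESSOR, DECREMENT, conditional GOTO, unconditional GOTO, and HALT:

  # Sketch only: a tiny register ("abacus") machine interpreter.
  # A program is a list of tuples; registers hold non-negative integers.
  def run(program, registers):
      pc = 0                                   # execution proceeds sequentially
      while True:
          op = program[pc]
          if op[0] == "ZERO":                  # L <- 0
              registers[op[1]] = 0
          elif op[0] == "SUCC":                # SUCCESSOR: L <- L + 1
              registers[op[1]] += 1
          elif op[0] == "DEC":                 # DECREMENT: L <- L - 1 (not below 0)
              registers[op[1]] = max(0, registers[op[1]] - 1)
          elif op[0] == "GOTO":                # unconditional GOTO
              pc = op[1]
              continue
          elif op[0] == "IFZERO":              # conditional IF-THEN GOTO
              if registers[op[1]] == 0:
                  pc = op[2]
                  continue
          elif op[0] == "HALT":
              return registers
          pc += 1

  # Example: add register A into register B by repeated decrement/successor.
  add = [("IFZERO", "A", 4), ("DEC", "A"), ("SUCC", "B"), ("GOTO", 0), ("HALT",)]
  print(run(add, {"A": 2, "B": 3}))            # {'A': 0, 'B': 5}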

Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example"[41]. But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then for the algorithm to be effective it must provide a set of rules for extracting a square root[42].

This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor).

But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters"[43]. When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" (division) instruction available rather than just subtraction (or worse: just Minsky's "decrement").
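To make the point concrete, here is a Python sketch (written for this article; the function names are invented) of the remainder subprogram computed both ways. With a modulus instruction the remainder costs a single division, while the subtraction-only version loops roughly l/s times:

  # Sketch only: remainder with a "modulus" instruction vs. subtraction alone.
  def remainder_modulus(l, s):
      return l % s                  # a single division instruction

  def remainder_subtraction(l, s):
      r = l
      while r >= s:                 # one test and one subtraction per pass
          r = r - s
      return r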

Structured programming, canonical structures: Per the Church-Turing thesis any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations Turing completeness requires only four instruction types -- conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code" a programmer can write structured programs using these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language"[44]. Tausworthe augments the three Böhm-Jacopini canonical structures:[45] SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE[46]. An additional benefit of a structured program will be one that lends itself to proofs of correctness using mathematical induction[47].

Canonical flowchart symbols[48]: The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program of one). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm-Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols and their use to build the canonical structures are shown in the diagram.

Examples

Sorting example

An animation of the quicksort algorithm sorting an array of randomized values. The red bars mark the pivot element; at the start of the animation, the element farthest to the right hand side is chosen as the pivot.

One of the simplest algorithms is to find the largest number in an (unsorted) list of numbers. The solution necessarily requires looking at every number in the list, but only once at each. From this follows a simple algorithm, which can be stated in a high-level description, in English prose, as:

High-level description:

  1. Assume the first item is largest.
  2. Look at each of the remaining items in the list and if it is larger than the largest item so far, make a note of it.
  3. The last noted item is the largest in the list when the process is complete.

(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Algorithm LargestNumber
  Input: A non-empty list of numbers L.
  Output: The largest number in the list L.
  largest ← L0
  for each item in the list (Length(L)≥1), do
    if the item > largest, then
      largest ← the item
  return largest
  • "←" denotes assignment. For instance, "largestitem" means that the value of largest changes to the value of item.
  • "return" terminates the algorithm and outputs the following value.

Euclid’s algorithm

The example-diagram of Euclid's algorithm from T.L. Heath 1908 with more detail added. Euclid does not go beyond a third measuring and gives no numerical examples. Nicomachus gives the example of 49 and 21: "I subtract the less from the greater; 28 is left; then again I subtract from this the same 21 (for this is possible); 7 is left; I subtract this from 21, 14 is left; from which I again subtract 7 (for this is possible); 7 will be left, but 7 cannot be subtracted from 7." Heath comments that "The last phrase is curious, but the meaning of it is obvious enough, as also the meaning of the phrase about ending "at one and the same number" (Heath 1908:300).

Euclid’s algorithm appears as Proposition II in Book VII ("Elementary Number Theory") of his Elements [49]. Euclid poses the problem: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including 0. And to "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s[50]. In modern words, remainder r = l - q*s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division[51].

For Euclid’s method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be 0, AND (ii) the subtraction must be “proper”, a test must guarantee that the smaller of the two numbers is subtracted from the larger (alternately, the two can be equal so their subtraction yields 0).

Euclid's original proof adds a third: the two lengths are not prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest[52]. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another it yields the number "1" for their common measure. So to be precise the following is really Nicomachus' algorithm.

Computer (computor) language for Euclid's algorithm

Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.

  • A location is symbolized by upper case letter(s), e.g. S, A, etc.
  • The varying quantity (number) in a location will be written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.

An inelegant program for Euclid's algorithm

"Inelegant" is a translation of Knuth's version of the algorithm with a subtraction-based remainder-loop replacing his use of division (or a "modulus" instruction). Derived from Knuth 1973:2-4. Depending on the two numbers "Inelegant" may compute the g.c.d. in fewer steps than "Elegant".

The following algorithm is framed as Knuth's 4-step version of Euclid's and Nicomachus', but rather than using division to find the remainder it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2-4:

INPUT:

1 [Into two locations L and S put the numbers l and s that represent the two lengths]: INPUT L, S
2 [Initialize R: make the remaining length r equal to the starting/initial/input length l] R ← L

E0: [Ensure r ≥ s.]

3 [Ensure the smaller of the two numbers is in S and the larger in R]: IF R > S THEN the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6: GOTO step 7 ELSE swap the contents of R and S.
4 L ← R (this first step is redundant, but will be useful for later discussion).
5 R ← S
6 S ← L

E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.

7 IF S > R THEN done measuring so GOTO 10 ELSE measure again,
8 R ← R - S
9 [Remainder-loop]: GOTO 7.

E2: [Is the remainder 0?]: EITHER (i) the last measure was exact, the remainder in R is 0, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than the measuring number in S.

10 IF R = 0 THEN done so GOTO step 15 ELSE continue to step 11,

E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously the smaller number s; L serves as a temporary location.

11 L ← R
12 R ← S
13 S ← L
14 [Repeat the measuring process]: GOTO 7

OUTPUT:

15 [Done. S contains the greatest common divisor]: PRINT S

DONE:

16 HALT, END, STOP.
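For comparison with a modern notation, the following Python sketch (written for this article; the function name gcd_inelegant is invented here) transcribes steps 1-16 of "Inelegant", preserving the subtraction-based remainder loop and the exchanges:

  # Sketch only: "Inelegant" transcribed into Python.
  def gcd_inelegant(l, s):
      r = l                     # steps 1-2: R <- L
      if not (r > s):           # step 3: ensure the larger number is in R
          r, s = s, r           # steps 4-6: exchange R and S (via L)
      while True:
          while not (s > r):    # steps 7-9: subtract until r < s
              r = r - s         # step 8: R <- R - S
          if r == 0:            # step 10: is the remainder 0?
              return s          # step 15: S contains the g.c.d.
          r, s = s, r           # steps 11-13: interchange s and r
                                # step 14: repeat the measuring process

As discussed under "Testing the Euclid algorithms" below, this sketch inherits the failures of "Inelegant": it loops forever if either input is 0.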

An elegant program for Euclid's algorithm

The following version of Euclid's algorithm requires only 6 core instructions to do what 13 are required to do by "Inelegant"; worse, "Inelegant" requires more types of instructions. The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language the steps are numbered, and the instruction LET [ ] = [ ] is the assignment instruction symbolized by ←.

   5 REM Euclid's algorithm for greatest common divisor
   6 PRINT "Type two integers greater than 0"
   10 INPUT A,B
   20 IF B=0 THEN GOTO 80
   30 IF A > B THEN GOTO 60
   40 LET B=B-A
   50 GOTO 20
   60 LET A=A-B
   70 GOTO 20
   80 PRINT A
   90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A - B, and a B ≤ A loop that computes B ← B - A. This works because, when at last the minuend M is less than or equal to the subtrahend S ( Difference = Minuend - Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses.

Testing the Euclid algorithms

Does an algorithm do what its author wants it to do? A few test cases usually suffice to confirm core functionality. One source[53] uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively-prime numbers 14157 and 5950.

But exceptional cases must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, or both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 rocket failure.
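Using the Python sketches defined earlier (gcd_inelegant and gcd_elegant, both invented for this article), the cited test cases can be checked mechanically; the expected values 17, 34 and 1 follow from working the algorithm by hand:

  # Sketch only: exercising the test cases named above.
  for gcd in (gcd_inelegant, gcd_elegant):
      assert gcd(3009, 884) == 17
      assert gcd(40902, 24140) == 34        # Knuth's suggested pair
      assert gcd(14157, 5950) == 1          # relatively prime numbers
      assert gcd(14, 14) == 14              # the R = S / A = B case
  # The zero cases are deliberately omitted: as noted above, both
  # sketches (like the programs they transcribe) fail to terminate.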

Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm" [54]. Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof [55].

Measuring and improving the Euclid algorithms

Elegance (compactness) versus goodness (speed): With only 6 core instructions, "Elegant" is the clear winner compared to "Inelegant" at 13 instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis[56] indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.

Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective" -- that is, it computes the function intended by its author -- then the question becomes, can it be improved?

The compactness of "Inelegant" can be improved by the elimination of 5 steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm[57]; rather, it can only be done heuristically, i.e. by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps together with steps 2 and 3 can be eliminated. This reduces the number of core instructions from 13 to 8, which makes it "more elegant" than "Elegant" at 9 steps.

The speed of "Elegant" can be improved by moving the B=0? test outside of the two subtraction loops. This change calls for the addition of 3 instructions (B=0?, A=0?, GOTO). Now "Elegant" computes the example-numbers faster; whether for any given A, B and R, S this is always the case would require a detailed analysis.
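One way to realize this improvement (a sketch written for this article, not a program given by the sources; the function name is invented) is to test for zero once, before the loops, and then let each subtraction loop carry only its own test:

  # Sketch only: a possible "improved Elegant" with the zero tests
  # hoisted out of the subtraction loops.
  def gcd_elegant_faster(a, b):
      if a == 0:                # A=0? tested once, before the loops
          return b
      if b == 0:                # B=0? likewise tested once
          return a
      while a != b:             # loop until the two lengths agree
          while a > b:          # A > B co-loop: one test per subtraction
              a = a - b
          while b > a:          # B > A co-loop: one test per subtraction
              b = b - a
      return a                  # the common value is the g.c.d.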

Algorithmic analysis

It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, the algorithm above has a time requirement of O(n), using the big O notation with n as the length of the list. At all times the algorithm only needs to remember two values: the largest number found so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted.

Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm will usually outperform a brute force sequential search when used for table lookups on sorted lists.
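The contrast is easy to see in code. In the following Python sketch (written for this article; the function names are invented), the sequential search examines up to n items, while the binary search discards half of the remaining sorted list at each probe:

  # Sketch only: sequential search vs. binary search on a sorted list.
  def sequential_search(items, target):
      for i, x in enumerate(items):         # up to n comparisons
          if x == target:
              return i
      return -1

  def binary_search(items, target):         # items must be sorted
      lo, hi = 0, len(items) - 1
      while lo <= hi:                       # about log2(n) probes
          mid = (lo + hi) // 2
          if items[mid] == target:
              return mid
          elif items[mid] < target:
              lo = mid + 1
          else:
              hi = mid - 1
      return -1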

Formal versus empirical

The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, most algorithms are ultimately implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code.

Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.

Classification

There are various ways to classify algorithms, each with its own merits.

By implementation

One way to classify algorithms is by implementation means.

  • Recursion or iteration: A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well understood in its recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa; a minimal sketch contrasting the two styles appears after this list.
  • Logical: An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control.[58] The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms has a well defined change in the algorithm.
  • Serial or parallel or distributed: Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a network. Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms, and are called inherently serial problems.
  • Deterministic or non-deterministic: Deterministic algorithms solve the problem with exact decision at every step of the algorithm whereas non-deterministic algorithms solve problems via guessing although typical guesses are made more accurate through the use of heuristics.
  • Exact or approximate: While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Approximation may use either a deterministic or a random strategy. Such algorithms have practical value for many hard problems.
  • Quantum algorithm: Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
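As promised in the first item above, here is a minimal Python sketch (written for this article; factorial is chosen only for brevity) of the same task in recursive and in iterative form:

  # Sketch only: one task, recursive and iterative implementations.
  def factorial_recursive(n):
      # The function invokes itself until the base condition matches.
      return 1 if n <= 1 else n * factorial_recursive(n - 1)

  def factorial_iterative(n):
      # A repetitive construct (a loop) replaces the self-invocation.
      result = 1
      for k in range(2, n + 1):
          result = result * k
      return result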

By design paradigm

Another way of classifying algorithms is by their design methodology or paradigm. There are a number of such paradigms, each different from the others. Furthermore, each of these categories includes many different types of algorithms. Some commonly found paradigms include:

  • Brute-force or exhaustive search. This is the naïve method of trying every possible solution to see which is best.[59]
  • Divide and conquer. A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One such example of divide and conquer is merge sorting: the data is divided into segments, each segment is sorted, and the sorted segments are merged in the conquer phase. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm.
  • Dynamic programming. When a problem shows optimal substructure, meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems, and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together (a minimal memoization sketch appears after this list). The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in the caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.
  • The greedy method. A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that solutions to the subproblems do not have to be known at each stage; instead a "greedy" choice can be made of what looks best for the moment. The greedy method extends the solution with the best possible decision (not all feasible decisions) at an algorithmic stage based on the current local optimum and the best decision (not all possible decisions) made in a previous stage. It is not exhaustive, and does not give an accurate answer to many problems. But when it works, it will be the fastest method. Popular greedy algorithms include Huffman coding and the minimum spanning tree algorithms of Kruskal, Prim, and Sollin.
  • Linear programming. When solving a problem using linear programming, specific inequalities involving the inputs are found and then an attempt is made to maximize (or minimize) some linear function of the inputs. Many problems (such as the maximum flow for directed graphs) can be stated in a linear programming way, and then be solved by a 'generic' algorithm such as the simplex algorithm. A more complex variant of linear programming is called integer programming, where the solution space is restricted to the integers.
  • Reduction. This technique involves solving a difficult problem by transforming it into a better known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
  • Search and enumeration. Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking.
  1. Randomized algorithms are those that make some choices randomly (or pseudo-randomly); for some problems, it can in fact be proven that the fastest solutions must involve some randomness. There are two large classes of such algorithms:
    1. Monte Carlo algorithms return a correct answer with high probability (e.g. RP is the subclass of these that run in polynomial time).
    2. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP.
  2. In optimization problems, heuristic algorithms do not try to find an optimal solution, but an approximate solution, for use where the time or resources needed to find a perfect solution are impractical. Examples are local search, tabu search, and simulated annealing, a class of heuristic probabilistic algorithms that vary the solution of a problem by a random amount. The name "simulated annealing" alludes to the metallurgic term meaning the heating and cooling of metal to achieve freedom from defects. The purpose of the random variance is to find close to globally optimal solutions rather than simply locally optimal ones, the idea being that the random element will be decreased as the algorithm settles down to a solution. Approximation algorithms are those heuristic algorithms that additionally provide some bounds on the error. Genetic algorithms attempt to find solutions to problems by mimicking biological evolutionary processes, with a cycle of random mutations yielding successive generations of "solutions". Thus, they emulate reproduction and "survival of the fittest". In genetic programming, this approach is extended to algorithms, by regarding the algorithm itself as a "solution" to a problem.
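As noted in the dynamic programming item above, the caching idea can be shown in a few lines. The following Python sketch (written for this article; Fibonacci numbers are chosen because their naive recursion has heavily overlapping subproblems) memoizes recursive calls:

  # Sketch only: memoization, the caching at the heart of dynamic programming.
  from functools import lru_cache

  @lru_cache(maxsize=None)
  def fib(n):
      # Without the cache this recursion repeats subproblems exponentially;
      # with it, each value fib(k) is computed exactly once.
      return n if n < 2 else fib(n - 1) + fib(n - 2)

  print(fib(40))    # 102334155, computed in linear rather than exponential time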

By field of study

Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques.

Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry, but is now used in solving a broad range of problems in many fields.

By complexity

Algorithms can be classified by the amount of time they need to complete compared to their input size. There is a wide variety: some algorithms complete in linear time relative to input size, some do so in an exponential amount of time or even worse, and some never halt. Additionally, some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found to be more suitable to classify the problems themselves instead of the algorithms into equivalence classes based on the complexity of the best possible algorithms for them.

Burgin (2005, p. 24) uses a generalized definition of algorithms that relaxes the common requirement that the output of the algorithm that computes a function must be determined after a finite number of steps. He defines a super-recursive class of algorithms as "a class of algorithms in which it is possible to compute functions not computable by any Turing machine" (Burgin 2005, p. 107). This is closely related to the study of methods of hypercomputation.

Continuous algorithms

The adjective "continuous" when applied to the word "algorithm" can mean:

  1. An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations – such algorithms are studied in numerical analysis; or
  2. An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer.[60]

Legal issues

See also: Software patents for a general overview of the patentability of software, including computer-implemented algorithms.

Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute a "process" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent.

Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).

Etymology

The word "Algorithm" or "Algorism" in some other writing versions, comes from the name Al-Khwārizmī (c. 780-850), a Persian mathematician, astronomer, geographer and a scholar in the House of Wisdom in Baghdad, whose name means "the native of Kharazm", a city that was part of the Greater Iran during his era and now is in modern-day Uzbekistan[61][62][63] He wrote a treatise in Arabic language in the 9th century, which was translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.[64] Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through his other book, the Algebra.[65] In late medieval Latin, algorismus, the corruption of his name, simply meant the "decimal number system" that is still the meaning of modern English algorism. In 17th century French the word's form, but not its meaning, changed to algorithme. English adopted the French very soon afterwards, but it wasn't until the late 19th century that "Algorithm" took on the meaning that it has in modern English.[66]

History: Development of the notion of "algorithm"

Discrete and distinguishable symbols

Tally-marks: To keep track of their flocks, their sacks of grain and their money, the ancients used tallying: accumulating stones or marks scratched on sticks, or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.

Manipulation of symbols as "place holders" for numbers: algebra

The work of the ancient Greek geometers (Euclidean algorithm), Persian mathematician Al-Khwarizmi (from whose name the terms "algorism" and "algorithm" are derived), and Western European mathematicians culminated in Leibniz's notion of the calculus ratiocinator (ca 1680):

A good century and a half ahead of his time, Leibniz proposed an algebra of logic, an algebra that would specify the rules for manipulating logical concepts in the manner that ordinary algebra specifies the rules for manipulating numbers.[67]

Mechanical contrivances with discrete states

The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular the verge escapement[68] that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine"[69] led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines" – the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace.[70]

Logical machines 1870 – Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what are now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically . . . More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine". His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc] . . .". With this machine he could analyze a "syllogism or any other simple logical argument".[71]

This machine he displayed in 1870 before the Fellows of the Royal Society.[72] Another logician John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine".[73]

Jacquard loom, Hollerith punch cards, telegraphy and telephony—the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers.[74] By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the Teletype (ca. 1910) with its punched-paper use of Baudot code on tape.

Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".[75]

Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and closed):

It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned.[76]

Mathematics during the 1800s up to the mid-1900s

Symbols and rules: In rapid succession the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language".[77]

But van Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a "'formula language', that is a lingua characterica, a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules".[78] The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913).

The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox.[79] The resultant considerations led to Kurt Gödel's paper (1931) — he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers.

Effective calculability: In an effort to solve the Entscheidungsproblem defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: Alonzo Church, Stephen Kleene and J.B. Rosser's λ-calculus;[80] a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene;[81] Church's proof[82] that the Entscheidungsproblem was unsolvable; Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction;[83] Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine"[84] – in effect almost identical to Post's "formulation"; J. Barkley Rosser's definition of "effective method" in terms of "a machine";[85] S. C. Kleene's proposal of a precursor to the "Church thesis" that he called "Thesis I";[86] and, a few years later, Kleene's renaming his Thesis "Church's Thesis"[87] and proposing "Turing's Thesis".[88]

Emil Post (1936) and Alan Turing (1936–7, 1939)

Here is a remarkable coincidence of two men not knowing each other but describing a process of men-as-computers working on computations—and they yield virtually identical definitions.

Emil Post (1936) described the actions of a "computer" (human being) as follows:

"...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions.

His symbol space would be

"a two way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke.
"One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes....
"A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process will terminate only when it comes to the direction of type (C ) [i.e., STOP]".[89] See more at Post–Turing machine

Alan Turing's work[90] preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'".[91] Given the prevalence of Morse code and telegraphy, ticker tape machines, and Teletypes we might conjecture that all were influences.

Turing—his model of computation is now called a Turing machine — begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers.[92]

"Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book....I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite....
"The behavior of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite...
"Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided".[93]

Turing's reduction yields the following:

"The simple operations must therefore include:
"(a) Changes of the symbol on one of the observed squares
"(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares.

"It may be that some of these change necessarily invoke a change of state of mind. The most general single operation must therefore be taken to be one of the following:

"(A) A possible change (a) of symbol together with a possible change of state of mind.
"(B) A possible change (b) of observed squares, together with a possible change of state of mind"
"We may now construct a machine to do the work of this computer".[93]

A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it:

"A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Although it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematical expressible definition . . . [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing and Post] . . . We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability . . . .
"† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions".[94]

J. B. Rosser (1939) and S. C. Kleene (1943)[edit]

J. Barkley Rosser boldly defined an 'effective [mathematical] method' in the following manner (boldface added):

"'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–6)

Rosser's footnote #5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion in particular Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–7) in their mechanism-models of computation.

Stephen C. Kleene defined his now-famous "Thesis I", known as the Church–Turing thesis. But he did this in the following context (boldface in original):

"12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273)

History after 1950[edit]

A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments around artificial intelligence). For more, see Algorithm characterizations.

Notes[edit]

  1. ^ "Any classical mathematical algorithm, for example, can be described in a finite number of English words" (Rogers 1987:2).
  2. ^ Well defined with respect to the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" (Rogers 1987:2).
  3. ^ "an algorithm is a procedure for computing a function (with respect to some chosen notation for integers) . . . this limitation (to numerical functions) results in no loss of generality", (Rogers 1987:1).
  4. ^ "An algorithm has zero or more inputs, i.e., quantities which are given to it initially before the algorithm begins" (Knuth 1973:5).
  5. ^ "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'" (Knuth 1973:5).
  6. ^ "An algorithm has one or more outputs, i.e. quantities which have a specified relation to the inputs" (Knuth 1973:5).
  7. ^ Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that: "a computation is carried out in a discrete stepwise fashion, without use of continuous methods or analogue devices . . . carried forward deterministically, without resort to random methods or devices, e.g., dice" (Rogers 1987:2).
  8. ^ Kleene 1943 in Davis 1965:274
  9. ^ Rosser 1939 in Davis 1965:225
  10. ^ Stone 1973:4
  11. ^ Stone simply requires that "it must terminate in a finite number of steps" (Stone 1973:7-8).
  12. ^ Boolos and Jeffrey 1974,1999:19
  13. ^ cf Stone 1972:5
  14. ^ Knuth 1973:7 states: "In practice we not only want algorithms, we want good algorithms ... one criterion of goodness is the length of time taken to perform the algorithm ... other criteria are the adaptability of the algorithm to computers, its simplicity and elegance, etc."
  15. ^ cf Stone 1973:6
  16. ^ Stone 1973:7-8 states that there must be "a procedure that a robot [i.e. computer] can follow in order to determine precisely how to obey the instruction". Stone adds finiteness of the process, and definiteness (having no ambiguity in the instructions) to this definition.
  17. ^ Knuth, loc. cit
  18. ^ a b Minsky 1967:105
  19. ^ Gurevich 2000:1, 3
  20. ^ Kleene 1952:136
  21. ^ Knuth 1997:5
  22. ^ Boldface added, Kleene 1952:137
  23. ^ Kleene 1952:325
  24. ^ Davis 1958:12–15
  25. ^ Kleene 1952:332
  26. ^ Minsky 1967:186
  27. ^ Sipser 2006:157
  28. ^ Knuth 1973:7
  29. ^ Chaitin 2005:32
  30. ^ Rogers 1987:1-2
  31. ^ In his essay "Calculations by Man and Machine: Conceptual Analysis" Sieg 2002:390 credits this distinction to Robin Gandy, cf. Wilfried Sieg et al., 2002 Reflections on the foundations of mathematics: Essays in honor of Solomon Feferman, Association for Symbolic Logic, A. K. Peters Ltd, Natick, MA.
  32. ^ cf. Gandy 1980:126, Robin Gandy Church's Thesis and Principles for Mechanisms appearing on pp. 123–148 in J. Barwise et al. 1980 The Kleene Symposium, North-Holland Publishing Company.
  33. ^ A "robot": "A computer is a robot that will perform any task that can be described as a sequence of instructions" cf Stone 1972:3
  34. ^ Lambek’s "abacus" is a "countably infinite number of locations (holes, wires etc.) together with an unlimited supply of counters (pebbles, beads, etc). The locations are distinguishable, the counters are not". The holes will have unlimited capacity, and standing by is an agent who understands and is able to carry out the list of instructions" (Lambek 1961:295). Lambek references Melzak who defines his Q-machine as "an indefinitely large number of locations . . . an indefinitely large supply of counters distributed among these locations, a program, and an operator whose sole purpose is to carry out the program" (Melzak 1961:283). B-B-J (loc. cit.) add the stipulation that the holes are "capable of holding any number of stones" (p. 46). Both Melzak and Lambek appear in The Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961.
  35. ^ If no confusion will result, the word "counters" can be dropped, and a location can be said to contain a single "number".
  36. ^ "We say that an instruction is effective if there is a procedure that the robot can follow in order to determine precisely how to obey the instruction" (Stone 1972:6)
  37. ^ cf Minsky 1967: Chapter 11 "Computer models" and Chapter 14 "Very Simple Bases for Computability" pp. 255-281 in particular
  38. ^ cf Knuth 1973:3.
  39. ^ But always preceded by IF-THEN to avoid improper subtraction.
  40. ^ However, a few different assignment instructions (e.g. DECREMENT, INCREMENT and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a convenience; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx will be unconditional.
  41. ^ Knuth 1973:4
  42. ^ Stone 1972:5. Methods for extracting roots are not trivial: see Methods of computing square roots.
  43. ^ Cf. page 875 in Peter van Emde Boas's "Machine Models and Simulation" in Jan van Leeuwen, ed., 1990, Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity, The MIT Press/Elsevier, ISBN 0-444-88071-2 (Volume A).
  44. ^ John G. Kemeny and Thomas E. Kurtz 1985 Back to Basic: The History, Corruption, and Future of the Language, Addison-Wesley Publishing Company, Inc., Reading, MA, ISBN 0-201-13433-0.
  45. ^ Tausworthe 1977:101
  46. ^ Tausworthe 1977:142
  47. ^ Knuth 1973 section 1.2.1, expanded by Tausworthe 1977 at pages 100ff and Chapter 9.1
  48. ^ cf Tausworthe 1977
  49. ^ Heath 1908:300; Hawking’s Dover 2005 edition derives from Heath.
  50. ^ " 'Let CD, measuring BF, leave FA less than itself.' This is a neat abbreviation for saying, measure along BA successive lengths equal to CD until a point F is reached such that the length FA remaining is less than CD; in other words, let BF be the largest exact multiple of CD contained in BA" (Heath 1908:297
  51. ^ For modern treatments using division in the algorithm see Hardy and Wright 1979:180, Knuth 1973:2 (Volume 1), plus more discussion of Euclid's algorithm in Knuth 1969:293-297 (Volume 2).
  52. ^ Euclid covers this question in his Proposition 1.
  53. ^ http://aleph0.clarku.edu/~djoyce/java/elements/bookVII/propVII2.html
  54. ^ Knuth 1973:13–18. He credits "the formulation of algorithm-proving in terms of assertions and induction" to R. W. Floyd, Peter Naur, C. A. R. Hoare, H. H. Goldstine and J. von Neumann. Tausworthe 1977 borrows Knuth's Euclid example and extends Knuth's method in section 9.1 Formal Proofs (pages 288–298).
  55. ^ Tausworthe 1977:294
  56. ^ cf. Knuth 1973:7 (Vol. I), and his more-detailed analyses on pp. 294–313 of Knuth 1969 (Vol. II).
  57. ^ Breakdown occurs when an algorithm tries to compact itself. Success would solve the Halting problem.
  58. ^ Kowalski 1979
  59. ^ Sue Carroll, Taz Daughtrey (July 4, 2007). Fundamental Concepts for the Software Quality Engineer. pp. 282 et seq. ISBN 9780873897204.
  60. ^ Adaptation and learning in automatic systems, page 54, Ya. Z. Tsypkin, Z. J. Nikolic, Academic Press, 1971, ISBN 978-0-12-702050-1
  61. ^ Toomer 1990
  62. ^ Hogendijk, Jan P. (1998). "al-Khwarizmi". Pythagoras. 38 (2): 4–5. ISSN 0033-4766.
  63. ^ Oaks, Jeffrey A. "Was al-Khwarizmi an applied algebraist?". University of Indianapolis. Retrieved May 30, 2008.
  64. ^ Al-Khwarizmi: The Inventor of Algebra, by Corona Brezina (2006)
  65. ^ Foremost mathematical texts in history, according to Carl B. Boyer.
  66. ^ Etymology of algorithm at Dictionary.Reference.com
  67. ^ Davis 2000:18
  68. ^ Bolter 1984:24
  69. ^ Bolter 1984:26
  70. ^ Bolter 1984:33–34, 204–206)
  71. ^ All quotes from W. Stanley Jevons 1880 Elementary Lessons in Logic: Deductive and Inductive, Macmillan and Co., London and New York. Republished as a googlebook; cf. Jevons 1880:199–201. Louis Couturat 1914 The Algebra of Logic, The Open Court Publishing Company, Chicago and London. Republished as a googlebook; cf. Couturat 1914:75–76 gives a few more details; interestingly he compares this to a typewriter as well as a piano. Jevons states that the account is to be found at Jan. 20, 1870 The Proceedings of the Royal Society.
  72. ^ Jevons 1880:199–200
  73. ^ All quotes from John Venn 1881 Symbolic Logic, Macmillan and Co., London. Republished as a googlebook. cf Venn 1881:120–125. The interested reader can find a deeper explanation in those pages.
  74. ^ Bell and Newell diagram 1971:39, cf. Davis 2000
  75. ^ Melina Hill, Valley News Correspondent, A Tinkerer Gets a Place in History, Valley News West Lebanon NH, Thursday, March 31, 1983, page 13.
  76. ^ Davis 2000:14
  77. ^ van Heijenoort 1967:81ff
  78. ^ van Heijenoort's commentary on Frege's Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought in van Heijenoort 1967:1
  79. ^ Dixon 1906, cf. Kleene 1952:36–40
  80. ^ cf. footnote in Alonzo Church 1936a in Davis 1965:90 and 1936b in Davis 1965:110
  81. ^ Kleene 1935–6 in Davis 1965:237ff, Kleene 1943 in Davis 1965:255ff
  82. ^ Church 1936 in Davis 1965:88ff
  83. ^ cf. "Formulation I", Post 1936 in Davis 1965:289–290
  84. ^ Turing 1936–7 in Davis 1965:116ff
  85. ^ Rosser 1939 in Davis 1965:226
  86. ^ Kleene 1943 in Davis 1965:273–274
  87. ^ Kleene 1952:300, 317
  88. ^ Kleene 1952:376
  89. ^ Post 1936 in Davis 1965:289–290
  90. ^ Turing 1936 in Davis 1965, Turing 1939 in Davis 1965:160
  91. ^ Hodges, p. 96
  92. ^ Turing 1936–7 in Davis 1965:116
  93. ^ a b Turing 1936–7 in Davis 1965:136
  94. ^ Turing 1939 in Davis 1965:160

References[edit]

  • Axt, P. (1959) On a Subrecursive Hierarchy and Primitive Recursive Degrees, Transactions of the American Mathematical Society 92, pp. 85–105
  • Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. ISBN 0-07-004357-4.
  • Blass, Andreas; Gurevich, Yuri (2003). "Algorithms: A Quest for Absolute Definitions" (PDF). Bulletin of European Association for Theoretical Computer Science. 81. Includes an excellent bibliography of 56 references.
  • Boolos, George; Jeffrey, Richard (1974, 1980, 1989, 1999). Computability and Logic (4th ed.). Cambridge University Press, London. ISBN 0-521-20402-X. Cf. Chapter 3 Turing machines where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
  • Burgin, M. Super-recursive algorithms, Monographs in computer science, Springer, 2005. ISBN 0-387-95569-0
  • Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
  • Church, Alonzo (1936a). "An Unsolvable Problem of Elementary Number Theory". The American Journal of Mathematics. 58 (2): 345–363. doi:10.2307/2371045. Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
  • Church, Alonzo (1936b). "A Note on the Entscheidungsproblem". The Journal of Symbolic Logic. 1 (1): 40–41. doi:10.2307/2269326. Church, Alonzo (1936). "Correction to a Note on the Entscheidungsproblem". The Journal of Symbolic Logic. 1 (3): 101–102. doi:10.2307/2269030. Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes.
  • Daffa', Ali Abdullah al- (1977). The Muslim contribution to mathematics. London: Croom Helm. ISBN 0-85664-464-1.
  • Davis, Martin (1965). The Undecidable: Basic Papers On Undecidable Propositions, Unsolvable Problems and Computable Functions. New York: Raven Press. ISBN 0486432289. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name.
  • Davis, Martin (2000). Engines of Logic: Mathematicians and the Origin of the Computer. New York: W. W. Nortion. ISBN 0393322297. Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.
  • Public Domain This article incorporates public domain material from Paul E. Black. "algorithm". Dictionary of Algorithms and Data Structures. NIST.
  • Dennett, Daniel (1995). Darwin's Dangerous Idea. New York: Touchstone/Simon & Schuster. ISBN 0684802902.
  • Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pages 77–111. Includes bibliography of 33 sources.
  • Kleene, Stephen C. (1936). "General Recursive Functions of Natural Numbers". Mathematische Annalen. 112 (5): 727–742. doi:10.1007/BF01565439. Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result).
  • Kleene, Stephen C. (1943). "Recursive Predicates and Quantifiers". American Mathematical Society Transactions. 54 (1): 41–73. doi:10.2307/1990131. Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis" (Kleene 1952:317) (i.e., the Church thesis).
  • Kleene, Stephen C. (First Edition 1952). Introduction to Metamathematics (Tenth Edition 1991 ed.). North-Holland Publishing Company. ISBN 0720421039. Excellent—accessible, readable—reference source for mathematical "foundations".
  • Knuth, Donald (1997). Fundamental Algorithms, Third Edition. Reading, Massachusetts: Addison–Wesley. ISBN 0201896834.
  • Knuth, Donald (1969). Volume 2/Seminumerical Algorithms, The Art of Computer Programming First Edition. Reading, Massachusetts: Addison–Wesley.
  • Kosovsky, N. K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981
  • Kowalski, Robert (1979). "Algorithm=Logic+Control". Communications of the ACM. 22 (7): 424–436. doi:10.1145/359131.359136. ISSN 0001-0782.
  • A. A. Markov (1954) Theory of Algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.]
  • Minsky, Marvin (1967). Computation: Finite and Infinite Machines (First ed.). Prentice-Hall, Englewood Cliffs, NJ. ISBN 0131654497. Minsky expands his "...idea of an algorithm—an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
  • Post, Emil (1936). "Finite Combinatory Processes, Formulation I". The Journal of Symbolic Logic. 1 (3): 103–105. doi:10.2307/2269031. Reprinted in The Undecidable, p. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis.
  • Rogers, Jr., Hartley (1987). Theory of Recursive Functions and Effective Computability. The MIT Press. ISBN 0-262-68052-1 (pbk.).
  • Rosser, J.B. (1939). "An Informal Exposition of Proofs of Godel's Theorem and Church's Theorem". Journal of Symbolic Logic. 4. Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable)
  • Scott, Michael L. (2009). Programming Language Pragmatics (3rd ed.). Morgan Kaufmann Publishers/Elsevier. ISBN 978-0-12-374514-9.
  • Sipser, Michael (2006). Introduction to the Theory of Computation. PWS Publishing Company. ISBN 053494728X.
  • Stone, Harold S. (1972). Introduction to Computer Organization and Data Structures (1972 ed.). McGraw-Hill, New York. ISBN 0070617260. Cf. in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
  • Tausworthe, Robert C (1977). Standardized Development of Computer Software Part 1 Methods. Englewood Cliffs NJ: Prentice-Hall, Inc. ISBN 0-13-842195-1.
  • Turing, Alan M. (1936–7). "On Computable Numbers, With An Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, Series 2. 42: 230–265. doi:10.1112/plms/s2-42.1.230. Corrections, ibid, vol. 43 (1937) pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College Cambridge UK.
  • Turing, Alan M. (1939). "Systems of Logic Based on Ordinals". Proceedings of the London Mathematical Society, Series 2. 45: 161–228. doi:10.1112/plms/s2-45.1.161. Reprinted in The Undecidable, p. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton USA.
  • United States Patent and Trademark Office (2006), 2106.02 Mathematical Algorithms - 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006.


An LR parser can be created from a context-free grammar by hand, but LR parsers are difficult to produce that way and are usually constructed by a program called a parser generator or a compiler-compiler. Depending on how the parsing table is generated, these parsers are called simple LR parsers (SLR), look-ahead LR parsers (LALR), or canonical LR parsers. LALR parsers have more language-recognition power than SLR parsers, and canonical LR parsers have more recognition power than LALR parsers.


The syntax of many programming languages can be defined by an LR(1) grammar, or something close to it. For this reason, LR parsers are often used by compilers to perform syntax analysis of source code.

A deterministic context-free language is a language for which some LR(k) grammar exists. Every LR(k) grammar for k > 1 can be mechanically transformed into an LR(1) grammar for the same language, while an LR(0) grammar for the same language may not exist; the LR(0) languages are a proper subset of the deterministic ones.

LR parsing can be generalized to arbitrary context-free language parsing without a performance penalty for LR(k) grammars, although parsing a grammar that is not LR(k) is an order of magnitude slower (cubic instead of quadratic in relation to the input length). In practice this is rarely a problem, because most programming languages can be expressed with LR(k) grammars, where k is a small constant (usually 1).

LR parsing can handle a larger range of languages than LL parsing, and is also better at error reporting: it detects syntactic errors as soon as the input fails to conform to the grammar. This is in contrast to an LL(k) parser (or, even worse, an LL(*) parser), which may defer error detection to a different branch of the grammar due to backtracking, often making errors harder to localize across disjunctions with long common prefixes.

Constructing LR(0) parsing tables[edit]

Items[edit]

The construction of these parsing tables is based on the notion of LR(0) items (simply called items here), which are grammar rules with a special dot added somewhere in the right-hand side. For example, the rule E → E + B has the following four corresponding items:

E → • E + B
E → E • + B
E → E + • B
E → E + B •

Rules of the form A → ε have only a single item A → •. The item E → E • + B, for example, indicates that the parser has recognized a string corresponding with E on the input stream and now expects to read a '+' followed by another string corresponding with B.
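To make this concrete, items can be represented in Python as (head, body, dot) triples; this encoding, used by the sketches throughout this section, is an illustrative choice rather than a standard one:

# An LR(0) item is a rule plus a dot position: (head, body, dot),
# where dot is an index into body ranging over 0 .. len(body).
rule = ("E", ("E", "+", "B"))

def items_for(head, body):
    """One item per possible dot position in the rule's right-hand side."""
    return [(head, body, dot) for dot in range(len(body) + 1)]

def show(item):
    head, body, dot = item
    return head + " → " + " ".join(body[:dot] + ("•",) + body[dot:])

for it in items_for(*rule):
    print(show(it))   # E → • E + B, E → E • + B, E → E + • B, E → E + B •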

Item sets[edit]

It is usually not possible to characterize the state of the parser with a single item because it may not know in advance which rule it is going to use for reduction. For example, if there is also a rule E → E * B then the items E → E • + B and E → E • * B will both apply after a string corresponding with E has been read. Therefore, we will characterize the state of the parser by a set of items, in this case the set { E → E • + B, E → E • * B }.

Extension of Item Set by expansion of non-terminals[edit]

An item with a dot before a nonterminal, such as E → E + • B, indicates that the parser expects to parse the nonterminal B next. To ensure the item set contains all possible rules the parser may be in the midst of parsing, it must include all items describing how B itself will be parsed. This means that if there are rules such as B → 1 and B → 0 then the item set must also include the items B → • 1 and B → • 0. In general this can be formulated as follows:

If there is an item of the form A → v • B w in an item set, and the grammar contains a rule of the form B → w', then the item B → • w' should also be in the item set.


Closure of item sets[edit]

Thus, any set of items can be extended by recursively adding all the appropriate items until all nonterminals preceded by dots are accounted for. The minimal extension is called the closure of an item set and written as clos(I) where I is an item set. It is these closed item sets that we will take as the states of the parser, although only the ones that are actually reachable from the begin state will be included in the tables.
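As a rough illustration in Python, the closure computation is a simple fixed-point loop; the dict-of-alternatives grammar encoding here is an assumption of this sketch, not a standard API:

# Closure of an item set. A grammar is represented as a dict mapping each
# nonterminal to the list of its right-hand sides (tuples of symbols).
grammar = {
    "E": [("E", "*", "B"), ("E", "+", "B"), ("B",)],
    "B": [("0",), ("1",)],
}

def closure(items, grammar):
    """Add B -> • w' for every nonterminal B that appears right after a dot."""
    result = set(items)
    queue = list(items)
    while queue:
        head, body, dot = queue.pop()
        if dot < len(body) and body[dot] in grammar:   # the dot precedes a nonterminal
            for rhs in grammar[body[dot]]:
                item = (body[dot], rhs, 0)
                if item not in result:
                    result.add(item)
                    queue.append(item)
    return frozenset(result)

# closure({("E", ("E", "+", "B"), 2)}, grammar) contains, in addition,
# the items B -> • 0 and B -> • 1.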

Augmented grammar[edit]

Before we start determining the transitions between the different states, the grammar is always augmented with an extra rule

(0) S → E

where S is a new start symbol and E the old start symbol. The parser will use this rule for reduction exactly when it has accepted the input string.

For our example we will take the same grammar as before and augment it:

(0) S → E
(1) E → E * B
(2) E → E + B
(3) E → B
(4) B → 0
(5) B → 1

It is for this augmented grammar that we will determine the item sets and the transitions between them.
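For use in the code sketches that follow, the augmented grammar can be written down as plain data; this encoding (a numbered rule list plus a dict of alternatives) is an assumption of these sketches:

# The augmented example grammar as data. Rule numbers match the list above.
RULES = [
    ("S", ("E",)),           # (0) S -> E  (the augmenting rule)
    ("E", ("E", "*", "B")),  # (1)
    ("E", ("E", "+", "B")),  # (2)
    ("E", ("B",)),           # (3)
    ("B", ("0",)),           # (4)
    ("B", ("1",)),           # (5)
]
GRAMMAR = {
    "S": [("E",)],
    "E": [("E", "*", "B"), ("E", "+", "B"), ("B",)],
    "B": [("0",), ("1",)],
}
TERMINALS = {"*", "+", "0", "1"}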

Table construction[edit]

Finding the reachable item sets and the transitions between them[edit]

The first step of constructing the tables consists of determining the transitions between the closed item sets. These transitions will be determined as if we are considering a finite automaton that can read terminals as well as nonterminals. The begin state of this automaton is always the closure of the first item of the added rule: S → • E:

Item set 0
S → • E
+ E → • E * B
+ E → • E + B
+ E → • B
+ B → • 0
+ B → • 1

The boldfaced "+" in front of an item indicates the items that were added for the closure (not to be confused with the mathematical '+' operator which is a terminal). The original items without a "+" are called the kernel of the item set.

Starting at the begin state (S0), we will now determine all the states that can be reached from this state. The possible transitions for an item set can be found by looking at the symbols (terminals and nonterminals) right after the dots; in the case of item set 0 these are the terminals '0' and '1' and the nonterminals E and B. To find the item set that a symbol x leads to, we use the following procedure (a code sketch of it follows the list):

  1. Take the set, S, of all items in the current item set where there is a dot in front of some symbol x.
  2. For each item in S, move the dot to the right of x.
  3. Close the resulting set of items.
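Continuing the sketches above (closure and the grammar encoding as defined earlier), this three-step procedure might look as follows; the name goto_set is an illustrative choice:

# Successor item set for a symbol x (steps 1-3 above).
def goto_set(item_set, x, grammar):
    moved = {(head, body, dot + 1)                   # step 2: move the dot over x
             for head, body, dot in item_set
             if dot < len(body) and body[dot] == x}  # step 1: items with the dot before x
    return closure(moved, grammar)                   # step 3: close the resulting set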

For the terminal '0' this results in:

Item set 1
B → 0 •

and for the terminal '1' in:

Item set 2
B → 1 •

and for the nonterminal E in:

Item set 3
S → E •
E → E • * B
E → E • + B

and for the nonterminal B in:

Item set 4
E → B •

Note that the closure does not add new items in all cases. We continue this process until no more new item sets are found. For the item sets 1, 2, and 4 there will be no transitions since the dot is not in front of any symbol. For item set 3 we see that the dot is in front of the terminals '*' and '+'. For '*' the transition goes to:

Item set 5
E → E * • B
+ B → • 0
+ B → • 1

and for '+' the transition goes to:

Item set 6
E → E + • B
+ B → • 0
+ B → • 1

For item set 5 we have to consider the terminals '0' and '1' and the nonterminal B. For the terminals we see that the resulting closed item sets are equal to the already found item sets 1 and 2, respectively. For the nonterminal B the transition goes to:

Item set 7
E → E * B •

For item set 6 we also have to consider the terminals '0' and '1' and the nonterminal B. As before, the resulting item sets for the terminals are equal to the already found item sets 1 and 2. For the nonterminal B the transition goes to:

Item set 8
E → E + B •

These final item sets have no symbols beyond their dots, so no more new item sets are added and we are finished. The finite automaton, with the item sets as its states, is shown further below.
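The iteration just carried out is, in code, a standard worklist loop over goto_set; this sketch again relies on the illustrative encodings above (GRAMMAR, RULES, closure, goto_set) and is not a reference implementation:

# Enumerate all reachable item sets and the transitions between them,
# starting from the closure of the kernel item S -> • E.
def all_item_sets(grammar, rules):
    head, body = rules[0]                       # rule (0): S -> E
    start = closure({(head, body, 0)}, grammar)
    states, work, transitions = [start], [start], {}
    while work:
        s = work.pop()
        symbols = {body[dot] for _, body, dot in s if dot < len(body)}
        for x in symbols:
            t = goto_set(s, x, grammar)
            if t not in states:                 # a new item set was found
                states.append(t)
                work.append(t)
            transitions[(states.index(s), x)] = states.index(t)
    return states, transitions

With GRAMMAR and RULES as above this yields nine item sets; their numbering may differ from the numbering used in this article, but the sets and the transitions between them are the same.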

[Figure: a deterministic finite automaton that can also make parsing decisions.]

The transition table for the automaton now looks as follows:
Item Set * + 0 1 E B
0     1 2 3 4
1            
2            
3 5 6        
4            
5     1 2   7
6     1 2   8
7            
8            

Constructing the action and goto tables[edit]

From this table and the found item sets we construct the action and goto table as follows:

  1. the columns for nonterminals are copied to the goto table
  2. the columns for the terminals are copied to the action table as shift actions
  3. an extra column for '$' (end of input) is added to the action table that contains acc for every item set that contains S → E •.
  4. if an item set i contains an item of the form A → w • and A → w is rule m with m > 0, then the row for state i in the action table is completely filled with the reduce action rm.

The reader may verify that this indeed results in the action and goto tables presented in the concrete example later on.
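As a rough mechanization of these four steps (LR(0) only, and under the same illustrative encodings as the earlier sketches):

# Build the LR(0) action and goto tables from the item sets.
def lr0_tables(grammar, rules, terminals):
    states, transitions = all_item_sets(grammar, rules)
    action, goto_table = {}, {}
    for (i, x), j in transitions.items():
        if x in terminals:
            action[(i, x)] = ("shift", j)            # step 2
        else:
            goto_table[(i, x)] = j                   # step 1
    for i, s in enumerate(states):
        for head, body, dot in s:
            if dot == len(body):                     # a completed item A -> w •
                m = rules.index((head, body))
                if m == 0:
                    action[(i, "$")] = ("accept",)   # step 3
                else:
                    for t in sorted(terminals) + ["$"]:  # step 4: fill the whole row
                        # A real generator would report a conflict here if the
                        # cell were already occupied (see the conflicts section).
                        action[(i, t)] = ("reduce", m)
    return action, goto_table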

A note about LR(0) versus SLR and LALR parsing[edit]

Note that only step 4 of the above procedure produces reduce actions, and so all reduce actions must occupy an entire table row, causing the reduction to occur regardless of the next symbol in the input stream. This is why these are LR(0) parse tables: they don't do any lookahead (that is, they look ahead zero symbols) before deciding which reduction to perform. A grammar that needs lookahead to disambiguate reductions would require a parse table row containing different reduce actions in different columns, and the above procedure is not capable of creating such rows.

Refinements to the LR(0) table construction procedure (such as SLR and LALR) are capable of constructing reduce actions that do not occupy entire rows. Therefore, they are capable of parsing more grammars than LR(0) parsers.

Conflicts in the constructed tables[edit]

The automaton is constructed in such a way that it is guaranteed to be deterministic. However, when reduce actions are added to the action table it can happen that the same cell is filled with a reduce action and a shift action (a shift-reduce conflict) or with two different reduce actions (a reduce-reduce conflict). It can be shown that when this happens the grammar is not an LR(0) grammar.

A small example of a non-LR(0) grammar with a shift-reduce conflict is:

(1) E → 1 E
(2) E → 1

One of the item sets we then find is:

Item set 1
E → 1 • E
E → 1 •
+ E → • 1 E
+ E → • 1

There is a shift-reduce conflict in this item set because in the cell in the action table for this item set and the terminal '1' there will be both a shift action to state 1 and a reduce action with rule 2.

A small example of a non-LR(0) grammar with a reduce-reduce conflict is:

(1) E → A 1
(2) E → B 2
(3) A → 1
(4) B → 1

In this case we obtain the following item set:

Item set 1
A → 1 •
B → 1 •

There is a reduce-reduce conflict in this item set because in the cells in the action table for this item set there will be both a reduce action for rule 3 and one for rule 4.

Both examples above can be solved by letting the parser use the follow set (see LL parser) of a nonterminal A to decide whether it is going to use one of A's rules for a reduction; it will only use the rule A → w for a reduction if the next symbol on the input stream is in the follow set of A. This solution results in so-called simple LR (SLR) parsers; a sketch of the change follows.
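A sketch of that refinement, assuming a precomputed follow dict mapping each nonterminal to its follow set (the FOLLOW computation itself is omitted here): step 4 of the table construction would place reduce actions only in the follow-set columns.

# SLR refinement of step 4: reduce by rule m (A -> w) only on terminals
# in FOLLOW(A), instead of filling the entire row for state i.
def slr_reduce_entries(i, item_set, rules, follow, action):
    for head, body, dot in item_set:
        if dot == len(body):
            m = rules.index((head, body))
            if m > 0:
                for t in follow[head]:
                    action[(i, t)] = ("reduce", m)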

Architecture of LR parsers[edit]

Figure 1. Architecture of a table-based bottom-up parser.

Conceptually, an LR parser is a recursive program that can be proven correct by direct computation, and that can be implemented more efficiently (in time) as a recursive ascent parser, a set of mutually recursive functions for every grammar, much like a recursive descent parser. Conventionally, however, LR parsers are presented and implemented as table-based stack machines in which the call stack of the underlying recursive program is explicitly manipulated.

A table-driven bottom-up parser can be schematically presented as in Figure 1. The following describes a rightmost derivation by this parser.

General case[edit]

The parser is a state machine. It consists of:

  • an input buffer
  • a stack on which is stored a list of states it has been in
  • a goto table that prescribes to which new state it should move
  • an action table that gives a grammar rule to apply given the current state and the current terminal in the input stream
  • a set of context-free grammar rules


Since the LR parser reads input from left to right but needs to produce a rightmost derivation, it uses reductions, rather than derivations, to process the input. That is, the algorithm works by creating a "leftmost reduction" of the input. The end result, when reversed, will be a rightmost derivation.

The LR parsing algorithm works as follows (a code sketch follows the list):

  1. The stack is initialized with [0]. The current state will always be the state that is at the top of the stack.
  2. Given the current state and the current terminal on the input stream an action is looked up in the action table. There are four cases:
    • a shift sn:
      • the current terminal is removed from the input stream
      • the state n is pushed onto the stack and becomes the current state
    • a reduce rm:
      • the number m is written to the output stream
      • for every symbol in the right-hand side of rule m a state is removed from the stack (i.e., if the right-hand side of rule m consists of 3 symbols, the top 3 states are removed from the stack)
      • given the state that is then on top of the stack and the left-hand side of rule m a new state is looked up in the goto table and made the new current state by pushing it onto the stack
    • an accept: the string is accepted
    • no action: a syntax error is reported
  3. Step 2 is repeated until either the string is accepted or a syntax error is reported.
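Written out in Python under the same illustrative table encoding as the earlier sketches (action maps (state, terminal) to ("shift", n), ("reduce", m) or ("accept",); goto_table maps (state, nonterminal) to a state), the loop might look like this:

# A minimal sketch of the LR parsing loop.
def lr_parse(tokens, action, goto_table, rules):
    stack = [0]                                 # step 1: begin state on the stack
    stream = list(tokens) + ["$"]               # $ marks the end of the input
    output = []
    while True:
        act = action.get((stack[-1], stream[0]))
        if act is None:                         # no action: syntax error
            raise SyntaxError("unexpected %r in state %d" % (stream[0], stack[-1]))
        if act[0] == "shift":
            del stream[0]                       # remove the current terminal
            stack.append(act[1])                # push state n
        elif act[0] == "reduce":
            m = act[1]
            head, body = rules[m]
            output.append(m)                    # write m to the output stream
            del stack[len(stack) - len(body):]  # pop one state per RHS symbol
            stack.append(goto_table[(stack[-1], head)])  # goto on the LHS
        else:                                   # ("accept",)
            return output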

Concrete example[edit]

To explain its workings we will use the following small grammar whose start symbol is E:

(1) E → E * B
(2) E → E + B
(3) E → B
(4) B → 0
(5) B → 1

and parse the following input:

1 + 1

Action and goto tables[edit]

The two LR(0) parsing tables for this grammar look as follows:

state action goto
  * + 0 1 $ E B
0     s1 s2   3 4
1 r4 r4 r4 r4 r4    
2 r5 r5 r5 r5 r5    
3 s5 s6     acc    
4 r3 r3 r3 r3 r3    
5     s1 s2     7
6     s1 s2     8
7 r1 r1 r1 r1 r1    
8 r2 r2 r2 r2 r2    

The action table is indexed by a state of the parser and a terminal (including a special terminal $ that indicates the end of the input stream) and contains three types of actions:

  • shift, which is written as 'sn' and indicates that the next state is n
  • reduce, which is written as 'rm' and indicates that a reduction with grammar rule m should be performed
  • accept, which is written as 'acc' and indicates that the parser accepts the string in the input stream.

The goto table is indexed by a state of the parser and a nonterminal and simply indicates what the next state of the parser will be if it has recognized a certain nonterminal. This table is used to find the next state after every reduction: the next state is found by looking up the goto table entry for the top of the stack (i.e., the current state) and the reduced rule's left-hand side (i.e., the nonterminal).
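Encoded in the form used by the lr_parse sketch above, with state and rule numbers exactly as in the tables, the two tables become:

# The action and goto tables above, written out as Python dicts.
RULES = [("S", ("E",)), ("E", ("E", "*", "B")), ("E", ("E", "+", "B")),
         ("E", ("B",)), ("B", ("0",)), ("B", ("1",))]
ACTION = {
    (0, "0"): ("shift", 1), (0, "1"): ("shift", 2),
    (3, "*"): ("shift", 5), (3, "+"): ("shift", 6), (3, "$"): ("accept",),
    (5, "0"): ("shift", 1), (5, "1"): ("shift", 2),
    (6, "0"): ("shift", 1), (6, "1"): ("shift", 2),
}
for t in ["*", "+", "0", "1", "$"]:             # the completely filled reduce rows
    ACTION[(1, t)] = ("reduce", 4)
    ACTION[(2, t)] = ("reduce", 5)
    ACTION[(4, t)] = ("reduce", 3)
    ACTION[(7, t)] = ("reduce", 1)
    ACTION[(8, t)] = ("reduce", 2)
GOTO = {(0, "E"): 3, (0, "B"): 4, (5, "B"): 7, (6, "B"): 8}

print(lr_parse(["1", "+", "1"], ACTION, GOTO, RULES))   # prints [5, 3, 5, 2]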

Parsing procedure[edit]

The table below illustrates each step in the process. Here the state refers to the element at the top of the stack (the right-most element), and the next action is determined by referring to the action table above. Also note that a $ is appended to the input string to denote the end of the stream.

State Input Stream Output Stream Stack Next Action
0 1+1$ [0] Shift 2
2 +1$ [0,2] Reduce 5
4 +1$ 5 [0,4] Reduce 3
3 +1$ 5,3 [0,3] Shift 6
6 1$ 5,3 [0,3,6] Shift 2
2 $ 5,3 [0,3,6,2] Reduce 5
8 $ 5,3,5 [0,3,6,8] Reduce 2
3 $ 5,3,5,2 [0,3] Accept

Walkthrough[edit]

The parser starts out with the stack containing just the initial state ('0'):

[0]

The first symbol from the input string that the parser sees is '1'. In order to find out what the next action is (shift, reduce, accept or error), the action table is indexed with the current state (remember that the "current state" is just whatever is on the top of the stack), which in this case is 0, and the current input symbol, which is '1'. The action table specifies a shift to state 2, and so state 2 is pushed onto the stack (again, remember that all the state information is in the stack, so "shifting to state 2" is the same thing as pushing 2 onto the stack). The resulting stack is

[0 '1' 2]

where the top of the stack is 2. For the sake of explanation we also show the symbol (e.g., '1', B) that caused the transition to the next state, although strictly speaking it is not part of the stack.

In state 2 the action table says that regardless of what terminal we see on the input stream, we should do a reduction with grammar rule 5. If the table is correct, this means that the parser has just recognized the right-hand side of rule 5, which is indeed the case. In this case we write 5 to the output stream, pop one state from the stack (since the right-hand side of the rule has one symbol), and push on the stack the state from the cell in the goto table for state 0 and B, i.e., state 4. The resulting stack is:

[0 B 4]

However, in state 4 the action table says we should now do a reduction with rule 3. So we write 3 to the output stream, pop one state from the stack, and find the new state in the goto table for state 0 and E, which is state 3. The resulting stack:

[0 E 3]

The next terminal that the parser sees is a '+' and according to the action table it should then go to state 6:

[0 E 3 '+' 6]

Note that the resulting stack can be interpreted as the history of a finite state automaton that has just read a nonterminal E followed by a terminal '+'. The transition table of this automaton is defined by the shift actions in the action table and the goto actions in the goto table.

The next terminal is now '1' and this means that we perform a shift and go to state 2:

[0 E 3 '+' 6 '1' 2]

Just like the previous '1', this one is reduced to B, giving the following stack:

[0 E 3 '+' 6 B 8]

Again note that the stack corresponds with a list of states of a finite automaton that has read a nonterminal E, followed by a '+' and then a nonterminal B. In state 8 we always perform a reduce with rule 2. Note that the top 3 states on the stack correspond with the 3 symbols in the right-hand side of rule 2. Performing this reduction, we write 2 to the output stream, pop three states from the stack, and push the state found in the goto table for state 0 and E, which is state 3:

[0 E 3]

Finally, we read a '$' from the input stream which means that according to the action table (the current state is 3) the parser accepts the input string. The rule numbers that will then have been written to the output stream will be [5, 3, 5, 2] which is indeed a rightmost derivation of the string "1 + 1" in reverse.
