Hyperoperation

In mathematics, the hyperoperation sequence[nb 1] is an infinite sequence of arithmetic operations (called hyperoperations)[1][11][13] that starts with the unary operation of successor, then continues with the binary operations of addition, multiplication and exponentiation, after which the sequence proceeds with further binary operations extending beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence is named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration, pentation)[5] and can be written using n − 2 arrows in Knuth's up-arrow notation. Each hyperoperation may be understood recursively in terms of the previous one by:

$a[n]b = a[n-1]\bigl(a[n-1]\bigl(\cdots(a[n-1]\,a)\cdots\bigr)\bigr), \qquad n \ge 2,$

with b occurrences of a on the right hand side of the equation.

It may also be defined according to the recursion rule part of the definition, as in Knuth's up-arrow version of the Ackermann function:

$a[n]b = a[n-1]\bigl(a[n](b-1)\bigr)$

This recursion rule is common to many variants of hyperoperations (see below).

Definition

The hyperoperation sequence is the sequence of binary operations $H_n \colon \mathbb{N}^2 \to \mathbb{N}$, indexed by $n \in \mathbb{N}$, defined recursively as follows:

$H_n(a, b) = \begin{cases} b + 1 & \text{if } n = 0, \\ a & \text{if } n = 1 \text{ and } b = 0, \\ 0 & \text{if } n = 2 \text{ and } b = 0, \\ 1 & \text{if } n \ge 3 \text{ and } b = 0, \\ H_{n-1}\bigl(a,\, H_n(a, b - 1)\bigr) & \text{otherwise.} \end{cases}$

(Note that for n = 0, the binary operation essentially reduces to a unary operation, the successor function, by ignoring the first argument.)

For n = 0, 1, 2, 3, this definition reproduces the basic arithmetic operations of successor (which is a unary operation), addition, multiplication, and exponentiation, respectively, as

$H_0(a, b) = 1 + b,$
$H_1(a, b) = a + b,$
$H_2(a, b) = a \cdot b,$
$H_3(a, b) = a^b,$

and for n ≥ 4 it extends these basic operations beyond exponentiation to what can be written in Knuth's up-arrow notation as

$H_4(a, b) = a \uparrow\uparrow b,$
$H_5(a, b) = a \uparrow\uparrow\uparrow b,$
$\vdots$
$H_n(a, b) = a \uparrow^{n-2} b,$
$\vdots$
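The recursive definition above can be evaluated directly by a short program. The following is a minimal Python sketch (the function name hyperop is a choice of this illustration, not notation from the literature):

```python
def hyperop(n, a, b):
    """Hyperoperation H_n(a, b) for natural numbers, following the
    recursive definition above."""
    if n == 0:
        return b + 1                      # successor (ignores a)
    if b == 0:
        if n == 1:
            return a                      # a + 0 = a
        if n == 2:
            return 0                      # a * 0 = 0
        return 1                          # a [n] 0 = 1 for n >= 3
    # recursion rule: H_n(a, b) = H_{n-1}(a, H_n(a, b - 1))
    return hyperop(n - 1, a, hyperop(n, a, b - 1))

# n = 1, 2, 3 reproduce addition, multiplication and exponentiation,
# and n = 4 gives tetration:
assert hyperop(1, 2, 3) == 5
assert hyperop(2, 2, 3) == 6
assert hyperop(3, 2, 3) == 8
assert hyperop(4, 2, 3) == 16    # 2^(2^2)
```

Because the values grow extremely quickly, only very small arguments are practical for n ≥ 4.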

Knuth's notation could be extended to negative indices ≥ −2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:

$H_n(a, b) = a \uparrow^{n-2} b \quad \text{for } n \ge 0.$

The hyperoperations can thus be seen as an answer to the question "what's next" in the sequence: successor, addition, multiplication, exponentiation, tetration, and so on. Noting that

$a + b = 1 + \bigl(a + (b - 1)\bigr),$
$a \cdot b = a + \bigl(a \cdot (b - 1)\bigr),$
$a^b = a \cdot \bigl(a^{b - 1}\bigr),$

the relationship between basic arithmetic operations is illustrated, allowing the higher operations to be defined naturally as above. The parameters of the hyperoperation hierarchy are sometimes referred to by their analogous exponentiation term;[14] so a is the base, b is the exponent (or hyperexponent),[12] and n is the rank (or grade).[6]

In common terms, the hyperoperations are ways of compounding numbers, each growing faster than the last because each is defined by iterating the previous one. The concepts of successor, addition, multiplication and exponentiation are all hyperoperations; the successor operation (producing x + 1 from x) is the most primitive, addition specifies the number of times 1 is to be added to a number to produce a final value, multiplication specifies the number of times a number is to be added to itself, and exponentiation specifies the number of times a number is to be multiplied by itself.

Examples

This is a list of the first seven hyperoperations.

n | Operation, H_n(a, b) | Definition | Names | Domain
0 | 1 + b | 1 + b | hyper0, increment, successor, zeration | b arbitrary
1 | a + b | a + (1 + 1 + ⋯ + 1), with b copies of 1 | hyper1, addition | arbitrary
2 | a · b | a + a + ⋯ + a, with b copies of a | hyper2, multiplication | arbitrary
3 | a^b | a · a · ⋯ · a, with b copies of a | hyper3, exponentiation | a > 0, b real, or a non-zero, b an integer, with some multivalued extensions to complex numbers
4 | a ↑↑ b | a^(a^(⋯^a)), a tower of b copies of a | hyper4, tetration | a > 0 or a non-zero integer, b an integer > 0 (with some proposed extensions)
5 | a ↑↑↑ b or a ↑³ b | a ↑↑ (a ↑↑ (⋯ ↑↑ a)), with b copies of a | hyper5, pentation | a and b integers > 0
6 | a ↑↑↑↑ b or a ↑⁴ b | a ↑↑↑ (a ↑↑↑ (⋯ ↑↑↑ a)), with b copies of a | hyper6, hexation | a and b integers > 0

History

One of the earliest discussions of hyperoperations was that of Albert Bennett[6] in 1914, who developed some of the theory of commutative hyperoperations (see below). About 12 years later, Wilhelm Ackermann defined the function $\phi(a, b, n)$,[15] which somewhat resembles the hyperoperation sequence.

In his 1947 paper,[5] R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations, and also suggested the Greek names tetration, pentation, hexation, etc., for the extended operations beyond exponentiation (because they correspond to the indices 4, 5, 6, etc.). As a three-argument function, e.g., $G(n, a, b) = H_n(a, b)$, the hyperoperation sequence as a whole is seen to be a version of the original Ackermann function $\phi(a, b, n)$ (recursive but not primitive recursive) as modified by Goodstein to incorporate the primitive successor function together with the other three basic operations of arithmetic (addition, multiplication, exponentiation), and to make a more seamless extension of these beyond exponentiation.

The original three-argument Ackermann function $\phi$ uses the same recursion rule as does Goodstein's version of it (i.e., the hyperoperation sequence), but differs from it in two ways. First, $\phi(a, b, n)$ defines a sequence of operations starting from addition (n = 0) rather than the successor function, then multiplication (n = 1), exponentiation (n = 2), etc. Secondly, the initial conditions for $\phi$ result in $\phi(a, b, 3) = a[4](b + 1)$, thus differing from the hyperoperations beyond exponentiation.[7][16][17] The significance of the b + 1 in the previous expression is that $\phi(a, b, 3)$ is a power tower with b exponentiations, i.e. b + 1 copies of a, so that b counts the number of operators (exponentiations), rather than counting the number of operands ("a"s) as does the b in $a[4]b$, and so on for the higher-level operations. (See the Ackermann function article for details.)
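The b + 1 offset can be checked numerically. Below is a minimal Python sketch of the original three-argument function (the name phi is a choice of this illustration; the initial conditions follow the description in the variant section below):

```python
def phi(a, b, n):
    """Ackermann's original three-argument function (sketch):
    phi(a, b, 0) = a + b, and for b = 0 the values are
    0 (n = 1), 1 (n = 2) and a (n >= 3)."""
    if n == 0:
        return a + b
    if b == 0:
        return {1: 0, 2: 1}.get(n, a)
    # the same recursion rule as the hyperoperation sequence
    return phi(a, phi(a, b - 1, n), n - 1)

# phi agrees with the hyperoperations up to exponentiation ...
assert phi(2, 3, 1) == 2 * 3
assert phi(2, 3, 2) == 2 ** 3
# ... but one level up it is offset: phi(a, b, 3) = a[4](b + 1),
# a power tower of b + 1 copies of a.
assert phi(2, 3, 3) == 2 ** 2 ** 2 ** 2   # tower of 4 twos = 65536
```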

Notations

This is a list of notations that have been used for hyperoperations.

Name | Notation | Equivalent to | Comment
Knuth's up-arrow notation | a ↑ⁿ⁻² b | H_n(a, b) | Used by Knuth[18] (for n ≥ 2), and found in several reference books.[19][20]
Goodstein's notation | G(n, a, b) | H_n(a, b) | Used by Reuben Goodstein.[5]
Original Ackermann function | ϕ(a, b, n) | H_{n+1}(a, b) for n ≤ 2; differs beyond exponentiation (see History) | Used by Wilhelm Ackermann.[15]
Ackermann–Péter function | A(n, b) | H_n(2, b + 3) − 3 | This corresponds to hyperoperations for base 2.
Nambiar's notation | a ⊗ⁿ b | H_n(a, b) | Used by Nambiar[21]
Box notation | a [n] b (with n written in a box) | H_n(a, b) | Used by Rubtsov and Romerio.[13][14]
Superscript notation | a ⁽ⁿ⁾ b | H_n(a, b) | Used by Robert Munafo.[10]
Subscript notation | a ₍ₙ₎ b | the lower hyperoperations (see below) | Used for lower hyperoperations by Robert Munafo.[10]
Square bracket notation | a[n]b | H_n(a, b) | Used in many online forums; convenient for ASCII.
Conway chained arrow notation | a → b → (n − 2) | H_n(a, b) for n ≥ 3 | Used by John Horton Conway
Bowers' Exploding Array Function | {a, b, n − 2} | H_n(a, b) for n ≥ 3 | Used by Jonathan Bowers
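The base-2 correspondence in the Ackermann–Péter row can be checked numerically for small arguments. A minimal Python sketch (function names are choices of this illustration; the shift-by-3 relation is the standard one):

```python
def ackermann_peter(m, n):
    """Two-argument Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann_peter(m - 1, 1)
    return ackermann_peter(m - 1, ackermann_peter(m, n - 1))

def hyperop(n, a, b):
    """Hyperoperation H_n(a, b), as defined above."""
    if n == 0:
        return b + 1
    if b == 0:
        return {1: a, 2: 0}.get(n, 1)
    return hyperop(n - 1, a, hyperop(n, a, b - 1))

# A(m, n) = H_m(2, n + 3) - 3: the Ackermann-Peter function is the
# base-2 hyperoperation sequence, shifted by 3 in the second argument.
for m in range(4):
    for n in range(4):
        assert ackermann_peter(m, n) == hyperop(m, 2, n + 3) - 3
```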

Generalization

For different initial conditions or different recursion rules, very different operations can occur. Some mathematicians refer to all variants as examples of hyperoperations.

In the general sense, a hyperoperation hierarchy is a family $(S_n)_{n \in I}$ of binary operations on $\mathbb{N}$, indexed by a set $I$, such that there exist $i, j, k \in I$ where

  • $S_i(a, b) = a + b$ (addition),
  • $S_j(a, b) = a \cdot b$ (multiplication), and
  • $S_k(a, b) = a^b$ (exponentiation).

Also, if the last condition is relaxed (i.e. there is no exponentiation), then we may also include the commutative hyperoperations, described below. Although one could list each hyperoperation explicitly, this is generally not done. Most variants include only the successor function (or addition) in their definition, and redefine multiplication (and the operations beyond it) by a single recursion rule that applies to all ranks. Since this recursion rule is part of the definition of a particular hierarchy, and not a property of the hierarchy itself, it is difficult to define formally.

There are many possibilities for hyperoperations that are different from Goodstein's version. By using different initial conditions for $F_n(a, 0)$ or $F_n(a, 1)$, iterating from these conditions may produce different hyperoperations above exponentiation, while still corresponding to addition and multiplication. The modern definition of hyperoperations includes $H_n(a, 0) = 1$ for all $n \ge 3$, whereas the variants below use the initial conditions $F_n(a, 0) = a$ and $F_n(a, 0) = 0$, respectively.

An open problem in hyperoperation research is whether the hyperoperation hierarchy $H_n(a, b)$ can be generalized to non-integer values of n, and whether $H_n$ forms a quasigroup (with restricted domains).

Variant starting from a

In 1928, Wilhelm Ackermann defined a 3-argument function $\phi(a, b, n)$ which gradually evolved into a 2-argument function known as the Ackermann function. The original Ackermann function was less similar to modern hyperoperations, because his initial conditions start with $\phi(a, 0, n) = a$ for all n > 2. Also he assigned addition to n = 0, multiplication to n = 1 and exponentiation to n = 2, so the initial conditions produce very different operations for tetration and beyond.

n | Operation | Comment
0 | ϕ(a, b, 0) = a + b |
1 | ϕ(a, b, 1) = a · b |
2 | ϕ(a, b, 2) = a^b |
3 | ϕ(a, b, 3) = a[4](b + 1) | An offset form of tetration. The iteration of this operation differs from the iteration of tetration.
4 | ϕ(a, b, 4) | Not to be confused with pentation.

Another initial condition that has been used is that of the Ackermann–Péter function (where the base is held constant at a = 2), due to Rózsa Péter, which does not form a hyperoperation hierarchy.

Variant starting from 0

In 1984, C. W. Clenshaw and F. W. J. Olver began the discussion of using hyperoperations to prevent computer floating-point overflows.[22] Since then, many other authors[23][24][25] have renewed interest in the application of hyperoperations to floating-point representation. While discussing tetration, Clenshaw et al. assumed the initial condition $F_n(a, 0) = 0$, which makes yet another hyperoperation hierarchy. Just like in the previous variant, the fourth operation is very similar to tetration, but offset by one.

n | Operation | Comment
0 | F_0(a, b) = 1 + b |
1 | F_1(a, b) = a + b |
2 | F_2(a, b) = a · b |
3 | F_3(a, b) = a^b |
4 | F_4(a, b) = a[4](b − 1) | An offset form of tetration. The iteration of this operation differs from the iteration of tetration.
5 | F_5(a, b) | Not to be confused with pentation.
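For the ranks above exponentiation, the effect of the initial condition is a shift of one in the second argument, which can be seen in a minimal Python sketch (assuming, purely for illustration, the standard rules up to exponentiation and the zero initial condition only at rank 4 and above):

```python
def variant_from_zero(n, a, b):
    """Hyperoperation variant with a[n]0 = 0 at ranks n >= 4 (sketch)."""
    if n == 0:
        return b + 1
    if b == 0:
        return {1: a, 2: 0, 3: 1}.get(n, 0)   # 0 instead of 1 for n >= 4
    return variant_from_zero(n - 1, a, variant_from_zero(n, a, b - 1))

def tetration(a, b):
    """Ordinary (right-associative) tetration a^^b."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

# Rank 4 of this variant is tetration offset by one: F_4(a, b) = a^^(b - 1)
assert all(variant_from_zero(4, 2, b) == tetration(2, b - 1) for b in range(1, 5))
```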

Commutative hyperoperations

Commutative hyperoperations were considered by Albert Bennett as early as 1914,[6] which is possibly the earliest remark about any hyperoperation sequence. Commutative hyperoperations are defined by the recursion rule

$F_{n+1}(a, b) = \exp\bigl(F_n(\ln a, \ln b)\bigr),$

which is symmetric in a and b, meaning all hyperoperations are commutative. This sequence does not contain exponentiation, and so does not form a hyperoperation hierarchy.

n | Operation | Comment
0 | F_0(a, b) = ln(e^a + e^b) |
1 | F_1(a, b) = a + b |
2 | F_2(a, b) = a · b = e^(ln a + ln b) | This is due to the properties of the logarithm.
3 | F_3(a, b) = e^(ln a · ln b) = a^(ln b) = b^(ln a) | A commutative form of exponentiation.
4 | F_4(a, b) = e^(e^(ln(ln a) · ln(ln b))) | Not to be confused with tetration.
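The recursion rule can be evaluated with ordinary floating-point arithmetic. A minimal Python sketch (the function name is a choice of this illustration; it starts from addition at rank 1 rather than from F_0, and assumes arguments large enough to keep the logarithms defined):

```python
import math

def commutative_hyperop(n, a, b):
    """Bennett-style commutative hyperoperations (sketch):
    F_1(a, b) = a + b, and F_{n+1}(a, b) = exp(F_n(ln a, ln b))."""
    if n == 1:
        return a + b
    return math.exp(commutative_hyperop(n - 1, math.log(a), math.log(b)))

# Rank 2 recovers ordinary multiplication: exp(ln a + ln b) = a * b
assert abs(commutative_hyperop(2, 3.0, 5.0) - 15.0) < 1e-9
# Rank 3 is a commutative form of exponentiation: exp(ln a * ln b) = a^(ln b)
assert abs(commutative_hyperop(3, 3.0, 5.0) - 3.0 ** math.log(5.0)) < 1e-9
# Every rank is symmetric in a and b
assert abs(commutative_hyperop(4, 3.0, 5.0) - commutative_hyperop(4, 5.0, 3.0)) < 1e-9
```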

Balanced hyperoperations

Balanced hyperoperations, first considered by Clément Frappier in 1991,[26] are based on the iteration of the function $x^x$, and are thus related to Steinhaus–Moser notation. The recursion rule used in balanced hyperoperations is

$F_{n+1}(a, b) = F_n\!\left(F_{n+1}\!\left(a, \tfrac{b}{2}\right),\, F_{n+1}\!\left(a, \tfrac{b}{2}\right)\right),$

which requires continuous iteration, even for integer b.

n | Operation | Comment
0 | | Rank 0 does not exist.[nb 2]
1 | a + b |
2 | a · b |
3 | a^b | This is exponentiation.
4 | f^(log₂ b)(a), where f(x) = x^x | Not to be confused with tetration.
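At heights b that are powers of two, the rank-4 balanced operation reduces to repeated application of x ↦ x^x (non-integer heights would need a continuous iteration, as noted above). A minimal Python sketch, assuming the initial value F_4(a, 1) = a:

```python
def balanced_rank4(a, k):
    """Rank-4 balanced hyperoperation at height b = 2**k, i.e. k
    applications of x -> x**x starting from a (sketch)."""
    x = a
    for _ in range(k):
        x = x ** x
    return x

# b = 2: F_4(2, 2) = 2^2 = 4;  b = 4: F_4(2, 4) = (2^2)^(2^2) = 256
assert balanced_rank4(2, 1) == 4
assert balanced_rank4(2, 2) == 256
```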

Lower hyperoperations

An alternative for these hyperoperations is obtained by evaluation from left to right. Since

$a + b = (a + (b - 1)) + 1,$
$a \cdot b = (a \cdot (b - 1)) + a,$
$a^b = \bigl(a^{b - 1}\bigr) \cdot a,$

define (with ° or subscript)

$a_{(n+1)} b = \bigl(a_{(n+1)} (b - 1)\bigr)_{(n)}\, a$

with $a_{(1)} b = a + b$, $a_{(2)} 0 = 0$, and $a_{(n)} 1 = a$ for $n \ge 3$.

But this suffers a kind of collapse, failing to form the "power tower" traditionally expected of hyper4:

$a_{(4)} b = a^{a^{(b - 1)}}$

How can $a_{(n)} b$ be so different from $a[n] b$ for n > 3? This is because of a symmetry called associativity that is built into + and × (see field) but which ^ lacks. Let us demonstrate this lack of associativity in exponentiation, which differentiates the higher and lower hyperoperations. Take for example the product $2 \cdot 3 \cdot 4$. This expression unambiguously evaluates to 24. However, if we replace the multiplication symbols with those of exponentiation, the expression becomes ambiguous. Do we mean $(2^3)^4$ or $2^{(3^4)}$? There is a big difference, since the former expression can be rewritten as $2^{3 \cdot 4} = 2^{12} = 4096$ while the latter is $2^{81}$. In other words, left-associative folds of the exponential operator on sequences do not coincide with right-associative folds, the latter usually resulting in larger numbers. It is more apt to say that the two operations $a[n]b$ and $a_{(n)}b$ were decreed to be the same for n < 4. (On the other hand, one can object that the field operations were defined to mimic what had been "observed in nature" and ask why "nature" suddenly objects to that symmetry…)

The other ranks do not collapse in this way, and so this family has some interest of its own as lower (perhaps lesser or inferior) hyperoperations. For ranks greater than 3, the family is also "lower" in the sense that the values it produces are often much smaller than those produced by the standard, right-associative hyperoperations.
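Both points, the dependence on evaluation order and the rank-4 collapse, can be seen in a short Python sketch (the function name lower_hyperop and its base cases are choices of this illustration, following the left-to-right rule above):

```python
from functools import reduce

# Left- vs right-associative folds of exponentiation over [2, 3, 4]
left = reduce(lambda x, y: x ** y, [2, 3, 4])             # (2**3)**4
right = reduce(lambda x, y: y ** x, reversed([2, 3, 4]))  # 2**(3**4)
assert left == 4096          # (2^3)^4 = 2^12
assert right == 2 ** 81      # vastly larger

def lower_hyperop(n, a, b):
    """Left-associative ("lower") hyperoperation a_(n) b (sketch):
    a_(1) b = a + b, and a_(n+1) b = (a_(n+1) (b - 1)) [n] a."""
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if b == 1:
        return a
    return lower_hyperop(n - 1, lower_hyperop(n, a, b - 1), a)

# The rank-4 "tower" collapses: a_(4) b = a^(a^(b - 1))
assert lower_hyperop(4, 2, 4) == 2 ** 2 ** 3   # 256, not 2^^4 = 65536
```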

n | Operation | Comment
0 | 1 + b | increment, successor, zeration
1 | a + b |
2 | a · b |
3 | a^b | This is exponentiation.
4 | a_(4) b = a^(a^(b − 1)) | Not to be confused with tetration.
5 | a_(5) b | Not to be confused with pentation.

Coincidence of hyperoperations

Hyperoperations $F$ and $G$ are said to coincide on a point $(a, b)$ when $F(a, b) = G(a, b)$. For example, $H_n(a, 1) = a$ for all $n \ge 2$, i.e. for all hyperoperations above addition. Similarly, $H_n(1, b) = 1$ for all $n \ge 3$, but in this case both addition and multiplication must be excluded. A point at which all hyperoperations coincide (excluding the unary successor function, which does not really belong as a binary operation) is (2, 2), i.e. for all $n \ge 1$ we have that $H_n(2, 2) = 4$. There is a connection between the arity of these functions, i.e. two, and this point of coincidence: since the second argument of a hyperoperation is the length of the list on which to fold the previous operation, and this is 2, the previous operation is folded over a list of length two, which amounts to applying it to the pair represented by that list. Also, since the first argument is itself 2, and this is duplicated in the recursion, we arrive again at the pair (2, 2) with each recursion. This happens until we get to 2 + 2 = 4.

To be more precise, we have that $2[n]2 = 2[n-1]2 = \cdots = 2[1]2 = 2 + 2 = 4$. Note that the unit of the operation being folded need not be supplied to the fold when the list has length > 1. To demonstrate this recursion by means of an example we take $2[3]2$, which is two multiplied by itself, i.e. $2 \times 2$. This, in turn, is two added to itself, i.e. $2 + 2$. At +, the recursion terminates and we are left with four.
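This can be checked directly from the recursive definition; a minimal Python sketch (repeating the hyperop function from the Definition section):

```python
def hyperop(n, a, b):
    """Hyperoperation H_n(a, b), as defined above."""
    if n == 0:
        return b + 1
    if b == 0:
        return {1: a, 2: 0}.get(n, 1)
    return hyperop(n - 1, a, hyperop(n, a, b - 1))

# All binary hyperoperations coincide at (2, 2) ...
assert all(hyperop(n, 2, 2) == 4 for n in range(1, 7))
# ... but the unary successor does not: H_0(2, 2) = 3
assert hyperop(0, 2, 2) == 3
```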

Notes

  1. ^ Sequences similar to the hyperoperation sequence have historically been referred to by many names, including: the Ackermann function[1] (3-argument), the Ackermann hierarchy,[2] the Grzegorczyk hierarchy[3][4] (which is more general), Goodstein's version of the Ackermann function,[5] operation of the nth grade,[6] z-fold iterated exponentiation of x with y,[7] arrow operations,[8] reihenalgebra[9] and hyper-n.[1][9][10][11][12] The most commonly used of any of these terms is the Ackermann function, whose Google search gives almost a million hits, mostly referring to the 2-argument function.
  2. ^ If there were a rank 0 balanced hyperoperation $f$, then addition would satisfy $a + b = f\bigl(a + \tfrac{b}{2},\, a + \tfrac{b}{2}\bigr)$. Substituting b = 0 in this equation gives $f(a, a) = a$; but then, for any b, $a + b = f\bigl(a + \tfrac{b}{2}, a + \tfrac{b}{2}\bigr) = a + \tfrac{b}{2}$, which is a contradiction.

References

  1. ^ a b c Daniel Geisler (2003). "What lies beyond exponentiation?". Retrieved 2009-04-17.
  2. ^ Harvey M. Friedman (Jul 2001). "Long Finite Sequences". Journal of Combinatorial Theory, Series A. 95 (1): 102–144. doi:10.1006/jcta.2000.3154. Retrieved 2009-04-17.
  3. ^ Manuel Lameiras Campagnolo, Cristopher Moore and José Félix Costa (Dec 2002). "An Analog Characterization of the Grzegorczyk Hierarchy". Journal of Complexity. 18 (4): 977–1000. doi:10.1006/jcom.2002.0655. Retrieved 2009-04-17.
  4. ^ Marc Wirz (1999). "Characterizing the Grzegorczyk hierarchy by safe recursion". CiteSeer. Retrieved 2009-04-21.
  5. ^ a b c d R. L. Goodstein (Dec 1947). "Transfinite Ordinals in Recursive Number Theory". Journal of Symbolic Logic. 12 (4): 123–129. doi:10.2307/2266486. JSTOR 2266486.
  6. ^ a b c d Albert A. Bennett (Dec 1915). "Note on an Operation of the Third Grade". Annals of Mathematics. Second Series. 17 (2): 74–75. doi:10.2307/2007124. JSTOR 2007124.
  7. ^ a b Paul E. Black (2009-03-16). "Ackermann's function". Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology (NIST). Retrieved 2009-04-17.
  8. ^ J. E. Littlewood (Jul 1948). "Large Numbers". Mathematical Gazette. 32 (300): 163–171. doi:10.2307/3609933. JSTOR 3609933.
  9. ^ a b Markus Müller (1993). "Reihenalgebra" (PDF). Retrieved 2009-04-17.
  10. ^ a b c Robert Munafo (November 1999). "Inventing New Operators and Functions". Large Numbers at MROB. Retrieved 2009-04-17.
  11. ^ a b A. J. Robbins (November 2005). "Home of Tetration". Retrieved 2009-04-17. [dead link]
  12. ^ a b I. N. Galidakis (2003). "Mathematics". Retrieved 2009-04-17.
  13. ^ a b C. A. Rubtsov and G. F. Romerio (December 2005). "Ackermann's Function and New Arithmetical Operation". Retrieved 2009-04-17.
  14. ^ a b G. F. Romerio (2008-01-21). "Hyperoperations Terminology". Tetration Forum. Retrieved 2009-04-21.
  15. ^ a b Wilhelm Ackermann (1928). "Zum Hilbertschen Aufbau der reellen Zahlen". Mathematische Annalen. 99: 118–133. doi:10.1007/BF01459088.
  16. ^ Robert Munafo (1999-11-03). "Versions of Ackermann's Function". Large Numbers at MROB. Retrieved 2009-04-17.
  17. ^ J. Cowles and T. Bailey (1988-09-30). "Several Versions of Ackermann's Function". Dept. of Computer Science, University of Wyoming, Laramie, WY. Retrieved 2009-04-17.
  18. ^ Donald E. Knuth (Dec 1976). "Mathematics and Computer Science: Coping with Finiteness". Science. 194 (4271): 1235–1242. doi:10.1126/science.194.4271.1235. PMID 17797067. Retrieved 2009-04-21.
  19. ^ Daniel Zwillinger (2002). CRC standard mathematical tables and formulae, 31st Edition. CRC Press. p. 4. ISBN 1-58488-291-3.
  20. ^ Eric W. Weisstein (2003). CRC concise encyclopedia of mathematics, 2nd Edition. CRC Press. pp. 127–128. ISBN 1-58488-347-2.
  21. ^ K. K. Nambiar (1995). "Ackermann Functions and Transfinite Ordinals". Applied Mathematics Letters. 8 (6): 51–53. doi:10.1016/0893-9659(95)00084-4.
  22. ^ C.W. Clenshaw and F.W.J. Olver (Apr 1984). "Beyond floating point". Journal of the ACM. 31 (2): 319–328. doi:10.1145/62.322429. Retrieved 2009-04-21.
  23. ^ W. N. Holmes (Mar 1997). "Composite Arithmetic: Proposal for a New Standard". Computer. 30 (3): 65–73. doi:10.1109/2.573666. Retrieved 2009-04-21.
  24. ^ R. Zimmermann (1997). "Computer Arithmetic: Principles, Architectures, and VLSI Design" (PDF). Lecture notes, Integrated Systems Laboratory, ETH Zürich. Retrieved 2009-04-17.
  25. ^ T. Pinkiewicz and N. Holmes and T. Jamil (2000). "Design of a composite arithmetic unit for rational numbers". Proceedings of the IEEE. pp. 245–252. Retrieved 2009-04-17.
  26. ^ C. Frappier (1991). "Iterations of a kind of exponentials" (PDF). Fibonacci Quarterly. 29 (4): 351–361.