# Talk:Graham's number/Archive 2

## Interesting question

What is the smallest prime factor of Graham's number minus 4?? We know that this number is not divisible by 2, 3, or 5. Georgia guy (talk) 22:04, 7 September 2008 (UTC)

I don't know, but a question that I feel is far more important to address is: what is cos(G)? Seriously, I've actually pondered this question quite a bit, wondering whether it's possible to reduce the question to a more feasible variation. It seems like it really should be possible to answer (within the lifetime of our universe), but I'm at a loss for how. --72.177.97.222 (talk) 11:49, 11 September 2008 (UTC)

I think if you take cos(G degrees) it should be possible, since it seems like it ought to be possible to find the fractional part of the quotient G/360. That is, you can reduce G to a small coterminal angle. The reason I suspect this is that if the least significant digits of G are calculable, then using modular analysis perhaps you could find the least significant integer digits, and also the most significant fractional digits, of G/360. Perhaps.
I'm not sure it would be possible to do an equivalent procedure for G/(2pi), since it would appear you would need a very large number of digits of pi (on the order of G itself). Then again, I have no idea how to actually do any of this, never having done it myself. Eebster the Great (talk) 01:16, 11 November 2008 (UTC)
Calculating cos(G degrees) is trivial, since G mod 360 can be calculated using the algorithm above. I get G ≡ 27 (mod 360), giving a cosine of about 0.891. Owen× 16:55, 11 November 2008 (UTC)
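The modular computation OwenX refers to can be sketched in Python (a sketch, not anyone's published code; it relies on the standard fact that the residues of the tower 3^3^...^3 stabilize modulo m as the height grows, and that G, being such a tower of enormous height, shares the stable residue):

```python
import math

def phi(m):
    """Euler's totient of m, via trial-division factorization."""
    result, p = m, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(h, m):
    """Residue of the power tower 3^3^...^3 (h threes) modulo m.

    Uses the lifting trick: for an exponent e known to exceed log2(m),
    3^e ≡ 3^(e mod phi(m) + phi(m))  (mod m)."""
    if m == 1:
        return 0
    if h == 1:
        return 3 % m
    t = phi(m)
    return pow(3, tower_mod(h - 1, t) + t, m)

# For moduli this small the residues stabilize long before h = 10, and
# Graham's number, itself a 3-tower of unimaginable height, shares them.
print(tower_mod(10, 360))                                    # 27
print(round(math.cos(math.radians(tower_mod(10, 360))), 3))  # 0.891
```

The same function gives 87 for m = 100, the last two decimal digits of G discussed further down this page.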
Was I correct to suggest it would be more difficult to calculate cos(G radians)? Eebster the Great (talk) 01:09, 12 November 2008 (UTC)
The smallest prime factor of G-4 is 107. (It happens that I've been computing factors of numbers near large towers of 3, for no particularly good reason; 3626257 is the smallest prime factor of G-16, for example, and 18540593 of G-46.) Joule36e5 (talk) 07:37, 22 November 2008 (UTC)
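Joule36e5's factor of G-4 can be reproduced with the same stable-residue trick used above for G mod 360 (a sketch; the helper names are mine, and it assumes, as above, that G shares the residues of a sufficiently tall 3-tower):

```python
def phi(m):
    """Euler's totient of m, via trial-division factorization."""
    result, p = m, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(h, m):
    """Residue of a power tower of h 3s modulo m."""
    if m == 1:
        return 0
    if h == 1:
        return 3 % m
    t = phi(m)
    return pow(3, tower_mod(h - 1, t) + t, m)

def smallest_prime_factor_of_G_minus(k, limit=10000):
    """First prime p below limit dividing G - k, i.e. with G ≡ k (mod p).
    Height h = 30 is tall enough for the residues to have stabilized
    at these moduli."""
    for p in range(2, limit):
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):  # p is prime
            if tower_mod(30, p) == k % p:
                return p
    return None

print(smallest_prime_factor_of_G_minus(2))   # 5, matching the table below
print(smallest_prime_factor_of_G_minus(4))   # 107
```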

### So now we know, with \\ meaning "is divisible by"

• G \\ 3
• G-1 \\ 2
• G-2 \\ 5
• G-3 \\ 2
• G-4 \\ 107
• G-5 \\ 2
• G-6 \\ 3
• G-7 \\ 2

What is the smallest value of H for which the smallest prime factor of G-H is not known?? Georgia guy (talk) 14:51, 22 November 2008 (UTC)

I don't think it belongs on this talk page, unless it's relevant to something under consideration for adding to the article, but the answer is H=128. Specifically, factors are known for G-127 through G+33, while G-128 and G+34 have no prime factors smaller than 2^32. (There are a total of 53 remaining unfactored numbers in the range [G-1000,G+1000].) For the values near 0, see Sloane's A152178 and A152177. Joule36e5 (talk) 00:33, 11 December 2008 (UTC)

## Largest number

"Largest number" redirects here, but the article seems to indicate that Graham's Number is NOT the largest number used in serious mathematical proofs. If so, then it shouldn't link here. Serendipodous 16:11, 1 December 2008 (UTC)

I agree. I've replaced the redirect with a disambiguation page. Joule36e5 (talk) 01:58, 11 December 2008 (UTC)

## Leading digits of Graham's number

The article does a nice job of listing and explaining the last n digits of Graham's number, but is it possible to determine the first (leftmost) few digits (or even the very first digit) as well? | Loadmaster (talk) 17:35, 11 December 2008 (UTC)

No. The last few digits can be computed using properties of n-digit endings of powers of 3. They go 03, 09, 27, 81, 43, 29, 87, 61, 83, 49, 47, 41, 23, 69, 07, 21, 63, 89, 67, 01... and repeat with period 20. The last 2 digits of Graham's number are thus 87. I don't think we can ever find a way to calculate the first few digits. Georgia guy (talk) 18:22, 11 December 2008 (UTC)
It is possible in principle, but with modern methods it would take more than ${\displaystyle g_{63}}$ years. For numbers whose digits really are incalculable (on a computer), see the page about the busy beaver function. 31.42.233.14 (talk) 14:51, 18 April 2013 (UTC)
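The period-20 cycle of two-digit endings listed above is easy to verify (a quick sketch; the multiplicative order of 3 modulo 100 is 20):

```python
# Two-digit endings of 3^1, 3^2, 3^3, ...: they repeat with period 20.
cycle = [pow(3, k, 100) for k in range(1, 21)]
print(cycle)
# [3, 9, 27, 81, 43, 29, 87, 61, 83, 49, 47, 41, 23, 69, 7, 21, 63, 89, 67, 1]

assert pow(3, 21, 100) == pow(3, 1, 100)  # the period really is 20

# The exponent of 3 in Graham's number is itself a power of 3 and is
# ≡ 7 (mod 20), so G ends in the 7th entry of the cycle: ...87.
assert cycle[6] == 87
```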

## Least significant digits??

This term can't be right. Exact integers don't have "least significant digits". The 8 at the end of 128 is the "least significant digit" if the value is only known to be between 127.5 and 128.5, not if it is exactly 128 by definition. Georgia guy (talk) 14:31, 13 August 2009 (UTC)

According to Least significant digit, the term is correct as used. Owen× 22:33, 13 August 2009 (UTC)

## Too little space in the universe to write out the number?

Is there really too little space in the whole universe to write out Graham's number? This is relevant to the article, because if it's not right, it shouldn't be there. 89.249.0.170 (talk) 06:40, 27 October 2009 (UTC)

If you have to ask, you have no concept of large numbers. If each digit took up the space of one hydrogen atom, and the whole universe was packed with such digits, you could fit far fewer than 10^200 digits in the universe. This means the largest number you could represent is 10^10^200, which is less than 3^3^3^3^3 -- a mere pittance compared to g1. Owen× 14:02, 27 October 2009 (UTC)
Damn, are there only far less than 10^200 atoms in the universe? Sorry for going off-topic. 89.249.0.170 (talk) 14:28, 27 October 2009 (UTC)
For the sake of clarity, I revised the introduction slightly to specify that (1) we're talking about the finite observable universe, (2) we're talking about ordinary digital representations (e.g. binary, decimal, etc.), and (3) we're assuming each digit occupies at least one Planck volume. This is intended to be a simple analogy to bring home the enormous size of G -- it's not supposed to be a claim about physics, such as whether the Planck volume is really the smallest "meaningful" volume.
r.e.s. (talk) 14:36, 27 October 2009 (UTC)
This edit is better. Technically, the existence of virtual particles makes it hard to estimate exactly how "many" particles, and thus how many possible microstates, there are in the observable universe. Perhaps one could in theory store an ordinary digital representation of Graham's number, if only for an unimaginably small amount of time. At least, this is my understanding of the physics... there might be some fundamental limitations I am missing here. Either way, the new version is clearer and more accurate. Eebster the Great (talk) 23:44, 27 October 2009 (UTC)
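OwenX's comparison above can be sanity-checked with ordinary floating-point logarithms (a sketch; the tower values are handled as iterated logs, since the numbers themselves could never be stored):

```python
import math

log10_3 = math.log10(3)

t3 = 3 ** 3 ** 3          # 3^^3 = 3^(3^3) = 7,625,597,484,987
log10_t4 = t3 * log10_3   # log10 of 3^^4, about 3.64e12

# log10(log10(3^^5)) = log10(3^^4 * log10(3))
loglog_t5 = log10_t4 + math.log10(log10_3)

# A universe packed with atom-sized digits holds fewer than 10^200 of
# them, i.e. can represent numbers only up to about 10^(10^200).
print(loglog_t5 > 200)    # True: 3^^5 already dwarfs that bound
```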

## Why use G if N is a better bound?

The article states:

Graham & Rothschild [1971] proved that this problem has a solution, N*, and gave as a bounding estimate 6 ≤ N* ≤ N, with N a particular, explicitly defined, very large number; however, Graham (in an unpublished work) revised this upper bound to be a much larger number. [...] Thus, the best known bounding estimate for the solution N* is 11 ≤ N* ≤ G, where G is Graham's number.

If we know 6 ≤ N* ≤ N and N ≤ G, then N is a better (meaning sharper) upper bound for N* than G is. The best bounding estimate for N* would be 11 ≤ N* ≤ N. Why would Graham go back and work out a worse upper bound than the one he already had? Was there something wrong with the proof in Graham & Rothschild [1971], or did he want an upper bound with a nicer representation? AxelBoldt (talk) 23:58, 23 November 2009 (UTC)

That's precisely the puzzling situation discussed several times above in the sections titled "Martin Gardner's version vs. Graham's published version", "numbers larger than Graham's number", and "Number of Digits". As far as I have been able to determine, the number G was never published by Graham; rather, it was only published by Gardner, who (in the 1977 Sci. Am. article) attributed it to some unpublished work by Graham, and later (in a 2001 book) simply attributed it to Graham without mentioning its publication status. A couple of years ago, I asked Graham about this, and, as I recall, he had been unaware that Gardner's G differs from what's in G&R[1971] (and there was no suggestion of an error in that paper); however, I didn't pursue the matter any further with him.
— r.e.s. 11:52, 24 November 2009 (UTC)
I've revised the "Graham's problem" section of the article in an attempt to improve the wording and historical accuracy (as far as we know it).
— r.e.s. 12:47, 24 November 2009 (UTC)
Thanks! It reads a lot better now. Cheers, AxelBoldt (talk) 23:22, 24 November 2009 (UTC)

## The largest number that "matters"

I don't understand. If the number is larger than the number of elementary particles in the universe, then why could it possibly matter? How/why was it used in proofs? If it's too large to even think about, what's the point? What is the largest number that does "matter"? I define numbers that matter as numbers that can be used. (i.e. I have n dollars or There are n of those in the world.)--Ledlaxman24 01:45, 21 April 2006 (UTC)

If you have 70 people standing in line to get into a movie or a concert that isn't too many. Yet there are over a googol (10^100) ways in which those mere 70 people could end up being arranged in line. This is more than the number of particles in the visible universe, yet the situation I've described is easy to imagine. Or think about how many possible 40-move chess games there are. If you do some calculations (see Shannon number) you get around 10^120. A computer can't calculate all those possibilities because, again, there isn't enough space in the entire universe to store them. Does that mean they don't exist? To me it means we have certain scenarios in which we have humongous numbers and we find it useful to develop some techniques to deal with them. Kaimiddleton 07:36, 22 April 2006 (UTC)
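The line-of-people figure is easy to confirm (a quick sketch):

```python
import math

arrangements = math.factorial(70)  # orderings of 70 people in a line
googol = 10 ** 100

assert arrangements > googol
print(len(str(arrangements)))      # 101 digits: roughly 1.2 * 10^100
```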

Thanks, now it makes sense. Could you add that in somewhere on the page (preferably at the top)? I'm in tenth grade geometry, and didn't understand any of the hypercube stuff. I think it would benefit everyone (especially non-mathematicians) if you explained it like that.--Ledlaxman24 12:35, 22 April 2006 (UTC)

Ok, I've reworked the article some to put it into perspective. You might want to check out the article on large numbers for more discussion. Kaimiddleton 04:02, 23 April 2006 (UTC)
For some help on thinking how big this number is:

${\displaystyle 3\uparrow 3=3^{3}=27}$

${\displaystyle 3\uparrow \uparrow 3=3\uparrow 3\uparrow 3=3^{3^{3}}=3^{27}=7,625,597,484,987}$

${\displaystyle 3\uparrow \uparrow \uparrow 3=3\uparrow \uparrow 3\uparrow \uparrow 3=3\uparrow \uparrow (3\uparrow 3\uparrow 3)}$

That last one would have 7,625,597,484,987 3's in a power tower.

${\displaystyle 3\uparrow \uparrow \uparrow \uparrow 3=3\uparrow \uparrow \uparrow 3\uparrow \uparrow \uparrow 3\uparrow \uparrow \uparrow 3=3\uparrow \uparrow \uparrow (3\uparrow \uparrow 3\uparrow \uparrow 3)=3\uparrow \uparrow \uparrow (3\uparrow \uparrow (3\uparrow 3\uparrow 3))}$

And at this point of only 4 up arrows, it starts hurting my head. That's ${\displaystyle 3\uparrow \uparrow \uparrow (3\uparrow \uparrow 7,625,597,484,987)}$: a chain of ${\displaystyle 3\uparrow \uparrow 7,625,597,484,987}$ 3's joined by ${\displaystyle \uparrow \uparrow }$ operators. Graham's number has 64 of the up arrows, so it's much more than what a brain can comprehend without just thinking of infinity. 98.223.56.77 (talk) 02:14, 22 September 2008 (UTC)

Graham's number has many, many, many more than 64 up arrows. g1 has 4 up arrows. g2 has g1 up arrows. That is, the huge number you just discussed, ${\displaystyle g_{1}=3\uparrow \uparrow \uparrow \uparrow 3}$, is the number of up arrows that you then put between 3's to make g2. Then you calculate g2 (at this stage you are already way, way beyond the possibilities of human contemplation), and you put that many up arrows between 3's to make g3. Then you repeat the process 61 more times. Maelin (Talk | Contribs) 04:14, 22 September 2008 (UTC)
You're absolutely right. Sorry, but my brain stopped working properly after all those arrows. 98.223.56.77 (talk) 12:55, 22 September 2008 (UTC)
Thanks, I actually understood this explanation; this is way better than what's currently in the article. Can someone try to work this explanation into the article? Mloren (talk) 10:07, 19 March 2009 (UTC)
I took your suggestion, and added a few more words of explanation in the Definition section. --r.e.s. (talk) 14:19, 19 March 2009 (UTC)
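The recursion Maelin describes can be written out as a tiny function (a sketch; it terminates only for very small arguments, and evaluating arrow(3, 4, 3), i.e. g1, is utterly hopeless):

```python
def arrow(a, n, b):
    """Knuth's up-arrow operation a ↑^n b, with n up-arrows."""
    if n == 1:
        return a ** b                          # one arrow is exponentiation
    if b == 1:
        return a                               # a ↑^n 1 = a
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))   # 27
print(arrow(3, 2, 3))   # 7625597484987
print(arrow(2, 3, 3))   # 65536

# The g-sequence: g(1) = 3↑↑↑↑3, and each g(k) uses g(k-1) arrows.
# Graham's number is g(64); writing this down is easy, evaluating it is not.
def g(k):
    return arrow(3, 4, 3) if k == 1 else arrow(3, g(k - 1), 3)
```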

You are probably correct; the upper bound probably didn't need to be calculated. I don't think anyone is going to test every possible value and find that the lowest possible answer is "Graham's number minus 10"...

I don't think anyone is going to do any mathematics with Graham's number.

I would like to publish a proof that there is a larger, equally useful number, namely "Graham's number + 1". This has the benefit of looking more "positive". I would also like to state that "Graham's number + 69" looks more sexy.

202.92.40.8 (talk) 13:13, 2 July 2010 (UTC)

## Incorrect

Note that 3^3^3 = 7,625,597,484,987.

I was about to comment on that. 3^3^3^3 = 7,625,597,484,987. Either the number of 3s is incorrect or it should show 3^3^3 = 19,683 with an update to the remainder of the terms. —Preceding unsigned comment added by 174.7.64.18 (talk) 04:12, 11 March 2010 (UTC)
No, by definition the operations associate from the right, not from the left; i.e., 3^3^3 = 3^(3^3) ≠ (3^3)^3, and 3^3^3^3 = 3^(3^(3^3)) ≠ ((3^3)^3)^3. It just so happens that ((3^3)^3)^3 = (3^3)^(3*3) = 3^(3*3*3) = 3^(3^3) = 3^3^3 = 7,625,597,484,987 — a special case of (...((3^3)^3)^3 ...)^3 = 3^3^(n-1), where there are n (≥ 1) 3s in the left-hand-side expression. — r.e.s. (talk) 15:45, 11 March 2010 (UTC)
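Python's `**` operator happens to follow the same right-to-left convention, so the point can be checked directly (a sketch):

```python
# ** is right-associative, matching the convention for exponent towers:
assert 3 ** 3 ** 3 == 3 ** (3 ** 3) == 7625597484987

# The coincidence noted above: LEFT-associating four 3s collapses to
# 3^27, the same value as the right-associated tower of three 3s.
assert ((3 ** 3) ** 3) ** 3 == 3 ** 3 ** 3

# In general the two readings differ enormously; a smaller base shows it:
assert 2 ** 2 ** 3 == 256      # right-associated: 2^(2^3)
assert (2 ** 2) ** 3 == 64     # left-associated: (2^2)^3
print("all identities hold")
```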

This first term is already inconceivably greater than the number of atoms in the observable universe, and grows at an enormous rate as it is iterated through the sequence.

This is very wrong. Someone should fix this. The number of atoms in the observable universe is closer to 10^86.

May the wind be always at your back.

The sentence you quoted wasn't referring to the number 3^3^3, but rather to g1. I've made this clearer in the article to prevent similar misunderstandings. Maelin (Talk | Contribs) 04:47, 12 March 2007 (UTC)

## Graham's number via Wiki matrix

In the section above, 'The current definition of g_1 is wrong', I noticed a representation of g_1 using the Maurer/Goodstein/Rucker tetration notation. Using it as a basis, I worked up the following for g_64, Graham's Number--

${\displaystyle g_{64}=\left.{\begin{matrix}\underbrace {^{^{^{^{^{3}\cdot }\cdot }\cdot }3}3} _{\underbrace {^{^{^{^{^{3}\cdot }\cdot }\cdot }3}3} {\text{ copies of 3}}}\\\vdots \\\underbrace {^{^{^{^{^{3}\cdot }\cdot }\cdot }3}3} _{^{{}^{3}3}3{\text{ copies of 3}}}{\text{ copies of 3}}\end{matrix}}\right\}64{\text{ operations}}}$

Tell me what you think--

1. Is this correct? I believe each level in a tetration tower to be equivalent to an additional arrow in the up-arrow operation, but am I wrong?

2. If it is correct, would it make a good addition to the article, or is it say, too confusing?

3. Or, WTF is going on here with that monstrosity? :)

Thanks Mytg8 (talk) 15:38, 9 February 2010 (UTC)

Sorry, but this is not even close. I don't think you fully understand Knuth's up-arrow notation. Let me give you some insight into how large this number is.
${\displaystyle g_{1}=3\uparrow \uparrow \uparrow \uparrow 3=3\uparrow \uparrow \uparrow (3\uparrow \uparrow \uparrow 3)=3\uparrow \uparrow \uparrow (3\uparrow \uparrow (3\uparrow \uparrow 3))=3\uparrow \uparrow \uparrow ^{^{3}3}3=3\uparrow \uparrow (3\uparrow \uparrow (3\uparrow \uparrow (\cdots (3\uparrow \uparrow 3)\cdots ))){\mbox{ where the number of 3s is }}^{^{3}3}3}$
Each operation in that final list is tetration (two up-arrows represent tetration), but I didn't feel like making the proper braces to express it as you did. Now, ${\displaystyle g_{2}=3\uparrow ^{g_{1}}3}$, which means the size of the operation used is now completely unfathomable. To express three up-arrows with tetration is easy, you just tetrate three times. To express four up-arrows with tetration, as in our expression of g1, is harder, but not too difficult with the help of braces. To express five up-arrows with tetration is very difficult, and will result in a cross-braced behemoth similar to the expression of g1 using only exponents shown on the page, only with tetration replacing exponentiation. Expressing six up-arrows with tetration is probably impossible with any amount of cross-bracing.
Now imagine trying to express ten up-arrows. Impossible! And surely fifteen is far beyond the capacity of any quantum computer ever conceived. But g2 does not require ten, fifteen, or even one trillion up-arrows. It requires g1 up-arrows, a number already far beyond anything accessible by ordinary arithmetic. "Large" numbers like a googolplex pale in comparison to g1 = 3^^^^3 (in fact, even 3^^^3 is already far larger). Yet this is the number of up-arrows required for g2.
Now, considering that, remember that g3 has g2 up-arrows, etc., with each successive layer having the number of up-arrows as the layer preceding it. Obviously, as hopeless as it is to try to express g2 solely with tetration and braces, it is far more hopeless to try to express any higher step in this way.
I hope this helps. Eebster the Great (talk) 01:24, 10 February 2010 (UTC)
Yep, you are correct. Each layer's number of _operators_ is what is involved--so in the expression I made, the part at the bottom (the beginning)--
${\displaystyle \underbrace {^{^{^{^{^{3}\cdot }\cdot }\cdot }3}3} _{^{{}^{3}3}3{\text{ copies of 3}}}}$
is equal to g_1, but rapidly falls behind the up-arrow expression as I go up the layers. Hm. If g_1 is the number of operators for g_2, would I get to g_2 if I exchanged the 64 in my expression with ${\displaystyle \underbrace {^{^{^{^{^{3}\cdot }\cdot }\cdot }3}3} _{^{{}^{3}3}3{\text{ copies of 3}}}}$ which is my g_1?
I suppose it doesn't really matter anyway. The article is about g_64, not g_1 or g_2.Mytg8 (talk) 14:20, 10 February 2010 (UTC)
No, that will not give you g_2. To prove this to yourself, first try doing 3^^^^^^^^^^3, or similar, with just tetration. You will quickly realize that no number of braces will really allow you to express really large operators with just a few symbols. Eebster the Great (talk) 21:55, 10 February 2010 (UTC)
You're right again--tetration towers just do not fit the bill. <gr> Mytg8 (talk) 13:58, 11 February 2010 (UTC)

## Graham’s number puny, wimpish

And I’ll tell you why. All math notation ultimately involves accepted procedures, and whether these procedures are conventional or not depends on changing circumstances. When you write ${\displaystyle 10^{10}}$ you are telling the reader that they must follow a procedure to formulate the number, here being to multiply 10 by itself 10 times. Graham's number can't be written using conventional notation which includes exponents, but it can be written very easily using Knuth’s arrow procedures. When you write such procedures down you are telling someone how to construct the number, and thus, in a real sense, generating that number no matter how psychologically impossible it may be for us to comprehend it. In fact the only non-procedural math notation would be to map the number on a one-to-one basis using a tally set-up. (For example, 10 is I I I I I I I I I I )

I’m not a mathematician but I figure this. Truly big, specific, finite numbers are those which are so large that no procedure to formulate them can ever exist. That means that a computer the size of Universe could not contain a description of that procedure no matter how concisely and compactly it was written. How can we mathematically define a "perfect procedure with no redundancy or waste"? Well, I don't know but Turing defined computability in an analogous fashion, so it would be possible – perhaps it has already been done – to use information and complexity theory to ascertain how such a procedure could be specified.

The question might be put in the form: If you were to write a number generating computer program - a program limited in size to a pre-determined number of bytes - what is the largest number you could generate running such a program? You can use whatever symbols and notation you want but it must be sufficiently comprehensible so that a human, or perhaps another computer, can follow the logic of what you have proposed. Based on such proof of concept experiments, it might be possible to speculate - or even determine - what a program that took all the theoretically available computing power in the Universe to store and run would look like. And also what the large number this program generated would look like, without of course actually generating the number in any kind of conventional format.

What is the biggest number a super computer could generate using whatever special symbols and procedures a team of programmers desired? Well, when you think that the procedure to generate Graham's number can be written on a postcard, the super computer number would be unimaginably bigger. And a computer the size of the Universe, well, now we are really talking! Myles325a (talk) 11:44, 7 June 2010 (UTC)

No need to speculate; that's exactly what the Busy beaver model is. You may also want to check out Fast-growing hierarchy for some examples of functions and numbers that truly make Graham's number look puny and Knuth's notation inadequate. Owen× 12:11, 7 June 2010 (UTC)
Along these lines, it may be worth noting that although Busy beaver functions (such as S and Σ) grow faster (asymptotically) than any computable function, there is no known value of a BB function that's not dwarfed by Graham's number G (or even by g1 for that matter), and certainly by values of relatively "small functions" in a Fast-growing hierarchy (e.g. in the Wainer hierarchy, fω+1(64) > G). Also note that although one could cite fα(2), say, for ever-larger ordinals α, part of the definition of a fast-growing hierarchy is a specific system of fundamental sequences involving some particular ordinal notation system — and, in every such system now known, there is some least limit ordinal which is not assigned a fundamental sequence; consequently, any particular fast-growing hierarchy of functions fα is defined only for α less than some corresponding large limit ordinal.
r.e.s. (talk) 17:31, 7 June 2010 (UTC)

## What about if you did this?

What if you did googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex^googolcentuplex? Surely that would be bigger? —Preceding unsigned comment added by 79.64.225.154 (talk) 17:56, 30 June 2010 (UTC)

That would be (googolcentuplex)^^16. It is less than (10^^101)^^17, which in turn is less than (10^^101)^^^2. This in turn is less than (10^^^2)^^^2. In turn, that is less than 10^^^^3. So it's still less than Graham's number. Georgia guy (talk) 18:25, 30 June 2010 (UTC)
Even if you repeated the above operation a googolcentuplex times, it would not come remotely close to g1. Perhaps you are misunderstanding the Knuth notation. Eebster the Great (talk) 08:00, 3 July 2010 (UTC)

## How many digits does it have?

I'm not asking for the actual number, just how many digits; I'm thinking it is over a googolplex. —Preceding unsigned comment added by 88.106.88.99 (talk) 17:05, 1 July 2010 (UTC)

The number of digits it has is way too big. Worse, whereas with Graham's number itself you can calculate the last few digits using modular arithmetic, for the number of digits Graham's number has there is no known way to calculate even the last few digits. Georgia guy (talk) 17:51, 1 July 2010 (UTC)
But the article gives the last 500 digits of GN. See subsection: "Rightmost decimal digits of Graham's number". Myles325a (talk) 08:42, 5 July 2010 (UTC)
Let D(G) be the number of decimal digits in G. Some rightmost decimal digits have been computed for G, but not for D(G). — r.e.s. (talk) 15:06, 5 July 2010 (UTC)
I don't understand this. Do you mean that the last digits of Graham's Number have been calculated, but the last digits of the number of digits it has, have not been? Myles325a (talk) 08:35, 6 July 2010 (UTC)
Yes, exactly. The last few digits of the number of decimal digits in Graham's number are unknown. That is to say, the last few digits of log10 g64 cannot be easily calculated. Eebster the Great (talk) 05:59, 7 July 2010 (UTC)
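The growth of these digit counts can be made concrete with iterated logarithms (a sketch):

```python
import math

t3 = 3 ** 3 ** 3                 # 3↑↑3 = 7,625,597,484,987
print(len(str(t3)))              # 13 digits

# 3↑↑4 = 3^(3↑↑3): its digit count is floor(3↑↑3 * log10(3)) + 1
digits_t4 = math.floor(t3 * math.log10(3)) + 1
print(digits_t4)                 # about 3.6 trillion digits

# For G itself, even the digit count D(G) = floor(log10 G) + 1 is a
# number of towering height, and no method is known for its digits.
```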

## Graham's number (GN) without Knuth's arrows.

Being a non-mathematician, I was rather puzzled by the Knuth arrow business. Now, I THINK I've got it straight, but if I've got it wrong, someone with the expertise please tell me so. I came to the conclusion that there is really no need to resort to Knuth to explain how GN is constructed, and the explanation is much simpler for the maths layperson.

3 is the "selected number" in the GN case, though it could be any number at all for other purposes. Now you construct a series of towers going from left to right across the page (as shown in the article). The first of these towers (not really a tower) has just one storey, which has the value 3. The tower next to it has a total of 3 storeys, all of them with the value 3, like this: ${\displaystyle 3^{3^{3}}}$. This gives a total of 7,625,597,484,987. That's 7 trillion +. (All the storeys of all the towers have the value 3)

Now, you are ready to construct the next tower. The ground storey has the value 3, but this tower is 7,625,597,484,987 storeys high, the total value of the previous tower. This is a number that is far larger than the number of atoms you could fit in the observable universe, and we have only reached the 3rd tower! If you could figure out the number which is the total of this 7-trillion-storey tower, you would then be ready to construct the 4th tower. This is a tower of 3s which has a number of storeys equal to the number that you got as the total of the previous tower. And you create as many towers as you like, going across the page from left to right, except the universe would not be big enough to hold a graph of anything except the first 2 towers.

Now my idea is that these towers are given labels that tell you which tower they are. The first is PT1 (Power Tower 1), and then comes PT2, PT3, PT4 etc. Write each tower's label, neatly, underneath each tower. Now we get to the part where we really start to talk about big numbers, the kind of number that is Graham's Number. Suppose we go back to PT2, the tower we talked about earlier: ${\displaystyle 3^{3^{3}}}$. This one adds up to 7,625,597,484,987. Suppose we were now told to consider this number as the number of a particular tower, and to locate that tower; that is, we have to find PT 7,625,597,484,987. We would have to go to the right past PT1, 2, 3, 4… until we finally get to this one. Well, considering that each successive tower makes the previous one look like something hardly above zero, this power tower is something else again. But wait, there's more! Instead of making the incomprehensibly large total of this tower our big number, we have to calculate the total of that tower and find the Power Tower that has THAT number as its label.

Graham's Number is developed on these lines, but it starts from PT4 (not 2) and it is 64 layers deep. That is, you keep finding the total of a tower, finding the Power Tower with that number, finding the total of THAT Power Tower, and then finding the Power Tower with that number and so on, 64 times! At the end, you have your number, and it is so big, no one would be able to even imagine it.

The WP article and various sites on GN don't really give readers much of an idea of just how fast these towers grow. PT3 for example is already 7 trillion + storeys high, and the towers to its right are not just taller, they are exponentially far taller. All the towers from 3 onward have numbers that cannot be written down, and are larger than any collection of objects in the observable universe. All the numbers humankind ordinarily uses are swallowed up in the first couple of towers. Because we cannot really imagine very big numbers, we tend to think of billions, trillions, quadrillions, "squillions" as all much of a muchness, and we tend to think of these towers as merely "bigger than the last one". Pick a tower that is not one of the first couple. Now that tower is greater than all the previous towers stacked onto each other, and multiplied many billions of times over. But it is itself just virtually zero compared to the tower to its right.

You can use this parade of towers to make your own big number. Mine is much larger than GN. For a start instead of 3 as the selected number, I use googolplex: 10^10^100. And my starting tower is PT 10^10^100, and finally I go 10^10^100 layers deep. Now that is a number than makes GN look very small.

Of course, I might have got something wrong in the way I've put this together, but my general point is that, if I am right, then you don't need any arrows to describe this procedure. It's a bit like me telling you that I live at: ${\displaystyle \uparrow \uparrow \uparrow \uparrow \uparrow \uparrow \uparrow }$ Railway Cuttings, Islington, instead of just saying 7 Railway Cuttings, Islington. Why complicate things when you don't have to? Myles325a (talk) 10:22, 6 July 2010 (UTC)

You are on the right track, but the number you are describing is basically g1. Look at the expression of g1 on the page in terms of exponents. To restate using your notation, PT1 is 3, PT2 is 3↑↑PT1 = 3^3^3 = 7,625,597,484,987, PT3 is 3↑↑PT2 = 3↑↑(3^3^3), which is a power tower of 3s about 7 trillion storeys high, and so on. PT3 is already quite large, but to find g1 we need to go much further. Indeed, even PT7,625,597,484,987 is not large enough: g1 is PTn, where n = PT3 = 3↑↑(3^3^3) (which is again that power tower of 3s about 7 trillion storeys high).
But that is just g1 = 3↑↑↑↑3. Note that 3↑3 = 27, 3↑↑3 = 3^3^3 = 7,625,597,484,987, and 3↑↑↑3 = 3↑↑PT2, so you can see how adding up-arrows is very powerful. Each new up-arrow is an enormously more powerful operation than the last one. If that was just 3↑↑↑↑3 = 3↑^4 3 (writing ↑^n for n up-arrows), imagine 3↑↑↑↑↑3 = 3↑^5 3, or 3↑↑↑↑↑↑↑↑↑↑3 = 3↑^10 3! But we can do better. g2 = 3↑^g1 3. That is a lot of up-arrows! Similarly, g3 = 3↑^g2 3, and so on. Each new g is 3↑↑...↑3, where the number of up-arrows is equal to the previous g.
Graham's number is G = g64.
I hope that helped! Eebster the Great (talk) 06:13, 7 July 2010 (UTC)

A nice reply was just posted while I was composing mine, which I'll post anyway because it makes an additional point or two ...
Let x be your fixed "base" of either 3 or googolplex. I think you'll see where you went wrong if you just write your "numbered" power towers in terms of Knuth arrows, noting that x^^n is just a power tower of n x's:
PT1 = x
PT2 = x^^PT1 = x^^x
PT3 = x^^PT2 = x^^x^^x
and so on, giving
PTn = x^^x^^...x^^x with n x's;
that is,
PTn = x^^^n (so PT can be seen as the operator x^^^).
Now with x = 3, it is incorrect to say that Graham's number G is obtained by starting with PT4 and then "finding the total of a tower, finding the Power Tower with that number, finding the total of THAT Power Tower, and then finding the Power Tower with that number and so on, 64 times!" In other words, you're mistakenly saying that G = PT(PT(....PT(PT4)...)) (with 64 "PT"s). The main mistake is to think that each iteration of PT corresponds to going up to the next "level" in the 64-term sequence g_1, g_2, ..., g_64 in which G = g_64. (Also, PT4 = 3^^^4, which is not g_1 = 3^^^^3 = PT(PT3).)
Because of that main mistake, even your proposed number to make G "look small" (with x = googolplex) is very tiny compared to G:
PT(PT(...PT(PT(x))...)) with x "PT"s
= x^^^(x^^^(...x^^^(x^^^x)...)) with x+1 x's
= x^^^^(x+1)
< 3^^^^^3
< g_2.
The general idea that a number as large as G can be expressed by just iterating PT with no need for more Knuth arrows is wrong, in the sense that some kind of special notation would be needed to do essentially what arrows already do so nicely.
r.e.s. (talk) 15:25, 7 July 2010 (UTC)
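The identity PTn = x^^^n can be sanity-checked in code for a tiny base (x = 2) and small n, where the values are still computable. A minimal Python sketch (the function names `tet`, `PT`, and `pentate` are mine, not from the discussion):

```python
def tet(x, n):
    """x^^n: a power tower of n copies of x (tetration)."""
    result = 1
    for _ in range(n):
        result = x ** result
    return result

def PT(x, n):
    """PT1 = x; PTn = x^^PT(n-1), as defined in the post above."""
    if n == 1:
        return x
    return tet(x, PT(x, n - 1))

def pentate(x, n):
    """x^^^n: n-fold iterated tetration, i.e. x^^(x^^(...x)) with n x's."""
    result = 1
    for _ in range(n):
        result = tet(x, result)
    return result
```

For x = 2 this confirms PT(2, 2) = 4 = 2^^^2 and PT(2, 3) = 65536 = 2^^^3; already PT(2, 4) = 2^^65536 is a tower far too tall to evaluate.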
Thanks to both of you for excellent critiques, and so well presented. I should have known that my formulation was too simple. I will think it over. Myles325a (talk) 08:51, 8 July 2010 (UTC)

## Best known upper bound is much less than N

The article states that N = F^7(12) is the best known upper bound, but this is far from the case. From the proof of the Graham-Leeb-Rothschild theorem in "Ramsey Theory", by Graham, Rothschild, and Spencer, one can derive an upper bound of the Graham's number case of about 2^^^18.

Further, in Saharon Shelah's "Primitive Recursive Bounds for Van der Waerden's Numbers", a primitive recursive upper bound was found for the Graham-Leeb-Rothschild theorem, on the order of 2^^^^n. The Graham's number problem is the smallest nontrivial case of Graham-Leeb-Rothschild, so the bound for it may well be smaller, but I haven't been able to determine a precise bound yet.

I would like to hear what others have to say before I make an edit.Deedlit11 (talk) 22:21, 30 September 2010 (UTC)

Yes, this is the most interesting edit someone should make, but we have to be sure. Shelah's paper [primitive recursive bounds for Van der Waerden numbers] is beyond my grasp.

Perhaps Timothy Gowers' 2^2^r^2^2^(k+9) bound [for the Van der Waerden number] has something to say about just this too.

So 2^^^18 would be awesome if you can prove it (or if Graham did it, I don't have the book), but Gowers' (type of) limit (in comparable cases) is much lower still.

Also I think the number Graham originally published should have its own Knuth arrow version on the page too. [Rothschild article, p. 290]

Cheers! Gerard van Novaloka --Gerard van Novaloka (talk) 02:22, 2 February 2011 (UTC)

## A bit off topic

I think about a year ago the TV programme QI mentioned Graham's Number and said it was the best known upper bound to the problem. Wikipedia or QI, the two most perfect sources of knowledge and truth. One of you must be wrong. —Preceding unsigned comment added by 82.6.96.22 (talk) 07:04, 25 October 2010 (UTC)

## Rightmost digits of G (cont'd)

This continues the now-archived discussion "Can we write some of the digits?", which led to creating the section Rightmost decimal digits of Graham's number in the article (2008/9/5). A main point was that for any positive n, all towers 3↑3↑...3↑3 of height greater than n have the same n rightmost decimal digits. (I.e., 3↑↑2 = 27, 3↑↑3 = 7625597484987, 3↑↑4 = ...9387, and so on, thus defining a unique infinite sequence of digits ...2464195387.) Some relevant source material has appeared since then:

The number of rightmost digits of G that are "stable" (i.e., that are in this sequence), say s(G), is both incomprehensibly large and very small relative to the total number of digits of G, say n(G). In fact, g63 << s(G) << n(G). This is because, on the one hand, G = 3↑↑(s(G)+1), so n(G) > log10(G) >> s(G), and on the other hand G = g64 = 3↑^(g63) 3 >> 3↑↑(g63 + 1), so s(G) >> g63 (because the number of stable digits of 3↑↑(d+1) is equal to d).
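The stable digits themselves are easy to compute: reduce a tall enough tower 3↑3↑...↑3 modulo 10^k, reducing each exponent modulo the totient of the current modulus. This is exact here because gcd(3, 10^k) = 1, and every modulus in the totient chain has only 2 and 5 as prime factors, so Euler's theorem applies at every level. A Python sketch (function names are my own choosing):

```python
def totient(m):
    """Euler's totient of m, via trial-division factorization."""
    result, n = 1, m
    p = 2
    while p * p <= n:
        if n % p == 0:
            pk = 1
            while n % p == 0:
                n //= p
                pk *= p
            result *= pk - pk // p
        p += 1
    if n > 1:
        result *= n - 1
    return result

def tower_mod(height, m):
    """(3↑3↑...↑3 with `height` threes) mod m.

    Assumes every modulus in the totient chain is coprime to 3,
    which holds when m is a power of 10."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    # gcd(3, m) = 1, so the exponent may be reduced mod totient(m).
    return pow(3, tower_mod(height - 1, totient(m)), m)
```

Since the totient chain from 10^10 collapses to 1 in a few dozen steps, any height comfortably above k gives the k stable digits; e.g. tower_mod(100, 10**10) yields the ...2464195387 ending quoted above.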

It would be nice to find citable sources relating to the apparent randomness of this digit-sequence, but there seem to be none.
r.e.s. (talk) 19:34, 26 December 2010 (UTC)

## The type of subgraph in question

Graham mentions the sets of crossing edges in Some Ramsey-type results for the n-cube (2009), so the picture seems to be correct, though the explanation is somewhat vague for the novice. In the prepublication Some Ramsey results for the n-cube, Graham refers to Wikipedia for the size of the upper bound and so gives fuel to the legend of Graham's number (which was perhaps just an error by Martin Gardner). Quoting the 2008 prepub:

Let X denote the structure consisting of the set of 6 edges spanned by 4 coplanar vertices of an n-cube. In this case, the occurrence of a monochromatic X is guaranteed once n >= N0, where N0 is a very large (but well defined) integer [see: This Wikipedia Page], sometimes referred to as Graham's number. The best lower bound currently [2007] available for N0 is 11 (due to G. Exoo - personal communication).

So the crossing edges in opposite hypersquares should be taken together as corresponding to a single plane. But even then there is a lot of freedom in the choice of colors. I have compared the original 1971 article of Graham and Rothschild, and the same hypergraph is defined there. The upper bound there is smaller than Graham's number, as noted in the Wiki article above. There probably never was an "unpublished publication" of Graham's, as Gardner mentions in his 1977 Scientific American article, because Graham himself, in the above-quoted prepub, refers to Graham's number as the same number given in the definition on this Wikipedia page.

However, in the 1971 article the hypercube graph (as described above) is just one worked-out example of a general combinatorial scheme. This hypercube with the K4 graph is just a simple representation of a Euclidean Ramsey theory that can produce potentially much larger upper bounds for monochromatic substructures. Perhaps if one of these more complex graphs were worked out, the upper bound would be the Graham's number that we've come to know and love. Maybe Graham made some notes on such a larger structure and forgot about them, and Gardner saw the notes and based his article on those; that's an alternative explanation. --Gerard van Novaloka (talk) 20:11, 12 February 2011 (UTC)

Inconsistent terminology in the above-quoted preprint: On the one hand, it says that "N0 is a very large (but well defined) integer (see: [this Wikipedia article]), sometimes referred to as Graham's number"; on the other hand, it refers to a "best lower bound currently available for N0". Thus, the preprint conflates Graham's number -- the number G which is a particular known upper bound on the as-yet unknown solution to Graham's problem -- with the actual solution itself (which is not known to be "very large").
Aside from this preprint, I seem to recall that some other more-or-less recent papers have (mis-)used the term "Graham's number" as though it is the unknown solution itself (which might in fact be a smallish number), incorrectly citing this WP article for that usage! This article has always presented Graham's number G as the specific very large number that was published by Gardner and attributed to Graham.
r.e.s. (talk) 17:41, 28 February 2011 (UTC)

## Statement of Grahams problem correct?

The article says "What is the smallest value of n for which every such colouring contains at least one single-coloured 4-vertex planar complete subgraph?" Shouldn't it be something like "..contains a single-coloured 4-vertex complete subgraph whose vertices are coplanar" or "..whose vertices lie in the same plane"? Any 4-vertex graph seems to be "planar" by the definition in the linked article. Or am I missing something? — Preceding unsigned comment added by 130.238.58.123 (talk) 15:16, 23 August 2011 (UTC)

## Too technical?

Pending some discussion of the issues here, I've removed the recently-added tag that said the article is too technical for most readers to understand. When the tag was added, the edit-comment asked "how can lay readers understand how large graham's number is?". Here are some points:

• The focus of the present article is "Graham's number" G, as originally published by Gardner in terms of Knuth's up-arrow notation. Any difficulties in understanding this notation should be resolved at the linked article, rather than addressing them in this one. I think the section Magnitude of Graham's number that's already in the article is not "too technical" for anyone who understands the up-arrow notation.
• The focus of the present article is not "Graham's problem", whose solution is bounded above by G. Although some discussion of this problem is needed for the historical context of Graham's number, Graham's problem involves a degree of technical detail that's totally irrelevant to understanding the definition of G. (Previous attempts to "simplify" the statement of Graham's problem have in my opinion backfired.)
• Graham's number G is regarded as incomprehensibly large, and no amount or kind of explanation is likely to overcome this. Additional attempts to convey how large it is will not make the incomprehensible comprehensible.
r.e.s. (talk) 17:49, 2 March 2011 (UTC)
Very good points. I agree. This article seems to be as non-technical as possible given that the subject is a number so large, technical notations are necessary to define it. Cheers, — sligocki (talk) 01:31, 5 March 2011 (UTC)
Hi everyone. I put up the original "too technical" tag. I understand the idea of an article being as "non-technical" as it can be. But I would encourage editors here to think harder about it. It's not really as non-technical as it can be. Rather, it's as non-technical as you're willing to make it. Now, don't get me wrong, there is real value in describing a topic in technical detail. Furthermore, I know that many topics can only really be described adequately if technical language is used. However, if the lay reader is not really understanding the topic to even a reasonable degree, then I think we ought to try harder.
As with many mathematics articles on Wikipedia, this article is difficult, if not impossible, to truly understand without a degree in mathematics. Almost all of the information below the lede is beyond the "understandability" of a lay person. One needs to know perhaps obscure (to the lay person at least) mathematical notation and concepts. And the lede itself tells the lay person only that Graham's number is unimaginably large. I understand the pitfalls of describing something that is so large that it can never even be written out. I also understand the pitfalls of explaining something that can only be described in a circumspect manner -- such as an unimaginably large number. But I think that lay readers still deserve a better basic explanation than they are receiving here.
These are just some thoughts. I'd love to hear the thoughts of some of you who have contributed to and are invested in this article. Cheers, ask123 (talk) 20:27, 5 March 2011 (UTC)
Also, in response to the previous informative post by r.e.s., the section Magnitude of Graham's number is definitely too technical for the lay reader to understand without outside references. The mathematical notation used in that section does not magically become digestible to most readers simply because the concept of Graham's Number is rather incomprehensible to begin with. I get it that the number is incomprehensibly large. All I'm saying is that 4/5ths of this article is made up of jargon and notation that most people don't get. Cheers, ask123 (talk) 20:35, 5 March 2011 (UTC)
You make it sound like there is a lot of jargon, but the only jargon is Knuth's up-arrow notation. And the up-arrow notation is simply enough defined:
a^^b = a^(a^(a^(...(a^a)...))) with b a's.
a^^^b = a^^(a^^(a^^(...(a^^a)...))) with b a's.
a^^^^b = a^^^(a^^^(a^^^(...(a^^^a)...))) with b a's,
and so on.
I'm not sure what there is to understand. How would you make the above definitions more accessible to a lay audience? Deedlit11 (talk) 10:58, 13 April 2011 (UTC)
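Deedlit's definitions translate directly into a short recursive program. A sketch in Python (the function name `arrow` and the b = 1 base case follow the standard convention that a followed by any number of arrows and then 1 equals a):

```python
def arrow(a, k, b):
    """a ^..^ b with k up-arrows (Knuth's notation).

    k = 1 is ordinary exponentiation; each extra arrow iterates
    the previous operation, expanding right-associatively."""
    if k == 1:
        return a ** b
    if b == 1:
        return a  # a ^..^ 1 = a for any number of arrows
    return arrow(a, k - 1, arrow(a, k, b - 1))
```

Only toy inputs terminate: arrow(3, 1, 3) = 27, arrow(3, 2, 3) = 7,625,597,484,987, arrow(2, 3, 3) = 65536; arrow(3, 3, 3) is already a power tower of 7,625,597,484,987 threes and cannot be evaluated.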
First, I disagree that the up-arrow notation is simply enough defined. It is defined through the writing out of mathematical notation, which is not adequate for most readers. Second, jargon was the wrong word -- the wrong way to describe it. I mean mathematical notation and jargon. Most people cannot readily understand the mathematics in that section. I know there may be no other way to express it, but, given that, at least provide adequate linkage (as has been done with the word "tetration") to other wiki articles that will help readers get through it. And, also, it would be helpful to offer more definitional explanations -- be more verbose so that readers can understand through language. Again, the word tetration is a good example of a term that can quickly be defined in a brief appositive. ask123 (talk) 19:27, 18 June 2011 (UTC)

I can't comprehend why a layperson would feel the need to understand this number. Outside of the field of pure math, there really is no value to this information. Nobody will ever need to use it to measure anything. I would imagine that studying and understanding the relevant information (which appears to be adequately Wikilinked in the article) is the only way to appreciate the topic. I don't want to tell anyone what they can and can't learn, but if you don't have a math-inclined mind, you really have no reason to be reading this. Some things just inherently can only be dumbed down so far. InedibleHulk (talk) 04:36, 18 June 2011 (UTC)

Your point is taken, Hulk, but wikipedia is not designed as an advanced encyclopedia. It is meant as an encyclopedia for all. Ideally, there is not supposed to be any topic herein that is beyond the understandability of the average reader. The reader doesn't need to become an expert, but at least understand the topic adequately. If you are interested in advanced and technical encyclopedias, there are other wikis for that. ask123 (talk) 19:19, 18 June 2011 (UTC)

There are a lot of Wikipedia math and physics articles that are full of notation and are extremely useful for those studying math and physics. If you are not interested in subjects like this in general, then you shouldn't be reading this article. Besides, if you really want to understand how big this number is, imagine that number of particles in the universe (10^80); for each particle, put a universe inside, and now you have a lot more particles. For each of the particles inside the universe that's in another particle, put another universe in there. Repeat the process the number of times that equals the number of particles in the universe. Then repeat this repetition the number of times that equals the number of particles in the universe. And even this construction wouldn't be large enough to compare to Graham's number. Can you think of a way to simplify this description? — Preceding unsigned comment added by 99.47.69.214 (talk) 11:59, 4 December 2011 (UTC)

The underlying reason why any "ordinary" process of comparison is bound to fall short (e.g., repeatedly replacing particles by "ordinary" universes of particles an "ordinary number" of times, then repeating all of that a similar number of times), is that it repeats at each stage no faster-growing operations than those of ordinary arithmetic (add, multiply, exponentiate). In contrast, Graham's number involves two levels of "acceleration into the large": (1) The ordinary operations of arithmetic (+, *, ↑) are extended into a sequence of ever-more-rapidly-growing operations (using Knuth up-arrows: 3↑n, 3↑↑n, 3↑↑↑n, ...), and (2) a procedure called diagonalization is used to leap over this sequence into one that's a version of the (diagonalized) Ackermann function (i.e., 3↑^n 3). See my earlier comments just below, in which f_k(n) = 3↑^k n, and f_ω(n) = 3↑^n 3.
r.e.s. (talk) 18:46, 4 December 2011 (UTC)

--
In the article, Graham's number G is defined in terms of the gk-sequence, which involves iterating a certain function defined in terms of "base-3" up-arrow notation, and this up-arrow notation can itself be neatly defined using iterated-function notation. So it might be more to the point to just "cut to the chase" and define G directly in terms of iterated function notation alone. Here's the equivalent definition of G that begins by iterating 3× (i.e. iterated multiplication by 3, otherwise known as base-3 exponentiation):

For any integer n > 0,
f_1(n) ::= (3×)^(n-1)(3)  ( = 3^n)
f_k(n) ::= f_(k-1)^(n-1)(3) for any integer k > 1
f_ω(n) ::= f_n(3)

Graham's number G ::= f_ω^64(4).


There is really only one simple standard notation that a reader needs to grasp here: for any function name f, f^n(x) means f(...f(f(x))...) with n fs. (I used the function name f_ω only to show this as a kind of fast-growing hierarchy — it might be less confusing to use a function name like g instead of f_ω. Note that f_ω^k(4) is what the article calls g_k.)
r.e.s. (talk) 16:54, 29 June 2011 (UTC)

PS: If even the use of superscript notation is considered too confusing, then that too can be replaced (as in Deedlit's comment above) with more explicit but more cumbersome forms as follows:

For any integer n > 0,
f_1(n) ::= 3×(...3×(3×(3))...)  with n-1 "3×"s   ( = 3^n)
f_k(n) ::= f_(k-1)(...f_(k-1)(f_(k-1)(3))...)  with n-1 "f_(k-1)"s, for any integer k > 1
f_ω(n) ::= f_n(3)

Graham's number G ::= f_ω(...f_ω(f_ω(4))...)  with 64 "f_ω"s


Whatever notational system is used to express them, the essential mathematical concepts here are (1) recursion of functions (i.e., starting with one basic function and then defining an ordered collection of functions in which each is defined in terms of those previously defined), and (2) iteration of functions (i.e. repeating the application of a function a specified number of times).
r.e.s. (talk) 15:53, 30 June 2011 (UTC)
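The definitions above can be spot-checked for tiny arguments, where f_k(n) should agree with 3 followed by k up-arrows and then n. A Python sketch (`iterate` and `f` are my names; nothing here computes G itself, of course):

```python
def iterate(fn, times, x):
    """fn(fn(...fn(x)...)) with `times` applications of fn."""
    for _ in range(times):
        x = fn(x)
    return x

def f(k, n):
    """f_k(n) as defined above: f_1(n) = 3^n, and for k > 1,
    f_k(n) = f_{k-1} iterated n-1 times starting from 3."""
    if k == 1:
        return 3 ** n
    return iterate(lambda m: f(k - 1, m), n - 1, 3)
```

For example f(2, 3) = 3↑↑3 = 7,625,597,484,987, and f(3, 2) = 3↑↑↑2 gives the same value, as expected. The diagonal f_ω(n) = f(n, 3), and hence G = f_ω iterated 64 times on 4, is far beyond evaluation.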

Just a note, I was wondering if there were plans to show the number itself in this article? I think it would help the non technical among us to understand it. 83.244.154.99 (talk) 12:03, 24 July 2012 (UTC)

Some theories maintain that the universe is expanding. If that's the case, once it has expanded by a factor of, well, roughly Graham's number from its current size, we can make plans to fill it with a digital representation of that number. Until then, I'm afraid you'd have to make do with the description found in the "Definition" section of the article. Owen× 23:21, 24 July 2012 (UTC)
That would take some time for the universe to expand that much, would it not? I'm guessing again roughly a Graham's number of lifetimes of our universe. I wonder if by then things like time, space and matter would even exist as we know them. But yes Owen, your response gives some appreciation for how large the number is. Our observable universe is insignificantly tiny, unimaginably minuscule compared to the vastness required to show even the mere number of Knuth up-arrows in the Knuth's up-arrow notation representation of the number. And of course the number of up-arrows is unfathomably tiny compared to the number of digits in the decimal representation of Graham's Number. --RacerX11 Talk to meStalk me 00:06, 25 July 2012 (UTC)

## Scale: Comparisons of Graham's Number, Skewes' Number, Moser's Number, and Googolplex

On space-related Wikipedia articles, the authors often provide the reader with analogies to make the magnitude of the article subject more readily understandable. So, for example, in the Naos article, the authors state that the star is so luminous that the Earth would need to be 450 AU away to maintain traditional temperature ranges, etc. The VY Canis Majoris and Betelgeuse articles also do a great job of comparing their sizes with the Sun. (I believe that the Betelgeuse article compares the Sun to an orange, Sirius to a soccer ball, the Earth to a pearl, and Betelgeuse to a stadium.)

Can someone provide a similar analogy between these four numbers? If a googolplex were represented as '1', how would the other numbers compare? Would Skewes' Number be 1,000? 10,000,000? Another googolplex? I imagine this would go a long way towards making this number more comprehensible to the reader. 70.138.217.107 (talk) 20:41, 25 November 2011 (UTC)

It's possible to do this with Googolplex and Skewes' Number, but not with truly large numbers like the other two. The ratio between Graham's Number and Moser's Number is, for all practical purposes, roughly equal to Graham's Number. The same will hold true even if you took, say, the logarithm of either or both numbers. Any comparison that is not, itself, built on recursion is doomed to be meaningless when dealing with such numbers. And again, comparing either of them with Skewes' or with Googolplex is like comparing them with '1'. Owen× 20:50, 25 November 2011 (UTC)
Thanks for the perspective! Although there doesn't seem to be a simple way to compare the two larger numbers with a googolplex or with Skewes' number, I still found your explanation to be really helpful. 70.138.217.107 (talk) 02:26, 1 December 2011 (UTC)

## Smallest factor of G-128

Is there any project anywhere online that reveals people are searching for the smallest prime factor of G-128?? For example, there's http://www.oddperfect.org where people are searching for odd perfect number requirements. (Note: G means Graham's number.) Georgia guy (talk) 14:45, 28 December 2011 (UTC)

By "G-128" do you mean "Graham's number minus 128"? If so, why 128? If not, what's the definition? — r.e.s. (talk) 02:46, 29 December 2011 (UTC)
Yes, that's what I mean. In an archived part of this talk page, it reveals that G-H has known prime factors for all 0 <= H < 128. Georgia guy (talk) 13:52, 29 December 2011 (UTC)
Ah, I'd forgotten about that archived section. I'm unaware of any projects of that kind, though. — r.e.s. (talk) 16:28, 29 December 2011 (UTC)

## Rightmost (frozen) digits calculation

In my book (unfortunately in Italian) on tetration and other hyperoperators, the "frozen digits" distribution is described for every natural base. A preview (introduction) of the book can be found here: http://www.uni-service.it/images/stories/product_uploaded_file/Coda%20Preview.pdf So it's possible to calculate the p-adic convergence range of Graham's number too [p. 13].

In this case (n := 3), the convergence speed is equal to the height of the iterated tetration tower minus one (i.e., [hyper-4 exponent] - 1). Let this height be k+1; then, for example, G ≡ G^^G (mod 10^k). So k, the number of "frozen" (calculable) digits, has to satisfy the following inequality:

${\displaystyle g(63)<k<\log _{10}(G)}$

Could I have the permission to add this info/reference to the Graham's number page?

Best regards, Marco Ripà — Preceding unsigned comment added by Marcokrt (talkcontribs) 15:47, 19 January 2012 (UTC)

I suggest we limit to 500, because:
• We don't want Wikipedia articles to be huge.
• Somewhere deep along the line, the digits will start to mismatch the infinite 3^3^3^3^3...3^3^3^3^3 sequence; we don't have a way to tell where this point is.
Georgia guy (talk) 15:50, 19 January 2012 (UTC)
Marco:
What you're calling "frozen digits" are evidently the ones I've called "stable digits", e.g. in the recently-archived section Rightmost digits of G (cont'd) that was at the top of this very talk-page when you posted. Note that the upper bound in your inequality can be made tighter, as explained there. (I think "Georgia guy" may have misunderstood your intention, if I'm correct that you merely want to include in the article such a bounding inequality on the number of stable/frozen digits.) I haven't yet looked at your publication, so I have no comment about whether it is adequate as a source for citing such a result.
r.e.s. (talk) 16:13, 19 January 2012 (UTC)

Hello again. My book is also about the randomness of the tetrates' digits (i.e. chaos theory and so on)... not only about perfect convergence (a very small amount of info can be found here http://math.eretrandre.org/tetrationforum/showthread.php?tid=720). BTW, IMHO, we could just add the point that the stable digits correspond (exactly) to k, where k+1 is the height of the equivalent hyper-4 hyperexponent. This is a specific natural number, not only an upper bound. Let me know whether this info would be of interest for the page.

Marcokrt (talk) 18:26, 19 January 2012 (UTC) — Preceding unsigned comment added by Marcokrt (talkcontribs) 16:27, 19 January 2012 (UTC)

## Visualizing Graham's number

The number of times you have to perform the "How many Knuth arrows are needed??" operation on Graham's number is finally small enough to write out; it is only 64. But have numbers for which the number of times you have to perform the "How many Knuth arrows are needed??" operation is itself too large to write out ever been used?? There have been a few references to numbers larger than Graham's number in some sections of the talk page. Do any of these numbers meet this criterion?? Georgia guy (talk) 16:52, 19 January 2012 (UTC)

For example, my BOX_M~ (a specific natural number involving "human problems" solutions) satisfies this criterion: http://www.scribd.com/doc/77714896/The-largest-number-ever-il-numero-dei-record The busy beaver function "Σ" grows even faster (it is non-computable). BTW I don't find this outcome very interesting for this section :)

Marcokrt (talk) 18:24, 19 January 2012 (UTC) — Preceding unsigned comment added by Marcokrt (talkcontribs) 18:19, 19 January 2012 (UTC)

Georgia guy:
In the fast-growing hierarchy we have f_ω^64(6) > G, where f_ω is a version of Ackermann's function and corresponds to the operation you're asking about. If we look at a much larger ordinal index, say ε0 instead of ω, then we'll encounter numbers like f_(ε0)(2), which certainly meets your criterion -- even with the small argument of 2, it requires f_8(8) > 2↑^7 8 iterations of f_ω.
Marko:
I think it's not hard to show that your number satisfies ${\displaystyle Box_{-}{\widetilde {M}}\ll f_{\epsilon _{0}}(3)\,\!}$, where f_(ε0) is as just described (linked) above.
(Please sign your posts by typing four tildes (~~~~) at the very end.)
r.e.s. (talk) 18:30, 20 January 2012 (UTC)

Ok r.e.s., even if probably ${\displaystyle Box_{-}{\widetilde {M}}>f_{\epsilon _{0}}(3)\,\!}$, it's easily possible to surpass its size, for example considering the one at the bottom of this page: http://www.polytope.net/hedrondude/array.htm My number is based on a set of philosophical/empiric assumptions and its definition could (abstractly) be useful for constructing upper limits for particular kinds of "human condition related problems"... it's quite similar to a universal outlet, even if far less interesting ;)

Best regards Marcokrt (talk) 00:08, 21 January 2012 (UTC)

## Parsing?

There was something in the article on Knuth's up-arrow notation which made me pause -- a single arrow is ordinary exponentiation? So... if I've parsed that -- and consequently the contents of this article -- correctly, then I think the following expansion defines G1.

(I'm using 'U' for 'Up Arrow')

G1 = 3 U U U U 3

3 U 3 = 3^3 = 27

3 U U 3 = 3 U (3U3) = 3^27 ~= 7.6 e12

3 U U U 3 = 3 U (3 U (3U3)) = 3 U (3^27) = 3^(3^27) ~= 3^(7.6e12) = some monster that is ~3.6*10^12 digits long ~= 10^(10^(10^1.1))

3 U U U U 3 = 3 U (3 U (3 U (3U3))) = 3 ^ (some monster) = G1, "a jillion"

and then 3 (a jillion instances of U) 3 = G2, "a kajillion", etc etc up to G64.

Is this correct? DS (talk) 14:05, 11 September 2012 (UTC)

Yup! I think you got it. Owen× 14:37, 11 September 2012 (UTC)
Yes and No. It is correct that g1 = 3↑↑↑↑3, g2 = 3↑↑...↑3 (with g1 ↑s), and so on up to g64; however, your evaluations of 3↑↑↑3 and 3↑↑↑↑3 are way too small.
For example, 3↑↑↑3 = 3↑↑3↑↑3 = 3↑3↑3↑...3↑3 where the number of 3s is 3↑↑3 = 3↑27. This is much larger than your 3↑3↑3↑3, which is actually just 3↑↑4.
r.e.s. (talk) 01:15, 12 September 2012 (UTC)
R.e.s. is right; my mistake. Owen× 11:54, 12 September 2012 (UTC)

## Up arrows

Just a suggestion. The article already uses → for Conway's notation. I wonder if it would be an idea to use the unicode up arrow too: ↑↑. Would make it easier for readers to copy / paste the formulae, I just wanted to post to facebook about Graham's number and remade one of the formulae as 2↑↑(2↑↑(2↑↑9)) which you can use anywhere that supports unicode.

I'm not going to actually do it because there are so many formulae in the article and for consistency of style you'd need to do the whole thing. But might be worth considering as something to do? Robert Walker (talk) 09:00, 16 August 2013 (UTC)

## Improved bound

Look at this article. A much stronger upper bound is presented there for the problem in which Graham's number was used. 31.42.233.14 (talk) 22:23, 18 March 2013 (UTC)

It is a wiki article on Wikia, not a peer reviewed source. Probably not even considered reliable by Wikipedia guidelines. SpinningSpark 15:52, 17 August 2013 (UTC)

## What does this mean?

I've read and re-read this text, but I have no idea what Graham's number explains. Could someone write something in plain English, for the non-professional, about what Graham's number is supposed to solve? — Preceding unsigned comment added by 92.32.104.205 (talk) 11:22, 18 October 2013 (UTC)

Maybe this will help ... Here's a formulation of the problem in terms of row-vectors (one dimensional arrays of n numbers), which I used in a program to study this problem with arbitrary colorings (OR):
A unit n-cube has 2^n corners, corresponding to the 2^n binary row-vectors of length n. (E.g. for n = 3, these are the 2^3 = 8 vectors [0 0 0], [0 0 1], [0 1 0], [0 1 1], [1 0 0], [1 0 1], [1 1 0], [1 1 1], and similarly for other values of n.) A coloring is just an assignment of either 'red' or 'blue' to each possible pair of these vectors. Let's call a given coloring eventful if there are four such vectors (say a,b,c,d) with the following two properties:
(1) the six possible pairs of the four vectors (i.e. the pairs {a, b} {a, c} {a, d} {b, c} {b, d} {c, d}) are all assigned the same color, and
(2) if a difference vector is formed from each of the six pairs (subtracting one vector of the pair from the other, in either order, e.g. {a, b} → a - b, {a, c} → c - a, etc.), then any two of these difference vectors are sufficient to generate the other four by linear combination. (This is an algebraic way of saying that the points corresponding to the four vectors are coplanar. There are well-known algorithms for checking this.)
NB: The number of pairs to be colored is 2^(n-1)(2^n - 1), so the number of possible colorings is 2^(2^(n-1)(2^n - 1)), which gets large very quickly.
Here's the behavior as n increases ...
n = 1:   0% of the possible colorings are eventful (because there are fewer than 4 corners),
n = 2:  ~3% of the possible colorings are eventful (2 of the 64 possible colorings),
n = 3:  ~30% of the possible colorings are eventful (estimated by random colorings),
n = 4:  ~95% of the possible colorings are eventful (estimated by random colorings),
5 ≤ n < N*:  the percentage of colorings that are eventful approaches (but does not reach) 100%,
n ≥ N*:  100% of the possible colorings are eventful, where N* is some unknown number greater than 12.
The existence of the number N* was proved by Graham & Rothschild, and "Graham's number" (G) was specially invented by Graham as an easily described upper bound on N*. (Maybe the number N* itself should be called the "Graham-Rothschild number" to distinguish it from G.)
r.e.s. (talk) 22:18, 23 October 2013 (UTC)
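For n = 2 the procedure above can be brute-forced exhaustively, confirming the "2 of the 64 possible colorings" figure. A Python sketch (all names are mine); coplanarity is tested via the rank of the difference vectors, as described:

```python
from itertools import combinations, product
from fractions import Fraction

def rank(rows):
    """Rank of an integer matrix, by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def coplanar(a, b, c, d):
    """Four points are coplanar iff their difference vectors span at most a plane."""
    return rank([[x - y for x, y in zip(p, a)] for p in (b, c, d)]) <= 2

def eventful_count(n):
    """Number of 2-colorings of all vertex pairs of the n-cube that contain
    a monochromatic coplanar 4-vertex complete subgraph."""
    verts = list(product((0, 1), repeat=n))
    pairs = list(combinations(verts, 2))
    index = {p: i for i, p in enumerate(pairs)}
    quads = [q for q in combinations(verts, 4) if coplanar(*q)]
    count = 0
    for coloring in range(2 ** len(pairs)):
        if any(len({(coloring >> index[p]) & 1
                    for p in combinations(q, 2)}) == 1
               for q in quads):
            count += 1
    return count
```

eventful_count(2) returns 2, matching the figure above; n = 3 already involves 2^28 colorings, which is why random sampling rather than enumeration was used there.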
I think the IP was asking for something to go in to the article rather than an explanation here. Personally, I think the explanation already in the article is easier to understand than the above. SpinningSpark 15:30, 24 October 2013 (UTC)
You are right. This is even harder to understand. It is written neither in plain English nor for the non-professional. — Preceding unsigned comment added by 78.78.76.98 (talk) 07:33, 28 October 2013 (UTC)
Let me try in a thoroughly non-formal way that will probably annoy the hell out of every real mathematician reading this. For n=2 draw a square of dots (four dots). Draw lines joining every dot to every other dot. How many rectangles can be found in this figure? Only one of course, the original square. Now colour every line either black or red. Can you find a colouring scheme so that no rectangle is all the same colour? Trivially easy of course since there is only one rectangle. That one is a pass, meaning the critical value of n is more than 2. Now draw a cube of eight dots (n=3) and join every dot to every other dot. This is no longer trivial because besides the six squares of the cube faces there are also rectangles on the diagonals. It may be non-trivial but it is still pretty easy to find a colouring scheme so that no rectangle is all one colour. Pass that one too; the critical value of n is greater than 3. For n=4 one must draw a four-dimensional cube, a tesseract or 4-cube. This is starting to get hard because rectangles can be found all over the place, but it is still possible to do. Carry on increasing n for higher and higher dimensions. Eventually you will come to a value of n where it is impossible to colour every rectangle in mixed colours. Graham managed to prove that this value of n cannot be greater than Graham's number (although it may well be a lot smaller). Graham's number is an upper bound on this critical value of n. Regards, SpinningSpark 09:31, 28 October 2013 (UTC)
My reply above is not intended for the general reader, but is for anyone acquainted with (or willing to learn) the most-basic vector/array concepts. It is an explanation of the problem in terms of simple arrays of integers, completely avoiding the difficulty of visualizing hypercube geometry. Saying "just do with 4 & higher dimensional hypercubes what was done with a cube" does not convey a clear idea of *how* this is to be actually carried over to the higher dimensions, whereas the above explanation does describe the *how* (in terms of integer arrays), which makes it slightly technical.
r.e.s. (talk) 20:30, 30 October 2013 (UTC)
My question was if someone could write something in the article for the general reader. Not for the professional. — Preceding unsigned comment added by 92.32.106.60 (talk) 12:29, 6 November 2013 (UTC)

## The Biggest number (with a name)

It seems that Graham's number is the biggest number with a name, because numbers bigger than it don't have a name, or do they?--RicHard-59 (talk) 12:05, 16 January 2014 (UTC)

I removed it because it was cited to Wiki Answers. It might be true, but answers from anonymous people on the internet are the epitome of an unreliable source. Besides, these things change very rapidly; it is quite easy to find sources that say the largest number ever needed to solve a genuine problem is the much smaller Skewes' number. Another question is what counts as naming a number? I am pretty sure that Ronald Graham did not name the number in his original paper. If just one person referred to one of these larger numbers by the name of the original author, do we then consider it named? It is not a definite thing like christening; a name gradually develops by consensus. Thus, we need reliable sources to provide the synthesis for us. SpinningSpark 13:27, 16 January 2014 (UTC)