Talk:Exponentiation by squaring


Maybe you should add a link to the peasant multiplication method described in Ancient_Egyptian_multiplication.


—Preceding unsigned comment added by 85.250.163.119 (talk) 10:21, 23 August 2008 (UTC)

I guess we should present the iterative version of this algorithm:

power(x, n), for non-negative n:
    assign 1 to result
    as long as n is positive
       assign result*x to result if n is odd
       assign x*x to x
       assign n divided by 2, rounded down to the nearest integer, to n
    return result
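
In Python, the same scheme reads roughly as follows (an illustrative sketch, assuming a non-negative integer exponent; the name power is only for illustration):

def power(x, n):
    """Iterative exponentiation by squaring for a non-negative integer n."""
    result = 1
    while n > 0:
        if n % 2 == 1:       # n is odd: fold the current base into the result
            result = result * x
        x = x * x            # square the base
        n = n // 2           # halve the exponent, rounding down
    return result

assert power(3, 13) == 3 ** 13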


Robert Dober 2003-07-05 MEDST

This is the 2^k-ary method with k=1, so it is covered. However, the current presentation hides this; when I have time, I will improve it. Fizban99 (talk) 11:07, 23 January 2010 (UTC)


Never mind, and there was even an error in my algorithm. Really well done, Robert; sorry for the noise :(

Robert Dober 2003-07-05 MEDST


Article incomplete

One aspect of this subject not mentioned is that this binary algorithm doesn't always reach the target power in as few multiplications as possible. For example:

1. x^62 => x^31*x^31
2. x^31 => x*x^30
3. x^30 => x^15*x^15
4. x^15 => x*x^14
5. x^14 => x^7*x^7
6. x^7 => x*x^6
7. x^6 => x^3*x^3
8. x^3 => x*x^2
9. x^2 => x*x

but assuming we remember all exponents we've previously created...

1. x*x => x^2
2. x^2*x^2 => x^4
3. x^4*x^4 => x^8
4. x^8*x^8 => x^16
5. x^4*x^16 => x^20
6. x^20*x^20 => x^40
7. x^20*x^40 => x^60
8. x^2*x^60 => x^62

This takes one step fewer. I have some more information on this if it would be interesting to have here. This isn't exactly exponentiating by squaring, but it is along the same lines.
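
To make the comparison concrete, here is a small Python check of both counts for n = 62 (an illustrative sketch):

# Binary method: one squaring per bit after the leading bit, plus one extra
# multiplication per additional set bit. For n = 62 = 0b111110 that gives
# 5 squarings + 4 extra multiplications = 9, matching the 9 steps above.
n = 62
print(n.bit_length() - 1 + bin(n).count("1") - 1)   # 9

# The hand-built chain above: 1, 2, 4, 8, 16, 20, 40, 60, 62.
# Every element after the first is the sum of two earlier elements,
# so it costs len(chain) - 1 = 8 multiplications.
chain = [1, 2, 4, 8, 16, 20, 40, 60, 62]
assert all(any(a + b == c for a in chain[:i] for b in chain[:i])
           for i, c in enumerate(chain) if i > 0)
print(len(chain) - 1)                               # 8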

---Jay

Yes, this would be interesting! Especially if there is (as I would expect) a quick systematic way to get at the fastest method, then this should be mentioned in the article as a variation. (And even if there's not a fast alternative algorithm, then this fact can still be mentioned.) Is this the addition chain exponentiation that Henrygb linked? -- Toby Bartels 19:58, 7 Aug 2004 (UTC)

Yes, both of these are considered addition chains. It is considered hard to generate minimal addition chains (where one takes the absolute least number of steps required to perform the exponentiation). In other words, there's a ton of setup that goes into generating these minimal chains, but they execute faster than any other addition chain. That makes them useless when only a single exponentiation is being performed, as so much time is spent in the setup. Binary addition chains (which is what this article is about) require no setup and are close enough to minimal to make them useful. Note that in the above example, there was only a one step difference. Such a binary addition chain executes somewhat more slowly than a minimal-length chain, but the time saved in skipping initial setup causes the net speed to be in the binary chain's favor. -- Vesta 09:12, 16 Aug 2004 (UTC)
As an aside, by definition exponentiation by squaring is a variation of addition chain exponentiation, not the other way around. :) -- Vesta 09:14, 16 Aug 2004 (UTC)

Iterative version useful?

The iterative version doesn't use recursion, which increases the speed even further.

Is it useful at all, though? To take the example from the article, x^1000000 requires 41 multiplications (and even fewer recursive calls than that). If you calculate x^1000000 for any x bigger than 1, the relative time spent on calls will be completely negligible. Fredrik | talk 16:40, 17 Feb 2005 (UTC)

If the function is executed every millisecond or so, it's probably very useful. 190.60.93.218 (talk) 17:27, 9 September 2013 (UTC)

Those code snippets are "yowch"

The following Haskell code implements this algorithm in just 3 lines of code:

power b 0          = 1
power b e | even e =     power (b*b) (e `div` 2)
power b e | odd e  = b * power (b*b) (e `div` 2)

The following code is tail-recursive, so it uses less stack space:

power' r b 0          = r
power' r b e | even e = power'  r    (b*b) (e `div` 2)
power' r b e | odd e  = power' (r*b) (b*b) (e `div` 2)
power    b e          = power'  1     b     e

The above will work for any type of b that the * operator is defined on (any type in the Num class). It can be generalized to use any function in place of multiplication:

power f r b 0          = r
power f r b e | even e = power f  r      (f b b) (e `div` 2)
power f r b e | odd e  = power f (f r b) (f b b) (e `div` 2)

The original function is now power (*) 1, and, say, multiplication can be implemented as power (+) 0. --Ihope127 02:13, 12 April 2006 (UTC) (edited 14:43, 12 April 2006 (UTC))

Far too many examples

The article needs to be severely cut down. Most Wikipedia math articles could use more examples, but this page has far too many. The same ground is covered repeatedly. 165.189.91.148 20:54, 10 May 2006 (UTC)

Text Concatenation

This algorithm is unsuitable for repeating a string by concatenation. Instead, the following algorithm is more efficient:

function repeat ( s, n ) {
  if ( s == "" || n < 1 )
    return "";
  if ( n == 1 )
    return s;
  var res = s;
  // keep doubling until res holds at least half of the n copies
  for ( var i = 2 ; i < n ; i *= 2 )
    res = res + res;
  // append the prefix that is still missing to reach exactly n copies
  return res + res.substr( 0, s.length * n - res.length );
}

Results of binary decomposition assume a slightly different algorithm

The case for n=1 is missing in the original definition of the algorithm. I realize it's probably obvious, but it should be included for completeness. In fact, why bother restricting the odd case to n>2 at all (it reduces to x anyway for n=1)? Why not just use what's below instead?


\mbox{Power}(x,\,n)=
  \begin{cases} 1, & \mbox{if }n\mbox{ = 0} \\ 
                \mbox{Power}(x^2,\,n/2), & \mbox{if }n\mbox{ is even} \\
                x\times\mbox{Power}(x^2,\,(n-1)/2), & \mbox{if }n\mbox{ is odd}
\end{cases}

But I have a bigger concern, and that's the fact that the definition given above doesn't result in an expression with the same economy that the binary decomposition does. When I found that the binary decomposition of the expression results in one with fewer multiplications, I was confused. I asked myself why a better definition wasn't given instead, one which results in the same number of multiplications as the binary decomposition, but without the intermediate expression. It seems cumbersome to apply the above definition to get one expression, then go through the manual process of binary decomposition of that to get a more economical expression (which is the way it was done in the examples in the article). So why not use the following definition instead:


\mbox{Power}(x,\,n)=
  \begin{cases} 1, & \mbox{if }n\mbox{ = 0} \\ 
                (\mbox{Power}(x,\,n/2))^2, & \mbox{if }n\mbox{ is even} \\
                x\times(\mbox{Power}(x,\,(n-1)/2))^2, & \mbox{if }n\mbox{ is odd}
\end{cases}

For instance, in the case of x^100, the expression it produces requires 8 multiplications, while the expression produced by the first definition requires 15 multiplications. (For either definition, an extra multiplication is saved by recognizing that Power(x,1)=x and treating it as a fourth case.)
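
As a rough check of the 8-multiplication count for the second definition, a small Python sketch (the helper mults is purely illustrative) that treats Power(x,1)=x as the fourth case:

def mults(n):
    """Multiplications used by Power(x, n) = (Power(x, n//2))^2,
    times x when n is odd, with Power(x, 1) = x as an extra base case."""
    if n <= 1:
        return 0
    # one multiplication for the squaring, plus one more when n is odd
    return (1 if n % 2 == 0 else 2) + mults(n // 2)

print(mults(100))   # 8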

Does anyone have any objections to me updating the definition in the article with this updated one? I think it would reduce some confusion.

--Paul 06:53, 7 March 2007 (UTC)

Agreed. Also, I fail to see the need for multiple recursive definitions on the page. I merged the content. Fizban99 (talk) 10:54, 23 January 2010 (UTC)

Text application

I don't understand how the text application relates to the article at all. I recommend deletion.

R00723r0 08:58, 15 July 2007 (UTC)

Exponentiation can be viewed as multiplying exponents. Multiplying can be viewed as adding exponents (i.e., logarithms). The text application leverages this by viewing concatenation as a variant of multiplication. Then the same techniques that are used for exponentiation (and thus that are available to multiplying exponents where "multiplying exponents" is just another name for exponentiation) are now available to multiplying strings. Thus it is quite related and should stay as is. —optikos (talk) 03:48, 17 June 2008 (UTC)

No, R00723r0 is quite correct. This is original research that is not related to exponentiation. Please provide a citation of where it has been published before that shows the relation. Furthermore, it is a terrible way to do string repetition, as it assumes that the copy/append operations are unit-time, whereas in most implementations they would be linear time in the length of the operands. As a result, in any runtime where the linear component dominates the fixed start-up cost, or for large enough operands (i.e. asymptotically), this method is *slower* than the naive approach of allocating a buffer and then looping over the repeated string to fill it. I have removed this section pending some sort of citation to show that it is not original research. Amoss (talk) 12:38, 9 April 2009 (UTC)

Programming

I think it would be a good idea to stick to one programming language throughout the article, or even just a pseudo-language. Furthermore, I don't think most people can benefit from JavaScript examples.

R00723r0 09:03, 15 July 2007 (UTC)

I agree, please just use pseudo-code. Binary exponentiation is simple enough that we hardly need to get into the syntax of any particular programming language du jour. 24.61.43.18 15:36, 14 August 2007 (UTC)
Ironically, your comment did benefit me: since you mentioned there was a JavaScript implementation out there, it prompted me to search for that implementation, which helped me understand. 190.60.93.218 (talk) 17:32, 9 September 2013 (UTC)

Simpler

Why unroll the recursion manually? This is simpler:


\mbox{Power}(x,\,n)=
  \begin{cases} 1, & \mbox{if }n\mbox{ = 0} \\ 
                x\times\mbox{Power}(x,\,n-1), & \mbox{if }n\mbox{ is odd} \\
                \mbox{Power}(x,\,n/2)^2, & \mbox{if }n\mbox{ is even}
\end{cases}

For "n=0" and "n is odd", this is standard recursive exponentiation. Just "n is even" case is optimized, according to:


x^n=x^{n/2} \times x^{n/2}

exe (talk) 18:08, 22 May 2008 (UTC)

I believe the reason is that the former emphasizes that you get precisely lg n calls to the function, which facilitates the analysis. Fizban99 (talk) 10:48, 23 January 2010 (UTC)
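
For reference, the simpler recursion might be rendered in Python roughly as follows (an illustrative sketch only); note that an odd exponent makes two recursive calls (the n-1 step, then the halving step) where the combined rule makes one:

def power(x, n):
    """Power(x, n) via the 'simpler' recursion: peel off one factor when n
    is odd, square the half-power when n is even."""
    if n == 0:
        return 1
    if n % 2 == 1:            # odd: standard recursive step
        return x * power(x, n - 1)
    half = power(x, n // 2)   # even: the optimized squaring step
    return half * half

assert power(2, 25) == 2 ** 25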

Underlying idea: I'd like to add one logical step

This is the current version: 
x^n=     
    \begin{cases}
                1,                                          & \mbox{if }  n = 0   \\ 
                1/x^{-n},                                        & \mbox{if }  n < 0   \\ 
                x \cdot \left( x^{\frac{n-1}{2}} \right)^2, & \mbox{if } n \mbox { is odd} \\
               \left(  x^{\frac{n}{2}} \right)^2,           & \mbox{if }n\mbox{ is even}
     \end{cases}

But I feel that the third equation could be modified to say



x^n=     
    \begin{cases}
                1,                                          & \mbox{if }  n = 0   \\ 
                1/x^{-n},                                        & \mbox{if }  n < 0   \\ 
                x \cdot x^{n-1}, & \mbox{if } n \mbox { is odd, i.e. } n - 1 \mbox{ is even} \\
               \left(  x^{\frac{n}{2}} \right)^2,           & \mbox{if }n\mbox{ is even}
     \end{cases}

That would make this equation a lot clearer, and the next step then follows from the fourth equation, instead of having both in one step. It should be noted that the algorithm makes both steps at once for increased performance, though. —Preceding unsigned comment added by 134.96.57.229 (talk) 18:03, 6 February 2011 (UTC)
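
A concrete instance of the two-step reading, taking n = 13:

x^{13} = x \cdot x^{12} = x \cdot \left( x^{6} \right)^2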

I didn't really see the post above, so sorry for that. I'm still all for the change though, especially now that we're talking about the "underlying idea" instead of the actual algorithm used later on. Also, to get logarithmic time you don't need to divide the argument by 2 on every iteration; you merely have to do so every O(1) steps. --134.96.57.229 (talk) 18:09, 6 February 2011 (UTC)

Underlying idea: Negative Exponent

Seeing how there is an edit war between those who believe that


x^n = 1/{x^n}

And those who know that


x^n = 1/{x^{-n}}

Just look at this:

x^n = 1/x^{-n}

Multiply both sides by  x^{-n} and you get

x^n \cdot x^{-n} = 1

A product of powers with the same base allows the exponents to be added, such that

x^{n + \left(-n\right)}  =  1

Performing the addition yields

x^0 = 1

which, since x^0 evaluates to 1, reduces to 1 = 1, true by reflexivity.

The other identity only holds in special cases (for example n=0 or x=1).

--134.96.57.229 (talk) 18:43, 6 February 2011 (UTC)
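
As a quick numeric sanity check, take x = 2 and n = -3:

2^{-3} = 1/2^{-(-3)} = 1/2^{3} = 1/8, \mbox{ whereas } 1/2^{-3} = 2^{3} = 8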