Talk:Karatsuba algorithm

From Wikipedia, the free encyclopedia

Divide and conquer (February 2011)

The Karatsuba algorithm is the first fast computational algorithm; Merge sort, from 1945, isn't!!! The note below is written by a person who is not a specialist in this field. 91.79.192.32 (talk) 12:59, 1 February 2011 (UTC)[reply]

Says who? Please cite a reliable published source. -- X7q (talk) 13:36, 1 February 2011 (UTC)[reply]

In his book (D. E. Knuth, The Art of Computer Programming, v. 2, Addison-Wesley Publ. Co., 724 pp., Reading (1969)) Donald Knuth wrote (when he described the A. A. Karatsuba method) that "this idea was not known before", and that even people who multiplied numbers very fast in their heads, and all other sources, didn't know this idea. But here you try to ascribe the idea to von Neumann!! It's an absolutely wrong statement; von Neumann didn't know the Karatsuba method and the Karatsuba idea! 91.79.192.32 (talk) 13:56, 1 February 2011 (UTC)[reply]

He wrote that with regard to multiplication algorithms. I don't see how "the first fast computational algorithm" (not just for multiplication) follows from that. -- X7q (talk) 14:40, 1 February 2011 (UTC)[reply]

By the way, it's impossible to write that the "Karatsuba algorithm is a notable example of 'Divide and Conquer' or 'Binary Splitting'", since it was precisely Karatsuba who invented this idea; the names "Divide and Conquer" and "Binary Splitting" were coined much later. Somebody called the Karatsuba idea "Divide and Conquer", so Karatsuba didn't use somebody else's idea; somebody else used the Karatsuba idea. Do you see the difference? To write "the Karatsuba algorithm is a notable example of 'Divide and Conquer'" means to write that Karatsuba used the method "Divide and Conquer" to create a fast multiplication, but Karatsuba had just invented this method (in computational mathematics). 91.79.192.32 (talk) 14:05, 1 February 2011 (UTC)[reply]

The comments above by 91.79.192.32 are utter nonsense and reflect a near complete lack of cognitive competence. Relative timing of the development of concepts and algorithms has no bearing on whether the algorithms are examples of the concepts. -- Jibal (talk) 22:26, 17 January 2020 (UTC)[reply]

Divide and conquer

The Karatsuba multiplication algorithm, ... in 1960 ... The Karatsuba algorithm can be considered to be the earliest example of a binary splitting algorithm, or more generally, a divide and conquer algorithm.

I think this is wrong. According to Merge_sort, von Neumann invented Merge Sort in 1945. —Preceding unsigned comment added by 84.74.154.5 (talk) 14:11, 23 July 2008 (UTC)[reply]

I think it is absurd to compare von Neumann's Merge Sort and the Karatsuba fast multiplication algorithm. Only after the Karatsuba algorithm did the history of fast algorithms begin. Such fast processes as the AGM method of Gauss, Newton's method, etc. become FAST only with the application of fast multiplication. Von Neumann's or anybody else's results cannot help here. That is why the Karatsuba algorithm can be considered the first FAST algorithm in the history of computation. —Preceding unsigned comment added by 83.149.209.253 (talk) 14:37, 31 March 2010 (UTC)[reply]

I read the Divide and Conquer topic just now; what dreadful content! I wrote the remarks which I will also add below: "The problem is that here, in the "Divide and Conquer" topic, different algorithms and approaches were mixed, which is absolutely incorrect and bad for readers who are not specialists in the subjects considered.

What relation does "Merge Sort" have to fast computational algorithms? Can you increase the efficiency of the calculation of, say, the sine of x by applying the von Neumann idea? No, you cannot. So there is a great difference between the Karatsuba idea and the von Neumann idea. Using Karatsuba's, you obtain a tool for calculating a great number of functions, integrals and series much more effectively. Using von Neumann's, you obtain almost nothing. How is it possible not only to compare these two approaches, but even to "put them in one box"?

Karatsuba didn't use the "Divide and Conquer" paradigm to invent his method; he simply invented a general method that permits increasing the efficiency of the computer's work. The name "Divide and Conquer" was introduced after the Karatsuba invention, not before. To equate results as different in importance as the Karatsuba method and von Neumann's sorting means to diminish the merit of the Karatsuba result.

Knuth in his "Art of Computer Programming" writes that the Karatsuba idea was not known until 1960 (after 4000 years of calculations and multiplications). It seems Donald Knuth does not agree with the authors of the D&C paradigm who believe that the Karatsuba idea is the same as the von Neumann idea." —Preceding unsigned comment added by 83.149.209.253 (talk) 15:13, 31 March 2010 (UTC)[reply]

"To equal such different by importance results as the Karatsuba method and von Neumann Sorting means to diminish the advantage of the Karatsuba result." — This sentence shows a lot of ignorance. Claiming that a fast sort algorithm is not important is uninformed at best. In fact, most applications depend on sorting rather than on multiplication of long numbers. — Adrianwn (talk) 08:19, 1 April 2010 (UTC)[reply]

The comments above by 83.149.209.253 are utter nonsense and reflect a near complete lack of cognitive competence. -- Jibal (talk) 22:33, 17 January 2020 (UTC)[reply]

Rule of thumb

None of the references really talked about any "rule of thumb", so I moved those references. I doubt this rule of thumb is anywhere close to the truth; I think there was a mistake. I think in this case n refers to the operands themselves and not to the number of digits (because if Karatsuba's algorithm only became faster after such an insane number of digits, it wouldn't have any practical use). Perhaps this claim should be removed, or at least fixed to remove the ambiguity, not to mention given an actual reference. 200.87.23.227 (talk) 17:35, 8 January 2009 (UTC)[reply]

In fact the references did. GMP uses 32-bit limbs on 32-bit processors, and 64-bit limbs on 64-bit processors. Thus the 10-limb crossover point mentioned in the GMP reference suggests 320 to 640 bit numbers, that is, numbers above 2^320 or 2^640, respectively. The other reference suggests an 800 to 2000-bit x86 crossover point (for numbers above 2^800 or 2^2000).
CRGreathouse (t | c) 18:43, 8 January 2009 (UTC)[reply]

Well, either way the ambiguity about "n" remains; I will change n to "the operands". —Preceding unsigned comment added by 200.87.23.104 (talk) 17:52, 13 January 2009 (UTC)[reply]

Merge from Karatsuba multiplication

Consider merging material from The Karatsuba multiplication, which has some descriptions of variants and some more sources. JackSchmidt (talk) 12:57, 26 February 2009 (UTC)[reply]

Calculations in Example

With n=2^10=1024, the formula 3n^(log_2(3)) approx 3n^1.585 evaluates to 177193, not 59049 as claimed in the heading paragraph. Now, I do not know what's wrong, the formula or just the result. Someone might want to look into this issue. Gulliveig (talk) 05:46, 20 July 2009 (UTC)[reply]

Good catch. Looks like the original writer forgot the 3 in front; with it, the formula evaluates to exactly 3 × 3^10 = 177147, in fact. Shreevatsa (talk) 06:31, 20 July 2009 (UTC)[reply]
Sorry, but the original number 59049 was correct. Note that the number of single-digit multiplies is at most 3n^(log_2(3)) for general n, but is less than that for many values of n. In particular, when n is exactly a power of two, n = 2^k, the count is exactly n^(log_2(3)) = 3^k, not 3 × 3^k. All the best, --Jorge Stolfi (talk) 22:12, 24 July 2009 (UTC)[reply]
Ok, I edited the text to mention this explicitly, to avoid future confusion. Thanks, Shreevatsa (talk) 22:24, 24 July 2009 (UTC)[reply]

Actually, I was confused as well. Should we move an excerpt from this talk page into the article itself? --178.120.229.169 (talk) 11:49, 22 November 2010 (UTC)[reply]
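The two numbers in the thread can be checked with a minimal recurrence sketch (Python purely for illustration; `karatsuba_mults` is a made-up name, and the power-of-two restriction matches Stolfi's remark):

```python
import math

def karatsuba_mults(n):
    # Single-digit multiplications Karatsuba uses on n-digit operands
    # when n is a power of two: T(1) = 1, T(n) = 3 * T(n/2).
    if n == 1:
        return 1
    return 3 * karatsuba_mults(n // 2)

print(karatsuba_mults(1024))            # 59049, i.e. 3^10, with no leading factor of 3
print(round(3 * 1024 ** math.log2(3)))  # 177147, the general bound 3*n^(log2 3)
```

So both figures discussed above are right in their own terms: 59049 is the exact count at n = 2^10, while 177147 is the value of the bound with the leading 3 included.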

Using B = 10^9

Someone objected to the suggestion of using B = 10^9 because "multiplication by 10^9 is not realizable by bit-shifts, so choosing B this way doesn't make sense". Actually multiplication by 10^9 corresponds to shifting the array of "digits" (each stored in one 32-bit word) by one full word.
This choice of basis for bignum arithmetic may be advantageous when one expects a lot of input/output in decimal. Other similar choices are B = 10000 (each "digit" stored in a 16-bit short word) and B = 100 (each "digit" stored in one byte). The last two choices allow decimal output and input by table look-up, without divisions or multiplications.
All the best, --Jorge Stolfi (talk) 03:02, 12 December 2009 (UTC)[reply]

Sounds interesting, you should add that to the article. Adrianwn (talk) 08:15, 12 December 2009 (UTC)[reply]
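As an illustrative sketch of the representation Stolfi describes (made-up helper names, not GMP code): base B = 10^9, one "digit" (limb) per 32-bit word, where decimal output needs no bignum division:

```python
# Sketch of a base-10**9 bignum representation: one "digit" (limb) per
# 32-bit word, so shifting by one limb position is a multiplication by 10**9.
B = 10**9

def to_limbs(x):
    # Split a non-negative integer into base-10**9 limbs, least significant first.
    limbs = []
    while x:
        x, r = divmod(x, B)
        limbs.append(r)
    return limbs or [0]

def to_decimal(limbs):
    # Decimal output needs no bignum division: every limb below the most
    # significant one prints as exactly 9 decimal digits.
    out = str(limbs[-1])
    for limb in reversed(limbs[:-1]):
        out += f"{limb:09d}"
    return out

n = 12345678901234567890
assert to_decimal(to_limbs(n)) == str(n)
```

Choosing B = 2^32 instead would make binary shifts cheap but decimal input/output expensive, which is exactly the trade-off discussed above.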


I don't understand the point: here we are talking not about implementations but about fast algorithms. Fast algorithms, and this first fast algorithm for multiplication, were created to be useful not only for the computers of today but for any computer in the future, without essential changes to the algorithm. Today "multiplication by 10^N is not realizable by bit-shifts" and tomorrow "multiplication by 10^100N is not realizable by bit-shifts", but fast algorithms cannot lose or gain any advantage from this: that is the work of implementors, not algorithmists. —Preceding unsigned comment added by 195.29.122.4 (talk) 20:53, 16 June 2010 (UTC)[reply]

And what exactly do you want to tell us by that? Furthermore, could you please explain this edit? I reverted a previous, similar edit of yours, because it was unclear and seemed redundant. – Adrianwn (talk) 05:37, 17 June 2010 (UTC)[reply]

Original research by 93.118.212.93 (Florin Matei) I (collapsed)

Subdivision 1:4 that is not Toom-Cook
Further subdivision 1 to 4

If the operands have lengths in ratio 1:4, they might be a particular form of the 4:4 case, and then we might find better orders for the Karatsuba algorithm (1960). — Preceding unsigned comment added by 93.118.212.93 (talk) 20:54, 25 May 2012 (UTC)[reply]

Yes, that is true (if I understand your idea correctly) and is called Toom-Cook multiplication.--LutzL (talk) 12:45, 7 December 2012 (UTC)[reply]

I guess this idea stands, even if for purely creative reasons only; thank you for letting it stay posted. — Preceding unsigned comment added by 93.118.212.93 (talk) 15:11, 2 February 2013 (UTC)[reply]

Well, it's a bit problematic, but still, I think there is a chance to look for optimization by... allocating the preferable conditions in two directions: optimizing the plan at the level of the divide-and-conquer plan/plans, and allocating the preferable conditions so as to make it work on more than one level, not only the one that takes all the advantage of the 1:4 condition. I think there is a chance this is a problem oriented toward programmer-analysts :-) 93.118.212.93 (talk) 12:52, 13 February 2013 (UTC)[reply]

Possible linear long-integer multiplication algorithm

I think it could be possible, keeping an O(1)·n loop, to count in O(1) the sum of bit products for each result bit of rank h, 0 < h < 2*n,

which might produce a linear multiplication algorithm, in the accepted sense of informatics complexity :) — Preceding unsigned comment added by 93.118.212.93 (talk) 11:57, 22 July 2012 (UTC)[reply]

Please try again in plain English with more details. Did you intend to describe something like Ancient Egyptian multiplication?--LutzL (talk) 12:45, 7 December 2012 (UTC)[reply]

Well, it's not the Egyptian algorithm; maybe bit-oriented processing and dynamic programming might achieve O(N) or similar processing time... who knows, because I'm such a poor coder when it comes to writing programs :( — Preceding unsigned comment added by 93.118.212.93 (talk) 15:15, 2 February 2013 (UTC)[reply]

n*log(n) multiplications or even better

Wow, I'm sorry. Let's say we want to multiply a1 and b1; to avoid confusion, a1*b1 = m*m - n*n; m*m = (m1;m2)*(m1;m2), with m1 the most significant part, m1 = 3*a + b, m2 = 3*b + a, computing m1*m1, m2*m2 and m1*m2 from "autosolve". Good luck, and don't forget the credits, OK?? Florin Matei, Romania 93.118.212.93 (talk) 19:16, 6 December 2012 (UTC)[reply]

What exactly are you trying to say? What is "autosolve"? The idea to reduce multiplication to two squarings is old. How is 3a+b the most significant part of m? What is m? Usually one would take m=(a+b)/2 and n=(a-b)/2. m=3a+b would require n=3a-b to cancel the squares, then m*m-n*n=12*a*b. How do you propose to achieve the squarings? Using Karatsuba? Do you resolve the mixed scale multiplication again to a difference of two squares? What is the complexity relation, that is, how is T(2n) expressed in terms of n and T(n)? n*log(n)-Multiplication is Schönhage-Strassen, an algebraic variant of the FFT. Did you check that?--LutzL (talk) 12:35, 7 December 2012 (UTC)[reply]
Please study Daniel J. Bernstein: Multidigit multiplication for mathematicians. for an overview of fast multiplication algorithms and ways to communicate them. If the formalism is right, the language doesn't matter that much.--LutzL (talk) 20:16, 19 December 2012 (UTC)[reply]

"Autosolve" is a non-expansive way to compute m1*m2, using the notations m1 = 3a+b and m2 = 3b+a, with m1, m2 the two parts of splitting m (whose m? Don't make me laugh; you are the one saying the idea is old), and the computed values m1*m1 and m2*m2, which works recursively and doesn't need extra ideas... Once we have m1*m1, m2*m2 and m1*m2 we have completed the current recursion level by computing m*m... The idea is to use a suitable sum m1*m1 + r*m2*m2, with the notation suggested, in order to find m1*m2 (the algorithm's main idea). I know it's a bit crazy, but it might even work, and if so then the order is n*log(n)... And don't bother me with master-method formulas; I practically reinvented it once, and a way to verify it too :P 93.118.212.93 (talk) 14:47, 2 February 2013 (UTC)[reply]

Please publish a paper, preferably refereed, where you give a detailed description of what happens when in your algorithm, some example and also a detailed account of your reinvention of the "master theorem". But even then this discussion page is not the right place to talk about this fundamentally different "idea".--LutzL (talk) 20:32, 5 February 2013 (UTC)[reply]

I don't know about the right place for this; apparently this is all I can afford, not to mention the brain treat... they are doing this to me for their fun, I guess. 93.118.212.93 (talk) 09:29, 9 February 2013 (UTC)[reply]
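For readers following this thread, the classical squaring reduction LutzL refers to can be verified directly (a quick Python check, not part of anyone's proposal):

```python
# Checking the two identities in the thread.
a, b = 1234, 5678

# Classical reduction of a product to two squares, scaled by 4 to stay in
# integers: (a+b)^2 - (a-b)^2 = 4*a*b, i.e. m = (a+b)/2, n = (a-b)/2.
assert (a + b) ** 2 - (a - b) ** 2 == 4 * a * b

# LutzL's observation: with m = 3a+b one needs n = 3a-b to cancel the
# squares, and then m^2 - n^2 yields 12*a*b, not a*b.
m, n = 3 * a + b, 3 * a - b
assert m * m - n * n == 12 * a * b
```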

Karatsuba's idea and estimating the MSb mantissa for factorials

I think that, due to the arithmetic progression of the operands, and using binary encodings for numbers, we could use ideas similar to Karatsuba's to multiply a great many numbers, such as factorials, in polynomial time, but only for the significant mantissa (and exponent); the details are left to you :) — Preceding unsigned comment added by 93.118.212.93 (talk) 11:53, 2 February 2013 (UTC)[reply]

Estimating the modulus of such a product could be possible, anyway, for primality tests, but I don't think I am able to do that one, not in polynomial time anyway. 93.118.212.93 (talk) 12:00, 2 February 2013 (UTC)[reply]


Possible linear algorithm, simple and efficient just like the Karatsuba idea

OK, once and for all, for everybody who wants to challenge this idea: my challenge to you is, using bit arithmetic, to easily compute, for example, (A+4*B)*(C+4*D) in O(N) time when we already have (A+2*B)*(C+2*D), where the letters are very long integers, let's say. I think there is a chance for this to work, so give it a try, why not? — Preceding unsigned comment added by 93.118.212.93 (talk) 10:21, 10 February 2013 (UTC)[reply]

(Yeah, I think there is a chance for this to work, around a computed kernel; the question is whether it might work in recurrence so as to correctly compute the desired two similar values.) Let's try to multiply m = AB by n = CD, with A the most significant part of m, etc. We could do that by first computing (A+2*B)*(C+2*D), (A+4*B)*(C+4*D) and (A+8*B)*(C+8*D), then solving the system. Once we have the first of these three results, there might be an easy way to compute the other two in O(N) time, which is the algorithm's main idea. Honestly, I don't know if it works, but if it does, I wish for credits :)) 93.118.212.93 (talk) 07:53, 9 February 2013 (UTC)[reply]

Anyway, this idea was meant for binary computations, and if it's not working we might use partial operand mixing, such as forming from two operands of length N/2 a result of length 3*N/4, and keep trying to gain "autosolve" results in order to obtain, for example, master-method formulas like T(N) = 7*T(N/4), which is not linear, I agree :) 93.118.212.93 (talk) 09:19, 9 February 2013 (UTC)[reply]

This is the general Toom-Cook idea. Please read and understand the paper by Bernstein, as I told You before.--LutzL (talk) 19:09, 9 February 2013 (UTC)[reply]

Well, are you sure that Toom-Cook is linear for the splitting variant k=2? If I remember well, because of those coefficients from the system, Toom-Cook is not of O(N) complexity in any case. My ideas are meant as an IQ test also. 93.118.212.93 (talk) 10:07, 10 February 2013 (UTC)[reply]

How has it been computed?

This article axiomatically states that:

z0 = 345 × 789 = 272,205

However, it doesn't explain the algorithm for computing that. Is it time to apply long multiplication, or does one just apply the Karatsuba algorithm again to that expression? 31.42.239.14 (talk) 21:54, 25 February 2013 (UTC)[reply]

Original research by 93.118.212.93 (Florin Matei) II (collapsed)

Subdivision 1:4 that is not Toom-Cook
OK, about a somewhat different Karatsuba idea, possibly open to generalization

Multiplying A = (a1;a2) by B = (b1;b2) might be planned like this: (a1+b1)*(a2+b2), (a1-b1)*(a2-b2), and a1*b1... I think this could challenge the Toom-Cook algorithm, by some Karatsuba generalization that uses mostly algebraic sums and possibly finds simpler systems of equations... The algebraic sums to be multiplied, which offer decent algebraic sums of the desired coefficients, might even be searched for automatically, by the computer... Good luck! (I posted similar attempts on the Toom-Cook talk page.) 93.118.212.93 (talk) 10:51, 27 March 2013 (UTC)[reply]

... I've tested the idea above, but there are just a few matches for it, found with the computer, and I am not able to build a desirable system of linear equations; but here I am trying another generalization idea, also inspired by the Toom-Cook algorithm:

Let's consider that we actually want to multiply two polynomials A(x) = (h1(x);l1(x)) and B(x) = (h2(x);l2(x)), and we'd want the value of the product for some x0 = 2^k... We plan to do three small multiplications, h1(1)*h2(1), l1(1)*l2(1) and (h1+l1)(1)*(h2+l2)(1), but after obtaining the values (using the classic Karatsuba algorithm) we multiply two of the resulting polynomials appropriately by 2^t, an action that, sooner or later, is expected to offer the desired result of the multiplication of the two polynomials applied to the value x0 = 2^k, meaning the multiplication is over... The time looks pretty appealing, IF it's working :) 93.118.212.93 (talk) 07:06, 28 March 2013 (UTC)[reply]

A possible generalization aided by complex-number possibilities?

If we have to multiply two polynomials, but are in fact looking for the numeric value of the polynomial product at x0 = 2^k, we might use as the xi (the values that create the system of equations) the complex j-th roots of unity, j somewhere in 1...n, with n the number of plan partitions: it could help to get a decent system of linear equations that solves faster... :) 93.118.212.93 (talk) 08:21, 28 March 2013 (UTC)[reply]

Please locate a proper math forum to discuss your reinventions of the wheel. For instance mathoverflow.net, mathforum.org or the mathematical news groups. Now You are proposing the widely explored application of numerical FFT to polynomial and integer multiplication.--LutzL (talk) 12:54, 28 March 2013 (UTC)[reply]

OK, touché, and I think you are not so well-intentioned regarding my work; somehow I think I know people like you here in Romania. Please watch your style when talking to me, OK? Just friendly advice, and I mean it!!!! 93.118.212.93 (talk) 13:12, 7 April 2013 (UTC)[reply]

Now be a sport and tell the people which wheel this is:

Hi, I think that this old O(1)-memory floodfill algorithm, which trades time for memory, might be helped with its time consumption by backtracking concepts such as smart tests and backjumping, so that where it is not necessary to walk the same line, because we already know from past walks that there is no reason to, we can simply add a marker that helps compare the next line of the forward backtracking pass with the experience from the last pass... We might even reach O(N) time, if we are a bit lucky. 93.118.212.93 (talk) 11:43, 19 January 2013 (UTC)


Hi, I think algorithms could be written based on a grid concept for the stored values, plus a small square for data, that possibly work in O(N) time and O(N^(2/3)) memory, for example, or even less: O(N^(1/2)), O(log(N)), or even O(1) memory for O(N) time; so I don't really think there is a need to trade time for memory. For example, the O(1)-memory algorithm first seeks the target without filling, then does intelligent border keeping, also testing the condition of connectivity (a local test) for the remaining zone... Only if there is no bordering and a split is needed does it proceed to split... The O(N^(1/2)) and O(N^(2/3)) variants are even simpler; they use O(N) time. In these cautious terms, N is the maximum number of pixels of a possible image to be filled... — Preceding unsigned comment added by 93.118.212.93 (talk) 10:29, 18 January 2013 (UTC)


A little-big difference in the Karatsuba algorithm idea

This idea could make the difference between O(N^1.59) and O(N*log(N)) by simply redesigning the plan of recursive calls of this good idea: basically, instead of planning 3 multiplications of T(n/2), we plan (and this refers to the multiplications of sums) the T(n/2) sum and then the two remaining T(n/4) sums. I agree there are some terms left for "autosolve", but this doesn't yet burden the data stack (a hypothetical data stack of some implementation of the algorithm), and those remaining unconsidered ("yet") terms are expected to become smaller and smaller. Practically, if the stack grows by N/2 + N/4 + N/4 = N, then this small idea could lead us to an O(n*log(n)) time-complexity algorithm. I agree my "demonstration" here is far from complete; take it as a Russian-style problem, if you wish :) Florin Matei, Romania 93.118.212.93 (talk) 13:12, 14 April 2013 (UTC)[reply]

No, Russian style is to leave the solution of a problem to the reader. What you want is to leave the formulation of a problem to the reader. This is crank style. Come back when you have an implementation with demonstrable runtimes.--LutzL (talk) 12:13, 15 April 2013 (UTC)[reply]

Well, another purely creative idea: observing first that if we manage to write the master-method equivalent for finding the O() of the Karatsuba algorithm idea in such a way that it becomes possible to write something like T(N) = 3*(1-p)*T(N/2), with p a small fraction of 1 meaning the part of the problem solved first from each of the 3 planned multiplications, we might find a better O() for the Karatsuba algorithm idea... Basically, instead of planning a multiplication of T(N/2), we solve a small fraction of it and plan only the remaining (1-p)*T(N/2), claiming we can already talk about a better O()... As a rule of thumb, the little multiplication and the remaining multiplication should be equalized in time spent in order to find the most representative O(). 93.118.212.93 (talk) 05:15, 20 April 2013 (UTC)[reply]

inspired by Karatsuba's idea

If we have to multiply two long integers of N bits each, first we might multiply the first 3*N/4 bits by the first 3*N/4 bits, obtaining a result recursively; then we might do the same for the last 3*N/4 bits multiplied by the other operand's 3*N/4 bits. We may remark that we already have the (3/8)*N (first) + (3/8)*N (last) required bits of the result; now we might focus on the middle of the operands (pretending that at the middle of the operands there is some helpful hypothetical floating point)... At a summary view, in order to get a clue about the master-method equation, I got 8^alpha = 2*6^alpha ... 93.118.212.93 (talk) 13:42, 26 May 2013 (UTC) Florin Matei, Romania[reply]

"Invented by Karatsuba"/"Kolmogorov was very agitated"

The lead of our article claims that this algorithm was "invented by Karatsuba". Given that the original publication was joint with Ofman, is there reliable sourcing (independent of Karatsuba himself) that this algorithm was invented solely by Karatsuba? If so, what was Ofman's contribution? —David Eppstein (talk) 02:24, 5 August 2013 (UTC)[reply]

Hi, I believe the history section does a good job on that: invented by Karatsuba during a seminar of Kolmogorov, published two years later by Kolmogorov and Ofman (another student in the seminar, I read somewhere?) under the names of Karatsuba and Ofman. I do not believe that minutes of the seminar exist, so the amount and influence of brainstorming is not reconstructable.--LutzL (talk) 14:46, 5 August 2013 (UTC)[reply]

The history section is problematic, even if we ignore the plausible claims for Karatsuba, because that section also says "Kolmogorov was very agitated about the discovery; he communicated it at the next meeting of the seminar, which was then terminated." That is an uncharitable depiction that doesn't seem to be recorded in any of the sources. This sentence should either be sourced or removed.

Algorithms are generally invented, not discovered

The opening paragraphs states that Karatsuba discovered this algorithm. Although the principles or properties underlying the algorithm may indeed have been discovered, the algorithm itself would have been invented, unless it was somehow preexistent. I posted this instead of modifying the article directly in case somebody has a justification for the current wording. — Preceding unsigned comment added by 38.104.57.138 (talk) 16:01, 14 April 2017 (UTC)[reply]

Nonsense. This O(n^1.58) algorithm has always existed; therefore it was discovered. One can generally interchange "discovered" and "invented" for algorithms, but the simpler and more fundamental the algorithm, the more "discovered" is appropriate, and the more ad hoc the algorithm, the more "invented" is appropriate. -- Jibal (talk) 22:40, 17 January 2020 (UTC)[reply]

Karatsuba method is not (comparatively) as fast as the article said

The description in the article indicates:

"For example, to multiply two 1024-digit numbers (n = 1024 = 2^10), the traditional algorithm requires (2^10)^2 = 1,048,576 single-digit multiplications, whereas the Karatsuba algorithm requires 3^10 = 59,049 thus being ~17.758 times faster"

What the article doesn't say is that the 59,049 Karatsuba multiplications are NOT single-digit multiplications (unlike the traditional algorithm's), so it is comparing apples to pears.

In order to correctly do comparisons, we need to set a standard. If the standard is single-digit multiplications, then the results in the article change.

In the example of the product of 12345 and 6789, the traditional algorithm requires 5 digits × 4 digits = 20 single-digit multiplications. The description indicates:

  Only three multiplications, which operate on smaller integers, are used to compute three partial results:
  
  z2 = 12 × 6 = 72
  z0 = 345 × 789 = 272205
  z1 = (12 + 345) × (6 + 789) − z2 − z0 = 357 × 795 − 72 − 272205 = 283815 − 72 − 272205 = 11538

However, 12 × 6 is NOT one multiplication but 2 single-digit multiplications, and 345 × 789 requires 9 single-digit multiplications, and (12 + 345) × (6 + 789) also requires 9 single-digit multiplications, giving a total of 20 multiplications! (not 3).

In this way, Karatsuba's method is not as fast as the article said...

Where is the error in my reasoning, if any? 2806:2F0:93C0:FD4B:E92A:EC7:7FF6:5CF0 (talk) 01:46, 14 June 2023 (UTC)[reply]

This is a divide-and-conquer algorithm, so the multiplications in z0 and z1 wouldn't be carried out by the conventional nxn method, but rather by calling Karatsuba recursively for each of z0 and z1. I find the article's example poorly chosen and awkward, so I'm not guaranteeing there's nothing wrong with it; but what you're talking about isn't it. EEng 06:22, 14 June 2023 (UTC)[reply]
I am not talking about, nor referring to, advanced details of the method, but about one single, specific point: the comparison of the number of operations in the traditional algorithm and in the Karatsuba algorithm, which the article said is ~17.758 times faster in the first example...
The later example of the Karatsuba method (the one that specifies "Only three multiplications, which operate on smaller integers (NOT single-digit integers), are used to compute three partial results") uses multiplications over three-digit numbers. For this reason I concluded that the first example, the one that requires 1024*1024 = 1,048,576 single-digit multiplications, would require 1024/3 = 342*342 three-digit multiplications; that is, 116,964 multiplications.
In this way, if the traditional algorithm requires 116,964 (three-digit) multiplications and the Karatsuba algorithm requires 59,049 (three-digit) multiplications (as the second example clearly indicates), then Karatsuba is just 1.98 times faster (and not 17.758 times faster as the article said). 2806:2F0:93C0:FD4B:B85C:D1F7:77AB:F805 (talk) 04:58, 21 June 2023 (UTC)[reply]
I've removed these quantitative claims because (a) they're way overprecise; and (b) they appear to be WP:OR. EEng 07:21, 21 June 2023 (UTC)[reply]
To answer the OP's original question "Where is the error in my reasoning": the error is that the OP is confusing the number of single-digit multiplications at the bottom of the recursion (given for the example of two 1024-digit numbers) with the number of recursive multiplications at the top of the recursion (given for the example of 12345 and 6789 and always three). The top-level recursive multiplications are not single-digit. The multiplications at the bottom level of the recursion would be single-digit in whatever base you are using (which really should be much larger than base-10). —David Eppstein (talk) 07:34, 21 June 2023 (UTC)[reply]
As usual, the prof put it best. EEng 07:49, 21 June 2023 (UTC)[reply]
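David Eppstein's point can be made concrete with a small recursive sketch (Python for illustration only; base 10, power-of-two digit counts, and a counter that tallies only the leaves of the recursion):

```python
# Illustrative sketch: the recursion bottoms out at single-digit operands,
# and only those leaf multiplications are what the 59,049 counts for
# 1024-digit inputs; the three top-level products are themselves full
# recursive multiplications, not single-digit ones.
def karatsuba(x, y, n, counter):
    if n == 1:
        counter[0] += 1   # a leaf ("single-digit") multiplication
        return x * y
    half = n // 2
    shift = 10 ** half
    a, b = divmod(x, shift)   # high and low halves of x
    c, d = divmod(y, shift)   # high and low halves of y
    z2 = karatsuba(a, c, half, counter)
    z0 = karatsuba(b, d, half, counter)
    # The sums a+b, c+d may carry into an extra digit, so some leaves
    # multiply numbers larger than 9; the count is what matters here.
    z1 = karatsuba(a + b, c + d, half, counter) - z2 - z0
    return z2 * shift * shift + z1 * shift + z0

counter = [0]
assert karatsuba(3141, 2718, 4, counter) == 3141 * 2718
print(counter[0])   # 9 = 3^2 leaves for 4-digit operands
```

For 1024-digit operands the same counter reaches 3^10 = 59,049, which is where the article's figure comes from.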

Was Babbage faster?

"four multiplications...were known to Charles Babbage," isn't that faster than the "quadratic 'grade school' algorithm" (wherein each digit of a multiplicand is multiplied by each digit of a multiplier and shifted results are added)? If so, that'd mean the Karatsuba algorithm wasn't the first faster multiplication algorithm. 213.41.102.186 (talk) 10:17, 15 December 2023 (UTC)[reply]

The great observation of Karatsuba is that 3 multiplications are sufficient where Babbage needed 4 multiplications, and that this leads to an algorithm that is faster (for large factors) than all previously known algorithms. D.Lazard (talk) 11:03, 15 December 2023 (UTC)[reply]
Indeed. But, the four multiplications known to Babbage already beats "grade school," right? 213.41.102.186 (talk) 15:04, 15 December 2023 (UTC)[reply]
Why do you think that? BTW, the 4 multiplications are the same as the "quadratic 'grade school' algorithm" with 2-digit numbers in a high radix. — Vincent Lefèvre (talk) 15:35, 15 December 2023 (UTC)[reply]
Oh, doesn't beat, understood. 213.41.102.186 (talk) 16:50, 15 December 2023 (UTC)[reply]
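The distinction drawn above can be spelled out on the article's own example (a quick Python check; the split at 10^3 follows the article):

```python
# The article's example 12345 x 6789, split at 10**3 into (a;b) and (c;d).
a, b = divmod(12345, 10**3)   # 12, 345
c, d = divmod(6789, 10**3)    # 6, 789

# Babbage / schoolbook on two digits in radix 10**3: four products.
four = a*c * 10**6 + (a*d + b*c) * 10**3 + b*d

# Karatsuba: three products, the middle term recovered from the other two.
z2, z0 = a * c, b * d
z1 = (a + b) * (c + d) - z2 - z0    # 283815 - 72 - 272205 = 11538
three = z2 * 10**6 + z1 * 10**3 + z0

assert four == three == 12345 * 6789   # both equal 83810205
```

Both identities give the same product; the saving is only in the count of half-size products, 4 versus 3, which is what compounds under recursion into O(n^2) versus O(n^1.585).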