# Talk:Numerical analysis

WikiProject Computer science (Rated B-class, Top-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
WikiProject Mathematics (Rated B-class, Top-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Field: Applied mathematics
One of the 500 most frequently viewed mathematics articles.

## Operations Research

What is the relationship, if any, between operations research (or utility theory) and numerical analysis? — Preceding unsigned comment added by 107.2.167.71 (talk) 06:47, 5 July 2012 (UTC)

## Comment

"The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done."

It could be argued that computers evolved from the Hollerith machines, and thence from administrative purposes. Why does that sentence belong in an article about a mathematical research field? (Please move to the appropriate section in the talk page) 189.0.224.34 (talk) 20:10, 16 March 2009 (UTC)

Why are some of the free links without apostrophes also closed to editing?

Apparently because there are newlines in them in the source --drj

So now Catastrophic loss of significance is fixed. rchase

The following link seems to be broken.

## Scientific computing

As far as I know, numerical analysis is nowadays called scientific computing. It seems natural to me to use the modern term rather than the old one. Should this article be moved/redirected to scientific computing? --DanielJanzon 15:22, 3 Nov 2004 (UTC)

You don't have to do any "computing" to do numerical analysis. Gershgorin discs fit under numerical analysis, but not necessarily under scientific computing. Loisel 17:00, 3 Nov 2004 (UTC)
Exactly, the theoretical analysis side of it is separate, though closely related to the computing issue. - Taxman 18:17, Nov 3, 2004 (UTC)
Should scientific computing have its own article? At my (Swedish) university (Uppsala), all courses previously given in the field of numerical analysis are now regarded as courses in "science of computing", which I assumed to be synonymous with the English term scientific computing. Do you guys know what you are talking about, or do you just assume that numerical analysis is not fully contained in scientific computing because it doesn't sound that way from the term "scientific computing"? A search on Amazon seems (to me) to favor your view. Sorry for the rambling, --DanielJanzon 13:09, 4 Nov 2004 (UTC)

The words are interchangeable to a degree, but yes I do know what I'm talking about. I'm doing a postdoc at university of Geneva, more or less in scientific computing and numerical analysis. Loisel 14:25, 4 Nov 2004 (UTC)

I can confirm what Loisel says. Numerical analysis refers more to the theoretical side (for instance, the Gershgorin discs mentioned above) and scientific computing more to the practical side (for instance, MPI = Message Passing Interface), but there is a large overlap. Some people indeed use scientific computing in a broad sense to include all of numerical analysis, but other people use numerical analysis in a broad sense to include all of scientific computing. Ask some professors in Uppsala if you don't believe us. -- Jitse Niesen 15:44, 4 Nov 2004 (UTC)
Since "numerical methods" (NM) redirects to "numerical analysis" (NA), I think "scientific computing" (SC) should do so as well. To me, NM and SC seem very close to each other (closer than NA and SC). For instance, Hamming writes in "Numerical Methods for Scientists and Engineers", p. 4: "Numerical analysis seems to be the study in depth of a few, somewhat arbitrarily selected, topics and is carried out in a formal mathematical way devoid of relevance to the real world. Numerical methods, on the other hand, try to meet the need for methods to cope with the potentially infinite variety of problems that can arise in practice". A bit frenzied, and maybe not very up-to-date, but I still believe SC should redirect to NA if NM does. Am I right? --DanielJanzon 19:33, 5 Nov 2004 (UTC)
Yes, I think so too. -- Jitse Niesen 23:11, 5 Nov 2004 (UTC)

I thought numerical analysis was essentially aimed at approximating continuous mathematics. This is not obvious from the definition in the article. Someone competent in the matter might consider adding that to the article. --DanielJanzon 19:33, 5 Nov 2004 (UTC)

You're right. I think the article is generally not in a very good shape (I worked a bit on it myself, so hopefully nobody will take this personally), but I haven't found the energy to try and improve on it. Perhaps later. -- Jitse Niesen 23:11, 5 Nov 2004 (UTC)

## Reversion of recent anonymous contributions

The recently added text has quite a few flaws:

1. The first sentence is almost a tautology, saying "numerical analysis is about numerical solutions". As it stands, the wording "numerical solution" is not defined; that is only dealt with later.
2. The wording "it deals with" shows up all the time.
3. The introductory paragraph went into way too much detail, it is meant to be a short overview.
4. The part in the ==General introduction== goes off to talk about optimization and, again, goes into unnecessary detail for this stage.

As such, I found the contributions to be of overall poor style, without adding much of value, so I reverted the thing. Oleg Alexandrov 19:26, 31 July 2005 (UTC)

## Moving the software out of the page

I moved the list of software out of the page, to a separate article, List of numerical analysis software. This is for several reasons.

First, the list was outgrowing the page. I felt that the article was losing its main point as a science topic and becoming an application topic. Second, it hindered the freedom to add rarely used software, which would be non-notable in the main article. Third, this way we can add much more information to the list of software, and maybe maintain it as a table (showing which platform they run on, licence, whether they are free/open source, how large amounts of data they can handle, etc.). I hope no one will oppose this move. Greenleaf 07:58, 9 September 2005 (UTC)

I think that shortening the software section in this article and moving most of it to a new article was a good idea. Thanks. Oleg Alexandrov 15:18, 9 September 2005 (UTC)

## Inaccurate definition

"Numerical analysis is the study of algorithms for the problems of continuous mathematics (as distinguished from discrete mathematics) that do not have an explicit, direct solution." Gaussian elimination is one of the most prominent algorithms in numerical analysis. It has nothing to do with continuous mathematics, and the solution is direct. That's just one example. I think the inaccuracy can be characterized this way: the definition includes only methods which suffer from truncation error, and neglects methods which suffer only from round-off error.

The definition is certainly not supposed to include only methods suffering from truncation error. Gaussian elimination solves Ax = b where x is a vector of reals. This is a problem in continuous mathematics (x is chosen from a continuum) and not in discrete mathematics. The phrase "that do not have an explicit, direct solution" serves to distinguish numerical analysis from symbolic mathematics. However, I agree that the word "direct" is ill-chosen since Gaussian elimination is called a "direct method" as opposed to an "iterative method" like Gauss-Seidel. Do you have a proposal for a better definition? -- Jitse Niesen (talk) 13:25, 29 November 2005 (UTC)
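The direct/iterative distinction drawn here can be made concrete with a small sketch (Python used purely for illustration; the 2-by-2 system and the function names are invented for the example):

```python
# Contrast a direct and an iterative method on the same tiny system
#   [4 1] [x]   [1]
#   [1 3] [y] = [2],
# whose exact solution is x = 1/11, y = 7/11.
# (Illustrative sketch only; the system is chosen arbitrarily.)

def solve_direct():
    # Gaussian elimination, written out by hand for the 2x2 case:
    # finitely many arithmetic operations, exact up to round-off.
    m = 1.0 / 4.0            # multiplier eliminating x from row 2
    a22 = 3.0 - m * 1.0      # updated (2,2) entry
    b2 = 2.0 - m * 1.0       # updated right-hand side
    y = b2 / a22
    x = (1.0 - y) / 4.0
    return x, y

def solve_gauss_seidel(sweeps):
    # Gauss-Seidel: sweep through the equations repeatedly, each time
    # solving one equation for one unknown using the latest values.
    x, y = 0.0, 0.0
    for _ in range(sweeps):
        x = (1.0 - y) / 4.0
        y = (2.0 - x) / 3.0
    return x, y
```

The direct solver finishes in a fixed handful of operations; the Gauss-Seidel iterates only approach the solution, though for this diagonally dominant matrix they contract quickly.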
Would it work to substitute closed-form solution? Otherwise how about just removing the direct. Explicit already gets across part of the idea. - Taxman Talk 13:45, 29 November 2005 (UTC)
I thought about replacing it by "closed-form", but I fear that that is also open to misinterpretation: for instance, computing the sine is also part of numerical analysis, I think, but expressions like sin(1) are generally considered to be closed form. I hadn't thought about just removing "direct". I like that, so I changed the definition, expecting that people who disagree will say so. -- Jitse Niesen (talk) 14:47, 29 November 2005 (UTC)
Yes, but in that sense the definition is wrong anyway, because numerical analysis includes evaluating polynomials which are clearly closed form. So perhaps it should say numerical analysis techniques are particularly valuable for evaluating problems where there are no closed form solutions, but the methods can also be useful in situations where there are. I couldn't think of a way to say that, so I'll let you have a stab at it. - Taxman Talk 14:58, 29 November 2005 (UTC)
How about getting really basic? Something about how we want analytic solutions. If we can't get them, then we go to approximate (with some control over error), and eventually to numerical (as a last resort). Additionally, aren't we really talking about solving potentially sophisticated problems using ONLY basic arithmetic (computing)? I'm just an undergrad, so I'm not qualified to contribute to the article yet, but this is how I see things. DB Nov 29 2005
That is a good point, and I was surprised to see that it is not mentioned in the article. It should definitely be mentioned somewhere. As you probably know, analytic formulas are not always preferable; sometimes, they are so complicated that it's hard to make sense of them. The relation between approximate / asymptotic and numerical results is even less straightforward. -- Jitse Niesen (talk) 13:35, 30 November 2005 (UTC)
So it could be said that we seek a practical analytic solution first. Failing that, we might pursue an approximate solution within a radius of convergence perhaps. Assuming this does not yield a practical solution, we attempt to find a numerical solution to describe our problem.
We have to assume that an exact solution exists, and that it depends continuously on any data (i.e., that the problem is well-posed); otherwise we have no convergence theorems. So why not just define NA as: "numerical analysis is the study of algorithms for obtaining approximations to the exact (but usually unknown) solutions of problems in continuous mathematics"? --Des-Downunder 02:31, 3 September 2006 (UTC)
What about the point about numerical solutions are found using only basic arithmetic? Is that a valid point? DB, Nov 30 2005
I believed the "basic arithmetic" part needed to be removed. Left as it was, it sounded as if numerical analysts are merely adding and subtracting numbers--don't get confused here. . . this is not what we do in the profession. -- The preceding unsigned comment was added by 24.18.243.107 (talk • contribs) 07:55, 7 January 2006 (UTC)
I don't see your point. The basic steps in all algorithms are merely adding and subtracting numbers, and using these steps we solve complicated problems, don't we? I also don't like your edit which says that numerical analysis "is largely related to functional analysis". As I see it, the two main theoretical disciplines of maths used by numerical analysis are matrix theory and approximation theory, and matrix theory is usually not taken as a part of functional analysis. So I reverted your edit. -- Jitse Niesen (talk) 13:08, 7 January 2006 (UTC)

I have to agree with the anon at a certain point. There are better things to say in the introduction I think than that numerical analysis is done using addition and multiplication. While that's true, you could say in the same way that computer science is the study of things which are sequences of zeros and ones. So I would cut that part. Oleg Alexandrov (talk) 17:48, 7 January 2006 (UTC)

I see what you mean. It is indeed not that important by itself. However, the point is that we should distinguish in some way between numerical analysis and symbolic computation (see the above discussion). As far as I know, nobody has come up with a better way to achieve this than referring to these basic arithmetic operations. -- Jitse Niesen (talk) 21:58, 7 January 2006 (UTC)
I also believe "basic arithmetic. . . addition" is misleading and should be removed from the definition. This page should describe the core aspects of Numerical Analysis and it should be careful to distinguish itself from Numerical Methods. What is more, including "basic arithmetic. . .addition" is not only misleading, but it is incorrect. I am at a loss for words here . . . I think this should be clear to anyone who manages this entry; there are entire journals devoted to NA and these articles usually never even mention or include algorithms. One of the definitions I find useful is "Numerical Analysis is Functional Analysis with an emphasis on algorithms". The preceding unsigned comment was added by ExamplePuzzle (talk • contribs) .
I don't like either the basic arithmetic or the functional analysis descriptions. But I don't have any suggestions. Oleg Alexandrov (talk) 09:33, 11 January 2006 (UTC)
Hey. I would not vote to use my "functional analysis with. . ." description on this page either. I was just trying to move the focus away from including "basic arithmetic". I will go ahead and remove it and then let other users read over it and offer their comments (this includes you :D).

I like the definition "Numerical Analysis is Functional Analysis with an emphasis on algorithms" (supra).

The view of NA as a bunch of "numerical recipes," devoid of any underlying theory, which may or may not work for mysterious reasons, is a terrible misrepresentation of the theory. Some people may use numerical analysis this way, but that doesn't mean that the subject is as hit-and-miss as some think! (No offence to the authors of the book by Press et al.)

Most importantly, the only mathematically rigorous definitions of Numerical stability, consistency, and convergence, and the proof of the Lax Equivalence theorem, all require at least some functional analysis. (I hope to add something more comprehensive to the Numerical stability Discussion section soon.)

Des-Downunder 04:41, 22 January 2007 (UTC)

Writing about numerical analysis in terms of functional analysis would make the text harder to understand I think. Not everybody has a good functional analysis background. Oleg Alexandrov (talk) 05:08, 22 January 2007 (UTC)
I agree. I certainly think of the subject as being numerical functional analysis, but that's hard to bring out at this level. The current definition (Lloyd Trefethen's, I think) works for me. Part of the issue is distinguishing between numerical analysis, computational science (and engineering), scientific computation, and any of several other terms now in vogue. I commented on this at Talk:Numerical_analysis#definition_of_numerical_analysis_discussion_copied_from_Talk:Mathematical_analysis as well. JJL 18:30, 22 January 2007 (UTC)
I'm also wondering whether functional analysis covers all of numerical analysis. Granted, it's used heavily in approximation theory and numerical PDEs, but where does functional analysis enter when studying Gaussian elimination (which is indeed a special case, as JJL observed), Newton's method or the Runge-Kutta method? -- Jitse Niesen (talk) 00:11, 23 January 2007 (UTC)
Well, functional analysis is the key idea in the backward stability of G.E., and analyzing N.M. as a fixed point iteration also qualifies. Runge-Kutta is, from one point of view, an approximation of Taylor series. Not all of these are very deep applications of the subject, perhaps, but they still fit the mold. Many numerical methods fit the over-arching theme (as I view it) of projecting an infinite-dimensional operator on an infinite-dimensional space onto finite-dimensional spaces one can work with, which is a Banach space model. So, functional analysis is always there...but less visible, it seems, every year. JJL 00:58, 23 January 2007 (UTC)

To amplify JJL's point, the usual algorithm for the Cholesky decomposition is just Gaussian elimination, modified to exploit the special structure of the matrices involved. (Cholesky decomposition applies to Hermitian, positive semi-definite matrices.)

While the proof of the existence of Cholesky decompositions, for the positive definite case, mainly uses matrix theory, the extension of the proof to the positive semi-definite case relies heavily on functional analysis.
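To make the "Cholesky is structured Gaussian elimination" point concrete, here is a minimal sketch of the standard algorithm for the positive-definite case (Python chosen only for illustration; the function name is ours):

```python
import math

def cholesky(A):
    """Factor a symmetric positive-definite matrix A (list of rows)
    as A = L * L^T with L lower triangular.

    Structurally this is Gaussian elimination specialized to exploit
    symmetry: each column is eliminated using the symmetric pivot,
    so only the lower triangle is ever computed."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # diagonal entry: positive definiteness keeps this positive
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```

For the semi-definite case discussed above, the division by L[j][j] can hit zero, which is exactly where the extra analysis comes in.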

Perhaps we need a new Theory of Numerical Analysis section that's separate from the general section. Does anyone like this suggestion of a two-layered approach?

Des-Downunder 03:16, 23 January 2007 (UTC)

I'm not sure if it's needed...but I've been singing this song since 1989 on sci.math and would enjoy seeing such an article whether it's needed or not! Too many of my colleagues think I program computers. At the Joint Meetings this month I had a pure mathematician I had just met explain to me that numerical analysis is very easy. I thought: Lax Equivalence Theorem, Kreiss Matrix Theorem, etc....it's most aggravating! Count me in on this. I was just discussing an ind. study from Theoretical Numerical Analysis: A Functional Analysis Framework (by Kendall Atkinson and Weimin Han) with an undergrad. student this afternoon. JJL 04:35, 23 January 2007 (UTC)
You seem to have a more inclusive view of functional analysis than I have. I would call Taylor series and norms and continuity on R^{n \times n} as used in Cholesky decomposition#Proof for positive semi-definite matrices just real analysis, which is part of (mathematical) analysis, but that's just a question of taste, I guess, and not terribly important. I'll have a look at the book by Atkinson and Han that JJL suggests, to satisfy my curiosity.
I'm not sure what is meant with "a new Theory of Numerical Analysis section", but theory should certainly not be ignored; it's much too beautiful to be kept secret! Incidentally, there is much room for improvement in article on the Lax equivalence theorem. -- Jitse Niesen (talk) 12:48, 24 January 2007 (UTC)
I think we are largely in agreement on what is real vs. functional analysis. I was trying to get at that when I said Taylor series etc. were not "very deep applications of the subject" (i.e., barely qualifying as functional vs. real analysis). But to me, the key model is a Banach space, since I need norms to measure various errors. What happens in that space and the associated spaces is "functional" enough for me, and I do tend to view real analysis as a special case. I'm sure my colleagues with a more topological view of real analysis would disagree.
I think that a theory of n.a. section would be most useful as a single paragraph that basically says "yes, of course we prove things" and mentions ideas like functional analysis, then points to a separate article. I'm not convinced WP needs such an entry...but I do think it'd be a nice reference page to have for those who think numerical analysis is principally a list of specific numerical methods. But it's a pet peeve of mine, so I'm waiting to see if there's consensus among more objective folks that it's needed. I'll look at the Lax Equiv. article.
Getting off-track for a moment, the Atkinson/Han book is written at a level I can use with an advanced undergrad. who has taken my n.a. class and has had real analysis. Last time I used it to give a "functional analysis with a numerical analysis flavor" ind. study with a student who was headed to an applied math. Ph.D. program and who had missed our every-other-year undergrad. functional analysis course. It has the preliminary info. one needs for that before jumping into n.a. topics. It's definitely worth a look if you need a text like that; I still go to Stoer and Bulirsch first for specific results. JJL 13:47, 24 January 2007 (UTC)
As in previous discussions, we believe that the word 'approximate' does not belong in the initial definition. For more information on this topic, see Numerical Linear Algebra by Trefethen and Bau (first edition, page 323). Also, why does every page tend to begin with "Since the beginning of time . . .", or "Human kind has always . . .", or <insert your own grandiose phrase>? This page is no exception. I have my eye on the phrase 'For thousands of years, man . . .'. What do you think? —Preceding unsigned comment added by ExamplePuzzle (talkcontribs) 05:49, 7 December 2006

## approximate in def vs. exact in direct methods

The definition and the article seem to be inconsistent.

"Numerical analysis is the study of approximate methods for the problems of continuous mathematics ..."

"Some problems can be solved exactly by an algorithm. These algorithms are called direct methods."

If the methods are approximate, how can there be an exact solution?

--Jtir 18:18, 29 September 2006 (UTC)

Maybe it has to do with whether real numbers or floating point numbers are being used. --Jtir 18:25, 29 September 2006 (UTC)
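That reading seems right: elimination is "exact" at the level of real (or rational) arithmetic, and the approximation only enters through floating point. A minimal sketch of the idea (Python's `fractions` module; the 2-by-2 solver and its name are invented for the example):

```python
from fractions import Fraction

def solve2(a, b, c, d, e, f, num=float):
    """Solve [a b; c d] [x, y] = [e, f] by Gaussian elimination,
    carrying out every operation in the number type `num`."""
    a, b, c, d, e, f = map(num, (a, b, c, d, e, f))
    m = c / a                      # multiplier for the elimination step
    d2, f2 = d - m * b, f - m * e  # eliminate x from the second row
    y = f2 / d2                    # back substitution
    x = (e - b * y) / a
    return x, y
```

Run over `Fraction`, the same algorithm returns the exact solution (1/5, 2/5); run over `float`, it returns an approximation that can differ in the last bits.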

## definition of numerical analysis discussion copied from Talk:Mathematical analysis

(This was copied after my ill-fated attempt to define numerical analysis at Mathematical analysis.) --Jtir 19:32, 29 September 2006 (UTC)

"Numerical analysis, the study of algorithms for approximating the problems of continuous mathematics."

"Numerical analysis solves problems of continuous mathematics by iteratively computing approximations until they converge to a numerical solution."

• It is now a declarative sentence.
• I didn't understand the phrase "approximating the problems", so I replaced it with the "solving problems" formulation.
• It now identifies both the computational and numerical aspects of the subject.
• The word "algorithm" got dropped. Maybe it could be worked in.
• As I read somewhere on WP, some numerical analysts do not actually write or run computational programs themselves. However, running programs is the ultimate objective of the subject. My definition doesn't quite capture these nuances. Maybe saying "analyzes and solves problems" would be broad enough.
• The definition is longer.

--Jtir 17:20, 29 September 2006 (UTC)

The new definition is wrong. Numerical analysis is not merely about iterative methods. Fredrik Johansson 17:39, 29 September 2006 (UTC)
Thanks for pointing this out. The definition does not account for direct methods. I reverted. --Jtir 19:01, 29 September 2006 (UTC)
Please stick to the formulation at Numerical analysis: Numerical analysis [is] the study of approximate methods for the problems of continuous mathematics. Concerning the use of the word "algorithm": Take for example the Runge–Kutta methods. It is strange and in any case unconventional to call these methods "algorithms". (And, while R-K methods are iterative, it is downright wrong to state that they iteratively compute approximations until they converge.) Likewise, interpolation and extrapolation are methods, not by themselves algorithms.
—The preceding unsigned comment was added by Lambiam (talkcontribs) 18:57, 29 September 2006 (UTC)
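To illustrate the point that Runge-Kutta methods march a prescribed number of steps rather than iterating until convergence, here is a sketch of the classical RK4 method (Python used only for illustration; the function names and the test problem are our own):

```python
# One classical Runge-Kutta (RK4) step for y' = f(t, y). The method
# advances the solution by a fixed stepsize h; nothing is "iterated
# until it converges" the way a fixed-point solver would be.

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(f, t0, y0, h, n):
    # take exactly n steps; the loop count is fixed in advance
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For y' = y with y(0) = 1, a hundred steps of size 0.01 reproduce e = y(1) to roughly the method's fourth-order accuracy.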
There's no hope of getting a precise definition of the field. (This isn't unique to numerical analysis.) Philip J. Davis refused to supply one to me when asked, giving a lengthy reason why it couldn't and shouldn't be done. But, the appendix of Trefethen and Bau's Numerical Linear Algebra is instructive here, especially the comment by Trefethen (if memory serves) that G.E. is a highly unusual algorithm in many ways. The definition shouldn't make or break on that one case. Linear eqs. are a special case, after all. I have several older books on my shelves entitled Numerical Functional Analysis. That's where many eventual numerical analysts came from way back when (including Davis, literally drafted into the USAF to do numerical aerodynamics). To my mind, it's applied functional analysis, and I like that definition for nostalgic reasons even though it no longer captures the field as well as it could. The current definition in this article is essentially the one used by Trefethen, save that he omits the 'approximate'. I would be fine with 'iterative methods'. There aren't enough exceptions to worry about, even though the G.E. exception is so prominent. JJL 19:52, 3 October 2006 (UTC)

## Should examples replace informative exposition?

(copied from User talk:Loisel) --Jtir 18:34, 30 September 2006 (UTC)

Hi, adding examples is welcome, but could you do it without zapping some excellent exposition? Perhaps you could put your examples in subsections after the introductory exposition, naming the subsections Example. --Jtir 20:30, 29 September 2006 (UTC)

I am being bold (WP:BB), which is policy. My examples are meant to replace the vague exposition that precedes them. The style I intend is to start with an example, and end with some sort of general statement that resembles the previous text. I also intend to do that to the rest of the article. Loisel 17:39, 30 September 2006 (UTC)
Actually, WP:BB is a guideline that reads in part "avoid deleting information wherever possible." I found the information in the text you are replacing very useful. I suggest you merge, not replace.
And the talk page is the first place to ask for opinions of editors when you "intend to do that to the rest of the article".
--Jtir 18:34, 30 September 2006 (UTC)

While the current changes added good material, I am not happy with the removal of much of the previous text. That text was more accessible I think, and was not bogging the reader into details that much. Also, the first part of the article is now written in a totally different format than the latter part (which is old text, and which looks nicer). I would vote to rollback most of the current changes, or integrate them in the old text (without too much detail, that could belong in separate articles and linked to from relevant sections). Oleg Alexandrov (talk) 19:28, 30 September 2006 (UTC)

Feel free to edit, merge or delete yourselves. This is the Wikipedia. I added lots of examples because I frankly think that this article was useless to novices and experts alike. It was useless to novices because none of the concepts were explained in any way, and it was useless to experts because all the stuff was trivial. Loisel 20:39, 30 September 2006 (UTC)

Maybe an alternative would be to put the examples in sidebars, like this.

- ${\displaystyle 3x^{3}+4=28}$
- ${\displaystyle 3x^{3}=24}$ (subtracted 4 from each side)
- ${\displaystyle x^{3}=8}$ (divided each side by 3)
- ${\displaystyle x=2}$ (took the cube root of each side)

Loisel 21:36, 30 September 2006 (UTC)

They look great. I like the graphs and pics, too. I interpret the ellipses at the ends of the decimal expansions to possibly mean an infinite expansion. How are the examples of iteration generated? --Jtir 17:25, 1 October 2006 (UTC)

The decimal expansion is finite but long, so I put some ellipses.

I built the iteration in the following way. I want to compute sqrt(2) using a fixed point iteration. One way to do this is to write a fixed point iteration for a polynomial whose roots include sqrt(2). When you have a fixed point iteration x[k+1]=f(x[k]), x[k+1] is to the right of x[k] iff f(x[k])>x[k]. So I had to pick a polynomial f(x) that lies above the line y=x near sqrt(2) and such that f(sqrt(2))=sqrt(2). This way, if you start the iteration from the left of sqrt(2), it will move to the right and converge to sqrt(2). If you start to the right of sqrt(2), it will also move to the right and therefore diverge.

Complications arise when you also want the coefficients to be nice. Basically, I took g(x)=x^2-2, which has root sqrt(2), and then I picked f(x)=g^2+x. Clearly g^2 also has sqrt(2) for a root, and so f(sqrt(2))=sqrt(2). Because g^2 >= 0, you have f(x) >= x. Loisel 18:45, 1 October 2006 (UTC)
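The construction described above can be sketched as follows (Python rather than anything used in the article; the starting points and step counts are chosen just for illustration). With f(x) = (x^2 - 2)^2 + x, iterates starting just below sqrt(2) creep up to the root, while iterates starting above it run away:

```python
def iterate(x0, steps):
    """Run the fixed point iteration x[k+1] = (x[k]^2 - 2)^2 + x[k].

    f(x) = g(x)^2 + x with g(x) = x^2 - 2, so f(sqrt(2)) = sqrt(2) and
    f(x) >= x everywhere: iterates can only move to the right."""
    x = x0
    for _ in range(steps):
        x = (x * x - 2.0) ** 2 + x
        if x > 10.0:  # past the root the iterates blow up; stop early
            break
    return x
```

Starting at 1.4, the iterates converge to sqrt(2) from below, slowly, since f'(sqrt(2)) = 1; starting at 1.5, they diverge within a few steps.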

Thanks, that's excellent. I'd like to put in a footnote(?) about the method so that the example is verifiable. What software are you using to compute the examples? --Jtir 19:53, 1 October 2006 (UTC)
Ok. I use Matlab. Loisel 20:07, 1 October 2006 (UTC)
How do editors feel about using a Matlab snippet together with exposition similar to the above to ensure verifiability and avoid any charge that the example is original research? Obviously the value of the example would be enhanced by doing this anyway. --Jtir 20:49, 1 October 2006 (UTC)
Not a good idea. Normal people don't have access to Matlab, and those who do are able to write the Matlab snippet themselves from the description. You have to be very careful about including code in publications because code has a tendency to add a lot of clutter without really helping anyone. Loisel 21:26, 1 October 2006 (UTC)
I agree that too much code in the wrong places would constitute clutter. BTW, I "don't have access to Matlab".  ;-) --Jtir 20:15, 3 October 2006 (UTC)

I definitely like the direction the page is moving! A little Matlab here doesn't bother me. JJL 19:53, 3 October 2006 (UTC)

## Old history removed for now

I removed this for now, it could use some work.

### History

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.

Loisel 22:19, 2 October 2006 (UTC)

## def for the lead sentence

"The overall goal is the design and analysis of techniques to give approximate solutions to hard problems."
This would make a good def for the lead sentence. I like the phrase "approximate solutions to hard problems". I'm not sure what could be done to avoid a double occurrence of analysis. --Jtir 19:35, 3 October 2006 (UTC)

## regarding software

there was a comment in there about matlab's speed advantage over octave. i changed it to be more generic so as to apply to all the packages listed. i also tacked on a reference to an actual speed comparison (it originally came from the wiki page for R; dunno where they got it).

i was about to add a comment about comparing non-numerical abilities. for instance, matlab's add-on packages make it very attractive for certain specialized applications. in another instance, the graphical layout abilities of R far outstrip any other of the listed products.

but then i stopped myself and began to wonder whether we should really be in the business of pitting one software package against another. that is, imho, wikipedia is not a software comparison service. ("just the facts, ma'am.") does it sound reasonable to take out the speed comparison, too? Lunch 23:23, 3 October 2006 (UTC)

That section is looking good. A general discussion of the ways in which NA SW may be compared (speed, accuracy, flexibility, reliability, etc) is useful IMO. The article could generically describe Matlab's features, say, but not name Matlab explicitly. Further, a general discussion would not become dated so quickly. Sourced comparisons from credible, independent (i.e. not any vendor) sources appear to meet the sourcing policy: "articles ... must refer only to facts, assertions, theories, ideas, claims, opinions, and arguments that have already been published by reputable publishers." The benchmark site in the article compares a variety of SW and explains the methodology. Hostname reports that the registrant is at the "UMH, Numerical Ecology Laboratory", Mons, BE. So it seems both credible and independent.
BTW, could something about handheld scientific calculators be added to the discussion? They aren't strictly SW, but the section title could be broadened slightly. What about slide rules?  ;-) --Jtir 00:18, 4 October 2006 (UTC)

## comments on fixed point iteration footnote

That looks good. There are no subscripts on the variables in the fixed point iteration footnote.
The rationale for the second square seemed fairly subtle and thus would be invaluable in the footnote. --Jtir 20:40, 7 October 2006 (UTC)

OK, I just noticed that the footnote says it's an equation, not an iteration --Jtir 20:57, 7 October 2006 (UTC).
Normally, the fixed point iteration article should explain how to generate fixed point iterations from an equation. Basically, you put the subscript k on one side of the equation, and the subscript k+1 on the other side. You are then supposed to check whether the iteration converges. This particular example has peculiar convergence properties for didactic purposes. Loisel 21:39, 7 October 2006 (UTC)
OK on the FPI article for the general method. This is what you said before that I thought was especially interesting:
• "if you start the iteration from the left of sqrt(2), it will move to the right and converge to sqrt(2). If you start to the right of sqrt(2), it will also move to the right and therefore diverge."
The example itself doesn't put it as directly as you did here.
The footnote could say something to the effect "for the purposes of the example we use [this] equation because of [that]". It may be an instance of stating the obvious per the MOS. --Jtir 22:01, 7 October 2006 (UTC)
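The behavior quoted above (drifting right toward ${\displaystyle {\sqrt {2}}}$ from the left, drifting right and away from it on the right) can be reproduced with a short script. This is an illustrative Python sketch, not code from the article; the function name `unstable_sqrt2` and the divergence cutoff are my own choices:

```python
import math

def unstable_sqrt2(x0, steps=1000, bound=1e6):
    """Iterate x_{k+1} = (x_k^2 - 2)^2 + x_k, which has sqrt(2) as a
    fixed point.  The increment (x^2 - 2)^2 is always >= 0, so the
    iterates always move right.  Stops early once they clearly diverge."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append((x * x - 2) ** 2 + x)
        if abs(xs[-1]) > bound:
            break
    return xs

left = unstable_sqrt2(1.4)    # left of sqrt(2): creeps up toward sqrt(2)
right = unstable_sqrt2(1.42)  # right of sqrt(2): moves right, diverges
```

The convergence from the left is very slow (the error behaves roughly like ${\displaystyle 1/(8k)}$ after ${\displaystyle k}$ steps), while the run from 1.42 blows past any bound within a few dozen iterations.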
I don't see any need for a Matlab snippet anymore. If a reader wanted to reproduce the example and compare his results with the example, what might he expect, supposing he might or might not be using Matlab? --Jtir 22:13, 7 October 2006 (UTC)
To reproduce results that require less than 30 iterations (say), you could use one of:
• A piece of paper, a pen and lots of grunt work
• A piece of paper, a pen and a calculator, and less grunt work
For more than 100 iterations, you would want a programming language, such as
• Matlab (~3 lines)
• C (~30 lines)
• Octave etc... might work, but for 1000000 iterations it might take a while. Recall I mentioned that Matlab is a couple of orders of magnitude faster than Octave for loops.
Loisel 22:45, 7 October 2006 (UTC)
Maybe Newton could do 30 iterations on paper ...:-) Would the user get the same decimal expansion at each iteration as shown in the examples? (This may be another one of those stating-the-obvious cases.) --Jtir 23:04, 7 October 2006 (UTC)
Not an identical expansion, especially for the slow iteration. Loisel 01:17, 8 October 2006 (UTC)
The new version is clear and concise. I like that you used the inequalities. This now makes a nice example of a footnote going into more detail than what can be said in the main article. --Jtir 22:29, 7 October 2006 (UTC)

Had an idea to say: if we had instead used the function ${\displaystyle x=(x^{2}-2)+x=f(x)}$, which also has ${\displaystyle {\sqrt {2}}}$ as a solution, such-n-such would have happened. --Jtir 22:42, 7 October 2006 (UTC)

That iteration is not as interesting in this case because it diverges regardless of where you start, for different reasons. Because ${\displaystyle f(x)<x}$ to the left of ${\displaystyle {\sqrt {2}}}$ and ${\displaystyle f(x)>x}$ to the right of ${\displaystyle {\sqrt {2}}}$, ${\displaystyle {\sqrt {2}}}$ is a repelling fixed point. Because the slope is too steep at ${\displaystyle -{\sqrt {2}}}$, that's also a repelling fixed point. Loisel 22:53, 7 October 2006 (UTC)
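The repelling behavior described above can be checked numerically. This is an illustrative Python sketch (the function name `repelled` is mine, not from the thread); it iterates ${\displaystyle x_{k+1}=(x_{k}^{2}-2)+x_{k}}$ and shows that a start very close to ${\displaystyle {\sqrt {2}}}$ is quickly pushed away:

```python
import math

def repelled(x0, steps=10):
    """Iterate x_{k+1} = (x_k^2 - 2) + x_k.  Both fixed points,
    +sqrt(2) and -sqrt(2), are repelling (|f'| > 1 there), so
    starts near a fixed point drift away from it."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] ** 2 + xs[-1] - 2)
    return xs

# Start within 0.005 of sqrt(2); within a handful of steps the
# iterates have wandered more than a unit away from it.
traj = repelled(1.41)
```

Note that the iterates do not necessarily run off to infinity; they are merely repelled from the fixed point, which is enough to make the iteration useless for computing ${\displaystyle {\sqrt {2}}}$.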
I think the contrast between the behavior of the two functions is what is interesting. That's perfect for the [slowly growing] footnote. :-) --Jtir 23:10, 7 October 2006 (UTC)
I've seen attractive graphs illustrating the progress of fixed point iteration. They would obviously not be suitable for a footnote. :-) Maybe for the Fixed point iteration article? --Jtir 00:39, 8 October 2006 (UTC)
The iteration ${\displaystyle x^{*}=(x^{2}-2)+x}$ is indeed less interesting, but I'd say that makes it more proper to be used as an example here. The more interesting iteration ${\displaystyle x^{*}=(x^{2}-2)^{2}+x}$ is also more complicated, possibly confusing, and artificial; of course, it's good for the (as yet) imaginary fixed point iteration article.
Another thing: Are you sure the divergent iteration is said to be "numerically unstable"? It's indeed unstable (in the sense of dynamical systems), but personally I wouldn't call it numerically unstable, which makes me doubt that other people do this. -- Jitse Niesen (talk) 03:45, 8 October 2006 (UTC)
Summary: ${\displaystyle f(x)=x^{2}-2+x}$ is a bad example, and ${\displaystyle f(x)=(x^{2}-2)^{2}+x}$ is much better and is numerically unstable.
The first time I was introduced to numerical instability, it was the following example. Note that ${\displaystyle 2hg'(x)=g(x+h)-g(x-h)+O(h^{3})}$, which leads to the midpoint rule of ODE integration ${\displaystyle 2hf(y_{k})=y_{k+1}-y_{k-1}}$ for the system ${\displaystyle y'=f(y)}$, which has local truncation error ${\displaystyle O(h^{3})}$ and therefore (one hopes) global error ${\displaystyle O(h^{2})}$.
One can easily implement this for the well-conditioned problem ${\displaystyle y'=-y}$ (solution is a constant times ${\displaystyle \exp(-t)}$.) In MATLAB, it looks like
```matlab
f = @(y) -y;                       % right-hand side of y' = -y
h = 0.01;                          % step size
y = [1, 1 + h*f(1)];               % exact IC, then one Euler step to start
ts = [0, h];
for k = 2:round(10/h)
  y(k+1) = 2*h*f(y(k)) + y(k-1);   % midpoint rule
  ts(k+1) = ts(k) + h;
end
plot(y);
```
This integrates from t=0 to t=10 using a stepsize of 0.01, which is apparently very small, considering how nice this problem is. Unfortunately, if you do this, you get this picture.
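For readers without Matlab, here is a direct Python translation of the same experiment (plotting omitted; the helper name `midpoint_integrate` is mine):

```python
import math

def midpoint_integrate(f, y0, h, t_end):
    """Explicit midpoint rule y_{k+1} = y_{k-1} + 2*h*f(y_k),
    started with one forward Euler step."""
    y = [y0, y0 + h * f(y0)]
    for k in range(1, round(t_end / h)):
        y.append(y[k - 1] + 2 * h * f(y[k]))
    return y

# Integrate y' = -y from t = 0 to t = 10 with h = 0.01.  Early on the
# numerical solution tracks exp(-t), but by t = 10 the oscillating
# parasitic mode dominates: |y(10)| is ~0.5 instead of exp(-10) ~ 4.5e-5.
y = midpoint_integrate(lambda v: -v, 1.0, 0.01, 10.0)
```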
The numerical solution is completely wrong, that's numerical instability and it's a very famous example (the midpoint rule is unconditionally unstable: it is unstable no matter how small h is, and no matter how nice the RHS is.) But why does it happen? The way to analyze this is to plug ${\displaystyle f(y)=-y}$ into the iteration; one obtains the three-term recurrence ${\displaystyle -2hy_{k}=y_{k+1}-y_{k-1}}$, whose general solution is ${\displaystyle y_{k}=ap_{+}^{k}+bp_{-}^{k}}$ where ${\displaystyle p_{\pm }={-2h\pm {\sqrt {4h^{2}+4}} \over 2}}$. Regardless of ${\displaystyle h>0}$, you will always have ${\displaystyle p_{-}<-1}$ so that the solution is guaranteed to explode when ${\displaystyle b\neq 0}$. With proper initial data, ${\displaystyle b}$ will be small, but it will never be zero, or will become nonzero due to roundoff.
Another way of checking that this iteration is unstable is to view the ODE solver as a fixed point iteration for the function ${\displaystyle \phi (Y)=AY}$ where A is the matrix [-2h 1;1 0] whose spectral radius is ${\displaystyle h+{\sqrt {h^{2}+1}}>1}$. Stated in this way, suddenly it doesn't sound like numerical instability at all, but rather it sounds like stupidity on the part of the designer. Who would expect this to work? The problem is, once you know the magic trick, all numerical instabilities seem like stupidity.
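The spectral radius is easy to verify by hand: the eigenvalues of A = [-2h 1; 1 0] solve ${\displaystyle \lambda ^{2}+2h\lambda -1=0}$, giving ${\displaystyle \lambda =-h\pm {\sqrt {h^{2}+1}}}$. A tiny numeric check (Python, purely for illustration):

```python
import math

h = 0.01
# Eigenvalues of A = [-2h 1; 1 0] solve lambda^2 + 2*h*lambda - 1 = 0.
eigs = [-h + math.sqrt(h * h + 1), -h - math.sqrt(h * h + 1)]
rho = max(abs(lam) for lam in eigs)  # spectral radius = h + sqrt(h^2+1) > 1
```

Since det(A) = -1, the product of the eigenvalues is -1, so whenever one eigenvalue is inside the unit circle the other is outside it; that is exactly the parasitic root ${\displaystyle p_{-}}$ above.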
The crucial elements are: you state an algorithm which, for some reason, you believe will work; you try it and it works a little bit. If it starts failing mysteriously when you twiddle it around, it's numerically unstable.
It is very hard to state an algorithm which will sound natural to the layperson, and certainly giving the midpoint rule would be much more complex than what I did in this article, requiring an understanding of ordinary differential equations.
The iteration I gave has many virtues:
• If you know the fixed point iteration, it's obvious that sqrt(2) is a fixed point, so it's obvious why you would try this.
• If you don't know the fixed point iteration, we have an example, starting from 1.4, where it converges to the right thing, so this establishes the credibility of the iteration.
• We have a perturbation (starting from 1.42) where it fails to converge. This is akin to taking slightly wrong initial data for the midpoint rule.
• It can be understood by anyone in high school.
The main difficulty with the iteration for ${\displaystyle f(x)=(x^{2}-2)+x}$ is that it becomes impossible to establish the credibility of this algorithm to laypeople: there is no starting point from which the iteration converges.
Loisel 07:52, 8 October 2006 (UTC)
The midpoint rule is an excellent example, we both agree on that, but it would be nice to have an easier example. I'm interested in your "definition" of numerical instability (when algorithms should work but don't), because it agrees with how I think that the term is used, but I could not find a source for that definition when doing a quick hack on numerical stability. Do you know any? The same goes for my question (are you sure the divergent iteration is said to be "numerically unstable"); that was meant to be short for asking for a reference. I think this is especially important for such a nebulous concept as numerical stability.
I don't think that ${\displaystyle x^{*}=(x^{2}-2)^{2}+x}$ is that obvious. When designing a fixed point iteration, you start with the equation you want to solve: ${\displaystyle x^{2}=2}$. From this, one is naturally (IMHO) led to ${\displaystyle x=2/x}$, which unfortunately does not work. Then, one can try taking the average (which yields Newton) or rewriting the equation as ${\displaystyle x=x^{2}+x-2}$. From this point of view, the iteration ${\displaystyle x^{*}=x^{2}-2+x}$ is more credible than ${\displaystyle x^{*}=(x^{2}-2)^{2}+x}$. The credibility of fixed point iteration can be established quickly by doing the Newton iteration before the unstable iteration.
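The "taking the average" step mentioned above, averaging ${\displaystyle x}$ and ${\displaystyle 2/x}$, is the classical Newton (Babylonian) iteration for ${\displaystyle x^{2}=2}$, and it converges extremely fast. A quick sketch (Python, for illustration only):

```python
import math

# x = 2/x alone just oscillates (1 -> 2 -> 1 -> ...), but averaging the
# two sides, x_{k+1} = (x_k + 2/x_k) / 2, is Newton's method for x^2 = 2,
# which converges quadratically.
x = 1.0
for _ in range(6):
    x = (x + 2 / x) / 2
```

Six steps from 1.0 already give ${\displaystyle {\sqrt {2}}}$ to machine precision, which makes a nice contrast with the thousands of steps the slowly converging fixed point iteration needs.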
In the end, which iteration is used is not that important to me. I'm more interested in trying whether we can simplify the example. If we move Newton before the unstable iteration, I think we can remove the first half of the paragraph "numerically unstable algorithm" (the part which starts from 1.4). What do you think of that?
By the way, I like your examples / illustrations and I think they're a huge improvement, even though some may question the place of examples in an encyclopaedia. -- Jitse Niesen (talk) 12:16, 8 October 2006 (UTC)
I just realized I never answered this. Indeed, your suggestions for alternate iterations sound reasonable. For a definition of stability, Conte and de Boor says "...instability describes the sensitivity of a numerical process for the calculation of f(x) from x to the inevitable rounding errors committed during its execution in finite precision arithmetic." Loisel 17:59, 4 December 2006 (UTC)

That is a fantastic graphic and it would be perfect for the Fixed point iteration article, as would the associated discussion of it.
I favor the current example because it is easy to understand while illustrating an important phenomenon. The NA article does not need to explain how the example was designed. That can be done in the FPI article. A footnote can briefly explain why the example was chosen instead of the "simpler" alternative and link to the FPI article where the two can be compared in detail.

--Jtir 15:50, 8 October 2006 (UTC)

Well, actually, lemme just add one thing. The spectral radius of A is approximately 1+ch for some c, so if you take N=1/h integration steps (to integrate over [0,1]), the explosion is (1+ch)^N=(1+c/N)^N, which converges to exp(c) as h goes to zero. So some people consider that the midpoint rule is actually stable, because the integration rule does not explode more as you make h smaller.
An example that does explode more the smaller you make h is this. You start with ${\displaystyle 6hg'(x)=g(x-2h)-6g(x-h)+3g(x)+2g(x+h)+O(h^{3})}$ to obtain the iteration ${\displaystyle 6hf(y_{k})=y_{k-2}-6y_{k-1}+3y_{k}+2y_{k+1}}$. You apply it to the test problem ${\displaystyle f(y)=-y}$ and you obtain a fixed point iteration whose matrix has spectral radius 2.69 + 2.04h (rounded to two digits). So if you take N=1/h steps to integrate over [0,1], your blow up will be (2.69+2.04h)^N>2.69^N which goes to infinity very fast as h goes to zero.
That's not the example I gave because more people are familiar with the midpoint rule. I guess with the midpoint rule, you can take h sufficiently small, and be sufficiently careful with the initial condition, that it becomes usable. It's just really hard to do. Some people will still say that the midpoint rule is unconditionally unstable. Loisel 18:41, 8 October 2006 (UTC)
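The 2.69 figure can be checked by hand at h = 0: plugging ${\displaystyle f(y)=-y}$ into the scheme gives the recurrence ${\displaystyle 2y_{k+1}=-(3+6h)y_{k}+6y_{k-1}-y_{k-2}}$, whose characteristic polynomial is ${\displaystyle 2\lambda ^{3}+(3+6h)\lambda ^{2}-6\lambda +1}$. At h = 0 this factors as ${\displaystyle (\lambda -1)(2\lambda ^{2}+5\lambda -1)}$, and the root largest in magnitude is ${\displaystyle -(5+{\sqrt {33}})/4\approx -2.686}$. A short verification (Python, for illustration):

```python
import math

# Characteristic polynomial at h = 0 of the recurrence
#   2*y[k+1] = -(3 + 6h)*y[k] + 6*y[k-1] - y[k-2]
# is 2*L**3 + 3*L**2 - 6*L + 1 = (L - 1)*(2*L**2 + 5*L - 1).
root = -(5 + math.sqrt(33)) / 4   # root of 2*L**2 + 5*L - 1 = 0
rho0 = abs(root)                  # ~2.686, the "2.69" quoted above
residual = 2 * root**3 + 3 * root**2 - 6 * root + 1
```

Because the parasitic root already has magnitude about 2.69 at h = 0, refining the step size cannot tame the blow-up, unlike the midpoint rule whose parasitic root approaches the unit circle as h goes to zero.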

## Table layout

I changed one table to an alternate layout. I'm not sure I like it, but the previous one is a little cluttered, as Tompw points out. Loisel 22:08, 7 October 2006 (UTC)

I saw it and can only say it is hard to decide. The horizontal arrangement has space for still more images, though. :-) --Jtir 22:47, 7 October 2006 (UTC)

## created the Fixed point iteration article

I have created the Fixed point iteration article.
It is a stub with a link to Fixed point (mathematics).
I also started the Talk:Fixed point iteration page. --Jtir 14:34, 8 October 2006 (UTC)