Talk:Numerical methods for ordinary differential equations

From Wikipedia, the free encyclopedia
WikiProject Mathematics (Rated B-class, High-importance; Field: Applied mathematics)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

Names[edit]

According to the St Andrews' MacTutor website, specifically http://www-history.mcs.st-and.ac.uk/history/Mathematicians/Runge.html and http://www-history.mcs.st-and.ac.uk/history/Mathematicians/Kutta.html, the names are as written by 142.177.19.200 (Carle David Tolmé Runge and Martin Wilhelm Kutta), and not as I wrote them earlier. Jitse Niesen 15:20, 8 Jan 2004 (UTC)

Gear[edit]

I removed the following item from the History section:

1968 - C. William Gear invents the first stable algorithms to solve stiff differential equations.

I suppose this refers to BDF (backward differentiation formula), which were in fact already introduced by Curtiss and Hirschfelder in the same 1952 paper where they talk about stiffness. Please correct me if I am wrong. -- Jitse Niesen (talk) 5 July 2005 16:58 (UTC)

Consistent methods[edit]

It seems that the consistence of a method is mentioned in the pages about Runge-Kutta and Adams method, but it is never defined. Is this page the right place to put its definition? Fph 12:41, 21 June 2006 (UTC)

Yes, I think so. It would perhaps fit in nicely in the discussion about order. By the way, benvenuti a Wikipedia! -- Jitse Niesen (talk) 13:37, 21 June 2006 (UTC)
Grazie! I have added some words about consistency (by the way, it seems consistency is more widespread than consistence). Someone should add a short comment about consistency being a condition weaker than convergence that ensures the method makes (at least some) sense. I'm not sure I know English well enough to write it correctly, so I'd better leave it to someone else. :-) --Fph 19:14, 28 June 2006 (UTC)
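For whoever drafts that addition: the usual textbook definition, sketched here for a generic one-step method with increment function Φ (the notation is mine, not the article's), is

```latex
% A one-step method  y_{n+1} = y_n + h \, \Phi(t_n, y_n; h)  is consistent
% if its local truncation error vanishes as the step size h -> 0:
\tau_n = \frac{y(t_{n+1}) - y(t_n)}{h} - \Phi\bigl(t_n, y(t_n); h\bigr),
\qquad
\text{consistency:}\quad \max_n \lVert \tau_n \rVert \to 0
\quad\text{as } h \to 0 .
```

Consistency together with zero-stability gives convergence (the Dahlquist equivalence theorem for linear multistep methods), which is the sense in which consistency alone is the weaker condition.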

Slight Change[edit]

I think it would make the equations easier to understand if h is replaced by .

Please comment on my suggestion. --Freiddy 18:48, 2 March 2007 (UTC)

Perhaps (I suppose you mean ). Your notation indeed makes it easier for the reader to remember that it stands for the step size. On the other hand, expressions like and become slightly more awkward: you get (which could be misinterpreted, though adding some spacing might remedy this) and (which might need parentheses). So, I don't know. -- Jitse Niesen (talk) 03:05, 3 March 2007 (UTC)
is mostly understandable, since most people are quite used to the notation . You can also just change into which is just like an integral. --Freiddy 12:28, 3 March 2007 (UTC)

ON THE ACCURACY OF DIGITAL INTEGRATION ALGORITHMS[edit]

My 14 years of experience with analog computers, my 40 years of experience with feedback controls, and my 40 years of experience with simulation (both analog and digital) have given me a somewhat different perspective on digital integration than I find in the literature. A digital integration algorithm must be evaluated on how well its gain matches 1/(jω), and how close its phase is to -90 deg. The primary cause of problems with digital integration algorithms is the phase error, not the gain error. Some years ago I tested several digital integration algorithms and found only one that gave both good gain error and good phase error. This one is the Adams-Bashforth 2. All the other algorithms were very poor. Looking at amplitude error only gives a false confidence in the algorithm.

To evaluate the algorithms, we did two different tests. We first measured the gain and phase with a digital signal analyzer which we programmed in C along with the integration algorithm. This was done on a 386-25, which dates the work. Then we programmed a second-order loop with no damping to observe how fast the solution diverged or how fast it damped to zero. Once again, the AB 2 was the best by a wide margin. It isn't perfect, and it isn't nearly as good as a good analog integrator, but it was the best we could find. We didn't test every algorithm, but we did test other AB algorithms, the RK algorithms, Euler's method, and probably a predictor-corrector and Adams-Moulton methods. The result was always the same: AB 2 wins by a wide margin.
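The frequency-response comparison described above is easy to reproduce analytically rather than with a signal analyzer: treat each method as a discrete transfer function H(z), evaluate it at z = e^{jωh}, and compare with the ideal integrator 1/(jω). A minimal sketch, assuming the standard forms of forward Euler and Adams-Bashforth 2 (the function name `integrator_response` is mine):

```python
import numpy as np

def integrator_response(wh, method="ab2"):
    """Gain error (dB) and phase error (deg) of a digital integrator
    relative to the ideal 1/(j*w), at normalized frequency wh = w*h."""
    z = np.exp(1j * wh)
    if method == "euler":
        # forward Euler: y[n+1] = y[n] + h*f[n]  ->  H(z) = h / (z - 1)
        H = 1.0 / (z - 1.0)                    # in units of h
    elif method == "ab2":
        # AB2: y[n+1] = y[n] + h*(1.5 f[n] - 0.5 f[n-1])
        H = (1.5 - 0.5 / z) / (z - 1.0)        # in units of h
    else:
        raise ValueError(method)
    ideal = 1.0 / (1j * wh)                    # ideal integrator, same units
    gain_err_db = 20.0 * np.log10(np.abs(H) / np.abs(ideal))
    phase_err_deg = np.degrees(np.angle(H) - np.angle(ideal))
    return gain_err_db, phase_err_deg

for m in ("euler", "ab2"):
    g, p = integrator_response(0.1, m)
    print(f"{m:6s} gain err = {g:+.4f} dB, phase err = {p:+.4f} deg")
```

At ωh = 0.1 this shows the effect claimed above: both methods have tiny gain error, but Euler's phase error is about -ωh/2 rad (≈ -2.9°) while AB2's is two orders of magnitude smaller, which is what matters in a closed loop.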

The only time phase is not important is when the simulation is open loop. This is not the normal case. The normal case with the solution to differential equations is that the simulation is closed loop and the phase makes a huge difference.

Midpoint Method[edit]

The Midpoint method is mentioned in the graph, but there is no mention of it in the article. Shouldn't some mention of it be made? - GeiwTeol 08:15, 19 March 2008 (UTC)

Iterative method[edit]

Link to description of algorithm: iterative_method.htm Jeffareid (talk) 06:46, 20 July 2009 (UTC)

That's not about computing integrals but computing the solution of a differential equation; see Numerical ordinary differential equations. The predictor is forward Euler and the corrector is the trapezoidal rule, so I'd call it an Euler-trapezoidal method, iterated till convergence. It's the first one in a series of predictor-corrector methods called Adams-Bashforth-Moulton or AB/AM because they use an Adams-Bashforth method as predictor and an Adams-Moulton method as corrector (see linear multistep method). -- Jitse Niesen (talk) 10:46, 20 July 2009 (UTC)
Copied from Talk:Numerical_integration Jeffareid (talk) 19:49, 20 July 2009 (UTC)
  • iterated till convergence - I haven't seen iterated till convergence mentioned in the related wiki articles. Considering the speeds of current PCs, it's probably a reasonable approach. Jeffareid (talk) 19:56, 20 July 2009 (UTC)
  • iterative trapezoidal algorithm restated here (using y instead of f for convergence test):

first calculate an initial guess value (forward-Euler predictor):

y_{n+1}^{(0)} = y_n + h f(t_n, y_n)

next calculate successive guesses (trapezoidal corrector):

y_{n+1}^{(k+1)} = y_n + (h/2) [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}^{(k)}) ]

...

until the guesses converge to within some error tolerance e:

| y_{n+1}^{(k+1)} − y_{n+1}^{(k)} | < e

Once convergence is reached, then use the final guess as the next step:

y_{n+1} = y_{n+1}^{(k+1)}

If the guesses don't converge within some number of iterations, reduce h and repeat the step. To optimize this, if the iterations converge too soon, such as within 4 iterations, then increase h. If I remember correctly, the fixed-point iteration converges linearly, with a contraction factor of roughly (h/2) ∂f/∂y per iteration.

Jeffareid (talk) 20:26, 20 July 2009 (UTC)
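The procedure outlined above can be sketched as runnable code. This is a minimal illustration, assuming a scalar ODE y' = f(t, y); the function name `trapezoid_step` and the tolerance/iteration-limit parameters are made up for the example:

```python
def trapezoid_step(f, t, y, h, tol=1e-10, max_iter=50):
    """One step of the iterated Euler-trapezoidal (predictor-corrector)
    method.  Returns (y_next, h_used); halves h and retries whenever the
    fixed-point iteration fails to converge within max_iter iterations."""
    while True:
        guess = y + h * f(t, y)                  # forward-Euler predictor
        for _ in range(max_iter):
            # trapezoidal corrector, iterated till convergence
            new = y + 0.5 * h * (f(t, y) + f(t + h, guess))
            if abs(new - guess) < tol:           # convergence test on y
                return new, h
            guess = new
        h *= 0.5                                 # no convergence: reduce h

# example: one step of y' = -y, y(0) = 1, h = 0.1
y1, h1 = trapezoid_step(lambda t, y: -y, 0.0, 1.0, 0.1)
print(y1, h1)
```

For y' = -y the corrector iteration has contraction factor h/2 = 0.05, so it converges in a handful of iterations and the step size is never reduced; the result 0.904762 is close to the exact e^{-0.1} ≈ 0.904837.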

Page title[edit]

There is no such thing as a numerical equation -- ODE or otherwise. This page should be named Numerical solutions to... or Numerical methods of... -- no? jheiv talk contribs 21:44, 15 December 2011 (UTC)

Solution to second order bvps[edit]

Greetings, I'm sorry but I don't quite know LaTeX well enough to put a space between the and
if you do know please do correct me. — Preceding unsigned comment added by Fuse809 (talkcontribs) 15:28, 20 January 2012 (UTC)

Assessment comment[edit]

The comment(s) below were originally left at Talk:Numerical methods for ordinary differential equations/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

Needs some more background and illustrations. List in history section should become prose. -- Jitse Niesen (talk) 11:32, 28 April 2007 (UTC)

Substituted at 18:31, 17 July 2016 (UTC)