Talk:Secant method

WikiProject Mathematics (Rated C-class, Mid-importance; Field: Analysis)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

Convergence Rate for Repeated Roots?[edit]

Is there a fixed order of convergence for repeated roots with the secant method? For instance, with the Newton-Raphson method, R = 2 (quadratic) for simple roots and R = 1 for repeated roots. For the secant method, R = 1.618... for simple roots, but what about repeated/complex roots? Computer Guru (talk) 21:40, 26 May 2008 (UTC)
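One way to explore the question numerically (a quick experiment of my own, not a sourced answer; the test functions and starting points are my choices):

```python
# Empirical look at the secant method's convergence on a simple root
# (f(x) = x^2 - 1, root 1) versus a double root (f(x) = (x - 1)^2, root 1).
import math

def secant_iterates(f, x0, x1, n):
    """Return [x0, x1, ..., x_{n+1}] produced by the secant recurrence."""
    xs = [x0, x1]
    for _ in range(n):
        f0, f1 = f(xs[-2]), f(xs[-1])
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    return xs

# Simple root: the order estimate log e_{k+1} / log e_k should approach
# the golden ratio 1.618...
errs1 = [abs(x - 1.0) for x in secant_iterates(lambda x: x * x - 1.0, 2.0, 1.5, 6)]
order_simple = math.log(errs1[-1]) / math.log(errs1[-2])

# Double root: convergence degrades to linear; the error RATIO
# e_{k+1} / e_k approaches 1/phi = 0.618... instead of tending to 0.
errs2 = [abs(x - 1.0) for x in secant_iterates(lambda x: (x - 1.0) ** 2, 2.0, 1.5, 8)]
ratio_double = errs2[-1] / errs2[-2]

print("simple root: estimated order", order_simple)
print("double root: error ratio", ratio_double)
```

The experiment suggests superlinear order only for the simple root, and linear convergence (with rate about 0.618) for the double root, but it is an illustration, not a proof.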

Incorrect image?[edit]

Either I'm going crazy, or the image on the page isn't correct. Shouldn't the second secant go from f(x0) to f(x1), rather than between f(x2) and f(x1) as it appears to be doing now? --VPeric (talk) 17:34, 18 March 2009 (UTC)

You are probably confusing the Secant method with the False position method. Tovrstra (talk) 12:27, 22 October 2009 (UTC)
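The difference is easy to see side by side. A sketch of my own (in Python, reusing the article's example function f(x) = x^3 - 2): false position applies the same interpolation formula but keeps a sign-changing bracket, which is why its secant lines can reuse an old endpoint, as in the image being discussed.

```python
# Secant step vs. false-position step: same interpolation, different
# bookkeeping.  Illustration only; the function choice is mine.
def secant_step(f, x0, x1):
    f0, f1 = f(x0), f(x1)
    return x1 - f1 * (x1 - x0) / (f1 - f0)

def false_position_step(f, a, b):
    c = secant_step(f, a, b)   # same interpolation formula...
    if f(a) * f(c) < 0:        # ...but the new interval must bracket the root
        return a, c            # keep the old endpoint a
    return c, b

f = lambda x: x * x * x - 2.0  # root at 2**(1/3)
print(secant_step(f, 1.0, 2.0))          # 8/7 = 1.1428...
print(false_position_step(f, 1.0, 2.0))  # keeps endpoint 2.0 in the bracket
```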

secant method iteration requires single function evaluation?[edit]

Assuming that evaluation of a function and evaluation of its derivative takes the same amount of time, the article writes that an iteration of the secant method is twice as quick as an iteration of Newton's method. Doesn't the secant method require evaluating the function at two points, though? —Preceding unsigned comment added by Intellec7 (talkcontribs) 04:58, 21 May 2011 (UTC)

To close this question: Yes and no. Two function values are used, but in every step, only one is new, the other is known from the step before.--LutzL (talk) 07:19, 16 January 2014 (UTC)
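The point can be made concrete by counting evaluations (a Python sketch of my own; `evals` is just an instrumentation counter, not part of the method):

```python
# A secant iteration needs only ONE new function evaluation per step,
# because f(x_{k-1}) is carried over from the previous step.
def secant(f, x0, x1, steps):
    evals = 0
    def g(x):
        nonlocal evals
        evals += 1
        return f(x)
    f0, f1 = g(x0), g(x1)      # two evaluations to start
    for _ in range(steps):
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, g(x1)     # reuse f1; only one new evaluation here
    return x1, evals

root, evals = secant(lambda x: x ** 3 - 2.0, 1.0, 2.0, 6)
print(root, evals)             # evals == 2 + steps, not 2 * steps
```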

Citation for order of convergence[edit]

The article states that the order of convergence is equal to the golden ratio. However, I seem to miss a direct citation of a reference where this is demonstrated. Mjpnijmeijer (talk) 16:53, 16 December 2011 (UTC)

Really Cool History Missing[edit]

3000 years of history and the basis of other algorithms? It seems like there must be a history section missing. Anyone know it? I checked Wikipedia and couldn't find anything. EAZen (talk) 23:25, 8 August 2012 (UTC)

The link offered for the proposition of 3,000 years of history is not very useful. It cites a talk for which there seems to be no publication. The same person coauthored a more recent paper on the same topic: "Origin and Evolution of the Secant Method in One Dimension" by Joanna M. Papakonstantinou and Richard A. Tapia, Amer. Math. Monthly, Vol. 120, No. 6 (June–July 2013), pp. 500–518. A preview is available at JSTOR, and its three-free-articles-at-a-time policy may apply. At any rate I think it would make an improved citation. Hardmath (talk) — Preceding undated comment added 20:51, 27 February 2015 (UTC)

Numerical Example?[edit]

It may be useful to some readers to see the secant method applied in a numerical example. An example (maybe similar to the one below) could help clarify the method and the iterative process. Thoughts?

A numerical example[edit]

Consider f(x) = x^3 - 2. We know the exact solution to be x = 2^(1/3) ≈ 1.2599. To approximate this solution using the secant method, let's let x0 = 1 and x1 = 2. Then f(x0) = f(1) = -1 and f(x1) = f(2) = 6. Now use the formula to calculate x2:

x2 = x1 - f(x1)(x1 - x0)/(f(x1) - f(x0)) = 2 - 6(2 - 1)/(6 - (-1)) = 8/7 ≈ 1.1429

In the next step use x1 and x2 together with f(x1) = 6 and f(x2) = -174/343 ≈ -0.5073 to calculate x3:

x3 = x2 - f(x2)(x2 - x1)/(f(x2) - f(x1)) = 8/7 - (-174/343)(8/7 - 2)/(-174/343 - 6) ≈ 1.2097

Likewise in the third iteration, with f(x3) ≈ -0.2298:

x4 = x3 - f(x3)(x3 - x2)/(f(x3) - f(x2)) ≈ 1.2097 - (-0.2298)(1.2097 - 1.1429)/(-0.2298 - (-0.5073)) ≈ 1.2650

Clearly, we are getting nearer to the exact solution 2^(1/3) ≈ 1.2599 with each iteration of the secant method. We can continue on in this manner until we have a solution correct to our desired level of precision.

Brmcvet (talk) 00:54, 11 September 2012 (UTC)
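For what it's worth, the hand computation above can be checked with a short script (Python, my own transcription of the example's recurrence with f(x) = x^3 - 2, x0 = 1, x1 = 2):

```python
# Reproduce the worked example's iterates x2, x3, x4, x5.
def f(x):
    return x ** 3 - 2.0

xs = [1.0, 2.0]                # x0 and x1 from the example
while len(xs) < 6:
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

for k, x in enumerate(xs):
    print(f"x{k} = {x:.6f}, f(x{k}) = {f(x):.6f}")
```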

I changed some math formatting to be consistent, but I'm not sure it's better; think about it.
You probably need one more iteration so that people see the pattern.
Mjmohio (talk) 19:21, 18 September 2012 (UTC)
I have added a few more iterations to the example in an attempt to make the method a bit clearer.
Brmcvet (talk) 23:16, 4 October 2012 (UTC)
Give the approximate numerical value for 2^(1/3) so we can see how good the approximation is so far.
Mjmohio (talk) 15:23, 7 October 2012 (UTC)
Perhaps a more compact representation would be easier to integrate into the article:
  // Code for CAS Magma (University of Sydney)
  RR:=RealField(10);                    // print precision for the table
  x0:=1.0; f0:=x0^3-2.0;
  x1:=2.0; 
  for k in [1..8] do
    f1:=x1^3-2.0; m:=(f1-f0)/(x1-x0);   // slope of the secant
    x2:=x1-f1/m;                        // secant step
    k,"& ",RR!x1,"& ",RR!f1,"& ",RR!m,"  \\\\";
    x0:=x1; f0:=f1; x1:=x2;             // shift iterates for the next step
  end for;
--LutzL (talk) 07:34, 16 January 2014 (UTC)

Advantages of secant method[edit]

  • It converges faster than linearly, so it is more rapidly convergent than the bisection method.
  • It does not require use of the derivative of the function, which is not available in a number of applications.
  • It requires only one function evaluation per iteration, as compared with Newton’s method, which requires two.

Disadvantages of secant method[edit]

  • It may not converge.
  • There is no guaranteed error bound for the computed iterates.
  • It is likely to have difficulty if f′(α) = 0. This means the x-axis is tangent to the graph of y = f(x) at x = α.
  • Newton’s method generalizes more easily to new methods for solving simultaneous systems of nonlinear equations.

la740411ohio (talk) 11:55, 18 September 2012 (UTC)
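The first bullet can be shown on a tiny example (my own illustration, not from a reference):

```python
# "It may not converge": with f(x) = x^2 - 1 and the symmetric starting
# points -2 and 2, the first secant is horizontal because f(-2) == f(2) == 3,
# so the method fails immediately with a division by zero.
def secant_step(f, x0, x1):
    f0, f1 = f(x0), f(x1)
    return x1 - f1 * (x1 - x0) / (f1 - f0)

f = lambda x: x * x - 1.0
diverged = False
try:
    secant_step(f, -2.0, 2.0)
except ZeroDivisionError:
    diverged = True
print("secant step undefined: f(x0) == f(x1)" if diverged else "ok")
```

Bisection on a bracketing interval such as [0, 2] (where f changes sign) would have no such problem, which is the contrast the first two disadvantages are drawing.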

Changed to a list.
Add in cross-links to mentioned things like Newton's Method and references as available.
Some of this duplicates Secant Method#Comparison with other root-finding methods; it would be better to improve that section than to add new sections.
Mjmohio (talk) 00:29, 19 September 2012 (UTC)