This article is substantially duplicated by a piece in an external publication. Please do not flag this article as a copyright violation of the following source:
Surhone, L. M., Non-linear Least Squares: Least Squares, Nonlinear Regression, Linear Least Squares, Errors and Residuals in Statistics, Gradient, Gauss–Newton Algorithm, Parabola. Betascript Publishing.
This concept is simple enough that it should be illustrated in such a way that a high school senior can understand it
The least squares method does not have to be expressed in such a complex, jargon-laden way. It is a simple concept that does not benefit from vocabulary imported from esoteric graduate mathematics. If you can already understand the method as it is presented here, then you don't need to be reading about it... —Preceding unsigned comment added by 184.108.40.206 (talk) 09:27, 2 January 2011 (UTC)
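For what it's worth, the core idea does fit in a few lines of high-school algebra. Here is a minimal sketch in Python (NumPy only; the data points are made up) of fitting a straight line y = mx + b by least squares:

```python
import numpy as np

# Made-up measurements; any numbers would do.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Least squares in one sentence: choose the slope m and intercept b
# that make the sum of squared vertical misses as small as possible.
# For a straight line the minimizing values have a closed form:
m = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b = y.mean() - m * x.mean()

residuals = y - (m * x + b)
print(m, b, (residuals ** 2).sum())  # slope, intercept, total squared miss
```

(The article's subject, non-linear least squares, is what you get when the model is not a straight line and no such closed form exists, so the minimum has to be found iteratively.)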
Say x_1, x_2, ..., x_n are measured values of a relation f(x, y) = 0, with corresponding y_1, y_2, ..., y_n. A curve that passes through all of these points is given by

$\prod_{i=1}^{n} \left( a_i^2 (x - x_i)^2 + b_i^2 (y - y_i)^2 \right) = 0,$

which implies a fantastic coefficient of determination and zero chi-squared. Can such a solution be considered valid for modelling purposes? Logically no, because only the measured points (x_i, y_i) satisfy the equation. But can someone think of a way to add variation to it? Once the known x_i and y_i are plugged in, predicting at a new abscissa x_new reduces to solving f(x_new, y) = 0, where x_new is known but the corresponding y is not. Can one solve this by equating the left-hand side (or its integral) to a measurement error e(x, y) and requiring (e'(x, y))^2 to be a minimum?

In other words, can one write

$\prod_{i=1}^{n} \left( (x - x_i)^2 + (y - y_i)^2 \right) = e^2,$

where e is the error, and then minimize that error? — Preceding unsigned comment added by Alokdube (talk • contribs) 13:53, 12 February 2014 (UTC)
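A minimal sketch in Python (NumPy; the data and the query point x_new are made up) of why the product construction degenerates: the product vanishes exactly at the measured points, but minimizing it in y at a new x just lands near the y_i of the nearest measured x_i, so nothing is interpolated between the data.

```python
import numpy as np

# Made-up measurements (roughly y = x^2); any points would do.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.1, 0.9, 4.2, 8.8])

def product_error(x, y):
    # e^2(x, y) = prod_i [(x - x_i)^2 + (y - y_i)^2]:
    # zero exactly at each measured point, positive everywhere else.
    return np.prod((x - xs) ** 2 + (y - ys) ** 2)

# Every measured point makes one factor, hence the whole product,
# vanish -- the "fantastic" zero chi-squared of the construction.
assert all(product_error(x, y) == 0.0 for x, y in zip(xs, ys))

# But "predicting" y at a new x by minimizing e^2 over y merely
# settles near the y_i of the closest measured x_i (here y = 0.9 at
# x_i = 1), not near the trend value (about 2 for y = x^2 at x = 1.4).
x_new = 1.4
grid = np.linspace(ys.min() - 1.0, ys.max() + 1.0, 10001)
y_hat = min(grid, key=lambda y: product_error(x_new, y))
print(y_hat)  # close to 0.9, nowhere near 2
```

By contrast, least squares (linear or non-linear) minimizes the sum, not the product, of the squared residuals of a parametric model, which is what lets the fitted curve say something between the measured points.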