Talk:Propagation of uncertainty/Archive 1
i or j
Is it necessary to use both i and j as indices for the summation in the general formulae? It appears to me that i appears only in the mathematics while j appears only in the English text. Is that right? If not, the reason for using both could be explained more clearly.
Thanks Roggg 09:35, 20 June 2006 (UTC)
Geometric mean
Example application: the geometric mean? Charles Matthews 16:33, 12 May 2004 (UTC)
- From the article (since May 2004!): "the relative error ... is simply the geometric mean of the two relative errors of the measured variables" -- It's not the geometric mean. If it were, it would be the product of the two relative errors in the radical, not the sum of the squares. I'll fix this section. --Spiffy sperry 21:47, 5 January 2006 (UTC)
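Spiffy sperry's correction can be checked numerically. The sketch below (with made-up example values, not numbers from the article) contrasts the quadrature sum of two relative errors with their geometric mean, showing they are different quantities:

```python
import math

# For X = A * B, the relative standard error of X combines the relative
# errors in quadrature: sqrt((sA/A)^2 + (sB/B)^2).  The old article text
# called this a "geometric mean", which would instead be
# sqrt((sA/A) * (sB/B)).  All numbers below are illustrative.
A, sA = 10.0, 0.2
B, sB = 5.0, 0.1

rel_quadrature = math.sqrt((sA / A) ** 2 + (sB / B) ** 2)
rel_geometric_mean = math.sqrt((sA / A) * (sB / B))

print(round(rel_quadrature, 4))      # 0.0283
print(round(rel_geometric_mean, 4))  # 0.02 -- a different (smaller) number
```

Here both relative errors happen to equal 0.02, so the geometric mean is 0.02 while the quadrature sum is 0.02·√2 ≈ 0.0283; the two agree only in trivial cases.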
Delta?
In my experience, the lower-case delta is used for error, while the upper-case delta (the one currently used in the article) is used for the change in a variable. Is there a reason the upper-case delta is used in the article? --LostLeviathan 02:01, 20 Oct 2004 (UTC)
Missing Definition of Δxj
A link exists under the word "error" before the first expression of Δxj in the article, but this link doesn't take one to a definition of this expression. The article can be improved if this expression is properly defined. — Preceding unsigned comment added by 65.93.221.131 (talk • contribs) 4 October 2005 (UTC)
Formulas
I think that the formula given in this article should be credited to Kline-McClintock. —lindejos
First, I'd like to comment that this article looks like Klingonese to the average user, and it should be translated into English.
Anyway, I was looking at the formulas, and I saw this claim: for X = A ± B, (ΔX)² = (ΔA)² + (ΔB)², which I believe is false.
As I see it, if A has error ΔA then it means A's value could be anywhere between A-ΔA and A+ΔA. It follows that A±B's value could be anywhere between A±B-ΔA-ΔB and A±B+ΔA+ΔB; in other words, ΔX=ΔA+ΔB.
If I am wrong, please explain why. Am I referring to a different kind of error, by any chance?
aditsu 21:41, 22 February 2006 (UTC)
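aditsu's reading corresponds to worst-case (interval) arithmetic, under which absolute errors add linearly. A minimal sketch of that model (the numbers are illustrative, not from the article):

```python
# Worst-case (interval) error model: if A lies in [A - dA, A + dA] and
# B lies in [B - dB, B + dB], then A + B lies in an interval of
# half-width dA + dB, so absolute errors add linearly, not in
# quadrature.  All numbers below are illustrative.
def add_intervals(a, da, b, db):
    """Add [a-da, a+da] and [b-db, b+db]; return (midpoint, half_width)."""
    lo = (a - da) + (b - db)
    hi = (a + da) + (b + db)
    return (lo + hi) / 2, (hi - lo) / 2

x, dx = add_intervals(10.0, 0.3, 4.0, 0.2)
print(round(x, 10), round(dx, 10))  # 14.0 0.5 -> dX = dA + dB
```

So under the interval model ΔX = ΔA + ΔB, exactly as aditsu says; the quadrature formula instead belongs to the statistical (Gaussian) model discussed in the replies below.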
- As the document I added to External links ([1]) explains, we are looking at ΔX as a vector with the variables as axes, so the error is the length of the vector (the distance from the point where there is no error).
- It still seems odd to me, because this gives the distance in the "variable plane" and not in the "function plane". But the equation is correct. —Yoshigev 22:14, 23 March 2006 (UTC)
- Now I found another explanation: We assume that the variables have Gaussian distributions. The sum of two Gaussians is a new Gaussian whose width equals the quadrature sum of the widths of the originals. (see [2]) —Yoshigev 22:27, 23 March 2006 (UTC)
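That claim about adding Gaussians is easy to check by simulation. This sketch (with illustrative widths, not values from the article) compares the standard deviation of the sum of two independent normal samples against the quadrature sum of their individual widths:

```python
import math
import random
import statistics

# If A and B are independent and normally distributed with widths sA
# and sB, the width of A + B should be the quadrature sum
# sqrt(sA**2 + sB**2), not the linear sum sA + sB.
# The widths below are illustrative.
random.seed(0)
sA, sB = 0.3, 0.4

sums = [random.gauss(0.0, sA) + random.gauss(0.0, sB) for _ in range(200_000)]
sd_mc = statistics.pstdev(sums)

print(round(math.hypot(sA, sB), 6))  # 0.5, the quadrature sum
print(round(sd_mc, 2))               # 0.5 to two decimals, not 0.7
```

The simulated width matches √(0.3² + 0.4²) = 0.5 rather than the linear sum 0.7, consistent with Sum of normally distributed random variables.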
Article title
The current title "Propagation of errors resulting from algebraic manipulations" seems to me not so accurate. First, the errors don't result from the algebraic manipulations; they "propagate" through them. Second, I think that the article describes the propagation of uncertainties. And, third, the title is too long.
So I suggest moving this article to "Propagation of uncertainty". Please make comments... —Yoshigev 23:39, 23 March 2006 (UTC)
- Seems okay. A problem with the article is that the notation x + Δx is never explained. From your remarks, it seems to mean that the true value is normally distributed with mean x and standard deviation Δx. This is one popular error model, leading to the formula (Δ(x+y))² = (Δx)² + (Δy)².
- Another one is that x + Δx means that the true value of x is in the interval [x - Δx, x + Δx]. This interpretation leads to the formula Δ(x+y) = Δx + Δy, which aditsu mentions above.
- I think the article should make clear which model is used. Could you please confirm that you have the first one in mind? -- Jitse Niesen (talk) 00:58, 24 March 2006 (UTC)
- Not exactly. I have in mind that for the measured value x, the true value might be anywhere in [x - Δx, x + Δx], like your second interpretation. But the true value is more likely to be near x, so we get a normal distribution of the probable true value around the measured value x. Then 2Δx is the width of that distribution (I'm not sure, but I think the width is defined by the standard deviation), and when we add two of them we use (Δx)² + (Δy)², as explained in Sum of normally distributed random variables.
- I will try to make it clearer in the article. —Yoshigev 17:45, 26 March 2006 (UTC)
As you can see, I rewrote the header and renamed the article. —Yoshigev 17:44, 27 March 2006 (UTC)
This article was a disgrace to humanity
First the article defined ΔA as the absolute error of A, but then the example formulas section went ahead and defined error propagation with respect to ΔA as the standard deviation of A. Even then, the examples had constants that weren't even considered in the error propagation. Then, to add insult to injury, I found a journal article which shows that at least two of the given definitions were only approximations, so I had to redo the products formula and add notes to the remaining formulas explaining that they are only approximations, with an example of how they are only approximations. (Doesn't it seem a little bit crazy to settle for approximations to the errors when you can have an exact analysis of the error propagation? To me it just seems like approximating an approximation.) So now there are TWO columns: ONE for ABSOLUTE ERRORS and ANOTHER for STANDARD DEVIATIONS! Sheesh! It's not that hard to comprehend that absolute errors and standard deviations are NOT EQUIVALENT! σA is the standard deviation of A, and ΔA is the absolute error, NOT the standard deviation, of A. --ANONYMOUS COWARD0xC0DE 04:02, 21 April 2007 (UTC)
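The point that the usual product rule is only an approximation can be illustrated numerically. For independent A and B, the exact variance of X = A·B is B²σA² + A²σB² + σA²σB², while the familiar first-order rule drops the last term. A sketch with deliberately large (illustrative) relative errors, where the difference becomes visible:

```python
import math
import random
import statistics

# For independent A ~ N(2, 0.8^2) and B ~ N(3, 0.9^2), compare the
# first-order product rule with the exact variance of X = A*B.
# All numbers here are illustrative, not taken from the article.
random.seed(1)
A, sA = 2.0, 0.8
B, sB = 3.0, 0.9

products = [random.gauss(A, sA) * random.gauss(B, sB) for _ in range(200_000)]
sd_mc = statistics.pstdev(products)

approx = math.sqrt(B**2 * sA**2 + A**2 * sB**2)                 # first-order rule
exact = math.sqrt(B**2 * sA**2 + A**2 * sB**2 + sA**2 * sB**2)  # exact (independent A, B)

print(round(approx, 3))  # 3.0
print(round(exact, 3))   # 3.085
print(round(sd_mc, 3))   # near the exact value, not the approximation
```

When the relative errors are small the dropped cross term σA²σB² is negligible, which is why the first-order rule works well in typical laboratory practice; it fails only when relative errors are large, as in this example.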