Wikipedia:Reference desk/Archives/Mathematics/2009 December 11

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 11

weighted mean and errors

I have a sample of data points x_i, each with an associated error Δx_i. I would now like to average the x_i, weighting the data by their Δx_i (i.e. the data with the smallest error contributes the most to the average). Currently I have my weighting factors as w_i = 1 − abs(Δx_i / (sum of the Δx_i)).

Now, is this
a) Something that would actually give me a meaningful result? (The w_i will sum to 1, so I think I can just take the sum of w_i·x_i.)
b) Is there a better way of doing this kind of problem? (Surely someone has done a theory or two on this problem.)
c) If I wanted to calculate the error in the average, could I do something with error propagation/adding in quadrature (weighted somehow), or would I be confined to only using the standard deviation as a measure of the error?
Thanks for any help--137.205.21.58 (talk) 14:47, 11 December 2009 (UTC)
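For concreteness, a minimal Python sketch of the weighting scheme as described above, with the average normalised explicitly by the sum of the weights; the data arrays are made-up placeholders:

```python
import numpy as np

# Made-up placeholder data: values x_i with quoted errors dx_i.
x = np.array([10.2, 9.8, 10.5, 10.1])
dx = np.array([0.3, 0.1, 0.5, 0.2])

# Weights as proposed: w_i = 1 - |dx_i| / sum|dx_j|  (smaller error -> larger weight).
# For n points these weights sum to n - 1, so the average below is
# normalised by the sum of the weights.
w = 1.0 - np.abs(dx) / np.abs(dx).sum()
weighted_mean = (w * x).sum() / w.sum()
print(weighted_mean)
```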

Just for readability, $w_i = 1 - \left|\frac{\Delta x_i}{\sum_j \Delta x_j}\right|$, right? --PST 14:57, 11 December 2009 (UTC)
Yes, thanks; I should get round to learning LaTeX commands at some point.--137.205.124.72 (talk) 15:15, 11 December 2009 (UTC)

If you know the variances of the errors, then the standard thing is to make the weights proportional to the reciprocals of the variances. That minimizes the mean squared error of estimation. But in just what sense you "know the errors" is not clear from your posting. Michael Hardy (talk) 21:28, 11 December 2009 (UTC)
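For illustration, a minimal Python sketch of that inverse-variance weighting, under the assumption that the quoted errors Δx_i can be treated as standard deviations (so the variances are Δx_i²); the data arrays are made-up placeholders:

```python
import numpy as np

# Made-up placeholder data: values x_i with quoted 1-sigma errors dx_i.
x = np.array([10.2, 9.8, 10.5, 10.1])
dx = np.array([0.3, 0.1, 0.5, 0.2])

# Inverse-variance weights: w_i = 1 / dx_i^2.
w = 1.0 / dx**2

# Weighted mean, and its standard error from propagating the dx_i in
# quadrature (for independent measurements, Var(mean) = 1 / sum(w_i)).
mean = (w * x).sum() / w.sum()
err = np.sqrt(1.0 / w.sum())
print(mean, err)
```

The err value here corresponds to the "error propagation/adding in quadrature" asked about in (c).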

Original OP here: the errors for each data point are a combination of the systematic and statistical errors associated with measuring the value. I'm using someone else's data set, and they've quoted x with an error Δx.--86.27.192.94 (talk) 23:03, 11 December 2009 (UTC)
Our meta-analysis article mentions a few relevant techniques, including the "inverse variance method" Michael Hardy discussed above. -- Avenue (talk) 08:31, 12 December 2009 (UTC)
Are the data points all supposed to approximate the same value? The reciprocal of the variances is fairly reasonable in that case. Or is it more like a problem I was looking at, where the points themselves are all different and I had to give some average value to their aggregate? With that I used the variance of the points plus their intrinsic variance to weight them, which probably didn't distinguish the better points as much as theoretically one should, but it worked well in practice. Dmcq (talk) 21:51, 13 December 2009 (UTC)
Original OP here. Yes, the data should be approximately the same. Just for clarity, working this out using the reciprocal of the variances:
Calculate the average of the errors.
Calculate the sum of the variances.
Normalise.
OP here. I think what I just wrote wasn't worth the electrons it's displayed on. Thinking about this, doesn't using the reciprocal variance mean we're giving more weight to the points closest to the mean, rather than to the points with the smallest errors, which, if I'm weighting by the errors (small error good, big error bad), is what I want? —Preceding unsigned comment added by 86.27.192.94 (talk) 19:33, 14 December 2009 (UTC)
What was being said is to use the following formulae, with weights equal to the reciprocals of the variances, $w_i = 1/(\Delta x_i)^2$:
$\bar{x} = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad \operatorname{Var}(\bar{x}) = \frac{1}{\sum_i w_i}.$
Personally I think this tends to accentuate the ones with a small variance too much; perhaps there is some way of taking into account that the variance has its own variation, but I've not thought about that too much. Dmcq (talk) 00:59, 15 December 2009 (UTC)

Non-linear ODE question

Hi guys, I've been trying to find an analytical solution to $\frac{d^3F}{d\psi^3} - 2F\frac{dF}{d\psi} = 0$ for about half a day now, to no avail. What's screwing with me is the non-linear term in there; even if I use the fact that $2F\frac{dF}{d\psi} = \frac{d(F^2)}{d\psi}$ to try to simplify it to a linear ODE, I get a nasty elliptic integral that I can't evaluate. Any ideas how to tackle this one? Titoxd(?!? - cool stuff) 17:25, 11 December 2009 (UTC)

There's a derivative on the last F, right? ~~ Dr Dec (Talk) ~~ 17:29, 11 December 2009 (UTC)
Yep. Titoxd(?!? - cool stuff) 17:31, 11 December 2009 (UTC)
As you said: $2F\frac{dF}{d\psi} = \frac{d(F^2)}{d\psi}$, so your equation becomes $\frac{d^3F}{d\psi^3} - \frac{d(F^2)}{d\psi} = 0$. It follows that $\frac{d^2F}{d\psi^2} - F^2 = k$ for some constant k. You should be able to solve this in terms of the Weierstrass ℘-function. ~~ Dr Dec (Talk) ~~ 17:41, 11 December 2009 (UTC)
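For anyone who wants to check that reduction, a minimal symbolic sketch in Python with sympy (just an illustration of the step above):

```python
import sympy as sp

psi = sp.symbols('psi')
F = sp.Function('F')(psi)

# Left-hand side of the original equation: F''' - 2*F*F'.
ode_lhs = F.diff(psi, 3) - 2*F*F.diff(psi)

# Claimed first integral: F'' - F^2.  Its psi-derivative equals the ODE's
# left-hand side, so F'' - F^2 is constant (= k) along any solution.
first_integral = F.diff(psi, 2) - F**2
print(sp.simplify(first_integral.diff(psi) - ode_lhs))  # prints 0
```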
That will make evaluating the boundary conditions painful... thanks, though. Titoxd(?!? - cool stuff) 19:20, 11 December 2009 (UTC)
What would you need to do? The solution is quite nice. Given arbitrary constants k (from my last post), c_1 and c_2, we have
$F(\psi) = 6\,\wp\!\left(\psi - c_1;\, -\tfrac{k}{3},\, c_2\right).$
~~ Dr Dec (Talk) ~~ 21:53, 11 December 2009 (UTC)
Also note the particular solutions $F(\psi) = 12(\psi - c_1)^{-2}$ (I guess they may be limit cases of your general formula). --pma (talk) 11:01, 12 December 2009 (UTC)
More or less. The particular solution is $F(\psi) = 6(\psi - c_1)^{-2}$, and this corresponds to k = c_2 = 0. ~~ Dr Dec (Talk) ~~ 00:49, 13 December 2009 (UTC)
oh it's 6, thanks --pma (talk) 08:50, 13 December 2009 (UTC)
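As a quick check of the corrected particular solution, a minimal sympy sketch confirming that F(ψ) = 6(ψ − c_1)^(−2) satisfies the original equation and gives k = 0 in the first integral:

```python
import sympy as sp

psi, c1 = sp.symbols('psi c1')

# Corrected particular solution: F(psi) = 6*(psi - c1)**(-2).
F = 6 / (psi - c1)**2

# It satisfies the original equation F''' - 2*F*F' = 0 ...
print(sp.simplify(F.diff(psi, 3) - 2*F*F.diff(psi)))  # prints 0

# ... and gives k = 0 in the first integral F'' - F^2 = k.
print(sp.simplify(F.diff(psi, 2) - F**2))  # prints 0
```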