Wikipedia:Reference desk/Archives/Mathematics/2009 May 16

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.



May 16

Correcting linear least squares coefficients for small samples

I performed a linear regression on a large population and got y = a0*x + b0. Then I performed a linear regression on a sub-sample of (similar) members and got another equation: y = a1*x + b1. How do I correct the sub-sample's regression to take into account that the size of the sub-sample is very small compared to the total population? Right now I have a1' = (a1 - a0)/sqrt(#members in sub-sample) + a0 and b1' = (b1 - b0)/sqrt(#members in sub-sample) + b0. Is there a more accurate adjustment method? Especially one that uses variance as well? 70.171.0.134 (talk) 04:03, 16 May 2009 (UTC) Mathnoob
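For concreteness, here is the question's own adjustment written out as a short sketch; the coefficient values and the sub-sample size are made up for illustration, and the helper name is hypothetical:

```python
# A minimal sketch of the adjustment described above, assuming
# (a0, b0) come from the full-population fit and (a1, b1) from
# the sub-sample fit.  All numbers here are invented.
import math

def adjust(coef_pop, coef_sub, n_sub):
    """Pull a sub-sample coefficient toward the population value,
    shrinking the difference by 1/sqrt(sub-sample size)."""
    return (coef_sub - coef_pop) / math.sqrt(n_sub) + coef_pop

a0, b0 = 2.0, 1.0      # population fit:  y = a0*x + b0
a1, b1 = 3.5, -0.5     # sub-sample fit:  y = a1*x + b1
n = 25                 # members in the sub-sample

a1_adj = adjust(a0, a1, n)   # 2.0 + 1.5/5 = 2.3
b1_adj = adjust(b0, b1, n)   # 1.0 - 1.5/5 = 0.7
print(a1_adj, b1_adj)
```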

I find your question very unclear, and I think I'm much better at seeing through opaque writing on this sort of thing than are most of those who post here, and I have a Ph.D. in statistics. So first we have to work on what it is you're trying to ask. Usually when you fit a line based on a small sample from a large population, you don't know the data for the whole population, but only for the small sample; in such cases, one bases estimates of properties of the whole population upon the data from the small sample. But you seem to say that you do know the whole population. What, then, is the purpose of the sample? If you're trying to quantify uncertainty about the population when all you have to go on is the small sample, there are standard ways of proceeding, including, e.g., a confidence interval for the slope of the fitted line. Is that sort of thing what you have in mind? Michael Hardy (talk) 00:50, 17 May 2009 (UTC)
Maybe I can simplify the problem. Suppose I have 1 million marbles, where each marble weighs 5 grams on average. Each blue marble weighs 50 grams on average and there are 1000 blue marbles, and each red marble weighs 1 gram on average and there are 100 red marbles in the set. And I want to compute the averages of blue/red marbles that are closest to their true averages. However, in my case, instead of estimating averages I want to estimate linear regression equations. 70.171.0.134 (talk) 02:24, 17 May 2009 (UTC) Mathnoob

You lost me with this sentence:

And I want to compute the averages of blue/red marbles that are closest to their true averages.

I have no idea what you mean by that. Michael Hardy (talk) 10:41, 18 May 2009 (UTC)

Let me try... *wipes dust off crystal ball*. I think the OP is trying to say that s/he has a large population K, of whose members some small fraction have a (boolean) property P, and that s/he's trying to estimate the correlation between the (continuous) properties x and y conditioned on P. (I'm not a statistician, so I apologize if my terminology is also a bit off.) Furthermore, the OP apparently would like a more stable estimator than simply applying linear regression to the subpopulation J = {x ∈ K: P(x)}, which would seem to make sense if that subpopulation is very small. In particular, I suspect s/he might like to start by testing the null hypothesis that the linear relationship between x and y is independent of P.
From a Bayesian viewpoint, what the OP seems to want is to use the posterior distribution of regression coefficients for the general population to define a prior distribution for the corresponding coefficients in the subpopulation, but adjusted in some way so as to account for their degree of belief in the similarity of the populations. In general, I suspect the problem isn't entirely well posed in a mathematical sense, yet such problems do seem common enough in practice to merit at least some consideration. I suppose that, for all I know, there might be some standard statistical formula for this, but I kind of doubt it. —Ilmari Karonen (talk) 15:25, 18 May 2009 (UTC)
I guess some (very special) kind of maximum likelihood estimation is called for. Pallida Mors 04:41, 19 May 2009 (UTC)
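For what it's worth, here is a minimal sketch of the Bayesian reading Ilmari Karonen describes, treating the population fit (a0, b0) as the center of a Gaussian prior on the sub-sample's (slope, intercept). The prior strength lam, the unit noise variance, and all names are assumptions for illustration, not anything from the thread:

```python
# A rough sketch: ridge-style regression of the sub-sample data,
# shrunk toward the population coefficients.  With noise variance
# fixed at 1, the posterior mean under a N(beta_prior, I/lam)
# prior is (X'X + lam I)^{-1} (X'y + lam beta_prior).
import numpy as np

def shrunken_fit(x, y, beta_prior, lam):
    """Posterior-mean (slope, intercept) for y ~ a*x + b with a
    Gaussian prior centered at beta_prior and precision lam."""
    X = np.column_stack([x, np.ones_like(x)])   # design matrix
    A = X.T @ X + lam * np.eye(2)
    rhs = X.T @ y + lam * np.asarray(beta_prior)
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
x = rng.normal(size=12)                  # a very small sub-sample
y = 3.5 * x - 0.5 + rng.normal(size=12)
print(shrunken_fit(x, y, beta_prior=(2.0, 1.0), lam=5.0))
```

Larger lam keeps the estimate closer to the population line, expressing stronger belief that the populations are similar; lam → 0 recovers ordinary least squares on the sub-sample alone.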

Showing an element is irreducible

Hi, I am trying to show $\sqrt{10}$ is irreducible in $\mathbb{Z}[\sqrt{10}]$. I am trying the obvious method: Assume

 (1) $\sqrt{10} = (a + b\sqrt{10})(c + d\sqrt{10})$,

which gives

 (2) $ac + 10bd = 0$

and

 (3) $ad + bc = 1$.

I can show little bits. If a = 0, then it must be that d = 0 and c is not zero by (1), and you get $\sqrt{10} = (\pm\sqrt{10})(\pm 1)$. Similarly, if b = 0, $\sqrt{10} = (\pm 1)(\pm\sqrt{10})$. The last case is when a and b are both nonzero. By the reasoning above, we must have c and d nonzero also, for if one were 0 it would imply a = 0 or b = 0.

But here I am stuck. I thought about comparing parity (right word for even/odd, right?). That is, solve (2) above for a to get

 (4) $a = -\frac{10bd}{c}$

then plug this into (3) to get

 (5) $-\frac{10bd^2}{c} + bc = 1$.

Then, think about even/oddness, so for example assume c is odd. Then a must be even by equation (2). By (5), $\frac{10bd^2}{c}$ is even if c is odd (it equals $-ad$, and a is even). Since the equation adds up to 1, the bc part must be odd, so b is odd. Anyway, this is what I'm trying here and I'm not getting anywhere.

I thought about using the norm also, but I don't see how that helps, and it's not introduced until the next chapter of the book, so I don't think that is intended. I would be interested in knowing how to do this problem by any method, even if it is not "allowable" for this problem. Thanks StatisticsMan (talk) 14:47, 16 May 2009 (UTC)

Remember that an element that's irreducible is still divisible by units. For example, $3 + \sqrt{10}$ is a unit in the ring $\mathbb{Z}[\sqrt{10}]$, since
$(3 + \sqrt{10})(-3 + \sqrt{10}) = 10 - 9 = 1,$
and therefore:
$\sqrt{10} = (3 + \sqrt{10})(-3 + \sqrt{10})\sqrt{10} = (3 + \sqrt{10})(10 - 3\sqrt{10}).$
The simplest way to proceed is using the norm. Define
$N(a + b\sqrt{10}) = |a^2 - 10b^2|.$
Then $N(xy) = N(x)\,N(y)$ for any $x, y \in \mathbb{Z}[\sqrt{10}]$. Moreover, $x$ is a unit if and only if $N(x) = 1$. The norm of $\sqrt{10}$ is 10, so any two non-unit divisors would have to have norms 2 and 5. But norm 2 is impossible (to see this, think about $a^2$ in mod 10), and therefore $\sqrt{10}$ is irreducible. Jim (talk) 15:56, 16 May 2009 (UTC)
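To spell out the parenthetical "think about $a^2$ in mod 10" (an elaboration, not part of the original reply): since $a^2 - 10b^2 \equiv a^2 \pmod{10}$, a norm of 2 would force

```latex
\[
  a^2 \equiv \pm 2 \pmod{10},
\]
% but the squares modulo 10 are only
\[
  \{0^2, 1^2, \dots, 9^2\} \bmod 10 = \{0, 1, 4, 5, 6, 9\},
\]
% which contains neither 2 nor 8, so no element of
% $\mathbb{Z}[\sqrt{10}]$ has norm 2.
```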
Another way that is slower but doesn't use norms is with StatisticsMan's eqn (3): ad + bc = 1. So there's a lot of relative primeness. You get c = kb and a = md with km = -10. Then $(a + b\sqrt{10})(c + d\sqrt{10})$ is $(md + b\sqrt{10})(kb + d\sqrt{10}) = \sqrt{10}$, or $md^2 + kb^2 = 1 = kb^2 - (10/k)d^2$. But then exactly one of the two factors $a + b\sqrt{10}$, $c + d\sqrt{10}$ has an inverse you can construct, so it's a unit. I think (hope) this works. Best wishes, Rich (talk) 23:01, 16 May 2009 (UTC)

Proof that n orthogonal vectors form a basis for $\mathbb{R}^n$?

How can one show that n orthogonal vectors form a basis for $\mathbb{R}^n$?

I can see how to show that they're linearly independent, so I suppose what I'm really asking is: how can you show that if n vectors are linearly independent, they span $\mathbb{R}^n$? If anyone can find a proof online I'll be happy to just read that, you don't need to explain it yourself; I just can't actually find a proof anywhere :) Thanks a lot! 131.111.8.97 (talk) 15:08, 16 May 2009 (UTC)

Have you looked in elementary linear algebra books? The result you are looking for is closely tied to the proof that the dimension of $\mathbb{R}^n$ is n. There are several ways to prove it, depending on which path to the final result you want to take. I am certain there is a proof in David Lay's linear algebra textbook. — Carl (CBM · talk) 15:19, 16 May 2009 (UTC)
P.S. A simple proof, but not aesthetically pleasing, is to take your n orthogonal vectors and place them into the columns of a matrix. Because the vectors are independent, the matrix is invertible. Thus the linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^n$ obtained from the matrix is surjective, which means that the columns of the matrix span $\mathbb{R}^n$. — Carl (CBM · talk) 15:21, 16 May 2009 (UTC)
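As a footnote to Carl's P.S.: with orthogonal columns the invertibility step can be made concrete, assuming (as the question presumably intends) that the vectors are nonzero. A sketch of that step:

```latex
% Let v_1, ..., v_n be nonzero, pairwise orthogonal, and let
% A = [v_1 | ... | v_n] have them as columns.  Then
\[
  A^{\mathsf{T}} A = \operatorname{diag}\!\left(\|v_1\|^2, \dots, \|v_n\|^2\right),
\]
% so \det(A)^2 = \|v_1\|^2 \cdots \|v_n\|^2 > 0 and A is
% invertible.  Hence any w in R^n equals A(A^{-1}w), a linear
% combination of the columns, which therefore span R^n.
```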


Thanks very much Carl, I just figured out a proof for myself but it's good to have a couple under the belt! 131.111.8.97 (talk) 15:31, 16 May 2009 (UTC)