Is the assumption of equal variances fundamental to this? The article should say one way or another. —The preceding unsigned comment was added by 18.104.22.168 (talk).
Thanks for the comment. The assumption of equal variances is not required. I will add some information about this shortly. --Zvika 19:29, 27 September 2006 (UTC)
Looking forward to this addition. Also, what can be done if the variances are not known? After all, if θ is not known then probably σ² is not either. (Can you use some version of the sample variances, for instance?) Thanks! Eclecticos (talk) 05:26, 5 October 2008 (UTC)
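A sketch of one standard fix for unknown variance (the classical construction going back to James and Stein's original setting, stated here from memory rather than from the article): suppose an estimate S of the variance is available, independent of y, with S/σ² ~ χ²ₖ. Then replacing (m−2)σ² in the shrinkage factor by (m−2)S/(k+2) gives

\hat\theta_{JS} = \left(1 - \frac{m-2}{k+2} \cdot \frac{S}{\|y\|^2}\right) y,

which still dominates the ordinary estimator y whenever m ≥ 3.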
Thanks for the great article on the James-Stein estimator. I think you may also want to mention the connection to Empirical Bayes methods (e.g., as discussed by Efron and Morris in their paper "Stein's Estimation Rule and Its Competitors--An Empirical Bayes Approach"). Personally, I found the Empirical Bayes explanation provided some very useful intuition for the "magic" of this estimator. — Preceding unsigned comment added by 22.214.171.124 (talk) 17:54, 18 April 2007 (UTC)
Thanks for the compliment! Your suggestion sounds like a good idea. User:Billjefferys recently suggested a similar addition to the article Stein's example, but neither of us has gotten around to working on it yet. --Zvika 07:55, 19 April 2007 (UTC)
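For reference, a sketch of the Empirical Bayes argument suggested above (the standard Efron-Morris derivation, with shrinkage toward ν = 0 for simplicity): if the θᵢ are themselves drawn i.i.d. from N(0, τ²), then marginally y ~ N(0, (σ² + τ²)Iₘ) and the Bayes estimator is

\mathbb{E}[\theta \mid y] = \left(1 - \frac{\sigma^2}{\sigma^2 + \tau^2}\right) y.

Since ‖y‖²/(σ² + τ²) ~ χ²ₘ, we have E[(m−2)σ²/‖y‖²] = σ²/(σ² + τ²), so the James-Stein shrinkage factor is exactly an unbiased estimate of the unknown Bayes shrinkage factor, plugged into the Bayes rule.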
A confusing point about this article: y is described as "observations" of an m-dimensional vector θ, suggesting that it should be an m by n matrix, where n is the number of observations. However, this doesn't conform to the use of y in the formula for the James-Stein estimator, where y appears to be a single m-dimensional vector. (Is there some mean involved? Is ‖y‖² computed over all mn scalars?) Furthermore, can we still apply some version of the James-Stein technique in the case where we have more observations of some components of θ than of others, i.e., there is not a single n? Thanks for any clarification in the article. Eclecticos (talk) 05:19, 5 October 2008 (UTC)
The setting in the article describes a case where there is one observation per parameter. I have added a clarifying comment to this effect. In the situation you describe, in which several independent observations are given per parameter, the mean of these observations is a sufficient statistic for estimating θ, so that this setting can be reduced to the one in the article. --Zvika (talk) 05:48, 5 October 2008 (UTC)
The wording is still unclear, especially the sentence: "Suppose θ is an unknown parameter vector of length m, and let y be a vector of observations of θ (also of length m)". How can a vector of m-dimensional observations have length m? --StefanVanDerWalt (talk) 11:07, 1 February 2010 (UTC)
Indeed, it does not make sense. I'll give it a shot. 126.96.36.199 (talk) 19:49, 17 February 2010 (UTC)
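To make the reduction above concrete, here is a minimal numerical sketch (the function names, shrinkage toward ν = 0, and equal sample size n per parameter are illustrative assumptions, not the article's notation):

<syntaxhighlight lang="python">
import numpy as np

def james_stein(y, var):
    # Basic James-Stein estimate from a single m-vector y of observations
    # with known per-component variance var, shrinking toward 0.
    m = y.size
    shrink = 1.0 - (m - 2) * var / np.sum(y ** 2)
    return shrink * y

def james_stein_repeated(Y, sigma2):
    # Reduction described above: Y is an m-by-n matrix holding n independent
    # N(theta_i, sigma2) observations of each of the m parameters. The row
    # means are sufficient for theta and have variance sigma2 / n, so the
    # one-observation-per-parameter estimator applies to them directly.
    n = Y.shape[1]
    y_bar = Y.mean(axis=1)
    return james_stein(y_bar, sigma2 / n)
</syntaxhighlight>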
Is the formula using σ²/nᵢ applicable for different sample sizes in the groups? In Morris (1983), Parametric Empirical Bayes Inference: Theory and Applications, it is claimed that a more general version of Stein's estimator (which is also derived there) is needed if the variances Vᵢ are unequal, where Vᵢ denotes σᵢ²/nᵢ. So as I understand it, Stein's formula is only applicable for equal nᵢ as well.
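For readers looking this up: the more general rule in Morris (1983), sketched here from the paper's setup (μ and A denote the prior mean and variance of the θᵢ), shrinks each coordinate by its own factor rather than by a common one:

\hat\theta_i = (1 - \hat B_i)\, y_i + \hat B_i\, \hat\mu, \qquad B_i = \frac{V_i}{V_i + A},

with μ and A replaced by estimates computed from all the yᵢ. When the Vᵢ are all equal, the Bᵢ collapse to a single common factor and a Stein-type rule is recovered, which agrees with the reading above that the equal-nᵢ formula does not directly cover unequal sample sizes.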
The graph of the MSE functions needs a bit more precision: we are in the case where ν = 0, probably m = 10 and σ = 1, aren't we? (I thought that, in this case, for θ = 0, the MSE should equal 2; maybe the red curve represents the positive-part JS?) —Preceding unsigned comment added by 188.8.131.52 (talk) 15:40, 10 May 2011 (UTC)
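A quick Monte Carlo check of the numbers discussed here (assuming ν = 0, m = 10, σ² = 1; a sketch, not the code behind the article's figure):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
m, sigma2, reps = 10, 1.0, 200_000
theta = np.zeros(m)  # the case theta = nu = 0 discussed above

y = rng.normal(theta, np.sqrt(sigma2), size=(reps, m))
shrink = 1.0 - (m - 2) * sigma2 / np.sum(y ** 2, axis=1)

js = shrink[:, None] * y                            # plain James-Stein
js_plus = np.clip(shrink, 0.0, None)[:, None] * y   # positive-part JS

print(np.mean(np.sum((js - theta) ** 2, axis=1)))       # approx. 2.0
print(np.mean(np.sum((js_plus - theta) ** 2, axis=1)))  # below 2.0
</syntaxhighlight>

The first line should print approximately 2, matching the calculation above for the plain estimator, and the second should come out lower, consistent with the guess that the red curve is the positive-part version.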