On looking at this article again, I wondered whether it would be improved by:
making it a joint article, so that the lead explicitly mentions "sum of squares due to pure error", about which the article already says a lot. It would take a little thought to find a parallel article title, such as "pure error sum of squares", that could be redirected to this article;
adding more discussion of "replication" (right word?) of design points: firstly, in the context of experimental design, where replicated observations (at some or all points) might be included specifically so that the test of lack-of-fit can be carried out, or so that an assessment of the homogeneity of observation error can be made separately from modelling error; and secondly, to show what happens to the maths when there is no replication.
The page (mathematical details section) makes the following definitions:
However, the subscript makes no sense. "i" runs from 1 to n, so is j supposed to run from 1 to n^2, or what? I assume the _i subscript is erroneous. Flies 1 (talk) 16:38, 19 July 2010 (UTC)
No, it's not erroneous; it just means the number of values of j depends on the value of i. For example, suppose n = 3. Then n1, n2, and n3 could have three different values. Michael Hardy (talk) 16:43, 19 July 2010 (UTC)
If n = 3 then n1 = 31, which is clearly nonsense. --Yecril (talk) 23:20, 1 March 2013 (UTC)
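To spell out Michael Hardy's point numerically: j runs from 1 to n_i, so the number of replicates can differ from one design point to the next, and the double sum is over a "ragged" array. A minimal sketch of the pure-error sum of squares (the data values here are invented purely for illustration):

```python
# Pure-error sum of squares with unequal replication counts.
# Design point i has n_i replicate responses y[i][j], j = 1..n_i.
# Here n = 3 design points with n_1 = 2, n_2 = 3, n_3 = 1.
y = [
    [2.1, 1.9],        # replicates at design point 1 (n_1 = 2)
    [3.0, 3.2, 2.8],   # replicates at design point 2 (n_2 = 3)
    [5.0],             # single observation at design point 3 (n_3 = 1)
]

# SS_pure_error = sum_i sum_j (y_ij - ybar_i)^2,
# where ybar_i is the mean of the replicates at design point i.
ss_pe = 0.0
for replicates in y:
    ybar = sum(replicates) / len(replicates)
    ss_pe += sum((v - ybar) ** 2 for v in replicates)

print(round(ss_pe, 4))  # 0.02 + 0.08 + 0.0
```

Note that a design point with only one observation (n_i = 1) contributes zero, which is exactly why pure error is only estimable when at least one point is replicated.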
I have read several books on econometrics and regression analysis and have never come across the phrase "pure error". Books usually talk about the total sum of squares (variation of actual Y values around the unconditional Y mean), the explained sum of squares (variation of estimated Y values around the unconditional Y mean), and the unexplained sum of squares (variation of actual Y values around the regression line, i.e. the conditional Y mean). I think you should change the wording to make it compatible with the rest of the literature, or at least clarify the point. — Preceding unsigned comment added by 188.8.131.52 (talk) 05:49, 30 September 2013 (UTC)
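To connect the two vocabularies: when design points are replicated, the textbook "unexplained" (residual) sum of squares splits further into a lack-of-fit part plus a pure-error part. A small sketch with a hand-rolled least-squares line (the data are invented for illustration):

```python
from collections import defaultdict

# (x, y) pairs; the x-values repeat, so pure error is estimable.
data = [(1.0, 2.1), (1.0, 1.9), (2.0, 3.0), (2.0, 3.4), (3.0, 5.0), (3.0, 4.8)]

n = len(data)
xbar = sum(x for x, _ in data) / n
ybar = sum(v for _, v in data) / n
# Ordinary least-squares slope and intercept.
b1 = (sum((x - xbar) * (v - ybar) for x, v in data)
      / sum((x - xbar) ** 2 for x, _ in data))
b0 = ybar - b1 * xbar

# Residual ("unexplained") sum of squares about the fitted line.
rss = sum((v - (b0 + b1 * x)) ** 2 for x, v in data)

# Pure-error SS: variation of replicates about their group means.
groups = defaultdict(list)
for x, v in data:
    groups[x].append(v)
ss_pe = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
            for g in groups.values())

# Lack-of-fit SS is what remains of the residual SS.
ss_lf = rss - ss_pe
print(round(rss, 4), round(ss_pe, 4), round(ss_lf, 4))
```

So the "pure error" terminology does not contradict the total/explained/unexplained decomposition; it refines the unexplained part, and the lack-of-fit F-test compares ss_lf against ss_pe.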