WikiProject Statistics (Rated C-class, Mid-importance)
WikiProject Mathematics (Rated C-class, Mid-importance)
- 1 Merging with Bagging
- 2 Discussion of contents
- 3 Informal description changes
- 4 Deriving Confidence intervals from the bootstrap distribution
- 5 Adèr et al. (2008)?
- 6 Congrats
- 7 Newcomb's speed of light
- 8 Not so Simple
- 9 Two History Sections?
Merging with Bagging
(see later for discussion of contents)
Yes, this page should be merged. Gpeilon 15:01, 10 January 2007 (UTC)
I agree. Tolstoy the Cat 19:11, 22 January 2007 (UTC)
I agree. --Bikestats 13:08, 9 February 2007 (UTC)
I also agree. Tom Joseph 20:53, 13 February 2007 (UTC)
I also agree.
I do not agree because I was looking for an explanation of the word 'bootstrapping', not 'bootstrap'
I agree Eagon 14:49, 13 March 2007 (UTC)
I do not agree. Bagging is now one of the most famous ensemble methods in machine learning and has many unique properties of its own. Nowadays, the reasons why bagging works very well in various situations are still a mystery, and there are many theoretical explanations trying to explain bagging; click here for a survey.
To sum up, bagging has its own unique place in the literature, and should also have its own page here. -- Jung dalglish 03:12, 7 May 2007 (UTC)
--- I agree completely with this point; bagging is one of the key approaches to ensemble-based machine learning, and it certainly has its own life entirely apart from the origins of bootstrapping in statistics. From a machine learning point of view, it would be meaningless to remove it to a statistics based article; machine learners would not find it, because they would not look there.
I do not agree that they should be merged. Bagging is a sufficiently unique and well-defined method that it warrants its own page. I was looking for bagging as a machine learning method, and would not have immediately thought to look under bootstrapping.
Bagging is a specific application of bootstrapping, which is different enough from the usual applications that it deserves its own page: You are using the bootstrap sample of estimators to create another estimator, rather than using it merely to estimate the distribution of estimators. --Olethros 15:10, 6 June 2007 (UTC)
I do not agree that they should be merged. This article provided a quick and readily absorbed reference for me today, and if it had been buried in a lengthy broad discussion I probably would not have found it and benefitted from the information.
I think they should not be merged as "bagging" seems a particular specific application that should not appear in a mainstream initial discussion of bootstrapping. A brief description with cross-reference would be more suitable. Melcombe 13:21, 16 July 2007 (UTC)
-- There seem to be 2 separate discussions on this page. The first related to "bootstrap" and "bootstrapping". The second to merging "bagging" into the bootstrap article. Like others, I don't think bagging should be merged in. As others have said, it is one particular application. Tolstoy the Little Black Cat 16:50, 19 August 2007 (UTC)
I don't think Bootstrap aggregating (bagging) should be merged in with Bootstrapping. The current bootstrapping page is simple and general. To merge in a relatively large, highly specific, relatively atypical application (the page on bagging) will confuse those looking for a basic understanding of what statistical bootstrapping is, and the basic bootstrapping information will be mostly irrelevant for the typical person looking for bagging. Each article should certainly link to the other, but I think merging will drastically reduce the value. Glenbarnett 03:18, 27 September 2007 (UTC)
I also disagree about merging these. Bootstrap methods are great for inference, but bootstrap aggregation is a method for ensemble learning - i.e. to aggregate collections of models, for robust development using subsamples of the data. To merge bagging into bootstrapping is to misunderstand the use of bagging. —Preceding unsigned comment added by 126.96.36.199 (talk) 05:32, 27 September 2007 (UTC)
I also disagree about merging the Bootstrap and the Bootstrap Aggregating (Bagging) pages; the former is a resampling method for estimating the properties of an estimator, while the latter, although it uses bootstrap methodology, is an ensemble learning technique from Statistical Learning and/or Data Mining. In my opinion they are only related by the fact that Bagging uses a modified bootstrap technique to achieve its goal.
I disagree with merging these. The primary use of bootstrapping is in inferential statistics, providing information about the distribution of an estimator - its bias, standard error, confidence intervals, etc. It is not usually used in its own right as an estimation method. It is tempting for beginners to do so - to use the average of bootstrap statistics as an estimator in place of the statistic calculated on the original data. But this is dangerous, as it typically gives about double the bias.
In contrast, bootstrap aggregation is a randomization method, suitable for use with low-bias high-variability tools such as trees - by averaging across trees the variability is reduced. Yes, the mechanism is the same as what beginners often do, but I don't want to encourage that mistake. Yes, the randomization method happens to use the same sampling mechanism as the simple nonparametric bootstrap, but that is accidental. The intent is different - reducing variability by averaging across random draws, vs quantifying the sampling variation of an estimator.
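The bias-doubling point above can be checked with a small simulation. This is a hypothetical sketch (the setup and names are mine, not from the discussion): the plug-in variance divides by n and so underestimates the true variance by a factor (n-1)/n; averaging that same statistic over bootstrap resamples roughly doubles the downward bias.

```python
import random
import statistics

def plugin_var(xs):
    """Plug-in (divide-by-n) variance: biased low by a factor (n-1)/n."""
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(1)
n, true_var = 10, 1.0
plain, boot_avg = [], []
for _ in range(2000):
    sample = [rng.gauss(0, 1) for _ in range(n)]
    plain.append(plugin_var(sample))
    # The beginner's mistake described above: take the average of the
    # bootstrap statistics as the estimate itself.
    boot_avg.append(statistics.fmean(
        plugin_var(rng.choices(sample, k=n)) for _ in range(200)
    ))

bias_plain = statistics.fmean(plain) - true_var   # roughly -1/n
bias_boot = statistics.fmean(boot_avg) - true_var  # roughly -2/n
```

In this simulation the bias of the bootstrap-averaged estimator comes out close to twice that of the plain plug-in estimator, matching the factor-of-two warning above.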
Discussion of contents
I would like to raise an issue with the mention of "mediation" in the intro material. Should there be a minor subsection for this, explaining what "mediation" means, giving some brief details of how bootstrapping applies, and possibly with its own example shown to contrast with the ordinary single-sample case? Melcombe 13:21, 16 July 2007 (UTC)
- now added new section, but possibly there is a need for a much more technical description of bootstrapping overall in order to provide enough context/information. This need for a more formal specification would also benefit other parts perhaps. Melcombe (talk) 11:31, 12 February 2008 (UTC)
The definition of "wild bootstrap" is incomplete and does not describe the most commonly used method, see http://fmwww.bc.edu/RePEc/es2000/1413.pdf, page 7. —Preceding unsigned comment added by Arnehe (talk • contribs) 10:07, 4 May 2010 (UTC)
I was looking for a definition of the bootstrap method, and couldn't understand the definition given here, in the 2nd sentence: "Bootstrapping is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution of the observed data." Since I do not know what bootstrapping is, I cannot change it myself, so I wrote it here instead. Setreset (talk) 15:27, 8 April 2008 (UTC)
- The lines are fine, they are somewhat complicated, but they mean what they intend to and what they need to. You could have looked up point estimation and point estimators before delving through this topic, but yes, math on wikipedia seems to be getting crowded with technical jargon. --188.8.131.52 (talk) 13:55, 23 October 2009 (UTC)
- Um, bootstrapping is a fairly advanced topic; that is, it is not for complete novices. You need to already know what an estimator is before coming here, and likely a distribution. Ideally maybe those topics should be linked in the sentences at the top of this article, but we can't start every article from a point of absolute zero knowledge. I am going to re-write the informal example to make it clearer. Maybe someone can look at that and then comment here on how we could make the sentences at the top of the article clearer? Doctorambient (talk) 01:04, 30 September 2011 (UTC)
The formatting in the R code is ugly. Besides fixing its formatting, another possible solution is to replace that code with Python code. —Preceding unsigned comment added by 184.108.40.206 (talk) 15:56, 19 July 2009 (UTC)
- I heart Python and all, but R is the de facto standard computing language for statistical research. You could make a case for SAS code instead (bleah!) but what is the case for Python? I'll fix the R code. Doctorambient (talk) 01:06, 30 September 2011 (UTC)
- Wait! What R code? I guess it was deleted. Doctorambient (talk) 01:08, 30 September 2011 (UTC)
- Bootstrapping (machine learning) already contains a link to Bootstrap aggregating and that article would be a better candidate for merging to (or from). The topic here is rather different as indicated by the "(statistics)" in the title. Melcombe (talk) 11:15, 24 September 2009 (UTC)
- The techniques are very different: one is for classifier improvement, the other for inference testing. There is a slim link between the two in that the machine learning 'bootstrapping' was in principle derived from the statistical bootstrapping technique, but the connection is very thin. I believe merging would lead to a lot of confusion. So my vote is for keeping the topics separate while providing a link to either topic in the See also section, i.e. maintain the status quo. --220.127.116.11 (talk) 13:48, 23 October 2009 (UTC)
Informal description changes
When the example refers to a sample of N heights, and then to using a computer to make a new sample (called a bootstrap sample) that is also of size N: as I understand it, we need to specify (1) that the bootstrap sample is of size n (n is not necessarily equal to N) and (2) that the sample is drawn with replacement.
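To make the two points above concrete, here is a minimal sketch of drawing a bootstrap sample (Python; the height values and the function name are hypothetical, not from the article):

```python
import random

def bootstrap_sample(data, n=None):
    """Draw n values from data with replacement (a bootstrap sample).

    By convention n defaults to the original sample size N,
    but the two need not be equal."""
    if n is None:
        n = len(data)
    return [random.choice(data) for _ in range(n)]

heights = [168, 172, 175, 180, 165, 170, 178, 174, 169, 171]  # hypothetical
resample = bootstrap_sample(heights)
# Sampling is with replacement, so values can repeat in the resample
# and some original values may be absent from it.
```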
Deriving Confidence intervals from the bootstrap distribution
I added a section on "Deriving Confidence intervals from the bootstrap distribution", after finding this part missing from the article, although this is a very useful application of bootstrapping methods. However, I left a lot to be extended, so whoever has the knowledge and time, please see to filling in the gaps. Cheers, Talgalili (talk) 10:46, 24 September 2009 (UTC)
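For concreteness, here is a minimal sketch of the percentile method, one of the simpler ways to read a confidence interval off the bootstrap distribution (the data and function name are hypothetical; refinements such as BCa or studentized intervals are not shown):

```python
import random
import statistics

def percentile_ci(data, stat=statistics.fmean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample, recompute the statistic,
    and take the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    boot_stats = sorted(
        stat(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = boot_stats[round(n_boot * alpha / 2)]
    hi = boot_stats[round(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

data = [2.1, 2.4, 1.9, 2.8, 2.3, 2.6, 2.0, 2.5, 2.2, 2.7]  # hypothetical
lo, hi = percentile_ci(data)
```

With the toy data above, (lo, hi) straddles the sample mean; the percentile method is simple but known to have coverage shortcomings that the fancier variants correct.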
Adèr et al.(2008)?
"Adèr et al. (2008)" is mentioned, but not explained. The same sentence is also in a document at dissertationrecipes.com, but it is hard to say who copied whom... 18.104.22.168 (talk) 13:02, 21 December 2011 (UTC)
Additionally, two of the three bullet points following this citation are incorrect. The second bullet, "When the sample size is insufficient for straightforward statistical inference," is misleading. "Straightforward statistical inference" is vague. It would be better to say "asymptotic statistical inference", or "inference based on asymptotic distributions." More importantly, the bootstrap itself only gives correct inference asymptotically. Generally, there is no theoretical reason to expect the bootstrap to have better performance in finite samples than standard procedures. There are some cases where the bootstrap provides an asymptotic refinement, and can be expected to lead to more accurate finite sample inference, but these cases are not described or alluded to. The third bullet point repeats the mistake of the second. — Preceding unsigned comment added by 22.214.171.124 (talk) 05:24, 23 March 2012 (UTC)
- The intended reference might be: Adèr, H. J., Mellenbergh, G. J., & Hand, D.J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
- I have no way of seeing this, so cannot confirm it is the right source. Melcombe (talk) 12:46, 11 June 2012 (UTC)
Congrats
Congratulations on this accessible but reasonably thorough article. Especially the section with the informal description and practical example should be part of any mathematical page, and possibly every Wikipedia page. — Preceding unsigned comment added by 126.96.36.199 (talk) 15:44, 20 March 2012 (UTC)
Newcomb's speed of light
The example section contains figures for Newcomb's speed of light measurements. Going to the referenced URL, one finds the following dataset:
Simon Newcomb's measurements of the speed of light, from Stigler (1977). The data are recorded as deviations from 24,800 nanoseconds. Table 3.1 of Bayesian Data Analysis.
28 26 33 24 34 -44 27 16 40 -2 29 22 24 21 25 30 23 29 31 19 24 20 36 32 36 28 25 21 28 29 37 25 28 26 30 32 36 26 30 22 36 23 27 27 28 27 31 27 26 33 26 32 32 24 39 28 24 25 32 25 29 27 28 29 16 23
The two outliers are clearly -44 and -2 but the rest of the data ranges from 16 to 40. The bar charts accompanying the example don't reflect the above data at all -- the x-axis labels don't match this range. So WTF!? I slapped a disputed sticker onto the mess. linas (talk) 04:20, 11 April 2012 (UTC)
The plot appears to be the density of the resampled medians, not the raw data. The point is to obtain the sampling distribution of the median. — Preceding unsigned comment added by 188.8.131.52 (talk) 19:13, 5 June 2012 (UTC)
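That reading can be checked directly: resampling medians from the dataset quoted above reproduces the narrow band on the plot's x-axis (Python sketch; the seed and the 5000-replicate count are arbitrary choices of mine):

```python
import random
import statistics

# Newcomb's data as quoted above (deviations from 24,800 nanoseconds).
newcomb = [28, 26, 33, 24, 34, -44, 27, 16, 40, -2, 29, 22, 24, 21,
           25, 30, 23, 29, 31, 19, 24, 20, 36, 32, 36, 28, 25, 21,
           28, 29, 37, 25, 28, 26, 30, 32, 36, 26, 30, 22, 36, 23,
           27, 27, 28, 27, 31, 27, 26, 33, 26, 32, 32, 24, 39, 28,
           24, 25, 32, 25, 29, 27, 28, 29, 16, 23]

rng = random.Random(42)
boot_medians = [
    statistics.median(rng.choices(newcomb, k=len(newcomb)))
    for _ in range(5000)
]
# The raw data run from -44 to 40, but the resampled medians cluster
# tightly around the sample median (27), which is why the plot's x-axis
# covers only a narrow band rather than the raw-data range.
```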
Not so Simple
"very simple methods"
I have to disagree with this statement. See discussion here: http://stats.stackexchange.com/questions/26088/explaining-to-laypeople-why-bootstrapping-works
Compared to deriving asymptotic sampling distribution it is not difficult to apply bootstrapping (http://kurt.schmidheiny.name/teaching/bootstrap2up.pdf). However, to a non-statistician it could be fairly difficult.
The main reason I disagree with the statement is that Wikipedia departs from an expert-driven approach to encyclopaedias and the citation refers to a publication that targets experts. 184.108.40.206 (talk) 14:59, 23 September 2012 (UTC)
Two History Sections?
There's one history section at the top and one history section at the bottom (before "see also"). The bottom contains similar information to the top history section. Weaktofu (talk) 02:32, 7 January 2013 (UTC)