Talk:Bootstrapping (statistics)


Merging with Bagging

(see later for discussion of contents)

Yes, this page should be merged. Gpeilon 15:01, 10 January 2007 (UTC)

I agree. Tolstoy the Cat 19:11, 22 January 2007 (UTC)

I agree. --Bikestats 13:08, 9 February 2007 (UTC)

I also agree. Tom Joseph 20:53, 13 February 2007 (UTC)

I also agree.

I do not agree, because I was looking for an explanation of the word 'bootstrapping', not 'bootstrap'.

I agree Eagon 14:49, 13 March 2007 (UTC)

I agree.


I do not agree. Bagging is now one of the best-known ensemble methods in machine learning and has many unique properties of its own. The reasons why bagging works so well in various situations are still something of a mystery, and there are many theoretical attempts to explain it (click here for a survey).

IMO, merging bagging with Bootstrapping (statistics) is rather similar to merging maximum entropy with information entropy, which would not be appropriate.

To sum up, bagging has its own unique place in the literature and should have its own page here. -- Jung dalglish 03:12, 7 May 2007 (UTC)

--- I agree completely with this point; bagging is one of the key approaches to ensemble-based machine learning, and it certainly has its own life entirely apart from the origins of bootstrapping in statistics. From a machine learning point of view, it would be meaningless to move it into a statistics-based article; machine learners would not find it, because they would not look there.

---

I do not agree that they should be merged. Bagging is a sufficiently unique and well-defined method that it warrants its own page. I was looking for bagging as a machine learning method, and would not have immediately thought to look under bootstrapping.

--

Bagging is a specific application of bootstrapping, which is different enough from the usual applications that it deserves its own page: You are using the bootstrap sample of estimators to create another estimator, rather than using it merely to estimate the distribution of estimators. --Olethros 15:10, 6 June 2007 (UTC)
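To make that distinction concrete, here is a minimal R sketch; the data and all names are illustrative, not taken from the article:

set.seed(1)
x <- rnorm(100, mean = 5)   # illustrative sample

# Usual bootstrap: use the resampled estimates to quantify the
# sampling variability of the original estimator.
boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
sd(boot_means)              # bootstrap standard error of the mean

# Bagging: average the resampled estimates to form a new estimator.
# (Trivial for the mean; the payoff comes with unstable learners
# such as classification trees.)
bagged_estimate <- mean(boot_means)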

--

I do not agree that they should be merged. This article provided a quick and readily absorbed reference for me today, and if it had been buried in a lengthy broad discussion I probably would not have found it and benefitted from the information.

--

I think they should not be merged, as "bagging" seems a specific application that should not appear in a mainstream initial discussion of bootstrapping. A brief description with a cross-reference would be more suitable. Melcombe 13:21, 16 July 2007 (UTC)


-- There seem to be 2 separate discussions on this page. The first relates to "bootstrap" and "bootstrapping", the second to merging "bagging" into the bootstrap article. Like others, I don't think bagging should be merged in. As others have said, it is one particular application. Tolstoy the Little Black Cat 16:50, 19 August 2007 (UTC)

--

I don't think Bootstrap aggregating (bagging) should be merged in with Bootstrapping. The current bootstrapping page is simple and general. To merge in a relatively large, highly specific, relatively atypical application (the page on bagging) will confuse those looking for a basic understanding of what statistical bootstrapping is, and the basic bootstrapping information will be mostly irrelevant for the typical person looking for bagging. Each article should certainly link to the other, but I think merging will drastically reduce the value. Glenbarnett 03:18, 27 September 2007 (UTC)

--

I also disagree about merging these. Bootstrap methods are great for inference, but bootstrap aggregation is a method for ensemble learning - i.e. for aggregating collections of models, for robust development using subsamples of the data. To merge bagging into bootstrapping is to misunderstand the use of bagging. —Preceding unsigned comment added by 71.132.132.11 (talk) 05:32, 27 September 2007 (UTC)

I also disagree about merging the Bootstrap and the Bootstrap Aggregating (Bagging) pages; the former is a resampling method for estimating the properties of an estimator, while the latter, although it uses bootstrap methodology, is an ensemble learning technique from Statistical Learning and/or Data Mining. In my opinion they are only related by the fact that Bagging uses a modified bootstrap technique to achieve its goal.

Gérald Jean —Preceding unsigned comment added by 206.47.217.67 (talk) 20:06, 22 November 2007 (UTC)

--

I disagree with merging these. The primary use of bootstrapping is in inferential statistics, providing information about the distribution of an estimator - its bias, standard error, confidence intervals, etc. It is not usually used in its own right as an estimation method. It is tempting for beginners to do so - to use the average of bootstrap statistics as an estimator in place of the statistic calculated on the original data. But this is dangerous, as it typically gives about double the bias.

In contrast, bootstrap aggregation is a randomization method, suitable for use with low-bias high-variability tools such as trees - by averaging across trees the variability is reduced. Yes, the mechanism is the same as what beginners often do, but I don't want to encourage that mistake. Yes, the randomization method happens to use the same sampling mechanism as the simple nonparametric bootstrap, but that is accidental. The intent is different - reducing variability by averaging across random draws, vs quantifying the sampling variation of an estimator.

Tim Hesterberg --Tim Hesterberg (talk) 05:30, 6 December 2007 (UTC)
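To illustrate the bias-doubling point numerically, here is a small R sketch using the deliberately biased plug-in variance estimator; the data and all names are invented for illustration:

set.seed(1)
plugin_var <- function(v) mean((v - mean(v))^2)  # divides by n, biased downward

x <- rnorm(30)                        # true variance is 1
theta_hat <- plugin_var(x)            # original (biased) estimate
boot_stats <- replicate(5000, plugin_var(sample(x, replace = TRUE)))

mean(boot_stats)                      # approx. theta_hat + bias, i.e. the bias roughly doubled
2 * theta_hat - mean(boot_stats)      # the standard bootstrap bias correction instead subtracts it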

Can we now agree that merging is not appropriate and remove this from the discussion, or at least from the top of the article page? Melcombe (talk) 11:35, 12 February 2008 (UTC)

Yeah, I agree with removing this discussion...what exactly are the rules for that? Doctorambient (talk) 00:59, 30 September 2011 (UTC)

Discussion of contents

mediation

I would like to raise an issue with the mention of "mediation" in the intro material. Should there be a minor subsection for this, explaining what "mediation" means, with some brief details of how bootstrapping applies, and possibly with its own example shown to contrast with the ordinary single-sample case? Melcombe 13:21, 16 July 2007 (UTC)


pivots

This page needs to mention pivotal statistics, which are critical to bootstrapping. —Preceding unsigned comment added by 129.2.18.171 (talk) 22:18, 11 February 2008 (UTC)

Now added a new section, but possibly there is a need for a much more technical description of bootstrapping overall in order to provide enough context/information. This need for a more formal specification would perhaps also benefit other parts. Melcombe (talk) 11:31, 12 February 2008 (UTC)
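For reference while drafting that more formal description: the usual statement is that bootstrap-t methods resample an (approximately) pivotal quantity, i.e. one whose distribution is (nearly) free of unknown parameters, such as the studentized statistic

$$ t = \frac{\hat{\theta} - \theta}{\widehat{\operatorname{se}}(\hat{\theta})} $$

This is a standard textbook formulation, independent of whatever wording ends up in the article.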


Wild bootstrap

The definition of "wild bootstrap" is incomplete and does not describe the most commonly used method, see http://fmwww.bc.edu/RePEc/es2000/1413.pdf, page 7. —Preceding unsigned comment added by Arnehe (talkcontribs) 10:07, 4 May 2010 (UTC)
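For anyone revising that definition, here is a minimal R sketch of the version most descriptions give (regression residuals multiplied by Rademacher weights); the data and all names here are illustrative:

set.seed(1)
dat <- data.frame(x = runif(100))
dat$y <- 2 * dat$x + rnorm(100, sd = 0.5 + dat$x)   # heteroskedastic errors

fit  <- lm(y ~ x, data = dat)
res  <- residuals(fit)
yhat <- fitted(fit)

one_draw <- function() {
  v      <- sample(c(-1, 1), length(res), replace = TRUE)  # Rademacher weights
  y_star <- yhat + res * v             # perturb residuals, keep regressors fixed
  coef(lm(y_star ~ dat$x))[2]          # re-estimate the slope
}
boot_slopes <- replicate(2000, one_draw())
sd(boot_slopes)                        # wild-bootstrap standard error of the slope

Mammen's two-point distribution is the usual alternative to the Rademacher weights shown here.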


unclear

I was looking for a definition of the bootstrap method, and couldn't understand the definition given here, in the 2nd sentence: "Bootstrapping is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution of the observed data." Since I do not know what bootstrapping is, I cannot change it myself, so I wrote it here instead. Setreset (talk) 15:27, 8 April 2008 (UTC)

The lines are fine; they are somewhat complicated, but they mean what they intend to and what they need to. You could have looked up point estimation and point estimators before delving into this topic, but yes, math on Wikipedia seems to be getting crowded with technical jargon. --128.2.48.150 (talk) 13:55, 23 October 2009 (UTC)
The lines are better than fine. They're a great definition for someone who understands the topic already. Utterly useless for anyone else. —Preceding unsigned comment added by 194.171.7.39 (talk) 13:23, 6 April 2010 (UTC)
Um, bootstrapping is a fairly advanced topic; that is, it is not for complete novices. You need to already know what an estimator is before coming here, and likely a distribution. Ideally maybe those topics should be linked in the sentences at the top of this article, but we can't start every article from a point of absolute zero knowledge. I am going to re-write the informal example to make it clearer. Maybe someone can look at that and then comment here on how we could make the sentences at the top of the article clearer? Doctorambient (talk) 01:04, 30 September 2011 (UTC)
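In that spirit, perhaps a concrete snippet could sit next to the informal example. A minimal R sketch of "sampling from the empirical distribution of the observed data" (heights invented for illustration):

heights <- c(168, 172, 175, 171, 180, 169, 174, 177, 170, 173)
# Drawing with replacement from the data IS sampling from the
# empirical distribution; repeating it approximates the sampling
# distribution of the estimator (here, the sample mean).
boot_means <- replicate(10000, mean(sample(heights, replace = TRUE)))
var(boot_means)   # bootstrap estimate of the variance of the sample mean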

R code

The formatting in the R code is ugly. Besides fixing its formatting, another possible solution is to replace that code with Python code. —Preceding unsigned comment added by 79.43.56.205 (talk) 15:56, 19 July 2009 (UTC)

I heart Python and all, but R is the de facto standard computing language for statistical research. You could make a case for SAS code instead (bleah!) but what is the case for Python? I'll fix the R code. Doctorambient (talk) 01:06, 30 September 2011 (UTC)
Wait! What R code? I guess it was deleted. Doctorambient (talk) 01:08, 30 September 2011 (UTC)
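If the code is to be restored, here is one possible sketch using the boot package; the statistic (the median) and the number of replicates are placeholders, not a reconstruction of whatever was deleted:

library(boot)

set.seed(1)
x <- rnorm(50)                                # illustrative data
med <- function(d, i) median(d[i])            # statistic(data, indices), as boot expects
b <- boot(data = x, statistic = med, R = 2000)
b                                             # prints bootstrap bias and standard error
boot.ci(b, type = "perc")                     # percentile confidence interval

The statistic must accept an index vector because boot passes resampled indices rather than resampled data.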

Merger proposal

Bootstrapping (machine learning) seems to cover basically the same topic. I think the small amount of content in that article should be merged here. Calimo (talk) 11:33, 8 September 2009 (UTC)

Bootstrapping (machine learning) already contains a link to Bootstrap aggregating and that article would be a better candidate for merging to (or from). The topic here is rather different as indicated by the "(statistics)" in the title. Melcombe (talk) 11:15, 24 September 2009 (UTC)


The techniques are very different: one is for classifier improvement, the other for statistical inference. There is a slim link between the two, in that machine learning 'bootstrapping' was in principle derived from the statistical bootstrapping technique, but the connection is very thin. I believe merging would lead to a lot of confusion. So my vote is for keeping the topics separate while providing a link to each topic in the See also section, i.e. maintaining the status quo. --128.2.48.150 (talk) 13:48, 23 October 2009 (UTC)

Informal description changes

When the example refers to a sample of N heights, and then to using a computer to make a new sample (called a bootstrap sample) that is also of size N: we need to specify, as I understand it, (1) that a bootstrap sample is of size n (n is not necessarily equal to N) and (2) that the sample should be drawn with replacement.
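In R terms, the two points amount to making the size argument and replace = TRUE explicit; heights and n here are placeholders:

# Usual nonparametric bootstrap: resample of the same size N.
sample(heights, size = length(heights), replace = TRUE)
# "m out of n" variant: resample of a different size n.
sample(heights, size = n, replace = TRUE)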

Deriving confidence intervals from the bootstrap distribution

I added a section on "Deriving confidence intervals from the bootstrap distribution", after finding this part missing from the article, even though this is a very useful application of bootstrapping methods. However, I left a lot to be extended, so whoever has the knowledge and time, please see to filling in the gaps. Cheers, Talgalili (talk) 10:46, 24 September 2009 (UTC)
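As a starting point for filling those gaps, a sketch of the simplest (percentile) construction in R, with illustrative data x:

set.seed(1)
x <- rnorm(40)
boot_stats <- replicate(5000, mean(sample(x, replace = TRUE)))

# 95% percentile interval: empirical quantiles of the bootstrap distribution.
quantile(boot_stats, c(0.025, 0.975))

# "Basic" interval: reflect those quantiles around the original estimate.
2 * mean(x) - quantile(boot_stats, c(0.975, 0.025))

Percentile intervals are the simplest choice; BCa or bootstrap-t intervals are generally more accurate and could be covered in the extended section.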

Adèr et al. (2008)?

"Adèr et al.(2008)" is mentioned, but not explained. The same sentence is also in a document at dissertationrecipes.com, but it is hard to say who copies who...193.166.223.5 (talk) 13:02, 21 December 2011 (UTC)

Additionally, two of the three bullet points following this citation are incorrect. The second bullet, "When the sample size is insufficient for straightforward statistical inference," is misleading. "Straightforward statistical inference" is vague. It would be better to say "asymptotic statistical inference", or "inference based on asymptotic distributions." More importantly, the bootstrap itself only gives correct inference asymptotically. Generally, there is no theoretical reason to expect the bootstrap to have better performance in finite samples than standard procedures. There are some cases where the bootstrap provides an asymptotic refinement, and can be expected to lead to more accurate finite-sample inference, but these cases are not described or alluded to. The third bullet point repeats the mistake of the second. — Preceding unsigned comment added by 24.84.24.221 (talk) 05:24, 23 March 2012 (UTC)

The intended reference might be: Adèr, H. J., Mellenbergh, G. J., & Hand, D.J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
I have no way of seeing this, so cannot confirm it is the right source. Melcombe (talk) 12:46, 11 June 2012 (UTC)
I have now found this in an old version of the reference list, so have felt justified in restoring it to the article. The text that is supposed to be supported by this source still needs to be checked. Melcombe (talk) 13:17, 11 June 2012 (UTC)

Congrats

Congratulations on this accessible but reasonably thorough article. Especially the section with the informal description and practical example should be part of any mathematical page, and possibly every Wikipedia page. — Preceding unsigned comment added by 145.18.30.3 (talk) 15:44, 20 March 2012 (UTC)

Newcomb's speed of light

The example section contains figures for Newcomb's speed of light measurements. Going to the referenced URL, one finds the following dataset:

Simon Newcomb's measurements of the speed of light, from Stigler
(1977).  The data are recorded as deviations from $24,\!800$
nanoseconds.  Table 3.1 of Bayesian Data Analysis.

28 26 33 24 34 -44 27 16 40 -2
29 22 24 21 25 30 23 29 31 19
24 20 36 32 36 28 25 21 28 29
37 25 28 26 30 32 36 26 30 22
36 23 27 27 28 27 31 27 26 33
26 32 32 24 39 28 24 25 32 25
29 27 28 29 16 23 

The two outliers are clearly -44 and -2 but the rest of the data ranges from 16 to 40. The bar charts accompanying the example don't reflect the above data at all -- the x-axis labels don't match this range. So WTF!? I slapped a disputed sticker onto the mess. linas (talk) 04:20, 11 April 2012 (UTC)

The plot appears to be the density of the resampled medians, not the raw data. The point is to obtain the sampling distribution of the median. — Preceding unsigned comment added by 108.75.137.21 (talk) 19:13, 5 June 2012 (UTC)
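That reading is easy to check. The following R sketch (data typed in from the list above) should produce a density on roughly the scale shown in the figure, concentrated near the sample median of 27:

newcomb <- c(28, 26, 33, 24, 34, -44, 27, 16, 40, -2,
             29, 22, 24, 21, 25, 30, 23, 29, 31, 19,
             24, 20, 36, 32, 36, 28, 25, 21, 28, 29,
             37, 25, 28, 26, 30, 32, 36, 26, 30, 22,
             36, 23, 27, 27, 28, 27, 31, 27, 26, 33,
             26, 32, 32, 24, 39, 28, 24, 25, 32, 25,
             29, 27, 28, 29, 16, 23)
# Density of resampled medians, not of the raw data.
boot_medians <- replicate(10000, median(sample(newcomb, replace = TRUE)))
plot(density(boot_medians))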

Since there is no source for the example in the article it really should be deleted as WP:OR. Melcombe (talk) 12:49, 11 June 2012 (UTC)

Not so Simple

"very simple methods"

I have to disagree with this statement. See discussion here: http://stats.stackexchange.com/questions/26088/explaining-to-laypeople-why-bootstrapping-works

Compared to deriving an asymptotic sampling distribution, it is not difficult to apply bootstrapping (http://kurt.schmidheiny.name/teaching/bootstrap2up.pdf). However, to a non-statistician it could be fairly difficult.

The main reason I disagree with the statement is that Wikipedia departs from an expert-driven approach to encyclopaedias, while the citation refers to a publication that targets experts. 81.226.179.21 (talk) 14:59, 23 September 2012 (UTC)


Two History Sections?

There's one history section at the top and one history section at the bottom (before "see also"). The bottom contains similar information to the top history section. Weaktofu (talk) 02:32, 7 January 2013 (UTC)