
Talk:Training, validation, and test data sets

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 92.27.180.78 (talk) at 20:39, 3 May 2020 (→‎Out-of-sample redirects here, but...: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as Start-class on Wikipedia's content assessment scale and as Mid-importance on the importance scale.
This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as Start-class on Wikipedia's content assessment scale and as Low-importance on the project's importance scale.

Merge

There is absolutely no value added in having two separate articles, Training set and Test set, when neither can be discussed alone. The concept is Training and test sets, with references to information science, statistics, data mining, biostatistics, etc. Currently the two articles are near duplicates (or could be, based on the available information). Can we imagine some information for either which is not relevant for the other? Sda030 (talk) 22:53, 27 February 2014 (UTC)[reply]

I agree they should be merged. Both articles say as much in their introductions. Prax54 (talk) 04:03, 10 January 2015 (UTC)[reply]
Merger done, some rewrites needed. Prax54 (talk) 15:55, 20 June 2015 (UTC)[reply]

Totally agree with the suggestion - the training set, testing set and validation set are all parts of one whole and should be presented in one topic. (MM-Professor of QM & MIS, WWU-USA)

synonym "discovery set"

A training set is also called a discovery set, right? (See for example <DOI: 10.1056/NEJMoa1406498>.) Perhaps a link should be created so that looking up "discovery set" redirects to here. Now, "discovery set" just gets a bunch of mostly-irrelevant search results. 73.53.61.168 (talk) 11:17, 13 December 2015 (UTC)[reply]

"Gold standard"

I have seen the term "gold standard" being used in a few places in connection with articles about machine learning. On the page Gold standard (disambiguation), it says that in statistics and machine learning, a gold standard is "a manually annotated training set or test set". What does it mean that the test set is manually annotated? And is "gold standard" a term important enough to be mentioned in this article, perhaps? —Kri (talk) 16:00, 19 January 2016 (UTC)[reply]

Remove GNG template

Lots of mentions in ML literature. Wqwt (talk) 20:52, 22 March 2018 (UTC)[reply]

Claim that the meaning of test and validation is flipped in practice

It's not clear in _whose_ practice these terms are flipped. In lots of posts by recognized practitioners (e.g. [1]) they're not flipped. — Preceding unsigned comment added by FabianMontescu (talkcontribs) 19:35, 21 September 2018 (UTC)[reply]

References

Sampling Methods

Should this page make some reference to the way in which the data is sampled, i.e. split into training/validation/test sets? This article is written in the context of machine learning, and when training/validation/test sets are drawn from the main data source, they are usually sampled either randomly or in a stratified way. I think this is worthy of mention in the article, even if not in detail.

aricooperdavis (talk) 14:18, 13 November 2018 (UTC)[reply]
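To illustrate the distinction raised above, here is a minimal sketch of a stratified split, using only the Python standard library. The function name `stratified_split` and its parameters are hypothetical, chosen for this example; in practice a library routine such as scikit-learn's `train_test_split(..., stratify=y)` does the same job. A plain random split would just shuffle all indices together, whereas stratified sampling shuffles within each class so that class proportions are preserved in both partitions.

```python
import random
from collections import defaultdict

def stratified_split(X, y, test_fraction=0.25, seed=0):
    """Split (X, y) into train/test parts, preserving class proportions.

    Hypothetical illustrative helper, not from the article: each class's
    indices are shuffled separately and a proportional share is held out.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, label in enumerate(y):
        by_label[label].append(i)

    test_idx = []
    for label, idx in by_label.items():
        rng.shuffle(idx)
        # Hold out a proportional number of samples, at least one per class.
        n_test = max(1, round(len(idx) * test_fraction))
        test_idx.extend(idx[:n_test])

    test_set = set(test_idx)
    train_idx = [i for i in range(len(y)) if i not in test_set]
    return ([X[i] for i in train_idx], [y[i] for i in train_idx],
            [X[i] for i in test_idx], [y[i] for i in test_idx])

# Example: 8 samples with a 6:2 class imbalance.
X = list(range(8))
y = ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b']
X_tr, y_tr, X_te, y_te = stratified_split(X, y, test_fraction=0.25)
```

With a purely random 25% split of this small set, the test partition could easily contain no 'b' samples at all; the stratified version guarantees both classes appear in the test set.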

Earliest source for this method

I can't seem to find, here or elsewhere, the earliest source for this method. It seems the holdout method was proposed by Highleyman in 1962, and cross-validation was separately proposed by Stone in 1974, but the combination of those two methods, resulting in the train/validation/test split, is yet to be credited to one person. Is this correct? The earliest source cited here is the Bishop book from 1995, but I don't think he was the one responsible for proposing this in the literature.

Out-of-sample redirects here, but...

... no explanation is given, let alone in bold. Can someone please rectify? Thanks. 92.27.180.78 (talk) 20:39, 3 May 2020 (UTC)[reply]