Active learning (machine learning): Difference between revisions

From Wikipedia, the free encyclopedia

Revision as of 14:27, 22 January 2019

Active learning is a special case of machine learning in which a learning algorithm is able to interactively query the user (or some other information source) to obtain the desired outputs at new data points.[1][2][3] In the statistics literature it is sometimes also called optimal experimental design.[4]

There are situations in which unlabeled data is abundant but manual labeling is expensive. In such a scenario, learning algorithms can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples needed to learn a concept can often be much lower than the number required in normal supervised learning. With this approach, there is a risk that the algorithm is overwhelmed by uninformative examples. Recent developments are dedicated to multi-label active learning,[5] hybrid active learning[6] and active learning in a single-pass (on-line) context,[7] combining concepts from the field of machine learning (e.g. conflict and ignorance) with adaptive, incremental learning policies in the field of online machine learning.

Definitions

Let T be the total set of all data under consideration. For example, in a protein engineering problem, T would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity.

During each iteration, i, T is broken up into three subsets:

  1. TK,i: Data points where the label is known.
  2. TU,i: Data points where the label is unknown.
  3. TC,i: A subset of TU,i that is chosen to be labeled.

Most of the current research in active learning involves the best method to choose the data points for TC,i.
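
A minimal, self-contained sketch of this iteration is shown below. The oracle and the query strategy used here are toy stand-ins (a hard-coded labeling rule and random selection), not a method from the literature; they only illustrate how T is repeatedly partitioned into TK,i, TU,i and TC,i.

```python
# Toy sketch of the iterative partition of T into T_K (labeled), T_U (unlabeled)
# and T_C (the points chosen for labeling in iteration i). The oracle and the
# query strategy below are placeholder stand-ins, not a published algorithm.
import random

T = list(range(100))                     # the total set of data points
oracle = lambda x: int(x >= 50)          # stand-in for the human annotator

T_K = {x: oracle(x) for x in random.sample(T, 5)}  # initially labeled points
T_U = [x for x in T if x not in T_K]               # points whose label is unknown

for i in range(10):                      # active learning iterations
    T_C = random.sample(T_U, 3)          # toy query strategy: pick 3 points at random
    for x in T_C:
        T_K[x] = oracle(x)               # query the oracle for the chosen point's label
        T_U.remove(x)
# T_K now holds every label acquired over the iterations.
```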

Query strategies

Algorithms for determining which data points should be labeled can be organized into a number of different categories, based upon their purpose:[1]

  • Balance exploration and exploitation: the choice of examples to label is seen as a dilemma between exploration and exploitation over the data space representation. This strategy manages the trade-off by modelling the active learning problem as a contextual bandit problem. For example, Bouneffouf et al.[8] propose a sequential algorithm named Active Thompson Sampling (ATS), which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for this sample point's label.
  • Expected model change: label those points that would most change the current model
  • Expected error reduction: label those points that would most reduce the model's generalization error
  • Exponentiated Gradient Exploration for Active Learning:[9] a sequential algorithm named exponentiated gradient (EG)-active that can improve any active learning algorithm by adding an optimal random exploration.
  • Membership Query Synthesis: This is where the learner generates its own instances from an underlying natural distribution. For example, if the dataset consists of pictures of humans and animals, the learner could send a clipped image of a leg to the teacher and query whether this appendage belongs to an animal or a human. This is particularly useful if the dataset is small.[10]
  • Pool-Based Sampling: In this scenario, instances are drawn from the entire data pool and assigned an informativeness score, a measure of how well the learner “understands” the data. The system then selects the most informative instances and queries the teacher for their labels.[11]
  • Stream-Based Selective Sampling: Here, each unlabeled data point is examined one at a time with the machine evaluating the informativeness of each item against its query parameters. The learner decides for itself whether to assign a label or query the teacher for each datapoint.
  • Uncertainty sampling: label those points for which the current model is least certain as to what the correct output should be
  • Query by committee: a variety of models are trained on the current labeled data, and vote on the output for unlabeled data; label those points for which the "committee" disagrees the most
  • Querying from diverse subspaces or partitions:[12] When the underlying model is a forest of trees, the leaf nodes might represent (overlapping) partitions of the original feature space. This offers the possibility of selecting instances from non-overlapping or minimally overlapping partitions for labeling.
  • Variance reduction: label those points that would minimize output variance, which is one of the components of error

A wide variety of algorithms have been studied that fall into these categories.[1][4]
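
As an illustration of the pool-based and uncertainty sampling strategies listed above, the following sketch performs least-confidence uncertainty sampling with a logistic regression learner. It assumes scikit-learn is available and uses synthetic data, so it shows the general idea rather than any specific published algorithm; in a real setting the queried labels would come from a human annotator rather than the pre-existing array y.

```python
# Sketch of pool-based uncertainty sampling (least confidence), assuming scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])  # seed set with both classes
pool = [i for i in range(len(X)) if i not in labeled]                    # unlabeled pool

for _ in range(20):                                   # query 20 labels, one per round
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])              # class probabilities over the pool
    confidence = proba.max(axis=1)                    # confidence in the most likely class
    query = pool[int(np.argmin(confidence))]          # least confident point in the pool
    labeled.append(query)                             # here the label is looked up in y;
    pool.remove(query)                                # in practice a teacher would supply it
```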

Minimum Marginal Hyperplane

Some active learning algorithms are built upon Support vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin, W, of each unlabeled datum in TU,i and treat W as an n-dimensional distance from that datum to the separating hyperplane.

Minimum Marginal Hyperplane methods assume that the data with the smallest W are those that the SVM is most uncertain about and therefore should be placed in TC,i to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest W. Tradeoff methods choose a mix of the smallest and largest Ws.
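
A minimal sketch of the Minimum Marginal Hyperplane idea, assuming scikit-learn's linear SVC, is given below; the absolute value of the SVM decision function is used as a stand-in for the margin W of each unlabeled point, and the points with the smallest values are placed in TC,i.

```python
# Sketch of Minimum Marginal Hyperplane selection with a linear SVM (scikit-learn assumed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=1)
labeled = list(np.where(y == 0)[0][:10]) + list(np.where(y == 1)[0][:10])  # initial labeled set
pool = [i for i in range(len(X)) if i not in labeled]                      # T_U: unlabeled data

svm = SVC(kernel="linear").fit(X[labeled], y[labeled])
W = np.abs(svm.decision_function(X[pool]))   # proxy for each point's distance to the hyperplane
T_C = [pool[i] for i in np.argsort(W)[:5]]   # smallest W: the points the SVM is least certain about
# A Maximum Marginal Hyperplane variant would instead take np.argsort(W)[-5:].
```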

Meetings

  • 2016 "Workshop Active Learning: Applications, Foundations and Emerging Trends" at iKNOW, Graz, Austria[13]
  • 2018 "Interactive Adaptive Learning" Workshop at ECML PKDD, Dublin, Ireland[14]

See also

Notes

  1. ^ a b c Settles, Burr (2010), "Active Learning Literature Survey" (PDF), Computer Sciences Technical Report 1648. University of Wisconsin–Madison, retrieved 2014-11-18
  2. ^ Rubens, Neil; Elahi, Mehdi; Sugiyama, Masashi; Kaplan, Dain (2016). "Active Learning in Recommender Systems". In Ricci, Francesco; Rokach, Lior; Shapira, Bracha (eds.). Recommender Systems Handbook (2 ed.). Springer US. doi:10.1007/978-1-4899-7637-6. hdl:11311/1006123. ISBN 978-1-4899-7637-6.
  3. ^ Das, Shubhomoy; Wong, Weng-Keen; Dietterich, Thomas; Fern, Alan; Emmott, Andrew (2016). "Incorporating Expert Feedback into Active Anomaly Discovery". In Bonchi, Francesco; Domingo-Ferrer, Josep; Baeza-Yates, Ricardo; Zhou, Zhi-Hua; Wu, Xindong (eds.). IEEE 16th International Conference on Data Mining. IEEE. pp. 853–858. doi:10.1109/ICDM.2016.0102. ISBN 978-1-5090-5473-2.
  4. ^ a b Olsson, Fredrik (April 2009). "A literature survey of active machine learning in the context of natural language processing".
  5. ^ Yang B, Sun J T, Wang T J, et al. (2009), "Effective Multi-Label Active Learning for Text Classification". Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2009: 917–926.
  6. ^ E. Lughofer (2012), Hybrid Active Learning (HAL) for Reducing the Annotation Efforts of Operators in Classification Systems. Pattern Recognition, vol. 45 (2), pp. 884-896, 2012.
  7. ^ Lughofer, Edwin (2012). "Single-pass active learning with conflict and ignorance". Evolving Systems. 3 (4): 251–271. doi:10.1007/s12530-012-9060-7.
  8. ^ Bouneffouf et al. (2014), Contextual Bandit for Active Learning: Active Thompson Sampling. Neural Information Processing - 21st International Conference, ICONIP 2014
  9. ^ Bouneffouf et al. (2016), Exponentiated Gradient Exploration for Active Learning. Computers, vol. 5 (1), 2016, pp. 1-12
  10. ^ Wang, Liantao; Hu, Xuelei; Yuan, Bo; Lu, Jianfeng (2015-01-05). "Active learning via query synthesis and nearest neighbour search". Neurocomputing. 147: 426–434. doi:10.1016/j.neucom.2014.06.042. ISSN 0925-2312.
  11. ^ "How does Active Learning Work in Practice?". deepai.org. {{cite web}}: Cite has empty unknown parameter: |dead-url= (help)
  12. ^ "shubhomoydas/ad_examples". GitHub. Retrieved 2018-12-04.
  13. ^ "Workshop Active Learning: Applications, Foundations and Emerging Trends - iKNOW 2016". vincentlemaire-labs.fr. Retrieved 2018-12-04.
  14. ^ Kottke, Daniel. "IAL2018". www.uni-kassel.de. Retrieved 2018-12-04.

Other references

  • N. Rubens, M. Elahi, M. Sugiyama, D. Kaplan. Recommender Systems Handbook: Active Learning in Recommender Systems (eds. F. Ricci, P.B. Kantor, L. Rokach, B. Shapira). Springer, 2015.
  • Active Learning Tutorial, S. Dasgupta and J. Langford.