Transfer learning

Transfer learning is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.[1] For example, knowledge gained while learning to recognize cars could apply when trying to recognize trucks. This area of research bears some relation to the long history of psychological literature on transfer of learning, although formal ties between the two fields are limited.
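
One common way the idea is realized in practice is to reuse a trained model's internal feature representation: layers learned on the source task are kept (often frozen) and only a small task-specific part is retrained on the target task. The following minimal sketch assumes PyTorch and synthetic data, neither of which is prescribed by this article; a small network is first trained on a hypothetical source task, its feature layers are then frozen, and only a new output layer is fitted to a hypothetical target task.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical source task ("cars"): 100 samples, 20 features, 3 classes.
x_src = torch.randn(100, 20)
y_src = torch.randint(0, 3, (100,))

# Shared feature extractor plus a source-task head.
features = nn.Sequential(nn.Linear(20, 32), nn.ReLU())
src_head = nn.Linear(32, 3)

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(features.parameters()) + list(src_head.parameters()), lr=1e-2)
for _ in range(200):                       # train on the source task
    opt.zero_grad()
    loss_fn(src_head(features(x_src)), y_src).backward()
    opt.step()

# Hypothetical target task ("trucks"): fewer samples, 2 classes.
x_tgt = torch.randn(30, 20)
y_tgt = torch.randint(0, 2, (30,))

for p in features.parameters():            # freeze the transferred features
    p.requires_grad = False

tgt_head = nn.Linear(32, 2)                # only this new head is trained
opt = torch.optim.Adam(tgt_head.parameters(), lr=1e-2)
for _ in range(200):                       # train on the target task
    opt.zero_grad()
    loss_fn(tgt_head(features(x_tgt)), y_tgt).backward()
    opt.step()

In this sketch the frozen layers carry over whatever representation proved useful for the source task; whether that actually helps the target task depends on how closely the two tasks are related.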

History

The earliest cited work on transfer in machine learning is attributed to Lorien Pratt, who formulated the discriminability-based transfer (DBT) algorithm in 1993.[2]

In 1997, the journal Machine Learning published a special issue devoted to transfer learning,[3] and by 1998, the field had advanced to include multi-task learning,[4] along with a more formal analysis of its theoretical foundations.[5] Learning to Learn,[6] a 1998 book edited by Sebastian Thrun and Pratt, reviews the subject.

Transfer learning has also been applied in cognitive science, with the journal Connection Science publishing a special issue on reuse of neural networks through transfer in 1996.[7]

Applications

Algorithms are available for transfer learning in Markov logic networks[8] and Bayesian networks.[9] Transfer has also been applied to occupancy prediction in buildings,[10] text classification,[11][12] and spam filtering.[13]

References

  1. ^ West, Jeremy; Ventura, Dan; Warnick, Sean (2007). "Spring Research Presentation: A Theoretical Foundation for Inductive Transfer". Brigham Young University, College of Physical and Mathematical Sciences. Archived from the original on 2007-08-01. Retrieved 2007-08-05. 
  2. ^ Pratt, L. Y. (1993). "Discriminability-based transfer between neural networks" (PDF). NIPS Conference: Advances in Neural Information Processing Systems 5. Morgan Kaufmann Publishers. pp. 204–211. 
  3. ^ Pratt, L. Y.; Thrun, Sebastian (July 1997). "Machine Learning – Special Issue on Inductive Transfer". Machine Learning. Springer. Retrieved 2017-08-10. 
  4. ^ Caruana, R., "Multitask Learning", pp. 95–134 in Pratt & Thrun 1998
  5. ^ Baxter, J., "Theoretical Models of Learning to Learn", pp. 71–95 in Pratt & Thrun 1998
  6. ^ Thrun, Sebastian; Pratt, Lorien, eds. (2012) [1998]. Learning to Learn. Springer.
  7. ^ Pratt, L. (1996). "Special Issue: Reuse of Neural Networks through Transfer". Connection Science. Retrieved 2017-08-10. 
  8. ^ Mihalkova, Lilyana; Huynh, Tuyen; Mooney, Raymond J. (July 2007), "Mapping and Revising Markov Logic Networks for Transfer Learning" (PDF), Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI-2007), Vancouver, BC, pp. 608–614, retrieved 2007-08-05 
  9. ^ Niculescu-Mizil, Alexandru; Caruana, Rich (March 21–24, 2007), "Inductive Transfer for Bayesian Network Structure Learning" (PDF), Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), retrieved 2007-08-05 
  10. ^ Arief-Ang, I.B.; Salim, F.D.; Hamilton, M. (2017-11-08). DA-HOC: semi-supervised domain adaptation for room occupancy prediction using CO2 sensor data. 4th ACM International Conference on Systems for Energy-Efficient Built Environments (BuildSys). Delft, Netherlands. pp. 1–10. doi:10.1145/3137133.3137146. ISBN 978-1-4503-5544-5. 
  11. ^ Do, Chuong B.; Ng, Andrew Y. (2005). "Transfer learning for text classification" (PDF). Neural Information Processing Systems Foundation, NIPS 2005. Retrieved 2007-08-05. 
  12. ^ Raina, Rajat; Ng, Andrew Y.; Koller, Daphne (2006). "Constructing Informative Priors using Transfer Learning" (PDF). Twenty-third International Conference on Machine Learning. Retrieved 2007-08-05. 
  13. ^ Bickel, Steffen (2006). "ECML-PKDD Discovery Challenge 2006 Overview" (PDF). ECML-PKDD Discovery Challenge Workshop. Retrieved 2007-08-05.