Yann LeCun

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by TAnthony (talk | contribs) at 04:58, 16 June 2016 (→Life: The use of USA is deprecated, per MOS:NOTUSA, and overlinking using AWB). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Yann LeCun
Born: July 8, 1960
Alma mater: Pierre and Marie Curie University
Known for: Deep learning
Scientific career
Institutions: New York University; Facebook Artificial Intelligence Research
Thesis: Modèles connexionnistes de l'apprentissage (Connectionist Learning Models) (1987)
Doctoral advisor: Maurice Milgram
Website: yann.lecun.com

Yann LeCun (born 1960) is a computer scientist with contributions to machine learning, computer vision, mobile robotics and computational neuroscience. He is well known for his work on optical character recognition and computer vision using convolutional neural networks (CNNs), and is considered a founding father of convolutional nets.[1][2] He is also one of the main creators of the DjVu image compression technology (together with Léon Bottou and Patrick Haffner), and co-developed the Lush programming language with Léon Bottou.

Life

Yann LeCun was born near Paris, France, in 1960. He received a Diplôme d'Ingénieur from the École Supérieure d'Ingénieurs en Électrotechnique et Électronique (ESIEE), Paris, in 1983, and a PhD in computer science from Université Pierre et Marie Curie in 1987; during his doctoral work he proposed an early form of the back-propagation learning algorithm for neural networks.[3] He was then a postdoctoral research associate in Geoffrey Hinton's lab at the University of Toronto.
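The back-propagation idea mentioned above can be sketched in a few lines: errors at the output are propagated backwards through the chain rule to compute gradients for every weight. The following is a minimal illustrative sketch (the network size, learning rate, and XOR task are my own choices, not LeCun's 1985 formulation):

```python
import numpy as np

# A one-hidden-layer sigmoid network trained by back-propagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

initial_loss = np.mean((forward(X)[1] - y) ** 2)
for _ in range(5000):
    h, out = forward(X)
    # Backward pass: apply the chain rule from the squared error
    # down through the output layer to the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)
final_loss = np.mean((forward(X)[1] - y) ** 2)
```

The key point is that each layer only needs the error signal arriving from the layer above it, which makes the procedure scale to deep networks.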

In 1988, he joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, United States, headed by Lawrence D. Jackel, where he developed a number of new machine learning methods, including a biologically inspired model of image recognition called convolutional neural networks,[4] the "Optimal Brain Damage" regularization method,[5] and the Graph Transformer Networks method (similar to conditional random fields), which he applied to handwriting recognition and OCR.[6] The bank check recognition system that he helped develop was widely deployed by NCR and other companies, reading over 10% of all the checks in the United States in the late 1990s and early 2000s.
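The core mechanism of a convolutional network is sliding small learned filters over an image and then subsampling the resulting feature maps. The sketch below illustrates that conv-then-pool step in the spirit of LeNet-style digit readers; the image size, filter count, and random weights are arbitrary assumptions for illustration:

```python
import numpy as np

def conv2d(image, kernels):
    """Valid-mode 2D convolution: one feature map per kernel."""
    kh, kw = kernels.shape[1:]
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((len(kernels), oh, ow))
    for k, kernel in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def pool2x2(maps):
    """2x2 max pooling (subsampling) over each feature map."""
    c, h, w = maps.shape
    return maps[:, :h // 2 * 2, :w // 2 * 2] \
        .reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

rng = np.random.default_rng(0)
digit = rng.random((28, 28))            # stand-in for a 28x28 grayscale digit
kernels = rng.normal(size=(6, 5, 5))    # 6 learnable 5x5 filters

# conv -> nonlinearity -> pool; 28-5+1 = 24, halved to 12 by pooling
features = pool2x2(np.maximum(conv2d(digit, kernels), 0))
```

Because the same small filters are reused at every image position, the network needs far fewer parameters than a fully connected layer and is tolerant of small translations of the input.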

In 1996, he joined AT&T Labs-Research as head of the Image Processing Research Department, which was part of Lawrence Rabiner's Speech and Image Processing Research Lab, and worked primarily on the DjVu image compression technology,[7] used by many websites, notably the Internet Archive, to distribute scanned documents. His collaborators at AT&T include Léon Bottou and Vladimir Vapnik.

After a brief tenure as a Fellow of the NEC Research Institute (now NEC-Labs America) in Princeton, NJ, he joined New York University (NYU) in 2003, where he is Silver Professor of Computer Science and Neural Science at the Courant Institute of Mathematical Sciences and the Center for Neural Science. He is also a professor at the Tandon School of Engineering.[8][9] At NYU, he has worked primarily on energy-based models for supervised and unsupervised learning,[10] feature learning for object recognition in computer vision,[11] and mobile robotics.[12]

In 2012, he became the founding director of the NYU Center for Data Science.[13] On December 9, 2013, LeCun became the first director of Facebook AI Research in New York City,[14] and stepped down from the NYU-CDS directorship in early 2014.

LeCun is the recipient of the 2014 IEEE Neural Network Pioneer Award and the 2015 PAMI Distinguished Researcher Award.

In 2013, he and Yoshua Bengio co-founded the International Conference on Learning Representations, which adopted a post-publication open review process he had previously advocated on his website. He was the chair and organizer of the "Learning Workshop" held every year between 1986 and 2012 in Snowbird, Utah. He is a member of the Science Advisory Board of the Institute for Pure and Applied Mathematics[15] at UCLA, and has been on the advisory board of a number of companies, including MuseAmi, KXEN Inc., and Vidient Systems.[16] He is the co-director of the Neural Computation & Adaptive Perception research program of CIFAR.[17]

In 2016, he became a visiting professor of computer science at the Collège de France in Paris, holding the "Chaire Annuelle Informatique et Sciences Numériques". His leçon inaugurale (inaugural lecture) attracted considerable attention in Parisian intellectual life that year.

References

  1. ^ Convolutional Nets and CIFAR-10: An Interview with Yann LeCun. Kaggle 2014
  2. ^ LeCun, Yann; Léon Bottou; Yoshua Bengio; Patrick Haffner (1998). "Gradient-based learning applied to document recognition" (PDF). Proceedings of the IEEE. 86 (11): 2278–2324. doi:10.1109/5.726791. Retrieved 16 November 2013.
  3. ^ Y. LeCun: Une procédure d'apprentissage pour réseau à seuil asymétrique (A Learning Scheme for Asymmetric Threshold Networks), Proceedings of Cognitiva 85, 599–604, Paris, France, 1985.
  4. ^ Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel: Backpropagation Applied to Handwritten Zip Code Recognition, Neural Computation, 1(4):541-551, Winter 1989.
  5. ^ Yann LeCun, J. S. Denker, S. Solla, R. E. Howard and L. D. Jackel: Optimal Brain Damage, in Touretzky, David (Eds), Advances in Neural Information Processing Systems 2 (NIPS*89), Morgan Kaufmann, Denver, CO, 1990.
  6. ^ Yann LeCun, Léon Bottou, Yoshua Bengio and Patrick Haffner: Gradient Based Learning Applied to Document Recognition, Proceedings of IEEE, 86(11):2278–2324, 1998.
  7. ^ Léon Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, Yoshua Bengio and Yann LeCun: High Quality Document Image Compression with DjVu, Journal of Electronic Imaging, 7(3):410–425, 1998.
  8. ^ "People - Electrical and Computer Engineering". Polytechnic Institute of New York University. Retrieved 13 March 2013.
  9. ^ http://yann.lecun.com/
  10. ^ Yann LeCun, Sumit Chopra, Raia Hadsell, Ranzato Marc'Aurelio and Fu-Jie Huang: A Tutorial on Energy-Based Learning, in Bakir, G. and Hofman, T. and Schölkopf, B. and Smola, A. and Taskar, B. (Eds), Predicting Structured Data, MIT Press, 2006.
  11. ^ Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato and Yann LeCun: What is the Best Multi-Stage Architecture for Object Recognition?, Proc. International Conference on Computer Vision (ICCV'09), IEEE, 2009
  12. ^ Raia Hadsell, Pierre Sermanet, Marco Scoffier, Ayse Erkan, Koray Kavukcuoglu, Urs Muller and Yann LeCun: Learning Long-Range Vision for Autonomous Off-Road Driving, Journal of Field Robotics, 26(2):120–144, February 2009.
  13. ^ http://cds.nyu.edu
  14. ^ https://www.facebook.com/yann.lecun/posts/10151728212367143
  15. ^ http://www.ipam.ucla.edu/programs/gss2012/ Institute for Pure and Applied Mathematics
  16. ^ Vidient Systems.
  17. ^ "Neural Computation & Adaptive Perception Advisory Committee Yann LeCun". CIFAR. Retrieved 16 December 2013.

External links