Yann LeCun

From Wikipedia, the free encyclopedia
Yann LeCun
Born: 1960 (age 53–54)
Institutions: New York University; Facebook Artificial Intelligence Research
Alma mater: Pierre and Marie Curie University
Thesis: Modèles connexionnistes de l'apprentissage (Connectionist Learning Models) (1987)
Doctoral advisor: Maurice Milgram
Known for: Deep learning
Website: yann.lecun.com

Yann LeCun (born 1960) is a computer science researcher with contributions in machine learning, computer vision, mobile robotics, and computational neuroscience. He is best known for his work on optical character recognition and computer vision using convolutional neural networks (CNNs), and is considered a founding father of convolutional nets.[1][2] He is also one of the main creators of the DjVu image compression technology (together with Léon Bottou and Patrick Haffner), and he co-developed the Lush programming language with Léon Bottou.

Life

Yann LeCun was born near Paris, France, in 1960. He received a Diplôme d'Ingénieur from the École Supérieure d'Ingénieur en Électrotechnique et Électronique (ESIEE), Paris, in 1983, and a PhD in Computer Science from Université Pierre et Marie Curie in 1987; during his doctoral studies he proposed an early form of the back-propagation learning algorithm for neural networks.[3] He was then a postdoctoral research associate in Geoffrey Hinton's lab at the University of Toronto.

In 1988, he joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, USA, where he developed a number of new machine learning methods, such as a biologically inspired model of image recognition called convolutional neural networks,[4] the "Optimal Brain Damage" regularization method,[5] and the Graph Transformer Networks method (similar to conditional random fields), which he applied to handwriting recognition and OCR.[6] The bank-check recognition system that he helped develop was widely deployed by NCR and other companies, and read over 10% of all checks in the US in the late 1990s and early 2000s.
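The core idea behind the convolutional networks mentioned above is weight sharing: a small learned filter is slid across an image so that the same weights detect a feature at every position. The following is a minimal illustrative sketch of that single operation in plain NumPy, not LeCun's original code or a full network:

```python
# Minimal sketch of the 2-D convolution (cross-correlation) step at the
# heart of a convolutional neural network. The image and kernel below are
# made-up toy values chosen only to illustrate the sliding-window idea.
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation of a grayscale image with a kernel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same kernel weights are applied at every window position.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple vertical-edge detector applied to a tiny 4x4 "image".
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])
print(conv2d(image, kernel))  # nonzero response only at the 0-to-1 edge
```

In a full network, several such filters are learned by back-propagation and their outputs are passed through nonlinearities and pooling layers; this sketch shows only the shared-weight convolution itself.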

In 1996, he joined AT&T Labs-Research as head of the Image Processing Research Department, which was part of Lawrence Rabiner's Speech and Image Processing Research Lab, and worked primarily on the DjVu image compression technology,[7] used by many websites, notably the Internet Archive, to distribute scanned documents. His collaborators at AT&T included Léon Bottou and Vladimir Vapnik.

After a brief tenure as a Fellow of the NEC Research Institute (now NEC-Labs America) in Princeton, NJ, he joined New York University (NYU) in 2003, where he is Silver Professor of Computer Science and Neural Science at the Courant Institute of Mathematical Sciences and the Center for Neural Science. He is also a professor at the Polytechnic Institute of New York University.[8][9] At NYU, he has worked primarily on energy-based models for supervised and unsupervised learning,[10] feature learning for object recognition in computer vision,[11] and mobile robotics.[12]

Yann LeCun is general chair and organizer of the "Learning Workshop", held every year since 1986 in Snowbird, Utah. He is a member of the Science Advisory Board of the Institute for Pure and Applied Mathematics[13] at UCLA, and a scientific adviser of KXEN Inc. and Vidient Systems.[14] He is the Co-Director of the Neural Computation & Adaptive Perception research program of CIFAR.[15]

On December 9, 2013, LeCun became the head of Facebook's new Artificial Intelligence laboratory in New York City.[16]

References

  1. ^ Convolutional Nets and CIFAR-10: An Interview with Yann LeCun. Kaggle 2014
  2. ^ LeCun, Yann; Léon Bottou; Yoshua Bengio; Patrick Haffner (1998). "Gradient-based learning applied to document recognition". Proceedings of the IEEE 86 (11): 2278–2324. doi:10.1109/5.726791. Retrieved 16 November 2013. 
  3. ^ Y. LeCun: Une procédure d'apprentissage pour réseau à seuil asymétrique (A Learning Scheme for Asymmetric Threshold Networks), Proceedings of Cognitiva 85, 599–604, Paris, France, 1985.
  4. ^ Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel: Backpropagation Applied to Handwritten Zip Code Recognition, Neural Computation, 1(4):541-551, Winter 1989.
  5. ^ Yann LeCun, J. S. Denker, S. Solla, R. E. Howard and L. D. Jackel: Optimal Brain Damage, in Touretzky, David (Eds), Advances in Neural Information Processing Systems 2 (NIPS*89), Morgan Kaufmann, Denver, CO, 1990.
  6. ^ Yann LeCun, Léon Bottou, Yoshua Bengio and Patrick Haffner: Gradient Based Learning Applied to Document Recognition, Proceedings of IEEE, 86(11):2278–2324, 1998.
  7. ^ Léon Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, Yoshua Bengio and Yann LeCun: High Quality Document Image Compression with DjVu, Journal of Electronic Imaging, 7(3):410–425, 1998.
  8. ^ "People - Electrical and Computer Engineering". Polytechnic Institute of New York University. Retrieved 13 March 2013. 
  9. ^ http://yann.lecun.com/
  10. ^ Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato and Fu-Jie Huang: A Tutorial on Energy-Based Learning, in Bakir, G. and Hofman, T. and Schölkopf, B. and Smola, A. and Taskar, B. (Eds), Predicting Structured Data, MIT Press, 2006.
  11. ^ Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato and Yann LeCun: What is the Best Multi-Stage Architecture for Object Recognition?, Proc. International Conference on Computer Vision (ICCV'09), IEEE, 2009
  12. ^ Raia Hadsell, Pierre Sermanet, Marco Scoffier, Ayse Erkan, Koray Kavukcuoglu, Urs Muller and Yann LeCun: Learning Long-Range Vision for Autonomous Off-Road Driving, Journal of Field Robotics, 26(2):120–144, February 2009.
  13. ^ http://www.ipam.ucla.edu/programs/gss2012/ Institute for Pure and Applied Mathematics
  14. ^ Vidient Systems.
  15. ^ "Neural Computation & Adaptive Perception Advisory Committee Yann LeCun". CIFAR. Retrieved 16 December 2013. 
  16. ^ https://www.facebook.com/yann.lecun/posts/10151728212367143

External links