Michael I. Jordan

|Born||February 25, 1956|
|Institutions||University of California, Berkeley; University of California, San Diego; Massachusetts Institute of Technology|
|Thesis||The Learning of Representations for Sequential Performance (1985)|
|Doctoral advisor||David Rumelhart|
|Doctoral students||Eric Xing|
|Other notable students||Andrew Ng|
|Known for||Latent Dirichlet allocation|
|Notable awards||Member of the U.S. National Academy of Sciences; AAAI Fellow (2002)|
Jordan was born in Ponchatoula, Louisiana, to a working-class family. He received his BS magna cum laude in Psychology from Louisiana State University in 1978, his MS in Mathematics from Arizona State University in 1980, and his PhD in Cognitive Science from the University of California, San Diego in 1985. At UC San Diego, Jordan was a student of David Rumelhart and a member of the PDP Group in the 1980s.
Jordan is currently a full professor at the University of California, Berkeley, where his appointment is split between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences (EECS). He was a professor at MIT from 1988 to 1998.
Jordan has received numerous awards, including a best student paper award (with X. Nguyen and M. Wainwright) at the International Conference on Machine Learning (ICML 2004), a best paper award (with R. Jacobs) at the American Control Conference (ACC 1991), the ACM/AAAI Allen Newell Award, the IEEE Neural Networks Pioneer Award, and an NSF Presidential Young Investigator Award. In 2010 he was named a Fellow of the Association for Computing Machinery "for contributions to the theory and application of machine learning."
Many of Jordan's graduate students and postdocs have strongly influenced the machine learning field in their own right. Francis Bach, Zoubin Ghahramani, Tommi Jaakkola, Andrew Ng, Lawrence Saul and David Blei (all former students or postdocs of Jordan) have continued to make significant contributions to the field.
In the 1980s Jordan started developing recurrent neural networks as a cognitive model. In recent years, though, his work has been driven less by a cognitive perspective and more by the framework of traditional statistics.
He popularised Bayesian networks in the machine learning community and is known for pointing out links between machine learning and statistics. Jordan was also prominent in the formalisation of variational methods for approximate inference and the popularisation of the expectation-maximization algorithm in machine learning.
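As an illustration of the expectation-maximization algorithm that Jordan helped popularise in machine learning, the following is a minimal sketch of EM for a two-component, one-dimensional Gaussian mixture. All names here are illustrative and the initialisation scheme is an assumption, not drawn from any of Jordan's publications:

```python
import math

def em_gmm_1d(data, iters=50):
    """Minimal EM sketch for a two-component 1D Gaussian mixture."""
    data = sorted(data)
    n = len(data)
    # Illustrative initialisation: start the means at the quartiles
    # so the two components begin apart.
    mu = [data[n // 4], data[3 * n // 4]]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate weights, means and variances
        # from the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return pi, mu, var
```

Each iteration provably does not decrease the data log-likelihood, which is the property that makes EM attractive for latent-variable models such as the mixture-of-experts architectures studied by Jordan and his collaborators.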
Resignation from the journal Machine Learning
In 2001, Michael Jordan and others resigned from the Editorial Board of the journal Machine Learning. In a public letter, they argued for less restrictive access, and a new journal, the Journal of Machine Learning Research (JMLR), was created to support the evolution of the field of machine learning.
- Zeliadt, N. (2013). "Profile of Michael I. Jordan". Proceedings of the National Academy of Sciences 110 (4): 1141–1143. doi:10.1073/pnas.1222664110. PMID 23341554.
- Jacobs, R. A.; Jordan, M. I.; Nowlan, S. J.; Hinton, G. E. (1991). "Adaptive Mixtures of Local Experts". Neural Computation 3 (1): 79–87. doi:10.1162/neco.1991.3.1.79.
- Blei, D. M.; Ng, A. Y.; Jordan, M. I. (2003). "Latent Dirichlet Allocation". Journal of Machine Learning Research 3: 993–1022.
- Michael I. Jordan, ed. Learning in Graphical Models. Proceedings of the NATO Advanced Study Institute, Ettore Maiorana Centre, Erice, Italy, September 27-October 7, 1996
- Bayesian or Frequentist, Which are You? Machine Learning Summer School (MLSS), Cambridge 2009
- Vitae, Michael I. Jordan. Department of Electrical Engineering and Computer Science, Department of Statistics, University of California. Retrieved September 3, 2013.
- "Long Nguyen's Publications". Stat.lsa.umich.edu. Retrieved 2012-05-21.
- "ACM Names 41 Fellows from World's Leading Institutions — Association for Computing Machinery". Acm.org. Retrieved 2012-05-21.
- Editorial Board of the Kluwer Journal, Machine Learning: Resignation Letter (2001)