Robert M. French
Robert M. French is a research director at the French National Centre for Scientific Research. He is currently at the University of Burgundy in Dijon. He holds a Ph.D. from the University of Michigan, where he worked with Douglas Hofstadter on the Tabletop computational cognitive model. He specializes in cognitive science and has made an extensive study of the process of analogy-making.
French is the inventor of Tabletop, a computer program that forms analogies in a microdomain consisting of everyday objects placed on a table.
He has done extensive research in artificial intelligence and has written several articles about the Turing Test, which was proposed by Alan Turing in 1950 as a means of determining whether an advanced computer can be said to be intelligent. French was long an outspoken critic of the test, suggesting that no computer might ever be able to pass it. More recently, however, he has noted that artificial intelligence is advancing so quickly that a computer might soon be able to pass the test.
He has published work on catastrophic forgetting in neural networks, the Turing test and foundations of cognitive science, the evolution of sex, and categorization and learning in infants, among other topics.
Early life and education
French attended Miami University of Ohio from 1969 to 1972, earning a B.S. in mathematics in three years. From 1972 to 1974 he was at Indiana University, from which he received an M.A. in mathematics.
Early career and doctoral studies
From 1972 to 1974, French worked as a teaching assistant in mathematics at Indiana University. For several months in 1975, he taught mathematics at Hanover College in Hanover, Indiana.
He then moved to France, where from 1976 to 1985 he lived in Paris, working as a freelance translator and interpreter. During his years there, he collaborated with a colleague, Jacqueline Henry, on the French translation of Douglas Hofstadter's bestseller Gödel, Escher, Bach.
French returned to the U.S. in 1985 to become a graduate student in computer science at the University of Michigan, Ann Arbor, where he pursued a Ph.D. under Hofstadter in artificial intelligence/cognitive science. He completed his doctoral work in 1992, receiving a degree in computer science.
His Ph.D. dissertation was entitled Tabletop: An Emergent, Stochastic Computer Model of Analogy-Making. His thesis committee consisted of Hofstadter, John Holland, Daniel Dennett, Arthur Burks, John Laird, and Steve Lytinen.
“The key notion underlying the research presented in this dissertation,” wrote French in his summary of the dissertation, “is my conviction that the cognitive mechanisms giving rise to human analogy-making form the very basis of intelligence. Our ability to perceive and create analogies is made possible by the same mechanisms that drive our ability to categorize, to generalize, and to compare different situations.”
From 1985 to 1992 he was a research assistant in Computer Science at the University of Michigan, Ann Arbor. During this period he was also a Visiting Researcher at CREA, École Polytechnique, Paris (1988), and a Visiting Lecturer in Computer Science at Earlham College in Richmond, Indiana (1991).
He spent several months in 1992 as a postdoctoral fellow at the Center for Research on Concepts and Cognition at Indiana University. From 1992 to 1994, he was Visiting Assistant Professor of Computer Science at Willamette University in Salem, Oregon. From 1994 to 1995, he was a Postdoctoral Fellow at the Department of Psychology at the University of Wisconsin, Madison, and a Lecturer in Cognitive Science in the Department of Educational Psychology at the same institution.
From 1995 to 1998, French was a Research Scientist in the Department of Psychology at the University of Liège. From 1998 to 2000, he was an Associate Professor of Quantitative Psychology and Cognitive Science in the same department, and from 2001 to 2004 a Professor of Quantitative Psychology and Cognitive Science there.
Since 2004, he has been a research director at the French National Centre for Scientific Research (CNRS).
Selected publications
- French, R. (1995). The Subtlety of Sameness: A theory and computer model of analogy-making. Cambridge, MA: MIT Press.
- In his foreword to the book, Daniel Dennett wrote that French “has created a model of human analogy-making that attempts to bridge the gap between classical top-down AI and more recent bottom-up approaches.” French's research, Dennett explained, “is based on the premise that human analogy-making is an extension of our constant background process of perceiving—in other words, that analogy-making and the perception of sameness are two sides of the same coin. At the heart of the author's theory and computer model of analogy-making is the idea that the building-up and the manipulation of representations are inseparable aspects of mental functioning, in contrast to traditional AI models of high-level cognitive processes, which have almost always depended on a clean separation.” Dennett maintained that “French's work is exciting not only because it reveals analogy-making to be an extension of our complex and subtle ability to perceive sameness but also because it offers a computational model of mechanisms underlying these processes. This model makes significant strides in putting into practice microlevel stochastic processing, distributed processing, simulated parallelism, and the integration of representation-building and representation-processing.” Arthur B. Markman of Columbia University, in a review for the International Journal of Neural Systems, described The Subtlety of Sameness as “fascinating.”
- A review in Choice said that “French reveals analogy-making to be an extension of our complex and subtle ability to perceive sameness. His computer program, Tabletop, forms analogies in a microdomain consisting of objects (utensils, cups, drinking glasses, etc.) on a table set for a meal. The theory and the program rely on the idea that stochastic choices made on the microlevel can add up to human-like robustness on a macrolevel. Thousands of program runs attempt to verify this on dozens of interrelated analogy problems in the Tabletop microworld.”
- French, R. M. (2012). Moving Beyond the Turing Test. Communications of the Association for Computing Machinery. French argued that “we need to put aside the attempt to build a machine that can flawlessly imitate humans,” and that can therefore pass the “Turing Test,” formulated by Alan Turing in the mid-20th century. Instead, computer scientists “should accept the computer as a valid interlocutor and interact with it as an interactive, high-level, sophisticated information source.” French declared that he was “convinced no machine will pass a Turing Test, at least not in the foreseeable future... There will remain recondite reaches of human cognition and physiognomy that will be able to serve as the basis for questions used to trip up any machine. So, set the Turing Test aside. I would be perfectly happy if a machine said to me, 'Look, I'm a computer, so don't ask me any questions that require me to have a body to answer, no stuff about what it feels like to fall off a bicycle or have pins and needles in my foot. This fooling you to think I'm a human is passé. I'm not trying to fool you. I'm a computer, ok?'”
- French, R. M. (2012). Dusting off the Turing Test. Science, 336(6180), 160-161. French reported that “[t]wo revolutionary advances in information technology may bring the Turing test out of retirement,” one of them being “the ready availability of vast amounts of raw data” and the other being “the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data.” He invited the reader to suppose “that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible, along with similar data for hundreds of thousands, even millions, of other people,” and that this record of sensory experience could be supplemented with information supplied by tactile and olfactory receptors. Advanced computer researchers, French said, “think that this kind of life-experience recording will become commonplace in the not-too-distant future.” He further asked the reader to assume “that the software exists to catalog, analyze, correlate, and cross-link everything in this sea of data. These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions that tap into facts derived from our embodiment or from our subcognitive associative networks.” Given all this, French asked, “is it so far-fetched to think that the machine might be able to use that data to construct a cognitive and subcognitive network similar to your own? Similar enough, that is, to pass the Turing test.”
- French, R. M. (2000). The Turing Test: the first 50 years. Trends in Cognitive Sciences, 4(3), 115-121. Noting that “no other single article in computer science, and few other articles in science in general, have generated [as] much discussion” as Alan Turing's article about the Turing Test, French chronicles the history of the article's reception, arguing that changing perceptions of the test have “paralleled the changing attitudes in the scientific community towards artificial intelligence: from the unbridled optimism of the 1960s to the current realization of the immense difficulties that still lie ahead.”
- French, R. M. (1996). The Inverted Turing Test: How a simple (mindless) program could pass it. Psycoloquy 7(39) turing-test.6.french. In this article, French argued that the “inverted Turing Test...could be simulated by a standard Turing test” and that “a very simple program with no intelligence whatsoever could be written that would pass the inverted Turing test.” Therefore, “the inverted Turing test in its present form must be rejected.”
- Mareschal, D. and French, R. M. (1997). A connectionist account of interference effects in early infant memory and categorization. Proceedings of the 19th Annual Cognitive Science Society Conference, LEA, 484-489.
- Addyman, C. and French, R. M. (2012). Computational modeling in cognitive science: A manifesto for change. Topics in Cognitive Science, 4(3), 332-341.
- French, R. M., Addyman, C., and Mareschal, D. (2011). TRACX: A Recognition-Based Connectionist Framework for Sequence Segmentation and Chunk Extraction. Psychological Review, 118(4), 614-636.
- Cowell, R. A. and French, R. M. (2011). Noise and the Emergence of Rules in Category Learning: A Connectionist Model. IEEE Transactions on Autonomous Mental Development, 3(3), 194-206. This paper presents “a neural network model of category learning that addresses the question of how rules for category membership are acquired.”
- Thibaut, J.-P., French, R. M., and Vezneva, M. (2010). Cognitive Load and semantic analogies: searching the semantic space. Psychonomic Bulletin and Review, 17(4), 569-574.
- Van Rooy, D., Van Overwalle, F., Vanhoomissen, T., Labiouse, C., and French, R. M. (2003). A Recurrent Connectionist Model of Group Biases. Psychological Review, 110, 536-563.
- French, R. M. (2002). Natura non facit saltum: The need for the full continuum of mental representations. Behavioral and Brain Sciences, 25(3), 339-340.
- Jacquet, M. and French, R. M. (2002). The BIA++: Extending the BIA+ to a dynamical distributed connectionist framework. Bilingualism, 5(3), 202-205.
- Mareschal, D., Quinn, P. C., and French, R. M. (2002). Asymmetric interference in 3- to 4-month-olds' sequential category learning. Cognitive Science, 26, 377-389.
- French, R. M. and Chater, N. (2002). Using Noise to Compute Error Surfaces in Connectionist Networks: A Novel Means of Reducing Catastrophic Forgetting. Neural Computation, 14(7), 1755-1769.
- French, R. M. and Labiouse, C. (2001). Why co-occurrence information alone is not sufficient to answer subcognitive questions. Journal of Experimental and Theoretical Artificial Intelligence, 13(4), 419-429.
- French, R. M. and Thomas, E. (2001). The Dynamical Hypothesis in Cognitive Science: A review essay of Mind As Motion. Minds and Machines, 11(1), 101-111.
- Mareschal, D., French, R. M., and Quinn, P. (2000). A Connectionist Account of Asymmetric Category Learning in Early Infancy. Developmental Psychology, 36, 635-645.
- French, R. M. and Thomas, E. (2000). Why Localist Connectionist Models are Inadequate for Categorization. Behavioral and Brain Sciences, 23(4), 477.