Confabulation (neural networks)
Confabulation is a neural process in a theory of cognition and consciousness according to which all thoughts and ideas, in both biological and synthetic neural networks, originate as false or degraded memories that nucleate upon various forms of neuronal and synaptic fluctuation and damage (Holmes, R., 1997; Thaler, S., 1997a, b). Such novel patterns of neural activation are promoted to ideas when other neural networks perceive utility or value in them (i.e., the thalamo-cortical loop). The exploitation of these false memories by other artificial neural networks forms the basis of inventive artificial intelligence systems currently used in product design (Plotkin, R., 2009), materials discovery (Thaler, S., 1998), and improvisational military robots (Hesman, T., 2004). Compound confabulatory systems of this kind (Thaler, S., 1997) have been used as sensemaking systems for military intelligence and planning (Hesman, T., 2004), as self-organizing control systems for robots (Mayer, H., 2004; Levy, D., 2006), space vehicles, and communications satellites (Patrick, M. C., 2007), and in entertainment (Hesman, T., 2007). The concept of such opportunistic confabulation grew out of experiments with artificial neural networks that simulated brain-cell apoptosis (Yam, P., 1993, 1995), in which it was found that similar mechanisms could drive novel perception and motor planning via both reversible and irreversible neurobiological damage (Thaler, S., 1995).
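The two-network arrangement described above can be illustrated with a minimal sketch: a trained "generator" network is perturbed with synaptic noise so that it emits degraded or novel patterns, while a second "critic" network filters those patterns by a perceived utility score. The network sizes, function names, and scoring rule below are illustrative assumptions, not an implementation taken from the cited works.

# Minimal sketch (illustrative only) of a noise-driven generator network
# whose "false memories" are filtered by a second, critic network.
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """One-layer tanh network; stands in for a trained associative memory."""
    return np.tanh(weights @ x)

# Pretend these weights were learned on some set of stored "memories".
gen_weights = rng.normal(size=(8, 8))
critic_weights = rng.normal(size=(1, 8))

def confabulate(seed_pattern, noise_scale=0.1, n_candidates=100, threshold=0.5):
    """Perturb the generator's weights and keep outputs the critic values."""
    kept = []
    for _ in range(n_candidates):
        # Synaptic fluctuation/damage: add noise to the learned weights.
        noisy = gen_weights + rng.normal(scale=noise_scale, size=gen_weights.shape)
        candidate = forward(noisy, seed_pattern)          # a degraded/novel pattern
        utility = forward(critic_weights, candidate)[0]   # perceived value
        if utility > threshold:
            kept.append((utility, candidate))
    # Candidates the critic values most are "promoted to ideas".
    return sorted(kept, key=lambda p: -p[0])

novel_ideas = confabulate(rng.normal(size=8))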
Confabulation is also a term used in describing inductive reasoning as studied through Bayesian networks. In this usage, confabulation selects the most expected concept to follow a particular context. This is not an Aristotelian deductive process, although it reduces to simple deduction when memory holds only unique events. Most events and concepts, however, occur in multiple, conflicting contexts, so confabulation yields a consensus expectation that may be only marginally more likely than many alternatives; under the theory's winner-take-all constraint, that single event, symbol, concept, or attribute is the one expected. This parallel computation over many contexts is postulated to occur in less than a tenth of a second. Confabulation in this sense grew out of vector-based approaches to information retrieval such as latent semantic analysis and support vector machines. It is currently used to detect credit card fraud and is being implemented computationally on parallel computers.
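The winner-take-all selection described above can be shown with a minimal sketch, assuming the consensus over conflicting contexts is formed by combining simple per-context conditional probabilities estimated from co-occurrence counts; the toy data and scoring rule are illustrative assumptions rather than the published formulation.

# Toy co-occurrence counts: how often each candidate concept was observed
# together with each context symbol (hypothetical data).
counts = {
    "rain":   {"clouds": 30, "umbrella": 25, "sun": 2},
    "snow":   {"clouds": 10, "umbrella": 3,  "sun": 1},
    "picnic": {"clouds": 2,  "umbrella": 1,  "sun": 20},
}

def confabulate(contexts):
    """Return the single winning concept for the given context symbols."""
    best, best_score = None, 0.0
    for concept, table in counts.items():
        total = sum(table.values())
        score = 1.0
        for ctx in contexts:
            # Estimated P(context | concept); unseen contexts get a small
            # floor so a single gap does not zero out the whole score.
            score *= max(table.get(ctx, 0), 0.1) / total
        if score > best_score:
            best, best_score = concept, score
    return best  # winner-take-all: only the top-scoring concept survives

print(confabulate(["clouds", "umbrella"]))  # -> "rain"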
References
Hesman, T. (2004). The Machine That Invents, St. Louis Post-Dispatch, Jan. 25, 2004.
Levy, D. (2006). Robots Unlimited: Life in a Virtual Age. A. K. Peters, Ltd., Wellesley, MA.
Mayer, H. A. (2004). A Modular Neurocontroller for Creative Mobile Autonomous Robots Learning by Temporal Difference, 2004 IEEE Conference on Systems, Man, and Cybernetics, Vol. 6, 10-13 Oct. 2004.
Patrick, M. C., Stevenson-Chavis, K., Thaler, S. L. (2007). Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations, 2007 IEEE Aerospace Conference, 3-10 March 2007, pp. 1-6.
Pickover, C. A. (2005). Sex, Drugs, Einstein, & Elves, SmartPublications, Petaluma, CA, p. 70. (tertiary source quoting Hesman, T.)
Plotkin, R. (2009). The Genie in the Machine: How Computer-Automated Inventing is Revolutionizing Law and Business, Stanford University Press.
Thaler, S. L. (1995). Death of a gedanken creature, Journal of Near-Death Studies, 13(3), Spring 1995.
Thaler, S. L. (1996). A Proposed Symbolism for Network-Implemented Discovery Processes, in Proceedings of the World Congress on Neural Networks (WCNN'96), Lawrence Erlbaum, Mahwah, NJ.
Thaler, S. L. (1997a). U.S. 5,659,666, Device for the Autonomous Generation of Useful Information, Issued 8/19/1997.
Thaler, S. L. (1997b). A Quantitative Model of Seminal Cognition: the creativity machine paradigm, Proceedings of the Mind II Conference, Dublin, Ireland, 1997.
Thaler, S. L. (1998). Predicting ultra-hard binary compounds via cascaded auto- and hetero-associative neural networks, Journal of Alloys and Compounds, 279(1998), 47-59.
Thaler, S. L. (1999). No mystery intended. Neural Networks, Volume 12, Issue 1, January 1999, Pages 193-194.
Yam, P. (1993). "Daisy, Daisy": Do computers have near-death experiences?, Scientific American, May 1993.
Yam, P. (1995). As They Lay Dying, Scientific American, May 1995.