From Wikipedia, the free encyclopedia
Original author(s): OpenCog Developers
Developer(s): OpenCog Foundation
Initial release: 21 January 2008 (2008-01-21)[1]
Written in: C++, Python, Scheme
Type: Artificial general intelligence
License: GNU Affero General Public License

OpenCog is a project that aims to build an open source artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system.[2] OpenCog Prime's design is primarily the work of Ben Goertzel while the OpenCog framework is intended as a generic framework for broad-based AGI research. Research utilizing OpenCog has been published in journals and presented at conferences and workshops including the annual Conference on Artificial General Intelligence. OpenCog is released under the terms of the GNU Affero General Public License.

OpenCog is in use by more than 50 companies, including Huawei and Cisco.[3]


History

OpenCog was originally based on the 2008 release of the source code of the proprietary "Novamente Cognition Engine" (NCE) of Novamente LLC. The original NCE code is discussed in the PLN book (ref below). Ongoing development of OpenCog is supported by the Artificial General Intelligence Research Institute (AGIRI), the Google Summer of Code project, Hanson Robotics, SingularityNET and others.


Components

OpenCog consists of:

  • A collection of pre-defined atoms that encode a type subsystem, including type constructors and function types. These specify the types of variables, terms and expressions, as well as the structure of generic graphs containing variables.
  • A collection of pre-defined atoms that encode a satisfiability modulo theories solver, built in as a part of a generic graph query engine, for performing graph and hypergraph pattern matching (isomorphic subgraph discovery). This generalizes the idea of structured query languages such as SQL to the domain of generic graph queries; it is an extended form of a graph query language.
  • An attention allocation subsystem based on economic theory, termed ECAN.[4] This subsystem is used to control the combinatorial explosion of search possibilities that are met during inference and chaining.
  • An implementation of a probabilistic reasoning engine based on probabilistic logic networks (PLN). The current implementation uses the rule engine to chain together specific rules of logical inference (such as modus ponens), together with some very specific mathematical formulas assigning a probability and a confidence to each deduction. This subsystem can be thought of as a certain kind of proof assistant that works with a modified form of Bayesian inference.
  • A probabilistic genetic program evolver called Meta-Optimizing Semantic Evolutionary Search, or MOSES.[5] This is used to discover collections of short Atomese programs that accomplish tasks; these can be thought of as performing a kind of decision tree learning, resulting in a kind of decision forest, or rather, a generalization thereof.
  • A natural language generation system.[6]
  • Interfaces to Hanson Robotics robots, including emotion modelling[8] via OpenPsi.[7] This includes the Loving AI project, used to demonstrate meditation techniques.
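The pattern-matching idea behind the query engine can be illustrated with a minimal sketch. The following is not the OpenCog API; it is a hypothetical, simplified model in which atoms are nested tuples, variables are strings beginning with "$", and a query returns every variable grounding that makes the pattern match an atom in the store:

```python
# Illustrative sketch only; hypothetical data structures, not the OpenCog API.
# Atoms are modelled as nested tuples; variables are strings starting with "$".

def is_variable(term):
    return isinstance(term, str) and term.startswith("$")

def unify(pattern, atom, bindings):
    """Match a pattern against a concrete atom, extending the
    variable bindings; return None on mismatch."""
    if is_variable(pattern):
        bound = bindings.get(pattern)
        if bound is None:
            extended = dict(bindings)
            extended[pattern] = atom
            return extended
        return bindings if bound == atom else None
    if isinstance(pattern, tuple) and isinstance(atom, tuple):
        if len(pattern) != len(atom):
            return None
        for p, a in zip(pattern, atom):
            bindings = unify(p, a, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == atom else None

def query(atomspace, pattern):
    """Return every grounding of the pattern's variables against
    the store (a toy form of isomorphic subgraph discovery)."""
    results = []
    for atom in atomspace:
        b = unify(pattern, atom, {})
        if b is not None:
            results.append(b)
    return results

atomspace = [
    ("Inheritance", "cat", "mammal"),
    ("Inheritance", "dog", "mammal"),
    ("Inheritance", "mammal", "animal"),
]

# "What inherits from mammal?"
print(query(atomspace, ("Inheritance", "$X", "mammal")))
# → [{'$X': 'cat'}, {'$X': 'dog'}]
```

The real query engine works over a shared hypergraph store and supports far richer patterns (typed variables, multiple clauses, rewriting), but the core operation is the same: find all subgraphs isomorphic to a template containing variables.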

Organization and funding

In 2008, the Machine Intelligence Research Institute (MIRI), formerly called the Singularity Institute for Artificial Intelligence (SIAI), sponsored several researchers and engineers. Many contributions from the open source community have been made since OpenCog's involvement in the Google Summer of Code in 2008 and 2009. MIRI no longer supports OpenCog.[9] OpenCog has received funding and support from several sources, including the Hong Kong government, Hong Kong Polytechnic University, the Jeffrey Epstein VI Foundation[10] and Hanson Robotics. The OpenCog project is currently affiliated with SingularityNET and Hanson Robotics.


Applications

As with other cognitive architectures, a main purpose is to create virtual humans: three-dimensional avatar characters that mimic behaviors such as emotions, gestures and learning. For example, the software includes an emotion module because humans have emotions; the underlying premise is that artificial general intelligence can be realized by simulating human intelligence.[11]

The OpenCog project's own description lists additional possible applications in the direction of natural language processing and the simulation of a virtual dog.[12]

See also


  • Hart, D.; Goertzel, B. (2008). OpenCog: A Software Framework for Integrative Artificial General Intelligence (PDF). Proceedings of the First AGI Conference.


References

  1. ^ "OpenCog Release". 21 January 2008. Retrieved 21 January 2008.
  2. ^ "OpenCog: Open-Source Artificial General Intelligence for Virtual Worlds | CyberTech News". 2009-03-06. Archived from the original on 2009-03-06. Retrieved 2016-10-01.
  3. ^ Rogers, Stewart (2017-12-07). "SingularityNET talks collaborative AI as its token sale hits 400% oversubscription". VentureBeat. Retrieved 2018-03-13.
  4. ^ "Economic Attention Allocation".
  5. ^ "MOSES".
  6. ^ "Natural Language Generation".
  7. ^ "OpenPsi".
  8. ^ "Emotion modeling - Hanson Robotics Wiki". Archived from the original on 2018-03-19. Retrieved 2015-04-24.
  9. ^ Ben Goertzel (2010-10-29). "The Singularity Institute's Scary Idea (and Why I Don't Buy It)". The Multiverse According to Ben. Retrieved 2011-06-24.
  10. ^ "Science Funder Jeffrey Epstein Launches Radical Emotional Software". Forbes. Oct 2, 2013.
  11. ^ David Burden; Maggi Savin-Baden (24 January 2019). Virtual Humans: Today and Tomorrow. CRC Press. ISBN 978-1-351-36526-0. Retrieved 25 August 2020.
  12. ^ Ben Goertzel; Cassio Pennachin; Nil Geisweiller (8 July 2014). Engineering General Intelligence, Part 1: A Path to Advanced AGI via Embodied Learning and Cognitive Synergy. Springer. pp. 23–. ISBN 978-94-6239-027-0.

External links