Computational intelligence

From Wikipedia, the free encyclopedia

There is no commonly accepted definition of computational intelligence. For Bezdek (1994), "a system is called computationally intelligent if it deals with low-level data such as numerical data, if it has a pattern-recognition component and if it does not use knowledge as exact and complete as the Artificially Intelligent one".[1]

Generally, however, computational intelligence is a set of nature-inspired computational methodologies and approaches for addressing complex real-world problems to which mathematical or traditional modelling is of little use, for several reasons: the process might be too complex for mathematical reasoning, it might contain uncertainties, or it might simply be stochastic in nature.[1] Indeed, many real-life problems cannot be translated into the binary language of computers (values of 0 and 1 only) for processing. Computational intelligence provides solutions for such problems.

The methods used are close to the human way of reasoning: they use inexact and incomplete knowledge, and they can produce control actions in an adaptive way. CI therefore uses a combination of five main complementary techniques:[1] fuzzy logic, which enables the computer to understand natural language;[2][3] artificial neural networks, which let the system learn from experiential data by operating like their biological counterparts; evolutionary computation, which is based on the process of natural selection; learning theory; and probabilistic methods, which help deal with uncertainty and imprecision.[1]

Beyond these main principles, currently popular approaches include biologically inspired algorithms such as swarm intelligence[4] and artificial immune systems, which can be seen as part of evolutionary computation, as well as image processing, data mining, natural language processing, and artificial intelligence, with which computational intelligence is often confused. Although both computational intelligence (CI) and artificial intelligence (AI) pursue similar goals, there is a clear distinction between them.

Computational intelligence is thus a way of performing like human beings. Indeed, the attribute "intelligent" is usually reserved for humans. More recently, many products and devices have also been claimed to be "intelligent", an attribute directly linked to reasoning and decision making.


The notion of computational intelligence was first used by the IEEE Neural Networks Council in 1990. The Council had been founded in the 1980s by a group of researchers interested in the development of biological and artificial neural networks. On November 21, 2001, the IEEE Neural Networks Council became the IEEE Neural Networks Society, which in turn became the IEEE Computational Intelligence Society two years later, after incorporating new areas of interest such as fuzzy systems and evolutionary computation, fields that Dote and Ovaska related to computational intelligence in 2011.[5]

The first clear definition of computational intelligence, however, was introduced by Bezdek in 1994:[1] a system is called computationally intelligent if it deals with low-level data such as numerical data, has a pattern-recognition component and does not use knowledge in the AI sense, and, additionally, when it begins to exhibit computational adaptivity, fault tolerance, speed approaching human-like turnaround, and error rates that approximate human performance.

Bezdek and Marks (1993) clearly differentiated CI from AI by arguing that the former is based on soft computing methods, whereas AI is based on hard computing ones.

Difference between Computational and Artificial Intelligence[edit]

Although artificial intelligence and computational intelligence pursue a similar long-term goal, namely general intelligence (the intelligence of a machine that could perform any intellectual task a human being can), there is a clear difference between them. According to Bezdek (1994), computational intelligence is a subset of artificial intelligence.

There are two types of machine intelligence: artificial intelligence, based on hard computing techniques, and computational intelligence, based on soft computing methods, which enable adaptation to many situations.

Hard computing techniques follow binary logic, which works with only two values (the Booleans true and false, or 0 and 1) and on which modern computers are based. One of the main problems with this logic is that natural language cannot always be translated easily into absolute terms of 0 and 1. This is where soft computing techniques step in, based on a different logic: fuzzy logic.[6] Much closer to the way the human brain works, aggregating data into partial truths, this logic is one of the main exclusive aspects of CI.

The same contrast between binary and fuzzy logic carries over to crisp and fuzzy systems.[7] Crisp systems, part of the artificial-intelligence tradition, either include an element in a set or exclude it entirely, whereas fuzzy systems (CI) allow an element to belong to a set partially. Following this logic, each element can be given a degree of membership between 0 and 1, not exclusively one of those two values.[8]
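The distinction between crisp and fuzzy membership can be sketched in a few lines of Python (the set "tall people" and all threshold values here are invented for illustration; they are not from the cited sources):

```python
def crisp_tall(height_cm: float) -> int:
    """Crisp (binary) membership: a person is either tall (1) or not (0)."""
    return 1 if height_cm >= 180 else 0

def fuzzy_tall(height_cm: float) -> float:
    """Fuzzy membership: a degree between 0 and 1, rising linearly
    from 160 cm (not tall at all) to 190 cm (fully tall)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

print(crisp_tall(179))   # 0 -- just below the cut-off, counted as "not tall"
print(fuzzy_tall(179))   # ~0.63 -- mostly, but not entirely, tall
```

A height of 179 cm is rejected outright by the crisp set but receives a membership degree of about 0.63 in the fuzzy set, which is closer to how people actually use the word "tall".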

The five main principles of CI and their applications[edit]

The main applications of Computational Intelligence include computer science, engineering, data analysis and bio-medicine.

Fuzzy Logic[edit]

As explained above, fuzzy logic, one of CI's main principles, is used for measurement and process modelling of complex real-life processes.[3] It can handle incompleteness and, most importantly, ignorance of data in a process model, unlike artificial intelligence, which requires exact knowledge.

This technique applies to a wide range of domains, such as control, image processing and decision making. It is also well established in household appliances such as washing machines and microwave ovens, and in video cameras, where it helps stabilize the image when the camera is held unsteadily. Other areas, such as medical diagnostics, foreign exchange trading and business strategy selection, are among this principle's many applications.[1]

Fuzzy logic is thus mainly suited to approximate reasoning, but it lacks the learning abilities[1] that human beings rely on: the ability to improve by learning from previous mistakes.
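The kind of approximate reasoning used in such controllers can be sketched with a toy fuzzy fan-speed controller (all membership functions, rules and output values here are invented for illustration): the temperature is fuzzified into the linguistic terms "cool", "warm" and "hot", and the rule outputs are combined with a weighted average, a simple Sugeno-style defuzzification:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c: float) -> float:
    # Fuzzify: degrees to which the temperature is "cool", "warm" and "hot".
    cool = tri(temp_c, 0, 10, 20)
    warm = tri(temp_c, 15, 25, 35)
    hot  = tri(temp_c, 30, 40, 50)
    # Rules map each linguistic term to a representative output speed (%).
    weights = [cool, warm, hot]
    speeds  = [10.0, 50.0, 90.0]
    total = sum(weights)
    if total == 0:
        return 0.0  # no rule fires outside the modelled range
    # Defuzzify: weighted average of the rule outputs.
    return sum(w * s for w, s in zip(weights, speeds)) / total

print(fan_speed(25))  # 50.0 -- fully "warm", so medium speed
print(fan_speed(32))  # 66.0 -- partly "warm" (0.3), partly "hot" (0.2)
```

At 32 °C the input is partially "warm" and partially "hot" at the same time, and the output blends the two rules accordingly; a binary controller would have to pick exactly one rule.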

Neural Networks[edit]

This is why CI experts work on artificial neural networks modelled on biological ones, which can be defined by three main components: the cell body, which processes the information; the axon, which conducts the signal; and the synapse, which controls signals. Artificial neural networks are therefore endowed with distributed information-processing capabilities,[9] enabling them to process and learn from experiential data. As in human beings, fault tolerance is also one of this principle's main assets.[1]

Concerning its applications, neural networks can be classified into five groups: data analysis and classification, associative memory, clustering, generation of patterns, and control.[1] Generally, this method is used to analyse and classify medical data, to perform face and fraud detection, and, most importantly, to deal with the nonlinearities of a system in order to control it.[10] Neural-network techniques also share with fuzzy logic the advantage of enabling data clustering.
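Learning from experiential data can be illustrated with the simplest possible neural network, a single perceptron (the task, the learning rate and the epoch count here are chosen purely for illustration): it learns the logical AND function from labelled examples by repeatedly nudging its weights toward the desired output.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single threshold unit with the classic perceptron rule."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Forward pass: fire (1) if the weighted sum exceeds zero.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Perceptron rule: adjust weights in proportion to the error.
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Experiential data: the truth table of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Nothing in the code states the AND rule explicitly; the behaviour emerges from the examples, which is the point of this family of techniques.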

Evolutionary Computation[edit]

Based on the process of natural selection first described by Charles Darwin, evolutionary computation capitalizes on the strength of natural evolution to develop new artificial evolutionary methodologies.[11] It also includes related areas such as evolution strategies and evolutionary algorithms, which are seen as problem solvers. This principle's main applications cover areas such as optimization and multi-objective optimization, where traditional mathematical techniques no longer suffice for problems such as DNA analysis and scheduling.[1]
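The selection–crossover–mutation loop at the heart of these methods can be sketched on a toy optimization task (the "OneMax" problem of evolving an all-ones bit string; the population size, mutation rate and other parameters here are invented for illustration):

```python
import random

random.seed(42)  # reproducible run

def fitness(bits):
    """Number of 1s in the string; the optimum is the all-ones string."""
    return sum(bits)

def evolve(length=12, pop_size=30, generations=60, mutation_rate=0.05):
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover: splice two random parents at a random cut point.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            # Mutation: occasionally flip a bit to keep exploring.
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), len(best))
```

No gradient or algebraic model of the problem is needed; only a fitness function is required, which is why these methods apply where traditional mathematical optimization does not.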

Learning Theory[edit]

Still looking for a way of "reasoning" close to the human one, learning theory is one of the main approaches of CI. In psychology, learning is the process of bringing together cognitive, emotional and environmental effects and experiences to acquire, enhance or change knowledge, skills, values and world views (Ormrod, 1995; Illeris, 2004).[1] Learning theories help us understand how these effects and experiences are processed, and then make predictions based on previous experience.[12]

Probabilistic Methods[edit]

One of the main elements of fuzzy logic, probabilistic methods, first introduced by Paul Erdős and Joel Spencer (1974),[1] aim to evaluate the outcomes of a computationally intelligent system, which are mostly defined by randomness.[13] Probabilistic methods therefore bring out the possible solutions to a reasoning problem, based on prior knowledge.
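Reasoning from prior knowledge under randomness can be illustrated with Bayes' rule (the disease prevalence and test characteristics below are invented numbers, chosen only to make the arithmetic concrete): a prior probability is updated into a posterior after new evidence arrives.

```python
def bayes_posterior(prior: float, sensitivity: float,
                    false_positive_rate: float) -> float:
    """P(condition | positive test), by Bayes' rule."""
    # Total probability of a positive test, over both populations.
    p_pos = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_pos

# Prior knowledge: 1% of the population has the condition; the test detects
# 90% of true cases but also flags 5% of healthy people.
posterior = bayes_posterior(prior=0.01, sensitivity=0.9,
                            false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154 -- a positive test is far from conclusive
```

Even a fairly accurate test yields only about a 15% posterior here, because the prior is so low; this dependence of the answer on prior knowledge is exactly what the paragraph above describes.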

References[edit]

  1. ^ Siddique, Nazmul; Adeli, Hojjat (2013). Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing. John Wiley & Sons. ISBN 978-1-118-53481-6.
  2. ^ Rutkowski, Leszek (2008). Computational Intelligence: Methods and Techniques. Springer. ISBN 978-3-540-76288-1.
  3. ^ Rouse, Margaret (July 2006). "Fuzzy Logic".
  4. ^ Beni, G.; Wang, J. (1989). "Swarm Intelligence in Cellular Robotic Systems". Proceedings of the NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26–30.
  5. ^ "IEEE Computational Intelligence Society History". Engineering and Technology History Wiki. 22 July 2014. Retrieved 2015-10-30.
  6. ^ "Artificial Intelligence, Computational Intelligence, SoftComputing, Natural Computation - what's the difference? - ANDATA". Retrieved 2015-11-05.
  7. ^ "Fuzzy Sets and Pattern Recognition". Retrieved 2015-11-05.
  8. ^ Pfeifer, R. (2013). "Chapter 5: Fuzzy Logic". Lecture notes on "Real-world computing". Zurich: University of Zurich.
  9. ^ Stergiou, Christos; Siganos, Dimitrios. SURPRISE 96 Journal. Imperial College London.
  10. ^ Somers, Mark John; Casal, Jose C. (July 2009). "Using Artificial Neural Networks to Model Nonlinearity" (PDF). SAGE Journals. Retrieved 2015-10-31.
  11. ^ De Jong, K. (2006). Evolutionary Computation: A Unified Approach. MIT Press. ISBN 9780262041942.
  12. ^ Worrell, James. "Computational Learning Theory: 2014-2015". University of Oxford. Retrieved 2015-11-02.
  13. ^ Palit, Ajoy K.; Popovic, Dobrivoje (2006). Computational Intelligence in Time Series Forecasting: Theory and Engineering Applications. Springer Science & Business Media. p. 4. ISBN 9781846281846.