Logic in computer science

[Image: diagrammatic representation of computer logic gates]

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can be divided into three main areas:

  • Theoretical foundations and analysis
  • Use of computer technology to aid logicians
  • Use of concepts from logic for computer applications

Theoretical foundations and analysis

The most essential foundations of computer science are grounded in logic and set theory. The logician Gottlob Frege, who defined the first formal predicate calculus, in effect created the first programming language: the language he defined has all the formal requirements of a powerful programming and specification language. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing.[1][2] Other major areas of theoretical overlap between logic and computer science include:

  • Gödel's incompleteness theorem proves that any consistent logical system powerful enough to characterize arithmetic contains statements that can be neither proved nor refuted within that system. This bears directly on theoretical questions about the feasibility of proving the completeness and correctness of software.[3]
  • The frame problem is a basic problem that must be overcome when using first-order logic to represent the goals and state of an artificial intelligence agent: the representation must capture not only what an action changes, but also everything that it leaves unchanged.[4]
  • Category theory is the abstract study of mathematical structures in terms of objects and the morphisms between them. It has applications in computer science, most notably in the semantics of programming languages and in compilers.[5]
  • The Curry–Howard correspondence is a formal relationship between logical systems and programs: propositions correspond to types, and proofs correspond to programs. It established the theoretical foundation for viewing a computer program as a formal logical statement that can be proved correct and consistent (see the sketch after this list).
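
The correspondence is easiest to see in a proof assistant. The following is a minimal sketch in Lean (the theorem names are illustrative), in which each statement is simultaneously a logical proposition and the type of the small program that proves it:

```lean
-- Under the Curry–Howard correspondence, a proposition is a type and a
-- proof is a program inhabiting that type.

-- "A implies A" is proved by the identity function.
theorem identity (A : Prop) : A → A :=
  fun a => a

-- Modus ponens: from "A implies B" and "A", conclude "B";
-- the proof term is just function application.
theorem modusPonens (A B : Prop) : (A → B) → A → B :=
  fun f a => f a

-- Conjunction introduction corresponds to forming a pair.
theorem andIntro (A B : Prop) : A → B → A ∧ B :=
  fun a b => ⟨a, b⟩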

Computers to assist logicians

One of the first applications described as artificial intelligence was the Logic Theorist, a system developed by Allen Newell, J. C. Shaw, and Herbert Simon in 1956. One of the things that a logician does is to take a set of logical statements and deduce the conclusions (additional statements) that must follow from them by the laws of logic. For example, given a logical system that states "All humans are mortal" and "Socrates is human", a valid conclusion is "Socrates is mortal". This is, of course, a trivial example; in actual logical systems the statements can be numerous and complex, and it was realized early on that this kind of analysis could be significantly aided by computers. The Logic Theorist re-derived many of the theorems of Bertrand Russell and Alfred North Whitehead's influential work on mathematical logic, Principia Mathematica. Subsequent systems have been used by logicians to validate and discover new logical theorems and proofs.[6]
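
A toy sketch of such mechanical deduction in Python is shown below; it is illustrative only and is not the Logic Theorist's actual algorithm. Facts are represented as (predicate, subject) pairs, and forward chaining applies each rule repeatedly until no new statements can be derived:

```python
# A toy sketch of mechanical deduction (illustrative only; this is not the
# Logic Theorist's actual algorithm). Facts are (predicate, subject) pairs,
# and each rule reads "for all x: premise(x) implies conclusion(x)".

facts = {("human", "Socrates")}
rules = [("human", "mortal")]  # "All humans are mortal"

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until a fixed point is reached."""
    derived = set(facts)
    while True:
        new = {(conclusion, x)
               for premise, conclusion in rules
               for predicate, x in derived
               if predicate == premise} - derived
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
# {('human', 'Socrates'), ('mortal', 'Socrates')} -- i.e. "Socrates is mortal"
```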

Logic applications for computers

Mathematical logic has always strongly influenced the field of artificial intelligence (AI). From the beginning of the field it was realized that technology to automate logical inference could have great potential to solve problems and draw conclusions from facts. Ron Brachman has described first-order logic (FOL) as the metric by which all AI knowledge representation formalisms should be evaluated: no known method is more general or powerful for describing and analyzing information. The reason FOL itself is not simply used as a computer language is that it is actually too expressive, in the sense that FOL can easily express statements that no computer, no matter how powerful, could ever decide. For this reason every form of knowledge representation is, in some sense, a trade-off between expressivity and computability: the more expressive a language is (that is, the closer it comes to FOL), the more likely its reasoning is to be slow, or even to run forever.[7]

For example, the IF–THEN rules used in expert systems are a very limited subset of FOL. Rather than arbitrary formulas with the full range of logical operators, the starting point is simply what logicians call modus ponens. As a result, rule-based systems can be computed quite efficiently, especially when they take advantage of optimization algorithms and compilation.[8]
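
A minimal sketch of such a rule-based system follows, with hypothetical rules chosen purely for illustration; each rule fires by modus ponens when all of its IF-conditions are among the known facts:

```python
# A minimal sketch of a rule-based system (the rules are hypothetical and
# purely illustrative). Each rule fires by modus ponens: when every one of
# its IF-conditions is a known fact, its THEN-conclusion is asserted.

rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]
facts = {"has_fever", "has_cough"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # modus ponens: premises hold, assert conclusion
            changed = True

print(facts)
# {'has_fever', 'has_cough', 'suspect_flu', 'recommend_rest'}
```

Engines built on the Rete algorithm perform this matching far more efficiently by compiling the rule conditions into a shared network rather than re-testing every rule against every fact on each cycle.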

Another major area of research for logical theory is software engineering. Research projects such as the Knowledge-Based Software Assistant and the Programmer's Apprentice applied logical theory to validate the correctness of software specifications, to transform specifications into efficient code on diverse platforms, and to prove the equivalence between an implementation and its specification.[9] This formal, transformation-driven approach often requires far more effort than traditional software development; however, in specific domains with appropriate formalisms and reusable templates it has proven viable for commercial products. The appropriate domains are usually those, such as weapons systems, security systems, and real-time financial systems, where failure has excessively high human or financial cost. An example of such a domain is very-large-scale integration (VLSI), the process of designing the chips used for the CPUs and other critical components of digital devices. An error in a chip is catastrophic: unlike software, chips cannot be patched or updated once deployed. As a result there is commercial justification for using formal methods to prove that an implementation corresponds to its specification.[10]
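
As a toy illustration of checking that an implementation corresponds to its specification, the following Python sketch verifies a gate-level full adder against its arithmetic specification by exhausting the finite input space (industrial VLSI verification relies on symbolic techniques such as binary decision diagrams and SAT solvers rather than enumeration):

```python
# A toy illustration of proving that an implementation matches its
# specification: a gate-level full adder is checked against its arithmetic
# specification over the entire (finite) input space.

from itertools import product

def full_adder_gates(a, b, cin):
    """Implementation: built only from XOR, AND, and OR gates."""
    partial = a ^ b
    total_sum = partial ^ cin
    carry_out = (a & b) | (partial & cin)
    return total_sum, carry_out

def full_adder_spec(a, b, cin):
    """Specification: the arithmetic meaning of a one-bit full adder."""
    total = a + b + cin
    return total % 2, total // 2

assert all(full_adder_gates(a, b, c) == full_adder_spec(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
print("implementation matches specification on all 8 inputs")
```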

Another important application of logic to computer technology has been in the area of frame languages and automatic classifiers. Frame languages such as KL-ONE have a strict, formally defined semantics: definitions in KL-ONE can be mapped directly to set theory and the predicate calculus. This allows specialized theorem provers, called classifiers, to analyze the declared relations among the sets, subsets, and relations in a given model. In this way the model can be validated and any inconsistent definitions flagged. The classifier can also infer new information, for example defining new sets based on existing information or changing the definition of existing sets based on new data. This level of flexibility is well suited to the ever-changing world of the Internet. Classifier technology is built on top of languages such as the Web Ontology Language (OWL) to add a logically rigorous semantic layer to the existing Internet; this layer is called the Semantic Web.[11][12]
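
A minimal sketch of what a classifier computes, in Python with hypothetical concept definitions: if each concept is modeled as the set of properties its instances must have, then one concept subsumes another exactly when its requirements are a subset of the other's, and the subsumption hierarchy can be derived rather than stated by hand:

```python
# A minimal sketch of KL-ONE-style classification (the concept definitions
# are hypothetical). Concept A subsumes concept B exactly when A's
# required properties are a subset of B's.

concepts = {
    "Person": {"animate"},
    "Parent": {"animate", "has_child"},
    "Mother": {"animate", "has_child", "female"},
}

def subsumes(general, specific):
    """True when every requirement of `general` also holds of `specific`."""
    return concepts[general] <= concepts[specific]

# The classifier derives the hierarchy instead of having it stated by hand.
for a in concepts:
    for b in concepts:
        if a != b and subsumes(a, b):
            print(a, "subsumes", b)
# Person subsumes Parent
# Person subsumes Mother
# Parent subsumes Mother
```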

References
  1. ^ Lewis, Harry R.; Christos H. Papadimitriou (1981). Elements of the Theory of Computation. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-273417-6. 
  2. ^ Davis, Martin. "Influences of Mathematical Logic on Computer Science". In Rolf Herken. The Universal Turing Machine. Springer Verlag. Retrieved 26 December 2013. 
  3. ^ Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0465026562. 
  4. ^ McCarthy, J; P.J. Hayes (1969). "Some philosophical problems from the standpoint of artificial intelligence". Machine Intelligence 4: 463–502. 
  5. ^ DeLoach, Scott; Thomas Hartrum (June 2000). "A Theory Based Representation for Object-Oriented Domain Models". IEEE Transactions on Software Engineering 25 (6). 
  6. ^ Newell, Allen; J.C. Shaw and H.A. Simon (1963). "Empirical explorations with the logic theory machine". In Ed Feigenbaum. Computers and Thought. McGraw-Hill. pp. 109–133. ISBN 978-0262560924. 
  7. ^ Levesque, Hector; Ronald Brachman (1985). "A Fundamental Tradeoff in Knowledge Representation and Reasoning". In Ronald Brachman and Hector J. Levesque. Readings in Knowledge Representation. Morgan Kaufmann. p. 49. ISBN 0-934613-01-X. The good news in reducing KR service to theorem proving is that we now have a very clear, very specific notion of what the KR system should do; the bad news is that it is also clear that the services can not be provided... deciding whether or not a sentence in FOL is a theorem... is unsolvable. 
  8. ^ Forgy, Charles (1982). "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem*". Artificial Intelligence 19: 17–37. doi:10.1016/0004-3702(82)90020-0. Retrieved 25 December 2013. 
  9. ^ Rich, Charles; Richard C. Waters (November 1987). "The Programmer's Apprentice Project: A Research Overview". IEEE Expert Special Issue on the Interactions between Expert Systems and Software Engineering. Retrieved 26 December 2013. 
  10. ^ Stavridou, Victoria (1993). Formal Methods in Circuit Design. Press Syndicate of the University of Cambridge. ISBN 0-521-44336-9. Retrieved 26 December 2013. 
  11. ^ MacGregor, Robert (June 1991). "Using a description classifier to enhance knowledge representation". IEEE Expert 6 (3). Retrieved 10 November 2013. 
  12. ^ Berners-Lee, Tim; James Hendler and Ora Lassila (May 17, 2001). "The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities". Scientific American. 

