Logic programming

Logic programming is a programming, database and knowledge representation and reasoning paradigm based on formal logic. A program, database or knowledge base in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:

A :- B1, …, Bn.

and are read declaratively as logical implications:

A if B1 and … and Bn.

A is called the head of the rule, B1, ..., Bn is called the body, and the Bi are called literals. When n = 0, the rule is called a fact and is written in the simplified form:

A.

In the simplest case in which A, B1, ..., Bn are all atomic formulae, these clauses are called definite clauses or Horn clauses.

However, there are many extensions of this simple case, the most important one being the case in which conditions in the body of a clause can also be negations of atomic formulas. Logic programming languages that include this extension have the knowledge representation capabilities of a non-monotonic logic.

In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be controlled by the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures:

to solve A, solve B1, and ... and solve Bn.

In the propositional case, given a goal of the form:

?- A, A1, …, Am.

with selected subgoal A that is an atomic formula, goal reduction generates the new goal:

?- B1, …, Bn, A1, …, Am.

If n=0, the subgoal is solved, and the new goal consists of the remaining subgoals:

?- A1, …, Am.

This general-purpose, procedural reading of logic programs can be used by a logic programmer to write a program that behaves as a desired special-purpose procedure.

Consider the following clause:

fallible(X) :- human(X).

based on an example used by Terry Winograd[1] to illustrate the programming language Planner. As a clause in a logic program, it can be used both as a procedure to test whether X is fallible by testing whether X is human, and as a procedure to find an X which is fallible by finding an X which is human. Even facts have a procedural interpretation. For example, the clause:

human(socrates).

can be used both as a procedure to show that socrates is human, and as a procedure to find an X that is human by "assigning" socrates to X.
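
For example, loading the two clauses above into a standard Prolog system and posing queries at the top level illustrates both uses (a sketch of typical top-level interaction):

?- human(socrates).
true.

?- human(X).
X = socrates.

?- fallible(X).
X = socrates.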

Although Horn clause logic programs are Turing complete,[2][3] for most practical applications, as well as for applications that require non-monotonic reasoning in artificial intelligence, Horn clause programs need to be extended to "normal" logic programs with negative conditions.

In the Prolog family of logic programming languages, negative literals in the bodies of clauses also have a procedural interpretation, known as negation as failure: A negative literal not B is deemed to hold if and only if the positive literal B fails to hold.

Consider, for example, the program:

canfly(X) :- bird(X), not abnormal(X).
abnormal(X) :- wounded(X).
bird(john).
bird(mary).
wounded(john).

Given the goal of finding an X that can fly:

?- canfly(X).

the first clause reduces the goal to the two subgoals:

?- bird(X), not abnormal(X).

The first subgoal bird(X) has two candidate solutions X = john and X = mary. The first candidate yields the instantiated second subgoal:

?- not abnormal(john).

This negative subgoal fails, because the corresponding positive subgoal succeeds:

?- abnormal(john).
true.

Therefore the candidate solution X = john of the top-level goal also fails.

In contrast, the alternative candidate solution X = mary of the top-level goal succeeds, because the instantiated second subgoal not abnormal(mary) succeeds: the corresponding positive subgoal abnormal(mary) fails.

Much of the research in the field of logic programming has been concerned with trying to develop a logical semantics for negation as failure and with developing other semantics and other implementations for negation. These developments have been important, in turn, for supporting the development of formal methods for logic-based program verification and program transformation.

History

The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the clausal form of logic for representing computer programs was made by Cordell Green.[4] This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language that places no constraints on the order in which operations are performed.[5]

Logic programming, with its current syntax of facts and rules, can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in artificial intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert.[6]

Although it was based on the proof methods of logic, Planner, developed at MIT, was the first language to emerge within this proceduralist paradigm.[7] Planner featured pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was the subset of Planner, called Micro-Planner, implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. It was used to implement Winograd's natural-language understanding program SHRDLU, which was a landmark at that time.[1] To cope with the very limited memory systems at the time, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the programming languages QA4,[8] Popler,[9] Conniver,[10] QLISP,[11] and the concurrent language Ether.[12]

Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover.[13]

Meanwhile, Alain Colmerauer in Marseille was working on natural-language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of 1971, Colmerauer invited Kowalski to Marseille, and together they discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem provers, like hyper-resolution, behave as bottom-up parsers, and others, like SL resolution (1971),[14] behave as top-down parsers.

It was in the following summer of 1972 that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications in clausal form. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, and that SL resolution could be restricted (and generalised) to SLD resolution. Kowalski's procedural interpretation and SLD resolution were described in a 1973 memo, published in 1974.[15]

Colmerauer, with Philippe Roussel, used the procedural interpretation as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David H. D. Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp.[16] Edinburgh Prolog became the de facto standard and strongly influenced the definition of ISO standard Prolog.

Logic programming gained international attention during the 1980s, when it was chosen by the Japanese Ministry of International Trade and Industry to develop the software for the Fifth Generation Computer Systems (FGCS) project. The FGCS project aimed to use logic programming to develop advanced Artificial Intelligence applications on massively parallel computers. Although the project initially explored the use of Prolog, it later adopted the use of concurrent logic programming, because it was closer to the FGCS computer architecture.

However, the committed choice feature of concurrent logic programming interfered with the language's logical semantics and with its suitability for knowledge representation and problem solving applications. Moreover, the parallel computer systems developed in the project failed to compete with advances taking place in the development of more conventional, general-purpose computers. Together these two issues resulted in the FGCS project failing to meet its objectives. Interest in both logic programming and AI fell into world-wide decline.

Meanwhile, more declarative logic programming approaches, including those based on the use of Prolog, continued to make progress independently of the FGCS project. In particular, although Prolog was developed to combine declarative and procedural representations of knowledge, the purely declarative interpretation of logic programs became the focus for applications in the field of deductive databases. Work in this field became prominent around 1977, when Hervé Gallaire and Jack Minker organized a workshop on logic and databases in Toulouse.[17] The field was eventually renamed as Datalog.

This focus on the logical, declarative reading of logic programs was given further impetus by the development of constraint logic programming in the 1980s and Answer Set Programming in the 1990s. It is also receiving renewed emphasis in recent applications of Prolog.[18]

The Association for Logic Programming (ALP) was founded in 1986 to promote logic programming. Its official journal until 2000 was The Journal of Logic Programming. Its founding editor-in-chief was J. Alan Robinson.[19] In 2001, the journal was renamed The Journal of Logic and Algebraic Programming, and the official journal of ALP became Theory and Practice of Logic Programming, published by Cambridge University Press.

Concepts

Logic programs enjoy a rich variety of semantics and problem solving methods, as well as a wide range of applications in programming, databases, knowledge representation and problem solving.

Algorithm = Logic + Control

The procedural interpretation of logic programs, which uses backward reasoning to reduce goals to subgoals, is a special case of the use of a problem-solving strategy to control the use of a declarative, logical representation of knowledge to obtain the behaviour of an algorithm. More generally, different problem-solving strategies can be applied to the same logical representation to obtain different algorithms.[20]

In the Prolog family of logic programming languages, backward reasoning is used to solve subgoals in the order in which the subgoals are written, and different clauses whose heads match the same goal or subgoal are also used in the order in which the clauses are written. However, other variations of this problem-solving strategy can also be used. For example, subgoals can be solved in parallel, and clauses can also be tried in parallel. The first strategy is called and-parallelism and the second strategy is called or-parallelism.

In contrast with Prolog, in the Datalog family of logic programming languages, forward (or bottom-up) reasoning derives new facts from existing facts, by matching the existing facts with the conditions in the body of a clause, and deriving the instantiated head of the clause as a new fact. This process continues either until there are no new facts that can be derived in this way, or until some of the existing (initial and derived) facts directly answer a query to be solved. The range-restriction (that all variables in the head of a clause also occur in the body of the clause) ensures that initial facts and derived facts contain no variables. For example, consider the program shown below.
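
The following program, a standard Datalog illustration of transitive closure (not part of this article's original examples), defines reachability in a directed graph:

edge(a, b).
edge(b, c).

path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

Forward reasoning derives path(a, b) and path(b, c) using the first rule, then path(a, c) using the second rule, and then terminates, because no further new facts can be derived. Both rules are range-restricted, so all derived facts are variable-free.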

In most cases, backward reasoning from a query or goal is more efficient than forward reasoning. But sometimes with Datalog and Answer Set Programming, there may be no query that is separate from the set of clauses as a whole, and then generating all the facts that can be derived from the clauses is a sensible problem-solving strategy. Here is another example, where forward reasoning beats backward reasoning in a more conventional computation task, where the goal ?- fibonacci(n, Result) is to find the nth Fibonacci number:

fibonacci(0, 0).
fibonacci(1, 1).

fibonacci(N, Result) :-
    N > 1,
    N1 is N - 1,
    N2 is N - 2,
    fibonacci(N1, F1),
    fibonacci(N2, F2),
    Result is F1 + F2.

Here the relation fibonacci(N, M) stands for the function fibonacci(N) = M, and a subgoal of the form N is Expression is Prolog notation for instantiating the variable N to the value of Expression.

Given the goal of computing the Fibonacci number of n, backward reasoning reduces the goal to the two subgoals of computing the Fibonacci numbers of n-1 and n-2. It reduces the subgoal of computing the Fibonacci number of n-1 to the two subgoals of computing the Fibonacci numbers of n-2 and n-3, redundantly computing the Fibonacci number of n-2. This process of reducing one Fibonacci subgoal to two Fibonacci subgoals continues until it reaches the numbers 0 and 1. Its complexity is of the order 2^n. In contrast, forward reasoning generates the sequence of Fibonacci numbers, starting from 0 and 1, without any recomputation, and its complexity is linear with respect to n.

Prolog cannot perform forward reasoning directly. But it can achieve the effect of forward reasoning within the context of backward reasoning by means of tabling: Subgoals are maintained in a table, along with their solutions. If a subgoal is re-encountered, it is solved directly by using the solutions already in the table, instead of re-solving the subgoals redundantly.
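
For example, in Prolog systems that support tabling, such as XSB and SWI-Prolog, a single directive applied to the fibonacci program above achieves this effect (a sketch, assuming such a system):

:- table fibonacci/2.

With this directive, each distinct subgoal fibonacci(I, F) is solved at most once and its solution is stored in the table, so the number of goal reductions becomes linear in n.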

Problem solving

In the simple case of a propositional Horn clause program and a top-level atomic goal that contains no variables, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or".
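
For example, the following propositional program (an illustrative sketch) determines an and-or tree for the top-level goal ?- p:

p :- q, r.
p :- s.
q.
r.
s.

The root p has two alternative sets of children, one for each clause: the children q and r, grouped together by an "and", and the single child s. Either alternative leads to a solution of the goal.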

Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal are considered at a time. Other search strategies, such as parallel search, intelligent backtracking, or best-first search to find an optimal solution, are also possible.

In the more general case, where sub-goals share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently instantiated so that only one procedure applies. Such strategies are used, for example, in concurrent logic programming.

Negation as failure

Negative conditions, implemented by means of negation as failure (NAF), were already a feature of early Prolog systems. The resulting extension of SLD resolution is called SLDNF. A similar construct, called "thnot", also existed in Micro-Planner.

The logical semantics of negation as failure was unresolved until Keith Clark[21] showed that, under certain natural conditions, NAF is an efficient, correct (and sometimes complete) way of reasoning with the completion of a program using classical negation.

Completion amounts roughly to regarding the set of all the program clauses with the same predicate in the head, say:

A :- Body1.
    …
A :- Bodyk.

as a definition of the predicate

A iff (Body1 or … or Bodyk)

where iff means "if and only if". The completion also includes axioms of equality, which correspond to unification. Clark showed that proofs generated by SLDNF are structurally similar to proofs generated by a natural deduction style of reasoning with the completion of the program.

For example, the completion of the program given at the beginning of this article is:

canfly(X) iff bird(X), not abnormal(X).
abnormal(X) iff wounded(X).
bird(X) iff X = john or X = mary.
X = X.
not john = mary.
not mary = john.

The notion of completion is closely related to John McCarthy's circumscription semantics for default reasoning,[22] and to Ray Reiter's closed world assumption.[23]

As an alternative to the completion semantics, negation as failure can also be interpreted epistemically, as in the stable model semantics of answer set programming. In this interpretation not B means literally that B is not known or not believed. The epistemic interpretation has the advantage that it can be combined very simply with classical negation, as in "extended logic programming", to formalise such phrases as "the contrary can not be shown", where "contrary" is classical negation and "can not be shown" is the epistemic interpretation of negation as failure.
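
For example, the following two-clause program, a standard illustration of the stable model semantics (not drawn from the examples above), has two stable models, {p} and {q}:

p :- not q.
q :- not p.

In the first model q is not believed, so p holds; in the second model p is not believed, so q holds.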

Knowledge representation

The fact that Horn clauses can be given a procedural interpretation and, vice versa, that goal-reduction procedures can be understood as Horn clauses + backward reasoning means that logic programs combine declarative and procedural representations of knowledge. The inclusion of negation as failure means that logic programming is a kind of non-monotonic logic.

Despite its simplicity compared with classical logic, this combination of Horn clauses and negation as failure has proved to be surprisingly expressive. For example, it provides a natural representation for the common-sense laws of cause and effect, as formalised by both the situation calculus and event calculus. It has also been shown to correspond quite naturally to the semi-formal language of legislation. In particular, Prakken and Sartor[24] credit the representation of the British Nationality Act as a logic program[25] with being "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences".

Semantics and goal solution strategies

Viewed in purely logical terms, there are two approaches to the semantics of logic programming: One approach understands solving a goal as showing that the goal is a theorem that is a logical consequence of the program viewed as a set of axioms. In this approach, a theorem holds if and only if the theorem is true in all models of the program. This is the original logical consequence semantics, for which SLD resolution is a rule of inference. Whereas SLD resolution reasons backwards (or top-down) from goals to subgoals, hyper-resolution,[26] which reasons forwards (or bottom-up) from existing facts to derive new facts, is an alternative rule of inference for the same logical-consequence semantics.

The other approach to the semantics of logic programming, which has become more popular in recent years, is the satisfiability semantics, which understands solving a goal as showing that the goal is true (or satisfied) in some intended (or standard) model of the program.

In the case of Horn clause programs, the intended model is the unique, minimal model of the program, which always exists. Informally speaking, a minimal model of a program is a model that, when it is viewed as the set of all the facts that are true in the model, contains no smaller set of facts that is also a model of the program. The satisfiability semantics also has an alternative, more mathematical characterisation as the least fixed point of the function that derives new facts from existing facts in one step of hyper-resolution.[27]

The relationship between the two semantics of logic programs can be seen with the inductive definitions of addition and multiplication in successor arithmetic:

add(X, 0, X).
add(X, s(Y), s(Z)) :- add(X, Y, Z).

multiply(X, 0, 0).
multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).

Here the term s(X) represents the successor of X, namely X+1, add(X, Y, Z) represents X+Y=Z, and multiply(X, Y, Z) represents X×Y=Z.

Both semantics give the same answers for the same existentially quantified conjunctions of addition and multiplication goals. For example:

?- multiply(s(s(0)), s(s(0)), Z).
Z = s(s(s(s(0)))).

?- multiply(X, X, Y), add(X, X, Y).
X = 0, Y = 0 ;
X = s(s(0)), Y = s(s(s(s(0)))).

However, with the logical-consequence semantics, there are non-standard models of the program in which, for example, 2+2=5. But with the satisfiability semantics, there is only one model, namely the standard model of arithmetic.

In the case of logic programs with negative conditions, there are two main variants of the model-theoretic semantics: In the well-founded semantics, the intended model of a logic program is a unique, three-valued, minimal model. The well-founded semantics generalises the notion of inductive definition in mathematical logic.[28] XSB Prolog[29] implements the well-founded semantics using SLG resolution.[30]
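
For example (a standard illustration), in the one-clause program below, the well-founded semantics assigns p the third truth value, undefined, because p can be neither consistently derived nor consistently refuted:

p :- not p.

Under the alternative stable model semantics, discussed next, the same program has no stable model at all.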

In the alternative stable model semantics, there may be no intended models or several intended models, all of which are minimal and two-valued. The stable model semantics underpins Answer set programming (ASP). Most implementations of ASP proceed in two steps: First they instantiate the program in all possible ways, reducing it to a propositional logic program (known as grounding). Then they apply a propositional logic problem solver, such as the DPLL algorithm or a Boolean SAT solver. However, some implementations, such as s(CASP)[31] use a goal-directed, top-down, SLD resolution-like procedure without grounding.
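
For example, grounding the canfly program given at the beginning of this article instantiates the variable X to each of the constants john and mary, producing the propositional program:

canfly(john) :- bird(john), not abnormal(john).
canfly(mary) :- bird(mary), not abnormal(mary).
abnormal(john) :- wounded(john).
abnormal(mary) :- wounded(mary).
bird(john).
bird(mary).
wounded(john).

A propositional solver then computes the stable models of the ground program; here the unique stable model contains canfly(mary) but not canfly(john).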

Variants and extensions

Prolog

The SLD resolution rule of inference is neutral about the order in which subgoals in the bodies of clauses can be selected for solution. For the sake of efficiency, Prolog restricts this order to the order in which the subgoals are written. SLD is also neutral about the strategy for searching the space of SLD proofs. Prolog searches this space, top-down, depth-first, trying different clauses for solving the same (sub)goal in the order in which the clauses are written.

This search strategy has the advantage that the current branch of the tree can be represented efficiently by a stack. When a goal clause at the top of the stack is reduced to a new goal clause, the new goal clause is pushed onto the top of the stack. When the selected subgoal in the goal clause at the top of the stack cannot be reduced to a new goal clause, the goal clause is removed from the top of the stack, and the previous goal clause is tried instead.

Backtracking can be restricted by using a subgoal, called cut, written as !, which always succeeds but cannot be backtracked. Cut can be used to improve efficiency, but can also interfere with the logical meaning of clauses. In many cases, the use of cut can be replaced by negation as failure. In fact, negation as failure can be defined in Prolog, by using cut, together with any literal, say false, that matches the head of no clause:

not(P) :- P, !, false.
not(P).

This definition illustrates the homoiconic property of Prolog, which means that programs and program components (such as P in the definition of not(P)) can be treated as data. This is a powerful feature which has many applications, including its use for implementing metainterpreters.
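
For example, the standard built-in predicates =.. ("univ") and call/1 construct, inspect and execute goals as ordinary terms (a sketch of standard Prolog behaviour, using the human(socrates) fact from earlier in the article):

?- Goal =.. [human, socrates], call(Goal).
Goal = human(socrates).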

The broad range of Prolog applications, both in isolation and in combination with other languages, is highlighted in the Year of Prolog book,[18] celebrating the 50th anniversary of Prolog in 2022.

Prolog has also contributed to the development of other programming languages, including ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog.

Abductive logic programming

Abductive logic programming is an extension of normal logic programming that allows some predicates, declared as abducible predicates, to be "open" or undefined. A clause in an abductive logic program has the form:

H :- B1, …, Bn, A1, …, An.

where H is an atomic formula that is not abducible, all the Bi are literals whose predicates are not abducible, and the Ai are atomic formulas whose predicates are abducible. The abducible predicates can be constrained by integrity constraints, which can have the form:

false :- L1, …, Ln.

where the Li are arbitrary literals (defined or abducible, and atomic or negated). For example:

canfly(X) :- bird(X), normal(X).
false :- normal(X), wounded(X).
bird(john).
bird(mary).
wounded(john).

where the predicate normal is abducible.

Problem-solving is achieved by deriving hypotheses expressed in terms of the abducible predicates as solutions to problems to be solved. These problems can be either observations that need to be explained (as in classical abductive reasoning) or goals to be solved (as in normal logic programming). For example, the hypothesis normal(mary) explains the observation canfly(mary). Moreover, the same hypothesis entails the only solution X = mary of the goal of finding something which can fly:

:- canfly(X).

Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret negation as failure as a form of abductive reasoning.

Metalogic programming

Because mathematical logic has a long tradition of distinguishing between object language and metalanguage, logic programming also allows metalevel programming. The simplest metalogic program is the so-called "vanilla" meta-interpreter:

    solve(true).
    solve((A,B)):- solve(A),solve(B).
    solve(A):- clause(A,B),solve(B).

where true represents an empty conjunction, and clause(A,B) means that there is an object-level clause of the form A :- B.
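
For example, assuming the object-level clauses from earlier in the article are stored so that clause/2 can inspect them (in many Prolog systems this requires declaring the predicates dynamic), the meta-interpreter reproduces ordinary execution (a sketch):

:- dynamic human/1, fallible/1.

human(socrates).
fallible(X) :- human(X).

?- solve(fallible(X)).
X = socrates.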

Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. It can also be used to implement any logic which is specified as inference rules. Metalogic is used in logic programming to implement metaprograms, which manipulate other programs, databases, knowledge bases or axiomatic theories as data.

Constraint logic programming

Constraint logic programming combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates, declared as constraint predicates, to occur as literals in the body of clauses. A constraint logic program is a set of clauses of the form:

H :- C1, …, Cn ◊ B1, …, Bn.

where H and all the Bi are atomic formulas, and the Ci are constraints. Declaratively, such clauses are read as ordinary logical implications:

H if C1 and … and Cn and B1 and … and Bn.

However, whereas the predicates in the heads of clauses are defined by the constraint logic program, the predicates in the constraints are predefined by some domain-specific model-theoretic structure or theory.

Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are checked for satisfiability by a domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.

The following constraint logic program represents a toy temporal database of john's history as a teacher:

teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T) :- 2005 ≤ T, T ≤ 2012.
rank(john, instructor, T) :- 1990 ≤ T, T < 2010.
rank(john, professor, T) :- 2010 ≤ T, T < 2014.

Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both taught logic and was a professor:

:- teaches(john, logic, T), rank(john, professor, T).

The solution is 2010 ≤ T, T ≤ 2012.
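
This example can be run concretely in a Prolog system with a finite-domain constraint solver; the following sketch uses SWI-Prolog's clpfd library (an assumed choice of solver, whose constraint predicates are written #>=, #< and #=<):

:- use_module(library(clpfd)).

teaches(john, hardware, T) :- T #>= 1990, T #< 1999.
teaches(john, software, T) :- T #>= 1999, T #< 2005.
teaches(john, logic, T) :- T #>= 2005, T #=< 2012.
rank(john, instructor, T) :- T #>= 1990, T #< 2010.
rank(john, professor, T) :- T #>= 2010, T #< 2014.

?- teaches(john, logic, T), rank(john, professor, T).
T in 2010..2012.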

Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.

Concurrent logic programming

Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s by its choice for the systems programming language of the Japanese Fifth Generation Project (FGCS).[32]

A concurrent logic program is a set of guarded Horn clauses of the form:

H :- G1, …, Gn | B1, …, Bn.

The conjunction G1, ... , Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:

H if G1 and … and Gn and B1 and … and Bn.

However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, ... , Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, ..., Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".

For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge), which can be used to shuffle two lists Left and Right, combining them into a single list Merge that preserves the ordering of the two lists Left and Right:

shuffle([], [], []).
shuffle(Left, Right, Merge) :-
    Left = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Rest, Right, ShortMerge).
shuffle(Left, Right, Merge) :-
    Right = [First | Rest] |
    Merge = [First | ShortMerge],
    shuffle(Left, Rest, ShortMerge).

Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2] by invoking the goal clause:

shuffle([ace, queen, king], [1, 4, 2], Merge).

The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].

Arguably, concurrent logic programming is based on message passing, so it is subject to the same indeterminacy as other concurrent message-passing systems, such as Actors (see Indeterminacy in concurrent computation). Carl Hewitt has argued that concurrent logic programming is not based on logic in the sense that computational steps cannot be logically deduced.[33] However, in concurrent logic programming, any result of a terminating computation is a logical consequence of the program, and any partial result of a partial computation is a logical consequence of the program and the residual goal (process network). Thus the indeterminacy of computations implies that not all logical consequences of the program can be deduced.

Concurrent constraint logic programming

Concurrent constraint logic programming combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to use only one.

Inductive logic programming

Inductive logic programming is concerned with generalizing positive and negative examples in the context of background knowledge: machine learning of logic programs. Recent work in this area, combining logic programming, learning and probability, has given rise to the new field of statistical relational learning and probabilistic inductive logic programming.

Higher-order logic programming

Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions HiLog and λProlog.
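
Standard Prolog already provides a restricted form of higher-order programming through the built-in call/N family, which applies a predicate passed in as a term to further arguments. The following sketch (forall_holds is a name chosen here for illustration) checks that a unary predicate holds for every element of a list, using the bird facts from earlier in the article:

forall_holds(_, []).
forall_holds(P, [X | Xs]) :- call(P, X), forall_holds(P, Xs).

?- forall_holds(bird, [john, mary]).
true.

HiLog and λProlog go further, allowing predicate variables directly in clause syntax.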

Linear logic programming

Basing logic programming within linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic programming languages based on linear logic include LO,[34] Lolli,[35] ACL,[36] and Forum.[37] Forum provides a goal-directed interpretation of all linear logic.

Object-oriented logic programming

F-logic extends logic programming with objects and the frame syntax.

Logtalk extends the Prolog programming language with support for objects, protocols, and other OOP concepts. It supports most standard-compliant Prolog systems as backend compilers.

Transaction logic programming

Transaction logic is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a subset of Transaction logic is available in the Flora-2 system. Other prototypes are also available.

Citations

  1. ^ a b Winograd, Terry (1972). "Understanding natural language". Cognitive Psychology. 3 (1): 1–191. doi:10.1016/0010-0285(72)90002-3.
  2. ^ Tärnlund, S.Å. (1977). "Horn clause computability". BIT Numerical Mathematics. 17 (2): 215–226. doi:10.1007/BF01932293. S2CID 32577496.
  3. ^ Andréka, H.; Németi, I. (1978). "The generalised completeness of Horn predicate-logic as a programming language". Acta Cybernetica. 4 (1): 3–10.
  4. ^ Green, Cordell. Application of Theorem Proving to Problem Solving (PDF). IJCAI 1969.
  5. ^ Foster, J.M.; Elcock, E.W. (1969). ABSYS 1: An Incremental Compiler for Assertions: an Introduction. Fourth Annual Machine Intelligence Workshop. Machine Intelligence. Vol. 4. Edinburgh, UK: Edinburgh University Press. pp. 423–429.
  6. ^ Kowalski, R. A. (1988). "The early years of logic programming" (PDF). Communications of the ACM. 31: 38–43. doi:10.1145/35043.35046. S2CID 12259230.
  7. ^ Hewitt, Carl. Planner: A Language for Proving Theorems in Robots (PDF). IJCAI 1969.
  8. ^ Jeff Rulifson; Jan Derksen; Richard Waldinger (November 1973). QA4, A Procedural Calculus for Intuitive Reasoning (PDF) (Technical report). SRI AI Center Technical Note 73.
  9. ^ Davies, J.M., 1971. POPLER: a POP-2 planner. Edinburgh University, Department of Machine Intelligence and Perception.
  10. ^ McDermott, D.V.; Sussman, G.J. (May 1972). The Conniver reference manual (Technical report). Artificial Intelligence Memo No. 259.
  11. ^ Reboh, R.; Sacerdoti, E.D. (August 1973). A preliminary QLISP manual (Technical report). Artificial Intelligence Center, SRI International.
  12. ^ Kornfeld, W.A.; Hewitt, C.E. (1981). "The scientific community metaphor". IEEE Transactions on Systems, Man, and Cybernetics. 11 (1): 24–33. doi:10.1109/TSMC.1981.4308575. hdl:1721.1/5693. S2CID 1322857.
  13. ^ Hayes, Pat (1973). "Computation and Deduction". Proceedings of the 2nd MFCS Symposium. Czechoslovak Academy of Sciences. pp. 105–118.
  14. ^ Kowalski, Robert; Kuehner, Donald (1971). "Linear Resolution with Selection Function" (PDF). Artificial Intelligence. 2 (3–4): 227–260. doi:10.1016/0004-3702(71)90012-9.
  15. ^ Kowalski, Robert (1973). "Predicate Logic as a Programming Language" (PDF). Department of Artificial Intelligence, Edinburgh University. Memo 70. Also in Proceedings IFIP Congress, Stockholm, North Holland Publishing Co., 1974, pp. 569–574.
  16. ^ Warren, D.H.; Pereira, L.M.; Pereira, F. (1977). "Prolog-the language and its implementation compared with Lisp". ACM SIGPLAN Notices. 12 (8): 109–115. doi:10.1145/872734.806939.
  17. ^ Gallaire, Hervé; Minker, John 'Jack', eds. (1978), "Logic and Data Bases, Symposium on Logic and Data Bases, Centre d'études et de recherches de Toulouse, 1977", Advances in Data Base Theory, New York: Plenum Press, ISBN 978-0-306-40060-5.
  18. ^ a b Warren, D.S. (2023). "Introduction to Prolog". In Warren, D.S.; Dahl, V.; Eiter, T.; Hermenegildo, M.V.; Kowalski, R.; Rossi, F. (eds.). Prolog: The Next 50 Years. Lecture Notes in Computer Science. Vol. 13900. Springer, Cham. pp. 3–19. doi:10.1007/978-3-031-35254-6_1. ISBN 978-3-031-35253-9.
  19. ^ Robinson, J. Alan (2001). "Invited Editorial". Theory and Practice of Logic Programming. 1 (1). Cambridge University Press: 1. doi:10.1017/s1471068400000028.
  20. ^ R.A.Kowalski (July 1979). "Algorithm=Logic + Control". Communications of the ACM. 22 (7): 424–436. doi:10.1145/359131.359136. S2CID 2509896.
  21. ^ Clark, K.L. (1977). "Negation as failure". Logic and data bases. Boston, MA: Springer US. pp. 293–322. doi:10.1007/978-1-4684-3384-5_11.
  22. ^ Gelfond, M.; Przymusinska, H.; Przymusinski, T. (1989). "On the relationship between circumscription and negation as failure". Artificial Intelligence. 38 (1): 75–94. doi:10.1016/0004-3702(89)90068-4.
  23. ^ Shepherdson, J.C. (1984). "Negation as failure: a comparison of Clark's completed data base and Reiter's closed world assumption". The Journal of Logic Programming. 1 (1): 51–79. doi:10.1016/0743-1066(84)90023-2.
  24. ^ Prakken, H.; Sartor, G. (October 2015). "Law and logic: a review from an argumentation perspective" (PDF). Artificial Intelligence. 227: 214–245. doi:10.1016/j.artint.2015.06.005. S2CID 4261497.
  25. ^ Sergot, M.J.; Sadri, F.; Kowalski, R.A.; Kriwaczek, F.; Hammond, P; Cory, H.T. (1986). "The British Nationality Act as a logic program" (PDF). Communications of the ACM. 29 (5): 370–386. doi:10.1145/5689.5920. S2CID 5665107.
  26. ^ Robinson, J. (1965). "Automatic deduction with hyper-resolution". International Journal of Computer Mathematics. 1: 227–234.
  27. ^ Van Emden, M.H.; Kowalski, R.A. (October 1976). "The semantics of predicate logic as a programming language". Journal of the ACM. 23 (4): 733–742. doi:10.1145/321978.321991. S2CID 11048276.
  28. ^ Denecker, M.; Ternovska, E. (2008). "A logic of nonmonotone inductive definitions". ACM Transactions on Computational Logic. 9 (2): 14:1–14:52. doi:10.1145/1342991.1342998. S2CID 13156469.
  29. ^ Rao, P.; Sagonas, K.; Swift, T.; Warren, D.S.; Freire, J. (July 28–31, 1997). XSB: A system for efficiently computing well-founded semantics. Logic Programming And Nonmonotonic Reasoning: 4th International Conference, LPNMR'97. Dagstuhl Castle, Germany: Springer Berlin Heidelberg. pp. 430–440.
  30. ^ W. Chen; D. S. Warren (January 1996). "Tabled Evaluation with Delaying for General Logic Programs". Journal of the ACM. 43 (1): 20–74.
  31. ^ Arias, J.; Carro, M.; Salazar, E.; Marple, K.; Gupta, G. (2018). "Constraint Answer Set Programming without Grounding". Theory and Practice of Logic Programming. 18 (3–4): 337–354. arXiv:1804.11162. doi:10.1017/S1471068418000285. S2CID 13754645.
  32. ^ Shunichi Uchida and Kazuhiro Fuchi. Proceedings of the FGCS Project Evaluation Workshop. Institute for New Generation Computer Technology (ICOT). 1992.
  33. ^ Hewitt, Carl (27 April 2016). "Inconsistency Robustness for Logic Programs". Hal Archives. pp. 21–26. Retrieved 7 November 2016.
  34. ^ Andreoli, Jean-Marc (1 June 1992). "Logic Programming with Focusing Proofs in Linear Logic". Journal of Logic and Computation. 2 (3): 297–347. doi:10.1093/logcom/2.3.297.
  35. ^ Hodas, Joshua; Miller, Dale (1994). "Logic Programming in a Fragment of Intuitionistic Linear Logic". Information and Computation. 110 (2): 327–365. doi:10.1006/inco.1994.1036.
  36. ^ Kobayashi, Naoki; Yonezawa, Akinori (1994). Asynchronous communication model based on linear logic. US/Japan Workshop on Parallel Symbolic Computing. pp. 279–294. CiteSeerX 10.1.1.42.8749.
  37. ^ Miller, Dale (30 September 1996). "Forum: A Multiple-Conclusion Specification Logic". Theoretical Computer Science. 165 (1): 201–232. doi:10.1016/0304-3975(96)00045-X.
