Inductive logic programming: Difference between revisions

From Wikipedia, the free encyclopedia
Content deleted Content added
Move history from the lead to a dedicated section.
→‎History: Rewriting the history up to the mid-1990s based on a 1997 book as a reliable secondary source.


== History ==
[[Gordon Plotkin]] and [[Ehud Shapiro]] laid the initial theoretical foundation for inductive machine learning in a logical setting.<ref>{{cite thesis |first=G.D. |last=Plotkin |title=Automatic Methods of Inductive Inference |date=1970 |type=PhD |publisher=University of Edinburgh |url=https://www.era.lib.ed.ac.uk/bitstream/handle/1842/6656/Plotkin1972.pdf |hdl=1842/6656}}</ref><ref>{{cite tech report|first=Ehud Y.|last=Shapiro|title=Inductive inference of theories from facts|id=192|date=1981|publisher=Department of Computer Science, Yale University|url=http://ftp.cs.yale.edu/publications/techreports/tr192.pdf}} Reprinted in {{cite book |title=Computational logic : essays in honor of Alan Robinson |publisher=MIT Press |year=1991 |isbn=978-0-262-12156-9 |editor1-last=Lassez |editor1-first=J.-L. |pages=199–254 |editor2-last=Plotkin |editor2-first=G.}}</ref><ref>{{cite book |last=Shapiro |first=Ehud Y. |url= |title=Algorithmic program debugging |date=1983 |publisher=MIT Press |isbn=0-262-19218-7 |location= |pages=}}</ref> Shapiro built their first implementation (Model Inference System) in 1981:<ref>{{cite book |last=Shapiro |first=Ehud Y. |url= |title=Proceedings of the 7th international joint conference on Artificial intelligence |date=1981 |publisher=Morgan Kaufmann |isbn= |volume=2 |location= |pages=1064 |chapter=The model inference system |chapter-url=https://www.ijcai.org/Proceedings/81-2/Papers/100.pdf}}</ref> a [[Prolog]] program that inductively inferred logic programs from positive and negative examples. The first full first-order implementation of inductive logic programming was ''Theorist'' in 1986.<ref>{{cite report |url=https://www.cs.ubc.ca/~poole/papers/Theorist-CS-86-06.pdf |title=Theorist: A Logical Reasoning System for Defaults and Diagnosis |last1=Poole |first1=David |last2=Goebel |first2=Randy |date=Feb 1986 |last3=Aleliunas |first3=Romas |institution=Univ. 
Waterloo |type=Research Report |number=CS-86-06}}</ref><ref>{{cite book |last1=Poole |first1=David |url= |title=The Knowledge Frontier &ndash; Essays in the Representation of Knowledge |last2=Goebel |first2=Randy |last3=Aleliunas |first3=Romas |publisher=Springer |year=1987 |isbn=978-1-4612-9158-9 |editor1=Nick J. Cercone |edition=1st |series=Symbolic Computation |volume= |location=New York, NY |pages=331&ndash;352 |contribution=Theorist: A Logical Reasoning System for Defaults and Diagnosis |doi=10.1007/978-1-4612-4792-0 |editor2=Gordon McCalla |s2cid=38209923}}</ref>{{cn|reason=Provide an independent and secondary source about Theorisst being the *first* implementation.|date=September 2022}} The term ''Inductive Logic Programming'' was first introduced<ref>{{cite book |last=De Raedt |first=Luc |url= |title=The Logic Programming Paradigm: A 25-Year Perspective |date=2012 |publisher=Springer |isbn=978-3-642-60085-2 |editor= |pages=335–346 |chapter=A Perspective on Inductive Logic Programming |citeseerx=10.1.1.56.1790 |chapter-url={{GBurl|uMWoCAAAQBAJ|p=333}} |orig-year=1999}}</ref> in a paper by [[Stephen Muggleton]] in 1991.<ref name="muggleton1995inverse">{{Cite journal |last1=Muggleton |first1=S.H. |year=1991 |title=Inductive logic programming |journal=New Generation Computing |volume=8 |issue=4 |pages=295–318 |citeseerx=10.1.1.329.5312 |doi=10.1007/BF03037089 |s2cid=5462416}}</ref> Muggleton also founded the annual international conference on Inductive Logic Programming, introduced the theoretical ideas of Predicate Invention, [[Inverse resolution]],<ref>{{cite book |last1=Muggleton |first1=S.H. |url= |title=Proceedings of the 5th International Conference on Machine Learning |last2=Buntine |first2=W. |date=1988 |isbn=978-0-934613-64-4 |pages=339–352 |chapter=Machine invention of first-order predicate by inverting resolution |doi=10.1016/B978-0-934613-64-4.50040-2}}</ref> and Inverse entailment.<ref>{{cite journal |last1=Muggleton |first1=S.H. 
|year=1995 |title=Inverting entailment and Progol |journal=New Generation Computing |volume=13 |issue=3–4 |pages=245–286 |citeseerx=10.1.1.31.1630 |doi=10.1007/bf03037227 |s2cid=12643399}}</ref> Muggleton implemented Inverse entailment first in the [[PROGOL]] system.
Building on earlier work on [[Inductive inference]], [[Gordon Plotkin]] was the first to formalise induction in a [[Horn clause|clausal]] setting around 1970, adopting an approach of generalising from examples.<ref name=":0">{{Cite book |last=Nienhuys-Cheng |first=Shan-hwei |title=Foundations of inductive logic programming |last2=Wolf |first2=Ronald de |date=1997 |publisher=Spinger |isbn=978-3-540-62927-6 |series=Lecture notes in computer science Lecture notes in artificial intelligence |location=Berlin Heidelberg |pages=174–177}}</ref><ref>{{cite thesis |first=G.D. |last=Plotkin |title=Automatic Methods of Inductive Inference |date=1970 |type=PhD |publisher=University of Edinburgh |url=https://www.era.lib.ed.ac.uk/bitstream/handle/1842/6656/Plotkin1972.pdf |hdl=1842/6656}}</ref> In 1981, [[Ehud Shapiro]] introduced several ideas that would shape the field in his new approach of model inference, an algorithm employing refinement and backtracing to search for a complete axiomatisation of given examples.<ref name=":0" /><ref>{{cite tech report|first=Ehud Y.|last=Shapiro|title=Inductive inference of theories from facts|id=192|date=1981|publisher=Department of Computer Science, Yale University|url=http://ftp.cs.yale.edu/publications/techreports/tr192.pdf}} Reprinted in {{cite book |title=Computational logic : essays in honor of Alan Robinson |publisher=MIT Press |year=1991 |isbn=978-0-262-12156-9 |editor1-last=Lassez |editor1-first=J.-L. |pages=199–254 |editor2-last=Plotkin |editor2-first=G.}}</ref> His first implementation was the Model Inference System in 1981:<ref>{{cite book |last=Shapiro |first=Ehud Y. |url= |title=Proceedings of the 7th international joint conference on Artificial intelligence |date=1981 |publisher=Morgan Kaufmann |isbn= |volume=2 |location= |pages=1064 |chapter=The model inference system |chapter-url=https://www.ijcai.org/Proceedings/81-2/Papers/100.pdf}}</ref><ref>{{cite book |last=Shapiro |first=Ehud Y. 
|url= |title=Algorithmic program debugging |date=1983 |publisher=MIT Press |isbn=0-262-19218-7 |location= |pages=}}</ref> a [[Prolog]] program that inductively inferred [[Horn clause]] logic programs from positive and negative examples.<ref name=":0" /> The term ''Inductive Logic Programming'' was first introduced in a paper by [[Stephen Muggleton]] in 1990, defined as the intersection of machine learning and logic programming.<ref name=":0" /> Muggleton and Wray Buntine introduced predicate invention and [[inverse resolution]] in 1988.<ref name=":0" /><ref>{{cite book |last1=Muggleton |first1=S.H. |url= |title=Proceedings of the 5th International Conference on Machine Learning |last2=Buntine |first2=W. |date=1988 |isbn=978-0-934613-64-4 |pages=339–352 |chapter=Machine invention of first-order predicate by inverting resolution |doi=10.1016/B978-0-934613-64-4.50040-2}}</ref>

Several inductive logic programming systems that proved influential appeared in the early 1990s. FOIL, introduced by Quinlan in 1990<ref>{{Cite journal |last=Quinlan |first=J. R. |date=1990-08 |title=Learning logical definitions from relations |url=http://dx.doi.org/10.1007/bf00117105 |journal=Machine Learning |volume=5 |issue=3 |pages=239–266 |doi=10.1007/bf00117105 |issn=0885-6125}}</ref> was based on upgrading [[Propositional calculus|propositional]] learning algorithms [[AQ (machine learning)|AQ]] and [[ID3 algorithm|ID3]].<ref name=":1">{{Cite book |last=Nienhuys-Cheng |first=Shan-hwei |title=Foundations of inductive logic programming |last2=Wolf |first2=Ronald de |date=1997 |publisher=Spinger |isbn=978-3-540-62927-6 |series=Lecture notes in computer science Lecture notes in artificial intelligence |location=Berlin Heidelberg |pages=354–358}}</ref> Golem, introduced by Muggleton and Feng in 1992, went back to a restricted form of Plotkin's least generalisation algorithm.<ref name=":1" /> The [[PROGOL]] system, introduced by Muggleton in 1995, first implemented inverse entailment.<ref name=":1" /><ref>{{cite journal |last1=Muggleton |first1=S.H. |year=1995 |title=Inverting entailment and Progol |journal=New Generation Computing |volume=13 |issue=3–4 |pages=245–286 |citeseerx=10.1.1.31.1630 |doi=10.1007/bf03037227 |s2cid=12643399}}</ref>


==Formal definition==


=== Hypothesis search ===
The ILP systems Progol,<ref name="muggleton1995inverse" /> Hail <ref>{{cite book |last1=Ray |first1=O. |last2=Broda |first2=K. |last3=Russo |first3=A.M. |chapter=Hybrid abductive inductive learning |chapter-url=https://link.springer.com/chapter/10.1007/978-3-540-39917-9_21 |title=Proceedings of the 13th international conference on inductive logic programming |publisher=Springer |series=LNCS |volume=2835 |date=2003 |isbn=978-3-540-39917-9 |pages=311–328 |doi=10.1007/978-3-540-39917-9_21 |url= |citeseerx=10.1.1.212.6602}}</ref> and Imparo <ref>{{cite book |last1=Kimber |first1=T. |last2=Broda |first2=K. |last3=Russo |first3=A. |chapter=Induction on failure: learning connected Horn theories |chapter-url=https://link.springer.com/chapter/10.1007/978-3-642-04238-6_16 |title=Proceedings of the 10th international conference on logic programing and nonmonotonic reasoning |publisher=Springer |series=LNCS |volume=575 |date=2009 |isbn=978-3-642-04238-6 |pages=169–181 |doi=10.1007/978-3-642-04238-6_16 }}</ref> find a hypothesis {{mvar|H}} using the principle of the inverse entailment<ref name="muggleton1995inverse" /> for theories {{mvar|B}}, {{mvar|E}}, {{mvar|H}}: <math>B \land H \models E \iff B \land \neg E \models \neg H</math>. First they construct an intermediate theory {{mvar|F}} called a bridge theory satisfying the conditions <math>B \land \neg E \models F</math> and <math>F \models \neg H</math>. 
Then as <math>H \models \neg F</math>, they generalize the negation of the bridge theory {{mvar|F}} with anti-entailment.<ref>{{cite journal | url=https://link.springer.com/content/pdf/10.1007/s10994-011-5250-y.pdf | doi=10.1007/s10994-011-5250-y | title=Inverse subsumption for complete explanatory induction | year=2012 | last1=Yamamoto | first1=Yoshitaka | last2=Inoue | first2=Katsumi | last3=Iwanuma | first3=Koji | journal=Machine Learning | volume=86 | pages=115–139 | s2cid=11347607 }}</ref> However, the operation of anti-entailment is computationally more expensive since it is highly nondeterministic. Therefore, an alternative hypothesis search can be conducted using the operation of the inverse subsumption (anti-subsumption) instead which is less non-deterministic than anti-entailment.


Questions of completeness of a hypothesis search procedure of specific ILP system arise. For example, Progol's hypothesis search procedure based on the inverse entailment inference rule is not complete by '''Yamamoto's example'''.<ref>{{cite book |first=Akihiro |last=Yamamoto |chapter=Which hypotheses can be found with inverse entailment? |chapter-url=https://link.springer.com/chapter/10.1007/3540635149_58 |title=International Conference on Inductive Logic Programming |series=Lecture Notes in Computer Science |publisher=Springer |location= |date=1997 |volume=1297 |isbn=978-3-540-69587-5 |pages=296–308 |doi=10.1007/3540635149_58 |url= |citeseerx=10.1.1.54.2975}}</ref> On the other hand, Imparo is complete by both anti-entailment procedure <ref name="kimber2009induction">{{cite thesis |first=Timothy |last=Kimber |title=Learning definite and normal logic programs by induction on failure |date=2012 |type=PhD |publisher=Imperial College London |url=https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.560694 |id=ethos 560694 |access-date=2022-10-21 |archive-date=2022-10-21 |archive-url=https://web.archive.org/web/20221021035457/https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.560694 |url-status=dead }}</ref> and its extended inverse subsumption <ref>{{cite arXiv |first=David |last=Toth |title=Imparo is complete by inverse subsumption |date=2014 |class=cs.AI |eprint=1407.3836}}</ref> procedure.

Revision as of 18:26, 26 November 2023

Inductive logic programming (ILP) is a subfield of symbolic artificial intelligence which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. The term "inductive" here refers to philosophical (i.e. suggesting a theory to explain observed facts) rather than mathematical (i.e. proving a property for all members of a well-ordered set) induction. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.

  • Schema: positive examples + negative examples + background knowledge ⇒ hypothesis.

Inductive logic programming is particularly useful in bioinformatics and natural language processing.

History

Building on earlier work on Inductive inference, Gordon Plotkin was the first to formalise induction in a clausal setting around 1970, adopting an approach of generalising from examples.[1][2] In 1981, Ehud Shapiro introduced several ideas that would shape the field in his new approach of model inference, an algorithm employing refinement and backtracing to search for a complete axiomatisation of given examples.[1][3] His first implementation was the Model Inference System in 1981:[4][5] a Prolog program that inductively inferred Horn clause logic programs from positive and negative examples.[1] The term Inductive Logic Programming was first introduced in a paper by Stephen Muggleton in 1990, defined as the intersection of machine learning and logic programming.[1] Muggleton and Wray Buntine introduced predicate invention and inverse resolution in 1988.[1][6]

Several inductive logic programming systems that proved influential appeared in the early 1990s. FOIL, introduced by Quinlan in 1990[7] was based on upgrading propositional learning algorithms AQ and ID3.[8] Golem, introduced by Muggleton and Feng in 1992, went back to a restricted form of Plotkin's least generalisation algorithm.[8] The PROGOL system, introduced by Muggleton in 1995, first implemented inverse entailment.[8][9]

Formal definition

The background knowledge is given as a logic theory B, commonly in the form of Horn clauses used in logic programming. The positive and negative examples are given as a conjunction E⁺ and E⁻ of unnegated and negated ground literals, respectively. A correct hypothesis h is a logic proposition satisfying the following requirements.[10]

  • Necessity: B ⊭ E⁺
  • Sufficiency: B ∧ h ⊨ E⁺
  • Weak consistency: B ∧ h ⊭ false
  • Strong consistency: B ∧ h ∧ E⁻ ⊭ false

"Necessity" does not impose a restriction on h, but forbids any generation of a hypothesis as long as the positive facts are explainable without it. "Sufficiency" requires any generated hypothesis h to explain all positive examples . "Weak consistency" forbids generation of any hypothesis h that contradicts the background knowledge B. "Strong consistency" also forbids generation of any hypothesis h that is inconsistent with the negative examples , given the background knowledge B; it implies "Weak consistency"; if no negative examples are given, both requirements coincide. Džeroski [11] requires only "Sufficiency" (called "Completeness" there) and "Strong consistency".

Example

Assumed family relations in section "Example"

The following well-known example about learning definitions of family relations uses the abbreviations

par: parent, fem: female, dau: daughter, g: George, h: Helen, m: Mary, t: Tom, n: Nancy, and e: Eve.

It starts from the background knowledge (cf. picture)

par(h,m) ∧ par(h,t) ∧ par(g,m) ∧ par(t,e) ∧ par(n,e) ∧ fem(h) ∧ fem(m) ∧ fem(e),

the positive examples

dau(m,h) ∧ dau(e,t),

and the trivial proposition true to denote the absence of negative examples.

Plotkin's [12][13] "relative least general generalization (rlgg)" approach to inductive logic programming shall be used to obtain a suggestion about how to formally define the daughter relation dau.

This approach uses the following steps.

  • Relativize each positive example literal with the complete background knowledge:
    dau(m,h) ← par(h,m) ∧ par(h,t) ∧ par(g,m) ∧ par(t,e) ∧ par(n,e) ∧ fem(h) ∧ fem(m) ∧ fem(e) and dau(e,t) ← par(h,m) ∧ par(h,t) ∧ par(g,m) ∧ par(t,e) ∧ par(n,e) ∧ fem(h) ∧ fem(m) ∧ fem(e),
  • Convert into clause normal form:
    dau(m,h) ∨ ¬par(h,m) ∨ ¬par(h,t) ∨ ¬par(g,m) ∨ ¬par(t,e) ∨ ¬par(n,e) ∨ ¬fem(h) ∨ ¬fem(m) ∨ ¬fem(e) and dau(e,t) ∨ ¬par(h,m) ∨ ¬par(h,t) ∨ ¬par(g,m) ∨ ¬par(t,e) ∨ ¬par(n,e) ∨ ¬fem(h) ∨ ¬fem(m) ∨ ¬fem(e),
  • Anti-unify each compatible [14] pair [15] of literals:
    • dau(x1,x2) from dau(m,h) and dau(e,t),
    • ¬par(x2,x1) from ¬par(h,m) and ¬par(t,e),
    • ¬fem(x1) from ¬fem(m) and ¬fem(e),
    • ¬par(g,m) from ¬par(g,m) and ¬par(g,m), similar for all other background-knowledge literals
    • ¬par(x3,x4) from ¬par(h,t) and ¬par(n,e), and many more negated literals
  • Delete all negated literals containing variables that don't occur in a positive literal:
    • after deleting all negated literals containing other variables than x1, x2, only dau(x1,x2) ∨ ¬par(x2,x1) ∨ ¬fem(x1) remains, together with all ground literals from the background knowledge
  • Convert clauses back to Horn form:
    dau(x1,x2) ← par(x2,x1) ∧ fem(x1) ∧ par(h,m) ∧ par(h,t) ∧ par(g,m) ∧ par(t,e) ∧ par(n,e) ∧ fem(h) ∧ fem(m) ∧ fem(e)
The resulting Horn clause is the hypothesis h obtained by the rlgg approach. Ignoring the background knowledge facts, the clause informally reads "x1 is called a daughter of x2 if x2 is the parent of x1 and x1 is female", which is a commonly accepted definition.
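The anti-unification step above can be sketched in a few lines. This is a toy illustration, not the GOLEM implementation: literals are encoded as (sign, predicate, args...) tuples, only the dau/par/fem literals of the running example are included, and variables x1, x2, ... are generated per distinct pair of constants, as in Plotkin's lgg.

```python
def lgg_literals(clause1, clause2):
    """Anti-unify every compatible pair of literals of two clauses,
    where compatible means same sign and same predicate symbol."""
    varmap = {}                      # pair of constants -> shared variable
    def var_for(a, b):
        if a == b:
            return a                 # identical terms stay unchanged
        if (a, b) not in varmap:
            varmap[(a, b)] = f"x{len(varmap) + 1}"
        return varmap[(a, b)]
    result = set()
    for sign1, pred1, *args1 in clause1:
        for sign2, pred2, *args2 in clause2:
            if sign1 == sign2 and pred1 == pred2:
                result.add((sign1, pred1,
                            *(var_for(a, b) for a, b in zip(args1, args2))))
    return result

# the two relativized clauses in clause normal form (ground literals omitted):
# dau(m,h) v -par(h,m) v -fem(m)   and   dau(e,t) v -par(t,e) v -fem(e)
c1 = [('+', 'dau', 'm', 'h'), ('-', 'par', 'h', 'm'), ('-', 'fem', 'm')]
c2 = [('+', 'dau', 'e', 't'), ('-', 'par', 't', 'e'), ('-', 'fem', 'e')]
# lgg_literals(c1, c2) yields dau(x1,x2) v -par(x2,x1) v -fem(x1)
```

Because the same pair of constants (m,e) is always mapped to the same variable x1, the par and fem literals end up sharing their variables with the dau literal, which is exactly what makes the generalisation meaningful.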

Concerning the above requirements, "Necessity" was satisfied because the predicate dau doesn't appear in the background knowledge, which hence cannot imply any property containing this predicate, such as the positive examples. "Sufficiency" is satisfied by the computed hypothesis h, since it, together with par(h,m) ∧ fem(m) from the background knowledge, implies the first positive example dau(m,h), and similarly h and par(t,e) ∧ fem(e) from the background knowledge imply the second positive example dau(e,t). "Weak consistency" is satisfied by h, since h holds in the (finite) Herbrand structure described by the background knowledge; similarly for "Strong consistency".

The common definition of the grandmother relation, viz. gra(x,z) ← fem(x) ∧ par(x,y) ∧ par(y,z), cannot be learned using the above approach, since the variable y occurs in the clause body only; the corresponding literals would have been deleted in the 4th step of the approach. To overcome this flaw, that step has to be modified such that it can be parametrized with different literal post-selection heuristics. Historically, the GOLEM implementation is based on the rlgg approach.

Inductive Logic Programming system

An inductive logic programming system is a program that takes as input logic theories B, E and outputs a correct hypothesis H with respect to these theories. Search-based ILP systems consist of two parts: hypothesis search and hypothesis selection. First a hypothesis is searched with an inductive logic programming procedure, then a subset of the found hypotheses (in most systems one hypothesis) is chosen by a selection algorithm. A selection algorithm scores each of the found hypotheses and returns the ones with the highest score. An example of a score function is minimal compression length, where a hypothesis with the lowest Kolmogorov complexity has the highest score and is returned. An ILP system is complete if and only if for any input logic theories any correct hypothesis H with respect to these input theories can be found with its hypothesis search procedure.
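As a toy illustration of score-based selection, a computable stand-in for Kolmogorov complexity is the length of the hypothesis text, so the shortest candidate scores highest. The candidate clauses below, including the two extra memorised facts dau(e,n) and dau(m,g), are assumptions for illustration only, not output of any real ILP system.

```python
def select(hypotheses):
    """Return the candidates with minimal description length (highest score)."""
    best = min(len(h) for h in hypotheses)
    return [h for h in hypotheses if len(h) == best]

candidates = [
    "dau(X,Y) :- par(Y,X), fem(X).",            # a general rule
    "dau(m,h). dau(e,t). dau(e,n). dau(m,g).",  # memorising the examples instead
]
# the general rule is shorter, hence compresses the examples better
```

With only one or two examples, memorisation can actually be shorter than a rule; compression-based scores favour general hypotheses only once enough examples must be covered.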

Hypothesis search

The ILP systems Progol,[16] Hail[17] and Imparo[18] find a hypothesis H using the principle of inverse entailment[16] for theories B, E, H: B ∧ H ⊨ E ⟺ B ∧ ¬E ⊨ ¬H. First they construct an intermediate theory F called a bridge theory satisfying the conditions B ∧ ¬E ⊨ F and F ⊨ ¬H. Then, as H ⊨ ¬F, they generalize the negation of the bridge theory F with anti-entailment.[19] However, the operation of anti-entailment is computationally more expensive, since it is highly nondeterministic. Therefore, an alternative hypothesis search can be conducted using the operation of inverse subsumption (anti-subsumption) instead, which is less nondeterministic than anti-entailment.
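The inverse-entailment equivalence can be checked mechanically in the propositional case, where entailment is decidable by enumerating truth assignments. The formulas below (atoms b, e and the rule b → e standing in for B, E and H) are toy assumptions, not taken from any of the systems above.

```python
from itertools import product

def assignments(atoms):
    """All truth assignments over the given atoms."""
    for bits in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def entails(premise, conclusion, atoms):
    """premise |= conclusion, checked by brute force over assignments."""
    return all(conclusion(v) for v in assignments(atoms) if premise(v))

atoms = ['b', 'e']
B = lambda v: v['b']                      # background knowledge: b
H = lambda v: (not v['b']) or v['e']      # hypothesis: b -> e
E = lambda v: v['e']                      # example: e

# B and H |= E   iff   B and not-E |= not-H
lhs = entails(lambda v: B(v) and H(v), E, atoms)
rhs = entails(lambda v: B(v) and not E(v), lambda v: not H(v), atoms)
```

Both directions hold here, matching the contraposition-based equivalence; first-order ILP systems cannot decide entailment this way, which is why they work with bridge theories instead.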

Questions of completeness of the hypothesis search procedure of a specific ILP system arise. For example, Progol's hypothesis search procedure based on the inverse entailment inference rule is not complete, as shown by Yamamoto's example.[20] On the other hand, Imparo is complete both by its anti-entailment procedure [21] and by its extended inverse subsumption [22] procedure.

Implementations

See also

References

  1. ^ a b c d e Nienhuys-Cheng, Shan-hwei; Wolf, Ronald de (1997). Foundations of inductive logic programming. Lecture notes in computer science Lecture notes in artificial intelligence. Berlin Heidelberg: Springer. pp. 174–177. ISBN 978-3-540-62927-6.
  2. ^ Plotkin, G.D. (1970). Automatic Methods of Inductive Inference (PDF) (PhD). University of Edinburgh. hdl:1842/6656.
  3. ^ Shapiro, Ehud Y. (1981). Inductive inference of theories from facts (PDF) (Technical report). Department of Computer Science, Yale University. 192. Reprinted in Lassez, J.-L.; Plotkin, G., eds. (1991). Computational logic : essays in honor of Alan Robinson. MIT Press. pp. 199–254. ISBN 978-0-262-12156-9.
  4. ^ Shapiro, Ehud Y. (1981). "The model inference system" (PDF). Proceedings of the 7th international joint conference on Artificial intelligence. Vol. 2. Morgan Kaufmann. p. 1064.
  5. ^ Shapiro, Ehud Y. (1983). Algorithmic program debugging. MIT Press. ISBN 0-262-19218-7.
  6. ^ Muggleton, S.H.; Buntine, W. (1988). "Machine invention of first-order predicate by inverting resolution". Proceedings of the 5th International Conference on Machine Learning. pp. 339–352. doi:10.1016/B978-0-934613-64-4.50040-2. ISBN 978-0-934613-64-4.
  7. ^ Quinlan, J. R. (August 1990). "Learning logical definitions from relations". Machine Learning. 5 (3): 239–266. doi:10.1007/bf00117105. ISSN 0885-6125.
  8. ^ a b c Nienhuys-Cheng, Shan-hwei; Wolf, Ronald de (1997). Foundations of inductive logic programming. Lecture notes in computer science Lecture notes in artificial intelligence. Berlin Heidelberg: Springer. pp. 354–358. ISBN 978-3-540-62927-6.
  9. ^ Muggleton, S.H. (1995). "Inverting entailment and Progol". New Generation Computing. 13 (3–4): 245–286. CiteSeerX 10.1.1.31.1630. doi:10.1007/bf03037227. S2CID 12643399.
  10. ^ Muggleton, Stephen (1999). "Inductive Logic Programming: Issues, Results and the Challenge of Learning Language in Logic". Artificial Intelligence. 114 (1–2): 283–296. doi:10.1016/s0004-3702(99)00067-3.; here: Sect.2.1
  11. ^ Džeroski, Sašo (1996). "Inductive Logic Programming and Knowledge Discovery in Databases" (PDF). In Fayyad, U.M.; Piatetsky-Shapiro, G.; Smith, P.; Uthurusamy, R. (eds.). Advances in Knowledge Discovery and Data Mining. MIT Press. pp. 117–152; see §5.2.4. Archived from the original (PDF) on 2021-09-27. Retrieved 2021-09-27.
  12. ^ Plotkin, Gordon D. (1970). Meltzer, B.; Michie, D. (eds.). "A Note on Inductive Generalization". Machine Intelligence. 5: 153–163. ISBN 978-0-444-19688-0.
  13. ^ Plotkin, Gordon D. (1971). Meltzer, B.; Michie, D. (eds.). "A Further Note on Inductive Generalization". Machine Intelligence. 6. Edinburgh University Press: 101–124. ISBN 978-0-85224-195-0.
  14. ^ i.e. sharing the same predicate symbol and negated/unnegated status
  15. ^ in general: n-tuple when n positive example literals are given
  16. ^ a b Muggleton, S.H. (1991). "Inductive logic programming". New Generation Computing. 8 (4): 295–318. CiteSeerX 10.1.1.329.5312. doi:10.1007/BF03037089. S2CID 5462416.
  17. ^ Ray, O.; Broda, K.; Russo, A.M. (2003). "Hybrid abductive inductive learning". Proceedings of the 13th international conference on inductive logic programming. LNCS. Vol. 2835. Springer. pp. 311–328. CiteSeerX 10.1.1.212.6602. doi:10.1007/978-3-540-39917-9_21. ISBN 978-3-540-39917-9.
  18. ^ Kimber, T.; Broda, K.; Russo, A. (2009). "Induction on failure: learning connected Horn theories". Proceedings of the 10th international conference on logic programing and nonmonotonic reasoning. LNCS. Vol. 575. Springer. pp. 169–181. doi:10.1007/978-3-642-04238-6_16. ISBN 978-3-642-04238-6.
  19. ^ Yamamoto, Yoshitaka; Inoue, Katsumi; Iwanuma, Koji (2012). "Inverse subsumption for complete explanatory induction" (PDF). Machine Learning. 86: 115–139. doi:10.1007/s10994-011-5250-y. S2CID 11347607.
  20. ^ Yamamoto, Akihiro (1997). "Which hypotheses can be found with inverse entailment?". International Conference on Inductive Logic Programming. Lecture Notes in Computer Science. Vol. 1297. Springer. pp. 296–308. CiteSeerX 10.1.1.54.2975. doi:10.1007/3540635149_58. ISBN 978-3-540-69587-5.
  21. ^ a b Kimber, Timothy (2012). Learning definite and normal logic programs by induction on failure (PhD). Imperial College London. ethos 560694. Archived from the original on 2022-10-21. Retrieved 2022-10-21.
  22. ^ Toth, David (2014). "Imparo is complete by inverse subsumption". arXiv:1407.3836 [cs.AI].
  23. ^ Muggleton, Stephen; Santos, Jose; Tamaddoni-Nezhad, Alireza (2009). "ProGolem: a system based on relative minimal generalization". International Conference on Inductive Logic Programming. Springer. pp. 131–148. CiteSeerX 10.1.1.297.7992. doi:10.1007/978-3-642-13840-9_13. ISBN 978-3-642-13840-9.
  24. ^ Santos, Jose; Nassif, Houssam; Page, David; Muggleton, Stephen; Sternberg, Mike (2012). "Automated identification of features of protein-ligand interactions using Inductive Logic Programming: a hexose binding case study". BMC Bioinformatics. 13: 162. doi:10.1186/1471-2105-13-162. PMC 3458898. PMID 22783946.

Further reading