Talk:Knowledge representation and reasoning
|WikiProject Cognitive science||(Rated Start-class, Low-importance)|
|WikiProject Business||(Rated B-class, Low-importance)|
- 1 Opening comments
- 2 KR Language
- 3 Remarks on the History section
- 4 Tree of Porphyry
- 5 Sanskrit & Knowledge Representation ??
- 6 External link doesn't work
- 7 Criticism/Competing fields
- 8 Mycin? Hunh?
- 9 Lots of errors
- 10 New Intro
- 11 Dubious implications
- 12 Reasoning
- 13 Low quality article
- 14 Plan to do some major rewriting of this article
- 15 Expressivity: I think this is wrong as currently stated
- 16 Reference number four
- 17 Ontology Engineering section
As far as I understand, FIPA SL is also an example for a knowledge representation language. If nobody objects, I will add it to the list of example languages.
Revised introduction I added an introduction and attempted to answer some of the questions on this page and give a clear overview of the topic. The topic of KR is broad, so I think we should split it into parts, and I propose using the question system to direct links (on the other hand, this may be inconsistent with the rest of the site). There are philosophical, cognitive science (particularly developmental), and artificial intelligence paths to take.
In terms of AI, there are thousands of different directions for the topic as well. We should at least make the distinction between those who are trying to model the entire world (e.g., common sense reasoning) and those who model just one specialized subset. Within those groups, there are lots of different approaches: logical (situation/event calculus, non-monotonic reasoning), statistical/connectionist, and another type of symbolic approach that I'm having trouble classifying (scripts, frames, semantic nets, etc.)....
As an epistemologist and someone who has studied cognitive psychology a bit, I am very interested in having a much fuller, clearer explanation of what "the problem of how to store and manipulate knowledge" means. It's not self-evident what the problem even is supposed to be.
What is the difference between knowledge representation and just keeping data in a computer memory? -- User:Hirzel
- Computers don't know anything; they are not conscious.
- Why do you think Computers don't know anything, but humans do? I mean, what exactly constitutes the difference? --denny vrandečić 10:51, Aug 3, 2004 (UTC)
I would like to see a greater variety of representations listed e.g. patterns, distinctions, concepts, stories, activities, events, cases, rules, objects. More attention to ephemeral (audio) and visual forms.
This article needs an almost complete rewrite. It is very poor as it is. Any textbook on Artificial Intelligence is way better than this.
Should there be a category for knowledge representation languages like KM, OWL, CycL and KIF? Would it be a subcategory of Category:logic programming languages or Category:declarative programming languages? What other examples are there? Bovlb 08:39, 2005 Apr 9 (UTC)
Remarks on the History section
The history section is kind of longish, and especially the beginning is too far-fetched. In many fields of study people have a tendency to extend the history of a field beyond its actual beginning. Of course one can say that DNA is a way to represent knowledge. However, most of the interesting details of how this is done are not known (yet), so it does not actually give a good account of what people think knowledge representation is. The term has its roots in the context of data analysis and general computing, and it is probably about 20 to 30 years old. Who has some more details on the first uses of the term?
I propose to just delete the first three paragraphs.
The history of KR can be said to begin with DNA and memory molecules,...
Mathematics and related logical notations such as predicate ....the Big Bang ...
In philosophy knowledge is most commonly defined as "justified true belief". Hirzel 13:21, 20 October 2005 (UTC)
- As there was no reaction yet, I think I may move the three paragraphs to here for the time being. Hirzel 23:24, 6 November 2005 (UTC)
- The history of KR can be said to begin with DNA and memory molecules, which represent information about how to construct various organisms. This may be considered a knowledge representation. Spoken and written language also represent knowledge. The sum total of all books used to pass knowledge from one generation to the next amount to an extensive KR, with the pace of change increasing exponentially since perhaps 1600.
- In philosophy knowledge is most commonly defined as "justified true belief". However, knowledge representation uses the term much more broadly: there need be no belief for DNA to function, and language can easily represent incorrect beliefs, as well as things not believed at all.
- Hirzel, thank you for your interest in this article. Clearly, the history of the subject stems from the AI days, which waned in the late '80s when it became clear just how difficult the problem of KR is, and which have since become part of the stable infrastructure of the field. You can look in the AI books of the period (Nilsson, Winston, etc.) to find a sentence whose canonical statement is something like: If we can just choose the right representation for the problem at hand, then it becomes easy to solve. Unfortunately I do not have the time to dig up the books and search for the quote right now, but I agree that the timing was about 1975 to 1980 for the first statement. Ancheta Wis 00:54, 7 November 2005 (UTC)
Tree of Porphyry
Will someone add an image of the tree of Porphyry here? It's a great example of knowledge representation in the natural domain, making a nice counterpoint to a lot of current interest in knowledge representation from the "artificial intelligence" community... Plus, this article needs a picture. Chris Chatham
Sanskrit & Knowledge Representation ??
- Knowledge Representation in Sanskrit and Artificial Intelligence by Rick Briggs, Roacs, NASA Ames Research Center. Published in AI Magazine, Volume 6, Number 1, Spring 1985 
Any idea?
-- 220.127.116.11 11:34, 22 July 2006 (UTC)
In References section link http://citeseer.nj.nec.com/context/177306/0 for Ronald J. Brachman; What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic Networks doesn't respond —The preceding unsigned comment was added by Y2y (talk • contribs) 11:51, 4 March 2007 (UTC).
I have noticed that there is not a section about competing fields of AI and criticism of knowledge representation. At some universities, professors believe knowledge representation is not a promising field in AI (at Stanford, at least). I am not an AI expert, and it would be useful to have some comparison between KR and other schools of thought. 18.104.22.168 01:30, 5 March 2007 (UTC)
- I've worked with people from the AI group at Stanford and went there a long time ago. Who specifically thinks knowledge representation "is not a promising field in AI"? I've not heard that from any Stanford researcher that I know of. RedDog (talk) 13:15, 7 December 2013 (UTC)
This passage is confusing: In the field of artificial intelligence, problem solving can be simplified by an appropriate choice of knowledge representation. Representing the knowledge using a given technique may enable the domain to be represented. For example Mycin, a diagnostic expert system used a rule based representation scheme. An incorrect choice would defeat the representation endeavor....
That's all very blurry, at least to a layman.
- Agree. Poorly worded, I get what they are saying I think but it's not at all clear. Not sure if that is still even there if it is I will change it (I'm planning a rewrite and just reading through the Talk page first) RedDog (talk) 13:13, 7 December 2013 (UTC)
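To make the quoted passage concrete: a "rule based representation scheme" of the kind MYCIN used encodes diagnostic knowledge as if-then associations that an inference engine chains together. Here is a minimal sketch in Python; the rule contents, symptoms, and disease names are invented for illustration and are not MYCIN's actual knowledge base.

```python
# MYCIN-style rule-based representation: knowledge lives in the rules,
# and a generic inference procedure (forward chaining) applies them.
# All symptom/disease names below are hypothetical examples.

rules = [
    # (premises, conclusion): if every premise is a known fact, conclude.
    ({"fever", "stiff_neck"}, "possible_meningitis"),
    ({"possible_meningitis", "positive_culture"}, "bacterial_meningitis"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts,
    until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

The point the article is gesturing at: the same diagnostic knowledge could instead be buried in hand-written procedural code, but the rule formulation keeps it inspectable and separable from the inference machinery.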
Lots of errors
After a quick read, I noticed a number of factual errors in this article. I've marked it with "expert" and "nofootnotes" tags to warn the reader. I will fix them eventually, but I have a lot on my plate. Are the original authors still watching this article? ---- CharlesGillingham (talk) 11:30, 21 November 2007 (UTC)
- I'm just starting to review it and plan on doing some major reworking. I agree it's not in great shape now. RedDog (talk) 13:11, 7 December 2013 (UTC)
I took a pass at a revised introduction based on some KR lectures I've given. I think the article should be revised around the notions of expressivity and complexity, with some links possibly to separate topics on general problems in KR (and in some cases how they were solved), for instance the frame problem, the symbol grounding problem, etc. Gorbag42 (talk) 18:51, 22 September 2008 (UTC)
- There is an article on the Frame problem, FYI. I agree it may be worth a mention, but it's rather tangential to the main issues of KR as discussed in mainstream AI, IMO. RedDog (talk) 13:09, 7 December 2013 (UTC)
"Science has not yet completely described the internal mechanisms of the brain to the point where they can simply be replicated by computer programmers."
The word 'yet' implies that science will or is capable of completely describing the internal mechanisms of the brain. Certainly there is no scientific basis for this implication.
Even if we concede that it can, the further implication is that computer programmers would be able to replicate them. This is even more dubious than the first assertion. —Preceding unsigned comment added by 22.214.171.124 (talk) 06:47, 12 December 2009 (UTC)
- Just to state my bias: I think they can and will eventually be "replicated", but I agree we are so far from that that it's just pointless speculation, and there is no reason to include it. Also, most of the leading people in AI would be careful not to make such grandiose and vague claims. RedDog (talk) 13:08, 7 December 2013 (UTC)
This article was renamed from "knowledge representation" to "knowledge representation and reasoning". If the reasoning part is to be kept, then a merge with automated reasoning should be considered. Representation and reasoning are inherently related but they seem to be split into different academic communities/conferences etc. Any opinions? pgr94 (talk) 18:16, 21 June 2010 (UTC)
- Strongly oppose merging with automated reasoning. The two terms are related for sure, but they don't mean the same thing at all. Automated reasoning is a vaguer term that usually refers to things like theorem provers, inference engines, etc. Knowledge representation refers more to languages such as LOOM, KL-ONE, KEE, etc.; it is more about the structure of the data than the inferencing. I think we should just drop the "and reasoning"; the term people use in AI is just Knowledge Representation. RedDog (talk) 13:05, 7 December 2013 (UTC)
Low quality article
- I actually didn't think it was quite THAT bad, but I'm grading on a curve: I've found a few articles in the AI and OO space that were really bad. There are some articles with code examples for OO that are just plain wrong, written as if by someone who wanted to say "here is what people who don't understand OO write as their first program." But I digress. Anyway, I've redone the Overview and plan to redo more of the article as well. MadScientistX11 (talk) 15:38, 24 December 2013 (UTC)
Plan to do some major rewriting of this article
I'm currently working on another related article which is smaller and easier to fix, but when I finish that one I plan to work on this one. Just giving anyone watching this page a heads up in case they want to start or restart some discussion. So far I've just taken a quick look, but I think this article needs lots of work. I am an expert in the field: I've been a principal investigator for DARPA, USAF, and NIST research projects and worked in the group doing KR research at the Information Sciences Institute. I think this article should just be called Knowledge Representation; there is no need for the "reasoning" (reasoning is what you do with KR, but when people talk about KR they usually just use those two words). More later. RedDog (talk) 13:02, 7 December 2013 (UTC)
Expressivity: I think this is wrong as currently stated
The overview section of the article currently says: "The more expressive a KR, the easier and more compact it is to express a fact or element of knowledge within the semantics and grammar of that KR. However, more expressive languages are likely to require more complex logic and algorithms to construct equivalent inferences. A highly expressive KR is also less likely to be complete and consistent." I think this is wrong. By more expressive I assume is meant "closer to a complete representation of First Order Logic". That is the holy grail for KR, held up by researchers as the ideal we can never quite achieve in reality but are aiming to get as close to as feasible (e.g., see Brachman's papers on the topic in his book Readings in Knowledge Representation). But the more expressive a language is, the less complex any individual statement needs to be, because you can't get more expressive than FOL. It's true that understanding HOW TO USE the system may be more complicated, but that is a different issue from the complexity of any specific statement in the language. The same goes for completeness and consistency: the closer you get to FOL, the MORE likely it is that you can automate things like completeness and consistency checking. The problem is that if you have full FOL, then we know (it's been proven mathematically) that there will be some expressions (e.g., quantification over infinite sets) whose evaluation can never terminate even in theory, and hence if you try to prove the completeness or correctness of a system containing such statements your program won't terminate. Again, this is all covered by Brachman in the book I mentioned, which is in my experience one of the best collections of influential KR papers. I plan to change this but wanted to document the issue in case anyone wants to discuss before I edit the article. MadScientistX11 (talk) 22:03, 23 December 2013 (UTC)
- I've rewritten it to make it more accurate and have added several references to classic papers on the topic of KR in AI. One aspect that I would like to see added at some point is something on the neural net view of things. I think those guys refer to neural networks as a "knowledge base" as well, and their approach to representing knowledge is diametrically opposite to the symbolic AI view represented so far. But I don't know as much about the neural net guys, so right now I'm not going to write that. I've been reading up on the topic, and if I feel competent to add something later I will. MadScientistX11 (talk) 15:35, 24 December 2013 (UTC)
Reference number four
Currently reference number four is a link to this site: http://aitopics.org/ This is just a general site with AI papers; it's unclear what specific paper is being referenced, if one ever was, so the ref is really meaningless and I'm going to delete it. MadScientistX11 (talk) 04:26, 24 December 2013 (UTC)
Ontology Engineering section
I've now rewritten the entire article except for the Ontology Engineering section, which I left with some minor edits. There is a big chunk of the Ontology Engineering section that I still don't necessarily agree with; it says:
"As a second example, medical diagnosis viewed in terms of rules (e.g., MYCIN) looks substantially different from the same task viewed in terms of frames (e.g., INTERNIST). Where MYCIN sees the medical world as made up of empirical associations connecting symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical diseases, to be matched against the case at hand."
Now, saying whether something is "substantially different" is a judgement call, and I don't think there can be a black or white answer. But it seems to me that this argument actually contradicts the point made earlier in the same section. That point (which I agree with) is that frames, rules, objects, semantic nets, Lisp code, etc. don't ultimately matter; what matters is the actual knowledge. My guess is that if we actually looked at the way INTERNIST and MYCIN work, the medical concepts underneath them are essentially the same. What is different is the knowledge representation scheme, which I thought the section was arguing earlier is really not that critical. Actually, that isn't true either: try implementing complex rules in C code. It can be done, but it will take a lot longer. I think these issues need to be teased apart better than in the current section, but I'm not sure how to do that right now, so I'm leaving it and documenting the issue in case someone else agrees and wants to give it a shot. MadScientistX11 (talk) 16:52, 25 December 2013 (UTC)
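A sketch of the rules-versus-frames contrast the quoted passage draws (an invented example, not MYCIN's or INTERNIST's actual content): the same underlying associations, but the rule view fires only on an exact premise match while the prototype view scores partial overlap with the case at hand.

```python
# Rule view (MYCIN-style): empirical association from symptoms to disease.
# All symptom/disease names are hypothetical.
rules = [({"fever", "cough", "chest_pain"}, "pneumonia")]

def rules_diagnose(symptoms, rules):
    """A rule concludes its disease only if ALL its premises are present."""
    return {disease for premises, disease in rules if premises <= symptoms}

# Frame view (INTERNIST-style): a prototypical disease matched against
# the case; a sufficient overlap with the prototype counts as a match.
frames = {"pneumonia": {"fever", "cough", "chest_pain"}}

def frames_diagnose(symptoms, frames, threshold=2):
    """Return diseases whose prototype overlaps the case enough."""
    return {d for d, proto in frames.items()
            if len(proto & symptoms) >= threshold}

case = {"fever", "cough"}
# Same medical knowledge, different behaviour at the edges: the rule does
# not fire (chest_pain is missing), but the prototype still matches.
assert rules_diagnose(case, rules) == set()
assert frames_diagnose(case, frames) == {"pneumonia"}
```

This is roughly the sense in which the two schemes "look substantially different" even when the concepts underneath are the same, which is the tension the comment above is pointing at.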