Talk:Artificial intelligence

Peer review: reviewed August 6, 2009.

This article is of interest to the following WikiProjects:

  • WikiProject Robotics (rated B-class, Top-importance)
  • WikiProject Technology (rated B-class)
  • WikiProject Philosophy (rated B-class, High-importance)
  • WikiProject Linguistics (rated B-class, High-importance; supported by the Philosophy of language task force)
  • WikiProject Computing (rated B-class, High-importance)
  • WikiProject Cognitive science (rated C-class, Top-importance)
  • WikiProject Software / Computing (rated B-class, High-importance; supported by WikiProject Computing)
  • WikiProject Computer science (rated B-class, Top-importance)
  • WikiProject Systems (rated B-class, High-importance; within the field of Cybernetics)
  • WikiProject Religion (rated B-class, Top-importance)
  • WikiProject Human Computer Interaction (not yet rated for quality; High-importance)

This article has been reviewed by the Wikipedia Version 1.0 Editorial Team, selected for Version 0.7 and subsequent release versions, rated B-class on the quality scale, and designated a vital article.

Ongoing issues

Length

I argue that this is a WP:Summary article of a large field, and that therefore it is okay that it runs a little long. Currently, the article text is at around ten pages, but the article is not 100% complete and needs more illustrations. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

Todo: Illustration

The article needs a lead illustration and could use more illustrations throughout. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

Thanks to User:pgr94, the article is 70% illustrated. Almost there. ---- CharlesGillingham (talk) 00:03, 16 June 2011 (UTC)
Main illustration doesn't provide an actual example of an artificial intelligence, just a robot capable of mimicking human actions in a certain area (namely, sport) — Preceding unsigned comment added by 86.163.226.52 (talk) 15:37, 4 August 2011 (UTC)
I've reverted the addition of a picture of the replica of a fictional AI to the lead. I don't think this fits the focus of this article which is about real-life AI endeavours (see for example comments in #Some definitions of AI). --Mirokado (talk) 13:15, 25 October 2014 (UTC)

Todo: Applications

The "applications" section does not give a comprehensive overview of the subject. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

Todo: Topics covered by major textbooks, but not this article

I can't decide if these are worth describing (in just a couple of sentences) or not. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

  1. Could use a tiny section on symbolic learning methods, such as explanation-based learning, relevance-based learning, inductive logic programming, and case-based reasoning.
  2. Could use a tiny section on knowledge representation tools, such as semantic nets, frames, and scripts.
  3. Control theory could use a little filling out with other tools used for robotics.
  4. Should mention Constraint satisfaction (under search). Discussion below, at Talk:Artificial intelligence/Archive 4#Constraint programming.
  5. Should mention the Frame problem in a footnote at least. ---- CharlesGillingham (talk) 19:52, 3 February 2011 (UTC)

Todo: redlinks and tags

  1. Where can we link Belief calculus? Does this include Dempster-Shafer theory (according to R&N)? I think that's more or less deprecated. Does R&N include expectation-maximization algorithm as a kind of belief calculus? I don't think so. Where is this in Wikipedia?
  2. There are still several topics with no source: Subjective logic, Game AI, etc. All are tagged in the article. ---- CharlesGillingham (talk) 19:59, 3 February 2011 (UTC)

Goals

I think a high-level listing of AI's goals (from which more specific problems inherit) is needed; for instance "AI attempts to achieve one or more of: 1) mimicking living structure and/or internal processes, 2) replacing a living thing's external function, using a different internal implementation, 3) ..." At one point in the past, I had 3 or 4 such disjoint goals stated to me by someone expert in AI. I am not one, however. DouglasHeld (talk) 00:11, 26 April 2011 (UTC)

We'd need a reliable source for this, such as a major AI textbook. ---- CharlesGillingham (talk) 16:22, 26 April 2011 (UTC)

"Human-like" intelligence[edit]

I object to the phrase "human-like intelligence" being substituted here and elsewhere for "intelligence". This is too narrow and is out of step with the way many leaders of AI describe their own work. This only describes the work of a small minority of AI researchers.

  • AI founder John McCarthy (computer scientist) argued forcefully and repeatedly that AI research should not attempt to create "human-like intelligence", but instead should focus on creating programs that solve the same problems that humans solve by thinking. The programs don't need to be human-like at all, so long as they work. He felt AI should be guided by logic and formalism, rather than psychological experiments and neurology.
  • Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
  • Stuart Russell and Peter Norvig (authors of the leading AI textbook) dismiss the Turing Test as irrelevant, because they don't see the point in trying to create human-like intelligence. What we need is the intelligence it takes to solve problems, regardless of whether it's human-like or not. They write "airplanes are tested by how well they fly, not by how they can fool other pigeons into thinking they are pigeons."
  • They also object to John Searle's Chinese room argument, which claims that machine intelligence can never be truly "human-like", but at best can only be a simulation of "human-like" intelligence. They write "as long as the program works, [we] don't care if you call it a simulation or not." I.e., they don't care if it's human-like.
  • Russell and Norvig define the field in terms of "rational agents" and write specifically that the field studies all kinds of rational or intelligent agents, not just humans.

AI research is primarily concerned with solving real-world problems, problems that require intelligence when they are solved by people. AI research, for the most part, does not seek to simulate "human-like" intelligence, unless doing so helps to achieve this fundamental goal. Although some AI researchers have studied human psychology or human neurology in their search for better algorithms, this is the exception rather than the rule.

I find it difficult to understand why we want to emphasize "human-like" intelligence. As opposed to what? "Animal-like" intelligence? "Machine-like" intelligence? "God-like" intelligence? I'm not really sure what this editor is getting at.

I will continue to revert the insertion "human-like" wherever I see it. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)

Completely agree. The above arguments are good. Human-like intelligence is a proper subset of intelligence. The editor seems to be confusing "Artificial human intelligence" and the much broader field of "artificial intelligence". pgr94 (talk) 10:12, 11 June 2014 (UTC)

One more thing: the phrase "human-like" is an awkward neologism. Even if the text were written correctly, it would still read poorly. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)

To both editors, WP:MOS requires that the Lead section only contain material which is covered in the main body of the article. At present, the five items which you outline above are not contained in the main body of the article but only on Talk. The current version of the Lead section accurately summarizes the main body of the article in its current state. FelixRosch (talk) 14:54, 23 July 2014 (UTC)
Neither the article (nor any of the sources) defines AI using the term "human-like" to specify the exact kind of intelligence that it studies. Thus the addition of the term "human-like" absolutely does not summarize the article. I think the argument from WP:SUMMARY is actually a very strong argument for striking the term "human-like".
I still don't understand the distinction between "human-like" intelligence and the other kind of intelligence (whatever it is), and how this applies to AI research. Your edit amounts to the claim that AI studies "human-like" intelligence and NOT some other kind of intelligence. It is utterly not clear what this other kind of intelligence is, and it certainly does not appear in the article or the sources, as far as I can tell. It would help if you explained what it is you are talking about, because it makes no sense to me and I have been working on, reading and studying AI for something like 34 years now. ---- CharlesGillingham (talk) 18:23, 1 August 2014 (UTC)
Also, see the intro to the section Approaches and read footnote 93. This describes specifically how some AI researchers are opposed to the idea of studying "human-like" intelligence. Thus the addition of "human-like" to the intro not only does not summarize the article, it actually claims the opposite of what the body of the article states, with highly reliable sources. ---- CharlesGillingham (talk) 18:34, 1 August 2014 (UTC)
That's not quite what you said in the beginning of this section. Also, your two comments on 1 August seem to be at odds with each other. Either you are saying that there is nothing other than human-like intelligence, or you wish to introduce material to support the opposite. If you wish to develop the material into the body of the article following your five points at the start of this section, then you are welcome to try to post them in the text prior to making changes in the Lead section. WP policy is that material in the Lede must be first developed in the main body of the article, which you have not done. FelixRosch (talk) 16:35, 4 September 2014 (UTC)
As I've already said, the point I am making is already in the article.
"Human-like" intelligence is not in the article. Quite the contrary.
The article states that this is long standing question that AI research has not yet answered: "Should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?"
And the accompanying footnote makes the point in more detail:
"Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[1]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006)."
This proves that the article does not state that AI studies "human-like" intelligence. It states, very specifically, that AI doesn't know whether to study human-like intelligence or not. ---- CharlesGillingham (talk) 03:21, 11 September 2014 (UTC)

Human-like intelligence is the subject of each of the opening eight sections including "Natural language"

As the outline of this article plainly shows in its opening eight sections, each one of the eight sections of this page is explicitly about 'human-like' intelligence. This fact should be reflected in the Lede as well. The first eight sections are all devoted to human-like intelligence. In the last few weeks you have taken several differing positions. First you were saying that there is nothing other than human-like intelligence, then you wished to introduce multiple references to support the opposite, and now you appear to wish to defend an explicitly strong-AI version of your views against 'human-like' intelligence. You are expected on the basis of good faith to make your best arguments up front. The opening eight sections are all devoted to human-like intelligence, even to the explicit numbering of natural language communication in the list. There is no difficulty if you wish to write your own new page for "Strong-AI" and only Strong-AI. If you like, you can even ignore the normative AI perspective on your version of a page titled "Strong-AI". That, however, is not the position represented on the general AI page, which in its first eight sections is predominantly oriented to human-like intelligence. FelixRosch (talk) 16:18, 11 September 2014 (UTC)

(Just to be clear: (1) I did not say there is nothing other than human-like intelligence. I don't know where you're getting that. (2) I find it difficult to see how you could construe my arguments as being in favor of research into "strong AI" (as in artificial general intelligence) or as an argument that machines that behave intelligently must also have consciousness (as in the strong AI hypothesis). As I said in my first post, AI research is about solving problems that require intelligence when solved by people. And more to the point: the solutions to these problems are not, in general, "human-like". This is the position I have consistently defended. (3) I have never shared my own views in this discussion, only the views expressed by AI researchers and this article. ---- CharlesGillingham (talk) 05:19, 12 September 2014 (UTC))
Hello Felix. My reading of the sections is not the same. Could you please quote the specific sentences you are referring to. I have reverted your edit as it is rather a narrow view of AI that exists mostly in the popular press, not the literature. pgr94 (talk) 18:28, 11 September 2014 (UTC)
Hello Pgr94; This is the list of the eight items which start off the article: 2.1 Deduction, reasoning, problem solving 2.2 Knowledge representation 2.3 Planning 2.4 Learning 2.5 Natural language processing (communication) 2.6 Perception 2.7 Motion and manipulation 2.8 Long-term goals. Each of these items is oriented to human-like intelligence. I have also emphasized 2.5, Natural language processing, as specifically unique to humans alone. Please clarify if this is the same outline that should appear on your screen. Of the three approaches to artificial intelligence, weak-AI, Strong-AI, and normative AI, you should specify which one you are endorsing prior to reverting. My point is that the Lede should be consistent with the body of the article, and that it should not change until the new material is developed in the main body of the article. Human-like intelligence is what all of the opening 8 sections are about. Make the Lede consistent with the contents of the article, following WP:MoS. FelixRosch (talk) 20:11, 11 September 2014 (UTC)
It seems you just listed the sections rather than answer my query. Never mind.
The article is not based on human-like intelligence as you seem to be suggesting. If you look at animal cognition you will see that reasoning, planning, learning and language are not unique to humans. Consider also swarm intelligence and evolutionary algorithms that are not based on human behaviour. To say that the body of the article revolves around human-like intelligence is therefore inaccurate.
If you still disagree with both Charles and myself, may I suggest working towards consensus here before adding your change as I don't believe your change to the lede reflects the body of the article. pgr94 (talk) 23:51, 11 September 2014 (UTC)
All of the intelligent behaviors you listed above can be demonstrated by very "inhuman" programs. For example, a program can "deduce" the solution of a Sudoku puzzle by iterating through all of the possible combinations of numbers and testing each one. A database can "represent knowledge" as billions of nearly identical individual records. And so on. As for natural language processing, this includes tasks such as text mining, where a computer searches millions of web pages looking for a set of words and related grammatical structures. No human could do this task; a human would approach the problem a completely different way. Even Siri's linguistic abilities are based mostly on statistical correlations (using things like support vector machines or kernel methods) and not on neurology. Siri depends more on the mathematical theory of optimization than it does on our understanding of the way the brain processes language. ---- CharlesGillingham (talk) 05:19, 12 September 2014 (UTC)
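As a concrete illustration of the generate-and-test search described in the comment above, here is a minimal Python sketch (illustrative only, not from the article or its sources): it "deduces" a Sudoku solution simply by trying candidate digits and backtracking, with nothing human-like about the process.

    def solve(grid):
        """Brute-force generate-and-test: grid is a 9x9 list of lists, 0 = blank."""
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for digit in range(1, 10):
                        if valid(grid, r, c, digit):
                            grid[r][c] = digit
                            if solve(grid):
                                return True
                            grid[r][c] = 0  # undo and try the next candidate
                    return False  # no digit fits here: backtrack
        return True  # no blanks left: solved

    def valid(grid, r, c, digit):
        """Check the row, column, and 3x3 box constraints."""
        if digit in grid[r] or digit in (grid[i][c] for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
        return all(grid[br + i][bc + j] != digit
                   for i in range(3) for j in range(3))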
@Pgr94; Your comment appears to state that because there are exceptions to the normative reading of AI, you can therefore justify changes to the Lede to reflect these exceptions. WP:MoS is the exact opposite of this, where the Lede is required to give only a summary of material already used to describe the field covered in the main body of the article. No difficulty if you want to cover the exceptions in the main body of the article, and you can go ahead and do so as long as you cite your additions according to wikipedia policy for being verifiable. The language used in section 2.1 is "that humans use when they solve puzzles...", and this is consistent for the other sections I have already enumerated for human-like intelligence. This article in its current form is overwhelmingly oriented to human-like intelligence, applied normatively to establish the goals of AI. Arguing the exception can be covered in the main body but does not belong in the Lede according to wikipedia policy. @CharlesGillingham; You appear now to be devoted to the Strong-AI position to support your answers. This is only one version of AI, and it is not the principal one covered in the main body of this article, which covers the goal of producing human-like intelligence and its principal objectives. Strong-AI, Weak-AI, and normative AI are three versions, and one should not be used to bias attention away from the main content of this article, which is the normative AI approach as discussed in each of the opening 8 sections. No difficulty if you want to bring in the material to support your preference for Strong-AI in the main body of the article. Until you do so, the Strong-AI orientation should not affect what is represented in the Lede section. Wikipedia policy is that only material in the main body of the article may be used in the Lede. FelixRosch (talk) 16:10, 12 September 2014 (UTC)
I have no idea what you mean by "Strong AI" in the paragraph above. I am defending the positions of John McCarthy, Rodney Brooks, Peter Norvig and Stuart Russell, along with most modern AI researchers. These researchers advocate logic, nouvelle AI and the intelligent agent paradigm (respectively). All of these are about as far from strong AI as you can get, in either of the two normal ways the term is used. So I have to ask you: what do you mean when you say "strong AI"? It seems very strange indeed to apply it to my arguments.
I also have no idea what you mean by "normative AI" -- could you point to a source that defines "strong AI", "weak AI" and "normative AI" in the way you are using them? My definitions are based on the leading AI textbooks, and they seem to be completely different than yours.
Finally, you still have not addressed any of the points that Pgr94 and I have brought up -- if, as you claim, AI research is trying to simulate "human-like" intelligence, why do most major researchers reject "human-like" intelligence as a model or a goal, and why are so many of the techniques and applications based on principles that have nothing to do with human biology or psychology? ---- CharlesGillingham (talk) 04:02, 14 September 2014 (UTC)
You still have not responded to my quote in bold face above that the references in all 8 (eight) opening sections of this article all refer to human comparisons. You should read them, since you appear to be sidestepping the wording which they are using and as I have quoted it above. You now have two separate edits in two forms. These are two separate edits and you should not be automatically reverting them without discussion first. The first one is my preference, and I can continue this Talk discussion until you start reading the actual contents of all eight opening sections, which detail human-like intelligence. The other edit is restored since there is no reason not to include the mention of the difference of general AI from strong AI and weak AI. Your comment on strong AI seems contradicted by your own editing of the very page (disambiguation page) for it. The related pages, John Searle, etc., are all oriented to discussion of human comparisons of intelligence, as clearly stated on these links. Strong artificial intelligence, or Strong AI, may refer to: Artificial general intelligence, a hypothetical machine that exhibits behavior at least as skillful and flexible as humans do, and the research program of building such an artificial general intelligence; and Computational theory of mind, the philosophical position that human minds are (or can be usefully modeled as) computer programs. This position was named "strong AI" by John Searle in his Chinese room argument. Each of these links supports human-like intelligence comparisons as basic to understanding each of these terms. FelixRosch (talk) 15:21, 15 September 2014 (UTC)

All I'm saying is this: major AI researchers would (and do) object to defining AI as specifically and exclusively studying "human-like" intelligence. They would prefer to define the field as studying intelligence in general, whether human or not. I have provided ample citations and quotations to prove that this is the case. If you can't see that I have proved this point, then we are talking past each other. Repeatedly trying to add "human" or "human-like" or "human-ish" intelligence to the definition is simply incorrect.

I am happy to get WP:Arbitration on this matter, if you like, as long as it is understood that I only check Wikipedia once a week or so.

Re: many of the sections which define the problem refer to humans. This does not contradict what I am saying and does not suggest that Wikipedia should try to redefine the field in terms of human intelligence. Humans are the best example of intelligent behavior, so it is natural that we should use humans as an example when we are describing the problems that AI is solving. There are technical definitions of these problems that do not refer to humans: we can define reasoning in terms of logic, problem solving in terms of abstract rational agents, machine learning in terms of self-improving programs and so on. Once we have defined the task precisely and written a program that performs it to any degree, we're no longer talking about human intelligence -- we're talking about intelligence in general and machine intelligence in particular (which can be very "inhuman", as I demonstrated in an earlier post). A minimal sketch of what such a non-human definition looks like in practice appears after this comment.

Re: strong AI. Yes, strong AI (in either sense) is defined in terms of human intelligence or consciousness. However, I am arguing that major AI researchers would prefer not to use "human" intelligence as the definition of the field, a position which points in the opposite direction from strong AI; the people I am arguing on behalf of are generally uninterested in strong AI (as Russell and Norvig write "most AI researchers don't care about the strong AI hypothesis"). So it was weird that you wrote I was "devoted to the Strong-AI position". Naturally, I wondered what on earth you were talking about.

The term "weak AI" is not generally used except in contrast to "strong AI", but if we must use it, I think you could characterize my argument as defending "weak AI"'s claim to be part of AI. In fact, "strong AI research" (known as artificial general intelligence) is a very small field indeed, and "weak AI" (if we must call it that) constitutes the vast majority of research, with thousands of successful applications and tens of thousands of researchers. ---- CharlesGillingham (talk) 00:35, 20 September 2014 (UTC)

Undid revision 626280716. WP:MoS requires the Lede to be consistent with the main body of the article. The previous version of the Lede is inconsistent between the 1st and 4th paragraphs on human-like intelligence. The current version is consistent. Each one of the opening sections is also based one-for-one on direct emulation of human-like intelligence. You may start by explaining why you have not addressed the fact that each of the opening 8 (eight) sections is a direct comparison to human-like intelligence. Also, please stop your personal attacks by posting odd variations on my reference to the emulation of human-like intelligence. Your deliberate contortion of this simple phrase to press your own minority view of weak-AI is against wikipedia policy. Page count statistics also appear to favor the mainstream version of human-like intelligence which was posted, and not your minority weak-AI preference. Please stop edit warring, and please stop violating MoS policy and guidelines for the Lede. The first paragraph, like the fourth paragraph already in the Lede, must be consistent with and a summary of the material in the main body of the article, and not your admitted preference for the minority weak-AI viewpoint. FelixRosch (talk) 14:41, 20 September 2014 (UTC)
In response to your points above (1) I have "addressed the fact that each of the opening 8 (eight) sections is a direct comparison to human-like intelligence". It is in the paragraph above which begins with "Re: many of the sections which define the problem refer to humans.". (2) It's not a personal attack if I object every time you rephrase your contribution. I argue that the idea is incorrect and unsourced; the particular choice of words does not remove my objection. (3) As I have said before, I am not defending my own position, but the position of leading AI researchers and the vast majority of people in the field.
Restating my position: The precise, correct, widely accepted technical definition of AI is "the study and design of intelligent agents", as described in all the leading AI textbooks. Sources are in the first footnote. Leading AI researchers and the four most popular AI textbooks object to the idea that AI studies human intelligence (or "emulates" or "simulates" "human-like" intelligence).
Finally, with all due respect, you are edit warring. I would like to get WP:Arbitration. ----
I support getting arbitration. User:FelixRosch has not added constructively to this article and is pushing for a narrow interpretation of the term "artificial intelligence" which the literature does not support. Strong claims need to be backed up by good sources which Rosch has yet to do. Instead s/he appears to be cherrypicking from the article and edit warring over the lede. The article is not beyond improvement, but this is not the way to go about it. pgr94 (talk) 16:52, 20 September 2014 (UTC)
Pgr94 has not been part of this discussion for over a week, and the same suggestion is being made here, that you or CharlesG are welcome to try to bring in any cited material you wish to in order to support the highly generalized version of the Lede sentence which you appear to want to support. Until you bring in that material, WP:MoS is clear that the Lede is only supposed to summarize material which exists in the main body of the article. User:CharlesG keeps referring abstractly to multiple references he is familiar with and continues not to bring them into the main body of the article first. WP:MoS requires that you develop your material in the main body of the article before you summarize it in the Lede section. Without that material you cannot support an overly generalized version of the Lede sentence. The article in its current form, in all eight (8) of its opening sections is oriented to human-like intelligence (Sections 2.1, 2.2, ..., 2.8). Also, the fourth paragraph in the Lede section now firmly states that the body of the article is based on human intelligence as the basis for the outline of the article and its contents. According to WP:MoS for the Lede, your new material must be brought into the main body of the article prior to making generalizations about it which you wish to place in the Lede section. FelixRosch (talk) 19:45, 20 September 2014 (UTC)
As I have said before, the material you are requesting is already in the article. I will quote the article again:
From the lede: Major AI researchers and textbooks define this field as "the study and design of intelligent agents"
First footnote: Definition of AI as the study of intelligent agents:
Comment: Note that an intelligent agent or rational agent is (quite deliberately) not just a human being. It's more general: it can be a machine as simple as a thermostat or as complex as a firm or nation.
From the section Approaches:
A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?
From the corresponding footnote:
Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes John McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[2]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).
Comment All of these sources (and others; Rodney Brooks' Elephants Don't Play Chess paper should also be cited) are part of a debate within the field that lasted from the 1960s to the 90s, and was mostly settled by the "intelligent agent" paradigm. The exceptions would be the relatively small (but extremely interesting) field of artificial general intelligence research. This field defines itself in terms of human intelligence. The field of AI, as a whole, does not.
This article has gone to great pains to stay in synch with leading AI textbooks, and the leading AI textbook addresses this issue (see chpt. 2 of Russell & Norvig's textbook), and comes down firmly against defining the field in terms of human intelligence. Thus "human" does not belong in the lead.
I have asked for dispute resolution. ---- CharlesGillingham (talk) 19:07, 21 September 2014 (UTC)

Arbitration?

Why is anyone suggesting that arbitration might be in order? Arbitration is the last step in dispute resolution, and is used when user conduct issues make it impossible to resolve a content dispute. There appear to be content issues here, such as whether the term "human-like" should be used, but I don't see any evidence of conduct issues. That is, it appears that the editors here are being civil and are not engaged in disruptive editing. I do see that a thread has been opened at the dispute resolution noticeboard, an appropriate step in resolving content issues. If you haven't tried everything else, you don't want arbitration. Robert McClenon (talk) 03:08, 21 September 2014 (UTC)

You're right, dispute resolution is the next step. I have opened a thread. (Never been in a dispute that we couldn't resolve ourselves before ... the process is unfamiliar to me.) ---- CharlesGillingham (talk) 19:08, 21 September 2014 (UTC)
I am now adding an RFC, below. ---- CharlesGillingham (talk) 04:58, 23 September 2014 (UTC)

Alternate versions of lede

In looking over the recent discussion, it appears that the basic question is what should be in the article lede paragraph. Can each of the editors with different ideas provide a draft for the lede? If the issue is indeed over what should be in the lede, then perhaps a content Request for Comments might be an alternative to formal dispute resolution. Robert McClenon (talk) 03:24, 21 September 2014 (UTC)

Certainly. I would like the lede to read more or less as it has since 2008 or so:

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also an academic field of study. Major AI researchers and textbooks define this field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]

---- CharlesGillingham (talk) 19:12, 21 September 2014 (UTC)
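To make the "intelligent agent" definition quoted in this proposed lede concrete, here is a minimal sketch (illustrative only; the class and method names are invented for the example). The point is that anything mapping percepts to goal-directed actions counts as an agent; a thermostat qualifies, and nothing in the interface requires human-like reasoning.

    class ThermostatAgent:
        """An agent perceives its environment and acts to further its goal."""
        def __init__(self, target_temp):
            self.target = target_temp

        def act(self, perceived_temp):
            # Map the percept (current temperature) to an action.
            if perceived_temp < self.target - 0.5:
                return "heat_on"
            if perceived_temp > self.target + 0.5:
                return "heat_off"
            return "no_op"

    agent = ThermostatAgent(target_temp=20.0)
    print(agent.act(18.2))  # -> heat_on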

We can nitpick this stuff to death, and I'm already resigned that the lede isn't going to be exactly what I think it should be. BTW, some of my comments yesterday were based on my recollection of an older version of the lede; there was so much back and forth editing. I can live with the lede as it currently is, but I don't like the word "emulating". To me "emulating" still implies we are trying to do it the way humans do. E.g., when I emulate DOS on a Windows machine or emulate Lisp on an IBM mainframe. When you emulate, you essentially define some meta-layer and then just run the same software, and you trick it into thinking it's running on platform Y rather than X. I would prefer words like designing or something like that. But it's a minor point. I'm not going to start editing myself because I think there are already enough people going back and forth on this, so just my opinion. --MadScientistX11 (talk) 15:20, 30 September 2014 (UTC)

Follow-Up

Based on a comment posted by User:FelixRosch at my talk page, it appears that the main issue is whether the first sentence of the lede should include "human-like". If that is the issue of disagreement, then the Request for Comments process is appropriate. The RFC process runs for 30 days unless there is clear consensus in less time. Formal dispute resolution can take a while also. Is the main issue the word "human-like"? Robert McClenon (talk) 15:12, 22 September 2014 (UTC)

Yes that is the issue. ---- CharlesGillingham (talk) 16:59, 22 September 2014 (UTC)
I have a substantive opinion, and a relatively strong substantive opinion, but I don't want to say what it is at this time until we can agree procedurally on how to settle the question. I would prefer the 30-day semi-automated process of an RFC rather than the formality of mediation-like formal dispute resolution, largely because it gets a better consensus via publishing the RFC in the list of RFCs and in random notification of the RFC by the bot. Unless anyone has a reason to go with mediation-like dispute resolution, I would prefer to get the RFC moving. Robert McClenon (talk) 21:41, 22 September 2014 (UTC)
I am starting the rfc below. As I said in the dispute resolution, I've never had a problem like this before. ---- CharlesGillingham (talk) 05:54, 23 September 2014 (UTC)

RfC: Should this article define AI as studying/simulating "intelligence" or "human-like intelligence"?

Deleting RFC header as per discussion. New RFC will be posted if required. Robert McClenon (talk) 15:03, 1 October 2014 (UTC)

Argument in favor of "intelligence"[edit]

The article should define AI as studying "intelligence" in general rather than specifically "human-like intelligence" because

  1. AI founder John McCarthy (computer scientist) writes "AI is not, by definition, a simulation of human intelligence", and has argued forcefully and repeatedly that AI should not simulate human intelligence, but should focus on solving problems that people use intelligence to solve.
  2. The leading AI textbook, Russell and Norvig's Artificial Intelligence: A Modern Approach, defines AI as "the study and design of rational agents", a term (like the more common term intelligent agent) which is carefully defined to include simple rational agents like thermostats and complex rational agents like firms or nations, as well as insects, human beings, and other living things. All of these are "rational agents", all of them provide insight into the mechanism of intelligent behavior, and humans are just one example among many. They also write that the "whole-agent view is now widely accepted in the field."
  3. Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
  4. The majority of successful AI applications do not use "human-like" reasoning, and instead rely on statistical techniques (such as bayesian nets or support vector machines), models based on the behavior of animals (such as particle swarm optimization; see the sketch after this list), models based on natural selection, and so on. Even neural networks are an abstract mathematical model that does not typically simulate any part of a human brain. The last successful approach that modeled human reasoning was the expert systems of the 1980s, which are primarily of historical interest. Applications based on human biology or psychology do exist and may one day regain the center stage (consider Jeff Hawkins' Numenta, for one), but as of 2014, they are on the back burner.
  5. From the 1960s to the 1980s there was some debate over the value of human-like intelligence as a model, which was mostly settled by the all-inclusive "intelligent agent" paradigm. (See History of AI#The importance of having a body: Nouvelle AI and embodied reason and History of AI#Intelligent agents.) The exceptions would be the relatively small (but extremely interesting) field of artificial general intelligence research. This sub-field defines itself in terms of human intelligence, as do some individual researchers and journalists. The field of AI, as a whole, does not.
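As promised in point 4, here is a minimal particle swarm optimization sketch (illustrative only; the coefficients are conventional defaults, not taken from any cited source). Candidate solutions move through the search space guided by their own best position and the swarm's best position — a model of flocking behavior, with nothing human-like about it.

    import random

    def pso(f, dim=2, n_particles=20, iters=100, lo=-5.0, hi=5.0):
        """Minimize f over a box by simulating a flock of candidate solutions."""
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]      # each particle's best-known position
        gbest = min(pbest, key=f)[:]     # the swarm's best-known position
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (0.7 * vel[i][d]                         # inertia
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # pull toward own best
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))    # pull toward swarm best
                    pos[i][d] += vel[i][d]
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
                    if f(pbest[i]) < f(gbest):
                        gbest = pbest[i][:]
        return gbest

    print(pso(lambda x: sum(v * v for v in x)))  # converges toward [0.0, 0.0]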

All of these points are made in the article, with ample references:

From the lede: Major AI researchers and textbooks define this field as "the study and design of intelligent agents"
First footnote: Definition of AI as the study of intelligent agents:
Comment: Note that an intelligent agent or rational agent is (quite deliberately) not just a human being. It's more general: it can be a machine as simple as a thermostat or as complex as a firm or nation.
From the section Approaches:
A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?
From the corresponding footnote:
Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes John McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[3]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).

FelixRosch has succeeded in showing that human-like intelligence is interesting to AI research, but not that it defines AI research. Defining artificial intelligence as studying/simulating "human-like intelligence" is simply incorrect; it is not how the majority of AI researchers, leaders and major textbooks define the field. ---- CharlesGillingham (talk) 05:54, 23 September 2014 (UTC)

Comments

I fully support the position presented by User:CharlesGillingham.

User:FelixRosch says The article in its current form, in all eight (8) of its opening sections is oriented to human-like intelligence (Sections 2.1, 2.2, ..., 2.8). I fail to see how sections 2.3 planning, 2.4 learning, 2.6 perception, 2.7 motion and manipulation relate only to humans. Could you please quote the exact wording in each of these sections that give you this impression? pgr94 (talk) 22:18, 23 September 2014 (UTC)

User:FelixRosch says "User:CharlesG keeps referring abstractly to multiple references on the article Talk page (and in this RfC) which he is familiar with, and continues not to bring them into the main body of the article first". The references are already in the article. The material in the table above is cut-and-pasted from the article. ---- CharlesGillingham (talk) 06:12, 24 September 2014 (UTC)

I support the position in favor of "intelligence", for the reasons stated by User:CharlesGillingham. Pintoch (talk) 07:27, 24 September 2014 (UTC)

The origins of the artificial intelligence discipline did largely have to do with "human-like" intelligence. However, much of modern AI, including most of its successes, has had to do with various sorts of non-human intelligence. To restrict this article only to efforts (mostly unsuccessful) at human-like intelligence would be to impoverish the article. Robert McClenon (talk) 03:16, 25 September 2014 (UTC)

  • If I'm understanding the question correctly, the answer is obvious. AI is absolutely not just about studying "human-like" intelligence but intelligence in general, which includes human-like intelligence. I mean there are whole sub-disciplines of AI, the formal methods people in particular, who study mathematical formalisms that are about how to represent logic and information in general, not just human intelligence. To pick one specific example: First Order Logic. People are all over the map on how much FOL relates to human intelligence. Some would say very much, others would say not at all, but I don't think anyone who has worked in AI would deny that FOL and languages based on it are absolutely a part of AI. Or another example is Deep Blue. It performs at the grandmaster level, but some people would argue the way it computes is very different from the way a human does and -- at least in my experience -- the people who code programs like Deep Blue don't really care that much either way; they want to solve hard problems as effectively as possible. The analogy I used to hear all the time was that AI is to human cognition as aeronautics is to bird flight. An aeronautics engineer may study how birds fly in order to better design a plane, but she will never be constrained by how birds do it, because airplanes are fundamentally different. The same goes for computers: human intelligence will definitely impact how we design smart computers, but it won't define it, and AI researchers are not bound to stay within the limits of how humans solve problems. --MadScientistX11 (talk) 15:14, 25 September 2014 (UTC)
  • Comment The sources cited by CharlesGillingham do not all contradict the article. Apparently the current wording is confusing, but:
  • "Human-like" need not be read as "simulating humans". It can also be read as "human-level", which is typically the (or a) goal of AI. Speaking from my field of expertise, all the Bayes nets and neural nets and graphical models in the world are still trying to match hand-labeled examples, i.e. obtain human-level performance, even though the claim that they do anything like human brains is very, very strenuous. (Though I can point you to recent papers in major venues where this is still used as a selling point. Here's one of them.)
  • More importantly, the article speaks of emulating, not simulating, intelligence. Citing Wiktionary, emulation is "The endeavor or desire to equal or excel someone else in qualities or actions" (I don't have my Oxford Learner's Dictionary near, but I assure you the definition will be similar). In other words, emulation can exceed the qualities of the thing being emulated, so there's no need to stop at a human level of performance a priori; and emulation does not need to use "human means", or restrict itself to "cognitively plausible" ways of achieving its goal.
  • The phrase "though other variations of AI such as strong-AI and weak-AI are also studied" seems to have been added by someone who didn't understand the simulation-emulation distinction. I'll remove this right away, as it also just drops two technical terms on the reader without introducing them or explaining the (perceived) distinction with the emulation of human-like intelligence.
I conclude that the RFC is based on false premises (but I have no objection against a better formulation that is more in line with reliable sources). QVVERTYVS (hm?) 09:01, 1 October 2014 (UTC)
The simulation/emulation distinction that you are making does not appear in our most reliable source, Russell and Norvig's Artificial Intelligence: A Modern Approach (the most popular textbook on the topic). They categorize definitions of AI along these orthogonal lines: acting/"thinking" (i.e. behavior vs. algorithm), human/rational (human emulation vs. directed at defined goals), and they argue that AI is most usefully defined in terms of "acting rationally". The same section describes the long-term debate over the definition of AI. (See pgs. 1-5 of the second edition). The argument against defining AI as "human-like" (see my post at the top of the first RfC) is that R&N, as well as AI leaders John McCarthy (computer scientist) and Rodney Brooks all argue that AI should NOT be defined as "human-like". While this does not represent a unanimous consensus of all sources, of course, nevertheless we certainly can't simply bulldoze over the majority opinion and substitute our own. Cutting the word "human-like" gives us a definition that everyone would find acceptable. ---- CharlesGillingham (talk) 00:51, 9 October 2014 (UTC)
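For reference, the four-way categorization described above (Russell & Norvig, 2nd ed.) can be laid out as a table; the "acting rationally" cell is the definition the book adopts:

                       Human-based         Rationality-based
  Thought processes    Thinking humanly    Thinking rationally
  Behavior             Acting humanly      Acting rationally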

Argument in favor of "human-like intelligence"[edit]

Please Delete or Strike RFC

I am requesting that the author of the RFC delete or strike the RFC, because it is non-neutral in its wording. Robert McClenon (talk) 22:30, 23 September 2014 (UTC)

Just as the issue with the article is only with its first sentence, my issue with the RFC is with its first sentence. Robert McClenon (talk) 22:31, 23 September 2014 (UTC)
Let's face it, I don't know how to do this ... is the format above acceptable? Can you help me fix it? ---- CharlesGillingham (talk) 05:32, 24 September 2014 (UTC)
Much better. Robert McClenon (talk) 03:12, 25 September 2014 (UTC)

Neither one

To avoid a possible false dichotomy (and because neither seemed right to me) I will add an area for alternative proposals and make an initial one. Markbassett (talk) 05:27, 3 November 2014 (UTC)

  • Just Go With The Cites. Stop trying to reinvent the words. The general definition for such a prominent field should be quoted from an authoritative external source, not recrafted by random editors. Maybe Webster: "an area of computer science that deals with giving machines the ability to seem like they have human intelligence", or IEEE, or whoever, but get what is out there that folks have been using and stop trying to make up something. That's how it got to the tangled phrasing "study which studies the goal of creating intelligence" from a simple "Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it". Again, just go with whatever the cites say. Markbassett (talk) 05:39, 3 November 2014 (UTC)

AI definition: What is "Strong" vs. "Weak" AI and where is it referenced?

The current definition of AI contrasts "strong" vs. "weak" AI. I'm not familiar with that distinction. Who makes it, and where is it referenced? Also, as a meta-point, I've noticed there seems to be a lot of deference to the Russell and Norvig book on AI. That is only one book, and neither author has the standing of people who have also written general AI books, such as Patrick Winston, Feigenbaum's AI handbook, and others. Here is Winston's definition: "Artificial Intelligence is the study of ideas that enable computers to be intelligent", from Artificial Intelligence by Patrick Winston, p. 1. I think such a simple definition is what we should use to start the article. --MadScientistX11 (talk) 14:59, 29 September 2014 (UTC)

I just saw that Russell and Norvig have a definition of strong vs. weak AI (section 1.5, p. 29). Their definition is that strong AI holds machines can be conscious, while weak AI holds they can't. That is a very different definition from what is currently in the intro text. First of all, I think the whole distinction is unimportant anyway. It matters to people for whom AI is a purely academic discipline, but the people who actually do AI, who build expert systems, ontologies, etc. and use them in the real world, don't care one way or the other. I think that part of the intro definitely needs to be changed. I don't think strong vs. weak is important enough to be mentioned so early on, but if it is, it should at least be consistent with the definition of R&N. --MadScientistX11 (talk) 15:58, 29 September 2014 (UTC)
I agree and have removed this sentence once again. (It may be re-added by FelixRosch shortly, assuming things continue to happen as they have been happening these last few weeks.)
I agree that (1) undefined terms such as strong AI or weak AI should not be in the second sentence because they have not yet been defined for the reader. Defining them correctly would take too much space to put in the lede, thus these terms can't be in the lede. (2) The distinction between different kinds of AI is not important at this point. The highest priority is the widely-accepted academic definition of AI (sentence 3) and the definition intended by the man who coined the term (sentence 4). These are much higher priorities. (3) The term "strong AI" appears at only two points in the article (once as a synonym for artificial general intelligence, and once as the philosophical position identified by John Searle as the strong AI thesis). Thus, given all the material we have to cover in this article, this is a relatively unimportant topic, and it does not belong in the lede because it summarizes such a small fraction of the material in the article. ---- CharlesGillingham (talk) 02:05, 30 September 2014 (UTC)
Not surprisingly, the sentence with strong AI/weak AI has been re-added by Felix Rosch. Feel free to remove it if you agree with me that the sentence doesn't work. ---- CharlesGillingham (talk) 17:00, 30 September 2014 (UTC)

Deep Learning (original section)[edit]

I've noticed a couple of reverts on this topic. I agree with the people who don't want Deep Learning as a separate major sub-heading. If you look at the major AI textbooks, none of them has a chapter heading "Deep Learning", to my recollection. AI is such a broad topic that we need to be sure not to try to have this article cover every single thing that has ever been described as AI, but stick to the major topics. Deep learning merits its own article and a link from this article to it, but not a whole section in this article. Rather than just keep reverting, I think we should try to reach some consensus first, and the advocates of Deep Learning should cite some major AI textbooks that have it as a major topic, or say why they think that is not an appropriate criterion for what things should be covered in this article. --MadScientistX11 (talk) 15:50, 29 September 2014 (UTC)

"Deep learning" is not mentioned in the leading AI textbook, Russell and Norvig's Artificial Intelligence: A Modern Approach. This is why I removed this material, as a five-paragraph section is WP:UNDUE weight for a relatively minor topic. One sentence in the section on neural networks would be appropriate, if anything.
I noticed FelixRosch has reverted my removal ... ---- CharlesGillingham (talk) 02:07, 30 September 2014 (UTC)
@Felix -- if you would like to make an argument against me, now is the time. I will be removing the deep learning section eventually, unless you can provide a convincing argument that it is four times as important as statistical AI, twelve times as important as neural networks, fuzzy computation and evolutionary computation, or as important to the history of AI as symbolic AI. This is the weight this article gives to these sub-fields.
I also remind you that the only thing that counts here is reliable sources, and "deep learning" does not appear in the 1000+ pages of the most popular AI textbook. ---- CharlesGillingham (talk) 03:54, 8 October 2014 (UTC)
Done ---- CharlesGillingham (talk) 15:22, 21 October 2014 (UTC)
Too soon on this deletion. You still have not answered: the previous editor has apparently not read the sixty-three (63) books and articles clearly present on the plainly linked article for "Deep learning". FelixRosch (talk) 16:20, 21 October 2014 (UTC)
There are many subtopics in AI that have hundreds of articles and books written about them. This is an overview article on AI. It's not meant to cover every possible topic in the field. For example: Case-based reasoning, Knowledge-Based Software Engineering, Distributed AI,... All those topics have MORE than 63 books and articles. It's a common problem in these articles that everyone wants their favorite topic to receive special attention. I agree with those who want to remove deep learning as a major subtopic. I don't agree that anyone needs to address the fact that there are 63 books and articles on the topic. It's not a strong argument for inclusion. I think looking at some standard AI textbooks such as R&N and seeing which topics receive major chapters is a much better guideline. --MadScientistX11 (talk) 16:59, 21 October 2014 (UTC)
There is a well-developed DL article. From the reader's perspective there are more valuable things for us to do than battle over the degree of article overlap. It would be one click to the full scoop on DL, or as full as one will find at Wikipedia, anyway. I promise you, exceedingly few Wikipedia readers care about the strength of the DL-AI connection. Wikipedia is not an academic journal, the majority of its readers on this topic are not academics, and the article should not be written for academics. As a non-academic, I am often very disappointed when I go to a Wikipedia article hoping to learn something about a scientific topic, and find the article completely inaccessible to me because it was written by academics who lack a clue how to write for ordinary intelligent people. ‑‑Mandruss (t) 17:07, 21 October 2014 (UTC)
(Also posted below) There was consensus for lowering the weight of "deep learning", based on the fact that it does not appear in the leading AI textbook. I added a one-sentence mention of deep learning in the appropriate section (deep learning is a neural network technique).
Keep in mind that AI is a very large field; we are trying to summarize hundreds of thousands of technical papers, thousands of courses, and thousands of popular books. A few magazine articles are not nearly enough weight to merit that many paragraphs.
I would say that "weight" is the most difficult part of editing a summary article like this one, especially because AI is an interesting topic and there are thousands of "pet theories", "interesting new ideas", "new organizing principles" and "contrarian points of view" out there. Occasionally they get added to this article, and we have to weed them out.
That's why we have used the leading AI textbooks as the gold standard for inclusion in this article. (See Talk:Artificial intelligence/Textbook survey to see how this was originally done.) There's no better way that I can think of to have an objective standard. Russell and Norvig is almost 1000 pages and covers a truly vast amount of information, far more than we could summarize here. We have even skipped topics that have their own journals.
At any rate, "deep learning" is not in those 1000 pages, and I need more than a magazine article to consider it important enough to cover here, but, as a compromise, I added a sentence under neural networks. ---- CharlesGillingham (talk) 01:10, 22 October 2014 (UTC)
@Felix: Re: the 66 footnotes in the article deep learning. Do any of these contain the assertion that "deep learning" is more important to AI than, say, statistical techniques in general? This is what your edit asserts, because this is the weight you are giving to the topic. In fact, the question doesn't even make sense, because deep learning is defined by Hinton in terms of neural networks, and neural networks are a subset of both statistical and sub-symbolic AI. Again, to be frank, it doesn't seem to me that you are familiar enough with the subject to be casually reverting a good-faith, consensus-derived edit. ---- CharlesGillingham (talk) 01:24, 22 October 2014 (UTC)

Discussion about this section has been continued below (#Deep Learning). --Mirokado (talk) 17:40, 23 October 2014 (UTC)

Some definitions of AI[edit]

Since we still seem to need a consensus on how to define AI, I thought it would be worthwhile to post a few definitions from some of the classic textbooks:

  • "Artificial Intelligence is the study of ideas that enable computers to be intelligent" from Artificial Intelligence by Patrick Wilson p.1.
  • "The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right" Russel and Norvig AI A Modern Approach p. 3
  • "Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is systems we associate with intelligence in human behavior: understanding language, learning, reasoning..." AI Handbook Barr and Feigenbaum https://archive.org/stream/handbookofartific01barr#page/n19/mode/2up

I like the Barr and Feigenbaum definition the best. Note two things, though: one, EVERYONE describes it as "the study of", not as the intelligence itself, which is in contrast with the definition here; and two, NONE of them says anything about being constrained by the way humans solve problems. Again, I like the Feigenbaum one best because it makes the valid point, which is similar to what is there now but importantly different: making computers do things that are thought of as human intelligence IS AI, but without being constrained by the WAY humans do those things. --MadScientistX11 (talk) 16:29, 29 September 2014 (UTC)

These are definitions of the academic field of AI research, i.e. "the study of". I am fine with restricting the definition to describe only the academic field, if everyone thinks that's best. Some years ago, we had something like this: "Artificial intelligence is a branch of computer science which studies intelligent machines and software," i.e., the definition was strictly about the academic field.
I think that there are actually two other uses of the term outside of academic AI, but we can choose to ignore this if we want, because the article is definitely about academic AI, and not about science fiction or other popular sources. The other two uses are: (2) the intelligence of machines or software, and (3) an intelligent machine or program (this usage is common in gaming and science fiction). The article has, for the last several years, started with (2) and ignored (3).
Feel free to try to fix this. ---- CharlesGillingham (talk) 02:26, 30 September 2014 (UTC)
When you say "the article is about academic AI" that's partly true but AI is one of those concepts like distributed computing that has both a strong academic and a strong industry flavor. My background is in both btw, I've worked in the AI group of a Major Consulting firm as well as doing research for DARPA and USAF. And where I'm coming from with some of my comments is more from the industry side. It's my industry experience that makes me say that the whole "is it just about human intelligence" is just a no brainer. People who aren't academics NEVER think like that in my experience, they want to build smart systems that solve hard problems and they will use any technique that works best. --MadScientistX11 (talk) 15:11, 30 September 2014 (UTC)
Sure -- it's about mainstream academic and industrial AI, as opposed to pop-science, science fiction and any of those thousands of "pet theories" and "alternative forms" of AI.
As I said before, feel free to rewrite the first couple of sentences any way that makes sense to you; it seems like you know what you're talking about. I'd like to keep the intelligent agent/rational agent definition and McCarthy's quote. The simple definition for the lay reader can go any way you think is best. ---- CharlesGillingham (talk) 16:58, 30 September 2014 (UTC)
The definition we quote in the intro is from Poole, Mackworth & Goebel 1998, p. 1. I like it because it's from a popular textbook, it's concise, to the point, does not equivocate, does not raise any unnecessary complications and finds a way to define AI that does not require also defining human intelligence, sidestepping all possible philosophical and technical objections. ---- CharlesGillingham (talk) 02:38, 30 September 2014 (UTC)

What Needs Discussing?[edit]

There seems to have been too much reverting in the past few days. Let's identify the issues. There is disagreement as to whether to include a paragraph on "deep learning". There is disagreement on whether to mention "strong AI" and "weak AI". I think that strong and weak AI should be mentioned in the article but not in the lede, though that is only my opinion. What other disagreements are there, besides the "human-like" question that is being decided by RFC? Robert McClenon (talk) 02:00, 30 September 2014 (UTC)

Here is a summary of the current editorial issues.
1) An ongoing dispute about the lede, which has lasted several weeks. FelixRosch's latest contribution to the lede is this phrase: "which generally studies the goal of emulating human-like intelligence, though other variations of AI such as strong-AI and weak-AI are also studied." This phrase has been added and removed several times. I have three objections to this phrase:
"Human-like" intelligence
(Covered by the RfC above) There is an ongoing dispute over whether the term "human-like intelligence" should be used to define AI.
Strong AI, weak AI
(Covered by the discussion started by MadScientist above) MadScientist and I both have objections to introducing these terms in the second sentence of the article.
The writing
And finally, in my opinion it is an awkward sentence, which reads poorly.
2) "Deep learning": (Covered by the discussion started by MadScientist above). The section added by FelixRosch about "Deep learning" is WP:UNDUE weight, in my opinion and MadScientist's. This section is copied and pasted from the article deep learning, and (I would argue) that is where it belongs. ---- CharlesGillingham (talk) 03:27, 30 September 2014 (UTC)
Deep Learning does seem like it now has undue weight. ... But without that section, it seems like AI techniques are almost entirely symbolic and strictly logical, which is also wrong. Is there a way to summarize deep learning, traditional neural networks, and other more black-boxy techniques? APL (talk) 13:51, 30 September 2014 (UTC)
I would argue we have a consensus that Deep Learning has undue weight. As for the other issues: I also agree that things like connectionist frameworks - Minsky, Papert, Arbib, Churchland (those are the authors I know off the top of my head; I don't know that part of the field well, though) - need more emphasis. HOWEVER, I would strongly urge we table that. Let's sort out Deep Learning and the lede first and then move on to other issues. --MadScientistX11 (talk) 15:26, 30 September 2014 (UTC)
@APL: I don't agree with your reading of the "Approaches" section. Cybernetics (1930s-1950s) and symbolic/logical/knowledge-based "GOFAI" (1960s-1980s) are presented as failed approaches that have been mostly superseded by newer approaches. Deep learning is one example of what the article calls statistical AI and sub-symbolic AI, as are all modern neural network methods.
As I said, I think that deep learning belongs in the section under Tools called Neural networks. It seems to me that deep learning (as described in Wikipedia) is one new neural network technique among the many that have been developed in the last decade. The neural network section mentions Jeff Hawkins' Hierarchical Temporal Memory approach to neural networks; it could also mention Hinton's deep learning if everyone thinks that's important. However, I have to say, I think it's possible to come up with at least a dozen more examples of interesting new approaches to neural networks from the last decade, and we don't have room to mention them all. ---- CharlesGillingham (talk) 03:56, 1 October 2014 (UTC)
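To make the taxonomy concrete for readers new to this thread: "deep" learning, in the sense used above, is at bottom an ordinary feedforward neural network with several hidden layers stacked in sequence. A minimal illustrative sketch in Python follows; the layer sizes and random weights are placeholders for illustration only, not a trained model or any particular researcher's system.

    import numpy as np

    def relu(x):
        # rectified-linear activation, a common choice in deep networks
        return np.maximum(0, x)

    def forward(x, layers):
        # "depth" is simply the number of (weights, bias) layers
        # composed one after another
        for W, b in layers:
            x = relu(W @ x + b)
        return x

    rng = np.random.default_rng(0)
    sizes = [8, 16, 16, 16, 4]  # input, three hidden layers, output
    layers = [(rng.normal(size=(m, n)), np.zeros(m))
              for n, m in zip(sizes[:-1], sizes[1:])]
    print(forward(rng.normal(size=8), layers))

Training such a stack (rather than its mere shape) is where the deep-learning literature lives, but the sketch shows why it is natural to file the topic under neural networks rather than treat it as a separate approach.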
@APL & @MadScientist -- do you have any objection to moving your posts in to the section #Deep learning above? ---- CharlesGillingham (talk) 00:54, 9 October 2014 (UTC)

RFC[edit]

In further looking at the RFC, it is still non-neutral and has everyone confused. I would like to strike the RFC, and wait about 24 hours, and then create a new RFC with nothing but a question as to the lede sentence, and any other questions that are well-defined. Arguments in favor of a position can then be included as discussion. Unless anyone strongly objects, I will strike the RFC. (If anyone does object, we have to have an RFC on whether to strike the RFC. -:) ). Robert McClenon (talk) 14:55, 30 September 2014 (UTC)

My 2 cents is don't even bother making it an RFC. You end up getting a bunch of people who have little or no actual editing experience pontificating and going off on tangents. Just stick to a regular discussion in the talk section and try to keep it as focused as possible on specific editing questions. I think an RFC is overkill and that it slows down a real consensus and moving forward with actual editing which should be the goal. --MadScientistX11 (talk) 15:03, 30 September 2014 (UTC)
@Robert: I realize this is a lot to ask, but do you think you could start the RFC and help us figure out how to end this? As I've said before, I don't really understand why this dispute is continuing and why the normal standards of evidence are being ignored. I just want it to stop. How do we muster the necessary support to end this all-fronts total edit war? ---- CharlesGillingham (talk) 16:53, 30 September 2014 (UTC)
I guess I should spell this out a little more directly -- I'm trying to assume good faith here. What, exactly, does it take to allow us to remove the term "human-like" from the lede? We have a huge body of evidence that this is the right thing to do, absolutely no coherent evidence that it is the wrong thing to do, and a consensus of several editors here (including yourself) who agree that the term does not belong in the lede. However, every time I remove it from the lede, it gets restored by FelixRosch, and thus I find myself in an edit war. I don't know what to do at this point.
I'm not sure exactly what's wrong with the RFC -- the question is clear, general and simple, and the corresponding editorial choices are obvious. Is the problem that there is only one side presented? It seems to me that should be reason to end the issue as settled -- if FelixRosch doesn't care to make an argument, then let's be done with it. ---- CharlesGillingham (talk) 03:35, 1 October 2014 (UTC)
One last thought: editors should be aware that FelixRosch has added the term "human-like" back into the article many times, in many different forms, with many different edits. The RFC has to settle the issue of "human-like" in general, so that he doesn't just change the sentence again. (And I apologize if this seems to be bad faith; it's not -- I'm just betting the percentages here: "the best predictor of future behavior is past behavior".) ---- CharlesGillingham (talk) 04:02, 1 October 2014 (UTC)
I've struck "human-like" from the lede again. We need an RFC if User:FelixRosch actually is ready to argue that "human-like" should be somewhere in the lede. If he is willing to agree that it doesn't need to be in the lede, then we can leave it out. If he really wants it in, then we need some sort of resolution process to keep it out. I have argued in favor of RFC rather than DRN. Is he willing to leave human-like out of the lede, or does he really think it belongs, in which case we need an RFC? Robert McClenon (talk) 15:01, 1 October 2014 (UTC)
I am willing to formulate the RFC. The RFC itself will be brief and neutral. Arguments for or against "human-like" can be in the !votes or the discussion. In response to the comment that we may not need an RFC, I have asked User:FelixRosch on his talk page whether he is willing to agree that consensus is against the inclusion of "human-like" in the first paragraph. If he agrees, we don't have an issue. If he wants it in the first paragraph, then we should use either RFC or DRN, and I prefer RFC, because it receives wider attention. Robert McClenon (talk) 16:37, 1 October 2014 (UTC)
The third and fourth paragraphs of the introduction to the article do include references to what is actually human-like intelligence. In particular, the third paragraph refers to artificial general intelligence, and the fourth paragraph refers to myth and fiction. My own opinion is that those references are satisfactory, and that the only real issue has to do with the first paragraph. If anyone objects to the third and fourth paragraphs, then we may need another part to the RFC. Robert McClenon (talk) 16:37, 1 October 2014 (UTC)
Your comment above seems to have missed the additions of the editor Qwertyus, which are worthy of some consideration. I am supporting Qwertyus even though the suggestion abridges my edit substantially, and am reverting to that version as offering a point of agreement between editors which was previously not available. In restoring the Qwertyus version, I shall also stipulate that if (If) it is acceptable to all involved editors, then I shall not pursue further changes to the first sentence of the Lede which has been debated. Second, if (If) the neutral Qwertyus edit is acceptable, then I shall accept the abridgment of my second sentence in the first paragraph of the Lede as well, with the dropping of the phrase dealing with weak AI and strong AI there. The rest of the material would need to remain in its Qwertyus form, and all editors can return to regular editing activities. My previous offer to both @CharlesG and @RobertM, as explicit supporters of weak-AI, still stands as an open invitation to them to further develop the sections and content in the main body of the article dealing with weak-AI. Your own supporter @MadScientist has even asked you: where is it, where is it? My edit here is to support Qwertyus as offering a useful edit. FelixRosch (talk) 14:29, 2 October 2014 (UTC)
It is not acceptable, of course, as I have argued above.
We do not need your permission "to return regular editing activities".
The term "weak AI" is never used in the way you are using it, so please don't call me a "supporter" of it. Do you mean "AI research into non-humanlike intelligence as well as human-like intelligence"? That would seem to follow from the position you hold. If so, then I must point out, for the third or fourth time, that most of the article is about what you call "weak AI". None of the topics is exclusively about human-like intelligence. Please read my earlier posts. ---- CharlesGillingham (talk) 09:01, 7 October 2014 (UTC)
Your comment appears to have missed the useful additions of editor @Qwerty. Your co-editor, @RobertM, has also declined all comment on this edit in preference to posting a poorly formed RfC replacement for the previous defective RfC. Unless he joins this discussion or replaces/withdraws the currently poorly formed RfC, it shall be difficult to respond. Your own version was posted as a full-page ad for "Weak-AI" on the previous RfC. This discussion must be made on the basis of the current version of the article. FelixRosch (talk) 14:36, 7 October 2014 (UTC)
I am aware of Qwerty's contribution and I agree that is useful (especially in that he removed the misuse of the terms "strong AI" and "weak AI"). However, it does not change the fact that major AI textbooks and major AI researchers deliberately avoid defining artificial intelligence in terms of human intelligence, and that removing the word "human-like" does no harm to the article. I have proven this with solid evidence from all the major sources. Qwerty's actions are irrelevant in that he did not disprove these facts, and neither have you. ---- CharlesGillingham (talk) 18:10, 7 October 2014 (UTC)
That is still not a justification for an overly generalized version of the Lede section, which is being supported by your co-editor User:RobertM and yourself in the poorly formed RfC below. Nor is your personal attack on @Qwerty justified, calling those edits "irrelevant". Please note that your co-editor RobertM is not joining you here to support you on this. FelixRosch (talk) 18:30, 7 October 2014 (UTC)
You are not reading my post very carefully. DO NOT accuse me of a personal attack -- I complimented Qwerty on his edit. His edit was fine, but the original, ongoing issue involves the term "human-like", and Qwerty's edit did not change this. There is no consensus for a version that says AI "generally studies the goal of emulating human-like intelligence." This is the issue at hand. I did not say that Qwerty's edit was irrelevant. It is your comments that are not helpful and that are avoiding the subject.
The most reliable mainstream source (Russell & Norvig) rejects the idea of emulating human-like intelligence as goal for AI. It doesn't matter what I think, or what you think, or what Robert thinks. This is not a vote, this is not an issue that we get to decide ourselves. It has already been decided by the mainstream AI sources. You have no basis for your argument, other than your own insistence.
And, as I have said before: this is not a position that I personally agree with. This is a position that the article must take, because it is the only one available from the most reliable source. We don't get to make up things here on Wikipedia and then just insist on them. ---- CharlesGillingham (talk) 03:28, 8 October 2014 (UTC)
Your personal attacks upon @Qwertyus must stop; calling him "irrelevant", to use your word, is not Wikipedia policy. You must also stop misrepresenting the case to admin @Redrose64 that your edit is unanimous, since your poorly formulated RfC is against both User:Qwertyus and myself, who support "emulation" as a fair summary of the article in its current state. @Redrose64 is an experienced editor who can explain your difficulties to you if you represent the matter as it is, and that your position is not unanimously supported in this poorly formed RfC. Please note that your co-editor RobertM is not joining you here to support you on this. FelixRosch (talk) 14:59, 8 October 2014 (UTC)
I did not call Qwertyus's edit irrelevant to the article or to the topic, and I certainly did not say that Qwertyus is irrelevant. I said it was irrelevant TO OUR DISPUTE about the term "human-like", which it obviously is, because he neither removed nor added the term "human-like". QED. This is the second time I have proved this, using plain English. I would prefer it if you would read my posts before responding. I'm finding it difficult to believe that you can't follow what I'm saying, and, if I assume good faith, I must also assume you are not reading them. ---- CharlesGillingham (talk) 21:30, 8 October 2014 (UTC)
And just to stay on point: the most reliable sources carefully and deliberately DO NOT define artificial intelligence as studying or emulating "human-like" intelligence, and this is an issue which many major AI researchers feel strongly about. Adding the term "human-like" to the lede is an insult to the hard work that these researchers have done to define their field. Wikipedia's editors do not have the right to define "artificial intelligence", so it does not matter what you think or what I think or what anyone thinks. What matters is the sources. ---- CharlesGillingham (talk) 21:37, 8 October 2014 (UTC)
Your personal attack against @Qwertyus was "Qwerty's actions are irrelevant", and your personal attacks must stop. Are you now denying that this is a direct quote of your personal attack on a fellow editor? Also, to stay on point, your misrepresentation of "unanimous" support to the admin must be withdrawn with a full apology to the editor @Redrose for this misrepresentation. Your position is not unanimous, you are using an old, outdated 2008 textbook for a high-tech field, and your poorly formed RfC with your co-editor @RobertM promoting your preference for "Weak-AI" should be withdrawn. FelixRosch (talk) 16:32, 9 October 2014 (UTC)
May I cordially suggest to CharlesGillingham that you leave this rant, and any repetitions that follow, unanswered? The rest of us can all see for ourselves where it is coming from, there is no need to defend yourself against it. — Cheers, Steelpillow (Talk) 17:01, 9 October 2014 (UTC)

RFC on Phrase "Human-like" in First Paragraph[edit]

RESOLVED:

There is clear consensus against using the phrase "human-like" in the first paragraph of the lede of this article. I also see a possibility for consensus that it may be appropriate to add the definition of what AI is to the lede, which could go in the second or third paragraph; this would need to be discussed more appropriately in a separate discussion. (non-admin closure) Technical 13 (talk) 17:11, 24 November 2014 (UTC)

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Should the phrase "human-like" be included in the first paragraph of the lede of this article as describing the purpose of the study of artificial intelligence? Robert McClenon (talk) 14:43, 2 October 2014 (UTC)

It is agreed that some artificial intelligence research, sometimes known as strong AI, does involve human-like intelligence, and some artificial intelligence research, sometimes known as weak AI, involves other types of intelligence, and these are mentioned in the body of the article. This survey has to do with what should be in the first paragraph. Robert McClenon (talk) 14:43, 2 October 2014 (UTC)

Survey on retention of "Human-like"[edit]

  • Oppose - The study of artificial intelligence has achieved considerable success with intelligent agents, but has not been successful with human-like intelligence. To limit the field to the pursuit of human-like intelligence would exclude its successes. Inclusion of the restrictive phrase would implicitly exclude much of the most successful research and would narrow the focus too much. Robert McClenon (talk) 14:46, 2 October 2014 (UTC)
  • Oppose - At least as it's currently being used. Only some fields of AI strive to be human-like. (Either through "strong" AI, or through emulating a specific human behavior.) The rest of it is only "human-like" in the sense that humans are intelligent creatures. The goal of many AI projects is to make some intelligent decision far better than any human possibly could, or sometimes simply to do things differently than humans would. To define AI as striving to be "human-like" is to encourage a 'Hollywood' understanding of the topic, and not a real understanding. (If "human-like" is mentioned farther down the paragraph with the qualifier that *some* forms of AI strive to be human-like, that's fine, but it should absolutely not be used to define the field as a whole.) APL (talk) 15:21, 2 October 2014 (UTC)
  • Comment The division of emphasis is pretty fundamental. I would prefer to see this division encapsulated in the lead, perhaps along the lines of, "...an academic field of study which generally studies the goal of creating intelligence, whether in emulating human-like intelligence or not." — Cheers, Steelpillow (Talk) 08:45, 3 October 2014 (UTC)
    This is not a bad idea. It has the advantage of being correct. ---- CharlesGillingham (talk) 18:15, 7 October 2014 (UTC)
I don't know much about this subject area, but this compromise formulation is appealing to me. I can't comment on whether it has the advantage of being correct, but it does have the advantage of mentioning an aspect that might be especially interesting to novice readers. WhatamIdoing (talk) 04:43, 8 October 2014 (UTC)
  • Support. The RFC question is inherently faulty: there cannot be a valid consensus concerning the exclusion of a word from one arbitrarily numbered paragraph. One can easily add another paragraph to the article, or use the same word in another paragraph in a manner that circumvents said consensus, or use the same word in conjunction with negation. For instance, Robert McClenon seems not to endorse saying "AI is all about creating artificial human-like behavior." But doesn't that mean RM is in favor of saying "AI is not all about creating human-like behavior"? Both sentences have "human-like" in them. The RFC question must instead introduce a specific wording and ask whether it is acceptable or not. Best regards, Codename Lisa (talk) 11:39, 3 October 2014 (UTC) Struck my comment because someone has refactored the question, effectively subverting my answer. This is not the question to which I said "Support". This RFC looks weaker and weaker every minute. Codename Lisa (talk) 17:03, 9 October 2014 (UTC)
His intent is clear from the mountain of discussion of the issue above. The question is should AI be defined as simulating human intelligence, or intelligence in general. ---- CharlesGillingham (talk) 13:54, 4 October 2014 (UTC)
Yes, that's where the danger lies: To form a precedent which is not the intention of a mountain of discussions that came beforehand. Oh, and let me be frank: Even if no one disregarded that, I wouldn't help form a consensus on what is inherently a loophole that will come to hunt me down ... in good faith! ("In good faith" is the part that hurts most.) Best regards, Codename Lisa (talk) 19:31, 4 October 2014 (UTC)
I don't understand this !vote. It appears to be a !vote against the RFC rather than against the exclusion of the term from the lead, in which case it belongs in the discussion section not in the survey section. Jojalozzo 22:27, 4 October 2014 (UTC)
Close, but no cigar. It is against the exclusion, but because of (not against) the RFC fault. Best regards, Codename Lisa (talk) 07:11, 5 October 2014 (UTC)
Is this vote just a personal opinion? Or do you have reliable sources? pgr94 (talk) 21:30, 8 October 2014 (UTC)
  • Oppose Please see the detailed argument in the previous RfC. This is not how the most widely used AI textbooks define the field, and it is not how many leading AI researchers describe their work. ---- CharlesGillingham (talk) 13:52, 4 October 2014 (UTC)
  • Oppose That is not the place for such an affirmation. For that we should have an article on Human-like Artificial intelligence. Incidentally, I also support the objections to the form of this RFC. JonRichfield (talk) 05:16, 5 October 2014 (UTC)
  • Oppose WP policy is clear (WP:V, WP:NOR and WP:NPOV) and this core policy just needs to be applied in this case. The literature does not say human-like. Those wishing to add "human-like" need to follow policy. My understanding is that personal opinions and walls of text are irrelevant. Please note that proponents of the change have yet to provide a single source. pgr94 (talk) 21:20, 8 October 2014 (UTC)
  • Oppose Human-like is one of the many possible goals/directions. This article deals with AI in general. OCR or voice recognition research has little to do with human-like intelligence*, yet (at the moment) they are far more useful fields of AI research than, say, a chat bot able to pass the Turing test. (*vision or hearing are not required for human-like intelligence) WarKosign 11:29, 12 October 2014 (UTC)
  • Support - I am not well versed in the literature on this topic, but I don't think one needs to be for this purpose. We're talking about the first paragraph in the lead, and for that purpose a quick survey of the hits from "define artificial intelligence" should suffice. Finer distinctions based on academic literature can be made later in the lead and in the body. ‑‑Mandruss (talk) 11:48, 14 October 2014 (UTC)
  • Oppose This article is focused on the computer science use of the term (we already have a separate article on its use in fiction). And computer scientists talk about Deep Blue and expert systems as "Artificial Intelligence". So it has become a technical term that is used in a broad way, applying to any programming and computing that helps to deal with the many issues involved in computers interacting with real-world situations and problems. However, artificial intelligence in fiction has generally been taken to mean human-like intelligence. So perhaps it might help to clarify by starting the second sentence with "In computer science it is an academic field of study ..." or some such. Then it is uncontroversial that in computer science the term is used, as a technical term, exactly as presented in the first paragraph. And it introduces the article and gives the reader a clear idea of what this article is about. The fourth paragraph in the intro does mention that "The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—"can be so precisely described that a machine can be made to simulate it."" and it is also mentioned in the history. Just a suggestion to think over. Robert Walker (talk) 12:27, 14 October 2014 (UTC)
  • Both at the same time. Why do we have to choose between human-like and not? As the RFC statement already says, it is agreed that some AI seeks human-like intelligence, and other AI has weaker goals. We should say so. Teach the controversy. Or, as WP:NPOV states, "Avoid stating seriously contested assertions as facts." That is, we should neither say that all AI aims for human-like intelligence, nor should we imply the opposite by not saying that. We should say that some do and some don't. —David Eppstein (talk) 01:45, 16 October 2014 (UTC)
Technically, an "oppose" is a vote for both. We are discussing whether it should be "human-like intelligence" or just "intelligence" (which is both). We can't write "The field of AI research studies human-like and non-human-like intelligence" ---- CharlesGillingham (talk) 13:20, 16 October 2014 (UTC)
I disagree. Putting just "intelligence" is not both, it is only giving one side of the story (the side that says that it doesn't matter whether the intelligence is human-like or not). —David Eppstein (talk) 23:27, 16 October 2014 (UTC)
  • Oppose The phrase "which generally studies the goal of emulating human-like intelligence", which is currently in the lead, has various problems: "generally" is a weasel word; AI covers both the emulation (weak AI) and presence (strong AI) of intelligence and is by no means restricted to "human-like" intelligence. The first para of the lead can be based on McCarthy's original phrase, already quoted, which refers to intelligence without qualification. --Mirokado (talk) 02:03, 16 October 2014 (UTC)
  • Both There are sub-communities in the AI field (e.g. chatterbots) who specifically look at human-like intelligence, there are sub-communities (e.g. machine learning) who don't. —Ruud 12:52, 16 October 2014 (UTC)
See comment above. ---- CharlesGillingham (talk) 13:20, 16 October 2014 (UTC)
Just "intelligence" would be underspecified. The reader may interpret this as human, non-human or both. Only the latter is correct. I'd like to this see this addressed explicitly in the lede. —Ruud 18:29, 16 October 2014 (UTC)
  • Comment I want to remind everyone that the issue is sources. Major AI textbooks and the leaders of AI research carefully define their field in terms of intelligence and specifically argue that it is a mistake to define AI in terms of "human intelligence" or "human-like" intelligence. Even those in artificial general intelligence do not try to define the entire field this way. Please see detailed argument at the beginning of the first RfC, above. Just because this is an RfC, it does not mean we can choose any definition we like. We must respect choices made by the leaders of the field and the most popular textbooks. ---- CharlesGillingham (talk) 03:13, 17 October 2014 (UTC)
  • I join the logical chorus in opposition to reference to AI aiming for anything "human-like" -- why not just as well mention "bird-like" or "dolphin-like"? Humans have a certain kind and degree of intelligence (on average and within bounds), but have many limitations in things such as calculating capacity, and many foibles such as emotions overriding reason, and the capacity to act as though things are true when we ought reasonably to know them to be false. It is not the aim of researchers to make machines as broken as men in these regards. DeistCosmos (talk) 16:59, 18 October 2014 (UTC)
    (This comment was originally posted elsewhere on this page but seems intended for the RFC. OP notified). --Mirokado (talk) 23:49, 26 October 2014 (UTC)

Threaded discussion of RFC format[edit]

Threaded discussion of RFC topic[edit]

I am getting more unhappy with that phrase "human-like". What does it signify? The lead says, "This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence," which to me presupposes human-like consciousness. OTOH here it is defined as: "The ability for machines to understand what they learn in one domain in such a way that they can apply that learning in any other domain." This makes no assumption of consciousness; it merely defines human-like behaviour. One of the citations in the article says, "Strong AI is defined ... by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." Besides begging the question as to what "simulating thinking" might be, this appears to raise the question as to whether strong vs weak is really the same distinction as human-like vs nonhuman. Like everybody else, AI researchers between them have all kinds of ideas about the nature of consciousness. I'll bet that many think that "simulating thinking" is an oxymoron, while as many others see it as a crucial issue. In other words, there is a profound difference between the scientific study and creation of AI behaviour vs. the philosophical issue as to its inner experience - a distinction long acknowledged in the study of the human mind. Which of these aspects does the phrase "human-like" refer to? One's view of oneself in this matter will strongly inform one's view of AI in like manner. I would suggest that it can refer to either according to one's personal beliefs, and rational debate can only allow the various camps to beg to differ. The phrase is therefore best either avoided in the lead or at least set in an agreed context. Sorry to have rambled on so. — Cheers, Steelpillow (Talk) 18:21, 6 October 2014 (UTC)

This is a good question, which hasn't been answered directly before. In my view, "human-like" can mean several different things:
  1. AI should use the same algorithms that people do. For example, means-ends analysis is an algorithm that was based on psychological experiments by Newell and Simon, where they studied how people solved puzzles (a toy sketch of this algorithm appears after this comment). AI founder John McCarthy argued that this was a very limiting approach.
  2. AI should study uniquely human behaviors; i.e. try to pass the Turing Test. See Turing Test#Weaknesses of the test to see the arguments against this idea. Please read the section on AI research -- most AI researchers don't agree that the Turing Test is a good measure of AI's progress.
  3. AI should be based on neurology; i.e., we should simulate the brain. Several people in artificial general intelligence think this is the best way forward, but the vast majority of successful AI applications have absolutely no relationship to neurology.
  4. AI should focus on artificial general intelligence (by the way, this is what Ray Kurzweil and other popular sources call "strong AI"). It's not enough to write a program that solves only one particular problem intelligently; it has to be prepared to solve any problem, just as human brains are prepared to solve any problem. The vast majority of AI research is about solving particular problems. I think everyone would agree that general intelligence is a long-term goal, but it is also true that many would not agree that "general intelligence" is necessarily "human-like".
  5. AI should attempt to give a machine subjective conscious experience (consciousness or sentience). (This is what John Searle and most academic sources call "strong AI"). Even if it was clear how this could be done, it is an open question as to whether consciousness is necessary or sufficient for intelligent problem-solving.
The question at issue is this: do any of these senses of "human like" represent the majority of mainstream AI research? Or do each of these represent the goals or methodology of a small minority of researchers or commentators? ---- CharlesGillingham (talk) 08:48, 7 October 2014 (UTC)
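(An aside for readers unfamiliar with the algorithm named in sense 1 above: below is a toy, much-simplified sketch of means-ends analysis in Python - repeatedly apply an operator that reduces the difference between the current state and the goal. Real means-ends analysis, as in Newell and Simon's General Problem Solver, also subgoals recursively on unmet preconditions; the states and operators here are invented for illustration.)

    def means_ends(state, goal, operators):
        # states are frozensets of facts; operators map a name to
        # (preconditions, facts added)
        plan = []
        while state != goal:
            diffs = goal - state  # facts in the goal but not yet achieved
            for name, (pre, adds) in operators.items():
                # pick an applicable operator that reduces the difference
                if diffs & adds and pre <= state:
                    state = state | adds
                    plan.append(name)
                    break
            else:
                return None  # no applicable operator: give up
        return plan

    ops = {
        "fetch-ladder": (frozenset(), frozenset({"have-ladder"})),
        "climb":        (frozenset({"have-ladder"}), frozenset({"at-roof"})),
    }
    print(means_ends(frozenset(), frozenset({"have-ladder", "at-roof"}), ops))
    # -> ['fetch-ladder', 'climb']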
@Felix: What do you mean by "human-like"? Is it any of the senses above? Is there are another way to construe it I have overlooked? I'm am still unclear as to what you mean by "human-like" and why you insist on including it in the lede. ---- CharlesGillingham (talk) 09:23, 7 October 2014 (UTC)
One other meaning occurs to me now I have slept on it. The phrase "human-like" could be used as shorthand for "'human-like', whatever that means", i.e. it could be denoting a deliberately fuzzy notion that AI must clarify if it is to succeed. Mary Shelley galvanized Frankenstein's monster with electricity - animal magnetism - to achieve this end in what was essentially a philosophical essay on what it means to be human. Biologists soon learned that twitching the leg of a dead frog was not what they meant by life. People once wondered whether a sufficiently complex automaton could have "human-like" intelligence. Alan Turing suggested a test to apply but nowadays we don't think that is quite what we mean. In the days of my youth, playing chess was held up as an example of more human-like thinking - until the trick was pulled and then everybody said, "oh no, now we know how it's done that's not what I meant". Something like pulling inferences from fuzzy data took its place, only to be tossed in the "not what I meant" bucket by Google and its ilk. You get the idea. We won't know what "human-like" means until we have stopped saying "that's not what I meant" and started saying, "Yes, that's what I mean, you've done it." In this light we can understand that some AI researchers are desperate to make that clarification, while others believe it to be a secondary issue at best and prefer to focus on "intelligence" in its own right. — Cheers, Steelpillow (Talk) 09:28, 8 October 2014 (UTC)

I'm unhappy with it for another reason. "Artificial Intelligence" in computer science is now, I think, a technical term that is applied to a wide range of things. When someone writes a program to enable self-driving cars, they call it artificial intelligence. See Self Driving Car: An Artificial Intelligence Approach: "Artificial Intelligence also known as (AI) is the capability of a machine to function as if the machine has the capability to think like a human. In automotive industry, AI plays an important role in developing vehicle technology." For a machine to function as if it had the capability to think like a human - that's very different from actually emulating human-like intelligence. Deep Blue was able to do that also - to chess onlookers it acted as if it had the capability to think like a human, at least in the limited realm of a chess game. In the case of the self-driving car, or Deep Blue, you are not at all aiming to pass the Turing test or make a machine that is intelligent in the way a human is. Indeed, the goals of making a chess-playing computer or a self-driving car are compatible with a belief that human intelligence can't be programmed.

I actually think that way myself, persuaded by Roger Penrose's arguments - I think myself that no programmed computer will ever be able to understand mathematics in the way a mathematician does. It can never truly understand what is meant by "this statement is true" - just feign an understanding of truth, continually corrected by its programmers when it makes mistakes. His argument also extends to quantum computers and hardware neural nets. He doesn't think that hardware neural nets capture what the brain does, but that there is a lot going on within the cells, which we don't know about, that is also relevant, as well as other forms of communication between cells.

But still, I accept that in tasks of limited scope such as chess playing or driving cars, they can come to outperform humans. This has nothing to do with weak AI or strong AI, as I think both are impossible myself - except perhaps with biological machines (slime moulds) or computers that can in some way do something essentially non-computable (recognize mathematical truth); if so, they have to go beyond ordinary programming, and beyond ordinary quantum computers also, to some new thing.

So - I know that's controversial, and I am not trying to persuade you of my views - but philosophically it's a view that some people take, including Roger Penrose, and it is a consistent view to have. Saying that the aim of AI is to create human-like intelligence is making a philosophical statement that the things programmers are trying to achieve with self-driving cars and with chess-playing computers are on a continuum with human intelligence, and that we just need more of the same. But not everyone sees it that way. I think AI is very valuable, but not in that way - not in the direction it is following at present, anyway.

Plus also - the engineers of Google's self-driving cars are surely not involved in a "goal of emulating human-like intelligence", except in a very limited way.

Rather, their main aim is to create machines that are able to take over from humans in a flexible human environment without causing problems - and to do so by emulating human intelligence to whatever extent is necessary and useful for that work.

Also, in another sense, emulating human intelligence is too limited a goal. In the case of Deep Blue, the aim was to be better at chess than any human - not just to emulate humans. Ideally the Google self-driving cars will also be better at driving than humans. The aim is to create machines that, in their own limited frame of reference, are better than humans - using designs inspired by the capabilities of humans and drawn from things that humans can do, but not at all emulating humans complete with all their limitations, faults, mistakes and accidents. I think myself that very few AI researchers have that as a goal. So I am not sure how the lede should be written, but I am not at all happy with "goal of emulating human intelligence" - that is wrong in so many ways, except for some science fiction stories. I suggest also that we say "In computer science" to start the second sentence, whatever it is, to distinguish it from "In fiction", where the goal often is to emulate human intelligence to a high degree, as with Asimov's positronic robots. Robert Walker (talk) 00:28, 16 October 2014 (UTC)

To limit the primary focus of AI to human-like intelligence, as User:FelixRosch originally sought to do with the lede, would be to ignore the successes of the field and focus on the aspect of the field that is always ten years in the future. Robert McClenon (talk) 21:50, 22 October 2014 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Human-Like in Lede, and RFC again[edit]

I have removed the phrase "human-like" from the lede again. To state that the primary purpose of artificial intelligence is the implementation of human-like intelligence (regardless of what is meant by that) is misleading and impoverishes a field that has made significant contributions without achieving the mythic objective of human-like intelligence. Consensus is currently running in favor of keeping that phrase out of the first sentence. Robert McClenon (talk) 01:41, 11 October 2014 (UTC)

I've removed it again. If the RFC suddenly goes the other way, of course we can put it back in. But it doesn't seem unreasonable to leave it out for now, since the RFC has been running for over a week and not a single editor has spoken in favor of the phrase "human-like". APL (talk) 03:09, 12 October 2014 (UTC)
At least one editor favors the use of the phrase "human-like" in the lede sentence. It has been inserted by one registered editor and one IP. The registered editor disputed the RFC rather than participating in it. There should be references to "human-like" intelligence in various parts of the article, but the topic should not be restricted by including that phrase in the first sentence. Robert McClenon (talk) 14:14, 12 October 2014 (UTC)
Well, OK, but now is the time for that editor to speak up. Trying to instigate change after the RFC has run its course will be nearly impossible.
If he's got a reasonable argument, he should be making it, so that the impartial, uninvolved editors coming here for the RFC can see both sides of the issue. APL (talk) 22:28, 13 October 2014 (UTC)
Since you have asked: The history of the edit dispute with CharlesG started 3 months ago and can be summarized briefly in 4-5 comments.
(1) Three months ago I read this article and saw that, in its current form, the article was oriented to the human engineering and reverse human engineering perspectives of AI in all of the first 8 (eight) sections. Section 2.1 was about the emulation of human deduction, section 2.2 was about the emulation of human knowledge representation, 2.3 was about the emulation of human planning, 2.4 was about the emulation of human learning, 2.5 was about the emulation of human natural language processing (there is no other type of natural language processing), 2.6 was about the emulation of human perception, 2.7 was about the emulation of human-equivalent motion and manipulation, and the same for section 2.8. Given the article in its current form, I then added the term human-like to the Lede to accurately represent the article as it exists in its current form. CharlesG disagreed, stating his belief that, from a general perspective not limited to the article itself, the over-arching and general version of intelligence was his personal preference, defending his own Weak-AI perspective.
(2) My response to CharlesG was that even if what he said was true, WP:Lede requires that the Lede summarize the article as it exists in its current form, and not from the general perspective of how AI could be defined in a future version of the article, which may at some time in the future be re-written to highlight his preference for the Weak-AI perspective. I also made multiple invitations to CharlesG to add his information on Weak-AI into the article to increase its prominence, which he refused to do. The key issue is that, in its current form, the body of the article is oriented to human engineering and reverse human engineering as its main perspective in all of its eight opening sections, which, by WP:Lede, is what should be summarized in the Lede based on this article in its current form at this point in time. CharlesG declined my multiple invites to expand the Weak-AI material in the article and decided to file a Dispute Resolution Notice as his preferred path to resolution. I fully acknowledged and answered the Dispute Resolution Process issues raised there.
(3) After the filing of the dispute resolution notice, RobertM then falsely presented himself as a neutral and non-biased mediator of the Dispute Resolution Process and recommended strongly that CharlesG withdraw the Dispute Resolution Notice and file an RfC instead, and CharlesG accepted the advice. Although it looked odd, RobertM was presenting himself as a non-biased mediator making suggestions, and CharlesG filed an RfC. The resulting RfC was criticized by RobertM; the first (previous) RfC became a full-page advertisement for the Weak-AI position and was withdrawn by CharlesG and RobertM together.
(4) When I challenged RobertM 2-3 times on his Talk page about what he was doing, he then affirmed there, against my reading, that he was not neutral (not NPOV) and that he was biased to the Weak-AI perspective. He appeared to take the approach that by controlling the RfC process he could steer the results to the Weak-AI perspective, regardless of the content of the article in its current form, and force his form of the Weak-AI-friendly version of the Lede. I then extended the same invitation to RobertM that they (with CharlesG) expand the Weak-AI material in the main body of the article first, because WP:Lede requires that only material in the main body of the article can be used for the Lede summary, but he refused the invitation. He then posted a second version of the RfC stating his own preferred version of the Lede as the only feasible option, without disclosing his own bias against NPOV, and without presenting any of this history to the newly emerging editors joining the discussion for the first time. There are now 4 (four) versions of the edit for the Lede available (Qwertyus's, CharlesG's, mine, and one by SteelPillow), not the one option which RobertM has offered in his biased RfC as his one and only "solution".
(5) The unbiased version of an RfC would simply list the 4 (four) options just mentioned above without prejudice and ask editors to indicate support or oppose for their preference, again without prejudicing new editors toward one and only one "solution". The RobertM version of the RfC is biased for this reason and should be deleted as being non-neutral and against NPOV. FelixRosch (talk) 15:55, 14 October 2014 (UTC)
The issue is sources. Major AI textbooks and AI's leaders carefully and specifically define the field as studying all kinds of intelligence, not just human intelligence. FelixRosch's reading of the article is mistaken. Please see detailed arguments above.---- CharlesGillingham (talk) 13:28, 16 October 2014 (UTC)
There is no requirement that the author of an RFC be neutral, only that the wording of the RFC be neutral. Does FelixRosch have a proposed alternate wording for the RFC? Defacing an RFC (since reverted) by changing the RFC's own lead question to a protest about the RFC is not an appropriate use of the RFC process. Robert McClenon (talk) 18:47, 14 October 2014 (UTC)
The issue remains that your version does not appear neutral. If there are 4 options, then you are not supposed to single out only one of them (the one you prefer) to the exclusion of the other choices. You appear to want to write the RfC while not admitting your own bias. An unbiased version is outlined in item (5) directly above your comment here. Your biased RfC should be withdrawn or deleted. FelixRosch (talk) 21:09, 14 October 2014 (UTC)
WP:RFC explains what to do in this situation: "If you feel an RfC is improperly worded, ask the originator to improve the wording, or add an alternative unbiased statement immediately below the RfC question template. Do not close the RfC just because you think the wording is biased." Hope this helps. 83.104.46.71 (talk) 09:42, 15 October 2014 (UTC) (steelpillow (talk · contribs) dropping by during my wikibreak).

Summary of Penrose's views inaccurate

This passage summarizes the views of Penrose's critics, not his own views:

John Lucas (in 1961) and Roger Penrose (in 1989) both argued that Gödel's theorem entails that artificial intelligence can never surpass human intelligence,[186] because it shows there are propositions which a human being can prove but which can not be proved by a formal system. A system with a certain amount of arithmetic, cannot prove all true statements, as is possible in formal logic. Formal deductive logic is complete, but when a certain level of number theory is added, the total system becomes incomplete. This is true for a human thinker using these systems, or a computer program.

Penrose's argument is that artificial intelligence, if it is based on programming, neural nets, quantum computers and suchlike, can never emulate human understanding of truth. And Gödel's theorem indeed shows that any first-order formal deductive logic strong enough to include arithmetic is incomplete (and a second-order theory can't be used on its own as a deductive system). But Penrose's point is that the limitations of formal logic do not apply to a human, because we are not limited to reasoning within formal systems. Humans can indeed use Gödel's very argument to come up with a statement that a human can see to be true but which can never be proved within the formal system: viz. the statement that the Gödel sentence cannot be proved within the formal system, and that there is no Gödel encoding of its proof within the system.
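
Just to spell out the construction being appealed to here, for anyone following along - this is my own sketch in standard notation, not a quotation from Penrose. For a consistent, effectively axiomatized system $F$ containing enough arithmetic, the diagonal lemma gives a sentence $G_F$ with

    $F \vdash \bigl(G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)\bigr)$

If $F$ is consistent then $F \nvdash G_F$, so $G_F$ is true - and that is the truth the human reasoner is said to "see". The standard reply is that this insight presupposes $\mathrm{Con}(F)$: the second incompleteness theorem gives $F \nvdash \mathrm{Con}(F)$, while $F \vdash \mathrm{Con}(F) \rightarrow G_F$, so on this reading the human has stepped outside the system rather than out-computed it.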

And he doesn't argue that it is impossible for artificial intelligence to surpass human intelligence. It clearly can in restricted areas like playing chess games.

Nor does he argue that it is impossible for any form of artificial intelligence ever to emulate human understanding of mathematical truth. He just says that it is impossible for it to do that if it is based on methods equivalent to Turing computability, which includes hardware neural nets, quantum computers and probabilistic methods, as all of those are shown to introduce nothing new; they are just faster forms of Turing-type computability.

He says that before AI can achieve human understanding of mathematical truth, new physics is needed. We first need to understand how humans are able to do something non-computable when they understand maths. He thinks that the missing step has to do with the way the collapse of the wave function happens, based on his ideas of quantum gravity. So - you could still get artificial intelligence able to understand maths, based either on using biology directly - or else - on some fundamental new understanding of physics. But not through computer programs, neural nets or quantum computers.

I think this is important because, whatever you think of Penrose's views, they introduce an interesting new position according to which both weak AI and strong AI can never be achieved by conventional methods. So - it would add to the article to acknowledge that in the philosophical section, as a third position, and highlight it more - rather than just putting forward the views of Penrose's critics. At any rate the section as it stands is inaccurate, as it says that it is summarizing the views of Penrose and Lucas - but then goes on to summarize the views of their critics instead.

His ideas are summarized reasonably accurately in the main article Philosophy_of_artificial_intelligence#Lucas.2C_Penrose_and_G.C3.B6del

"In 1931, Kurt Gödel proved that it is always possible to create statements that a formal system (such as a high-level symbol manipulation program) could not prove. A human being, however, can (with some thought) see the truth of these "Gödel statements". This proved to philosopher John Lucas that human reason would always be superior to machines.[26] He wrote "Gödel's theorem seems to me to prove that mechanism is false, that is, that minds cannot be explained as machines."[27] Roger Penrose expanded on this argument in his 1989 book The Emperor's New Mind, where he speculated that quantum mechanical processes inside individual neurons gave humans this special advantage over machines."

Except that, more precisely, he speculated that non-computable processes occurring during the collapse of quantum mechanical states, both within neurons and also spanning more than one neuron, gave humans this special advantage over machines. (I'll correct that page.)

Robert Walker (talk) 11:17, 16 October 2014 (UTC)

Agreed. Please be WP:BOLD and fix it. (It needs to be short, of course.)---- CharlesGillingham (talk) 16:00, 16 October 2014 (UTC)
Okay thanks, will do. I've just fixed the issues in the Philosophy_of_artificial_intelligence#Lucas.2C_Penrose_and_G.C3.B6del section as best I could. Will give some thought to how to summarize that in a shorter form for this article, and give it a go after I've had time to reflect on how best to present it here. Robert Walker (talk) 13:47, 19 October 2014 (UTC)
Done my best. It is a little long, but not much longer than the para it replaced; at least I've got it down to a single para. Hope it is okay. Will take another look in a day or two to see if I can do a rewrite to reduce it further. If you wanted to trim it right down, you could stop just before "This is quite a general result, if accepted" - by that stage it has presented the essential argument in a nutshell but not mentioned its implications or the controversy that arose from it. Artificial_intelligence#Philosophy Robert Walker (talk) 22:08, 21 October 2014 (UTC)
And you feel happy that all the other issues are covered in philosophy of AI? ---- 172.250.79.167 (talk) 00:51, 22 October 2014 (UTC)
Sorry about the delay in replying. I'm okay with the section on Lucas and Penrose there - there are many more counter-arguments and responses to them, but the basics are, I think, covered okay, as best I understand it.
As for the rest of the article, when it says "Few disagree that a brain simulation is possible in theory" - of course, Penrose thinks that his argument proves that brain simulation of the type described is not possible even in theory - at least not using the currently known laws of physics. He thinks that some other laws are needed. And if the resulting laws have some element that is in some essential way non-computable, then they can't be simulated on an ordinary Turing-machine-equivalent computer - perhaps they can only be simulated using the same physics or similar.
Just saying, not sure it needs to be edited, he could count as one of the "few people" who disagree here. And the next section makes his views clear enough hopefully. It's the only other thing that springs to mind. Robert Walker (talk) 03:10, 1 November 2014 (UTC)

Can We Try Again on Human-Like Issue

I am willing to take another try at an RFC on the use of the phrase "human-like" in the lede. If a better phrasing of the RFC is agreed on, then the bot header can be deleted and the discussion of the RFC can be boxed when the new RFC is published. It is probably appropriate to delete and restate this RFC anyway, because it has been refactored and defaced, making the answers inconsistent. Does anyone have a better wording of the RFC?

I, for one, do not object to a phrasing that includes "human-like" in a context such as "whether human-like or otherwise". I only object to a phrasing that implies that human-like intelligence is the primary objective of AI. It is one of the objectives of AI, and not one that has been achieved yet (in spite of the dreams of a technological singularity that have been on the moving horizon for decades).

If we can't get agreement on a revised RFC, it may be that moderated dispute resolution is the best approach after all. Robert McClenon (talk) 16:14, 16 October 2014 (UTC)

Offer to stipulate. You appear to be saying that, of the 4 options from various editors which I listed above as being on the discussion table, you have a preference for version (d) by Steelpillow, and that you are willing to remove the disputed RfC on the condition that the Steelpillow version be posted as a neutral version of the edit. Since I accept that the editors on this page are in general of good faith, I can stipulate that if (If) you will drop this RfC by removing the template, etc., then I shall post the version of Steelpillow from 3 October on Talk in preference to the Qwerty version of 1 October. The 4 paragraph version of the Lede of Qwerty will then be posted, updated with the 3 October phrase of Steelpillow ("...whether human-like or not"), with no further amendments. It is not your version and it is not my version, and all can call it the Steelpillow version, the consensus version. If agreed, then all you need do is close/drop the RfC, and then I'll post the Steelpillow version as the consensus version. FelixRosch (talk) 17:46, 16 October 2014 (UTC)

Done - Your turn. Robert McClenon (talk) 18:30, 16 October 2014 (UTC)

Done - Installing new 4 paragraph version of Lede following terms of close-out by originating Editor RobertM and consensus of 5 editors. It is the "Steelpillow" version of the new Lede following RfC close-out on Talk. FelixRosch (talk) 19:58, 16 October 2014 (UTC)

Comment: An RFC is not just a contest between two people, and a consensus is not just the agreement between those two. FelixRosch has no particular authority to dictate terms, especially as the RFC was clearly leaning away from his position. APL (talk) 19:28, 16 October 2014 (UTC)

The RFC has been widely publicised now, which is why I contributed to it. Let it run its course and accept the result, whatever that might be. Stop editing the part of the article affected by it. --Mirokado (talk) 21:44, 16 October 2014 (UTC)

I now notice that the RFC was closed out of process. I have reverted that. The version of the lead updated in relation to that is full of repetition and bad grammar: "an academic field of study which generally studies the goal of studying ..., whether by in ...". But in any case it is better to keep the text stable during an RFC so I have restored the version which was subject to the RFC (at least when the bot sent its random notifications). --Mirokado (talk) 22:49, 16 October 2014 (UTC) (updated Mirokado (talk) 23:10, 16 October 2014 (UTC))

Would anyone object to reverting to any of the versions of the lede from 2007 to 2014? It contained the same content for that entire time, only changing by a word or two here and there. ---- CharlesGillingham (talk) 00:31, 17 October 2014 (UTC)
Well yes, I'm afraid I would. An RFC is a formal process where the community has a month to consider content: editors give their opinions and an uninvolved closer determines consensus. While we would remove something serious like a BLP violation, we should leave that part of the article alone so that everyone is basing their comments on the same text. The current text has the advantage, in the context of the RFC, that editors can see the phrase being discussed without having to guess which previous version to open. --Mirokado (talk) 00:40, 17 October 2014 (UTC)
Does it matter that this dispute began when I reverted FelixRosch's addition of "human-like" to the lede? Before his edit, it had been very stable for seven years or so. Since then he has reverted or subverted every attempt to remove his edit; this is the source of the dispute. Shouldn't it stay in the "last stable version"? It hasn't been stable since FelixRosch added his edit. ---- CharlesGillingham (talk) 03:23, 17 October 2014 (UTC)
On the other hand, I'm happy to wait until the RfC is over. Just wanted you to know what was happening. ---- CharlesGillingham (talk) 03:29, 17 October 2014 (UTC)

The sentence "[AI] is an academic field of study which generally studies the goal of emulating human-like intelligence." is unsourced. Adding a cn tag. pgr94 (talk) 01:19, 17 October 2014 (UTC)

Pgr94 is correct; it needs a source more reliable than Artificial Intelligence: A Modern Approach, which you're not going to find. ---- CharlesGillingham (talk) 03:17, 17 October 2014 (UTC)
Undid revision 629913001. You are reverting against a consensus of 5 editors. Restore close-out by the author of the RfC. Please follow Wikipedia policy for establishing consensus first. FelixRosch (talk) 15:49, 17 October 2014 (UTC)
There is no consensus for closing the RFC. It was not just a contest between Felix and Robert McClenon. The two of them together should not close it, even if they, personally have resolved their differences. APL (talk) 21:24, 17 October 2014 (UTC)
Undid revision 630029056. You appear to be reverting against 5 editors in consensus, including the originating author RobertM. This is the Steelpillow version of the edit. You have not contacted any of them to try to reach consensus prior to editing, and you are not following Wikipedia procedure and policy. FelixRosch (talk) 21:49, 17 October 2014 (UTC)
It appears that I misunderstood what User:FelixRosch had offered. I thought that he had agreed to stipulate an alternate wording of the RFC. He apparently wanted to stipulate an alternate wording of the lede, bypassing the RFC process. Should a new RFC be used, or should the original (before being edited and defaced) wording of the RFC be restored, or should moderated dispute resolution be used? Robert McClenon (talk) 22:56, 17 October 2014 (UTC)

Please have a look at Wikipedia:Requests_for_comment#Ending_RfCs:

  • It is clear that an RFC can be withdrawn by the poster "if the community's response became obvious very quickly". If you are going to assert that, then the contested wording must be removed from the article. I think a formal close may be more subtle, and I don't think the result is clear enough to close it for that reason.
  • The RFC may be closed "if the participants can agree to end it". They obviously have not agreed to end it (@FelixRosch: should provide diffs if I have missed a relevant, prominent conversation involving five or more of the participants in the last day or so).
  • @Robert McClenon: You are welcome to request help with this (for example at WP:DRN) and I will be happy to cooperate with whatever results from such a request, but until then I think we should let the RFC carry on. The opinions expressed by the thirteen or so participants so far cannot be ignored which is effectively what would happen if two editors decide between themselves what to do.

I will yet again reopen the RFC, because otherwise the bot will remove the entry prematurely and that will cause even more trouble. --Mirokado (talk) 23:38, 17 October 2014 (UTC)

Moderated dispute resolution isn't in order if the RFC is still running. Is the current wording of the RFC the wording that I originated, or has it been edited again (let alone defaced again)? Robert McClenon (talk) 14:46, 18 October 2014 (UTC)
@Mirokado: The edit which has the 5 (five) editor consensus is the "Steelpillow" version of the edit, and not the one you yourself quoted in your Talk comment above. It is also referred to as the "include both" edit, with RobertM using the phrase "whether human-like or not". Please read this in the above, and please accept that this is a 5 editor consensus edit which closed the RfC. Please follow Wikipedia policies and procedures and establish consensus here on Talk prior to further edits. RobertM is likely to support you in further discussion within this section below. @Robert McClenon: You presently hold the 5 (five) person consensus, as you acknowledged it. You do not appear to have received as much credit as you deserve for having done this. You have held me to strict terms to support the Steelpillow edit and I have accepted those strict terms. You currently have a 5 editor consensus for the closing of the RfC and for continuing the discussion here in this section below. FelixRosch (talk) 14:58, 18 October 2014 (UTC)
Your persistent disruption of this RFC is totally unacceptable. Stop your edit warring to impose a result contrary to the developing community consensus. It is open again.
You still have not provided diffs for the conversation about closing this RFC involving five editors. I don't believe there has been such a conversation. The conversation at the start of this section is only between you and one other editor. No agreement by two editors to ignore developing community consensus (involving fourteen or so editors so far) is going to be accepted. No attempt to close an RFC prematurely by an editor with whom most respondents disagree is going to be accepted. --Mirokado (talk) 03:55, 19 October 2014 (UTC)
Undid revision 630191794. You appear to be edit warring against a 5 editor consensus for an RfC closed by the originating author RobertM. Please stop WP:EW and WP:3RR. Your next edit puts you over 3RR, and your Talk page is posted for WP:EW against a consensus of 5 editors. You have not even tried one single time to contact RobertM concerning the established consensus, or anyone else. RobertM has made genuine progress, for the first time in a discussion over a month long, by establishing a consensus of 5 editors. You have been invited to seek consensus in the discussion below in this section and you have refused. Please stop edit warring and please follow Wikipedia policy and procedures for establishing consensus before you edit. FelixRosch (talk) 14:24, 20 October 2014 (UTC)
  • (contribution which seems intended to be part of the RFC itself moved there. OP notified). --Mirokado (talk) 23:49, 26 October 2014 (UTC)

@FelixRosch: who are the five editors you keep mentioning? I don't count that many, and far more opposed. ---- 172.250.79.167 (talk) 15:03, 21 October 2014 (UTC)

Is There a Current RFC, or should we go to DRN, or should Peer Review status be discussed

I don't see a currently running RFC on the lede. It had been my understanding that the RFC would be re-opened, but it is boxed. There isn't consensus on what the lede should say. If there isn't agreement on an RFC, I will try moderated dispute resolution.

It seems significant at this time to list the various AI disciplines covered by the various editors in the last week, since more seems relevant here than just Weak AI and Human Engineering. FelixRosch (talk) 16:27, 21 October 2014 (UTC)
I see that User:FelixRosch changed the wording of the heading. Is he suggesting Wikipedia peer review? I don't think, based on my knowledge of Wikipedia peer review, that it is in order. It is used to bring an article to GA or FA status when there is consensus as to content. We don't have consensus on the lede. Do we want a new RFC, or to re-open the old RFC, or do we need DRN? Robert McClenon (talk) 16:49, 21 October 2014 (UTC)
@Robert McClenon: It would be useful at this time to see the complete list of leading AI disciplines as identified by the editors here. If you (or someone else) know what that list is, then let's see the list, which is more than just Weak AI and Human Engineering. FelixRosch (talk) 19:00, 21 October 2014 (UTC)
Hello Robert. The RFC only looks closed at the moment because of Felix' continuing disruption. As far as I can see the sequence of events was:
  • You effectively asked whether or not the current RFC should be closed.
  • Felix offered to add a variant of his preferred phrase if you closed it.
  • You agreed to do so, and then marked the RFC as closed without waiting for any further reactions. As I've already pointed out an RFC can be closed early, but none of the criteria for doing so apply here.
  • The closure was reverted (not by me). At this point WP:BRD should probably have applied, but instead Felix started another round of edit warring, which is currently still ongoing.
I appreciate that your closure was made with the best of intentions but it is clear that there is no general agreement to close the RFC early. I think you can take the subsequent reversions etc as "keep it open" answers to your original question.
If it were closed early, the closure notice should, just as for a normal closure, contain a neutral evaluation of the responses. The result from the responses so far is quite clear and does not correspond to your proposed closure rationale.
For these reasons please reopen the RFC yourself and let it run its normal course. --Mirokado (talk) 22:32, 21 October 2014 (UTC)
The RFC bot template had been deleted, and the RFC was boxed. I have added a new RFC bot template and have removed the archive box. That really reopens the RFC. Since there seems to be considerable support for the wording of the RFC, any editing or defacing of the RFC will be treated as disruptive editing. We can also open an RFC on deep learning if that is desired. Robert McClenon (talk) 22:56, 21 October 2014 (UTC)

Deep Learning

It appears that User:CharlesGillingham removed a section on deep learning, and that User:FelixRosch restored it. Was the questioned material copied from another article? If so, should a link rather than a copy-and-paste be used? What is the issue? Let's resolve the issue rather than edit-warring. Robert McClenon (talk) 16:55, 21 October 2014 (UTC)

Sorry, I reverted the edit before I saw this new section. There is also a section above on Deep Learning where I put a new comment. I thought we had consensus already, which is why I reverted the last edit on deep learning. Another editor has mentioned that there are "63 books and articles on Deep Learning", which IMO is not at all a convincing argument. Let's take another esoteric part of AI that I was actually very involved in: the Knowledge-Based Software Assistant. There are well over 63 articles and books on that topic. They used to have a yearly conference with around 50 articles or so just in that one conference. So "63 articles and books" is not a strong argument. This article should not be a collection of every approach to AI that has ever been tried. It should be a high level overview of the field that focuses on the main topics. A link to Deep Learning in the section on Neural Nets (or wherever it is most appropriate) seems far more in keeping with the relative weight of the topic in the field than having a whole sub-section. I think that to merit a sub-section the topic should receive significant coverage in at least one major AI overview book such as R&N. --MadScientistX11 (talk) 17:12, 21 October 2014 (UTC)
I am inclined to agree with the removal of the section. There is a full article. We do not need to largely duplicate a separate article. Robert McClenon (talk) 18:54, 21 October 2014 (UTC)
@Madruss had also just voiced a similar sentiment in the old section above. My own orientation was based on the large number of articles in the mainstream press last month (Wired magazine, etc.) about deep learning, as in: [4][5][6]. @RobertM, after you look at the links, make a judgment call: whether it is worth discussing, or a simple one-sentence link from one of the existing pertinent sections, or another option. @MadScientist, should everyone start to integrate their comments on this here in this new section? FelixRosch (talk) 19:00, 21 October 2014 (UTC)
Please note that this has been discussed twice on this page already. There was consensus for lowering the weight of "deep learning", based on the fact that it does not appear in the leading AI textbook. I added a one-sentence mention of deep learning in the appropriate section (deep learning is a neural network technique).
Keep in mind that AI is a very large field; we are trying to summarize hundreds of thousands of technical papers, thousands of courses, and thousands of popular books. A few magazine articles is not nearly enough weight to merit that many paragraphs.
I would say that "weight" is the most difficult part of editing a summary article like this one, especially because AI is an interesting topic and there are thousands of "pet theories", "interesting new ideas", "new organizing principles" and "contrarian points of view" out there. Occasionally they get added to this article, and we have to weed them out.
That's why we have used the leading AI textbooks as the gold standard for inclusion in this article. (See Talk:Artificial intelligence/Textbook survey to see how this was originally done.) There's no better way to have an objective standard that I can think of. Russell and Norvig is almost 1000 pages and covers a truly vast amount of information, far more than we could summarize here. We have even skipped topics that have their own journals.
At any rate, "deep learning" is not in those 1000 pages, and I need more than a magazine article to consider it important enough to cover here, but, as a compromise, I added a sentence under neural networks. ---- CharlesGillingham (talk) 01:10, 22 October 2014 (UTC)
Undid revision 630536769. Someone keeps section blanking this section, which has 63 linked citations. If you are making a bold edit by deleting the entire section, which uses 63 cites for books and articles, then state this plainly on the Talk page. New cites added. There has not been a full discussion of the deletion of an issue that is abundantly supported by 63 books and articles. This topic is well past the normal guidelines for verifiable sources. Referring to an old 2008 textbook in a high tech field which it has not kept up with is evidence that the 2008 source is antiquated and should be effectively supplemented in a 2014 on-line encyclopedia article which keeps up with well-documented progress in the field. FelixRosch (talk) 17:06, 23 October 2014 (UTC)
The pure number of sources is not what decides if something should be in the article.
Neither is it necessarily the point of an article (especially a summary article like this one) to provide in depth news coverage of whatever books have come out this year.
This is a topic spanning over half a century. Even if Felix is correct (and I don't really think he is) that "Deep Learning" has become dominant in the field in the last six years, we should beware recentism and not give it undue weight. APL (talk) 17:13, 23 October 2014 (UTC)
I agree absolutely with APL, and I know CharlesGillingham agrees. FelixRosch, you keep harping on these 64 papers and books. 64 doesn't mean a thing. There are hundreds of articles and books on AI sub-topics like Knowledge-Based Software Engineering and Case Based Reasoning. Should we have separate sections for those as well? --MadScientistX11 (talk) 17:51, 23 October 2014 (UTC)
That's a point which is well-rehearsed. May I ask you to read the 3 (only 3) new cites which I added at the very start of the section. Scientific American is not usually considered a silly magazine to be dismissed out of hand. Also, investments of the size being made by Google and others should not be ignored as irrelevant. Are you including the Sci Am article as just another tabloid piece for the popular press? FelixRosch (talk) 17:57, 23 October 2014 (UTC)
@Felix: Your unfamiliarity with the field is evident. The 2013 edition of Russell and Norvig's Artificial Intelligence textbook has 1 chapter out of 27 covering probabilistic models and statistical machine learning which encompasses deep learning. Your editing is disruptive; enough of your edit warring and pushing your own favourite fields. Pick up an AI textbook and broaden your horizon. pgr94 (talk) 19:06, 23 October 2014 (UTC)
@Pgr94, Reading old textbooks, even if periodically updated, is not the same as keeping up with the 2014 literature in a hi-tech field subject to rapid growth and innovation. A 2014 on-line encyclopedia is capable of keeping pace with rapid innovation and the growing technical literature, which is not limited to one or two textbooks which you may have read previously. If you have a 2014 article or book from a scholarly or well-established source to quote, then present it here. FelixRosch (talk) 21:02, 23 October 2014 (UTC)
@Felix: Most top tier universities are using Russell and Norvig to teach artificial intelligence. I suppose they are behind the times and you know better? (Stanford, CMU, Cambridge).
Please stop edit warring. The evidence overwhelmingly points to deep learning not being as important as you make out and the burden of proof is on you to demonstrate otherwise. If you provide sufficiently strong evidence we'll reach a consensus. But instead you are repeatedly failing to follow WP:BRD. pgr94 (talk) 21:32, 23 October 2014 (UTC)

(undent)@Felix. Please read WP:UNDUE and read my posts; you keep restating points which I have already refuted, and you offer no counter argument against me -- you just keep restating your points as if you haven't read my posts, which is disruptive. I've already explained why the 66 footnotes in the article deep learning and the three additional magazine articles don't count for much in an article like this one. But I'll repeat it in more detail if that helps:

3 or 66 citations are incredibly tiny numbers and mean almost nothing here. Pretty much every section in this article summarizes thousands or sometimes hundreds of thousands of technical papers and books (especially the sections Goals, Approaches, Tools and Philosophy). You can't change anything based on just one more article; adding three or ten or a hundred new citations does nothing to prove WP:UNDUE weight. That's just not going to convince us.

The citations you are finding are useless because there is no point in trying to count hundreds of thousands of possible citations to determine the right weight. We're not talking about a local band here trying to establish notability. We're talking about a multi-billion dollar industry, thousands of university departments, tens of thousands of active researchers and a vast body of published research. We can't count citations. We have to rely on textbooks and introductory courses to help us do the summarizing and weighting. I use Russell & Norvig because it's the most popular AI textbook.

You claim that Russell and Norvig is out of date, but this isn't true: it is currently, today, being used in over 1200 university courses -- it's currently, today, the most popular textbook. And, if you buy the top ten textbooks (as I did when I first started editing this article) you'll find that Nilsson and all the other popular textbooks use very similar weights. I found that there is widespread agreement about what "AI" is all about.

Instead of citing magazine articles, you should buy the textbook or take an introductory course in AI. If your argument had the form "the last three chapters of Hawkins's Introduction to AI are all about Hierarchical Temporal Memory, and now Stanford, Harvard and MIT are teaching Hawkins's book, and, what's more, Hawkins's company now has majority market share and thus is the most successful AI company in history", then I would agree we should bump up the profile of Hierarchical Temporal Memory. But this isn't the case, and, more to the point, this isn't the form of your argument.

Your argument does not convince me and does not appear to convince many of the others above. Please don't drop that giant blob of text about "deep learning" back in; it will just be removed again. ---- CharlesGillingham (talk) 05:04, 24 October 2014 (UTC)

@CharlesGillingham: Arbitrary deletion of edits without reading them first is against Wikipedia policy. If you are making a bold edit and section blanking an edit citing 63 books and articles on Deep Learning, then you need to state this plainly and be held responsible for it. You apparently have not read one single 2014 citation which I provided. You appear to have no knowledge that Peter Norvig is one of the principal supporters of Deep Learning research at Google, in his function as a director there, where Krizhevsky was hired: "In running deep learning algorithms on a machine with two GPU cards, Alex Krizhevsky could better the performance of 16,000 machines and their primary CPUs, the central chips that drive our computers. The trick involves how the algorithms operate but also that all those GPUs are so close together. Unlike with Google’s massive cluster, he didn’t have to send large amounts of data across a network. As it turns out, Krizhevsky now works for Google—he was part of a deep learning startup recently acquired by the company—and Google, like other web giants, is exploring the use of GPUs in its own deep learning work." Norvig defers to Krizhevsky for expertise on Deep Learning, not the other way around. Your lack of reading in this area is profound. If you are making a bold edit and section blanking an edit citing 63 books and articles on Deep Learning, then you need to state this plainly. FelixRosch (talk) 14:53, 24 October 2014 (UTC)
@Felix: Again, you seem to be missing my point and talking past me. It's like having an argument with someone who doesn't listen.
I'll respond to all your points, as quickly as I can: I am absolutely responsible for my edits -- I reduced the four paragraphs down to one sentence and put it under "neural networks". I looked at most of the citations -- they're not really on topic (i.e. they don't really address WP:UNDUE weight), and many of them don't mention deep learning at all. Everybody knows that Peter Norvig does AI for Google, and that Google is deeply into statistical machine learning. Note that Norvig doesn't use the term "deep learning" in his textbook, which we are using as the gold standard. I am well read in AI, having a degree in it, having worked in the field, and having followed it fairly carefully for the last thirty-five years; at any rate, the argument from authority doesn't count in Wikipedia. I removed the four paragraphs on "deep learning" at least twice, and always signed my name (unless I forgot to log in, I guess). I am always WP:BOLD when boldness is called for. I am proud of my edit and can't imagine why I wouldn't want to "state this plainly". Is that plain enough?
Finally, as I said at the top, please respond to my points so I know you are listening. Do you disagree with the method we are using to determine WP:UNDUE weight? Do you agree that we can't count citations to determine weight in this case? I.e., that every sentence in this article has thousands of potential citations, so a hundred more or less doesn't matter? ---- CharlesGillingham (talk) 15:45, 24 October 2014 (UTC)
@CharlesGillingham: As you are finally indicating that you made a Bold edit in section blanking "Deep learning", I am Reverting formally so that Discussion can now take place on Talk in the section now designated. Since you are the only one of the editors to finally take responsibility for your bold edit, I have re-read your Norvig comment and think that there may be a way to address the multiple issues on this page during this BRD. In order to show good faith, I will add the first comment of the BRD process to see if there is agreement that the current outline of this article for AI is substantially inferior to the Norvig outline for AI found in any one of the editions of his book. If there is agreement on this, although I do not think that the Norvig outline is the most ideal one in 2014, then it may still be possible to bring the multiple issues on this page into some better order and resolution. FelixRosch (talk) 21:47, 24 October 2014 (UTC)
This has been discussed, and everyone agreed that you were wrong.
Sorry that you didn't get your way, but you don't get to pretend otherwise.
You don't own the article. APL (talk) 22:07, 24 October 2014 (UTC)
@Felix: We already did BRD, about three or four times. Not sure what you mean. ---- CharlesGillingham (talk) 22:45, 24 October 2014 (UTC)
Who are you replying to? From you indentation, It looks like you're replying to me, but I think that reply was intended for Felix. APL (talk) 14:27, 27 October 2014 (UTC)
Yes, sorry. Gave it an "@". ---- CharlesGillingham (talk) 22:54, 27 October 2014 (UTC)

BRD following Bold edit by CharlesG of section blanking and Revert for Discussion

After re-reading the CharlesG comment on Norvig made directly above, there is reason to think that there may be a way to address the multiple issues on this page during this BRD. There is virtually no agreement between the outline of the current Wikipedia article for AI and the outline used for AI by Norvig, which CharlesG is strongly supporting. In order to show good faith, I will add the first comment of the BRD process to see if there is agreement that the current outline of this non-peer reviewed AI article ("B"-class) is substantially inferior to the Norvig outline for AI found in any one of the editions of his book. If there is agreement on this, although I do not think that the Norvig 2008 outline is the most ideal one in 2014, then it may still be possible to bring the multiple issues on this page into some better order and resolution. FelixRosch (talk) 21:47, 24 October 2014 (UTC)

I think that you've got a very ... non-standard ... idea of what [WP:BRD] means?
It certainly doesn't mean that you get to revert-war against consensus for as long as you can prolong the discussion. APL (talk) 22:12, 24 October 2014 (UTC)
@Felix: I think the bold edit was when you dropped it in, the revert was me removing, the discussion was up above. The next bold was you adding it back in, the next revert was by someone else, the next discussion was up above, a little lower. And so on. About four or five times. Every time you add it back in, we all discuss it and decide it's a bad idea, you ignore the discussion and put it back in. BRD was already happening. The discussion is over, the article is fixed. There won't be anything to discuss until the next time you boldly drop it back in and we have to revert and discuss again. ---- CharlesGillingham (talk) 22:45, 24 October 2014 (UTC)
The BRD sequence consists of: the first, "bold", edit; the second "revert" edit; then discussion if the original editor does not agree with the reversion (perhaps followed by some form of dispute resolution if necessary). There is no second bold edit etc: restoring a reverted edit without discussion is edit warring. --Mirokado (talk) 00:05, 28 October 2014 (UTC)
Second, to rebut your point about R&N: please see Talk:Artificial intelligence/Textbook survey. This article uses very similar weights as the major AI textbooks, except that we emphasize history a little more, unsolved problems a little more. ---- CharlesGillingham (talk) 22:50, 24 October 2014 (UTC)
@CharlesG: There is no knowledge on my part of any BRD ever taking place. You did not ping my account nor inform me of any dispute. The deletion appeared a few days ago in the edit history summary of the AI article, which is when I was informed. I had spent two days trying to find one editor who would take responsibility for the section blanking as something more substantial than a drive-by section blanking by a random editor. You finally stepped up and took responsibility for it as a Bold edit of section blanking. I then Reverted under BRD policy and guidelines so that BRD Discussion could proceed in good faith. If you re-install the edit as I placed it in my Revert, according to BRD policy and guidelines, then I will support your re-installing it for purposes of BRD, and good faith discussion can proceed. If you do not honor the BRD policy and guidelines by not honoring the revert, then there is no reason for me to assume good faith on your part for further discussion. My Revert needs to be honored, and the section restored as it was before you deleted it, in order for good faith BRD discussion to be initiated. FelixRosch (talk) 14:35, 28 October 2014 (UTC)
You've totally ignored the fact that four separate editors reverted your addition. And you're still going on about your "rights"? [7] --NeilN talk to me 14:46, 28 October 2014 (UTC)
I agree that the content was removed properly, in accordance to policy, and in good faith.
Felix's understanding of both the situation and the BRD essay is wrong. APL (talk) 19:57, 28 October 2014 (UTC)
My understanding of BRD is to follow Wikipedia policy and guidelines, and all 4 editors can have their say as long as the BRD policies and guidelines are followed, with the restoration of the blanked-out section, in order for good faith discussion to continue. Otherwise, there is no basis to believe that good faith discussion following Wikipedia policy and guidelines is possible on the part of CharlesG. He initiated the section blanking and he can restore it in order for good faith discussion to continue for all participating editors. FelixRosch (talk) 20:58, 28 October 2014 (UTC)
Regardless of which edit was the "B" and which was the "R", the discussion has now occurred. It's done. The consensus is to not include the material.
Maybe you missed the discussion because you were busy arguing about BRD, RFCs or who knows what, but you're not allowed to hold up the process by pretending that the discussion hasn't occurred. It's just disruptive editing. APL (talk) 21:29, 28 October 2014 (UTC)
No answer from @CharlesG. There is no knowledge on my part of any BRD ever taking place. You did not ping my account nor inform me of any dispute over deep learning. The deletion appeared a few days ago in the edit history summary of the AI article, which is when I was indirectly informed of it. I had spent two days trying to find one editor who would take responsibility for the section blanking as something more substantial than a drive-by section blanking by a random editor. You finally took responsibility for it as a Bold edit of section blanking, with harsh words. I then Reverted under BRD policy and guidelines so that BRD Discussion could proceed in good faith. If you re-install the edit as I placed it in my Revert, according to BRD policy and guidelines, then I will support your re-installing it for purposes of BRD, and good faith discussion can proceed. If you do not honor the BRD policy and guidelines, by not honoring the revert, then there is no reason for me to assume good faith on your part for further discussion. The full section needs to be restored as it was before you deleted it, in order for good faith BRD discussion to be initiated. Each editor at Wikipedia is entitled to discussion under the terms of BRD, yet you seem to have singled me out from being allowed to do this. You appear not to wish to honor my request to make my case for Deep Learning following the accounts of its leading expositors in 2014, rather than your old version from a 2008 book. The assumption of good faith on your part calls for the section which you blanked out to be Restored under BRD rules so that good faith discussion of the section can commence under Wikipedia BRD policy and guidelines. FelixRosch (talk) 21:07, 29 October 2014 (UTC)
"Each editor at Wikipedia is entitled to discussion under the terms of BRD"
What? That is not true at all. Nothing about BRD is personal. You don't own your edits. The issue has been discussed, you don't personally get some kind of trial.
Anyway, it's pretty clear that you're not convincing anyone at all. If you think we've behaved improperly, go report us at WP:ANI.
APL (talk) 21:22, 29 October 2014 (UTC)

Going forward on Deep Learning

There appears to be consensus against the inclusion of the four paragraphs proposed by User:FelixRosch on deep learning. Is Felix willing to accept that consensus and leave the material out of this article? If so, should a link to the article on deep learning be included in this article? If Felix does not agree that there is consensus, then does he want to go forward with another RFC, or does he want to take this issue and possibly any other issues to moderated dispute resolution? Robert McClenon (talk) 15:35, 27 October 2014 (UTC)

@RobertM: Not so fast. You still have not explained your odd about-face on the ANI. You made an agreement with your "Done" check mark to me, and then you did an about-face, needlessly exposing several good faith editors to the hazards of increased scrutiny for a mis-step during the ANI process. When the several editors there followed the ANI rules and did not commit a mis-step, you suddenly did an about-face nullifying your previous agreement to attain consensus, for your own personal reasons. The editors you exposed to ANI were all good faith editors, and there was no visible reason for you to needlessly put them into the hazard of increased scrutiny during an ANI, nor for you to foment a controversial ANI, for your own personal reasons of seeking your bias in this AI article. Your about-face requires some explanation. Please explain. FelixRosch (talk) 14:48, 28 October 2014 (UTC)
"exposing several good faith editors to the hazards of increased scrutiny for mis-step during the ANI process."
Oh, please. (::RollEyes::)
Nobody has been put into any "hazard". If the only thing stopping you from edit-warring is the threat that someone at ANI might notice, then you should not be editing Wikipedia ever.
(In any case, forming consensus is not a sin. We're supposed to be flexible, and not stubborn.)
APL (talk) 15:02, 28 October 2014 (UTC)
You were not the principal editor involved in the ANI, so you should not laugh at the other good faith editors. If you enjoy having your positions and edits on Wikipedia misrepresented, then roll your eyes again. FelixRosch (talk) 15:08, 28 October 2014 (UTC)
Oh, I am. APL (talk) 15:18, 28 October 2014 (UTC)
First, I wasn't the author of the ANI. Second, if Felix is implying that making mistakes is sanctionable behavior, then isn't defacing an RFC also sanctionable behavior? Third, this thread isn't about the ANI or the previous RFC. It is about deep learning. Fourth, I misunderstood what you had offered. I thought that you thought that the RFC was non-neutral, and that you were willing to change its wording. So I struck the bot template for the previous RFC and expected that you would file a new RFC. I didn't realize that you were trying to establish a consensus and then use that to stop further discussion. I made a mistake, in that I didn't understand what you were offering. Fifth, how do you want to go forward on deep learning? Robert McClenon (talk) 15:54, 28 October 2014 (UTC)
My answer to CharlesG is posted above in the BRD section. Your odd about-face is still unexplained and is still pending. My comments said nothing about a new RfC at that time, and I did exactly my part of what was stated:
"Offer to stipulate. You appear to be saying that of the 4 options from various editors which I listed above as being on the discussion table, that you have a preference for version (d) by Steelpillow, and that you are willing to remove the disputed RfC under the circumstance that the Steeltrap version be posted as being a neutral version of the edit. Since I accept that the editors on this page are in general of good faith, then I can stipulate that if (If) you will drop this RfC by removing the template, etc, that I shall then post the version of Steeltrap from 3 October on Talk in preference to the Qwerty version of 1 October. The 4 paragraph version of the Lede of Qwerty will then be posted updated with the 3 October phrase of Steelpillow ("...whether human-like or not") with no further amendations. It is not your version and it is not my version, and all can call it the Steelpillow version the consensus version. If agreed then all you need do is close/drop the RfC, and then I'll post the Steelpillow version as the consensus version. FelixRosch (talk) 17:46, 16 October 2014 (UTC)"
Done - Your turn. Robert McClenon (talk) 18:30, 16 October 2014 (UTC)
Done - Installing new 4 paragraph version of Lede following terms of close-out by originating Editor RobertM and consensus of 5 editors. It is the "Steelpillow" version of the new Lede following RfC close-out on Talk. FelixRosch (talk) 19:58, 16 October 2014 (UTC)
Please explain your about-face. Please remove/close your defective and biased RfC. FelixRosch (talk) 16:25, 28 October 2014 (UTC)
1) He did explain. It was an error. It needs no explanation beyond that. (It didn't even need that much explanation. People are allowed to change their minds without first getting permission from User:FelixRosch.)
2) You have no authority to issue demands to other users.
3) It would be inappropriate for him or anyone else to remove an RFC in progress.
4) Felix, you are making a fool of yourself. Everyone can see it except you.
APL (talk) 16:45, 28 October 2014 (UTC)
As User:APL explains, and as I explained, I misunderstood the offer, which I thought was an offer to reword the RFC, not to replace it with a two-editor "consensus". If I had understood the offer, I certainly would never have taken an RFC down based on a private agreement. I made a mistake based on misunderstanding. I didn't, for instance, deface an RFC. User:FelixRosch appears to be imposing a higher standard on other editors than he holds himself to. Robert McClenon (talk) 18:41, 28 October 2014 (UTC)
In any case, you haven't yet answered how you want to go forward on deep learning. Do you have a constructive suggestion, or do you want to ridicule other editors and to carp? Robert McClenon (talk) 18:41, 28 October 2014 (UTC)
As to the demand of User:FelixRosch to remove the RFC, there is consensus that it should not be removed, and that its removal, based on a miscommunication, was improper. Your continued demands that it be removed are disruptive editing. Because of your tendentiousness in editing, I have gone to extraordinary lengths to satisfy your demands, and have just gotten more demands, and will no longer pay any attention to your demands. You said that the RFC was biased, so I was willing to have its wording changed, but you took that as a private consensus. I am still trying to work toward consensus, and I have opposed a suggestion that the RFC should be snow closed. Would you rather have the RFC run its course, or have it snow closed? Demanding that it be removed is not a reasonable option. Robert McClenon (talk) 20:41, 28 October 2014 (UTC)

"Five Editors"[edit]

Perhaps Felix could enlighten us as to who the "five editors" that agree with him are? They seem to express their specific agreement for every single edit Felix has made here, but I can never figure out who they are. 74.113.53.42 (talk) 21:16, 21 October 2014 (UTC)

I think that he is counting me and User:Steelpillow, but I don't entirely agree with him, and I don't know about Steelpillow. My own opinion is that Felix is arguing that there is a consensus when we are still trying to establish consensus. Arguing that there is a consensus while one is still being worked out is non-collaborative. Robert McClenon (talk) 22:24, 21 October 2014 (UTC)

Another RfC on "human-like"[edit]

NO ACTION:

There is no consensus in this discussion to make the modification proposed. (non-admin closure) Technical 13 (etc) 17:02, 24 November 2014 (UTC)

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

I propose here that the phrase "human-like" be included in the article lead only as a part of the broad idea of "whether human-like or not." In particular, I propose that the opening sentences of the article lead should read, "Artificial intelligence (AI) is the intelligence exhibited by machines or software. The academic field of AI studies the goal of creating intelligence, whether in emulating human-like intelligence or not." — Cheers, Steelpillow (Talk) 10:13, 22 October 2014 (UTC)

Rationale

The inclusion of "human-like" in the lead has caused much contention and resulting confusion. Like may words and phrases, its precise interpretation depends upon the context in which it is used. This proposal uses it in a linguistically descriptive rather than academically definitive way, and as such its usage should not need to be cited. Any subsequent use in the article of the term "human-like", or of a similar-meaning qualifier, in a more specific context would need to be be cited. — Cheers, Steelpillow (Talk) 10:15, 22 October 2014 (UTC)

Survey responses

  • Oppose - It's better than saying that all AI is "human-like", but I wish we could use a different phrase to communicate that idea. From the discussions above, it's pretty clear that people interpret the phrase in different ways. And there's the historical weirdness that in decades past "intelligence" was defined as a uniquely human property. Edit: And, as per CharlesGillingham, modern sources try to avoid "human-like". APL (talk) 15:31, 22 October 2014 (UTC)
  • Comment I want to remind everyone again that the issue is sources. AI's leaders and major textbooks carefully define AI in terms of intelligence in general and not in terms of human intelligence in particular. See #Argument in favor of "intelligence" above for details. We, as Wikipedia editors, are not free to define AI in any way we like; we must respect the sources. ---- CharlesGillingham (talk) 16:15, 22 October 2014 (UTC)
  • Weak oppose. Steelpillow's suggestion is accurate, but I would prefer to leave the issue out of the lede by just using the general word "intelligence" there, and discuss the issue of "human-like" vs. "machine-like" vs. "animal-like" vs. "formal-ish" later in the article. ---- CharlesGillingham (talk) 16:15, 22 October 2014 (UTC)
  • Oppose - This proposal appears to be a good faith attempt to resolve a dispute through compromise but, IMO, we're asked to accept content that lacks sufficient good secondary sources to justify lead paragraph weight in order to make the conflict go away. Jojalozzo 17:31, 22 October 2014 (UTC)
True, and a fair criticism. Changing my vote. ---- CharlesGillingham (talk) 06:39, 24 October 2014 (UTC)
  • Weak Oppose, in that I would prefer to keep "human-like" out of the lede and to leave its discussion to the article body. It's better than the previous Felix version, which had human-like as the objective. Robert McClenon (talk) 21:55, 22 October 2014 (UTC)
  • I too Oppose mention of any fabulary use of "human-like intelligence" or "human intelligence" in this piece at all, much less in the leading material. It is simply not so in the field. If it is to be anywhere, in the body only, and cited. DeistCosmos (talk) 02:07, 24 October 2014 (UTC)
  • Preliminary Comment @APL, the caricature which RobertM has painted of me has nothing to do with my reference to human engineering and reverse human engineering in AI within the Lede. To my knowledge, no-one who has done their reading in AI believes that "all AI is human-like", which is a severe caricature and one which I do not endorse. The issue for Wikipedia is to accurately summarize the current article as it exists at this moment in time, which was written by another editor before I ever saw it. All eight opening sections, 2.1 through 2.8, were oriented by the previous editor to the human engineering and reverse human engineering standpoint in the current non-peer reviewed article, with its many deficiencies. As an accurate summary of the article in its current form, as written by another editor, and to point out this orientation, I added the word "human-like" to describe the state of the article as it stands. My hope is that in the future this article will become a peer reviewed article (GA or A) which will not be oriented to only one perspective in its opening 8 sections. My main point was that the opening 8 sections are all oriented to the human engineering and reverse human engineering perspectives in emulating human intelligence. @Steelpillow has written a better RfC than the poorly written and biased RfC by RobertM which multiple editors have criticized. The non-neutral and biased RfC by RobertM should be deleted, and RobertM should note how Steelpillow constructed this RfC, stating his orientation plainly and listing his rationale just as plainly for all editors to see. By now everyone knows that RobertM is biased to the Weak-AI perspective, and his pretending to be neutral is not fooling or diverting anyone anymore. He should simply state that he is biased to supporting the Weak-AI perspective and delete/close his poorly formed RfC as non-neutral and violating Wikipedia policy for NPOV. @Jojalozzo, I agree with your criticism of the previous RfC as deficient, and your endorsement here appears to be well intended. Though I am sorry you are opposed to Steelpillow here, it is certainly your option to voice your opinion now that Steelpillow has explained the rationale plainly and for everyone to see. If the previous poorly formed RfC by RobertM is deleted/closed, then discussion could perhaps continue constructively here. FelixRosch (talk) 17:38, 23 October 2014 (UTC)
  • Oppose. Researchers in AI use many techniques: Monte Carlo simulation, simulated annealing, etc. "Doing it the way a human would" is not among them. Maproom (talk) 08:25, 29 October 2014 (UTC)
  • Clarification. The current RfC originated from a previous RfC, now closed by a bot, where a consensus of several editors supported the version of Steelpillow. Those Supports can/should be re-posted here for completeness. (Users: Steel pillow, DavidEpstein, Ruuud). FelixRosch (talk) 18:22, 3 November 2014 (UTC)
That is the exact opposite of what is supposed to happen!
You can't copy/paste people's comments from different threads and just re-position them to support you on new questions.
You may ask those people to reiterate their points, but if they don't want to, they don't have to. You can't force people to weigh in, nor can you hold it against people if they decide to change their mind. APL (talk) 20:38, 3 November 2014 (UTC)
But please read WP:CANVASS; you may not contact only people who agree with you: "The audience must not be selected on the basis of their opinions—for example, if notices are sent to editors who previously supported deleting an article, then identical notices should be sent to those who supported keeping it." --Mirokado (talk) 22:31, 3 November 2014 (UTC)
The proper identification of overlapping RfCs is part of Wikipedia policy and guidelines. In this case, an overlapping RfC was identified for any new editor who wishes to be fully informed of the history of this discussion. FelixRosch (talk) 16:25, 7 November 2014 (UTC)
  • Oppose. The first RFC has already clearly decided that we will not have the term "human-like" in the lead. This suggestion seems to be that it would be OK to include the term, without any sources, if it is added inside another phrase. I don't buy this idea at all. The term is unsourced so it cannot be used. It does not appear in the article, so it cannot appear in the lead, which summarises article content. The wrapping phrase implies that there is some discussion or confusion within the field about the applicability of this term. That is also unsourced (and I do not believe that it is the case).
    Some robots are of course anthropomorphic or designed to communicate with humans using voice. This has very little to do with the mechanics of any underlying intelligence, though. --Mirokado (talk) 01:29, 14 November 2014 (UTC)

Threaded discussion of Another RfC on "human-like"[edit]

The following discussion is in reply to FelixRosch's Preliminary Comment, above. ---- CharlesGillingham (talk) 19:12, 29 October 2014 (UTC)

You repeatedly edit-warred, against multiple other editors, to change the lede so that it defines the goal of AI research as the creation of "human-like" intelligence. [8] [9][10][11][12][13][14][etc]
You tried a few different wordings, but they all ultimately have the same meaning: a meaning that's factually incorrect and not supported by sources.
If anyone is behaving non-constructively here, it's you. Trying to deflect that criticism onto other editors isn't fooling anyone. APL (talk) 23:38, 23 October 2014 (UTC)
Just in case anyone reading here is unfamiliar with what "strong AI" and "weak AI" are, I want to make it clear that there is no such thing as a "weak AI perspective", and no one, to my knowledge, ever had anything like a "weak AI agenda". The "agenda" that Felix ascribes to RobertM is pure nonsense based on misunderstanding. Felix, being unfamiliar with the field, imagines that there is some kind of political debate between roughly equal factions for "strong AI" or "weak AI". This isn't true. There is a large and successful academic and industrial research program known as AI, involving billions of dollars and tens of thousands of people. There is a very small, but very interesting, subfield known as artificial general intelligence. Some of the people in AGI use the term "strong AI" to describe their work. "Weak AI" is never really used to describe anything, except in contrast to strong AI. This article has a section on AGI and we actually give it a little more weight than major AI textbooks do, simply because, as I said, it is interesting. There is an AGI article that goes into more detail, which names most of the companies and institutions involved. I'll say it again: the "agenda" that Felix ascribes to RobertM is pure nonsense based on misunderstanding. ---- CharlesGillingham (talk) 05:21, 24 October 2014 (UTC)
Yes, I agree that Felix seems to be imagining some sort of conflict between two groups of AI researchers, the StrongAI and the WeakAI, and he believes that he's fighting a conspiracy by the WeakAI people, even though there isn't really any such thing as "Weak AI". There's an entire field of research, and then a tiny subset of that field that's sometimes called "Strong AI".
It doesn't help that the tiny subset of the field called "Strong AI" is the part that Hollywood focuses on. That may be part of the misunderstanding. APL (talk) 15:20, 24 October 2014 (UTC)
Also, I suppose I should rebut FelixRosch's argument about the sections 2.1, etc.
FelixRosch's original reading of the article was deeply mistaken. As User:pgr94 and I argued in detail above, none of the sections he mentions are primarily about human emulation. These sections describe tasks that require intelligence. Certainly people do most of these tasks in some form or other, but that is not what AI is really after. AI is interested in the task itself, and is not committed to doing the task by emulating the way people do it. In order to work on these tasks, AI first has to redefine the task so that it doesn't refer to humans, just so that the field has a clear understanding of what the task is. "Problem solving" is defined in terms of rational agents. "Machine learning" is defined as "self-improving programs". And so on. "Natural language processing" is a catch-all for any program that takes as input natural language or produces output in natural language. (For example, Google's search engine is an AI NLP program --- no human being could read everything on the internet and rank it, but this AI program can. It is an NLP problem and it is very in-human.) They are a class of tasks that we would like machines to perform.
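(A side note for readers to whom the "rational agent" framing is unfamiliar: the textbook definition really is task-centred rather than human-centred. The Python sketch below is purely illustrative and is not drawn from the article or any source cited here; the environment interface and all names are invented for the example.)

    def run_agent(env, policy, steps=100):
        """A rational agent in the textbook sense: it maps percepts to
        actions so as to maximise a performance measure. Nothing in the
        interface refers to how a human would perform the task."""
        percept = env.reset()                  # initial observation
        total_score = 0.0
        for _ in range(steps):
            action = policy(percept)           # the agent's decision rule
            percept, score = env.step(action)  # environment responds
            total_score += score               # external performance measure
        return total_score

Success is judged entirely by the score the environment returns for these tasks, which is why the definitions above need no reference to human technique.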
The fact that humans can perform them is interesting, but only up to a point. We certainly want to pay close attention to those areas where people out-perform machines, but experience has shown that emulating humans is unlikely to be the best way forward. Russell and Norvig offer an analogy with aeronautics --- airplanes are not tested by how closely they emulate birds. They are tested by how well they fly. By analogy, FelixRosch reads that airplanes "fly" and argues that "aeronautical engineering is the study of machines capable of bird-like flight", on the grounds that flight is a behavior strongly associated with birds. (This works better if you imagine he is reading the article in the year 1870.)
Today, the methods that AI programs use to carry out these tasks are typically very in-human: they can be based on the formal structure of the problem (such as logic or mathematical optimization) or they can be inspired by animal behavior (such as particle swarm optimization) or by natural selection (genetic algorithms) or by mechanical processes (simulated annealing) and so on.
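(Again purely by way of illustration, and not taken from any source under discussion: a minimal Python sketch of one of the "in-human" methods just named, simulated annealing. The toy cost function, neighbour rule, and cooling schedule are all invented for the example.)

    import math
    import random

    def simulated_annealing(cost, neighbour, state, t=1.0, cooling=0.995, steps=10000):
        """Mimics annealing in metallurgy, not human reasoning: worse states
        are accepted with probability exp(-delta/t), which shrinks as the
        temperature t falls, letting the search escape local minima early on."""
        best = state
        for _ in range(steps):
            candidate = neighbour(state)
            delta = cost(candidate) - cost(state)
            if delta < 0 or random.random() < math.exp(-delta / t):
                state = candidate              # accept the move
            if cost(state) < cost(best):
                best = state                   # remember the best state seen
            t *= cooling                       # cool the system
        return best

    # Toy usage: minimise a bumpy one-dimensional function.
    bumpy = lambda x: x * x + 10.0 * math.sin(x)
    minimum = simulated_annealing(bumpy, lambda x: x + random.uniform(-1.0, 1.0), 5.0)

Nothing in that loop models how a person would reason about the problem; the method is borrowed from the physics of cooling metal, which is exactly the point being made above.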
Felix has heard these arguments before, but I thought I would save you all some searching and bring them down here. ---- CharlesGillingham (talk) 06:27, 24 October 2014 (UTC)
@CharlesGillingham, Your comments are self-contradictory from one edit to the next. This is your comment: "I think you could characterize my argument as defending "weak AI"'s claim to be part of AI. In fact, "strong AI research" (known as artificial general intelligence) is a very small field indeed, and "weak AI" (if we must call it that) constitutes the vast majority of research, with thousands of successful applications and tens of thousands of researchers. ---- CharlesGillingham (talk) 00:35, 20 September 2014 (UTC)". All of which you contradict in your mis-statement and disparagement of my position above. John Searle has amply defined both Weak-AI and Strong-AI and you should stop pretending to be the one or the other when it suits you. You claim to be Weak-AI one day and then not Weak-AI the next. FelixRosch (talk) 15:19, 24 October 2014 (UTC)
You left out the previous sentence where I said "The term weak AI [is] not generally used except in contrast to strong AI, but if we must use it," etc. At that point in the conversation you were using "weak AI" in a way I had never heard it used before, and you still are. You originally accused me of having a "strong AI" agenda, which made no sense at all, and now you accuse me of having a "weak AI agenda", which is a very weird way of describing the position I have defended. I was forced to the conclusion that you are unfamiliar with the meaning of the terms, and since almost every introductory course in AI touches on John Searle, I think I am justified in concluding that you are unfamiliar with the field. (Indeed, you are still demonstrating this: John Searle's Chinese room#Strong AI is very different from what you are talking about -- he's talking about consciousness, and the theory of mind, which are pretty far removed from the subject. Your meaning is closer to Ray Kurzweil's re-definition of the term.) I was trying to point out that what you were calling "weak AI" is never called that (except in extraordinary cases). You missed the main point, which I am trying to make as plain as I can. Here it is, using bold as you do: what you're calling "weak AI" is actually called "AI" by everybody else. ---- CharlesGillingham (talk) 16:25, 24 October 2014 (UTC)
As this debate seems neverending, I simply wish to endorse everything Charles has conveyed above. Not simply the words, but the spirit of it. DeistCosmos (talk) 05:29, 27 October 2014 (UTC)

What's this all about? (Rhetorical question!)[edit]

I'm tempted to go away and leave you all to play Kilkenny cats in a hopefully convergent series, but there are a couple of items that, if they have been recognised in the foregoing, I have missed and I refuse to dig through it to find them.

  • The point of the article is to offer a service to the user; in particular a service that constructively deals with user needs and expectations.
  • For an article with an unqualified name such as "Intelligence" to deal with only "Spontaneously Emergent Intelligence" or only "Artificial intelligence" would be misleading. For a less abstract article with a more qualified name such as "Artificial Intelligence" to deal only with the still more tightly constrained concept of "Human-like Artificial Intelligence" would be even more misleading, though on similar principles.
  • Therefore anyone who wants an article that concentrates on "Human-like Artificial Intelligence" or "Animal-like Artificial Intelligence" or "Mole-like Artificial Intelligence", or "Bush-like Artificial Intelligence", or "Slug-like Artificial Intelligence", or "Industrial Artificial Intelligence", or "Theoretical Artificial Intelligence", or "Mousetrap-like Artificial Intelligence", or "Alien Artificial Intelligence" could do so with everyone's blessing, but not in this article; its title is not thus qualified.
  • Accordingly there is no point to compromising on what goes into the lede. The article should deal with what the user seeks, and in particular what the user seeks on the basis of the title, not on the basis of what one faction of the authors thinks would make a nice article if only the readers would just ignore the title. The lede in turn should tell the reader as compactly and comprehensibly as may be, why s/he should skip the article or continue reading. It should not include discussions, just hints at the relevant content. Formulae for lede length are for folks who haven't worked out what should be said or why to say it. A twenty page article might do very well with a ten-line lede, whereas a two-page article might need a half-page lede. The measure of a lede's greatness is rather a function of its logical content and brevity than its length.
  • The field of artificial intelligence is far beyond what we can deal with comprehensively; its sub-fields need separate articles just to summarise them, and before we can deal with them we must define them coherently. Flat assertions about constraints such as intelligence having to be like human intelligence to be artificial intelligence (instead of like Turing machine intelligence or Vulcan Intelligence no doubt) need cogent justification if they are to be so much as considered, let alone indulged.
  • I cannot offer a serious structure of articles in the current context, and as I have no intention of writing any of them, I would not be entitled to present one anyway. But for heaven's sake do it hierarchically, from the most abstract (Artificial Intelligence just below Intelligence, already done), followed by more constrained topics, such as Human-like (if anyone wants such an article and can define it coherently), and any other branches that anyone can define, describe and discuss usefully. They could be presented as discrete, linked articles, each dealing with more highly constrained sub-divisions of the broader, more abstractly defined topics.
  • If you all cannot agree on a basis for formulating the article structure, then you should form a group (project, whatever you like) that can apply some requirements to what people state here. Agreement might demand compromises, but compromise does not mean writing handwaving instead of sense just so that you can include all the words that anyone thought sounded nice. I mean, look at: "Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is an academic field of study which studies the goal of creating intelligence, whether in emulating human-like intelligence or not." That is the kind of incoherence and inaccuracy that may result when one tries to impose mutual appeasement instead of cogency! JonRichfield (talk) 12:33, 2 November 2014 (UTC)
Agree that the opening of the lede is almost incoherent, especially the second sentence. Personally, I like the article's structure, and as far as I know there are no complaints about this. We have been very careful about summarizing the field in a way that reflects how AI describes itself -- we cover the same topics as major AI textbooks, with slightly more emphasis on history, unsolved problems, and popular culture. See Talk:Artificial intelligence/Textbook survey ---- CharlesGillingham (talk) 02:42, 5 November 2014 (UTC)
Hi @JonRichfield:. I agree entirely that this article's title requires it to overview all notable aspects of AI, and that it is currently not even coherent. For example, AI is not just an academic field of study; it is primarily the object studied by the academic field of the same name. It also has a strong place in popular culture and science fiction. But one thing at a time! What about the human-like aspect? Here I think you have the present RfC discussion backwards, in that opinion is overwhelmingly in favour of expunging anything resembling "human-like" from the lead. I believe this is a grave mistake. Consider for example the face-like robots designed to simulate a degree of human empathy and emotional cognition. The essence of these devices is human-like control of behaviour. Take too for example the latest (1 Nov 2014) copy of New Scientist, page 21, which has the subheading, "A hybrid computer will combine the best of number-crunching with human-like adaptability – so it can even invent its own programs." To quote from later in the article, "DeepMind Technologies ... is designing computers that combine the way ordinary computers work with the way the human brain works." But with opinion so overwhelmingly against citing such off-message material, I have no stomach to engage the prevailing attitude of "well, I'm an AI expert and I have never come across it." — Cheers, Steelpillow (Talk) 18:54, 2 November 2014 (UTC)
Again, the issue is sources. Major AI textbooks and leaders carefully define the field in terms of intelligence in general, and not human intelligence in particular. We are not free to define AI any way we like.
The fact that some AI research involves human emulation does not imply that the entire field needs to be defined in terms of human emulation (or not). And the fact that popular articles about AI always mention human intelligence in one form or another doesn't mean that the field should be defined that way -- it just means that this is the most interesting thing about the field to a popular audience.
Also, I think you should note that the first definition given is "the intelligence of machines or software" -- so the current version does name the "object of study". That being said, this article is about the academic and industrial field of AI research. The term "artificial intelligence" was coined as the name of an academic field. We have sections on AI in science fiction, philosophy and popular speculation. I think there will be resistance to expanding them -- there have been comments in the past that we should cut them altogether (which I opposed). ---- CharlesGillingham (talk) 02:42, 5 November 2014 (UTC)
If, as you say, "this article is about the academic and industrial field of AI research", then it should be moved to Artificial intelligence research. If it is to remain here, then it needs to adopt a more comprehensive approach and address less rigidly academic sources such as New Scientist. We come back to Jon's opening issue; "The point of the article is to offer a service to the user; in particular a service that constructively deals with user needs and expectations." These expectations are channelled by the article title; it has to accurately reflect the content and vice versa. — Cheers, Steelpillow (Talk) 09:46, 5 November 2014 (UTC)
The name of the field is Artificial Intelligence, just as the name of chemistry is Chemistry, not chemistry research or chemistry (science) or whatever. ---- CharlesGillingham (talk) 09:08, 6 November 2014 (UTC)

Hi @Steelpillow. I agree with you practically in detail, even to the point of including the human-like aspect as an important topic. Where the wheels come off is that the human-like aspect (which very likely could earn its own article for a range of reasons, some of industrial/social importance and some of academic/philosophic importance) is not of fundamental, but of contingent importance. You could easily have a huge field of study and endeavour of and in AI, without even mentioning human-like AI. There is no reason to mention human-like AI except in context of particular lines of work. There even is room to argue about the nature of "human-like". Is Eliza human-like? You know and I know, but she fooled a lot of folks who refused to believe there wasn't someone at the other end of the terminal. The fact that there is a lot of work in that direction, and a lot of interest in it doesn't imply that it needs discussion where the basic concepts of the field are being introduced and discussed. Consider an analogy; suppose we had an article on automotive engineering in the 21st century, and one of the opening sentences read: "It is an academic field of study which studies the goal of creating mechanical means of transport, whether in emulating horse-like locomotion or not." Up to the final comma no one is likely to have much difficulty, but after that things go wrong don't they? Even if we all agree that there is technical merit to studying horse-like locomotion, that is not the place, nor the context to mention it. Even though we can argue that horse-like locomotion had been among the most important for millennia, even though less than 150 years ago people spoke of "horseless carriages" because the concept of horses was so inseparable from that of locomotion, even though we still have a lot to learn before we could make a real robot that can rival certain aspects of a horse's locomotory merits, horse-like locomotion is not the first thing we mention in such a context. I could make just as good a case for spider-like AI as for human, but again I do not say: "It is an academic field of study which studies the goal of creating intelligence, whether in emulating spider-like intelligence or not." Is there room in the article for mentioning such things at all? Very possibly. In their place and context certainly. Not in the lede though. And possibly not in the article at all; it might go better into a related article. Universal importance is not the same as universal relevance. The way to structure the article is not by asking what the most important things are, but in asking how to articulate the topic, and though there are many ways in which it could be done, that is not one of the good ways! JonRichfield (talk) 19:56, 2 November 2014 (UTC)

That is fair comment (though the horse analogy is a bit stretched, no matter). Whether human likeness is mentioned in the lead should depend on the prominence that it and its synonyms are given in the body of the article. At present they have little. — Cheers, Steelpillow (Talk) 20:34, 2 November 2014 (UTC)
@Steelpillow and @JonRichfield; If both of you could start a proper section and discussion on which new sections are needed to improve this article, it would help to solve most of the issues encountered here. The non-peer-reviewed status of this "B" article on AI is likely its biggest enemy and is holding back the resolution of many issues. (See the comments of the new editor User:Mark Basset, who appears to have put his new November comments in a very old section above on this Talk page.) The current outline of this article is inferior to the AI outline of the Russell and Norvig book from 2008 and could be substantially improved with relatively little effort. A new discussion section could determine what a new and improved outline should include as its section titles for AI at this time. FelixRosch (talk) 18:50, 3 November 2014 (UTC)

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Good Article[edit]

I've never put this article up for WP:Good article because I don't have time to make all the little editorial changes that this always requires. Since there are so many eyes on this article now, maybe we have sufficient editor-power to do it.

It's very comprehensive, it has extremely reliable sources for almost every paragraph, and the weight is as good as we can make it. I think the writing is fine (although you never know what's going to bother them this year over at MOS).

The only section that is a little short on comprehensiveness is Applications -- it's just scattershot: no one has made an effort to cover the most widely used and important applications first, or to organize the applications by industry, and so on.

Another section that could use help is AI in fiction. It's just a bullet list, and it's always growing. We should probably cut the whole thing -- it's a pain to keep knocking everybody's favorite book off it, so I gave up a long time ago and just let it grow. I liked the way this section was several years ago, when science fiction and speculation were mixed in the same section, because they really do make the same points. If you try to organize the section on science fiction by topic, you will find that many of the paragraphs cover the same topics as the speculation section.

Anyway, those are my thoughts. Anyone interested? ---- CharlesGillingham (talk) 02:42, 5 November 2014 (UTC)

This dovetails nicely with the "human-like" question, since, unlike most real AI, fictional AI is almost always indistinguishable from human intelligence. (Except smarter and/or crazier, somehow.) APL (talk) 15:13, 5 November 2014 (UTC)

The thing vs the academic study of it[edit]

The structure and scope of this article shows a conceptual muddle between "AI" as machine intelligence and "AI" as the associated academic discipline. We get an opening definition of the thing itself, a restatement that it is the academic field, and various sections on one or the other of these with no indication as to which or why. The Outline of artificial intelligence is no better, while the History of artificial intelligence carries barely one paragraph on the wider aspects.

It is claimed above that the term "artificial intelligence" was coined by academia to describe a field of research. But so too were words such as "biology", "philosophy", "nuclear physics" and a hundred other terms which have become synonymous with the object studied. Several broader aspects of AI have their own articles - Philosophy of artificial intelligence, Ethics of artificial intelligence, Artificial intelligence in fiction. It is nonsense to suggest that these are studying the philosophy, ethics and fictional aspects of the academic discipline; they are patently aspects of the thing itself. So we see that Wikipedians have already embedded this broader usage of the term in our encyclopedia. To suddenly turn round and claim that this article's present title references primarily the academic discipline is inconsistent with the established wider usage and utterly untenable.

It seems to me that one way out of this dilemma is to create a new page title for Artificial intelligence research. Probably 90% of the present article's content would be appropriate there; the remainder - plus a summary of the former - should remain here. Options would seem to be:

  1. Move this article across, then cut-and-paste back the 10% or so that does belong here.
  2. Create Artificial intelligence research from scratch and heavily rebalance the present article.
  3. Refactor the present article to cover both usages unambiguously and not create a new page.

What do folks think? — Cheers, Steelpillow (Talk) 10:24, 5 November 2014 (UTC)

I don't see the need for two articles. Refactor this article to differentiate clearly between artificial intelligence (the product or software) and the study of artificial intelligence (the discipline). Robert McClenon (talk) 15:20, 5 November 2014 (UTC)
@Steelpillow I'm having a lot of trouble understanding what you have in mind. Pretty much every technical topic is both a subfield and a type of AI. Neural networks is a subfield and a kind of program. Machine learning is a subfield and a class of programs. Statistical AI is a subfield and a description of a kind of program. Certainly you wouldn't want to write slightly differently worded versions of each section in both articles? How would you split up the history section? I guess I'm not sure what you mean by the "object of study" -- it's AI programs in the real world, right? Or are we talking about something else? ---- CharlesGillingham (talk) 09:17, 6 November 2014 (UTC)
Personally I agree with you. But some editors have been adamant that they want the article to cover the academic literature to the exclusion of other sources. For example the fact that the idea of "human-like" does not appear in the primary literature is being used as an argument that it should not appear in the article. On the "object of study", consider biology. This is an academic field of study, but we also talk happily about "the biology of" a thing, meaning what is actually going on inside the thing rather than merely the study of the thing. Similarly, when we philosophise about AI we are philosophising about the end product, the thing created, not about the academic activity that surrounds it. This kind of issue has been bubbling along for some time, so I was wondering whether the apparently irreconcilable views of some editors might be met by forking the article. This would allow each camp to develop their material in relative peace - at least until the first Merge proposal. I felt the idea worth floating. — Cheers, Steelpillow (Talk) 10:21, 6 November 2014 (UTC)
I'm more interested in what you actually want to do to the article; how the fork would work. We already have an article on "human-like intelligence" --- artificial general intelligence --- so the fork is already there. But the fork you are proposing is between AI research and AI programs, and, actually, we also already have an article, artificial intelligence applications, which describes AI programs. So the forks are pretty much there. I'm having trouble visualizing what material would go into an article on "the intelligence of machines" unless it's the same as applications of AI. I'm also having trouble seeing how we could describe the field of AI without most of the text just describing different classes of AI programs (and the subfields of AI that study and create them). ---- CharlesGillingham (talk)
I should have looked for those articles, thank you. I am coming at this primarily as a Wikipedian, and my knowledge of the subject is no more than that of an interested layman. What I have been groping towards is the idea of a top-level article, akin to a portal page, which introduces the whole field of created intelligence, cognition, mind, whatever, and across all of hard research, philosophy/ethics and fiction. I had assumed that the present article was it, but then found that other editors had a more specialised view of its scope. For example, philosophy, ethics, popular culture and fiction are most strongly concerned with the human-like aspects of artificial general intelligence, so a portal would give that more equal emphasis with the academic study of weak AI. The present article focuses on the academic study, which is primarily of weak AI. I started the present discussion as a way to tease apart those two foci and see how best to create that overarching introduction. (In this context I am treating the programs as part of the research effort, as a substrate to the intelligence created rather than as enshrining the intelligence itself; a surgeon does not see the intelligence under the scalpel when he cuts into a living brain. It is the information constructs bandied about by the programs and the brain which comprise the intelligence itself. Perhaps academia disagrees with me?) I am quite prepared to be told that my proposed fork is wrong-headed and that there is a better way. But as a Wikipedian and a layman, I do need that introductory material. — Cheers, Steelpillow (Talk) 11:36, 8 November 2014 (UTC)
Would it be appropriate to assist readers by adding a disambiguation section at the very top of the article along the lines of "This article concerns research into artificial intelligence and its applications. For information related to human-level artificial intelligence see Artificial General Intelligence, for the feasibility of AI see Philosophy of AI."
Another option might be to create an AI navigation template like the one in Programming paradigm. Both options may make it easier for readers to find what they are looking for. I'd much rather this kind of change than the article evolving into something that doesn't match the textbooks. pgr94 (talk) 15:17, 8 November 2014 (UTC)
@Steelpillow appears to be suggesting something like this for a refactoring of the main sections in the article (in addition to the four options of newer outlines for refactoring sections in the new section below). This would be done for the benefit of readers, to give them a more useful version of the article. The preliminary version from @Steelpillow appears to move in the direction of something like the following (for discussion/amendment/revision, etc.):
AI (continue to tease apart the two foci)
1. Created intelligence, cognition, mind
2. Academic study of Weak-AI
3. Hard research
4. Philosophy/ethics
5. Fiction
Is this any closer to a workable refactoring, or should the four new outlines provided below still be consulted? FelixRosch TALK 17:53, 8 November 2014 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── That looks quite close to what I have in mind, although I don't know enough to comment on the division between academic study and the hard stuff, while a History section might also be appropriate. I am currently chewing over the idea that the entry page should be the new article, titled something like Introduction to artificial intelligence and allowing the present one the opportunity to focus more exclusively on the technical side of things, omitting for example the section on Fiction. This seems to be the way that several other science topics are treated, see for example Quantum mechanics and the Introduction to quantum mechanics. This reply may also be taken in the context of the thoughts expressed by JonRichfield (talk · contribs) below here. — Cheers, Steelpillow (Talk) 21:11, 8 November 2014 (UTC)

Cognitive computers[edit]

Cognitive computers combine artificial intelligence and machine-learning algorithms, in an approach which attempts to reproduce the behaviour of the human brain.[1] An example is provided by IBM's Watson machine. A later approach has been their development of the TrueNorth microchip architecture, which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers.

  1. ^ Dharmendra Modha (interview), "A computer that thinks", New Scientist, 8 November 2014, pp. 28-29

Any thoughts on where to drop this - or something like it - into the article? Or, since it "combines AI with..." does it belong elsewhere? — Cheers, Steelpillow (Talk) 16:04, 7 November 2014 (UTC)

[update] Just found the stub article on the Cognitive computer. The paragraph above obviously belongs there, but how much mention should be made here? — Cheers, Steelpillow (Talk) 18:12, 7 November 2014 (UTC)
As I understand it (from what little I read), this is a chip that accelerates an algorithm called Hebbian learning, which is used on neural networks. So, if you want to add a (one sentence) mention of cognitive computer, it would have to go in the section Neural networks of this article. (I would prefer it if we added it to the neural network sub-article, however, because there isn't room for everything here.) ---- CharlesGillingham (talk) 04:35, 8 November 2014 (UTC)
I looked at neural networks, and neuromorphic computing (a different kind of hardware designed to run neural networks faster) is described under Artificial neural network#Recent improvements - so it seems to me that "cognitive computer" belongs there. Someone should create a section in that article on hardware approaches to neural network computing, and find out all the current hardware. It may even deserve its own article. ---- CharlesGillingham (talk) 04:52, 8 November 2014 (UTC)
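(A side note on the algorithm named above, for anyone unfamiliar with the term: Hebbian learning strengthens a connection in proportion to the correlated activity of the units it joins, often summarised as "cells that fire together wire together". The Python sketch below is a minimal illustration only; the learning rate, pattern, and single linear unit are invented for the example, and it says nothing about how TrueNorth or any other chip actually implements the rule in hardware.)

    import random

    def hebbian_update(weights, inputs, output, rate=0.1):
        """Hebb's rule: dw_i = rate * x_i * y. Each weight grows in
        proportion to the product of its input and the unit's own output."""
        return [w + rate * x * output for w, x in zip(weights, inputs)]

    # Toy usage: one linear unit repeatedly exposed to the same pattern
    # gradually strengthens the weights aligned with that pattern.
    random.seed(0)
    weights = [random.uniform(-0.1, 0.1) for _ in range(3)]
    pattern = [1.0, 0.5, -1.0]
    for _ in range(10):
        output = sum(w * x for w, x in zip(weights, pattern))  # unit activation
        weights = hebbian_update(weights, pattern, output)

The update is purely local (each weight needs only its own input and the unit's output), which is what makes the rule attractive for dedicated hardware.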

Two options for re-factoring the AI outline from two leading scholars (expanded to four options)[edit]

One editor has suggested that a re-factoring of the AI article could address many of the issues with its original 2004 form, when it was first written by one of the participants in the Talk page discussion above. The first outline is adapted from Professor Sack of the University of California at Santa Cruz, and the other is from Peter Norvig's 2008 book on AI, for purposes of discussion/comment/revision. Yet another editor has expanded the list to four outlines to provide further ideas for the refactoring of the old outline. FelixRosch TALK 17:36, 8 November 2014 (UTC)

Option one: Adapted from Prof. Sack, Univ. Cal. at Santa Cruz:

1. Early AI

2. Behaviorism and AI

3. Cybernetics and AI

4. AI as a Kantian Endeavor

5. Common sense challenges to AI

5.1 Kant and Common Sense

5.2 AI and Common Sense

6. AI and non-military concerns

7. Two strands of AI research

7.1 The neo-encyclopedists

7.2 The Computational Phenomenologists

8. The Aesthetic turn in AI

8.1 Husserl, Heidegger and AI

8.2 AI and cultural difference

9. Turing's Imitation Game

10. AI and the Uncanny

11. Deep learning and WATSON

12. Directions for Future Research


Option two: Adapted from Peter Norvig, Director of Research, Google, Inc., formerly USC:

Part I Artificial Intelligence

Part II Problem Solving

Part III Knowledge and Reasoning

Part IV Uncertain Knowledge and Reasoning

Part V Learning

Part VI Communicating, Perceiving, and Acting

Part VII WATSON and the future of AI

These adapted outlines are options for possible contemporary re-factoring of the article on AI from its current version. FelixRosch (talk) 17:41, 7 November 2014 (UTC)

These are good. The first is more appropriate for philosophy of AI than this article. Have you seen Talk:Artificial intelligence/Textbook survey? You could add these to that. ---- CharlesGillingham (talk) 04:24, 8 November 2014 (UTC)
Perhaps we could refresh the survey with some more recent textbooks? I've been bold and added Poole and Mackworth (2010). Hope that's ok.
Other candidates include
  • The Cambridge Handbook of Artificial Intelligence (2014).
  • Artificial Intelligence: A Modern Approach (3rd Edition)
pgr94 (talk) 15:00, 8 November 2014 (UTC)
Firstly, my apologies to anyone concerned. I arrived here in response to the RFC, not because of any expertise in AI, a field in which I have only an attenuated interest and hardly any practical skills. So my points were made as an outsider. Meanwhile I have had an interruption in my personal PC resources, from which I am only now recovering, which prevented my continued participation in the discussion that I had interrupted.
Now then, most of the discussion since I left has been, I think, more coherent and cooperative than much that went before I came in, in particular with more explicit recognition of the need to distinguish and define the concerns and threads before deciding how to construct a theme from them. Look at the foregoing TOC examples for example. All Good Stuff, but I suggest that the basic idea needs a bit of adjustment. For one thing, all the TOC examples (necessarily?) lead to book-like structures, and very possibly structures for good books at that. But a book is not as a rule an encyclopaedic article (and vice of course versa). Also, note that the alternatives would produce very different books; in fact skimming the alternatives, they even give me the impression of theses, not necessarily conflicting or incompatible theses, but certainly not parallel and coherent theses. In fact, I argue that it is not even clear that any of the layouts is necessarily better than the others.
As I see it, we might well begin with such TOCs, but not to construct the outline for an article; a better approach would be to collect as many independent "chapters" as we agree need writing, then contemplate those chapters as independent articles. In principle, with suitable interlinking, that could cover the whole topic, and the user could pick his/her way eclectically through whichever articles s/he chose in any way s/he chose. "Just a second," (I hear you cry) "you must be havering; thirty-five articles, plus any more that anyone thinks up on the fly? Get real!"
Maybe. But to begin with, if the subject matter comprises 35 or 350 material topics that can stand on their own merits, meaning that any reasonable reader might reasonably wish to consult any one of them without having to plod through extraneous material, then that is how many articles we (ideally) should have. As I have had occasion to point out, the subject is huge! This is not a consideration unique to AI of course; consider how many articles say, biology has been split into. Or medicine. Or ecology.
Secondly, having once decided on anything resembling such a list or structure, we should be in a far sounder position to say which "chapters" could naturally and usefully be combined into the same article; the fact that particular articles could be separated need not imply that they should be. For instance, jumping the gun by way of example, at a guess the first three (or four?) chapters above might constitute one article. But having done so, we should quite naturally banish problems such as where or whether "human-like" AI should be mentioned in any chapter (article) or not. And that is just one of many confusions that naturally emerge in any undisciplined or unstructured discussion of such a field.
Then we could look at the question of which teams should work on which articles, and we could create suitable stubs to pre-empt the use of the names. Finally, we could create a global guide to the topics, so that anyone who would indeed love to work their way through the whole field book-wise could do so naturally, a click at a time.
I am well aware that this is a bottom-up design approach in an age in which top-down is the holy tenet, but I urge you to consider that top-down design demands a deep command of the underlying primitives. Where they do not yet exist, they must be created -- a bottom-up element from some points of view.
The superficiality of my own acquaintance with the field precludes my guiding any such endeavour (just the construction of the overview guide might seem trivial, but it would demand considerable depth of mastery of the field and its didactics, and these I lack). I don't mind anyone rattling my cage at any point where I could help with discussion, if ever. Nor do I mind assisting with editing, but that is as far as I could be useful if welcome.
Just thoughts... JonRichfield (talk) 20:02, 8 November 2014 (UTC)
Yet again I agree with you. I am also in rather the same boat, being able to contribute little more to the articles than the odd snippet from a popular source. I have responded in an earlier thread to the idea of a broad introductory structure. Perhaps that could serve as the top level for a more comprehensive subject tree. The Outline of artificial intelligence is another place to start, especially if one enjoys jumping in at the deep end. Like many such Outlines, it's really just a massive bullet list, though it makes some poor pretence that the lengthier entries are introductory content. — Cheers, Steelpillow (Talk) 21:20, 8 November 2014 (UTC)
@Steelpillow and @JonRichfield; Further agreement with both. This is the expansion of another 2014 book on AI to provide ideas for helping to identify the sections in the refactoring. The link I am including for one of the articles in it is very worthwhile for practitioners. Here is the outline and link to the 2014 article by Stan Franklin:
  • The Cambridge Handbook of Artificial Intelligence (2014).

Part I: Foundations

1. History, motivations, and core themes, Stan Franklin [15] (linked; must reading for AI practitioners)

2. Philosophical foundations, Konstantine Arkoudas and Selmer Bringsjord

3. Philosophical challenges, William S. Robinson

Part II: Architectures

4. GOFAI, Margaret A. Boden

5. Connectionism and neural networks, Ron Sun

6. Dynamical systems and embedded cognition, Randall D. Beer

Part III: Dimensions

7. Learning, David Danks

8. Perception and computer vision, Markus Vincze, Sven Wachsmuth, and Gerhard Sagerer

9. Reasoning and decision making, Eyal Amir

10. Language and communication, Yorick Wilks

11. Actions and agents, Eduardo Alonso

12. Artificial emotions and machine consciousness, Matthias Scheutz

Part IV: Extensions

13. Robotics, Phil Husbands

14. Artificial life, Mark A. Bedau

15. The ethics of artificial intelligence, Nick Bostrom and Eliezer Yudkowsky

The above four-part outline looks useful. FelixRosch TALK 21:33, 8 November 2014 (UTC)

I certainly agree that the outline would be useful and might even be a basis for the overview article on AI. Purely as a superficial personal reaction, and decidedly without suggesting that the list would be adequate, I suspect that the AI overview article could owe a great deal to the foundations (Part I of the book), with special emphasis on the Franklin chapter. Whether in our context the other two chapters in Part I would best be included in the same overview, or in one or two separate WP articles on the philosophy, I cannot say without having read them, which, as it happens, I have not.
Equally superficially I incline to think that Part II could fit into one article, but I do not deny that it might spawn more related articles on specialised themes (which of course might happen with practically any article on any theme).
I suspect that Part III chapters 7--12 each might best be in its own article, and in fact an extra article on Dimensions in AI might be desirable to give an overview over the six linked articles.
Much the same would apply for chapters 13--15 and not having read them, but going by title alone, I reckon that chapters 2--3 might fit into the same group as the extensions, sharing an overview article.
At a rough guess, that might mean something like twelve to fifteen articles. However, there already are articles on many of the themes, though we cannot assume a priori that those are already adequate, or even acceptable in their present form. Examples include Ethics of artificial intelligence, Artificial life#See also (check the list!), Synthetic biology, Robotics, Artificial consciousness, and there is plenty where that came from, even without leaving the confines of WP.
Going more deeply into the theme, and still without having read the source material (i.e. very likely talking nonsense), I suspect from the chapter titles of the Sack and Norvig books that we could find ourselves with a good ten or so more topics if the overlap with the Cambridge book isn't broader than the titles suggest. But let's face it, an article on, say, Knowledge and reasoning in artificial intelligence could be a big, fat one, and just scouting for already existing, non-trivial, related articles in WP (e.g. Semantic reasoner) would be a non-trivial exercise.
In short, to do a really gratifying job would be a daunting challenge for a team, but the realisation leaves me with the question of whether anything less would be worth doing at all. If anything comes of this as a project I could not drive it, but would be willing to do a bit of water-carrying if it is perceived as useful. JonRichfield (talk) 11:31, 9 November 2014 (UTC)
@JonRichfield; Yes, that all sounds on target, and your emphasis on the reader's viewpoint is important (rather than looking at editors' viewpoints alone). The outline @Steelpillow presented in the previous section was the following, and maybe it offers some further ideas for reflection on the refactoring question:
Is this close to what you had in mind... FelixRosch (TALK) 17:39, 10 November 2014 (UTC)
For the record that was not my list. Modifying it per my comments might lead to something like:
Artificial Intelligence (continuing to tease apart its Applications and the thing itself)
1. Created intelligence, cognition, mind
2. History
3. Research
4. Philosophy/ethics
5. Fiction
— Cheers, Steelpillow (Talk) 19:48, 10 November 2014 (UTC)

(undent) @JonRichfield I support everything you're saying. I think you grossly underestimate the number of articles being summarized here -- dig through Category:Artificial intelligence and its subcategories for 10 or 20 minutes and you'll begin to see the breadth of the field. Also, I wanted to make sure you were aware that this project was carried out back in 2007 (see Talk:Artificial intelligence/Textbook survey). ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@CharlesGillingham: Thanks Charles; that there is so much breadth to conceive is far from being ANY surprise, but the degree to which it appeared in the categories was, somewhat! :) It leaves me somewhat nonplussed. On one hand I am tempted to suggest that a wider range of topics be put into "bottom-up" separate articles as satellites to a main article or group of articles, but I suspect that it would be more practical to begin by blinkering ourselves and starting from the outlines that appear below. Having established a sound nucleus on which we can build, either as a single article or as a structure of linked articles, we can extend indefinitely. From my experience with large articles, I suspect that if any article becomes too large for editors unfamiliar with the established text to scan it fairly conveniently, there will be a plague of updates out of place or simply confused and inaccurate. The maintenance problem can be forbidding. The outlines below look promising. However, we should remain alert for articles that either should be split into separate articles, or structured to contain clearly distinct sections. Failure to discriminate between concerns in suitably linked but conceptually continent topics is one of the most insidious enemies of cogency and lucidity in technical writing, formal or informal. JonRichfield (talk) 15:57, 14 November 2014 (UTC)

@Pgr94 and everyone. We do need to update Talk:Artificial intelligence/Textbook survey with the current editions of everything. I don't have time to do this right now, and I would be surprised if it's really all that different, but I would like to know. I expect it would mostly affect the tools section -- we need to add some new tools and drop some deprecated ones. I also think we may need to add a paragraph to "knowledge" and "reasoning" to emphasize statistical AI a little more -- these sections leave off where symbolic AI failed and don't really cover how far statistical AI has come since then. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@FelixRosch and Steelpillow. My original proposal for this article, back in 2007, was divided into three sections: Perspectives, Research, Applications. Perspectives came first, and included the sections "history", "philosophy", and "ethics, fiction and speculation". Later editors thought this was a bad idea. So I understand what you're trying to do: I also feel that most readers are more interested in these perspectives than they are in the technical stuff, and maybe this material should get more emphasis. I don't object to moving Problems, Approaches and Tools into a section called Research. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@Steelpillow The only part I object to is "created intelligence, cognition, mind". I have three problems with it.

  1. We have to cover this material elsewhere in the article. I'll hit each of the three: Created intelligence I'm not sure what we want to say about "created intelligence", unless you are talking about the ethics of creating artificial beings, and we have a comprehensive section on this already (it's also touched on in the first paragraph of AI#History). Cognition By "cognition" I assume you mean cognitive science. Right now, cognitive science is all over the article -- its origin in AI#Cognitive simulation, AI#Sub-symbolic mentions embodiment, the second half of AI#Reasoning and problem solving, the second half of AI#Knowledge representation. Cognitive science is important to AI, but then again so are statistics and computer science, and this article isn't about statistics or computer science or cognitive science. I don't think you need a separate section on this, especially because we have to discuss it where it's relevant, so what else is there to say? Mind Clearly this is philosophy of AI, and we cover all the greatest hits here.
  2. What are the sources for this section, unless they are sources we're already using in these other sections? If you stray from the main sources, you're going to find literally thousands of different points of view, none of which is widely accepted and most of which are highly speculative. There is no end to how much people say about this, and what little difference all this talk makes to actual AI programs.
  3. What other articles will this section summarize? Isn't it the same articles as speculation and philosophy? ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)
I borrowed "created intelligence, cognition, mind" unchanged from the earlier proposal. I assumed it to be a conceptual introduction to the field, something like explaining the difference between the weaker forms of AI and cognitive AI, leading on to the strongest forms of AI, which in turn can be used to introduce the philosophical questions about an artificial mind, consciousness and ethics. It would set all the subsequent sections in context. If it turns out to be short enough, it could just be the article lead. As I have said, I am weak on sources. I would assume that conceptual introductions to the field exist. Tertiary sources are better than secondary, which in turn are far better than primary research papers. It would not so much summarise as lay conceptual foundations for all the articles on AI. — Cheers, Steelpillow (Talk) 20:38, 11 November 2014 (UTC)

@FelixRosch What is "hard research"? Who are we talking about and how is it different than "academic study of weak AI"? Is "hard research" artificial general intelligence? If so, then why can't we call it by its name? Forgive me for being direct: please learn the correct terminology before you keep proposing things here. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

I think it's clear that he's still imagining that AI is mostly about people creating "Strong/Complete AIs", but that some people want "weak ai" to be included even though it's not really AI.
This whole understanding of the field is so wrong that it's almost the exact opposite of reality. It's not going to be possible to "compromise" with that. APL (talk) 18:02, 11 November 2014 (UTC)
@APL: Can you be more specific about what you are rejecting? For example here is my suggestion, further refined per ongoing discussion:
Introduction to artificial intelligence
  1. Conceptual foundations
  2. History
  3. Research
  4. Philosophy and ethics
  5. Fiction
Are you rejecting this whole structure or just Felix's suggested subdivision of the research? — Cheers, Steelpillow (Talk) 20:57, 11 November 2014 (UTC)
Sorry, I was just rejecting the subdivision of the research into "weakAI" and "hard research".
The rest tentatively looks good to me.
At least, as described just now by @Steelpillow: I don't think leading with "Created intelligence, cognition, mind" is smart, and the phrase "weakAI" should probably not even appear.
History and Research overlap a bit, even if the research section is a list of different techniques, but I don't think that's a problem.
Fiction will actually be a tough one to balance. It's an important part of common conceptions about AI, but it'll be tough to not devolve it into a laundry list. APL (talk) 23:06, 11 November 2014 (UTC)

Feel free to respond in-between up there. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@Steelpillow; Your comments and @JonRichfield's have been useful in emphasizing the reader's standpoint rather than just what editors might like to pursue. It would be very useful now if you could start to present a cross-section table showing how the old article sections would be separated into the new factoring which you are putting forward. The assumption is that much of the material in the current article would be more or less directly absorbed into the new refactored version, and it would be useful to see this in a table or list of some sort. (For example, old section "A" goes into new section X, old section "B" goes into new section Y, etc.) Cheers. FelixRosch (TALK) 16:34, 12 November 2014 (UTC)

Hi Felix. I am not sure if we have yet reached a view on whether we want one article or two. Should my reader-oriented outline apply to the present article refactored, or to a new Introduction to artificial intelligence to accompany it? — Cheers, Steelpillow (Talk) 18:08, 12 November 2014 (UTC)
@Steelpillow; My thought was along the lines of looking at the main article refactoring first; having that version would make it easier to evaluate and formulate an Introduction article right after it's done. The present article refactored into the new reader-oriented outline would be useful at this time. Cheers. FelixRosch (TALK) 20:13, 12 November 2014 (UTC)
Let me add a bit to your outline, and see if this is what you have in mind:
  1. Conceptual foundations: definition of AI, Turing test, intelligent agent paradigm (cf. Philosophy of AI#Intelligence)
  2. History: (social history)
  3. Research (and technical history)
    1. Goals
    2. Approaches (- without the intelligent agent paradigm, which moved into foundations)
    3. Tools
  4. Applications and industry (I think we do need this section, if anyone ever has the time to make it good; it's a multibillion-dollar industry, and the article should cover it comprehensively.)
  5. Philosophy (I think that philosophy of AI is so cleanly defined that it belongs in a separate section from the speculation.)
  6. Speculation and ethics (as is -- just tidied this up today)
  7. Fiction (I think we also need this section, but I argued above (and in 2008) that it works best if we mix it into speculation, only to emphasize the serious ideas that have appeared in fiction)
Does that work? ---- CharlesGillingham (talk) 01:15, 13 November 2014 (UTC)
My original outline was this:
  1. Conceptual foundations
  2. History
  3. Perspectives
    1. Philosophy
    2. Speculation and ethics
    3. Fiction
  4. Research
    1. Goals
    2. Approaches
    3. Tools
  5. Applications
I actually think this is even more reader-friendly, but other editors didn't like the "perspectives" header. ---- CharlesGillingham (talk) 01:15, 13 November 2014 (UTC)
I am impressed by how close we are getting to a common foundational structure for the article. My own observations at this stage:
  • The separation of research and applications now makes sense to me.
  • The "perspectives" grouping is interesting. I see where it is coming from but am unsure whether it is informative enough to be worth the extra nesting of sub-sections.
  • Putting my philosopher hat on, I very firmly see ethics as a branch of philosophy and not as a branch of speculation or prediction. "Roboethics" (shudder) is wholly contingent on the philosophical idea of a machine as a conscious mind, a being in its own right, while "machine ethics" is simply an extension of our own ideas of human ethics, how we should behave towards others and therefore by extension how we should make our tools behave towards others. Both are well-established branches of philosophy. One may of course speculate and predict on these topics, but that is a trivial observation: one may speculate and predict on any aspect of AI.
  • Ultimately, the technological content (research and applications) may overwhelm the article. In this event, I would suggest that my idea of moving the wider context to an introductory article might be revisited.
  • Between speculation, prediction and fiction, I would plump for "speculative developments" and "fiction" as two (overlapping but) fundamentally independent aspects.
Most of these are minor comments, the only one I feel strongly about is the place of ethics in the field.
— Cheers, Steelpillow (Talk) 12:08, 13 November 2014 (UTC)
@Steelpillow and @JonRichfield; General agreement across the board. The one topic not yet discussed is "History" and its placement. Neither the 2014 Cambridge version nor the Norvig version of the outline on the science of AI lists History with this level of prominence in its outline form. My thinking is that they may be on to something here, and Wikipedia already has a peer-reviewed version of the History page for AI. Is it possible to just link it directly at the top of this page and maybe move a shorter version of the History section to the end of this main AI page? The outline of the 2014 Cambridge AI handbook is included below in shortened form for ready reference.
  • The Cambridge Handbook of Artificial Intelligence (2014).
Part I: Foundations, Stan Franklin [16] (linked; must reading for AI practitioners)
Part II: Architectures
Part III: Dimensions
Part IV: Extensions
@Steelpillow; The cross-reference table from the current article outline to your new outline would still be useful. FelixRosch (TALK) 16:09, 13 November 2014 (UTC)

Two most recent versions of the refactored new AI outline[edit]

@Steelpillow and @JonRichfield and @APL; My thought continues to be to look at the main article refactoring first; having the present article refactored into the new reader-oriented outline would be useful at this time. One of you could start the useful cross-mapping of the current section numbers, such as where 2.4 goes in the newly refactored outline, where 2.5 goes, where 2.6 goes, etc. FelixRosch (TALK) 17:20, 14 November 2014 (UTC)

Having established a sound nucleus on which we can build, either as a single article, or as a structure of linked articles, we can extend indefinitely. From my experience with large articles, I suspect that if any article becomes too large for editors unfamiliar with the established text to scan it fairly conveniently, there will be a plague of updates out of place or simply confused and inaccurate. The maintenance problem can be forbidding. The outlines below look promising. However, we should remain alert for articles that either should be split into separate articles, or structured to contain clearly distinct sections. Failure to discriminate between concerns in suitably linked but conceptually contingent topics is one of the most insidious enemies of cogency and lucidity in technical writing, formal or informal. JonRichfield (talk) 15:57, 14 November 2014 (UTC) (Reposted by FelixRosch (TALK) 17:20, 14 November 2014 (UTC))

Refactored Outline I:

Introduction to artificial intelligence
  1. Conceptual foundations
  2. History
  3. Research
  4. Philosophy and ethics
  5. Fiction

Refactored Outline II:

Let me add a bit to your outline, and see if this is what you have in mind:
  1. Conceptual foundations: definition of AI, Turing test, intelligent agent paradigm (cf. Philosophy of AI#Intelligence)
  2. History: (social history)
  3. Research (and technical history)
    1. Goals
    2. Approaches (- without the intelligent agent paradigm, which moved into foundations)
    3. Tools
  4. Applications and industry (I think we do need this section, if anyone ever has the time to make it good; it's a multibillion-dollar industry, and the article should cover it comprehensively.)
  5. Philosophy (I think that philosophy of AI is so cleanly defined that it belongs in a separate section from the speculation.)
  6. Speculation and ethics (as is -- just tidied this up today)
  7. Fiction (I think we also need this section, but I argued above (and in 2008) that it works best if we mix it into speculation, only to emphasize the serious ideas that have appeared in fiction)

This is the closest approximation to what the two leading new outlines now seem to be, bringing together everyone's thoughts (if anyone is not attributed fully, just chime in with your name and where it belongs, since there are about a half dozen editors participating on this topic at this point). Can someone in the group at least take a first attempt at the cross-mapping of section numbers from the current version to the new outline? The History section, for example, is easy to identify and could be moved to the end of each outline, though some of the other sections need a little more thinking and elaboration to cross-map accurately. FelixRosch (TALK) 17:20, 14 November 2014 (UTC)

OK, here goes nothing[edit]

As mentioned, my latest short version looks a little different. Here it is:

  1. Conceptual foundations
  2. History
  3. Research
  4. Applications
  5. Philosophy and ethics
  6. Fiction

It is expanded below. I hope that this will give some clue as to why I have put ethics where I have. Many of the philosophy subsection headings are inspired by, if not culled directly from, Philosophy of artificial intelligence and Ethics of artificial intelligence. I hope that you can appreciate from this how the philosophy and ethics stitch so closely together.

My copying of the technical stuff is very slavish, due to my general ignorance of the details.

Artificial intelligence

1 Conceptual foundations

2 History

3 Research
3.1 Goals
3.1.1 Deduction, reasoning, problem solving
3.1.2 Knowledge representation
3.1.3 Planning
3.1.4 Learning
3.1.5 Natural language processing (communication)
3.1.6 Perception
3.1.7 Motion and manipulation
3.1.8 Long-term goals
3.1.8.1 Social intelligence
3.1.8.2 Creativity
3.1.8.3 General intelligence
3.2 Approaches
3.2.1 Cybernetics and brain simulation
3.2.2 Symbolic
3.2.3 Sub-symbolic
3.2.4 Statistical
3.2.5 Integrating the approaches
3.3 Tools
3.3.1 Search and optimization
3.3.2 Logic
3.3.3 Probabilistic methods for uncertain reasoning
3.3.4 Classifiers and statistical learning methods
3.3.5 Neural networks
3.3.6 Control theory
3.3.7 Languages
3.4 Evaluating progress

4 Applications
4.1 Competitions and prizes
4.2 Platforms
4.3 Toys

5 Philosophy and ethics
5.1 Intelligent behaviour and machine ethics
5.1.1 Criteria for intelligence
5.1.2 Machine ethics
5.1.3 Malevolent and friendly AI
5.1.4 Decrease in demand for human labor
5.2 Machine consciousness
5.2.1 Criteria for consciousness
5.2.2 Robot rights
5.2.3 The threat to human dignity (devaluation of humanity)
5.3 Superintelligence
5.3.1 The singularity
5.3.2 Transhumanism

6 Fiction

The following table maps the current sections onto the new, and is done primarily to show that nothing has been forgotten.

Current → Proposed
1 History → 2 History
2 Goals → 3.1 Goals
2.1 Deduction, reasoning, problem solving → 3.1.1 Deduction, reasoning, problem solving
2.2 Knowledge representation → 3.1.2 Knowledge representation
2.3 Planning → 3.1.3 Planning
2.4 Learning → 3.1.4 Learning
2.5 Natural language processing (communication) → 3.1.5 Natural language processing (communication)
2.6 Perception → 3.1.6 Perception
2.7 Motion and manipulation → 3.1.7 Motion and manipulation
2.8 Long-term goals → 3.1.8 Long-term goals
2.8.1 Social intelligence → 3.1.8.1 Social intelligence
2.8.2 Creativity → 3.1.8.2 Creativity
2.8.3 General intelligence → 3.1.8.3 General intelligence
3 Approaches → 3.2 Approaches
3.1 Cybernetics and brain simulation → 3.2.1 Cybernetics and brain simulation
3.2 Symbolic → 3.2.2 Symbolic
3.3 Sub-symbolic → 3.2.3 Sub-symbolic
3.4 Statistical → 3.2.4 Statistical
3.5 Integrating the approaches → 3.2.5 Integrating the approaches
4 Tools → 3.3 Tools
4.1 Search and optimization → 3.3.1 Search and optimization
4.2 Logic → 3.3.2 Logic
4.3 Probabilistic methods for uncertain reasoning → 3.3.3 Probabilistic methods for uncertain reasoning
4.4 Classifiers and statistical learning methods → 3.3.4 Classifiers and statistical learning methods
4.5 Neural networks → 3.3.5 Neural networks
4.6 Control theory → 3.3.6 Control theory
4.7 Languages → 3.3.7 Languages
5 Evaluating progress → 3.4 Evaluating progress
6 Applications → 4 Applications
6.1 Competitions and prizes → 4.1 Competitions and prizes
6.2 Platforms → 4.2 Platforms
6.3 Toys → 4.3 Toys
7 Philosophy → 5 Philosophy and ethics
8 Predictions and ethics → merged into 5 Philosophy and ethics
8.1 Decrease in demand for human labor → 5.1.4 Decrease in demand for human labor
8.2 Devaluation of humanity → 5.2.3 The threat to human dignity (devaluation of humanity)
8.3 Malevolent and friendly AI → 5.1.3 Malevolent and friendly AI
8.4 Robot rights → 5.2.2 Robot rights
8.5 The singularity → 5.3.1 The singularity
8.6 Transhumanism → 5.3.2 Transhumanism
9 In fiction → 6 Fiction

Comments? — Cheers, Steelpillow (Talk) 20:46, 14 November 2014 (UTC)


@Steelpillow and @JonRichfield and @APL; This all looks strong as a usable 2014 version of the new refactored outline. My small suggestion is to give some consideration to merging the first two sections in both outlines, so that the History section is absorbed into the Conceptual foundations section. Since Wikipedia already has a peer-reviewed article for History of AI, there does not seem to be a reason to distract readers from a very good article on the subject which already exists and can be readily linked. Here is a short draft of what the opening section could start to look like:
Conceptual foundation
The conceptual foundations defining artificial intelligence in the second decade of the 21st century are best summarized as a list of strongly endorsed pairings of contemporary 21st-century research areas, as follows: (a) Symbolic AI versus neural nets; (b) Reasoning versus perception; (c) Reasoning versus knowledge; (d) Representationalism versus non-representationalism; (e) Brains-in-vats versus embodied AI; and (f) Narrow AI versus human-level intelligence. [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, pp. 15–16. [17]]
Several key moments in the history of AI have contributed to defining the major 21st-century research areas in AI. These early historical research areas from the last century, although by now well-rehearsed, are revisited occasionally with some recurrent reference to: (i) McCulloch and Pitts's early research in schematizing digital circuitry; (ii) Alan Turing's pioneering efforts and thought experiments; (iii) the early Dartmouth workshop on AI; (iv) Samuel's early checkers player; (v) Minsky's early dissertation; and (vi) the misstep of early perceptrons and the early neural-net winter. From these followed four more historical research areas, currently being pursued in updated form, which include: (a) means-ends problem solvers (Newell 1959); (b) natural language processing (Winograd 1972); (c) knowledge engineering (Lindsay 1980); and (d) early automated medical diagnosis (Shortliffe 1977). [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, pp. 16–22. [18]]
Major recent accomplishments in AI defining future research paths in the 21st century have included the development of (a) extensive knowledge-based expert systems; (b) Deep Blue defeating Garry Kasparov; (c) the solution to the Robbins conjecture; (d) Watson's defeat of the Jeopardy human champions; and (e) killer apps (gaming applications as a major force of research and innovation). [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, pp. 22–24. [19]]
The current major leading 21st-century research areas appear to be (i) knowledge maps, (ii) heuristic search, (iii) planning, (iv) expert systems, (v) machine vision, (vi) machine learning, (vii) natural language, (viii) software agents, (ix) intelligent tutoring, and (x) robotics. The most recent 21st-century trends appear as the fields of: (a) soft computing, (b) AI for data mining, (c) agent-based AI, (d) cognitive computing, and (e) AI and cognitive science. [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, pp. 24–30. [20]]
This should provide at least something to draft into the missing Conceptual foundations section which you identified. Otherwise your outline looks strongly like it is ready to move forward, and it would be nice to see it begin transitioning into the existing material, since it otherwise seems to match up almost one-for-one. Possibly one of you could suggest the easiest way to start this and maybe even take the first steps in this direction of refactoring and transitioning. Cheers. FelixRosch (TALK) 20:38, 15 November 2014 (UTC)

Comment: I don't know what hit this discussion, but the contrast to what I first saw is startling. I am very impressed. I'll be very willing to assist where I can on request, but frankly, as things stand I don't feel essential to the effort. Is everything solved? Of course not, but at least something healthy seems to be emerging. Would I argue for any changes? Most likely, but none that at this point would be worth interrupting the progress for. My main caveat would be that, because the top is now so soundly (though still conditionally) structured, the rest of the progress should, as far as practical, be continued systematically top-down. By this I don't mean that sections have to be tackled in any special order, but that each section be tackled independently as an author becomes available, possibly in stages. Cross-reference should be thorough, but any cross-reference between sections should as far as practical avoid duplication of material. Difficulties in that respect should be taken as grounds for redistributing or splitting or recombining sections. Finally, as I see it, all the section topics should retain their presence in the main article, but the editors should remain alert for the advisability of splitting out the main substance of a section that becomes too large, complex, or intimately involved with external topics. Just as a single arbitrary example, toys should be mentioned and the topic summarised, but this probably is not the article for a detailed treatment, and that topic deserves at least one major article elsewhere. JonRichfield (talk) 14:11, 17 November 2014 (UTC)

Deleting Human-Like from Lede[edit]

Since there does appear to be consensus to delete "human-like" from the lede, with, as far as I can tell, one dissenting editor, I have gone ahead and deleted "human-like". Robert McClenon (talk) 18:09, 15 November 2014 (UTC)

The RfC is currently pending and open. An overlapping and separate RfC established a consensus of 4 editors supporting. FelixRosch (TALK) 20:03, 15 November 2014 (UTC)
Consensus for what? Robert McClenon (talk) 22:15, 18 November 2014 (UTC)
In the first RfC, I see a medium-strong consensus for removing "human-like" with a couple of editors who wanted some kind of compromise. (I see 2, but maybe it's 4 somehow.) In the second RfC, I see a strong consensus against the compromise. So that's that. ---- CharlesGillingham (talk) 17:30, 20 November 2014 (UTC)
One thing that no-one has seemed to notice: it is original research to add "human-like" to the article. Some researchers do try to emulate human intelligence, some do not, but most do not say either way. We're adding a term with no definition. It should go, except in contexts where the reliable sources use the term. — Arthur Rubin (talk) 10:42, 21 November 2014 (UTC)
It is certainly not OR. The phrase appears in reliable tertiary sources - I have cited at least one example from New Scientist magazine elsewhere on this page. Terms are often not precisely defined, or their definition varies with context. New Scientist uses the term in exactly this woolly, undefined way. This is perfectly normal use of language for the lead of an encyclopedia article too. WP:NOTPAPER, never mind WP:NOTTEXTBOOK. There's another WP thing somewhere about synonyms and paraphrasing being perfectly acceptable, but as I can't recall it instantly, I'll offer you my current favourite on contrived arguments - WP:LEGS. — Cheers, Steelpillow (Talk) 12:01, 21 November 2014 (UTC)
Steelpillow If you're using a reference to support your argument, please be so kind as to link to it. pgr94 (talk) 12:42, 21 November 2014 (UTC)
OK, I hope this is easier for you than doing Ctrl-F and typing in New Scientist. I wrote: "Take too for example the latest (1 Nov 2014) copy of New Scientist, page 21, which has the subheading, 'A hybrid computer will combine the best of number-crunching with human-like adaptability – so it can even invent its own programs.' To quote from later in the article, 'DeepMind Technologies ... is designing computers that combine the way ordinary computers work with the way the human brain works.'" — Cheers, Steelpillow (Talk) 14:45, 21 November 2014 (UTC)
I found that. I searched for "artificial intelligence" in the New Scientist's archive and had no hits for 1st November. Perhaps you could also supply the title? pgr94 (talk) 14:50, 21 November 2014 (UTC)
My apologies. In full: Jacob Aron, "Ditch the programmers", New Scientist No.2993, 1 November 2014, Page 21. — Cheers, Steelpillow (Talk) 15:06, 21 November 2014 (UTC)
Thanks. They're using a different title online. It's so much easier if you just paste a URL.
Here's the link for everyone: Computer with human-like learning will program itself pgr94 (talk) 15:18, 21 November 2014 (UTC)

Definition of intelligence[edit]

I think it is good to define intelligence separately from human intelligence. Intelligence for me is the ability to solve problems. For example, if a machine designed an airplane, in natural language we would attribute intelligence to it. Note that designing an airplane may or may not require learning, so I would see learning as separate from problem solving. Human intelligence perhaps has these abilities:

  • Problem solving
  • Learning
  • Reflection (and meta processing)
  • Consciousness
  • Perception
  • Motivation intelligence (theory of mind)
  • Emotion
  • Artistic creativity

Defining intelligence as human intelligence seems to me to be self-serving. For me, semantically, intelligence only means the ability to solve problems. Perhaps you could say,

  • To control an agent so that it adapts to and makes best use of an environment.

But this seems too limiting. Perhaps intelligence is not the control of a body or machine to interact with the world.

To me, only problem solving is at the core of intelligence. Learning is second. But when people lose their learning abilities, do we say they are not intelligent? You could argue that the ability to learn is part of intelligence, but to me it is a separate ability with a separate name. And it is possible for an agent to be highly intelligent without learning, given sufficient initial knowledge.

Problem solving may be described in two closely related ways (a toy sketch follows the list):

  • Calculating what actions achieve a particular result.
  • Calculating what values meet a particular set of constraints.
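
As a toy illustration of how close these two framings are, here is a minimal Python sketch; the constraints, actions and goal below are invented for the example, not taken from any AI system. Both functions simply search a space for something that passes a test.

from collections import deque
from itertools import product

def solve_constraints():
    # Framing 2: calculate what values meet a particular set of constraints.
    # Toy constraints (invented): x + y == 10 and x * y == 21.
    for x, y in product(range(11), repeat=2):
        if x + y == 10 and x * y == 21:
            return (x, y)
    return None

def solve_actions(start=1, goal=11):
    # Framing 1: calculate what actions achieve a particular result.
    # Toy world (invented): states are integers, the actions are "add 3"
    # and "double"; breadth-first search returns the shortest plan.
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for name, nxt in (("add 3", state + 3), ("double", state * 2)):
            if nxt <= 2 * goal and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

print(solve_constraints())  # (3, 7)
print(solve_actions())      # ['add 3', 'double', 'add 3']: 1 -> 4 -> 8 -> 11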

All the other properties are aspects of human minds, but should not be taken as the definition of intelligence.

The other abilities of the human mind such as reflection and learning may enhance problem solving but are not necessary for it.

Consciousness and qualia are subjective and personal qualities. How can you say from outside whether an intelligence is conscious? It makes no sense to say that an intelligence must be conscious, if we cannot measure consciousness.

Motivation intelligence (theory of mind) is a particular sub type of intelligence, related to solving problems involving intelligent agents. So it still fits the definition of problem solving.

Emotion seems to me to be something separate from intelligence.

If we look at the product of artistic creativity, then to me the Mandelbrot set is artistic, but the algorithm that creates it is not intelligent. Artistic ability may also be characterized as solving the problem of determining what people find good to perceive.
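
To make that point concrete, here is a minimal escape-time sketch of the Mandelbrot set in Python (the grid, resolution and character shading are chosen arbitrarily for illustration). It is a few lines of blind iteration, with no learning or goal-seeking anywhere in it, yet the output looks intricate, even artistic.

def escape_time(c, limit=30):
    # Iterate z -> z*z + c and report how quickly |z| exceeds 2.
    z = 0j
    for n in range(limit):
        z = z * z + c
        if abs(z) > 2:
            return n
    return limit

shades = " .:-=+*#%@"
for im in range(12, -13, -2):      # imaginary axis, top to bottom
    row = ""
    for re in range(-40, 21):      # real axis, -2.0 to 1.0
        n = escape_time(complex(re / 20, im / 10))
        row += shades[min(n, len(shades) - 1)]
    print(row)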

Thepigdog (talk) 08:26, 17 November 2014 (UTC)

Sorry this is a bit half-baked, but the fully baked version is too technical; a rough formalization is sketched after the two points below.
* The intelligence probability is the probability of an intelligence solving any problem given to it.
* A problem is a time-limited interaction with an environment, with a goal that is either achieved or not achieved.
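
For what it is worth, one possible way to write that down (the notation is my own guess at the "fully baked" version, not a standard definition):

\[
  I(a) \;=\; \Pr_{p \sim P}\bigl[\, a \text{ solves } p \,\bigr]
       \;=\; \sum_{p} P(p)\, S(a, p)
\]

where \(P\) is a distribution over problems \(p\) (each a time-limited interaction with an environment), and \(S(a, p) = 1\) if agent \(a\) achieves the goal of problem \(p\) within its time limit, and \(0\) otherwise. On this reading, "more intelligent" just means a higher expected success rate over the problem distribution.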
Thepigdog (talk) 11:01, 17 November 2014 (UTC)

@Thepigdog:

  • Most of what you say makes good sense, but I have a few reservations. For example: "...may or may not require learning. So I would see learning as separate from problem solving..." I do not argue that learning and problem solving are identical in concept or as entities, but it is not clear to me how solving a problem without acquiring and processing data could meaningfully be called intelligence, nor how such acquiring and processing of data is to be distinguished from learning. Even a mousetrap or a carpenter's plane, which are not very advanced examples of intelligent systems, use physical and logical feedback. You rightly mentioned reflection, metaprocessing et al. in connection with intelligence, but I argue that there is such a thing as intelligent action without them. Rather than waste our time on debating them by flat statement and flat contradiction, I recommend that the authors/editors be permitted to write as they please without anticipation of their errors, instead raising such matters only after they have failed in cogency. The topic is too big to make progress if it is to stop for discussion of every debatable point. Although I could imagine learning and problem solving justifying their own topics, I find it hard to imagine one or the other as characterising intelligence in any useful sense without the other.
Yes, on reflection the learning/problem solving distinction is a bit dubious. The Universal Artificial Intelligence people group them together, as one task for intelligence.
  • Self-serving or not, as I see it, human intelligence is intelligence, but intelligence is not necessarily human intelligence. I suggest we drop that topic from the main thread of this article as a disruptive red herring. Some section might mention it and link to a number of other articles, ranging from ethology and psychology to IQ, but except in aspects that can be shown to be germane to the topic of this article, the question of the vehicle of the intelligence, whether man, mouse, Martian or machine, should not be mentioned, any more than we would discuss whether it is better to make a mechanical Turk chess player of plastic, meat or metal. After all, every other aspect has to earn its place in the article by demonstrable relevance and cogency. SP regarded my "horse intelligence" as a bit stretched, and I can understand his reservations, but I could demonstrate startling empirical intelligence in a spider, never mind a horse, intelligence that would be very challenging to match in a robot. That is why I say that we should eschew the term "human intelligence" at least wherever the text could equally well read "spider intelligence", or where the unqualified term "intelligence" would do.
Yes, I agree. I would even include the evolutionary system as an intelligence, and a chess-playing algorithm too, even though it is a restricted form of intelligence. We should talk about what an intelligence does.
  • Consciousness is such a loaded term that the consciousness/perception topic should be mentioned only where it is immediately of inescapable relevance. We have enough trouble defining consciousness in humans, animals, plants, communities, and in machines that can input relevant data, let alone using it in the discussion of the essence of intelligence. I have never even seen any coherent attempt to define the location or relevance of "consciousness" in the "Chinese room".
Agreed, I have no idea what consciousness is, other than that I have it.
  • All of those are just examples. I do not argue that they or other examples of conceptual dilemmas should not be discussed, but that they should be avoided wherever possible, and where they are inescapable, they should not be mentioned in any greater detail than that demanded in context, but instead that serious discussion be banished to linked articles. Trying to deal with them satisfactorily in the main article on artificial intelligence not only would destroy this article, but would arrogate points relevant to and valid in articles on general concepts of intelligence, or on human intelligence. Artificial intelligence is not the only topic dealing with intelligence, any more than cardiology is the only topic dealing with pumps. We must remain alert for all variations on themes that tempt us to commit intellectual colonialism by including material that could be dealt with more fairly elsewhere. We are after all not at a loss for material; this article, more than most, should concentrate on exclusion. :) JonRichfield (talk) 14:11, 17 November 2014 (UTC)
Agreed. See comments. On reflection my original comments were a bit imprecise.
Thepigdog (talk) 00:15, 18 November 2014 (UTC)

Transposition of refactored outline, no text deleted[edit]

@Steelpillow and @JonRichfield and @Thepigdog; After reading the refactored outline, there was only one short section missing, which I put together over the weekend. The rest of the material was basically a direct refactoring from when the outline was done last week. This should position the current article for the next phase of its revision/expansion/redraft, if someone could take a first attempt at prioritizing the next-phase topics in some order. Perhaps one of you could suggest a list of the top priorities in something which looks like a preliminary ordering. Cheers. FelixRosch (TALK) 21:22, 18 November 2014 (UTC)

I have now finished my refactoring of the philosophy and ethics material. I hope it is at least better-structured than the previous ordering. — Cheers, Steelpillow (Talk) 20:32, 19 November 2014 (UTC)
Looks great, Steelpillow. I moved the "feasibility" material into its own section. I like the set of topics, especially the separation of the feasibility and consciousness questions. ---- CharlesGillingham (talk) 17:14, 20 November 2014 (UTC)

Citations[edit]

One thing I did notice is that about half the page content is references and additional links of one kind or another - the article itself barely makes it halfway through the page. This seems absurd to me and I would suggest a severe rationalisation down to a few key secondary or tertiary works, with other sources (especially primary research) cited only where essential. — Cheers, Steelpillow (Talk) 20:32, 19 November 2014 (UTC)

Please do not remove references from the article. The references record the sources used when writing it. This is a top-level article which needs references from many different sources addressing the fields mentioned. References to iconic original papers and lectures are an important part of the history of a subject. The references are grouped by field (the subtitles for the fields make the listing a bit longer than it otherwise would be but add value for the reader) and followed by an alphabetical list of citations, mainly for books and journals. These provide different views of the literature supporting the article contents, depending on what the reader needs.
There are eleven citations which are not linked to from the references. We can probably remove those, which would shorten that list a bit. --Mirokado (talk) 09:46, 20 November 2014 (UTC)
I would beg to differ. Your comment mixes two separate issues.
WP:CITE is about verifying content, it is not about historical embellishment. By all means mention famous and pivotal papers, but those mentions should be supported by citing secondary and tertiary sources, not the papers themselves. Remember, the paper is being mentioned because of its importance, and the importance needs to be verified. No paper can verify its own importance. If the reader wants to dig deeper than the linked articles go, then the place to turn to in the first instance is the sources which do cite the primary papers: such lists will be a lot more complete than anything we can give in the present article. — Cheers, Steelpillow (Talk) 10:19, 20 November 2014 (UTC)
If you really feel such a list can be justified in its own right (and I do not disagree there), can I suggest that since it is far the biggest part of this page it should be hived off to a List of primary sources in artificial intelligence? That way, it wouldn't clog up this article's citations. — Cheers, Steelpillow (Talk) 10:30, 20 November 2014 (UTC)
I don't think that the length of the references is a problem that we need to solve. It's certainly not a problem for the reader, since they never scroll down that far. It's not a problem for Wikipedia, as it is WP:NOTPAPER. The complexity of these references is a bit of a problem for us, the editors of this article. Even this isn't a huge problem: per WP:CITE, it's always okay to add sources in any way that is complete and unambiguous, and eventually other editors (who enjoy this sort of thing) will bring them in line with the article's format.
As I've said above, the most difficult problem in editing this article is finding the right weight (per WP:UNDUE), and the format of these references also provides proof that each topic belongs here. ---- CharlesGillingham (talk) 17:24, 20 November 2014 (UTC)
@Steelpillow seems to recognize these concerns by offering what appears to be a good solution: letting the editors start a page for "List of primary sources in artificial intelligence". The list in its current form is neither comprehensive nor exhaustive, though it takes up literally about half of the article's size, which is not needed in the article itself. Linking to the moved material can be retained in the article, and the non-comprehensive, non-exhaustive list can be moved to its own page. Cheers. FelixRosch (TALK) 17:49, 20 November 2014 (UTC)
Can you guys point me to an example of what you're talking about? And also, I'm very serious about the undue weight thing -- it matters; I've been editing the article for 7 years now and it comes up over and over. It's nice to be able show that every important point appears in most of the most reliable sources. ---- CharlesGillingham (talk) 22:45, 20 November 2014 (UTC)
@Steelpillow; We're still with you on this. The material mostly appearing after the References section, such as "Other sources" should have its own page since it does not directly relate to the article itself. FelixRosch (TALK) 18:14, 21 November 2014 (UTC)
@CharlesGillingham: There are many ways to cut the citation cake, so what I give below is a personal reaction. It expresses the meat of my complaint, though not necessarily the optimum solution.
First off, consider all those bullet lists that cite Russell & Norvig and a bunch of others. Citing just Russell & Norvig would be fine, maybe one other if the factoid is particularly contentious. Looking at the sheer quantity of them in some paragraphs and the repetition in the list of Notes, I suspect that many of these could be reduced to a single citation at the end of the paragraph.
If a source is used for many citations, use it for as many others as it is appropriate for. Even if a standard reference work gets cut from the citations altogether, that is no problem. Citing every standard reference work around is not the job of the main article content.
Where a note cites a work given in full in one of the lists, say Russell & Norvig 2003, all such works should be collected alphabetically in a single appropriately-headed list so that the poor reader can actually find them. For example a Bibliography would be a good list.
Works used to build the article but not actually cited should also be included in the bibliography.
Sub-lists such as History of AI or AI textbooks are not appropriate in all that because they break the alphabetical listing, and the work may be cited in other sections than History anyway.
Sub-lists are more appropriate for Further reading. These are books not plundered for the article but still of interest to the keen reader. If the above were done, the length of the Further reading section would then dictate its fate. If it were too long then it should form the basis of a standalone List of works on AI, and a copy of the bibliography should be merged into it.
At the moment, it is actually quite hard to take any given example and follow the trail as to its relevance. And that's my point. Russell & Norvig only sprang up because of the relentless repetition. As the citations get refactored, it will become easier to pick out more examples.
Does that help? — Cheers, Steelpillow (Talk) 20:08, 21 November 2014 (UTC)
These are topically bundled (WP:CITEBUNDLE) short citations (WP:CITESHORT), using list-defined references (WP:LDR). All of these are accepted techniques in Wikipedia, although I agree it's rare to see them used with such enthusiasm in one article.
Each footnote is on a particular topic, so it makes no sense to combine them based on what paragraphs they happen to appear in. They are used in multiple paragraphs, and some paragraphs contain multiple topics.
The "bibliography" of this article is called References (following standard practice in Wikipedia for WP:CITESHORT citations0. If you want to sort the textbooks and histories into the main list so that there is only one alphabetical list, that's fine. Note that you can click on a short citation and it takes you to the main citation.
I have cited three or four sources for each major topic because this shows that the topic has enough weight to be included in the article -- if it's covered by most of the most reliable sources, then this article should cover it. Of course, weight is not something that concerns the reader, but I honestly don't think that readers use footnotes all that often. I'm not sure where else we could document this, except at Talk:Artificial intelligence/Textbook survey. If it really bothers you, we could cut these down to just R&N for some of these. I wouldn't cut down any of the footnotes that contain other information (such as footnote 1, for example).
I'm not sure what you mean about it being "hard to take an example and follow the trail". You read a sentence, click on the footnote, and there's your relevance: you see that the topic is covered by most of the most reliable sources.
Again, I want to point out that the size of the reference list does not harm the reader in any way, and is not a problem that needs to be solved -- there are other more urgent issues: the Applications and Industry section is basically unwritten and the Fiction section is a travesty. ---- CharlesGillingham (talk) 02:20, 23 November 2014 (UTC)

I have started to tidy up the remaining errors reported by User:Ucucha/HarvErrors.js. These are for citations which specify a |ref= parameter with no corresponding {{harvnb}} or whatever reference.

  • For the textbooks and history subsections, which list general background sources some of which are specifically referenced, it seems better to retain the citations but remove the parameter  – it is easy to restore the parameter if a reference is added.
  • For the others, I will at least mostly remove the citations and list them here for any further consideration.

I will continue to tweak for source consistency as I go, with the parameter order for citations roughly: last etc, year/date, title, publisher etc, isbn etc, url, ref. Having last, first, date in that order helps when matching citation to callout, and having last first (!) helps when checking the citation sorting. --Mirokado (talk) 16:42, 6 December 2014 (UTC)

Removed:

--Mirokado (talk) 17:02, 6 December 2014 (UTC)

Should the Devaluation of Humanity and Computationalism be merged?[edit]

They are both talking about the same topic, but from different viewpoints. 204.8.193.130 (talk) 18:38, 21 November 2014 (UTC)

No. They are quite different topics within AI. The devaluation of humanity is an argument suggesting that weak (i.e. inhuman) AI systems might make bad decisions about us. Computationalism is a proposed systemic model of how the human brain works. There is no comparison. — Cheers, Steelpillow (Talk) 19:33, 21 November 2014 (UTC)

removed conceptual foundations section[edit]

Sorry for my boldness, but I think this section should be more readable. Since it seems from my cursory glances that some, if not most, of these bullet points are included in the history section, I'm removing the conceptual foundations section to the talk page. Xaxafrad (talk) 07:55, 23 November 2014 (UTC)

I've edited this list and removed the items which are mentioned in the history section. Xaxafrad (talk) 08:52, 23 November 2014 (UTC)

Conceptual foundations[edit]

The conceptual foundations defining artificial intelligence in the second decade of the 21st century are best summarized as a list of strongly endorsed pairings of contemporary 21st-century research topics, as follows:

  • Symbolic AI versus neural nets
  • Reasoning versus perception
  • Reasoning versus knowledge
  • Representationalism versus non-representationalism
  • Brains-in-vats versus embodied AI
  • Narrow AI versus human-level intelligence. [1]

Several key moments in the history of AI have contributed to defining the major 21st-century research areas in AI. These early historical research areas from the last century, although by now well-rehearsed, are revisited occasionally with some recurrent reference to:

  • McCulloch and Pitts's early research in schematizing digital circuitry
  • Samuel's early checker player

From these followed four more historical research areas, currently being pursued in updated form, which include

  • means-ends problem solvers (Newell 1959)
  • Natural language processing (Winograd 1972)
  • knowledge engineering (Lindsay 1980)

Major recent accomplishments in AI defining future research paths in the 21st century have included the development of

  • Solution to the Robbins conjecture
  • Killer apps (gaming applications as a major force of research and innovation) [2]

The current major leading 21st-century research areas appear to be

  • Knowledge maps
  • Heuristic search
  • Planning
  • Machine vision
  • Machine learning
  • Natural language
  • Software agents
  • Intelligent tutoring

The most recent 21st-century trends appear to be represented by the fields of

  • Soft computing
  • Agent based AI
  • Cognitive computing
  • AI and cognitive science. [3]
(end of removed section) So there it is... Sorry for stepping on anyone's toes. I'd be happy to help edit these 25 list items into the history section if it's needed. Xaxafrad (talk) 08:29, 23 November 2014 (UTC)

Incorporating Franklin[edit]

I think we can incorporate Franklin as one of our sources. I went through the bullet lists above and identified where in the article we cover the same topics. There are only a few question marks --- someone needs to read Franklin carefully and make sure that I am right so far. ---- CharlesGillingham (talk) 21:37, 30 November 2014 (UTC)

  • Symbolic AI versus neural nets
Under Approaches. A difference between Symbolic AI and (the earliest form of) Computational Intelligence and soft computing. Covered in detail in History of AI#The revival of connectionism.
  • Reasoning versus perception
Not sure; possibly under Approaches, relevant to the difference between Symbolic AI and Embodied AI. Or is Franklin talking about David Marr vs. Symbolic AI? Should we even mention Marr? This was not a particularly influential dispute, but Marr is covered in History of AI#The importance of having a body: Nouvelle AI and embodied reason.
  • Reasoning versus knowledge
Under Approaches, the difference between Knowledge based AI and the rest of Symbolic AI.
  • Representationalism versus non-representationalism
Under Approaches, the difference between Symbolic and Sub-symbolic AI.
  • Brains-in-vats versus embodied AI
Under Approaches, the difference between Embodied AI and Symbolic AI.
  • Narrow AI versus human-level intelligence.
Under Goals, relevant to the difference between general intelligence and all other goals.

Several key moments in the history of AI have contributed to defining the major 21st-century research areas in AI. These early historical research areas from the last century, although by now well-rehearsed, are revisited occasionally with some recurrent reference to:

  • McCulloch and Pitts's early research in schematizing digital circuitry
They are named in footnote 24: "AI's immediate precursors", and in the first sentence of Neural networks. Covered in more detail in History of AI#Cybernetics and early neural networks.
  • Samuel's early checker player
Added this to "Golden years" sentence and footnote in History. Covered in more detail in History of AI#Game AI.
  • means-ends problem solvers (Newell 1959)
The General Problem Solver is not covered in AI, but this is covered in detail in History of AI#Reasoning as Search
  • Natural language processing (Winograd 1972)
SHRDLU is mentioned in "Golden years" sentence and footnote in History
  • knowledge engineering (Lindsay 1980)
This is probably covered in Approaches under Knowledge-Based, or in the paragraph of History on expert systems (and History of AI#The rise of expert systems). I am not familiar with Lindsay 1980; could someone read Franklin and see what this is?

Major recent accomplishments in AI defining future research paths in the 21st-century have included the development of

  • Solution to the Robbins conjecture
Should be added to the last paragraph of History.
  • Killer App ( and Gaming applications as a major force of research and innovation).
Don't know what Franklin is getting at here

The current major leading 21st-century research areas appear to be

  • Knowledge maps
Covered under Knowledge representation. Or does he just mean "knowledge representation"? Also, why does he leave out "reasoning"?
  • Planning
Covered in Planning.
  • Machine vision
Covered under Perception.
  • Machine learning
Covered in Machine learning.
  • Natural language
Covered in Natural language processing.
  • Software agents
Possibly covered under Approaches in Intelligent agent paradigm, unless Franklin is saying something else ... to be checked.
  • Intelligent tutoring
We don't have this; I think it belongs under Applications.

The most recent 21st-century trends appear to be represented by the fields of

  • Soft computing
Added this under Approaches
  • Agent based AI
Probably covered under Approaches in Intelligent agent paradigm
  • Cognitive computing
Not sure what he means by this ... is this the hardware thing that IBM calls cognitive computing? The Wikipedia article on this term is horrific; there are only sources for IBM's hardware thing -- no sources at all for the more general term. Perhaps Franklin could be used to straighten out that article.
  • AI and cognitive science.
Similarly, not too sure about this either; cognitive science is relevant in many places in the article; not sure what trends he's talking about exactly.
---- CharlesGillingham (talk) 21:37, 30 November 2014 (UTC)

A little light relief[edit]

We're all doomed!

http://www.bbc.co.uk/news/technology-30290540

  1. ^ Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, pp. 15–16. [21]
  2. ^ Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, pp. 22–24. [22]
  3. ^ Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, pp. 24–30. [23]