Talk:Artificial intelligence

Article milestones
August 6, 2009: Peer review (result: Reviewed)


Ongoing issues

Length

I argue that this is a WP:Summary article of a large field, and that it is therefore okay that it runs a little long. Currently, the article text is at around ten pages, but the article is not 100% complete and needs more illustrations. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)[reply]

Todo: Illustration

The article needs a lead illustration and could use more illustrations throughout. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)[reply]

Thanks to User:pgr94, the article is 70% illustrated. Almost there. ---- CharlesGillingham (talk) 00:03, 16 June 2011 (UTC)[reply]
The main illustration doesn't provide an actual example of an artificial intelligence, just a robot capable of mimicking human actions in a certain area (namely, sport) — Preceding unsigned comment added by 86.163.226.52 (talk) 15:37, 4 August 2011 (UTC)[reply]

Todo: Applications

The "applications" section does not give a comprehensive overview of the subject. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)[reply]

Todo: Topics covered by major textbooks, but not this article

I can't decide if these are worth describing (in just a couple of sentences) or not. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)[reply]

  1. Could use a tiny section on symbolic learning methods, such as explanation-based learning, relevance-based learning, inductive logic programming, and case-based reasoning.
  2. Could use a tiny section on knowledge representation tools, like semantic nets, frames, scripts, etc.
  3. Control theory could use a little filling out with other tools used for robotics.
  4. Should mention Constraint satisfaction. (Under search). Discussion below, at Talk:Artificial intelligence/Archive 4#Constraint programming.
  5. Should mention the Frame problem in a footnote at least. ---- CharlesGillingham (talk) 19:52, 3 February 2011 (UTC)[reply]
  1. Where can we link Belief calculus? Does this include Dempster-Shafer theory (according to R&N)? I think that's more or less deprecated. Does R&N include expectation-maximization algorithm as a kind of belief calculus? I don't think so. Where is this in Wikipedia?
  2. There are still several topics with no source: Subjective logic, Game AI, etc. All are tagged in the article. ---- CharlesGillingham (talk) 19:59, 3 February 2011 (UTC)[reply]

Goals

I think a high-level listing of AI's goals (from which more specific Problems inherit) is needed; for instance "AI attempts to achieve one or more of: 1) mimicking living structure and/or internal processes, 2) replacing a living thing's external function, using a different internal implementation, 3) ..." At one point in the past, I had 3 or 4 such disjoint goals stated to me by someone expert in AI. I am not, however. DouglasHeld (talk) 00:11, 26 April 2011 (UTC)[reply]

We'd need a reliable source for this, such as a major AI textbook. ---- CharlesGillingham (talk) 16:22, 26 April 2011 (UTC)[reply]

"Human-like" intelligence

I object to the phrase "human-like intelligence" being substituted here and elsewhere for "intelligence". This is too narrow and is out of step with the way many leaders of AI describe their own work. This only describes the work of a small minority of AI researchers.

  • AI founder John McCarthy (computer scientist) argued forcefully and repeatedly that AI research should not attempt to create "human-like intelligence", but instead should focus on creating programs that solve the same problems that humans solve by thinking. The programs don't need to be human-like at all, just so long as they work. He felt AI should be guided by logic and formalism, rather than psychological experiments and neurology.
  • Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
  • Stuart Russell and Peter Norvig (authors of the leading AI textbook) dismiss the Turing Test as irrelevant, because they don't see the point in trying to create human-like intelligence. What we need is the intelligence it takes to solve problems, regardless of whether it's human-like or not. They write "airplanes are tested by how well they fly, not by how they can fool other pigeons into thinking they are pigeons."
  • They also object to John Searle's Chinese room argument, which claims that machine intelligence can never be truly "human-like", but at best can only be a simulation of "human-like" intelligence. They write "as long the program works, [we] don't care if you call it a simulation or not." I.e., they don't care if it's human-like.
  • Russell and Norvig define the field in terms of "rational agents" and write specifically that the field studies all kinds of rational or intelligent agents, not just humans.

AI research is primarily concerned with solving real-world problems, problems that require intelligence when they are solved by people. AI research, for the most part, does not seek to simulate "human like" intelligence, unless it helps to solve this fundamental goal. Although some AI researchers have studied human psychology or human neurology in their search for better algorithms, this is the exception rather than the rule.

I find it difficult to understand why we want to emphasize "human-like" intelligence. As opposed to what? "Animal-like" intelligence? "Machine-like" intelligence? "God-like" intelligence? I'm not really sure what this editor is getting at.

I will continue to revert the insertion "human-like" wherever I see it. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)[reply]

Completely agree. The above arguments are good. Human-like intelligence is a proper subset of intelligence. The editor seems to be confusing "Artificial human intelligence" and the much broader field of "artificial intelligence". pgr94 (talk) 10:12, 11 June 2014 (UTC)[reply]

One more thing: the phrase "human-like" is an awkward neologism. Even if the text was written correctly, it would still read poorly. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)[reply]

To both editors, WP:MOS requires that the Lead section only contain material which is covered in the main body of the article. At present, the five items which you outline above are not contained in the main body of the article but only on Talk. The current version of the Lead section accurately summarizes the main body of the article in its current state. FelixRosch (talk) 14:54, 23 July 2014 (UTC)[reply]
Neither the article nor any of the sources defines AI using the term "human-like" to specify the exact kind of intelligence that it studies. Thus the addition of the term "human-like" absolutely does not summarize the article. I think the argument from WP:SUMMARY is actually a very strong argument for striking the term "human-like".
I still don't understand the distinction between "human-like" intelligence and the other kind of intelligence (whatever it is), and how this applies to AI research. Your edit amounts to the claim that AI studies "human-like" intelligence and NOT some other kind of intelligence. It is utterly not clear what this other kind of intelligence is, and it certainly does not appear in the article or the sources, as far as I can tell. It would help if you explain what it is you are talking about, because it makes no sense to me and I have been working on, reading and studying AI for something like 34 years now. ---- CharlesGillingham (talk) 18:23, 1 August 2014 (UTC)[reply]
Also, see the intro to the section Approaches and read footnote 93. This describes specifically how some AI researchers are opposed to the idea of studying "human-like" intelligence. Thus the addition of "human-like" to the intro not only does not summarize the article, it actually claims the opposite of what the body of the article states, with highly reliable sources. ---- CharlesGillingham (talk) 18:34, 1 August 2014 (UTC)[reply]
That's not quite what you said in the beginning of this section. Also, your two comments on 1 August seem to be at odds with each other. Either you are saying that there is nothing other than human-like intelligence, or you wish to introduce material to support the opposite. If you wish to develop the material into the body of the article following your five points at the start of this section, then you are welcome to try to post them in the text prior to making changes in the Lead section. WP policy is that material in the Lede must be first developed in the main body of the article, which you have not done. FelixRosch (talk) 16:35, 4 September 2014 (UTC)[reply]
As I've already said, the point I am making is already in the article.
"Human-like" intelligence is not in the article. Quite the contrary.
The article states that this is a long-standing question that AI research has not yet answered: "Should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?"
And the accompanying footnote makes the point in more detail:
"Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[1]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006)."
This proves that the article does not state that AI studies "human like" intelligence. It states, very specifically, that AI doesn't know whether to study human-like intelligence or not. ---- CharlesGillingham (talk) 03:21, 11 September 2014 (UTC)[reply]

Human-like intelligence is the subject of each of the opening eight sections including "Natural language"

As the outline of this article plainly shows in its opening eight sections, each one of the eight sections of this page is explicitly about 'human-like' intelligence. This fact should be reflected in the Lede as well. The first eight sections are all devoted to human-like intelligence. In the last few weeks you have taken several differing positions. First you were saying that there is nothing other than human-like intelligence, then you wished to introduce multiple references to support the opposite, and now you appear to wish to defend an explicitly strong-AI version of your views against 'human-like' intelligence. You are expected on the basis of good faith to make your best arguments up front. The opening eight sections are all devoted to human-like intelligence, even to the explicit numbering of natural language communication in the list. There is no difficulty if you wish to write your own new page for "Strong-AI" and only Strong-AI. If you like, you can even ignore the normative AI perspective on your version of a page titled "Strong-AI". That, however, is not the position which is represented on the general AI page, which is predominantly, in its first eight sections, oriented to human-like intelligence. FelixRosch (talk) 16:18, 11 September 2014 (UTC)[reply]

(Just to be clear: (1) I did not say there is nothing other than human-like intelligence. I don't know where you're getting that. (2) I find it difficult to see how you could construe my arguments as being in favor of research into "strong AI" (as in artificial general intelligence) or as an argument that machines that behave intelligently must also have consciousness (as in the strong AI hypothesis). As I said in my first post, AI research is about solving problems that require intelligence when solved by people. And more to the point: the solutions to these problems are not, in general, "human-like". This is the position I have consistently defended. (3) I have never shared my own views in this discussion, only the views expressed by AI researchers and this article. ---- CharlesGillingham (talk) 05:19, 12 September 2014 (UTC))[reply]
Hello Felix. My reading of the sections is not the same. Could you please quote the specific sentences you are referring to. I have reverted your edit as it is rather a narrow view of AI that exists mostly in the popular press, not the literature. pgr94 (talk) 18:28, 11 September 2014 (UTC)[reply]
Hello Pgr94; This is the list of the eight items which start off the article: 2.1 Deduction, reasoning, problem solving 2.2 Knowledge representation 2.3 Planning 2.4 Learning 2.5 Natural language processing (communication) 2.6 Perception 2.7 Motion and manipulation 2.8 Long-term goals. Each of these items is oriented to human-like intelligence. I have also emphasized 2.5, Natural language processing, as specifically unique to humans alone. Please clarify if this is the same outline that should appear on your screen. Of the three approaches to artificial intelligence, weak-AI, Strong-AI, and normative AI, you should specify which one you are endorsing prior to reverting. My point is that the Lede should be consistent with the body of the article, and that it should not change until the new material is developed in the main body of the article. Human-like intelligence is what all the opening 8 sections are about. Make the Lede consistent with the contents of the article following WP:MoS. FelixRosch (talk) 20:11, 11 September 2014 (UTC)[reply]
It seems you just listed the sections rather than answer my query. Never mind.
The article is not based on human-like intelligence as you seem to be suggesting. If you look at animal cognition you will see that reasoning, planning, learning and language are not unique to humans. Consider also swarm intelligence and evolutionary algorithms that are not based on human behaviour. To say that the body of the article revolves around human-like intelligence is therefore inaccurate.
If you still disagree with both Charles and myself, may I suggest working towards consensus here before adding your change as I don't believe your change to the lede reflects the body of the article. pgr94 (talk) 23:51, 11 September 2014 (UTC)[reply]
All of the intelligent behaviors you listed above can be demonstrated by very "inhuman" programs. For example, a program can "deduce" the solution of a Sudoku puzzle by iterating through all of the possible combinations of numbers and testing each one. A database can "represent knowledge" as billions of nearly identical individual records. And so on. As for natural language processing, this includes tasks such as text mining, where a computer searches millions of web pages looking for a set of words and related grammatical structures. No human could do this task; a human would approach the problem a completely different way. Even Siri's linguistic abilities are based mostly on statistical correlations (using things like support vector machines or kernel methods) and not on neurology. Siri depends more on the mathematical theory of optimization than it does on our understanding of the way the brain processes language. ---- CharlesGillingham (talk) 05:19, 12 September 2014 (UTC)[reply]
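
A minimal sketch (an illustration added here, not taken from any cited source) of the kind of blind, "inhuman" search described above: a short program that solves a Sudoku grid purely by trying candidate digits and backtracking on contradictions, with no human-style reasoning involved.

 # Sketch only (added illustration). Assumes a 9x9 grid of ints, 0 = empty cell.
 def valid(grid, r, c, digit):
     # A digit is allowed if it does not already appear in the row, column,
     # or 3x3 box containing cell (r, c).
     if digit in grid[r] or digit in (grid[i][c] for i in range(9)):
         return False
     br, bc = 3 * (r // 3), 3 * (c // 3)
     return all(grid[br + i][bc + j] != digit for i in range(3) for j in range(3))

 def solve(grid):
     for r in range(9):
         for c in range(9):
             if grid[r][c] == 0:
                 for digit in range(1, 10):
                     if valid(grid, r, c, digit):
                         grid[r][c] = digit
                         if solve(grid):
                             return True
                         grid[r][c] = 0  # undo and try the next digit
                 return False  # no digit fits here: backtrack
     return True  # no empty cells left: solved
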
@Pgr94; Your comment appears to state that because there are exceptions to the normative reading of AI, you can justify changes to the Lede to reflect these exceptions. WP:MoS is the exact opposite of this, where the Lede is required to give only a summary of material already used to describe the field covered in the main body of the article. No difficulty if you want to cover the exceptions in the main body of the article, and you can go ahead and do so as long as you cite your additions according to wikipedia policy for being verifiable. The language used in section 2.1 is "that humans use when they solve puzzles...", and this is consistent for the other sections I have already enumerated for human-like intelligence. This article in its current form is overwhelmingly oriented to human-like intelligence applied normatively to establish the goals of AI. Arguing the exception can be covered in the main body but does not belong in the Lede according to wikipedia policy. @CharlesGillingham; You appear now to be devoted to the Strong-AI position to support your answers. This is only one version of AI, and it is not the principal one covered in the main body of this article, which covers the goal of producing human-like intelligence and its principal objectives. Strong-AI, Weak-AI, and normative AI are three versions, and one should not be used to bias attention away from what the main content of this article is about, which is the normative AI approach as discussed in each of the opening 8 sections. The language used in section 2.1 is "that humans use when they solve puzzles...", and this is consistent for the other sections I have already enumerated. No difficulty if you want to bring in the material to support your preference for Strong-AI in the main body of the article. Until you do so, the Strong-AI orientation should not affect what is represented in the Lede section. Wikipedia policy is that only material in the main body of the article may be used in the Lede. FelixRosch (talk) 16:10, 12 September 2014 (UTC)[reply]
I have no idea what you mean by "Strong AI" in the paragraph above. I am defending the positions of John McCarthy, Rodney Brooks, Peter Norvig and Stuart Russell, along with most modern AI researchers. These researchers advocate logic, nouvelle AI and the intelligent agent paradigm (respectively). All of these are about as far from strong AI as you can get, in either of the two normal ways the term is used. So I have to ask you: what do you mean when you say "strong AI"? It seems very strange indeed to apply it to my arguments.
I also have no idea what you mean by "normative AI" -- could you point to a source that defines "strong AI", "weak AI" and "normative AI" in the way you are using them? My definitions are based on the leading AI textbooks, and they seem to be completely different than yours.
Finally, you still have not addressed any of the points that Pgr94 and I have brought up -- if, as you claim, AI research is trying to simulate "human like" intelligence, why do most major researchers reject "human like" intelligence as a model or a goal, and why are so many of the techniques and applications based on principles that have nothing to do with human biology or psychology? ---- CharlesGillingham (talk) 04:02, 14 September 2014 (UTC)[reply]
You still have not responded to my quote in bold face above that the references in all 8 (eight) opening sections of this article all refer to human comparisons. You should read them, since you appear to be obviating the wording which they are using and as I have quoted it above. You now have two separate edits in two forms. These are two separate edits and you should not be automatically reverting them without discussion first. The first one is my preference, and I can continue this Talk discussion until you start reading the actual contents of all eight opening sections which detail human-like intelligence. The other edit is restored since there is no reason not to include the mention of the difference of general AI from strong AI and weak AI. Your comment on strong AI seems contradicted by your own editing of the very page (disambiguation page) for it. The related pages John Searle, etc., are all oriented to discussion of human comparisons of intelligence, as clearly stated on these links. Strong artificial intelligence, or Strong AI, may refer to: Artificial general intelligence, a hypothetical machine that exhibits behavior at least as skillful and flexible as humans do, and the research program of building such an artificial general intelligence, and, Computational theory of mind, the philosophical position that human minds are (or can be usefully modeled as) computer programs. This position was named "strong AI" by John Searle in his Chinese room argument. Each of these links supports human-like intelligence comparisons as basic to understanding each of these terms. FelixRosch (talk) 15:21, 15 September 2014 (UTC)[reply]

All I'm saying is this: major AI researchers would (and do) object to defining AI as specifically and exclusively studying "human-like" intelligence. They would prefer to define the field as studying intelligence in general, whether human or not. I have provided ample citations and quotations to prove that this is the case. If you can't see that I have proved this point, then we are talking past each other. Repeatedly trying to add "human" or "human-like" or "human-ish" intelligence to the definition is simply incorrect.

I am happy to get WP:Arbitration on this matter, if you like, as long as it is understood that I only check Wikipedia once a week or so.

Re: many of the sections which define the problem refer to humans. This does not contradict what I am saying and does not suggest that Wikipedia should try to redefine the field in terms of human intelligence. Humans are the best example of intelligent behavior, so it is natural that we should use humans as an example when we are describing the problems that AI is solving. There are technical definitions of these problems that do not refer to humans: we can define reasoning in terms of logic, problem solving in terms of abstract rational agents, machine learning in terms of self-improving programs and so on. Once we have defined the task precisely and written a program that performs it to any degree, we're no longer talking about human intelligence -- we're talking about intelligence in general and machine intelligence in particular (which can be very "inhuman", as I demonstrated in an earlier post).

Re: strong AI. Yes, strong AI (in either sense) is defined in terms of human intelligence or consciousness. However, I am arguing that major AI researchers would prefer not to use "human" intelligence as the definition of the field, a position which points in the opposite direction from strong AI; the people I am arguing on behalf of are generally uninterested in strong AI (as Russell and Norvig write "most AI researchers don't care about the strong AI hypothesis"). So it was weird that you wrote I was "devoted to the Strong-AI position". Naturally, I wondered what on earth you were talking about.

The term "weak AI" is not generally used except in contrast to "strong AI", but if we must use it, I think you could characterize my argument as defending "weak AI"'s claim to be part of AI. In fact, "strong AI research" (known as artificial general intelligence) is a very small field indeed, and "weak AI" (if we must call it that) constitutes the vast majority of research, with thousands of successful applications and tens of thousands of researchers. ---- CharlesGillingham (talk) 00:35, 20 September 2014 (UTC)[reply]

Undid revision 626280716. WP:MoS requires the Lede to be consistent with the main body of the article. The previous version of the Lede is inconsistent between the 1st and 4th paragraphs on human-like intelligence. The current version is consistent. Each one of the opening sections is also based one-for-one on direct emulation of human-like intelligence. You may start by explaining why you have not addressed the fact that each of the opening 8 (eight) sections is a direct comparison to human-like intelligence. Also, please stop your personal attacks by posting odd variations on my reference to the emulation of human-like intelligence. Your deliberate contortion of this simple phrase to press your own minority view of weak-AI is against wikipedia policy. Page count statistics also appear to favor the mainstream version of human-like intelligence which was posted, and not your minority weak-AI preference. Please stop edit warring, and please stop violating MoS policy and guidelines for the Lede. The first paragraph, like the fourth paragraph already in the Lede, must be consistent with and a summary of the material in the main body of the article, and not your admitted preference for the minority weak-AI viewpoint. FelixRosch (talk) 14:41, 20 September 2014 (UTC)[reply]
In response to your points above (1) I have "addressed the fact that each of the opening 8 (eight) sections is a direct comparison to human-like intelligence". It is in the paragraph above which begins with "Re: many of the sections which define the problem refer to humans.". (2) It's not a personal attack if I object every time you rephrase your contribution. I argue that the idea is incorrect and unsourced; the particular choice of words does not remove my objection. (3) As I have said before, I am not defending my own position, but the position of leading AI researchers and the vast majority of people in the field.
Restating my position: The precise, correct, widely accepted technical definition of AI is "the study and design of intelligent agents", as described in all the leading AI textbooks. Sources are in the first footnote. Leading AI researchers and the four most popular AI textbooks object to the idea that AI studies human intelligence (or "emulates" or "simulates" "human-like" intelligence).
Finally, with all due respect, you are edit warring. I would like to get WP:Arbitration. ----
I support getting arbitration. User:FelixRosch has not added constructively to this article and is pushing for a narrow interpretation of the term "artificial intelligence" which the literature does not support. Strong claims need to be backed up by good sources which Rosch has yet to do. Instead s/he appears to be cherrypicking from the article and edit warring over the lede. The article is not beyond improvement, but this is not the way to go about it. pgr94 (talk) 16:52, 20 September 2014 (UTC)[reply]
Pgr94 has not been part of this discussion for over a week, and the same suggestion is being made here, that you or CharlesG are welcome to try to bring in any cited material you wish to in order to support the highly generalized version of the Lede sentence which you appear to want to support. Until you bring in that material, WP:MoS is clear that the Lede is only supposed to summarize material which exists in the main body of the article. User:CharlesG keeps referring abstractly to multiple references he is familiar with and continues not to bring them into the main body of the article first. WP:MoS requires that you develop your material in the main body of the article before you summarize it in the Lede section. Without that material you cannot support an overly generalized version of the Lede sentence. The article in its current form, in all eight (8) of its opening sections is oriented to human-like intelligence (Sections 2.1, 2.2, ..., 2.8). Also, the fourth paragraph in the Lede section now firmly states that the body of the article is based on human intelligence as the basis for the outline of the article and its contents. According to WP:MoS for the Lede, your new material must be brought into the main body of the article prior to making generalizations about it which you wish to place in the Lede section. FelixRosch (talk) 19:45, 20 September 2014 (UTC)[reply]
As I have said before, the material you are requesting is already in the article. I will quote the article again:
From the lede: Major AI researchers and textbooks define this field as "the study and design of intelligent agents"
First footnote: Definition of AI as the study of intelligent agents:
  • Poole, Mackworth & Goebel 1998, p. 1, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.
  • Russell & Norvig (2003) (who prefer the term "rational agent") write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55).
  • Nilsson 1998
Comment: Note that an intelligent agent or rational agent is (quite deliberately) not just a human being. It's more general: it can be a machine as simple as a thermostat or as complex as a firm or nation.
From the section
Approaches:
A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?
From the corresponding
footnote
Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes John McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[2]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).
Comment All of these sources (and others; Rodney Brooks' Elephants Don't Play Chess paper should also be cited) are part of a debate within the field that lasted from the 1960s to the 90s, and was mostly settled by the "intelligent agent" paradigm. The exceptions would be the relatively small (but extremely interesting) field of artificial general intelligence research. This field defines itself in terms of human intelligence. The field of AI, as a whole, does not.
This article has gone to great pains to stay in synch with leading AI textbooks, and the leading AI textbook addresses this issue (see chpt. 2 of Russell & Norvig's textbook), and comes down firmly against defining the field in terms of human intelligence. Thus "human" does not belong in the lead.
I have asked for dispute resolution. ---- CharlesGillingham (talk) 19:07, 21 September 2014 (UTC)[reply]

Arbitration ?

Why is anyone suggesting that arbitration might be in order? Arbitration is the last step in dispute resolution, and is used when user conduct issues make it impossible to resolve a content dispute. There appear to be content issues here, such as whether the term "human-like" should be used, but I don't see any evidence of conduct issues. That is, it appears that the editors here are being civil and are not engaged in disruptive editing. I do see that a thread has been opened at the dispute resolution noticeboard, an appropriate step in resolving content issues. If you haven't tried everything else, you don't want arbitration. Robert McClenon (talk) 03:08, 21 September 2014 (UTC)[reply]

You're right, dispute resolution is the next step. I have opened a thread. (Never been in a dispute that we couldn't resolve ourselves before ... the process is unfamiliar to me.) ---- CharlesGillingham (talk) 19:08, 21 September 2014 (UTC)[reply]
I am now adding an RFC, below. ---- CharlesGillingham (talk) 04:58, 23 September 2014 (UTC)[reply]

Alternate versions of lede

In looking over the recent discussion, it appears that the basic question is what should be in the article lede paragraph. Can each of the editors with different ideas provide a draft for the lede? If the issue is indeed over what should be in the lede, then perhaps a content Request for Comments might be an alternative to formal dispute resolution. Robert McClenon (talk) 03:24, 21 September 2014 (UTC)[reply]

Certainly. I would like the lede to read more or less as it has since 2008 or so:

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also an academic field of study. Major AI researchers and textbooks define this field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]

---- CharlesGillingham (talk) 19:12, 21 September 2014 (UTC)[reply]
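
A minimal sketch (an illustration added here, not part of the quoted lede or any cited source) of what the "perceives its environment and takes actions" definition of an intelligent agent looks like in code, using a thermostat as the simplest example; the class name, parameters, and thresholds are invented for illustration.

 # Sketch only (added illustration): the perceive -> act loop behind the
 # intelligent-agent definition, with a thermostat as the simplest agent.
 class ThermostatAgent:
     def __init__(self, target_temp=20.0):
         self.target = target_temp

     def act(self, percept):
         # percept: the current room temperature, the agent's entire view of
         # its environment; the returned action nudges it toward the target.
         if percept < self.target - 0.5:
             return "heat on"
         if percept > self.target + 0.5:
             return "heat off"
         return "no change"

 agent = ThermostatAgent()
 for temperature in (18.0, 19.8, 21.2):
     print(temperature, "->", agent.act(temperature))
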

We can nitpick this stuff to death, and I'm already resigned that the lede isn't going to be exactly what I think it should be. BTW, some of my comments yesterday were based on my recollection of an older version of the lede; there was so much back and forth editing. I can live with the lede as it currently is, but I don't like the word "emulating". To me "emulating" still implies we are trying to do it the way humans do. E.g., when I emulate DOS on a Windows machine or emulate Lisp on an IBM mainframe. When you emulate, you essentially define some meta-layer and then just run the same software, and you trick it into thinking it's running on platform Y rather than X. I would prefer words like designing or something like that. But it's a minor point. I'm not going to start editing myself because I think there are already enough people going back and forth on this, so just my opinion. --MadScientistX11 (talk) 15:20, 30 September 2014 (UTC)[reply]

Follow-Up

Based on a comment posted by User:FelixRosch at my talk page, it appears that the main issue is whether the first sentence of the lede should include "human-like". If that is the issue of disagreement, then the Request for Comments process is appropriate. The RFC process runs for 30 days unless there is clear consensus in less time. Formal dispute resolution can take a while also. Is the main issue the word "human-like"? Robert McClenon (talk) 15:12, 22 September 2014 (UTC)[reply]

Yes that is the issue. ---- CharlesGillingham (talk) 16:59, 22 September 2014 (UTC)[reply]
I have a substantive opinion, and a relatively strong substantive opinion, but I don't want to say what it is at this time until we can agree procedurally on how to settle the question. I would prefer the 30-day semi-automated process of an RFC rather than the formality of mediation-like formal dispute resolution, largely because it gets a better consensus via publishing the RFC in the list of RFCs and in random notification of the RFC by the bot. Unless anyone has a reason to go with mediation-like dispute resolution, I would prefer to get the RFC moving. Robert McClenon (talk) 21:41, 22 September 2014 (UTC)[reply]
I am starting the rfc below. As I said in the dispute resolution, I've never had a problem like this before. ---- CharlesGillingham (talk) 05:54, 23 September 2014 (UTC)[reply]

RfC: Should this article define AI as studying/simulating "intelligence" or "human-like intelligence"?

Deleting RFC header as per discussion. New RFC will be posted if required. Robert McClenon (talk) 15:03, 1 October 2014 (UTC)[reply]

Argument in favor of "intelligence"

The article should define AI as studying "intelligence" in general rather than specifically "human-like intelligence" because

  1. AI founder John McCarthy (computer scientist) writes "AI is not, by definition, a simulation of human intelligence", and has argued forcefully and repeatedly that AI should not simulate human intelligence, but should focus on solving problems that people use intelligence to solve.
  2. The leading AI textbook, Russell and Norvig's Artificial Intelligence: A Modern Approach, defines AI as "the study and design of rational agents", a term (like the more common term intelligent agent) which is carefully defined to include simple rational agents like thermostats and complex rational agents like firms or nations, as well as insects, human beings, and other living things. All of these are "rational agents", all of them provide insight into the mechanism of intelligent behavior, and humans are just one example among many. They also write that the "whole-agent view is now widely accepted in the field."
  3. Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
  4. The majority of successful AI applications do not use "human-like" reasoning, and instead rely on statistical techniques (such as Bayesian nets or support vector machines), models based on the behavior of animals (such as particle swarm optimization; see the sketch after this list), models based on natural selection, and so on. Even neural networks are an abstract mathematical model that does not typically simulate any part of a human brain. The last successful approach that modeled human reasoning was the expert systems of the 1980s, which are primarily of historical interest. Applications based on human biology or psychology do exist and may one day regain center stage (consider Jeff Hawkins' Numenta, for one), but as of 2014, they are on the back burner.
  5. From the 1960s to the 1980s there was some debate over the value of human-like intelligence as a model, which was mostly settled by the all-inclusive "intelligent agent" paradigm. (See History of AI#The importance of having a body: Nouvelle AI and embodied reason and History of AI#Intelligence agents.) The exceptions would be the relatively small (but extremely interesting) field of artificial general intelligence research. This sub-field defines itself in terms of human intelligence, as do some individual researchers and journalists. The field of AI, as a whole, does not.
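
A minimal sketch (an illustration added here, not from the article or its references) of one of the animal-inspired techniques named in point 4: a bare-bones particle swarm optimizer minimizing f(x) = x*x, which imitates flocking behaviour rather than human reasoning. The coefficients (0.5, 1.5, 1.5) are typical textbook values, not taken from any cited source.

 # Sketch only (added illustration): minimal particle swarm optimization in 1D.
 import random

 def pso(f, n_particles=20, n_iters=100, lo=-10.0, hi=10.0):
     pos = [random.uniform(lo, hi) for _ in range(n_particles)]
     vel = [0.0] * n_particles
     best = list(pos)              # each particle's best position so far
     g_best = min(best, key=f)     # best position found by the whole swarm
     for _ in range(n_iters):
         for i in range(n_particles):
             r1, r2 = random.random(), random.random()
             # velocity blends inertia, pull toward the particle's own best,
             # and pull toward the swarm's best
             vel[i] = 0.5 * vel[i] + 1.5 * r1 * (best[i] - pos[i]) + 1.5 * r2 * (g_best - pos[i])
             pos[i] += vel[i]
             if f(pos[i]) < f(best[i]):
                 best[i] = pos[i]
                 if f(best[i]) < f(g_best):
                     g_best = best[i]
     return g_best

 print(pso(lambda x: x * x))  # prints a value close to 0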

All of these points are made in the article, with ample references:

From the lede: Major AI researchers and textbooks define this field as "the study and design of intelligent agents"
First footnote: Definition of AI as the study of intelligent agents:
  • Poole, Mackworth & Goebel 1998, p. 1, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.
  • Russell & Norvig (2003) (who prefer the term "rational agent") write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55).
  • Nilsson 1998
Comment: Note that an intelligent agent or rational agent is (quite deliberately) not just a human being. It's more general: it can be a machine as simple as a thermostat or as complex as a firm or nation.
From the section
Approaches:
A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?
From the corresponding
footnote
Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes John McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[3]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).

FelixRosch has succeeded in showing that human-like intelligence is interesting to AI research, but not that it defines AI research. Defining artificial intelligence as studying/simulating "human-like intelligence" is simply incorrect; it is not how the majority of AI researchers, leaders and major textbooks define the field. ---- CharlesGillingham (talk) 05:54, 23 September 2014 (UTC)[reply]

Comments

I fully support the position presented by User:CharlesGillingham.

User:FelixRosch says The article in its current form, in all eight (8) of its opening sections is oriented to human-like intelligence (Sections 2.1, 2.2, ..., 2.8). I fail to see how sections 2.3 planning, 2.4 learning, 2.6 perception, 2.7 motion and manipulation relate only to humans. Could you please quote the exact wording in each of these sections that give you this impression? pgr94 (talk) 22:18, 23 September 2014 (UTC)[reply]

User:FelixRosch says "User:CharlesG keeps referring abstractly to multiple references on the article Talk page (and in this RfC) which he is familiar with, and continues not to bring them into the main body of the article first". The references are already in the article. The material in the table above is cut-and-pasted from the article. ---- CharlesGillingham (talk) 06:12, 24 September 2014 (UTC)[reply]

I support the position in favor of "intelligence", for the reasons stated by User:CharlesGillingham. Pintoch (talk) 07:27, 24 September 2014 (UTC)[reply]

The origins of the artificial intelligence discipline did largely have to do with "human-like" intelligence. However, much of modern AI, including most of its successes, has had to do with various sorts of non-human intelligence. To restrict this article only to efforts (mostly unsuccessful) at human-like intelligence would be to impoverish the article. Robert McClenon (talk) 03:16, 25 September 2014 (UTC)[reply]

  • If I'm understanding the question correctly, the answer is obvious. AI is absolutely not just about studying "human-like" intelligence but intelligence in general, which includes human-like intelligence. I mean there are whole sub-disciplines of AI, the formal methods people in particular, who study mathematical formalisms that are about how to represent logic and information in general, not just human intelligence. To pick one specific example: First Order Logic. People are all over the map on how much FOL relates to human intelligence. Some would say very much, others would say not at all, but I don't think anyone who has worked in AI would deny that FOL and languages based on it are absolutely a part of AI. Or another example is Deep Blue. It performs at the grandmaster level, but some people would argue the way it computes is very different from the way a human does, and -- at least in my experience -- the people who code programs like Deep Blue don't really care that much either way; they want to solve hard problems as effectively as possible. The analogy I used to hear all the time was that AI is to human cognition as aeronautics is to bird flight. An aeronautics engineer may study how birds fly in order to better design a plane, but she will never be constrained by how birds do it, because airplanes are fundamentally different. The same goes for computers: human intelligence will definitely influence how we design smart computers, but it won't define the field, and AI researchers are not bound to stay within the limits of how humans solve problems. --MadScientistX11 (talk) 15:14, 25 September 2014 (UTC)[reply]
  • Comment The sources cited by CharlesGillingham do not all contradict the article. Apparently the current wording is confusing, but:
  • "Human-like" need not be read as "simulating humans". It can also be read as "human-level", which is typically the (or a) goal of AI. Speaking from my field of expertise, all the Bayes nets and neural nets and graphical models in the world are still trying to match hand-labeled examples, i.e. obtain human-level performance, even though the claim that they do anything like human brains is very, very strenuous. (Though I can point you to recent papers in major venues where this is still used as a selling point. Here's one of them.)
  • More importantly, the article speaks of emulating, not simulating, intelligence. Citing Wiktionary, emulation is "The endeavor or desire to equal or excel someone else in qualities or actions" (I don't have my Oxford Learner's Dictionary near, but I assure you the definition will be similar). In other words, emulation can exceed the qualities of the thing being emulated, so there's no need to stop at a human level of performance a priori; and emulation does not need to use "human means", or restrict itself to "cognitively plausible" ways of achieving its goal.
  • The phrase "though other variations of AI such as strong-AI and weak-AI are also studied" seems to have been added by someone who didn't understand the simulation-emulation distinction. I'll remove this right away, as it also just drops two technical terms on the reader without introducing them or explaining the (perceived) distinction with the emulation of human-like intelligence.
I conclude that the RFC is based on false premises (but I have no objection against a better formulation that is more in line with reliable sources). QVVERTYVS (hm?) 09:01, 1 October 2014 (UTC)[reply]
The simulation/emulation distinction that you are making does not appear in our most reliable source, Russell and Norvig's Artificial Intelligence: A Modern Approach (the most popular textbook on the topic). They categorize definitions of AI along these orthogonal lines: acting/"thinking" (i.e. behavior vs. algorithm), human/rational (human emulation vs. directed at defined goals), and they argue that AI is most usefully defined in terms of "acting rationally". The same section describes the long-term debate over the definition of AI. (See pgs. 1-5 of the second edition). The argument against defining AI as "human-like" (see my post at the top of the first RfC) is that R&N, as well as AI leaders John McCarthy (computer scientist) and Rodney Brooks all argue that AI should NOT be defined as "human-like". While this does not represent a unanimous consensus of all sources, of course, nevertheless we certainly can't simply bulldoze over the majority opinion and substitute our own. Cutting the word "human-like" gives us a definition that everyone would find acceptable. ---- CharlesGillingham (talk) 00:51, 9 October 2014 (UTC)[reply]

Argument in favor of "human-like intelligence"

Please Delete or Strike RFC

I am requesting that the author of the RFC delete or strike the RFC, because it is non-neutral in its wording. Robert McClenon (talk) 22:30, 23 September 2014 (UTC) [reply]

Just as the issue with the article is only with its first sentence, my issue with the RFC is with its first sentence. Robert McClenon (talk) 22:31, 23 September 2014 (UTC)[reply]
Let's face it, I don't know how to do this .... is the format above acceptable? Can you help me fix it? ---- CharlesGillingham (talk) 05:32, 24 September 2014 (UTC)[reply]
Much better. Robert McClenon (talk) 03:12, 25 September 2014 (UTC)[reply]

AI definition: What is "Strong" vs. "Weak" AI and where is it referenced?

The current definition of AI contrasts "strong" vs. "weak" AI. I'm not familiar with that distinction. Who makes it, and where is it referenced? Also, as a meta-point, I've noticed there seems to be a lot of deference to the Russell and Norvig book on AI. That is only one book, and neither author has the standing of people who have also written general AI books, such as Patrick Winston, Feigenbaum's AI handbook, and others. Here is Winston's definition: "Artificial Intelligence is the study of ideas that enable computers to be intelligent" from Artificial Intelligence by Patrick Winston, p. 1. I think such a simple definition is what we should use to start the article --MadScientistX11 (talk) 14:59, 29 September 2014 (UTC)[reply]

I just saw that Russell and Norvig have a definition of strong vs. weak AI (section 1.5, p. 29). Their definition is that strong AI thinks machines can be conscious, while weak AI thinks they can't. That is a very different definition from what is currently in the intro text. First of all, I think the whole distinction is unimportant anyway. It matters to people for whom AI is a purely academic discipline, but the people who actually do AI, who build expert systems, ontologies, etc., and use them in the real world, don't care one way or the other. I think that part of the intro definitely needs to be changed. I don't think the strong vs. weak distinction is important enough to be mentioned so early on, but if it is, it should at least be consistent with the definition of R&N. --MadScientistX11 (talk) 15:58, 29 September 2014 (UTC)[reply]
I agree and have removed this sentence once again. (It may be re-added by FelixRosch shortly, assuming things continue to happen as they have been happening these last few weeks.)
I agree that (1) undefined terms such as strong AI or weak AI should not be in the second sentence because they have not yet been defined for the reader. Defining them correctly would take too much space in the lede; thus these terms can't be in the lede. (2) The distinction between different kinds of AI is not important at this point. The highest priority is the widely accepted academic definition of AI (sentence 3) and the definition intended by the man who coined the term (sentence 4). These are much higher priorities. (3) The term "strong AI" appears at only two points in the article (once as a synonym for artificial general intelligence, and once as the philosophical position identified by John Searle as the strong AI thesis). Thus, given all the material we have to cover in this article, this is a relatively unimportant topic, and it does not belong in the lede because it summarizes such a small fraction of the material in the article. ---- CharlesGillingham (talk) 02:05, 30 September 2014 (UTC)[reply]
Not surprisingly, the sentence with strong AI/weak AI has been re-added by Felix Rosch. Feel free to remove it if you agree with me that the sentence doesn't work. ---- CharlesGillingham (talk) 17:00, 30 September 2014 (UTC)[reply]

Deep Learning

I've noticed a couple of reverts on this topic. I agree with the people who don't want Deep Learning as a separate major sub-heading. If you look at the major AI text books none of them have a chapter heading "Deep Learning" to my recollection. AI is such a broad topic we need to be sure to not try and have this article cover every single thing that has ever been described as AI but stick to the major topics. Deep learning merits its own article and a link from this article to it but not a whole section in this article. Rather than just keep reverting I think we should try to reach some consensus first and the advocates of Deep Learning should cite some major AI text books that have it as a major topic or say why they think that is not an appropriate criteria for what things should be covered in this article. --MadScientistX11 (talk) 15:50, 29 September 2014 (UTC)[reply]

"Deep learning" is not mentioned in the leading AI textbook, Russell and Norvig's Artificial Intelligence: A Modern Approach. This is why I removed this material, as a five-paragraph section is WP:UNDUE weight for a relatively minor topic. One sentence in the section on neural networks would be appropriate, if anything.
I noticed FelixRosch has reverted my removal ... ---- CharlesGillingham (talk) 02:07, 30 September 2014 (UTC)[reply]
@Felix -- if you would like to make an argument against me, now is the time. I will be removing the deep learning section eventually, unless you can provide a convincing argument that it is four times as important as statistical AI, twelve times as important as neural networks, fuzzy computation and evolutionary computation, or equally important to the history of AI as symbolic AI. This is the weight this article gives to these sub-fields.
I also remind you that the only thing that counts here is reliable sources, and "deep learning" does not appear in the 1000+ pages of the most popular AI textbook. ---- CharlesGillingham (talk) 03:54, 8 October 2014 (UTC)[reply]

Some definitions of AI

Since we still seem to need a consensus on how to define AI I thought it would be worthwhile to just post a few from some of the classic text books:

  • "Artificial Intelligence is the study of ideas that enable computers to be intelligent" from Artificial Intelligence by Patrick Wilson p.1.
  • "The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right" Russel and Norvig AI A Modern Approach p. 3
  • "Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is systems we associate with intelligence in human behavior: understanding language, learning, reasoning..." AI Handbook Barr and Feigenbaum https://archive.org/stream/handbookofartific01barr#page/n19/mode/2up

I like the Barr and Feigenbaum definition the best. Note two things, though: first, EVERYONE describes it as "the study of", not as the intelligence itself, which is in contrast with the definition here; and second, NONE of them say anything about being constrained by the way humans solve problems. Again, I like the Feigenbaum one best because it makes a valid point which is similar to what is there now but importantly different: making computers do things that are thought of as human intelligence IS AI, but without being constrained by the WAY humans do those things. --MadScientistX11 (talk) 16:29, 29 September 2014 (UTC)[reply]

These are definitions of the academic field of AI research, i.e. "the study of". I am fine with restricting the definition to describe only the academic field, if everyone thinks that's best. Some years ago, we had something like this: "Artificial intelligence is a branch of computer science which studies intelligent machines and software," i.e., the definition was strictly about the academic field.
I think that there are actually two other uses of the term outside of academic AI, but we can choose to ignore this if we want, because the article is definitely about academic AI, and not about science fiction or other popular sources. The other two uses are: (2) the intelligence of machines or software, and (3) an intelligent machine or program (this usage is common in gaming and science fiction). The article for the last several years has started with (2) and ignored (3).
Feel free to try to fix this. ---- CharlesGillingham (talk) 02:26, 30 September 2014 (UTC)[reply]
When you say "the article is about academic AI" that's partly true, but AI is one of those concepts, like distributed computing, that has both a strong academic and a strong industry flavor. My background is in both, btw: I've worked in the AI group of a major consulting firm as well as doing research for DARPA and USAF. And where I'm coming from with some of my comments is more from the industry side. It's my industry experience that makes me say that the whole "is it just about human intelligence" question is just a no-brainer. People who aren't academics NEVER think like that in my experience; they want to build smart systems that solve hard problems and they will use any technique that works best. --MadScientistX11 (talk) 15:11, 30 September 2014 (UTC)[reply]
Sure -- it's about mainstream academic and industrial AI, as opposed to pop-science, science fiction and any of those thousands of "pet theories" and "alternative forms" of AI.
As I said before, feel free to rewrite the first couple of sentences any way that makes sense to you; it seems like you know what you're talking about. I'd like to keep the intelligent agent/rational agent definition and McCarthy's quote. The simple definition for the lay reader can go any way you think is best. ---- CharlesGillingham (talk) 16:58, 30 September 2014 (UTC)[reply]
The definition we quote in the intro is from Poole, Mackworth & Goebel 1998, p. 1. I like it because it's from a popular textbook, it's concise, to the point, does not equivocate, does not raise any unnecessary complications and finds a way to define AI that does not require also defining human intelligence, sidestepping all possible philosophical and technical objections. ---- CharlesGillingham (talk) 02:38, 30 September 2014 (UTC)[reply]

What Needs Discussing?

There seems to have been too much reverting in the past few days. Let's identify the issues. There is disagreement as to whether to include a paragraph on "deep learning". There is disagreement on whether to mention "strong AI" and "weak AI". I think that strong and weak should be mentioned in the article, but not in the lede, but that is only my opinion. What other disagreements are there, besides the "human-like" question that is being decided by RFC? Robert McClenon (talk) 02:00, 30 September 2014 (UTC)[reply]

Here is a summary of the current editorial issues.
1) An ongoing dispute about the lede, which has lasted several weeks. FelixRosch's latest contribution to the lede is this phrase: "which generally studies the goal of emulating human-like intelligence, though other variations of AI such as strong-AI and weak-AI are also studied." This phrase has been added and removed several times. I have three objections to this phrase:
"Human-like" intelligence
(Covered by the RfC above) There is an ongoing dispute that the term "human-like intelligence" should not be used to define AI.
Strong AI, weak AI
(Covered by the discussion started by MadScientist above) MadScientist and I both have objections to introducing these terms in the second sentence of the article.
The writing
And finally, in my opinion it is an awkward sentence, which reads poorly.
2) "Deep learning": (Covered by the discussion started by MadScientist above). The section added by FelixRosch about "Deep learning" is WP:UNDUE weight, in my opinion and MadScientist's. This section is copied and pasted from the article deep learning, and (I would argue) that is where it belongs. ---- CharlesGillingham (talk) 03:27, 30 September 2014 (UTC)[reply]
Deep Learning does seem like it now has undue weight. ... But, without that section it seems like AI techniques are almost entirely symbolic and strictly logical, which is also wrong. Is there a way to summarize Deep learning, traditional neural networks, and other more black-boxy techniques? APL (talk) 13:51, 30 September 2014 (UTC)[reply]
I would argue we have a consensus that Deep Learning has undue weight. As for the other issues: I also agree that things like connectionist frameworks (Minsky, Papert, Arbib, Churchland are the authors off the top of my head that I know; I don't know that part of the field well) need more emphasis. HOWEVER, I would strongly urge we table that. Let's sort out Deep Learning and the lede first and then move on to other issues. --MadScientistX11 (talk) 15:26, 30 September 2014 (UTC)[reply]
@APL: I don't agree with your reading of the "Approaches" section. Cybernetics (1930s-1950s) and symbolic/logical/knowledge-based "GOFAI" (1960s-1980s) are presented as failed approaches that have been mostly superseded by newer approaches. Deep learning is one example of what the article calls statistical AI and sub-symbolic AI, as are all modern neural network methods.
As I said, I think that deep learning belongs in the section under Tools called Neural networks. It seems to me that deep learning (as described in Wikipedia) is one new neural network technique among the many that have been developed in the last decade. The neural network section mentions Jeff Hawkins' Hierarchical Temporal Memory approach to neural networks; it could also mention Hinton's deep learning if everyone thinks that's important. However, I have to say, I think it's possible to come up with at least a dozen more examples of interesting new approaches to neural networks from the last decade, and we don't have room to mention them all. ---- CharlesGillingham (talk) 03:56, 1 October 2014 (UTC)[reply]
@APL & @MadScientist -- do you have any objection to moving your posts in to the section #Deep learning above? ---- CharlesGillingham (talk) 00:54, 9 October 2014 (UTC)[reply]

RFC

In further looking at the RFC, it is still non-neutral and has everyone confused. I would like to strike the RFC, and wait about 24 hours, and then create a new RFC with nothing but a question as to the lede sentence, and any other questions that are well-defined. Arguments in favor of a position can then be included as discussion. Unless anyone strongly objects, I will strike the RFC. (If anyone does object, we have to have an RFC on whether to strike the RFC. -:) ). Robert McClenon (talk) 14:55, 30 September 2014 (UTC)[reply]

My 2 cents is don't even bother making it an RFC. You end up getting a bunch of people who have little or no actual editing experience pontificating and going off on tangents. Just stick to a regular discussion in the talk section and try to keep it as focused as possible on specific editing questions. I think an RFC is overkill and that it slows down a real consensus and moving forward with actual editing which should be the goal. --MadScientistX11 (talk) 15:03, 30 September 2014 (UTC)[reply]
@Robert: I realize this is a lot to ask, but do you think you could start the RFC and help us figure out how to end this? As I've said before, I don't really understand why this dispute is continuing and why the normal standards of evidence are being ignored. I just want it to stop. How do we muster the necessary support to end this all-fronts total edit war? ---- CharlesGillingham (talk) 16:53, 30 September 2014 (UTC)[reply]
I guess I should spell this out a little more directly -- I'm trying to assume good faith here. What, exactly, does it take in order to allow us to remove the term "human-like" from the lede? We have a huge body of evidence that this is the right thing to do, absolutely no coherent evidence that it is the wrong thing to do, and a consensus of several editors here (including yourself) who agree that the term does not belong in the lede. However, every time I remove it from the lede, it gets restored by FelixRosch, thus I find myself in an edit war. I don't know what to do at this point.
I'm not sure exactly what's wrong with the RFC -- the question is clear, general and simple, and the corresponding editorial choices are obvious. Is the problem that there is only one side presented? It seems to me that should be reason enough to end the issue as settled -- if FelixRosch doesn't care to make an argument, then let's be done with it. ---- CharlesGillingham (talk) 03:35, 1 October 2014 (UTC)[reply]
One last thought: editors should be aware that FelixRosch has added the term "human-like" back into the article many times, in many different forms, with many different edits. The RFC has to settle the issue of "human-like" in general, so that he doesn't just change the sentence again. (And I apologize if this seems to be bad faith; it's not -- I'm just betting the percentages here: "the best predictor of future behavior is past behavior"). ---- CharlesGillingham (talk) 04:02, 1 October 2014 (UTC)[reply]
I've struck "human-like" from the lede again. We need an RFC if User:FelixRosch actually is ready to argue that "human-like" should be somewhere in the lede. If he is willing to agree that it doesn't need to be in the lede, then we can leave it out. If he really wants it in, then we need some sort of resolution process to keep it out. I have argued in favor of RFC rather than DRN. Is he willing to leave human-like out of the lede, or does he really think it belongs, in which case we need an RFC? Robert McClenon (talk) 15:01, 1 October 2014 (UTC)[reply]
I am willing to formulate the RFC. The RFC itself will be brief and neutral. Arguments for or against "human-like" can be in the !votes or the discussion. In response to the comment that we may not need an RFC, I have asked User:FelixRosch on his talk page whether he is willing to agree that consensus is against the inclusion of "human-like" in the first paragraph. If he agrees, we don't have an issue. If he wants it in the first paragraph, then we should use either RFC or DRN, and I prefer RFC, because it receives wider attention. Robert McClenon (talk) 16:37, 1 October 2014 (UTC)[reply]
The third and fourth paragraphs of the introduction to the article do include references to what is actually human-like intelligence. In particular, the third paragraph refers to artificial general intelligence, and the fourth paragraph refers to myth and fiction. My own opinion is that those references are satisfactory, and that the only real issue has to do with the first paragraph. If anyone objects to the third and fourth paragraphs, then we may need another part to the RFC. Robert McClenon (talk) 16:37, 1 October 2014 (UTC)[reply]
Your comment above seems to have missed the additions of the editor Qwertyus, which are worthy of some consideration. I am supporting Qwertyus even though the suggestion abridges my edit substantially, and am reverting to that version as offering a point of agreement between editors which was previously not available. In restoring the Qwertyus version, I shall also stipulate that if (If) it is acceptable to all involved editors, then I shall not pursue further changes to the first sentence of the Lede which has been debated. Second, if (If) the neutral Qwertyus edit is acceptable, then I will stipulate that I shall accept the abridgment to my second sentence in the first paragraph of the Lede as well, with the dropping of the phrase dealing with weak AI and strong AI there. The rest of the material would need to remain in its Qwertyus form, and all editors can return to regular editing activities. My previous offer to both @CharlesG and @RobertM, as explicit supporters of weak-AI, also still stands as an open invitation to them to further develop the sections and content in the main body of the article dealing with weak-AI. Your own supporter @MadScientist has even asked you, "Where is it? Where is it?" My edit here is to support Qwertyus as offering a useful edit. FelixRosch (talk) 14:29, 2 October 2014 (UTC)[reply]
It is not acceptable, of course, as I have argued above.
We do not need your permission "to return regular editing activities".
The term "weak AI" is never used in the way you are using it, so please don't call me a "supporter" of it. Do you mean "AI research into non-humanlike intelligence as well as human-like intelligence"? That would seem to follow from the position you hold. If so, then I must point out, for the third or fourth time, that most of the article is about what you call "weak AI". None of the topics is exclusively about human-like intelligence. Please read my earlier posts. ---- CharlesGillingham (talk) 09:01, 7 October 2014 (UTC)[reply]
Your comment appears to have missed the useful additions of editor @Qwerty. Your co-editor, @RobertM, has also declined all comment on this edit in preference to his posting a poorly formed RfC replacement for the previous defective RfC. Unless he joins this discussion or replaces/withdraws the currently poorly formed RfC, it shall be difficult to respond. Your own version was posted as a full page ad for "Weak-AI" on the previous RfC. This discussion must be made on the basis of the current version of the article. FelixRosch (talk) 14:36, 7 October 2014 (UTC)[reply]
I am aware of Qwerty's contribution and I agree that is useful (especially in that he removed the misuse of the terms "strong AI" and "weak AI"). However, it does not change the fact that major AI textbooks and major AI researchers deliberately avoid defining artificial intelligence in terms of human intelligence, and that removing the word "human-like" does no harm to the article. I have proven this with solid evidence from all the major sources. Qwerty's actions are irrelevant in that he did not disprove these facts, and neither have you. ---- CharlesGillingham (talk) 18:10, 7 October 2014 (UTC)[reply]
That is still not a justification for an overly generalized version of the Lede section which is being supported by your co-editor User:RobertM and yourself on the poorly formed RfC below. Nor is your personal attack on @Qwerty, calling those edits "irrelevant", justified. Please note that your co-editor RobertM is not joining you here to support you on this. FelixRosch (talk) 18:30, 7 October 2014 (UTC)[reply]
You are not reading my post very carefully. DO NOT accuse me of a personal attack -- I complimented Qwerty on his edit. His edit was fine, but the original, ongoing issue involves the term "human-like", and Qwerty's edit did not change this. There is no consensus for a version that says AI "generally studies the goal of emulating human-like intelligence." This is the issue at hand. I did not say that Qwerty's edit was irrelevant. It is your comments that are not helpful and that are avoiding the subject.
The most reliable mainstream source (Russell & Norvig) rejects the idea of emulating human-like intelligence as goal for AI. It doesn't matter what I think, or what you think, or what Robert thinks. This is not a vote, this is not an issue that we get to decide ourselves. It has already been decided by the mainstream AI sources. You have no basis for your argument, other than your own insistence.
And, as I have said before: this is not a position that I personally agree with. This is a position that the article must take, because it is the only one available from the most reliable source. We don't get to make up things here on Wikipedia and then just insist on them. ---- CharlesGillingham (talk) 03:28, 8 October 2014 (UTC)[reply]
Your personal attacks upon @Qwertyus must stop and calling him "irrelevant", to use your word, is not Wikipedia policy. You must also stop misrepresenting the case to admin @Redrose64 that your edit is unanimous since your poorly formulated RfC is against both User:Qwertyus and myself who support "emulation" as a fair summary of the article in its current state. @Redrose64 is an experienced editor who can explain your difficulties to you if you represent the matter as it is, and that your position is not unanimously supported in this poorly formed RfC. Please note that your co-editor RobertM is not joining you here to support you on this. FelixRosch (talk) 14:59, 8 October 2014 (UTC)[reply]
I did not call Qwertyus' edit irrelevant to the article or to the topic, and certainly did not say that Qwertyus is irrelevant. I said it was irrelevant TO OUR DISPUTE about the term "human-like", which it obviously is because he neither removed nor added the term "human-like". QED. This is the second time I have proved this, using plain English. I would prefer it if you would read my posts before responding. I'm finding it difficult to believe that you can't follow what I'm saying, and, if I assume good faith, I must also assume you are not reading them. ---- CharlesGillingham (talk) 21:30, 8 October 2014 (UTC)[reply]
And just to stay on point: the most reliable sources carefully and deliberately DO NOT define artificial intelligence as studying or emulating "human-like" intelligence, and this is an issue which many major AI researchers feel strongly about. Adding the term "human-like" to the lede is an insult to the hard work that these researchers have done to define their field. Wikipedia's editors do not have the right to define "artificial intelligence", so it does not matter what you think or what I think or what anyone thinks. What matters is the sources. ---- CharlesGillingham (talk) 21:37, 8 October 2014 (UTC)[reply]
Your personal attack against @Qwertyus was "Qwerty's actions are irrelevant", and your personal attack must stop. Are you now denying that this is a direct quote of your personal attack on another fellow editor? Also, to stay on point, your misrepresentation of your claim to "unanimous" support to admin must be withdrawn with full apology to the editor @Redrose for this misrepresentation. Your position is not unanimous, you are using an old outdated 2008 textbook for a high tech field, and your poorly formed RfC with your co-editor @RobertM promoting your preference for "Weak-AI" should be withdrawn. FelixRosch (talk) 16:32, 9 October 2014 (UTC)[reply]
May I cordially suggest to CharlesGillingham that you leave this rant, and any repetitions that follow, unanswered? The rest of us can all see for ourselves where it is coming from, there is no need to defend yourself against it. — Cheers, Steelpillow (Talk) 17:01, 9 October 2014 (UTC)[reply]

RFC on Phrase "Human-like" in First Paragraph

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Should the phrase "human-like" be included in the first paragraph of the lede of this article as describing the purpose of the study of artificial intelligence? Robert McClenon (talk) 14:43, 2 October 2014 (UTC)[reply]

It is agreed that some artificial intelligence research, sometimes known as strong AI, does involve human-like intelligence, and some artificial intelligence research, sometimes known as weak AI, involves other types of intelligence, and these are mentioned in the body of the article. This survey has to do with what should be in the first paragraph. Robert McClenon (talk) 14:43, 2 October 2014 (UTC)[reply]

Survey on retention of "Human-like"

  • Oppose - The study of artificial intelligence has achieved considerable success with intelligent agents, but has not been successful with human-like intelligence. To limit the field to the pursuit of human-like intelligence would exclude its successes. Robert McClenon (talk) 14:46, 2 October 2014 (UTC) Inclusion of the restrictive phrase would implicitly exclude much of the most successful research and would narrow the focus too much. Robert McClenon (talk) 14:46, 2 October 2014 (UTC)[reply]
  • Oppose - At least as it's currently being used. Only some fields of AI strive to be human-like. (Either through "strong" AI, or through emulating a specific human behavior.) The rest of it is only "human-like" in the sense that humans are intelligent creatures. The goal of many AI projects is to make some intelligent decision far better than any human possibly could, or sometimes simply to do things differently than humans would. To define AI as striving to be "human-like" is to encourage a 'Hollywood' understanding of the topic, and not a real understanding. (If "human-like" is mentioned farther down the paragraph with the qualifier that *some* forms of AI strive to be human-like, that's fine, but it should absolutely not be used to define the field as a whole.) APL (talk) 15:21, 2 October 2014 (UTC)[reply]
  • Comment The division of emphasis is pretty fundamental. I would prefer to see this division encapsulated in the lead, perhaps along the lines of, "...an academic field of study which generally studies the goal of creating intelligence, whether in emulating human-like intelligence or not." — Cheers, Steelpillow (Talk) 08:45, 3 October 2014 (UTC)[reply]
    This is not a bad idea. It has the advantage of being correct. ---- CharlesGillingham (talk) 18:15, 7 October 2014 (UTC)[reply]
    I don't know much about this subject area, but this compromise formulation is appealing to me. I can't comment on whether it has the advantage of being correct, but it does have the advantage of mentioning an aspect that might be especially interesting to novice readers. WhatamIdoing (talk) 04:43, 8 October 2014 (UTC)[reply]
  • Support. The RFC question is inherently faulty: there cannot be a valid consensus concerning the exclusion of a word from one arbitrarily numbered paragraph. One can easily add another paragraph to the article, or use the same word in another paragraph in a manner that circumvents said consensus, or use the same word in conjunction with a negation. For instance, Robert McClenon seems not to endorse saying "AI is all about creating artificial human-like behavior." But doesn't that mean RM is in favor of saying "AI is not all about creating human-like behavior"? Both sentences have "human-like" in them. The RFC question must instead present specific wording and ask whether it is acceptable or not. Best regards, Codename Lisa (talk) 11:39, 3 October 2014 (UTC) Struck my comment because someone has refactored the question, effectively subverting my answer. This is not the question to which I said "Support". This RFC looks weaker and weaker every minute. Codename Lisa (talk) 17:03, 9 October 2014 (UTC)[reply]
His intent is clear from the mountain of discussion of the issue above. The question is should AI be defined as simulating human intelligence, or intelligence in general. ---- CharlesGillingham (talk) 13:54, 4 October 2014 (UTC)[reply]
Yes, that's where the danger lies: To form a precedent which is not the intention of a mountain of discussions that came beforehand. Oh, and let me be frank: Even if no one disregarded that, I wouldn't help form a consensus on what is inherently a loophole that will come to hunt me down ... in good faith! ("In good faith" is the part that hurts most.) Best regards, Codename Lisa (talk) 19:31, 4 October 2014 (UTC)[reply]
I don't understand this !vote. It appears to be a !vote against the RFC rather than against the exclusion of the term from the lead, in which case it belongs in the discussion section not in the survey section. Jojalozzo 22:27, 4 October 2014 (UTC)[reply]
Close, but no cigar. It is against the exclusion, but because of (not against) the RFC fault. Best regards, Codename Lisa (talk) 07:11, 5 October 2014 (UTC)[reply]
Is this vote just a personal opinion? Or do you have reliable sources? pgr94 (talk) 21:30, 8 October 2014 (UTC)[reply]
  • Oppose Human-like is one of the many possible goals/directions. This article deals with AI in general. OCR or voice recognition research has little to do with human-like intelligence*, yet (at the moment) they are far more useful fields of AI research than, say, a chat bot able to pass the Turing test. (*vision or hearing are not required for human-like intelligence) WarKosign 11:29, 12 October 2014 (UTC)[reply]
  • Support - I am not well versed in the literature on this topic, but I don't think one needs to be for this purpose. We're talking about the first paragraph of the lead, and for that purpose a quick survey of the hits from "define artificial intelligence" should suffice. Finer distinctions based on academic literature can be made later in the lead and in the body. ‑‑Mandruss (talk) 11:48, 14 October 2014 (UTC)[reply]
  • Oppose This article is focused on the computer science use of the term (we already have a separate article on its use in fiction). And computer scientists talk about Deep Blue and Expert systems as "Artificial Intelligence". So, it's become a technical term that is used in a broad way to apply to any programming and computing that helps to deal with the many issues involved in computers interacting with real world situations and problems. However, in science fiction, Artificial intelligence in fiction has been generally taken to mean human like intelligence. So - perhaps it might help to clarify to start the second sentence with "In Computer science it is an academic field of study ..." or some such. Then it is uncontroversial that in computer science the way that the term is used, as a technical term, is exactly as presented in the first paragraph. And it introduces the article and gives the user a clear idea of what this article is about. The fourth paragraph in the intro does mention that "The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—"can be so precisely described that a machine can be made to simulate it."" and it is also mentioned in the history. Just a suggestion to think over. Robert Walker (talk) 12:27, 14 October 2014 (UTC)[reply]
  • Both at the same time. Why do we have to choose between human-like and not? As the RFC statement already says, it is agreed that some AI seeks human-like intelligence, and other AI has weaker goals. We should say so. Teach the controversy. Or, as WP:NPOV states, "Avoid stating seriously contested assertions as facts." That is, we should neither say that all AI aims for human-like intelligence, nor should we imply the opposite by not saying that. We should say that some do and some don't. —David Eppstein (talk) 01:45, 16 October 2014 (UTC)[reply]
Technically, an "oppose" is a vote for both. We are discussing whether it should be "human-like intelligence" or just "intelligence" (which is both). We can't write "The field of AI research studies human-like and non-human-like intelligence" ---- CharlesGillingham (talk) 13:20, 16 October 2014 (UTC)[reply]
I disagree. Putting just "intelligence" is not both, it is only giving one side of the story (the side that says that it doesn't matter whether the intelligence is human-like or not). —David Eppstein (talk) 23:27, 16 October 2014 (UTC)[reply]
  • Oppose The phrase "which generally studies the goal of emulating human-like intelligence", which is currently in the lead, has various problems: "generally" is a weasel word; AI covers both the emulation (weak AI) and presence (strong AI) of intelligence and is by no means restricted to "human-like" intelligence. The first para of the lead can be based on McCarthy's original phrase, already quoted, which refers to intelligence without qualification. --Mirokado (talk) 02:03, 16 October 2014 (UTC)[reply]
See comment above. ---- CharlesGillingham (talk) 13:20, 16 October 2014 (UTC)[reply]
Just "intelligence" would be underspecified. The reader may interpret this as human, non-human or both. Only the latter is correct. I'd like to see this addressed explicitly in the lede. —Ruud 18:29, 16 October 2014 (UTC)[reply]
  • Comment I want to remind everyone that the issue is sources. Major AI textbooks and the leaders of AI research carefully define their field in terms of intelligence and specifically argue that it is a mistake to define AI in terms of "human intelligence" or "human-like" intelligence. Even those in artificial general intelligence do not try to define the entire field this way. Please see detailed argument at the beginning of the first RfC, above. Just because this is an RfC, it does not mean we can choose any definition we like. We must respect choices made by the leaders of the field and the most popular textbooks. ---- CharlesGillingham (talk) 03:13, 17 October 2014 (UTC)[reply]

Threaded discussion of RFC format

Discussion of previous RfC format
(Deleting my own edit which was intentionally distorted by RfC editor User:RobertM by re-titling its section and submerging it into the section for his own personal gain of pressing his bias for the "Weak-AI" position in this poorly formulated RfC.) FelixRosch (talk) 17:22, 6 October 2014 (UTC)[reply]
Interestingly, User:FelixRosch didn't object to a previous very non-neutrally worded RFC, but now chooses to object to a neutrally worded RFC simply because the editor publishing the RFC has a stated opinion. Interesting. Robert McClenon (talk) 20:15, 2 October 2014 (UTC)[reply]
I do not see how separating the two sections as you did and then !voting in both is preferable to the "AfD" style where both "supports" and "opposes" run together. Does anyone object to me refactoring the poll accordingly? VQuakr (talk) 03:47, 3 October 2014 (UTC)[reply]
It is fine with me to refactor as long as it doesn't change the result. Robert McClenon (talk) 11:00, 3 October 2014 (UTC)[reply]
Done. — Cheers, Steelpillow (Talk) 09:00, 8 October 2014 (UTC)[reply]
The issue that I am trying to address has to do with the inclusion of the word "human-like" in the first paragraph in a limiting way, that is, defining the ultimate objective of artificial intelligence research as the implementation of human-like intelligence. The significance of the first paragraph is, of course, that it defines the scope of the rest of the article. I am willing to consider other ways to rework the first paragraph so that it recognizes both human-like and other forms of artificial intelligence. Robert McClenon (talk) 13:12, 3 October 2014 (UTC)[reply]

I was invited here randomly by a bot. (Though it also happens I have an academic AI background.) This RFC is flawed. Please read the RFC policy page before proceeding with this series of poorly framed requests. It makes no sense to me to have a section for including the term and separate section for excluding the term (should everyone enter an oppose and a support in each section?). The question should be something simple and straight forward like "Should "human-like" be included in the lead paragraph to define the topic." Then there should be a survey section where respondents can support or oppose the inclusion and a discussion section for stuff like this rant. Please read the policy page before digging this hole any deeper. Jojalozzo 22:35, 4 October 2014 (UTC)[reply]

Speaking as someone who has written a good deal of WP:RFC over the years, I'd like to drop by and clarify a few things:
  • Whatever you put in between the rfc template and the first timestamp is what people think the question is. Please don't put things like "the format of this RFC is being contested" as your "question". It looks like nonsense to people who are looking at Wikipedia:Requests for comment/Maths, science, and technology. No fancy formatting, please, just the actual question. What Robert has posted at the moment is fine.
  • The standard for a "neutral" question is "do your best". It is not "I get to reject any RFC (especially if I'm 'losing') if I say the question is non-neutral". Frankly, the community expects RFC respondents to be capable of seeing through a non-neutral question and figuring out how to help improve the article.
  • There's nothing inherently wrong with separating support and oppose comments. It's not the most popular format for RFCs, but there is no rule against it. See Wikipedia:Requests for comment/Example formatting for other options, and pay careful attention to the words optional and voluntary (emphasis in the original) at the top of that page.
  • This isn't some sort of bureaucratic battle, where people can raise points of order and invoke rules to delay or interfere with the process. The point is to get useful information from a variety of people, to (ideally) make a decision, and to get back to normal editing. Or, to put it another way, an RFC is best approached as a minor variation on an everyday, normal talk-page discussion. The fancy coffee-roll-colored banner is just a sign that extra people are being encouraged to join the discussion. Everyday rules apply: Talk. Listen. Improve the article.
Good luck to you all, WhatamIdoing (talk) 04:39, 8 October 2014 (UTC)[reply]
Well spoken WhatamIdoing. — Cheers, Steelpillow (Talk) 09:00, 8 October 2014 (UTC)[reply]

This is the most confusingly formatted RFC I've ever seen that wasn't immediately thrown out as gibberish, however, it doesn't look like anybody is arguing that the topic should be described as "human-like" in the lead? I'd expect to see at least one. Have the concerned parties been notified that this is ongoing? APL (talk) 06:17, 8 October 2014 (UTC)[reply]

This continuing noise is in danger of drowning out the discussion proper, so I have refactored the format as suggested above and acceded to by the OP. — Cheers, Steelpillow (Talk) 09:00, 8 October 2014 (UTC)[reply]

Threaded discussion of RFC topic

I am getting more unhappy with that phrase "human-like". What does it signify? The lead says, "This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence," which to me presupposes human-like consciousness. OTOH here it is defined as: "The ability for machines to understand what they learn in one domain in such a way that they can apply that learning in any other domain." This makes no assumption of consciousness, it merely defines human-like behaviour. One of the citations in the article says, "Strong AI is defined ... by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." Besides begging the question as to what "simulating thinking" might be, this appears to raise the question as to whether strong vs weak is really the same distinction as human-like vs nonhuman. Like everybody else, AI researchers between them have all kinds of ideas about the nature of consciousness. I'll bet that many think that "simulating thinking" is an oxymoron, while as many others see it as a crucial issue. In other words, there is a profound difference between the scientific study and creation of AI behaviour vs. the philosophical issue as to its inner experience - a distinction long acknowledged in the study of the human mind. Which of these aspects does the phrase "human-like" refer to? One's view of oneself in this matter will strongly inform one's view of AI in like manner. I would suggest that it can refer to either according to one's personal beliefs, and rational debate can only allow the various camps to beg to differ. The phrase is therefore best either avoided in the lead or at least set in an agreed context. Sorry to have rambled on so. — Cheers, Steelpillow (Talk) 18:21, 6 October 2014 (UTC)[reply]

This is a good question, which hasn't been answered directly before. In my view, "human-like" can mean several different things:
  1. AI should use the same algorithms that people do. For example, means-ends analysis is an algorithm that was based on psychological experiments by Newell and Simon, where they studied how people solved puzzles. AI founder John McCarthy (computer scientist) argued that this was a very limiting approach.
  2. AI should study uniquely human behaviors; i.e. try to pass the Turing Test. See Turing Test#Weaknesses of the test to see the arguments against this idea. Please read the section on AI research -- most AI researchers don't agree that the Turing Test is a good measure of AI's progress.
  3. AI should be based on neurology; i.e., we should simulate the brain. Several people in artificial general intelligence think this is the best way forward, but the vast majority of successful AI applications have absolutely no relationship to neurology.
  4. AI should focus on artificial general intelligence (by the way, this is what Ray Kurzweil and other popular sources call "strong AI"). It's not enough to write a program that solves only one particular problem intelligently; it has to be prepared to solve any problem, just as human brains are prepared to solve any problem. The vast majority of AI research is about solving particular problems. I think everyone would agree that general intelligence is a long term goal, but it is also true that many would not agree that "general intelligence" is necessarily "human-like".
  5. AI should attempt to give a machine subjective conscious experience (consciousness or sentience). (This is what John Searle and most academic sources call "strong AI"). Even if it was clear how this could be done, it is an open question as to whether consciousness is necessary or sufficient for intelligent problem-solving.
The question at issue is this: do any of these senses of "human like" represent the majority of mainstream AI research? Or do each of these represent the goals or methodology of a small minority of researchers or commentators? ---- CharlesGillingham (talk) 08:48, 7 October 2014 (UTC)[reply]
@Felix: What do you mean by "human-like"? Is it any of the senses above? Is there another way to construe it that I have overlooked? I am still unclear as to what you mean by "human-like" and why you insist on including it in the lede. ---- CharlesGillingham (talk) 09:23, 7 October 2014 (UTC)[reply]
One other meaning occurs to me now I have slept on it. The phrase "human-like" could be used as shorthand for "'human-like', whatever that means", i.e. it could be denoting a deliberately fuzzy notion that AI must clarify if it is to succeed. Mary Shelley galvanized Frankenstein's monster with electricity - animal magnetism - to achieve this end in what was essentially a philosophical essay on what it means to be human. Biologists soon learned that twitching the leg of a dead frog was not what they meant by life. People once wondered whether a sufficiently complex automaton could have "human-like" intelligence. Alan Turing suggested a test to apply but nowadays we don't think that is quite what we mean. In the days of my youth, playing chess was held up as an example of more human-like thinking - until the trick was pulled and then everybody said, "oh no, now we know how it's done that's not what I meant". Something like pulling inferences from fuzzy data took its place, only to be tossed in the "not what I meant" bucket by Google and its ilk. You get the idea. We won't know what "human-like" means until we have stopped saying "that's not what I meant" and started saying, "Yes, that's what I mean, you've done it." In this light we can understand that some AI researchers are desperate to make that clarification, while others believe it to be a secondary issue at best and prefer to focus on "intelligence" in its own right. — Cheers, Steelpillow (Talk) 09:28, 8 October 2014 (UTC)[reply]


I'm unhappy with it for another reason. "Artificial Intelligence" in computer science I think is now a technical term that is applied to a wide range of things. When someone writes a program to enable self driving cars - they call it artificial intelligence. See Self Driving Car: An Artificial Intelligence Approach "Artificial Intelligence also known as (AI) is the capability of a machine to function as if the machine has the capability to think like a human. In automotive industry, AI plays an important role in developing vehicle technology." For a machine to function as if it had the capability to think like a human - that's very different from actually emulating human-like intelligence. Deep Blue was able to do that also - to chess onlookers - it acted as if it had the capability to think like a human at least in the limited realm of a chess game. In the case of the self driving car, or Deep Blue - you are not at all aiming to pass the Turing test or make a machine that is intelligent in the way a human is. Indeed, the goals to make a chess playing computer or a self driving car are compatible with a belief that human intelligence can't be programmed.

I actually think that way myself, persuaded by Roger Penrose's arguments - I think myself that no programmed computer will ever be able to understand mathematics in the way a mathematician does. It can never truly understand what is meant by "this statement is true" - just feign an understanding of truth, continually corrected by its programmers when it makes mistakes. His argument also extends to quantum computers and hardware neural nets. He doesn't think that hardware neural nets capture what the brain does, but that there is a lot going on within the cells which we don't know about that is also relevant, as well as other forms of communication between cells.

But still, I accept that in tasks of limited scope such as chess playing or driving cars, they can come to outperform humans. This has nothing to do with weak AI or strong AI, as I think both are impossible myself. Except perhaps with biological machines (slime moulds) or computers that in some way can do something essentially non computable (recognize mathematical truth) - if so, they have to go beyond ordinary programming and beyond ordinary quantum computers to some new thing.

So - I know that's controversial - I'm not trying to persuade you of my views - but philosophically it's a view that some people take, including Roger Penrose. And it is a consistent view to have. Saying that the aim of AI is to create human-like intelligence - that's making a philosophical statement that the things programmers are trying to achieve with self driving cars and with chess playing computers are on a continuum with human intelligence and we just need more of the same. But not everyone sees it that way. I think AI is very valuable, but not in that way, not the direction it is following at present anyway.

Plus also - the engineers of Google's self driving cars - are surely not involved in a "goal of emulating human-like intelligence" except in a very limited way.

Rather, their main aim is to create machines that are able to take over from humans in a flexible human environment without causing problems - and to do that by emulating human intelligence to whatever extent is necessary and useful to do that work.

Also in another sense, emulating human intelligence is too limited a goal. In the case of Deep Blue the aim was to be better at chess than any human - not to just emulate humans. Ideally also the Google self driving cars will be better at driving than humans. The aim is to create machines that in their own limited frame of reference are better than humans - and using designs inspired by capabilities of humans and drawn from things that humans can do. But not at all to emulate humans including all their limitations and faults and mistakes and accidents. I think myself very few AI researchers have that as a goal. So not sure how the lede should be written, but I am not at all happy with "goal of emulating human intelligence" - that is wrong in so many ways - except for some science fiction stories. I suggest also that we say "In Computer science" to start the second sentence, whatever it is, to distinguish it from "In fiction" where the goal often is to emulate human intelligence to a high degree as with Asimov's positronic robots. Robert Walker (talk) 00:28, 16 October 2014 (UTC)[reply]

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Human-Like in Lede, and RFC again

I have removed the phrase "human-like" from the lede again. To state that the primary purpose of artificial intelligence is the implementation of human-like intelligence (regardless of what is meant by that) is misleading and impoverishes a field that has made significant contributions without achieving the mythic objective of human-like intelligence. Consensus is currently running in favor of keeping that phrase out of the first sentence. Robert McClenon (talk) 01:41, 11 October 2014 (UTC)[reply]

I've removed it again. If the RFC suddenly goes the other way, of course we can put it back in. But it doesn't seem unreasonable to leave it out for now, since the RFC has been running for over a week and not a single editor has spoken in favor of the phrase "human-like". APL (talk) 03:09, 12 October 2014 (UTC)[reply]
At least one editor favors the use of the phrase "human-like" in the lede sentence. It has been inserted by one registered editor and one IP. The registered editor disputed the RFC rather than participating in it. There should be references to "human-like" intelligence in various parts of the article, but the topic should not be restricted by including that phrase in the first sentence. Robert McClenon (talk) 14:14, 12 October 2014 (UTC)[reply]
Well, OK, but now is the time for that editor to speak up. Trying to instigate change after the RFC has run its course will be nearly impossible.
If he's got a reasonable argument, he should be making it, so that the non-partial, uninvolved editors coming here for the RFC can see both sides of the issue. APL (talk) 22:28, 13 October 2014 (UTC)[reply]
Since you have asked: The history of the edit dispute with CharlesG started 3 months ago and can be summarized briefly in 4-5 comments.
(1) Three months ago I read this article and saw that in its current form the article was oriented to the human engineering and reverse human engineering perspectives of AI in all of the first 8 (eight) sections. Section 2.1 was about the emulation of human deduction, section 2.2 was about the emulation of human knowledge representation, 2.3 was about the emulation of human planning, 2.4 was about the emulation of human learning, 2.5 was about the emulation of human natural language processing (there is no other type of natural language processing), 2.6 was about the emulation of human perception, 2.7 was about the emulation of the human equivalent motion and manipulation, and the same for section 2.8. Given the article in its current form, I then added the term human-like to the Lede to accurately represent the article as it exists in its current form. CharlesG disagreed, stating his belief that, from a general perspective not limited to the article itself, the over-arching and general version of intelligence was his preference, in defense of his own Weak-AI perspective.
(2) My response to CharlesG was that even if what he said was true, WP:Lede requires that the Lede summarize the article as it exists in its current form, and not from the general perspective of how AI could be defined in a future version of the article which may at some time in the future be re-written to highlight his preference for the Weak-AI perspective. I also made multiple invitations to CharlesG to add his information on Weak-AI into the article to increase its prominence in the article, which he refused to do. The key issue is that in its current form, the body of the article is oriented to human engineering and reverse human engineering as its main perspective in all of its eight opening sections, which by WP:Lede is what should be summarized in the Lede based on this article in its current form at this point in time. CharlesG declined my multiple invites to expand the Weak-AI material in the article and decided to file a Dispute Resolution Notice as his preferred path to resolution.
(3) After filing the dispute resolution notice, RobertM then falsely presented himself as a neutral and non-biased mediator of the Dispute Resolution Process and recommended strongly that CharlesG withdraw the Dispute Resolution Notice and file an RfC instead, and CharlesG accepted the advice. Although it looked odd, RobertM was presenting himself as a non-biased mediator making suggestions, and CharlesG filed an RfC. The resulting RfC was criticized by RobertM, as the first (previous) RfC became a full page advertisement for the Weak-AI position and was withdrawn by CharlesG and RobertM together.
(4) When I challenged RobertM 2-3 times on his Talk page about what he was doing, he then affirmed on his Talk page that he was not neutral (not NPOV) and that he was biased to the Weak-AI perspective, against my reading. He appeared to take the approach that by controlling the RfC process he could steer the results to the Weak-AI perspective, regardless of the content of the article in its current form, and force his form of the Weak-AI friendly version of the Lede. I then extended the same invitation to RobertM that they (with CharlesG) expand the Weak-AI material in the main body of the article first, because WP:Lede requires that only material in the main body of the article can be used for the Lede summary, but he refused the invitation. He then posted a second version of the RfC stating his own preferred version of the Lede as the only feasible option, without disclosing his own bias against NPOV, and without presenting any of this history to the newly emerging editors joining the discussion for the first time. There are now 4 (four) versions of the edit for the Lede available (Qwertyus, CharlesG, mine, and one by SteelPillow), not the one option which RobertM has offered in his biased RfC as his one and only "solution".
(5) The unbiased version of an RfC would simply list the 4 (four) options just mentioned above without prejudice and ask editors to indicate support-or-oppose for their preference, again without prejudicing new editors to one and only one "solution". The RobertM version of the RfC is biased for this reason and should be deleted as being non-neutral and against npov. FelixRosch (talk) 15:55, 14 October 2014 (UTC)[reply]
The issue is sources. Major AI textbooks and AI's leaders carefully and specifically define the field as studying all kinds of intelligence, not just human intelligence. FelixRosch's reading of the article is mistaken. Please see detailed arguments above.---- CharlesGillingham (talk) 13:28, 16 October 2014 (UTC)[reply]
There is no requirement that the author of an RFC be neutral, only that the wording of the RFC be neutral. Does FelixRosch have a proposed alternate wording for the RFC? Defacing an RFC (since reverted) by changing the RFC's own lead question to a protest about the RFC is not an appropriate use of the RFC process. Robert McClenon (talk) 18:47, 14 October 2014 (UTC)[reply]
The issue remains that your version does not appear as neutral. If there are 4 options, then you are not supposed to single out only one of them (which you prefer) to the exclusion of the other choices. You appear to want to write the RfC while not admitting your own bias. An unbiased version is outlined in item (5) directly above your comment here. Your biased RfC should be withdrawn or deleted. FelixRosch (talk) 21:09, 14 October 2014 (UTC)[reply]
WP:RFC explains what to do in this situation: "If you feel an RfC is improperly worded, ask the originator to improve the wording, or add an alternative unbiased statement immediately below the RfC question template. Do not close the RfC just because you think the wording is biased." Hope this helps. 83.104.46.71 (talk) 09:42, 15 October 2014 (UTC) (steelpillow (talk · contribs) dropping by during my wikibreak).[reply]

Summary of Penrose's views inaccurate

This passage summarizes the views of Penrose's critics, not his own views:

John Lucas (in 1961) and Roger Penrose (in 1989) both argued that Gödel's theorem entails that artificial intelligence can never surpass human intelligence,[186] because it shows there are propositions which a human being can prove but which can not be proved by a formal system. A system with a certain amount of arithmetic, cannot prove all true statements, as is possible in formal logic. Formal deductive logic is complete, but when a certain level of number theory is added, the total system becomes incomplete. This is true for a human thinker using these systems, or a computer program.

Penrose's argument is that artificial intelligence, if it is based on programming, neural nets, quantum computers and suchlike, can never emulate human understanding of truth. And Gödel's theorem indeed shows that any first-order formal deductive logic strong enough to include arithmetic is incomplete (and a second-order theory can't be used on its own as a deductive system). But Penrose's point is that the limitations of formal logic do not apply to a human, because we are not limited to reasoning within formal systems. Humans can indeed use Gödel's very argument to come up with a statement that a human can see to be true but which can never be proved within the formal system - viz. the statement that the Gödel sentence cannot be proved within the formal system, and that there is no Gödel encoding of its proof within the system.
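For readers unfamiliar with the incompleteness argument being invoked here, a minimal sketch in standard notation (the symbols F and G_F below are illustrative shorthand, not taken from the article or from Penrose's books): for any consistent, effectively axiomatized formal system F that includes enough arithmetic,

\[ \exists\, G_F \ \text{such that} \ F \nvdash G_F \ \text{and} \ F \nvdash \lnot G_F, \qquad \text{yet} \qquad \mathrm{Con}(F) \;\Longrightarrow\; G_F \ \text{is true}. \]

The Lucas-Penrose claim summarized above is that a human who accepts the consistency of F can thereby see that G_F is true, whereas F itself (and hence, on their view, any machine whose reasoning is equivalent to F) cannot prove it.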

And he doesn't argue that it is impossible for artificial intelligence to surpass human intelligence. It clearly can in restricted areas like playing chess games.

Nor does he argue that it is impossible for any form of artificial intelligence to ever emulate human understanding of mathematical truth. He just says that it is impossible for it to do that if it is based on methods that are equivalent to Turing computability, which includes hardware neural nets, quantum computers and probabilistic methods, as all of those are shown to introduce nothing new; they are just faster forms of Turing-type computability.

He says that before AI can achieve human understanding of mathematical truth, new physics is needed. We need to understand first how humans are able to do something non computable, to understand maths. He thinks that the missing step is to do with the way that the collapse of the wave function happens, based on his ideas of quantum gravity. So - you still could get artificial intelligence able to understand maths, based either on using biology directly - or else on some fundamental new understanding of physics. But not through computer programs, neural nets or quantum computers.

I think this is important because, whatever you think of Penrose's views, they introduce an interesting new position according to which both weak AI and strong AI can never be achieved by conventional methods. So - it would add to the article to acknowledge that in the philosophical section, as a third position, and highlight it more - rather than to just put forward the views of Penrose's critics. At any rate the section as it stands is inaccurate, as it says that it is summarizing the views of Penrose and Lucas - but then goes on to summarize the views of their critics instead.

His ideas are summarized reasonably accurately in the main article Philosophy_of_artificial_intelligence#Lucas.2C_Penrose_and_G.C3.B6del

"In 1931, Kurt Gödel proved that it is always possible to create statements that a formal system (such as a high-level symbol manipulation program) could not prove. A human being, however, can (with some thought) see the truth of these "Gödel statements". This proved to philosopher John Lucas that human reason would always be superior to machines.[26] He wrote "Gödel's theorem seems to me to prove that mechanism is false, that is, that minds cannot be explained as machines."[27] Roger Penrose expanded on this argument in his 1989 book The Emperor's New Mind, where he speculated that quantum mechanical processes inside individual neurons gave humans this special advantage over machines."

Except that, more precisely, he speculated that non-computable processes occurring during the collapse of quantum mechanical states, both within neurons and also spanning more than one neuron, gave humans this special advantage over machines. (I'll correct that page.)

Robert Walker (talk) 11:17, 16 October 2014 (UTC)[reply]

Agreed. Please be WP:BOLD and fix it. (It needs to be short, of course.)---- CharlesGillingham (talk) 16:00, 16 October 2014 (UTC)[reply]

Can We Try Again on Human-Like Issue

I am willing to take another try at an RFC on the use of the phrase "human-like" in the lede. If a better phrasing of the RFC is agreed on, then the bot header can be deleted and the discussion of the RFC can be boxed when the new RFC is published. It is probably appropriate to delete and restate this RFC anyway, because it has been refactored and defaced, making the answers inconsistent. Does anyone have a better wording of the RFC?

I, for one, do not object to a phrasing that includes "human-like" in a context such as "whether human-like or otherwise". I only object to a phrasing that implies that human-like intelligence is the primary objective of AI. It is one of the objectives of AI, and not one that has been achieved yet (in spite of the dreams of a technological singularity that have been on the moving horizon for decades).

If we can't get agreement on a revised RFC, it may be that moderated dispute resolution is the best approach after all. Robert McClenon (talk) 16:14, 16 October 2014 (UTC)[reply]

Offer to stipulate. You appear to be saying that, of the 4 options from various editors which I listed above as being on the discussion table, you have a preference for version (d) by Steelpillow, and that you are willing to withdraw the disputed RfC on the condition that the Steelpillow version be posted as a neutral version of the edit. Since I accept that the editors on this page are in general of good faith, I can stipulate that if (if) you will drop this RfC by removing the template, etc., then I shall post the version of Steelpillow from 3 October on Talk in preference to the Qwerty version of 1 October. The 4-paragraph version of the lede by Qwerty will then be posted, updated with the 3 October phrase of Steelpillow ("...whether human-like or not"), with no further emendations. It is not your version and it is not my version, and all can call the Steelpillow version the consensus version. If agreed, then all you need do is close/drop the RfC, and I'll then post the Steelpillow version as the consensus version. FelixRosch (talk) 17:46, 16 October 2014 (UTC)[reply]

 Done - Your turn. Robert McClenon (talk) 18:30, 16 October 2014 (UTC)[reply]

 Done Installing the new 4-paragraph version of the lede, following the terms of the close-out by originating editor RobertM and the consensus of 5 editors. It is the "Steelpillow" version of the new lede, following the RfC close-out on Talk. FelixRosch (talk) 19:58, 16 October 2014 (UTC)[reply]

Comment: An RFC is not just a contest between two people, and a consensus is not just the agreement between those two. FelixRosch has no particular authority to dictate terms, especially as the RFC was clearly leaning away from his position. APL (talk) 19:28, 16 October 2014 (UTC)[reply]

The RFC has been widely publicised now, which is why I contributed to it. Let it run its course and accept the result, whatever that might be. Stop editing the part of the article affected by it. --Mirokado (talk) 21:44, 16 October 2014 (UTC)[reply]

I now notice that the RFC was closed out of process. I have reverted that. The version of the lead updated in relation to that is full of repetition and bad grammar: "an academic field of study which generally studies the goal of studying ..., whether by in ...". But in any case it is better to keep the text stable during an RFC so I have restored the version which was subject to the RFC (at least when the bot sent its random notifications). --Mirokado (talk) 22:49, 16 October 2014 (UTC) (updated Mirokado (talk) 23:10, 16 October 2014 (UTC))[reply]

Would anyone object to reverting to any of the versions of the lede from 2007 to 2014? It contained the same content for that entire time, changing only by a word or two here and there. ---- CharlesGillingham (talk) 00:31, 17 October 2014 (UTC)[reply]
Well yes, I'm afraid I would. An RFC is a formal process in which the community has a month to consider the content, editors give their opinions and an uninvolved closer determines consensus. While we would remove something egregious like a BLP violation, we should leave that part of the article alone so that everyone is basing their comments on the same text. The current text has the advantage, in the context of the RFC, that editors can see the phrase being discussed without having to guess which previous version to open. --Mirokado (talk) 00:40, 17 October 2014 (UTC)[reply]
Does it matter that this dispute began when I reverted FelixRosch's addition of "human-like" to the lede? Before his edit, it had been very stable for seven years or so. Since then he has reverted or subverted every attempt to remove his edit; this is the source of the dispute. Shouldn't it stay at the "last stable version"? It hasn't been stable since FelixRosch added his edit. ---- CharlesGillingham (talk) 03:23, 17 October 2014 (UTC)[reply]
On the other hand, I'm happy to wait until the RfC is over. Just wanted you to know what was happening. ---- CharlesGillingham (talk) 03:29, 17 October 2014 (UTC)[reply]

The sentence "[AI] is an academic field of study which generally studies the goal of emulating human-like intelligence." is unsourced. Adding a cn tag. pgr94 (talk) 01:19, 17 October 2014 (UTC)[reply]

Pgr94 is correct; it needs a source more reliable than Artificial Intelligence: A Modern Approach, which you're not going to find. ---- CharlesGillingham (talk) 03:17, 17 October 2014 (UTC)[reply]
Undid revision 629913001. You are reverting against a consensus of 5 editors. Restore Close-out by Author of RfC. Please follow Wikipedia policy for establishing consensus first. FelixRosch (talk) 15:49, 17 October 2014 (UTC)[reply]
There is no consensus for closing the RFC. It was not just a contest between Felix and Robert McClenon. The two of them together should not close it, even if they personally have resolved their differences. APL (talk) 21:24, 17 October 2014 (UTC)[reply]
Undid revision 630029056. You appear to be reverting against 5 editors in consensus, including the originating author RobertM. This is the Steelpillow version of the edit. You have not contacted any of them to try to reach consensus prior to editing, and you are not following Wikipedia procedure and policy. FelixRosch (talk) 21:49, 17 October 2014 (UTC)[reply]
It appears that I misunderstood what User:FelixRosch had offered. I thought that he had agreed to stipulate an alternate wording of the RFC. He apparently wanted to stipulate an alternate wording of the lede, bypassing the RFC process. Should a new RFC be used, or should the original (before being edited and defaced) wording of the RFC be restored, or should moderated dispute resolution be used? Robert McClenon (talk) 22:56, 17 October 2014 (UTC)[reply]

Please have a look at Wikipedia:Requests_for_comment#Ending_RfCs:

  • It is clear that an RFC can be withdrawn by the poster "if the community's response became obvious very quickly". If you are going to assert that, then the contested wording must be removed from the article. I think a formal close may be more subtle, and I don't think the result is clear enough to close it for that reason.
  • The RFC may be closed "if the participants can agree to end it". They obviously have not agreed to end it (@FelixRosch: should provide diffs if I have missed a relevant, prominent conversation involving five or more of the participants in the last day or so).
  • @Robert McClenon: You are welcome to request help with this (for example at WP:DRN) and I will be happy to cooperate with whatever results from such a request, but until then I think we should let the RFC carry on. The opinions expressed by the thirteen or so participants so far cannot be ignored, which is effectively what would happen if two editors decide between themselves what to do.

I will yet again reopen the RFC, because otherwise the bot will remove the entry prematurely and that will cause even more trouble. --Mirokado (talk) 23:38, 17 October 2014 (UTC)[reply]

Moderated dispute resolution isn't in order if the RFC is still running. Is the current wording of the RFC the wording that I originated, or has it been edited again (let alone defaced again)? Robert McClenon (talk) 14:46, 18 October 2014 (UTC)[reply]
@Mirokado, the edit which has the 5 (five) editor consensus is the "Steelpillow" version of the edit and not the one you yourself quoted in your Talk comment above. It is also referred to as the "include both" edit, with RobertM using the phrase "whether human-like or not". Please read this in the above, and please accept that this is a 5-editor consensus edit which closed the RfC. Please follow Wikipedia policies and procedures and establish consensus here on Talk prior to further edits. RobertM is likely to support you in further discussion within this section below. @Robert McClenon: You presently hold the 5 (five) person consensus, as you acknowledged. You do not appear to have received as much credit as you deserve for having done this. You have held me to strict terms to support the Steelpillow edit and I have accepted those strict terms. You currently have a 5-editor consensus for the closing of the RfC and for continuing the discussion here in this section below. FelixRosch (talk) 14:58, 18 October 2014 (UTC)[reply]
  • I join the logical chorus in opposition to reference to AI aiming for anything "human-like" -- why not just as well mention "bird-like" or "dolphin-like"? Humans have a certain kind and degree of intelligence (on average and within bounds), but have many limitations in things such as calculating capacity, and many foibles such as emotions overriding reason, and the capacity to act as though things are true when we ought reasonably to know them to be false. It is not the aim of researchers to make machines as broken as men in these regards. DeistCosmos (talk) 16:59, 18 October 2014 (UTC)[reply]