Artificial intelligence is within the scope of WikiProject Robotics, which aims to build a comprehensive and detailed guide to Robotics on Wikipedia. If you would like to participate, you can choose to edit this article, or visit the project page (Talk), where you can join the project and see a list of open tasks.
This article is within the scope of WikiProject Technology, a collaborative effort to improve the coverage of technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
This article is within the scope of WikiProject Linguistics, a collaborative effort to improve the coverage of Linguistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Software, a collaborative effort to improve the coverage of software on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Under section "Predictions and Ethics" is the following:
"In the 1980s artist Hajime Sorayama's Sexy Robots series were painted and published in Japan depicting the actual organic human form with life-like muscular metallic skins and later "the Gynoids" book followed that was used by or influenced movie makers including George Lucas and other creatives. Sorayama never considered these organic robots to be real part of nature but always unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form."
This information is about robotic art and does not belong under "Predictions and Ethics"; it should be relocated to an appropriate article or deleted.
In the same paragraph occurs the following information:
"Almost 20 years later, the first AI robotic pet, AIBO, came available as a companion to people. AIBO grew out of Sony's Computer Science Laboratory (CSL). Famed engineer Toshitada Doi is credited as AIBO's original progenitor: in 1994 he had started work on robots with artificial intelligence expert Masahiro Fujita, at CSL. Doi's, friend, the artist Hajime Sorayama, was enlisted to create the initial designs for the AIBO's body. Those designs are now part of the permanent collections of Museum of Modern Art and the Smithsonian Institution, with later versions of AIBO being used in studies in Carnegie Mellon University. In 2006, AIBO was added into Carnegie Mellon University's "Robot Hall of Fame"."
This information is about robotic history, and does not apply to "Predictions and Ethics". I advise relocating or deleting it. Belnova (talk) 06:33, 2 April 2014 (UTC)
I think a high-level listing of AI's goals (from which more specific Problems inherit) is needed; for instance "AI attempts to achieve one or more of: 1) mimicking living structure and/or internal processes, 2) replacing a living thing's external function, using a different internal implementation, 3) ..." At one point in the past, I had 3 or 4 such disjoint goals stated to me by someone expert in AI. I am not, however. DouglasHeld (talk) 00:11, 26 April 2011 (UTC)
We'd need a reliable source for this, such as a major AI textbook. ---- CharlesGillingham (talk) 16:22, 26 April 2011 (UTC)
I argue that this is a WP:Summary article of a large field, and that therefore it is okay that it runs a little long. Currently, the article text is at around ten pages, but the article is not 100% complete and needs more illustrations. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)
Main illustration doesn't provide an actual example of an Artificial Intelligence, just a robot capable of mimicking human actions in a certain area (namely, sport) — Preceding unsigned comment added by 188.8.131.52 (talk) 15:37, 4 August 2011 (UTC)
So, after learning a little, am I perhaps being too cynical/suspicious in suspecting this is a clever means toward tenure?
[How] Does the quality of the research/papers bear on its inclusion in Wikipedia? (less picky for "Notes"??)
Since you are asking (in part) about procedures here, I'll fill you in. Yes you should WP:BE BOLD whenever possible. For well watched articles (such as this one), you will be reverted if your edit is terrible.
I haven't looked into the issues that you raise, but I would research them carefully before proceeding, because it's important to WP:ASSUME GOOD FAITH. If you're killing something because you think it is someone's self-promotion, then the onus of proof is on you. ---- CharlesGillingham (talk) 04:09, 18 December 2013 (UTC)
I finally got around to noticing your edits. Yes, there is definitely something wrong with the thing you struck out -- it seems weird that anyone would deny the role of sub-symbolic reasoning in 2014 unless they don't know what they are talking about, especially after popular books such as Gladwell's Blink or Kahneman's Thinking, Fast and Slow have brought together such a huge body of evidence. ---- CharlesGillingham (talk) 20:39, 18 January 2014 (UTC)
'Strong AI' seems to be used ambiguously for a number of different theses or programs, from reductionism about mind to the computational theory of mind to reductionism about semantics or consciousness (discussed in Chinese room) to the creation of machines exhibiting generally intelligent behavior. The last of these options is the topic of the article we're currently calling 'Strong AI'. I've proposed a rename to Artificial general intelligence at the Talk page. What do you all think? -Silence (talk) 23:57, 11 January 2014 (UTC)
This IP seems to be trying to add plugs to recent articles, by adding a paragraph on semantic comprehension with reference to Deep Blue vs. Kasparov, or by adding links in the body of the article without plain text. This isn't really appropriate at this level of article - AI is meant for a general overview, and not to promote one of the many thousands of attempts to define intelligence. Leondz (talk) 11:52, 9 March 2014 (UTC)
I object to the phrase "human-like intelligence" being substituted here and elsewhere for "intelligence". This is too narrow and is out of step with the way many leaders of AI describe their own work. This only describes the work of a small minority of AI researchers.
AI founder John McCarthy (computer scientist) argued forcefully and repeatedly that AI research should not attempt to create "human-like intelligence", but instead should focus on creating programs that solve the same problems that humans solve by thinking. The programs don't need to be human-like at all, just so long as they work. He felt AI should be guided by logic and formalism, rather than psychological experiments and neurology.
Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
Stuart Russell and Peter Norvig (authors of the leading AI textbook) dismiss the Turing Test as irrelevant, because they don't see the point in trying to create human-like intelligence. What we need is the intelligence it takes to solve problems, regardless of whether it's human-like or not. They write "airplanes are tested by how well they fly, not by how well they can fool other pigeons into thinking they are pigeons."
They also object to John Searle's Chinese room argument, which claims that machine intelligence can never be truly "human-like", but at best can only be a simulation of "human-like" intelligence. They write "as long as the program works, [we] don't care if you call it a simulation or not." I.e., they don't care if it's human-like.
Russell and Norvig define the field in terms of "rational agents" and write specifically that the field studies all kinds of rational or intelligent agents, not just humans.
AI research is primarily concerned with solving real-world problems: problems that require intelligence when they are solved by people. AI research, for the most part, does not seek to simulate "human-like" intelligence, unless doing so helps to achieve this fundamental goal. Although some AI researchers have studied human psychology or human neurology in their search for better algorithms, this is the exception rather than the rule.
I find it difficult to understand why we want to emphasize "human-like" intelligence. As opposed to what? "Animal-like" intelligence? "Machine-like" intelligence? "God-like" intelligence? I'm not really sure what this editor is getting at.
I will continue to revert the insertion "human-like" wherever I see it. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)
Completely agree. The above arguments are good. Human-like intelligence is a proper subset of intelligence. The editor seems to be confusing "Artificial human intelligence" and the much broader field of "artificial intelligence". pgr94 (talk) 10:12, 11 June 2014 (UTC)
One more thing: the phrase "human-like" is an awkward neologism. Even if the text was written correctly, it would still read poorly. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)
To both editors, WP:MOS requires that the Lead section only contain material which is covered in the main body of the article. At present, the five items which you outline above are not contained in the main body of the article but only on Talk. The current version of the Lead section accurately summarizes the main body of the article in its current state. FelixRosch (talk) 14:54, 23 July 2014 (UTC)
Neither the article nor any of the sources defines AI by using the term "human-like" to specify the exact kind of intelligence that it studies. Thus the addition of the term "human-like" absolutely does not summarize the article. I think the argument from WP:SUMMARY is actually a very strong argument for striking the term "human-like".
I still don't understand the distinction between "human-like" intelligence and the other kind of intelligence (whatever it is), and how this applies to AI research. Your edit amounts to the claim that AI studies "human-like" intelligence and NOT some other kind of intelligence. It is utterly not clear what this other kind of intelligence is, and it certainly does not appear in the article or the sources, as far as I can tell. It would help if you explain what it is you are talking about, because it makes no sense to me and I have been working on, reading and studying AI for something like 34 years now. ---- CharlesGillingham (talk) 18:23, 1 August 2014 (UTC)
Also, see the intro to the section Approaches and read footnote 93. This describes specifically how some AI researchers are opposed to the idea of studying "human-like" intelligence. Thus the addition of "human-like" to the intro not only does not summarize the article, it actually claims the opposite of what the body of the article states, with highly reliable sources. ---- CharlesGillingham (talk) 18:34, 1 August 2014 (UTC)