Talk:Artificial intelligence: Difference between revisions

::So I took a closer look. I think that the existing text does a good job, and is basing itself on / dealing with common meanings of the term. The general theme of your statement seems to want to take it away from "common meaning" being the standard to personal philosophical arguments by persons engaged in the field. For the summary in the lead I think that such would be a bad idea. I think that it would be fine in the article with some attribution e.g. "some authors and researchers say...." I think adding a single such attributed summary-type sentence to the lead would also be fine. Sincerely, <b style="color: #0000cc;">''North8000''</b> ([[User talk:North8000#top|talk]]) 14:08, 18 October 2021 (UTC)

:::One sentence in the lead could be fine. Another good place to add this kind of criticism in more detail is {{section link|Artificial intelligence|Applications}} or [[Applications of artificial intelligence]]. Basically, anywhere we cover [[AI effect]], which is another way the border of (AI vs. (not AI)) keeps moving.


:::Let me be clear, I'm definitely ''not'' opposed to covering this idea. It's just that it can't appear to be the consensus view. So I'm concerned about things like (1) the ''placement'' of what you're talking about, and (2) the attribution (as North8000 says, it needs a qualifier, e.g. "Leading robotics researcher [[Rodney Brooks]] argues that, on the contrary, ..."). ---- [[User:CharlesGillingham|CharlesGillingham]] ([[User talk:CharlesGillingham|talk]]) 14:39, 18 October 2021 (UTC)


{{reflist-talk}}

Revision as of 14:44, 18 October 2021

Template:Vital article

Article milestones
Date: August 6, 2009 · Process: Peer review · Result: Reviewed

Template:Outline of knowledge coverage Template:WikiEd banner shell
This article was the subject of Wiki Education Foundation-supported course assignments. Further details are available on the course pages.
  • Between 9 January 2020 and 18 April 2020. Student editor(s): Jimmyk578 (article contribs).
  • Between 25 August 2020 and 10 December 2020. Student editor(s): GFL123, Yjh5146 (article contribs). Peer reviewers: Kbrower2020.
  • Between 13 October 2020 and 4 December 2020. Student editor(s): Bcasano (article contribs).
  • Between 6 September 2020 and 6 December 2020. Student editor(s): CaptainJoseph (article contribs).
  • Between 17 May 2021 and 31 July 2021. Student editor(s): Scarpinojosh (article contribs).


Bloat project

I can't claim to have analyzed every detail but am impressed by your discussion in talk and urge you to keep boldly editing. North8000 (talk) 03:50, 5 July 2021 (UTC)[reply]

I've made a pass through the entire article at this point, except for "Future of AI". I've copyedited almost every paragraph for brevity. If I had to cut deeply, I posted the stuff I cut in this section. ---- CharlesGillingham (talk) 11:15, 22 September 2021 (UTC)[reply]
@CharlesGillingham: Hard to review such an overwhelming amount of work but overall it looks good and so cool that you are doing it. North8000 (talk) 14:08, 23 September 2021 (UTC)[reply]
Okay, now I have copyedited the entire article for brevity. Next: (1) I will give the same treatment to the sub-article for "Tools". (2) I will restore the tools section to the article. (3) I will find a place for most of the material I cut in various sub-articles. That's the plan.
I know this is a lot of big changes, so please, discuss any concerns you have here. --- CharlesGillingham (talk) 07:45, 24 September 2021 (UTC)[reply]
The article is now 17 pages of content, 45 pages all together. On 30 June 21, it was 27 pages, 62 total. The "Tools" section, as it is now, is a bit more than 7 pages of content, so I'm going to pull it back tonight, and give it the same treatment over the next week. ---- CharlesGillingham (talk) 04:27, 26 September 2021 (UTC)[reply]
Okay, I think I am pretty much done. The only sections I didn't hit were the two sections on deep learning -- these are too much of a mess. They will have to wait until we can update the article based on Russell & Norvig's 4th edition. Citation format is standardized for the whole article. ---- CharlesGillingham (talk) 12:00, 28 September 2021 (UTC)[reply]
I'm in the process of finding a home for everything I cut. The stuff that has been used elsewhere in Wikipedia is marked below. --CharlesGillingham (talk) 06:43, 1 October 2021 (UTC)[reply]
All that's left to do: the sub-sections on deep learning, integrating and saving material from the old Basics section. ---- CharlesGillingham (talk) 23:28, 6 October 2021 (UTC)[reply]
This project is complete. The cut material is now at Talk:Artificial intelligence/Where did it go? 2021. ----CharlesGillingham (talk) 22:21, 16 October 2021 (UTC)[reply]

Minor typo


  • What I think should be changed: Under Statistical AI, in the phrase: "By 2000, solutions developed by Ai researchers...", please change "Ai" to "AI".
  • Why it should be changed: AI is an initialism, so this is probably a typo.
  • References supporting the possible change (format using the "cite" button):

BenjaminVincentCho (talk) 07:57, 21 September 2021 (UTC)[reply]

Done for you. Princess Persnickety (talk) 08:30, 21 September 2021 (UTC)[reply]


Posing questions as a style?

There is a serious problem with the wholesale revision and rewriting of this article. That is the use of questions to begin sections. This is not an exploratory white paper; it is an article that attempts to address issues and facts--as they exist--within the scope of an encyclopedia entry. This diversion from norms, including reliance on papers that support only one perspective or interpretation, threatens this article. I appreciate an editor wanting to clean it up; rewriting and reinterpreting years of input is not quite in keeping with the community who have contributed. I will begin changing these "stylistic" modifications as I find them. Andreldritch (talk) 02:18, 27 September 2021 (UTC)[reply]

I don't understand the questions part....I saw zero questions in the entire article. On the other points, I've not analyzed the changes. Certainly, we don't want the article to give undue weight to any unique / specialty perspective. And the main text should reflect mainstream views/info. This article had a lot of reference spamming and self-promotion material. Having given the changes only a vague overview, my impression is that the changers have reduced that problem. We want to give close scrutiny to (re)introductions regarding that issue. North8000 (talk) 12:06, 27 September 2021 (UTC)[reply]
I removed half a dozen questions that actually began sections and rewrote them as declaratives shortly after my post (you can see them in the change history). In some cases, I probably should have just removed the entire line of inquiry. It does seem that a lot of this article has taken on a single-minded slant over the past week in that whole sections are being reconfigured and in some cases eliminated with little explanation. Bears close watching IMHO. Andreldritch (talk) 17:07, 27 September 2021 (UTC)[reply]
Cool on getting rid of the questions. On the other stuff, I've not analyzed it enough to have an opinion. North8000 (talk) 17:32, 27 September 2021 (UTC)[reply]
I realize that a big edit like this makes people nervous. I've been doing this since 2007 when big edits were the "norm".
To your specific points:
(1) "reliance on papers that support only one perspective" Most of the citations in this article are to the most reliable secondary sources available: the four leading AI textbooks (at the time this was written) and the two most respected histories of AI. In order to avoid any WP:UNDUE weight, we did a careful survey of these sources. See Talk:Artificial intelligence/Textbook survey. (Now, I realize that these sources have gone a bit out of date at this point, and we have to address that eventually, but that's not the point.) This article does not rely on papers that support only one perspective. As you can see from my original post above, I was motivated to take this article on because someone eliminated the most essential section of the article. My edits are mostly intended to focus the article back on the topic, return the essential section and remove some of the non-essential things that have accumulated over the years. I'm trying to remove WP:UNDUE, I'm not introducing it. If you have a specific example of WP:UNDUE, let me know — there may be something, and we can talk about it, and we'll fix it.
(2) "Eliminated with little explanation" Please see above -- everything I cut is saved above and I made an argument to the community about why I think it should be cut. If you disagree, then please -- make an argument above. If you're right and I'm wrong, we'll fix it. Bold/Revert/Discuss/Fix.
(3) "Remove the line of inquiry" This would be a mistake, because these philosophical issues are all discussed in Chpt. 1 of Russell & Norvig, or in the philosophy section toward the back. In fact, I've moved the organization a few steps closer to R&N's presentation of the topic. Every sentence in the philosophy section represents a body of literature that includes thousands of philosophy & AI publications.
I fixed the rest of the questions in the philosophy sections. Your edit was very helpful, but you got a few things wrong, so I had to take another pass at it. I added plenty of quotes, for a nice he said/she said form — that is, instead of a question, now we have the two conflicting answers. By the way, those questions have been in the article for years. I just moved them out of the old Approaches section and put them in philosophy.
The article is getting much, much better this week. Have faith. ---- CharlesGillingham (talk) 11:16, 28 September 2021 (UTC)[reply]
If you are concerned about my edits, take a look above at #From neural networks — I documented precisely what I did. Please assume good faith. ---- CharlesGillingham (talk) 15:53, 29 September 2021 (UTC)[reply]
Weighing in, a bit late: Certainly, good faith is assumed by all (I presume). I have a concern in many AI articles about reliance on one source for a preponderance of citations in supporting what is included in an article on something as nuanced as AI. It was Pamela McCorduck in the history of AI, and now it's Russell & Norvig here. R&N are cited more than 100 times in this article; the next nearest citation count is (by my estimate) 37, or about a third of the R&N count. (Same was true of McCorduck in history--it began to read like Machines Who Think). Relying on one source makes this feel like we might as well just publish the R&N textbook and be done with it.
Cuts are all well and good, as is streamlining for concision, but eliminating everything that doesn't occur in your preferred sources skews the article, and eliminates other perspectives. I think that is what occurred when you eliminated the listing of apps (which has apparently been placed back in another section). This is not just about theory or philosophy; credible apps that fit the definitions (even if the definitions are fleeting/transitory) should be part of what AI "is" -- since practical and commercial application are part of what helps to shape the evolution of AI--not just textbook revisions.
Who is Bernard Goetz? Since this isn't a white paper, some individuals should be given reasons/support for their inclusion. Again, this is WIKIPEDIA, not a graduate class in AI thought. I truly believe that this article is losing its accessibility for all in striving for academic relevance. TrainTracking1 (talk) 17:45, 29 September 2021 (UTC)[reply]
{{subst:DNAU|User:TrainTracking}} I think you're misunderstanding the role of WP:SECONDARY sources here. We, as Wikipedia editors, are not qualified to decide what is/isn't an essential topic about a huge subject like AI. We rely on secondary sources to determine what must be mentioned and what is unnecessary -- in other words, the secondary source determines for us the WP:RELEVANCE and the WP:NPOV in an empirical way. This article relies mostly on the best secondary sources we have, as I've explained above.
Your understanding of secondary sources is exactly backwards: we WANT to have 157 citations to a reliable secondary source and only a handful to primary sources or less reliable sources. That shows that all the material here probably belongs here. The number of sources is completely irrelevant, and a very poor measure of WP:NPOV. The quality of the sources is what counts, not the number.
Citations of individual papers are typically about irrelevant material. There are literally millions of papers about AI, and most of them are about topics that are too irrelevant for this, the top level, introductory article about AI. If the best source you could find was an academic paper or a newspaper article, chances are you're writing about something that doesn't belong here. If the source is the standard AI textbook, then we can be fairly sure that the contribution is WP:RELEVANT. It's ridiculous to criticize the article on the basis of some global notion about an even density distribution among the sources. Exactly backwards.
I'm not slanting the article towards any point of view (except R&N's point of view, which is a reliable source). Please stop suggesting I'm writing a "white paper" --- that's bad faith. And by the way, my recent edits didn't introduce all the R&N citations you're talking about -- they've been here since 2008. I may have removed a bunch of random citations to newspaper articles and people's vanity papers and so on, but only because they were attached to irrelevant material. I've given you ample information on the talk page above to precisely understand my edits. If there's something you disagree with, let's get into the weeds and fix it. Happy to do that. (I agreed with whoever cut all those McCorduck quotes, by the way.)
Ben Goertzel is a founder of the 21st century field of artificial general intelligence. If you see Bernard Goetz in the article, that's probably a typo. Let's fix that. Bernard Goetz was a vigilante who shot four people on a New York subway train. ---- CharlesGillingham (talk) 00:25, 30 September 2021 (UTC)[reply]
No one is questioning the good faith of the edits, nor the importance of streamlining a formerly unwieldy article. As a reader and editor, I think numerous references from a single source become a "forest for the trees" issue--as in, whose lens is shaping the perspective. R&N are a good source, no argument. There are others, however, and they should be considered as part of the rewrite--which seems to be a top down effort. I view this as akin to using Woodward as the definitive voice on the Nixon presidency because he is the most cited; the latter is not a validation of the former, although there is no doubt the source is valid. There are others equally valid. I'm not going to argue each citation, and I will add my own shortly as per your suggestion. I believe looking at the article with a view to making it accessible is important, whether or not one agrees with my assessment of it as approaching academic white paper territory. Also, the name "Bernard Goetz" led the section on AGI. I removed it--an egregious oversight on all our parts that it even existed--and did not replace it with Ben Goertzel, since he is simply one of many who popularized the AGI concept (it predates CYC, for example), but did not create it. TrainTracking1 (talk) 02:59, 30 September 2021 (UTC)[reply]
Please feel free to add more sources. You'll notice that all the essential points in the article have 3 or 4 sources (two or three textbooks, and sometimes a few more specific sources). It's easy to add a few more if you like.
On "AGI". Before 2002, AI sort of was AGI -- in the early days especially, there was no need for a distinction. In 2002, Goertzel asked Shane Legg to help him come up with a term to describe what Ray Kurzweil was calling "Strong AI" (which is a terrible term, because the philosophy of mind had already been using it for thirty years to describe something different). Legg actually came up with the term, but it was Goertzel who introduced it to the world, and it was the title of his 2005 book. It really kickstarted the whole movement as a serious academic/industrial enterprise in 2005. That's the story. ---- CharlesGillingham (talk) 04:39, 30 September 2021 (UTC)[reply]
Actually, I have to say a bit more. Most of the main points in the article have citations to several textbooks. Look at all those bullet-listed, bundled references! I don't think there is another article in Wikipedia that made more of an effort to avoid the problem you're accusing it of. We use seven main sources, when Wikipedia really only requires one, and, what's more, we went to the trouble to prove they are the best sources. The incredible number of bullet lists in the citations proves that there is wide agreement about what belongs here and what doesn't. This article has been sourced in a way that is supposed to prove to you that most of the content is from a WP:NPOV and is WP:RELEVANT.
The contributions you should be worrying about are not the ones with the bullet-listed, bundled citations to several textbooks. You should be worried about the citations to academic papers -- they could be self-promotion, or something someone googled and never read, after they already knew what they were going to say. That's the problem with primary sources or magazine articles. It's exactly the opposite of what you're thinking. ---- CharlesGillingham (talk) 07:20, 1 October 2021 (UTC)[reply]
TrainTracking beat me to the punch on my replies as I was about to hit "post." So I'll stick with them. Outside of these comments, I think the article overlooks commercial developments of AI, which--as mentioned--do help shape evolution (or at least the flow of much research from within corporate domains, which could have been ignored in the 80s, but cannot be now with Google, Amazon, et al spending billions). I don't have a problem with R&N, but wonder why there are no references to early substantive works by Winston, Barr, et al. Surely these would prove useful as comparative studies. NB: I think Bernard Goetz looks like a bad bot tag. Andreldritch (talk) 17:56, 29 September 2021 (UTC)[reply]
{{subst:DNAU|User:Andreldritch}} Fair enough.
On corporate contributions: there are a few half-paragraphs in history that talk about the current boom and the size of the current investment. Successful corporate applications also get a paragraph or two in "applications". I also just added sections to the applications of AI article to try to cover what's been happening in the 2010s. Have a look at that article and see what you think needs to be done.
By the way, Peter Norvig is the author of our main source, and is also the director of AI at Google. So, as you can imagine, there's a lot of overlap right now between "academic" and "industrial" AI research — it's the same people. The best people in academia are being offered seven figure salaries in Silicon Valley.
Winston doesn't quite make the cut -- you realize we only have about two or three paragraphs for the 70s and 80s. We only have room to mention a handful of people -- we get (I think) Minsky, Brooks, Hinton, Moravec, Feigenbaum, each of whom can be tied to a decades-long historical movement. Is there some topic we should tie him to? Same question for Barr.
If you want to contribute something about Winston, feel free, or let's discuss it under a different header. The subject of this discussion was "don't introduce a controversy with a question", which has been a problem here for ten years and is now fixed.
I should say, I am quite reluctant to just add more researchers -- there are obviously hundreds we could mention, but they're only useful as an innovator of something, or a founder of a school of something, or as a spokesman for a particular technique or critique or something, and then only if the "something" is notable enough to merit inclusion. This article has always had a problem with people promoting particular researchers who are notable, but not notable enough for the top level article. Russians always want to add Russians, MIT graduates want to add MIT people, and so on. And we always have problems with self-promotion.
Again, these issues have nothing to do with my recent edits. I didn't cut anything about Winston, as far as I know. Attributing this problem to me only because I did a lot of editing is bad faith. Please study my notes above and find specific things that I did that you think need discussion. But please, let's discuss it up above where I wrote down everything I did, so we know we're talking about something I actually did. ---- CharlesGillingham (talk) 00:25, 30 September 2021 (UTC)[reply]

Pretty soon I'm going off the grid for 10 days and wanted to leave a few comments standing through that period. I think that the article has recently evolved much for the better. I'm not saying that every change was good because I don't know. Previously it had a lot of patchy statements put in for self-promotion and reference-spamming purposes. It took a lot of much-needed work to move forward on those issues and I am opposed to any backsliding in that area. I don't have the depth of analysis or knowledge about authors to comment on the issues raised in this talk section. Certainly major commercial developments are VERY important, as they represent AI actually in use vs. academic. North8000 (talk) 18:47, 29 September 2021 (UTC)[reply]

One last thing: As you can tell by my edits, I am a big fan of organized writing. Can we please start new headers for new topics. There are about eight different topics in here, and no one else is going to be able to figure out what we're talking about. ---- CharlesGillingham (talk) 06:48, 1 October 2021 (UTC)[reply]

Addition of a neuromorphic computing section to this article

I'd like to ask for consensus to add a section on specialized AI hardware.

The rationale for this is that talking about a type of software without the corresponding hardware makes the subject incomplete.

There are already Wikipedia articles on this subject, which I will list below.

RJJ4y7 (talk) 00:12, 11 October 2021 (UTC)[reply]

Maybe, but it would have to be short -- just a sentence, really. (As you probably noticed, I just carefully edited the article from 34 pages of main text down to 21 pages, which is still WP:TOO LONG.) You could add a full paragraph one level down, perhaps in artificial neural network or even machine learning. Another good choice would be to create an AI hardware or Specialized hardware for artificial intelligence article and do a full treatment there (to do it right, you would also add section headers for Lisp machine and all other specialized hardware you know about, each with the template {{expand section}}. Look at the current state of Applications of AI.) Then your one-sentence mention in AI could link to something more complete. What do you think? I'm really just encouraging you to think about the big picture, to try to see that we have the right level of detail in the right article. ---- CharlesGillingham (talk) 18:29, 11 October 2021 (UTC)[reply]
{{subst:DNAU|User:RJJ4y7}} Have a look at AI accelerator (which is poorly named). It seems to me that this could be expanded to a more complete and comprehensible article Hardware for artificial intelligence. However, this is not my area of expertise, so I'm reluctant to take it on. Would you consider looking into it? ---- CharlesGillingham (talk) 18:19, 12 October 2021 (UTC)[reply]
Actually, I'm going to go ahead and create the structure I'm thinking of. I'll add a section to this article, and create the hardware article I'm talking about. Please have a look and see if this will work for you. ---- CharlesGillingham (talk) 18:22, 12 October 2021 (UTC)[reply]


I think a summary type section on specialized AI hardware would be good in this article. My thought would be a few sentences with lots of links to other articles. North8000 (talk) 11:50, 13 October 2021 (UTC)[reply]
I'll try to expand the section that CharlesGillingham created, being careful not to make it too long. I also don't want to do anything that will add more confusion to a subject that is not commonly known. The information already available on Wikipedia is scattered/unorganized, and I'm mainly working to fix that. To give some background, I'll say that AI hardware is basically divided into two groups. On one side is von Neumann hardware designed to do vector (matrix) number crunching, such as the Graphics processing unit and the Tensor Processing Unit; however, these are not very efficient computationally (they need to be trained for a long time) and require a lot of power to be useful. On the other hand, neuromorphic computers and other physical network implementations try to take the computational load off the computers by giving the hardware itself a neural network structure, so only the function of the network has to actually be calculated. Beyond the reduction in power use, it's been found that these devices have the property of One-shot learning, or the ability to be trained (to learn) by a few, or in extreme cases only one, example. The disadvantage, though, is that most such devices are application-specific, although there is work to try and make a universal neuromorphic device. RJJ4y7 (talk) 17:01, 13 October 2021 (UTC)[reply]
{{subst:DNAU|User:RJJ4y7}} Hardware for artificial intelligence is currently a brand-new WP:STUB and the plan is for people like you to fill the article in with all the stuff you have that we can't cover in this article.
Forget the section, which only needs a one-sentence summary of the article, which we can rewrite if necessary once Hardware for artificial intelligence is a bit more mature. Trying to keep this article down to 20 pages. ---- CharlesGillingham (talk) 00:40, 14 October 2021 (UTC)[reply]

Ok, I see; we will leave the section alone for now. Though I don't know what the advantage of Hardware for artificial intelligence is over AI accelerator, which is the currently accepted term. I guess AI accelerator usually refers to von Neumann technology and excludes neuromorphic tech (which, by the way, is becoming more popular by the year); furthermore, Neuromorphic engineering and Physical neural network are also about the same thing. I guess in all such cases the articles of focus should be the ones that increase understandability and organization. As I said before, my goal is to clarify this subject of AI hardware as much as possible.

For now I'll do as follows: 1) expand Hardware for artificial intelligence; 2) until now I've focused on the neuromorphic computing article, so I'll try to work more on physical neural network since it is a more understandable term; 3) after all this we can discuss rewriting the section if necessary.

Where did it go? (On the big copy-edit in the fall of 2021)

This summer and fall, I have copy-edited the entire article for brevity (as well as better organization, citation format, and a non-technical WP:AUDIENCE). The article is down from its peak of 34 text pages to about 21 or so. Most of this savings was from copy-editing for tighter prose and better organization, but there was a good deal of stuff that was cut. I tried to move as much material as I could down to sub-articles like existential risk of AI or machine learning and so on. I've documented exactly where everything I cut has been moved to, and indicated the things I couldn't find a place for (or were otherwise unusable). You can see exactly where this material went here: Talk:Artificial intelligence/Where did it go? 2021. ---- CharlesGillingham (talk) 00:52, 14 October 2021 (UTC)[reply]

Thanks for your hard work. I think many of these topics are only remotely related to AI. AXONOV (talk) 18:51, 16 October 2021 (UTC)[reply]

Steve Wozniak & modern "AI"

There is an interesting opinion of Steve Wozniak on the modern-day mockery of actual Artificial Intelligence, which you can find here (wind to 2:20). He basically criticized the modern term "AI" being used for smart information-association software (or hardware) as not even close to what Intelligence means. Probably worth a look. AXONOV (talk) 18:48, 16 October 2021 (UTC)[reply]

Intro

I propose to rewrite the following excerpt from the intro section and stress that many modern-day technologies like speech and image recognition (machine learning etc.) algorithms have little to do with actual AI, or intelligence at all, so as not to confuse/mislead/perplex readers. I also propose to keep the terminology differentiated. There are a couple of nice sources to start with.[1][2]

References

  1. ^ "Artificial Intelligence vs. Machine Learning vs. Deep Learning: What's the Difference? | Built In". builtin.com. Retrieved 2021-10-16.
  2. ^ Iriondo, Roberto (2021-04-02). "Machine Learning (ML) vs. Artificial Intelligence (AI) — Crucial Differences". Medium. Retrieved 2021-10-16. Afterward, organizations attempted to separate themselves with the term AI, which had become synonymous with unsubstantiated hype and utilized different names to refer to their work. For instance, IBM described Deep Blue as a supercomputer and explicitly stated that it did not use artificial intelligence [10], while it did [23].

AXONOV (talk) 19:10, 16 October 2021 (UTC)[reply]

Not sure if I understand what you are proposing. Machine learning is in the scope of the academic field of artificial intelligence. You can certainly find sources that argue maybe it shouldn't be, but that doesn't change the fact that it is currently categorized that way. For example, courses with the title "artificial intelligence" spend some of their time on "machine learning", corporations who are putting money into machine learning announce that they are "investing in AI", and so on. These are real facts about the real world that the article must reflect; we can't report the fringe view. (Even if we personally think the fringe view is correct.)
There is a place in the article where this kind of discussion is relevant (it's Artificial intelligence § Philosophy), or, better still, in the article Philosophy of AI ---- CharlesGillingham (talk) 21:28, 16 October 2021 (UTC)[reply]
@CharlesGillingham: The above excerpt basically says that smart web search, speech, and image recognition are the same thing as AI in the sense of application. I propose to explicitly state that it has nothing to do with AI or intelligence. Smart web search prompts or character recognition are the extremes of what modern computers are able to do. It's nowhere close to what "intelligence" means. I strongly disagree with giving preference to a commercial POV (on the said technologies) over a more critical, scientific, or even philosophical one. AXONOV (talk) 07:33, 17 October 2021 (UTC)[reply]
Your argument is sound and valid, and there are several notable analysts and commentators who agree with you (including Rodney Brooks, Noam Chomsky and others).
However, the definition of "intelligence" they are using is slippery, which is why leading researchers have found other ways to define "artificial intelligence" that are more precise (carefully read artificial intelligence § Defining artificial intelligence). These definitions are "widely accepted by the field" (according to the leading AI textbook, Russell and Norvig); the definition given in the first paragraph of the article is the consensus view.
As editors, we have to prioritize the most widely accepted, consensus point of view. In science, there is always some dissent, and there is a place for that in Wikipedia. The point of view you are talking about has a place in Wikipedia, as I said above, but it is not in the second or third paragraph of the most introductory article on the topic.
More to the point, the first section has to use a definition that is coherent with things like newspaper articles, university course syllabi, book store sections, textbook titles, corporate announcements, national agendas and the like. In other words, the most useful definition is sociological, not logical -- "AI" is whatever most sources mean when they say "AI". The issues you bring up will not help the reader to understand all these ordinary real world things. ---- CharlesGillingham (talk) 18:24, 17 October 2021 (UTC)[reply]
@CharlesGillingham: In order not to waste time, I propose to stick to the sources. I request more sources for the excerpt above, as I didn't find those in the body of the article. Would appreciate it if you brought them over here. Currently it seems a bit WP:ORish. AXONOV (talk) 10:46, 18 October 2021 (UTC)[reply]
@CharlesGillingham: I think we should be wary of sources like the NYT that tend to hype certain technologies, especially medical ones.
The article's body says that

[...] AI is used in search engines (such as Google Search), [...][11:01, October 18, 2021]

but the source is a bit more cautious:

[...] The reality, however, is a little less dramatic. The automated voice on your smartphone that tries to answer your questions? That’s a type of A.I. So are features of Google’s search engine." [...] [1]

It's as vague as it could get, and I oppose this juggling that leaks into the intro. I propose to clearly state (by using fresh sources) in the lead that there are different views on what defines A.I.: some (experts) say that it's an advanced thinking machine comparable to the human brain, and others (journos etc.) say that it's a talking toaster. --AXONOV (talk) 11:21, 18 October 2021 (UTC)[reply]

Not implying that any of the above efforts are doing otherwise, but also remember that the lead is supposed to be a summary of what is in the body of the article. I'd also like to reinforce that it is a term and its meaning is defined by the common meanings of the term rather than anything else. North8000 (talk) 19:53, 17 October 2021 (UTC)[reply]

@North8000: Are you supporting or opposing my proposal? I agree that MOS:INTRO should be enforced, but currently I don't see sources supporting the excerpt above. See my reply above. AXONOV (talk) 10:48, 18 October 2021 (UTC)[reply]
Well, my previous post was just making a few points without intending to weigh in on your proposal. Based on your ping I took a closer look. Your proposal is basically saying which items you are proposing changing and the general theme of your intended changes. That's fine, but it should be understood that you did not make a specific editing proposal.
So I took a closer look. I think that the existing text does a good job, and is basing itself on / dealing with common meanings of the term. The general theme of your statement seems to want to take it away from "common meaning" being the standard to personal philosophical arguments by persons engaged in the field. For the summary in the lead I think that such would be a bad idea. I think that it would be fine in the article with some attribution e.g. "some authors and researchers say...." I think adding a single such attributed summary-type sentence to the lead would also be fine. Sincerely, North8000 (talk) 14:08, 18 October 2021 (UTC)[reply]
One sentence in the lead could be fine. Another good place to add this kind of criticism in more detail is Artificial intelligence § Applications or Applications of artificial intelligence. Basically, anywhere we cover AI effect, which is another way the border between AI and not-AI keeps moving.
Let me be clear, I'm definitely not opposed to covering this idea. It's just that it can't appear to be the consensus view. So I'm concerned about things like (1) the placement of what you're talking about, and (2) the attribution. As North8000 says, it needs a qualifier, e.g. "Leading robotics researcher Rodney Brooks argues that, on the contrary, ...". ---- CharlesGillingham (talk) 14:39, 18 October 2021 (UTC)[reply]

References

  1. ^ Lohr, Steve (2016-02-28). "The Promise of Artificial Intelligence Unfolds in Small Steps". The New York Times. ISSN 0362-4331. Retrieved 2021-10-18.