Talk:Artificial intelligence

From Wikipedia, the free encyclopedia
Article milestones
August 6, 2009: Peer review (Reviewed)


Announcement to remove two arXiv papers

In general, the current article is well written, but it is a bit too long. Two arXiv papers referenced in the section "Basics" are problematic for an overview article:

  1. #85 Matti, D: Combining LiDAR space clustering and convolutional neural networks for pedestrian detection
  2. #86 Ferguson, Sarah: Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions.

The first is a 7-page PDF document published on arXiv. It describes a highly specialized approach to pedestrian detection with convolutional neural networks. The second is a proceedings paper from a robotics conference and contains a Gaussian forward model for a prediction problem.

Both papers were written for AI experts with substantial background knowledge. It is unlikely that a beginner course on robotics would use these papers to teach the subject to a wider audience. If no counter-arguments are provided, I will delete both papers in the near future. This will help reduce the article size.--ManuelRodriguez (talk) 12:04, 18 December 2020 (UTC)


I support your proposed deletion. The numbers on this suggest there is reference spamming here. You appear to have the expertise and discretion to find us another 30 to remove.  :-) North8000 (talk) 16:26, 18 December 2020 (UTC)

The two references mentioned are examples of primary sources. Primary means a research paper that can be cited by other researchers. For example, the first paper, "Combining LiDAR space clustering and convolutional neural networks for pedestrian detection", is of high quality, and according to Google Scholar it has been cited in 35 other papers. The problem is that primary sources are a poor choice for an encyclopedia, because they contain the ongoing debate between experts. The more suitable choice is a secondary source, for example "Russell/Norvig: AIMA" or Nils Nilsson's book on AI history. Both are referenced in the article, and even a high-ranking Wikipedia admin isn't allowed to remove such a secondary source.--ManuelRodriguez (talk) 04:56, 19 December 2020 (UTC)
I was thinking at a more basic reference-spamming level. Content should be there for the purpose of the article, not to provide an entrée for reference spamming. Also, references should be ones that have stature and recognition, not ones seeking to gain recognition or stature by being in Wikipedia. Sincerely, North8000 (talk) 23:38, 16 January 2021 (UTC)

Ethical Artificial Intelligence

Proposing a subject heading for the Artificial Intelligence article: Ethical Artificial Intelligence.

Ethical artificial intelligence is an area of artificial intelligence that deals with removing bias from AI algorithms. It is achieved by allowing transparency and review of the algorithms deployed by AI computing systems, and it enables greater trust in computing systems in everyday life. — Preceding unsigned comment added by NmuoMmiri (talkcontribs) 20:16, 18 January 2021 (UTC)

For the most modern approaches (machine / deep learning), there are no reviewable algorithms. North8000 (talk) 21:28, 18 January 2021 (UTC)

AI and the standardized list of censored words: LDNOOBW

Just a quick AfC suggestion. Wired ran a story about a list of 400+ censored words widely used to filter autocompletes (e.g., on GitHub and Shutterstock) and to limit corpora used in ML. The list is commonly referred to as the "List of Dirty, Naughty, Obscene, and Otherwise Bad Words" (LDNOOBW), and the article covers how its use (or that of similar lists) can impact inclusivity, block discussions, or limit access to important scientific, medical, or artistic content.[1] Zatsugaku (talk) 19:46, 4 February 2021 (UTC)
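As a rough illustration of how such a blocklist is typically applied in an ML data pipeline (this sketch is not from the Wired article; the list contents and corpus here are hypothetical placeholders):

```python
# Hypothetical sketch: dropping documents from a training corpus when any
# token matches a word blocklist, the way lists like LDNOOBW are commonly
# used. The blocklist entries below are placeholders, not the real list.
BLOCKLIST = {"badword1", "badword2"}

def is_clean(document: str) -> bool:
    """Return True if the document contains no blocklisted token."""
    tokens = {t.strip(".,!?").lower() for t in document.split()}
    return BLOCKLIST.isdisjoint(tokens)

corpus = [
    "a perfectly fine sentence",
    "this one contains badword1 somewhere",
]
filtered = [doc for doc in corpus if is_clean(doc)]
# filtered keeps only the first document
```

Note that this whole-document rejection is exactly the behavior the article criticizes: one flagged token discards the entire text, which is how legitimate scientific, medical, or artistic content gets excluded.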

conclusion DNF

"For the danger of uncontrolled advanced AI to be realized, the hypothetical AI would have to overpower or out-think all of humanity, ...." This is a false, overly strong statement. Consider an uncontrolled advanced AI working in a mine far from any city and perhaps not even connected to the outside world by Internet, hardwires or anything else. There are humans who work along side this AI machine, which has 'learned' to extract certain resources. Among the resources are silver and gold. As so often happens, the silver in the matrix dwindles to almost nothing; how-ever our AI machine is thirsty for more silver. Using its oh-so-refined talents, it notes the silver in the fillings in some of the humans' teeth. Uncontrolled, the AI machine extracts the silver more ruthlessly than any Old West dentist or bad hombre using string and a swinging door or even a blunt instrument aka a hammer. The trauma is certainly a danger realized, and yet this AI is nowhere near able to overpower or out-think all of humanity. So, can this sentence be redacted, please.Kdammers (talk) 05:08, 1 May 2021 (UTC)

Kate Crawford

Should Kate Crawford's Atlas of AI be mentioned, or at least given as a further-reading item? Kdammers (talk) 05:12, 1 May 2021 (UTC)

  1. ^ Simonite, Tom (4 Feb 2021). "AI and the List of Dirty, Naughty, Obscene, and Otherwise Bad Words". Wired. Retrieved 4 Feb 2021.