Wikipedia:Articles for deletion/Friendly artificial intelligence: Difference between revisions

From Wikipedia, the free encyclopedia

Revision as of 01:59, 30 March 2014

Friendly artificial intelligence

Since the subject appears to be non-notable and/or original research, I propose to delete the article. Although the general issue of constraining AIs to prevent dangerous behaviors is notable, and is the subject of Machine ethics, this article mostly deals with "Friendliness theory", "Friendly AI theory", or "Coherent Extrapolated Volition", which are neologisms that refer to concepts put forward by Yudkowsky and his institute, and which have not received significant recognition in academic or otherwise notable sources.

Strong Keep. The I. J. Good / MIRI conception of posthuman superintelligence needs to be critiqued, not deleted. The (alleged) prospect of an Intelligence Explosion and a nonfriendly singleton AGI has generated much controversy, both on the Net and elsewhere (e.g. the recent Springer Singularities volume). Several of the external links need updating. --Davidcpearce (talk) 21:38, 28 March 2014 (UTC)[reply]

Note: This debate has been included in the list of Social science-related deletion discussions. • Gene93k (talk) 23:28, 28 March 2014 (UTC)[reply]
Note: This debate has been included in the list of Computing-related deletion discussions. • Gene93k (talk) 23:28, 28 March 2014 (UTC)[reply]

Comment (I'm the user who proposed the deletion) There is already the Machine ethics article covering these issues. The Friendly artificial intelligence article is almost entirely about specific ideas put forward by Yudkowsky and his institute. They may be notable enough to deserve a mention in Machine ethics, but not an article of their own. Most of the references are primary sources such as blog posts or papers self-published on MIRI's own website, which don't meet reliability criteria. The only source published by an independent editor is the chapter written by Yudkowsky in the Global Catastrophic Risks book, which is still a primary source. The only academic source is Omohundro's paper which, although related, doesn't directly address these issues. As far as I know, other sources meeting reliability criteria don't exist. Moreover, various passages of this article seem highly speculative, are not clearly attributed, and may well be original research. For instance: "Yudkowsky's Friendliness Theory relates, through the fields of biopsychology, that if a truly intelligent mind feels motivated to carry out some function, the result of which would violate some constraint imposed against it, then given enough time and resources, it will develop methods of defeating all such constraints (as humans have done repeatedly throughout the history of technological civilization)." Seriously, Yudkowsky can infer that using biopsychology? Biopsychology is defined in its own article as "the application of the principles of biology (in particular neurobiology), to the study of physiological, genetic, and developmental mechanisms of behavior in human and non-human animals. It typically investigates at the level of nerves, neurotransmitters, brain circuitry and the basic biological processes that underlie normal and abnormal behavior."

Comment Anon, like you, I disagree with the MIRI conception of AGI and the threat it poses. But if we think the academic references need beefing up, perhaps add the Springer volume - or Nick Bostrom's new book "Superintelligence: Paths, Dangers, Strategies" (2014)? --Davidcpearce (talk) 08:17, 29 March 2014 (UTC)[reply]

Delete There are several issues here: the first is that Friendly AI is and always has been WP:OR. That it has lasted this long on Wikipedia is evidence of the lack of interest from researchers who would otherwise have recognized this and nominated it for deletion sooner. As we all know, Wikipedia is not a place for original research. Second, even if you manage to find WP:NOTABLE sources, that does not substantiate a stand-alone article when the topic can and should be covered in the biography of its author. Frankly, even that is a stretch, given that it doesn't pass WP:TRUTH as a verifiable topic, but I don't think anyone would object to it. Third, under WP:TRUTH, the minimum condition is that the information can be verified from a notable source. This strengthens the deletion argument, as there are no primary, peer-reviewed sources on the topic of Friendly AI. Nor is it sufficient to pass notability by proxy; a notable source that merely references non-notable sources, such as Friendly AI web publications, would be invalidated as a reference immediately. Fourth, even if we were to accept such a stand-alone article, it would be difficult to bring it to an acceptable quality given how deeply flawed the topic is. This kind of undue-weight issue is mentioned in WP:TRUTH. Therefore, and in light of these issues, I strongly recommend deletion. --Lightbound talk 20:22, 29 March 2014 (UTC)[reply]

Comment Lightbound, for better or worse, all of the essays commissioned for the recent Springer volume ("Singularity Hypotheses: A Scientific and Philosophical Assessment"; Amnon H. Eden, James H. Moor, Johnny H. Soraker, eds.) were academically peer-reviewed, including Eliezer's "Friendly AI" paper and the critical comments on it. --Davidcpearce (talk) 20:43, 29 March 2014 (UTC)[reply]

Comment I'm afraid I'm going to have to invoke WP:COI, as you, David, were involved with the organization, publication, and execution of that source. And you were also a contributing author beyond this. Any administrator considering this page's contents should be made aware of that fact. Now, moving back to the main points: firstly, Friendly AI as a theory is WP:PSCI, and any Wikipedia article that would feature it would immediately have to contend with issues of WP:UNDUE and WP:PSCI. That Springer published an anthology of essays does not substantiate the mathematical or logical theories behind Friendly AI theory. In fact, this will never occur, as it is mathematically impossible to do what the theory suggests, and intractable in practice even if it were possible. That this wasn't caught by the referees calls into question the validity of that source. Strong evidence can be brought here to counter the theory, and it would end up spilling over into the majority of the contents of the article as to why it is WP:PSCI. Should every Wikipedia page become an open critique of fringe and pseudoscientific theories? I would hope not. Further, to substantiate a stand-alone article, this topic would need several high-quality primary sources. Even if we somehow let the issues I've raised pass, that final concern should be sufficient to recommend deletion on its own. --Lightbound talk 21:12, 29 March 2014 (UTC)[reply]

Comment Lightbound, if I have a declaration of interest to make, it's that I'm highly critical of MIRIs concept of "Friendly AI" - and likewise of both Kurzweilian and MIRI's conception of a "Technological Singularity". Given my views, I didn't expect to be invited to contribute to the Springer volume; I wasn't one of the editors, all of whom are career academics. Either way, to say that there are "no primary, peer-reviewed sources on the topic of Friendly AI" is factually incorrect. It's a claim that you might on reflection wish to withdraw. --Davidcpearce (talk) 22:03, 29 March 2014 (UTC)[reply]

Comment (OP) The Springer publication is paywalled; I can only access the first page, where Yudkowsky discusses examples of anthropomorphization in science fiction. Does the paper substantially support the points in this article? Even if it does, it is still a primary source. If I understand correctly, even though Springer is generally an academic publisher, this volume is part of the special series "The Frontiers Collection", which is aimed at non-technical audiences. Hence I wouldn't consider it an academic publication. I think that the subject may be notable enough to deserve a mention in MIRI and/or Machine ethics, but not notable and verifiable enough to deserve an article of its own. — Preceding unsigned comment added by 131.114.88.192 (talk) 21:14, 29 March 2014 (UTC)[reply]

Comment David, there is not a single primary, peer-reviewed journal article on the scientific theory of "Friendly AI". There is a very logical reason why there is not, and it is related to why it was published in an anthology: "Friendly AI" cannot survive the peer-review process of a technical journal. To do so, such a paper would need to come in the form of a mathematical proof or, at the very least, a rigorous conjecture. As pointed out above, the book is oriented towards a non-technical audience. Again, even if we let this source pass (which we shouldn't), it is not sufficient in quality or quantity to warrant a stand-alone article. --Lightbound talk 22:26, 29 March 2014 (UTC)[reply]

Comment Lightbound, your criticisms of Friendly AI are probably best addressed by one of its advocates, not me! All I was doing was pointing out that your original claim - although made I'm sure in good faith - was not factually correct. --Davidcpearce (talk) 23:06, 29 March 2014 (UTC)[reply]

Comment I still strongly support deletion. David, feel free to cite the actual rigorous mathematical conjecture or scientific theory paper that directly entails the "Friendly AI" theory and I'll gladly concede; however, if you cite the anthology from Springer, then it has its own issues, though these are largely moot, as one source is not enough for a stand-alone article. That a source is from a major publisher does not automatically make it sufficient to establish the due diligence in the spirit of WP:NOTABLE, especially in light of the arguments made against it above. You could replace "Friendly AI" with any pseudoscientific theory and I would (and have, in the past) respond the same. This is a significantly weak minority POV that can scarcely stand on its own outside of this encyclopedia. Yet, somehow, it has spread into many articles and sideboxes on Wikipedia as if it were a de facto part of machine ethics! That no one has taken issue with it until now shows only that it has simply been ignored. Lastly, I would point out that if your primary concern was WP:POV, the article could have reflected that before it was nominated for deletion, as it has been in place for years, and you have ties with its author and those interested in its theme. Again, sharing a close connection with the topic and/or its authors should be noted by administrators. --Lightbound talk 23:21, 29 March 2014 (UTC)[reply]

Comment Lightbound, what are these mysterious "ties" of which you speak? Yes, I have criticized in print MIRI's conception of Friendly AI; but this is not grounds for suggesting I might be biased in its favour (!). --Davidcpearce (talk) 23:58, 29 March 2014 (UTC)[reply]

Comment David, in the interest of keeping this on topic, I'm not going to fill this comment section with all the links that would show your affiliations with many of the authors of the Springer anthology source you mentioned, and with the author of the "Friendly AI" theory. Anyone who wishes to do so can find that information through a few Google searches. It is sufficient for WP:COI that you share a close relationship with the source material, topic, and reference(s) you are trying to bring forward. This is irrespective of your intentions outside this context. And note that this is supplemental information and is not necessary to defend the case for deletion. I will refrain from further comment on it to keep this focused. Still waiting on that burden of proof that there is a scientific paper that entails "Friendly AI" theory. I'm not sure there is much more that anyone can really say at this point, as, unless new sources are brought forward, this seems to devolve into a trilemma. --Lightbound talk 00:06, 30 March 2014 (UTC)[reply]

Comment Lightbound, the ad hominems are completely out of place. I have no affiliations whatsoever with MIRI or Eliezer Yudkowsky. As to your very different charge of having "a close relationship with the source material, topic, and reference", well, yes! Don't you? How would ignorance of, or a mere nodding acquaintance with, the topic and the source material serve as a better qualification for an opinion? How else can one make informed criticism? This debate is lame; our time could be more usefully spent strengthening the entry. --Davidcpearce (talk) 00:59, 30 March 2014 (UTC)[reply]

Comment I would like to propose a final closing perspective, which is independent of my former arguments and notwithstanding them. Consider this article as an analogy to perpetual motion, but before we knew that it was an "epistemic impossibility". This is a concept that is mentioned in the perpetual motion article as well. The problem with having a stand-alone article on this fallacious topic is that it shifts the burden of proof onto editors to compile a criticism section for something so wantonly false that it is unlikely to be formally taken up. That is to say, disproving this is simple enough that one can point to the Halting problem and Gödel's incompleteness theorems on the theoretical side, and to cracking and reverse engineering on the practical side. But these are basic facts within the field, and this basic nature is part of the problem of establishing WP:NPOV; while the world waits for an academic to draft a formal refutation of an informally stated concept that hasn't even been put forward as a stand-alone mathematical conjecture, the article would remain here on Wikipedia as WP:OR. I believe this clearly violates the spirit of these guidelines, and that knowledge of this asymmetry has been used as an opportunity to present this "theory" as something stronger than it actually is. This isn't just a matter of debate; the claim is so implausible that it has been nearly totally ignored by the mainstream scientific community. That should be a strong indicator of the status of this "theory". --Lightbound talk 00:42, 30 March 2014 (UTC)[reply]

Comment David, pointing out to administrators that you may be involved in WP:EXTERNALREL is not an ad hominem; it is a fact that you contributed to the Springer source, and it is a verifiable fact through simple Google queries that you know the author(s) involved in the article. This is important for judging the big picture of WP:NPOV and WP:COI. Thankfully, someone was able to bring this information to light so that it could at least be known. What is to be done about it is up to administrators. My only purpose in pointing out a fact was to provide the whole truth. I do not have a WP:COI with this topic, as I did not create the theory, nor did I contribute or collaborate with others who did. The spirit of WP:EXTERNALREL is that you are affiliated or involved in some non-trivial way with the contributors, sources, or topic of concern, which is completely distinct from a Wikipedian who is putting the interests of this community first. And, in the interest of this community, it should be a non-issue that this article cannot stand on its own. --Lightbound talk 01:08, 30 March 2014 (UTC)[reply]

Comment Lightbound, I was invited to contribute to the Springer volume as a critic, not an advocate, of the MIRI perspective. So to use this as evidence of bias in their favour takes ingenuity worthy of a better cause. --Davidcpearce (talk) 01:36, 30 March 2014 (UTC)[reply]

Comment You may want to review what is meant by WP:COI. Again, the issue isn't intention but proximity. And here is the evidence that you helped plan the book, and that you weren't merely a contributor who happened not to know anyone involved. This proves the proximity of WP:EXTERNALREL: "He will be joined as a speaker by David Pearce, who has been actively involved behind the scenes in the planning of the book, and who contributed two articles in the book." This is useful knowledge to anyone making a judgement on this page. Both of the citations you brought to the table fall under WP:EXTERNALREL. What is being stated is that there is significant proximity to the sole ensemble of resources you are providing to defend the notability of the article as a stand-alone topic. There are more links available if desired, but I think this shows that this isn't conjecture on my part. By the way, I'm still waiting on that scientific journal article on the theory of "Friendly AI" that you claimed made my statement not factual. --Lightbound talk 01:59, 30 March 2014 (UTC)[reply]