
Wikipedia:Articles for deletion/Friendly artificial intelligence: Difference between revisions

From Wikipedia, the free encyclopedia
:: '''Oppose'''. Wikipedia is not a sounding board for our opinions, nor a discussion forum to debate [[WP:OR]] about a philosophical or speculative issue through talk pages. If we followed this suggestion, the entire article would have to be moved to the talk page, at which point it would simply become a forum. That you didn't know that "Friendly AI" is part of the "theory" along with "Coherent Extrapolated Volition" is part of the issue with neologisms, and why they are usually weeded out on this encyclopedia. The desire to have ethical machines is distinct, more general, and has been in existence long before "Friendly AI" theory came onto the scene. If we want to have a topic about making machines ethical there is already an [[machine ethics|article namespace]] for that. If we want to talk about the pseudoscientific, non-credible, non-independently sourced fringe theory that is "Friendly AI", which is what this page is about, then that is another issue. I am repeating all of this because people are coming in and expressing an emotional appeal or vote without considering these issues or looking at the (lack of) evidence to support the existence of this original research on Wikipedia. I believe strongly at this point that someone needs to at least start moving this forward by providing strong sources that substantiate this theory. But, as mentioned before, those references do not exist. Had we been having this discussion while this article was a stub it would have been a candidate for speedy deletion, but it has embedded itself and slipped unnoticed for years because of its (non-)status in the field. --[[User:Lightbound|<font color="black">☯</font><font color="FF9900">Lightbound</font><font color="black">☯</font>]] [[User talk:Lightbound|<font color="AAAAAA"><sup><b><u>talk</u></b></sup></font>]] 00:23, 1 April 2014 (UTC)
::: You didn't present it as ''part'' of a theory, but as ''the'' theory. It is not the name of a theory. It's the article title we're talking about here, and whether it warrants a place on Wikipedia. Problems with the content should be worked out on the article's talk page. The term and subject "friendly AI" exists as a philosophical concept independently of the theory you so adamantly oppose. You could remove the theory from the page, or give it a proper "Theory of" heading, or clarify it as pseudoscience (there are plenty of those covered on Wikipedia). Deleting the article would be counterproductive. Because... The subject "friendly AI" is encountered as a philosophical concept so frequently out there in transhumanist circles and on the internet, that not to cover its existence as such on Wikipedia would be an obvious oversight on our part. And by "discussion" (in the field of transhumanism), I meant philosophical debate (that's what discussions in a philosophical field are). Such debate takes place in articles, in presentations and panel discussions at conferences, etc. In less than fifteen minutes of browsing, I came across multiple articles on friendly AI, a mention in a Times magazine article, an interview, and found it included in a course outline. But as a philosophical concept or design consideration, not a field of science. It was apparent there is a lot more out there. (Google reported 131,000 hits). I strongly support fixing the article. [[User talk:The Transhumanist|<i>The Transhumanist</i>]] 02:45, 1 April 2014 (UTC)
:::: '''Strongly oppose.''' Yes, ''it is'', in fact, [https://intelligence.org/files/CEV.pdf a claim to a theory], and I quote the words of the creator of this "theory" and neologism from that non-notable source: "This is an update to that part of '''Friendly AI theory''' [sic] that describes Friendliness, the objective or thing-we’re-trying-to-do". My emphasis has been added so it is crystal clear. See, this is part of the problem. There is a pseudoscientific "theory" (read: not a [[scientific theory|theory]]) called "Friendly AI", and then there is the adjective modifying "AI" that refers to the concept, practice, or goal of making an AI friend''ly'', viz. benevolent. These are two very, very distinct concepts which have been laminated together and are being used here to edge in an unsubstantiated theory. Again, there are no notable, credible, independent 3rd-party sources on the "theory" of "Friendly AI", and this has been stated over and over again now. As for wanting or desiring or wishing there were a canonical place to discuss "friendliness" of AI, this is not it unless it can be backed by significant quality sources. As it stands, [[machine ethics]] should be the place for the general overview of this field and the goals it shares. Anyone reading this so far should see this distinction clearly. This is intentionally obfuscated for a reason, and it is part of why this is so difficult to separate out, unpack, and discuss. Please try to see the distinction that is not without difference. --[[User:Lightbound|<font color="black">☯</font><font color="FF9900">Lightbound</font><font color="black">☯</font>]] [[User talk:Lightbound|<font color="AAAAAA"><sup><b><u>talk</u></b></sup></font>]] 03:13, 1 April 2014 (UTC)


===Recommendation for Reformatting===

Revision as of 03:13, 1 April 2014

Friendly artificial intelligence

Since the subject appears to be non-notable and/or original research, I propose to delete the article. Although the general issue of constraining AIs to prevent dangerous behaviors is notable, and is the subject of Machine ethics, this article mostly deals with "Friendliness theory", "Friendly AI theory", or "Coherent Extrapolated Volition", neologisms that refer to concepts put forward by Yudkowsky and his institute, which have not received significant recognition in academic or otherwise notable sources.

  • Strong Keep. The IJ Good / MIRI conception of posthuman superintelligence needs to be critiqued, not deleted. The (alleged) prospect of an Intelligence Explosion and a nonfriendly singleton AGI has generated much controversy, both on the Net and elsewhere (e.g. the recent Springer Singularities volume).

Several of the external links need updating. --Davidcpearce (talk) 21:38, 28 March 2014 (UTC)[reply]

Note: This debate has been included in the list of Social science-related deletion discussions. • Gene93k (talk) 23:28, 28 March 2014 (UTC)[reply]
Note: This debate has been included in the list of Computing-related deletion discussions. • Gene93k (talk) 23:28, 28 March 2014 (UTC)[reply]

Comment (I'm the user who proposed the deletion) There is already the Machine ethics article covering these issues. The Friendly artificial intelligence article is almost entirely about specific ideas put forward by Yudkowsky and his institute. They may be notable enough to deserve a mention in Machine ethics, but not an article of their own. Most of the references are primary sources such as blog posts or papers self-published on MIRI's own website, which don't meet reliability criteria. The only source published by an independent editor is the chapter written by Yudkowsky in the Global Catastrophic Risks book, which is still a primary source. The only academic source is Omohundro's paper which, although related, doesn't directly reference these issues. As far as I know, other sources meeting reliability criteria don't exist. Moreover, various passages of this article seem highly speculative and are not clearly attributed, and may well be original research. For instance: "Yudkowsky's Friendliness Theory relates, through the fields of biopsychology, that if a truly intelligent mind feels motivated to carry out some function, the result of which would violate some constraint imposed against it, then given enough time and resources, it will develop methods of defeating all such constraints (as humans have done repeatedly throughout the history of technological civilization)." Seriously, Yudkowsky can infer that using biopsychology? Biopsychology is defined in its own article as "the application of the principles of biology (in particular neurobiology), to the study of physiological, genetic, and developmental mechanisms of behavior in human and non-human animals. It typically investigates at the level of nerves, neurotransmitters, brain circuitry and the basic biological processes that underlie normal and abnormal behavior."

Comment Anon, like you, I disagree with the MIRI conception of AGI and the threat it poses. But if we think the academic references need beefing up, perhaps add the Springer volume - or Nick Bostrom's new book "Superintelligence: Paths, Dangers, Strategies" (2014)? --Davidcpearce (talk) 08:17, 29 March 2014 (UTC)[reply]

  • Delete There are several issues here: the first is that Friendly AI is and always has been WP:OR. That it has lasted this long on Wikipedia is evidence of a lack of interest from researchers, who would otherwise have recognized this and nominated it for deletion sooner. As we all know, Wikipedia is not a place for original research. Second, even if you manage to find WP:NOTABLE sources, this does not substantiate an article for it when it can and should be referenced in the biography for the author. Frankly, that is a stretch itself, given that it doesn't pass WP:TRUTH as a verifiable topic, but I don't think anyone would object to it. Third, in WP:TRUTH, the minimum condition is that the information can be verified from a notable source. This strengthens the deletion argument, as there are no primary, peer-reviewed sources on the topic of Friendly AI. And it is not sufficient to pass notability by proxy; using a notable source that references non-notable sources, such as Friendly AI web publications, would invalidate such a reference immediately. Fourth, even if we were to accept such a stand-alone article, it would be difficult to bring it to an acceptable quality due to the immense falsehood of the topic. This kind of undue weight issue is mentioned in WP:TRUTH. Therefore, and in light of these issues, I strongly recommend deletion. --Lightbound talk 20:22, 29 March 2014 (UTC)[reply]

Comment Lightbound, for better or worse, all of the essays commissioned for the recent Springer volume ("Singularity Hypotheses: A Scientific and Philosophical Assessment"; Amnon H. Eden (Editor), James H. Moor (Editor), Johnny H. Soraker (Editor)) were academically peer-reviewed, including Eliezer's "Friendly AI" paper, and critical comments on it. --Davidcpearce (talk) 20:43, 29 March 2014 (UTC)[reply]

Comment I'm afraid I'm going to have to invoke WP:COI, as you, David, were involved with the organization, publication, and execution of that source. And you were also a contributing author beyond this. Any administrator considering this page's contents should be made aware of that fact. Now, moving back to the main points: firstly, Friendly AI as a theory is WP:PSCI, and any Wikipedia article that would feature it would immediately have to contend with issues of WP:UNDUE and WP:PSCI. That Springer published an anthology of essays does not substantiate the mathematical or logical theories behind Friendly AI theory. In fact, this will never occur, as it is mathematically impossible to do what the theory suggests, and intractable in practice even if it were possible. That this wasn't caught by the referees calls into question the validity of that source. Strong evidence can be brought here to counter the theory, and it would end up spilling over into the majority of the contents of the article as to why it is WP:PSCI. Should every Wikipedia page become an open critique on fringe and pseudoscientific theories? I would hope not. Further, to substantiate a stand-alone article, this topic will need to have several high quality primary sources. Even if we somehow allow these issues I've raised to pass, that final concern should be sufficient to recommend deletion alone. --Lightbound talk 21:12, 29 March 2014 (UTC)[reply]

Comment Lightbound, if I have a declaration of interest to make, it's that I'm highly critical of MIRIs concept of "Friendly AI" - and likewise of both Kurzweilian and MIRI's conception of a "Technological Singularity". Given my views, I didn't expect to be invited to contribute to the Springer volume; I wasn't one of the editors, all of whom are career academics. Either way, to say that there are "no primary, peer-reviewed sources on the topic of Friendly AI" is factually incorrect. It's a claim that you might on reflection wish to withdraw. --Davidcpearce (talk) 22:03, 29 March 2014 (UTC)[reply]

Comment (OP) The Springer publication is paywalled; I can only access the first page, where Yudkowsky discusses examples of anthropomorphization in science fiction. Does the paper substantially support the points in this article? Even if it does, it is still a primary source. If I understand correctly, even though Springer is generally an academic publisher, this volume is part of the special series "The Frontiers Collection", which is aimed at non-technical audiences. Hence I wouldn't consider it an academic publication. I think that the subject may be notable enough to deserve a mention in MIRI and/or Machine ethics, but not notable and verifiable enough to deserve an article on its own. — Preceding unsigned comment added by 131.114.88.192 (talk) 21:14, 29 March 2014 (UTC)[reply]

Comment David, there is not a single primary, peer-reviewed journal article on the scientific theory of "Friendly AI". And there is a very logical reason why there is not, and it is related to why it was published in an anthology. "Friendly AI" cannot survive the peer-review process of a technical journal. To do so, such a paper would need to come in the form of a mathematical proof or, at the very least, a rigorous conjecture. As pointed out above, the book is oriented towards a non-technical audience. Again, even if we let this source pass (which we shouldn't), it is not sufficient in quality or quantity to warrant a stand-alone article. --Lightbound talk 22:26, 29 March 2014 (UTC)[reply]

Comment Lightbound, your criticisms of Friendly AI are probably best addressed by one of its advocates, not me! All I was doing was pointing out that your original claim - although made I'm sure in good faith - was not factually correct. --Davidcpearce (talk) 23:06, 29 March 2014 (UTC)[reply]

Comment I still strongly support deletion. David, feel free to cite the actual rigorous mathematical conjecture or scientific theory paper that directly entails the "Friendly AI" theory and I'll gladly concede; however, if you cite the anthology from Springer, then it has its own issues, though largely moot as one source is not enough for a stand-alone article. That a source is from a major publisher does not automatically make it sufficient to establish the due diligence in the spirit of WP:NOTABLE, especially in light of the arguments made against it above. You could replace "Friendly AI" with any pseudoscientific theory and I would respond (and have, in the past, responded) the same way. This is a significantly weak minimal POV that can scarcely stand on its own outside of this encyclopedia. Yet, somehow, it has spread into many articles and sideboxes on Wikipedia as if a "de facto" part of machine ethics! No one has taken issue with it until now because it has simply been ignored. Lastly, I would point out that if your primary concern was WP:POV, the article could have reflected that before it was nominated for deletion, as it has been in place for years, and you have ties with its author and those interested in its theme. Again, sharing a close connection with the topic and/or authors should be noted by administrators. --Lightbound talk 23:21, 29 March 2014 (UTC)[reply]

Comment Lighthound, what are these mysterious "ties" of which you speak? Yes, I have criticized in print MIRI's conception of Friendly AI; but this is not grounds for suggesting I might be biased in its favour (!). --Davidcpearce (talk) 23:58, 29 March 2014 (UTC)[reply]

Comment David, in the interest of keeping this on topic, I'm not going to fill this comment section with all the links that would show your affiliations with many of the authors of the Springer anthology source you mentioned, and with the author of the "Friendly AI" theory. Anyone who wishes to do that can find that information through a few Google searches. It is sufficient for WP:COI that you share a close relationship with the source material, topic, and reference(s) you are trying to bring forward. This is irrespective of your intentions outside this context. And note that this is supplemental information and is not necessary to defend the case for deletion. I digress on further comment to keep this focused. Still waiting on that burden of proof that there is a scientific paper that entails "Friendly AI" theory. I'm not sure there is much more that anyone can really say at this point, as, unless new sources are brought forward, this seems to devolve into a trilemma. --Lightbound talk 00:06, 30 March 2014 (UTC)[reply]

Comment Lighthound, the ad hominems are completely out of place. I have no affiliations whatsoever with MIRI or Eliezer Yudkowsky. As to your very different charge of having "a close relationship with the source material, topic, and reference", well, yes! Don't you? How would ignorance of - or a mere nodding acquaintance with - the topic and the source material serve as a better qualification for an opinion? How else can one make informed criticism? This debate is lame; our time could be more usefully spent strengthening the entry.--Davidcpearce (talk) 00:59, 30 March 2014 (UTC)[reply]

Comment I would like to propose a final closing perspective, which is independent of my former arguments and notwithstanding them. Consider this article as an analogy to perpetual motion, but before we knew that it was an "epistemic impossibility". This is a concept that is mentioned in the perpetual motion article as well. The problem with having a stand-alone article on this fallacious topic is that it shifts the burden of proof onto editors to compile a criticism section for something that is so wantonly false that it is unlikely to be formally taken up. That is to say, disproving this is simple enough that one can point to the Halting problem and Gödel's incompleteness theorems for the theoretical side, and cracking and reverse engineering for the practical side. But these are basic facts within the field, and this basic nature is part of the problem of establishing WP:NPOV; while the world waits for an academic to draft a formal refutation of an informally stated concept that hasn't even been put forward as a stand-alone mathematical conjecture, the article would remain here on Wikipedia as WP:OR. I believe this clearly violates the spirit of these guidelines, and that knowledge of this asymmetry has been used as an opportunity to present this "theory" as something stronger than it actually is. This isn't just a matter of debate; the claim is so implausible that it has been nearly totally ignored by the mainstream scientific community. That should be a strong indicator of the status of this "theory". --Lightbound talk 00:42, 30 March 2014 (UTC)[reply]
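(For readers unfamiliar with the Halting-problem argument invoked in the comment above, a minimal sketch of the standard diagonalization follows. This is an illustrative aside, not part of the original discussion; `halts` is a purely hypothetical decider, named here for exposition only.)

```python
def halts(program, arg):
    """Hypothetical total decider: would return True iff program(arg) halts.

    No such function can exist; this stub only marks where the
    assumed oracle would sit in the argument.
    """
    raise NotImplementedError("no total halting decider can exist")

def paradox(program):
    # Do the opposite of whatever `halts` predicts about program(program):
    # if it predicts halting, loop forever; if not, halt immediately.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding `paradox` to itself forces a contradiction either way:
# if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
# if it were False, paradox(paradox) would halt. Hence `halts` cannot exist.
```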

Comment David, pointing out to administrators that you may be involved in WP:EXTERNALREL is not an ad hominem; it is a fact that you contributed to the Springer source, and it is a verifiable fact through simple Google queries that you know the author(s) involved in the article. This is important for judgement in looking at the big picture of WP:NPOV and WP:COI. Thankfully, someone was able to bring this information to light so that it could at least be known. What is to be done about it is up to administrators. My only purpose in pointing out a fact was to provide the whole truth. I do not have a WP:COI with this topic, as I did not create the theory nor contribute or collaborate with others who did. The spirit of WP:EXTERNALREL is that you are affiliated or involved in some non-trivial way with the contributors, sources, or topic of concern, which is completely distinct from a Wikipedian who is absolutely putting the interest of this community first. And, in the interest of this community, it should be a non-issue that this article cannot stand on its own. --Lightbound talk 01:08, 30 March 2014 (UTC)[reply]

Comment Lighthound, I was invited to contribute to Springer volume as a critic, not an advocate, of the MIRI perspective. So to use this as evidence of bias in their favour takes ingenuity worthy of a better cause. --Davidcpearce (talk) 01:36, 30 March 2014 (UTC)[reply]

Comment You may want to review what is meant by WP:COI. Again, the issue isn't just intention but proximity. And here is the evidence that you helped plan the book, and were not merely a contributor who happened not to know anyone involved. This proves the proximity of WP:EXTERNALREL: "He will be joined as a speaker by David Pearce, who has been actively involved behind the scenes in the planning of the book, and who contributed two articles in the book." This is useful knowledge to anyone making a judgement on this page. Of the two citations you brought to the table, both fall under WP:EXTERNALREL. What is being stated is that there is significant proximity to the sole ensemble of sources with which you are defending the notability of the article as a stand-alone topic. There are more links available if desired, but I think this shows that this isn't conjecture on my part. By the way, I am still waiting on the scientific journal article on the theory of "Friendly AI" that you claimed shows my statement was not factual. --Lightbound talk 01:59, 30 March 2014 (UTC)[reply]

Comment Lighthound, forgive me, but you're missing my point. I'm a critic of the MIRI conception of an Intelligence Explosion and Friendly AI. Many of the contributors to the Springer volume are critical too. This critical stance is not evidence of bias in its favour! --Davidcpearce (talk) 02:23, 30 March 2014 (UTC)[reply]

Comment I am giving your comments deep consideration and have not missed your points. Again, it isn't about pro/con. I don't make the decision on the deletion, but others should know of your involvement. You originally raised two sources to defend this article as stand-alone, and your proximity to those sole sources is part of the issue. This is not to say they are invalid because of it, but it is need-to-know information for someone making the final decision. That has been done, and we need not discuss it further. This isn't even the primary concern of the deletion of this article. Can you actually provide any credible 3rd-party sources that you didn't orchestrate or weren't involved with? Can you show, objectively, why this "theory" merits its own dedicated article? Also, what about the arguments that this is an impossible concept, and therefore will always lack an equally credible POV to dismiss it, as I mentioned above? I've asked you to prove to us that I was wrong that there exists nothing in the technical scientific literature on the theory of "Friendly AI". I know I certainly can't find it, despite reading the literature daily. This could have been solved with a quick Scholar search. But I understand you won't be doing that, because it doesn't exist and can't exist due to the nature of its impossibility. So, please, do prove me wrong, and bring forth at least one or two really strong notable sources. Otherwise, I still strongly recommend deletion. --Lightbound talk 02:35, 30 March 2014 (UTC)[reply]

Comment Lighthound, any Wikipedia contributor is perfectly entitled to use a pseudonym - or indeed an anonymous IP address, as did the originator of the proposal for deletion. Where a pseudonym becomes problematic is when it's used to attack the integrity of those who don't. I have not "orchestrated" any literature - popular or academic - favourable to MIRI / Friendly AI. My only comments on Friendly AI have been entirely critical. So it's surreal to be accused of bias in its favour. If the Wikipedia Flat Earth Society entry were nominated for deletion, I'd vote a "Strong Keep" too. This isn't because I'm a closet Flat Earther.--Davidcpearce (talk) 08:18, 30 March 2014 (UTC)[reply]

Comment Again, David, claims of bad faith are not going to help your case. The statements made are factual and evidence/references have been provided; that is enough to prove WP:COI. Again, it doesn't require us to form conjecture about your agenda, only to show proximity. Regardless, this does not solve the notability issue of the source, nor the issues of WP:OR as per the comments above. This has now been repeated several times. I'll be stepping back from this as I believe all that is needed has been shown in all the comments above. --Lightbound talk 08:58, 30 March 2014 (UTC)[reply]

Comment Lighthound, a willingness to engage in critical examination does not indicate favourable bias - any more than your own critique above. We both disagree with "Friendly AI"; the difference is that you believe its Wikipedia entry should be deleted, whereas I think it should be strengthened - ideally by someone less critical of the MIRI perspective than either of us, i.e. a neutral point of view. Perhaps I should add - without claiming to know all the details - that I am troubled by the lack of courtesy shown to Richard Loosemore below. --Davidcpearce (talk) 16:43, 30 March 2014 (UTC)[reply]

  • Delete I am Richard Loosemore, and I am also a contributor to the recent Springer volume ("Singularity Hypotheses: A Scientific and Philosophical Assessment"; Amnon H. Eden (Editor), James H. Moor (Editor), Johnny H. Soraker (Editor)). That book is not sufficient justification for keeping the Friendly artificial intelligence page: it was one of the poorest peer-reviewed publications that I have ever seen, with credible articles placed alongside others that had close to zero credibility. Also, it does not help to cite people at the Future of Humanity Institute (e.g. Nick Bostrom) as evidence of independent scientific support for the Friendly artificial intelligence idea, because the Yudkowsky organization (Machine Intelligence Research Institute) and FHI are so closely aligned that they sometimes appear to be branches of the same outfit. I think the main issue here is not whether the general concept of AI friendliness is worth having a page on, but whether the concept as it currently stands is anything more than the idiosyncratic speculations of one person and his friends. The phrase "Friendly artificial intelligence" is generally used to mean the particular ideas of a small group around Eliezer Yudkowsky. Is it worth having a page about it because there are pros and cons that have been discussed in the literature? To answer that question, I think it is important to note the ways in which people who disagree with the "FAI" idea are treated when they voice their dissent. I am one of the most vocal critics of his theory, and my experience is that whenever I do mention my reservations, Yudkowsky and/or his associates go out of their way to intervene in the discussion to make slanderous ad hominem remarks and encourage others not to engage in discussion with me. Yudkowsky commented in a recent discussion (dated 5th September 2013): "Warning: Richard Loosemore is a known permanent idiot, ponder carefully before deciding to spend much time arguing with him."
And, contrariwise, I have just returned from a meeting of the Association for the Advancement of Artificial Intelligence, where there was a symposium on the subject of "Implementing Selves with Safe Motivational Systems and Self-Improvement", which was mostly about safe AI motivational systems ... friendliness, in other words. I delivered a paper debunking some of the main ideas of Yudkowsky's FAI proposals, and although someone from MIRI was local to the conference venue (Stanford University) and was offered a spot on the program as invited speaker, he refused on the grounds that the symposium was of no interest (Mark Waser: personal communication). I submit that all of this is evidence that the "Friendly artificial intelligence" concept has no wider academic credibility, but is only the speculation of someone with no academic standing, aided and abetted by his friends and associates. If the page were to stay, it would need to be heavily edited (by someone like myself, among others) to make it objective, and my experience is that this would immediately provoke the kind of aggressive response I described above. LimitingFactor (talk) 16:15, 30 March 2014 (UTC)[reply]

Comment David Pearce, you are indeed a respected critic of FAI, so I would not attack your position just because you were also involved with the Singularity Hypotheses book. My reasons for disagreement have only to do with the wider acceptance of the idea and the maturity of those who aggressively promote it. Your presence in the book and my presence in the book are clearly not the issue, since it is now clear that we take opposite positions on the deletion question. So perhaps that argument can be put aside. LimitingFactor (talk) 16:29, 30 March 2014 (UTC)

Comment Limitingfactor, many thanks, you're probably right; I should let it pass. --Davidcpearce (talk) 16:46, 30 March 2014 (UTC)

Comment Limitingfactor and David are clearly choosing to ignore what WP:COI means and why their close relationship to the people and processes behind the sources they promote would need to be a consideration. Your close proximity to the source(s) is sufficient. You can continue to WP:CANVAS, David, and bring in more meat puppets, but that isn't going to help the fact that this article cannot stand on its own without a significant body of notable sources. You claimed early on that there were in fact notable sources. You claimed I was incorrect that no technical/mathematical scientific paper or rigorous conjecture exists that is published from a real source, then failed to provide or substantiate that. And the reason is that such a paper does not exist in the literature. You have been asked several times to provide sources and citations beyond the two you did. It has been explained that even setting aside those two sources, and even if there were no issue with them, they are not enough to allow this page to stand as-is. All you or anyone else has to do, instead of ignoring well-established guidelines, is to provide some strong sources beyond the two which have been contested. And they are contested beyond the fact of proximity; they don't hold up even if someone else had suggested them. --Lightbound talk 17:50, 30 March 2014 (UTC)

Comment Aghh, Lighthound, please re-read. I am a critic of "Friendly AI"! I would like to see a balanced and authoritative Wikipedia entry on the topic by someone less critical than me - not polemics. --Davidcpearce (talk) 18:07, 30 March 2014 (UTC)

Comment Administrators have been contacted. This is out of hand. Again, it isn't the primary issue whether or not you are polemical; for the topic or against the topic; pro or con; love it or hate it. The sources are contested here and are invalid, regardless of the fact that you helped create and organize them. But what does matter is that you are clearly canvassing at this point. The points to be made have been made. It has been requested that someone — anyone — please provide credible sources other than these. Let us end this futile discussion on whether you are for or against whatever topics. It has never been the issue, only that it is important to know that you are pumping the source because you contributed to it and helped orchestrate it. For or against it, that is still WP:COI in my view. And you continue to pump them when we've asked that you provide at least a few alternatives. But we know why that isn't going to happen! --Lightbound talk 18:22, 30 March 2014 (UTC)

Comment Lighthound, you've left me scratching my head. I am a critic of "Friendly AI", not a partisan. I neither contributed to the entry nor helped "orchestrate" it. If you've seriously any doubts on that score, why don't you drill through the history of the article's edits? --Davidcpearce (talk) 18:46, 30 March 2014 (UTC)

Comment (OP) Please let's try to avoid personal attacks. I don't think Davidcpearce canvassed Limitingfactor into the discussion, since they voted in opposite ways. Also, in my understanding of WP:COI, it is sufficient that users who have professional stakes in the subject or personal relationships with people or organizations associated with it declare them. Limitingfactor declared them himself, and in the case of Davidcpearce they are public domain, since he is commenting under his real name. The fact that they have these relationships doesn't automatically invalidate their votes and comments; it just means that their votes and comments should be considered while taking into account that these relationships exist. Also, the fact that Davidcpearce suggested adding a source he was involved with doesn't automatically disqualify that source. — Preceding unsigned comment added by 131.114.88.192 (talk) 18:49, 30 March 2014 (UTC)

Comment All you did was repeat what I've said above at least four times. And, again, these are not "personal attacks". This is all externally verifiable information. It is canvassing because he is bringing people into the discussion from outside Wikipedia to support his arguments. This particular argument was that he was somehow for or against this topic, which has been pointed out repeatedly to be irrelevant and not the issue. The real issue, which I keep trying to steer us towards, is that even if we accept this anthology of essays as a credible source, it is not sufficient for a stand-alone article on an impossible topic. It has already been repeated that his proximity to it is not alone sufficient to invalidate it, but that is valuable need-to-know information. This was all stated over and over again. Reading the full discourse is helpful to prevent this kind of circular argumentation. Again, let us stop this. Provide more sources, please. The ones listed are contested because of their non-technical status, and because they don't actually substantiate the theory beyond speculation! --Lightbound talk 18:57, 30 March 2014 (UTC)

Comment (OP) Just to restate my case for the deletion proposal: it seems that "Friendly AI" is a neologism WP:NEO created by Yudkowsky to encompass a number of arguments he and people closely associated with him have made on the subject of Machine ethics. A more apt title for the article would be something like "Yudkowskian Machine ethics" or "Eliezer Yudkowsky's school of Machine ethics", but the point is that these views are not notable enough to warrant a stand-alone Wikipedia article. This is evidenced by the fact that the only available sources are primary sources written by Yudkowsky and his associates, and most of them are non-academic and in fact even self-published. — Preceding unsigned comment added by 131.114.88.192 (talk) 19:07, 30 March 2014 (UTC)

  • Strong Delete Neologism created by Eliezer Yudkowsky. Can be more than adequately covered in articles about the elementary school graduate who invented the term or his Harry Potter fanclub. Hipocrite (talk) 19:32, 30 March 2014 (UTC)

Comment ...and adopted by big-name Oxford professor. There are powerful arguments against singleton AGI; Eliezer Yudkowsky's home-schooling isn't one of them. (cf. http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=9126040 ) --Davidcpearce (talk) 20:01, 30 March 2014 (UTC)

It is a primary source with a close relationship. One of the authors is the director at MIRI. The other author is from the Future of Humanity Institute. It is a verifiable fact that these organizations are aligned and in public cooperation with each other, as evidenced by their websites and the cross-promotion of their members' books and articles. This does not represent a strong, notable secondary/tertiary source. There needs to be something more. Further, the article is only 7 pages long and is devoid of logical or mathematical rigor on the topic. --Lightbound talk 20:55, 30 March 2014 (UTC)
  • Keep I don't care about who wrote the article or created the terminology. I think it's a reasonable topic, and not really covered in detail in any other existing article. Further, I think it's likely to be expandable. There are sufficient secondary sources from other than the deviser of the term. What the article needs is some editing for clarity (and not mentioning the creator's name quite as often). DGG ( talk ) 19:42, 30 March 2014 (UTC)

Comment That's clearly untrue. There are no notable, credible secondary/tertiary sources on the theory of "Friendly AI". Prove us wrong by linking them! It can't be done, because they don't exist. --Lightbound talk 19:47, 30 March 2014 (UTC)

Comment I am troubled by the inflammatory allegations being made in this discussion (by Lightbound). First, I am not a meatpuppet or sockpuppet, nor did David Pearce contact me in any way, directly or indirectly, about this discussion. I have long had an interest in this page because it is in my field of research. I came here because there was a discussion in progress, and I felt that I had relevant information to offer. Second, I did not become an editor in order to comment here: I have been registered as a Wikipedia editor since 2006. Third, you do not seem to have noticed, Lightbound, that when I entered the discussion I voted against David Pearce! It therefore makes no sense to claim that I was canvassed into the discussion by him. Fourth, the conflict of interest issue is a red herring. I do not stand to gain by the deletion, and I disclosed my involvement in the community of intellectual discourse related to the issue here straight away. It would help matters if the discussion from here forward did not contain any more accusations. LimitingFactor (talk) 21:19, 30 March 2014 (UTC)

If a conscientious reader starts at the top of this page and follows to the bottom, they will see that careful attention has been paid to making clear that the WP:COI notice was informational/supplemental in content, and that all arguments pertain to the quality of sources. Again, and this has now been repeated many times, it is not about whether someone is for or against the topic, but about rooting out the true quality of these sources and citations. So far, no one has provided any significant citation or reference, and all that is being done is an attempt to spin or frame my responses and informational annotations about all relevant facts as ad hominem, which is in bad taste. I've already repeatedly asked that we drop this informational line of discourse on the WP:COI issue. So, you can remain troubled, but there is no issue other than the quality of the sources. As it presently stands, there are none, and all that has been brought forth is not even substantive of the subject matter. All of this leads to the fact that this is an article long overdue for deletion. --Lightbound talk 21:25, 30 March 2014 (UTC)

Comment Stepping back from the fray... I think the deletion proposal is not an easy one to decide, because the topic itself (the friendliness or hostility of a future artificial intelligence) is without doubt a topic of interest and research. I voted to delete because the page, as it stands, treats the topic as if it were the original scientific creation of Eliezer Yudkowsky. Most of the page is couched in language that implies that his 'theory' is the topic, but nowhere is there a pointer to peer-reviewed scientific papers stating any 'theory' of friendliness at all. Instead, the articles that do exist are either (a) poor quality, non-peer-reviewed, and sourced by people with an activist agenda in favor of Yudkowsky, or (b) by credible people (Bostrom, Omohundro, myself and others) but few in number and NOT lending credibility to Yudkowsky's writings. That imbalance makes it difficult to imagine a satisfactory article, because it would still end up looking like a paean to Yudkowsky (on account of the sheer volume of speculation generated by him and his associates) with a little garnish of other articles around the edges. LimitingFactor (talk) 21:49, 30 March 2014 (UTC)

Agreed, and thank you for dropping that previous line of discourse. I am in consensus with the above comment. What LimitingFactor is alluding to at the end of his comment is explained by philosopher Daniel Dennett in his paper "Higher-Order Truths about Chmess": that there is discourse on a philosophical topic does not mean that the topic makes sense or is substantial or real in any meaningful way. So far, all the sources that can be found are merely this kind of discourse. There has never been an actual technical mathematical or logical proof or rigorous conjecture published anywhere on the idea itself, only vague language and speculation. This supports the remarks echoed by LimitingFactor and the anonymous editor(s) above as well, ultimately showing that making a quality Wikipedia article on this topic would be a feat as impossible as the topic itself. --Lightbound talk 22:10, 30 March 2014 (UTC)
  • Keep or merge. Secondary sources found:
  1. a New Atlantis journal article
  2. a New Atlantis journal article, reply to previous article
  3. section 5.3 of the book "The Nexus between Artificial Intelligence and Economics"
  4. chapter 4 of the book "Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World"
The Omohundro paper is an RS independent of Yudkowsky, but looks more like primary research than a secondary review of FAI. The four sources above cover FAI in depth and seem independent. The Nexus book is from Springer and presumed reliable. The Singularity book is from BenBella Books, a "publishing boutique" that may be reputable. Based on the two New Atlantis articles and the Nexus book, this topic looks marginally notable per WP:GNG. The article is essay-like in parts and I agree with DGG that it is a bit promotional, but these are surmountable problems, per WP:SURMOUNTABLE. A marginally notable topic and surmountable article problems suggest keeping the article. Even if others don't find it notable, basic facts about FAI ideas (it exists, when it was coined, a short summary) are verifiable in reliable sources. Per WP:PRESERVE and WP:ATD, preservation of verifiable material is preferable to deletion. Machine ethics would be a reasonable target for such a merge. --Mark viking (talk) 22:21, 30 March 2014 (UTC)
None of those articles above substantiate and rigorously define the concept of "Friendly AI" as a theory beyond merely being WP:NEO. Further, the books you linked cite non-notable sources for the materials on "Friendly AI" theory and only cover the topic in 2-3 pages maximum at minimal depth. Oppose Keep on those grounds. As for a merge, I oppose that based on the argument that it isn't clear that "Friendly AI" as a WP:NEO can be separated cleanly from this loose concept of the "theory" of "Friendly AI", which indeed has no credible sources that detail the subject matter. That is to say, people are saying that AI should be "friendly" and confusing or not seeing that there is indeed a speculative, non-rigorous fringe theory that specifies a kind of architecture for doing this. The New Atlantis articles are blog-like, and directly link to the non-notable sources in question as well. --Lightbound talk 22:30, 30 March 2014 (UTC)
(OP) The "Singularity Rising" book by James Miller probably shouldn't be considered as an independent source, as the author has professional ties with MIRI: he is listed as a "research advisor" on MIRI's website and as you can see on the Amazon page, the book is endorsed by MIRI's director Luke Muehlhauser, MIRI's main donor and advisor Peter Thiel, and advisor Aubrey de Grey. The very chapter you cited directly pleas for donations to MIRI! The other sources look valid, however. I agree that the general topic of Machine ethics is notable, and Yudkowsky's "Friendly AI" is probably notable enough to deserve a mention in that page, but a stand-alone article gives it undue weight, since it is a minority view in the already niche field of machine ethics. In my understanding "Friendly AI" boils down to "Any advanced AI will be dangerous to humans by default, unless its design was provably safe in a mathematical sense". This view has been commented on and criticized by independent academics such as David Pearce and Richard Loosemore, among others, and therefore probably passes notability criteria, but most of the content of this article is unencyclopedic essay-like/poorly sourced/promotional content, and if you were to remove it, very little content would remain, and I doubt that the article could be expanded with high-quality notable content. Therefore, 'Delete or Merge seem reasonable. 131.114.88.192 (talk) 00:02, 31 March 2014 (UTC)[reply]
The issue with a merge is that there still isn't a significant source on the actual theory of "Friendly AI". Thus, the merge would be based on a concept entailed by a neologism and wouldn't even stand on its own even in that context. A source merely mentioning it, referencing a non-notable primary source is still not actually telling us what this "theory" is in any concrete way; they are simply documenting an apparent controversy in an idea of whether or not machine intelligence can be benevolent, which is distinct from the actual non-rigorous concepts presented by "Friendly AI" as a theory. There are two sub-issues to be unpacked:
  1. Distinguishing criticisms about whether AI can be made or kept benevolent, which is more general than and not specific to the "Friendly AI" theory. This, doubtless, was part of the idea behind naming the theory in such a way. This is the issue with it being WP:NEO: the attempt to rebrand a concept and redefine what it means, when it's always been about what is already covered under machine ethics as a whole.
  2. Criticisms of the architectural/mathematical framework that is "Friendly AI" and "Coherent Extrapolated Volition", which are indeed not notably sourced concepts, and are WP:PSCI. This is also clear given that these concepts as an architecture are often presented or introduced in the context of science fiction/laws of robotics.
Thus, trying to merge doesn't solve the WP:NEO and WP:OR issues. The problems will remain: finding sources that do not merely discuss (and confuse) the two above issues, and finding sources that actually give a technically sound, rigorous, peer-reviewed proof or mathematical conjecture for the topic. That is, if someone is going to promote a new kind of physics or a new kind of communications theory, and we were going to cover that, we would at least need a strong source that fully details that concept. It would be fair enough to provide a criticism section under machine ethics that simply addresses the concerns of making AI benevolent, instead of trying to force everyone into this lexicon, which is not only lacking wide support but is becoming increasingly confused with the two points above. --Lightbound talk 00:28, 31 March 2014 (UTC)
Oppose. The nomination for deletion isn't just that this doesn't stand on its own; it's that it doesn't stand anywhere. Merging doesn't solve the fact that the actual "Friendly AI" theory is WP:PSCI, which does not get equal consideration alongside majority and minority POV issues, as explicitly stated in those guidelines. Such a theory could never survive direct publication in a technical journal; this is why no one so far has been able to come up with an actual source that specifies unambiguously and rigorously what the theory of "Friendly AI" is. And the burden of proof is not on editors who challenge pseudoscience, but on those who would keep it to establish it first with notable sources. All that the sources so far establish is that some people have been using the phrase "friendly AI" to refer to the act of making machine intelligence safe(r) or to discuss the theoretical implications. So, again, are we merging a neologism or merging the theory of "Friendly AI"? Neither appears to be acceptable, for all the reasons that have been unveiled in the above comments. --Lightbound talk 18:54, 31 March 2014 (UTC)
  • Keep – Friendly AI is a concept in transhumanist philosophy, under widespread discussion in that field and the field of AI. I've never read that the concept itself is a theory. A hypothetical technology, yes. A scientific research objective, yes. A potential solution to the existential risk of the technological singularity, yes. Much of the article is unverified, and rather than the whole article being deleted, unverified statements can be challenged via WP:VER and removed. I suggest moving any challenged material to the article's talk page, where it can be stored and accessed for the purpose of finding supporting citations. The article needs some TLC, and is worth saving. The Transhumanist 23:25, 31 March 2014 (UTC)
Oppose. Wikipedia is not a sounding board for our opinions, nor a discussion forum to debate WP:OR about a philosophical or speculative issue through talk pages. If we followed this suggestion, the entire article would have to be moved to the talk page, at which point it would simply become a forum. That you didn't know that "Friendly AI" is part of the "theory" along with "Coherent Extrapolated Volition" is part of the issue with neologisms, and why they are usually weeded out on this encyclopedia. The desire to have ethical machines is distinct, more general, and has been in existence long before "Friendly AI" theory came onto the scene. If we want to have a topic about making machines ethical, there is already an article namespace for that. If we want to talk about the pseudoscientific, non-credible, non-independently sourced fringe theory that is "Friendly AI", which is what this page is about, then that is another issue. I am repeating all of this because people are coming in and expressing an emotional appeal or vote without considering these issues or looking at the (lack of) evidence to support the existence of this original research on Wikipedia. I believe strongly at this point that someone needs to at least start moving this forward by providing strong sources that substantiate this theory. But, as mentioned before, those references do not exist. Had we been having this discussion while this article was a stub, it would have been a candidate for speedy deletion, but it has embedded itself and slipped unnoticed for years because of its (non-)status in the field. --Lightbound talk 00:23, 1 April 2014 (UTC)
You didn't present it as part of a theory, but as the theory. It is not the name of a theory. It's the article title we're talking about here, and whether it warrants a place on Wikipedia. Problems with the content should be worked out on the article's talk page. The term and subject "friendly AI" exists as a philosophical concept independently of the theory you so adamantly oppose. You could remove the theory from the page, or give it a proper "Theory of" heading, or clarify it as pseudoscience (there are plenty of those covered on Wikipedia). Deleting the article would be counterproductive, because the subject "friendly AI" is encountered as a philosophical concept so frequently out there in transhumanist circles and on the internet that not to cover its existence as such on Wikipedia would be an obvious oversight on our part. And by "discussion" (in the field of transhumanism), I meant philosophical debate (that's what discussions in a philosophical field are). Such debate takes place in articles, in presentations and panel discussions at conferences, etc. In less than fifteen minutes of browsing, I came across multiple articles on friendly AI, a mention in a Times magazine article, an interview, and a course outline that includes it. But as a philosophical concept or design consideration, not a field of science. It was apparent there is a lot more out there (Google reported 131,000 hits). I strongly support fixing the article. The Transhumanist 02:45, 1 April 2014 (UTC)
Strongly oppose. Yes, it is, in fact, a claim to a theory, and I quote the words of the creator of this "theory" and neologism from that non-notable source: "This is an update to that part of Friendly AI theory [sic] that describes Friendliness, the objective or thing-we're-trying-to-do". My emphasis has been added so it is crystal clear. See, this is part of the problem. There is a pseudoscientific "theory" (read: not a theory) called "Friendly AI", and then there is the adjective modifying "AI" that refers to the concept, practice, or goal of making an AI friendly, viz. benevolent. These are two very, very distinct concepts which have been laminated together and are being used here to edge in an unsubstantiated theory. Again, there are no notable, credible, independent third-party sources on the "theory" of "Friendly AI", and this has been stated over and over again now. As for wanting or desiring or wishing there was a canonical place to discuss "friendliness" of AI, this is not it unless it can be backed by significant quality sources. As it stands, machine ethics should be the place for the general overview of this field and the goals it shares. Anyone reading this so far should see this distinction clearly. It is intentionally obfuscated for a reason, and that is part of why this is so difficult to separate out, unpack, and discuss. Please try to see the distinction that is not without difference. --Lightbound talk 03:13, 1 April 2014 (UTC)

Recommendation for Reformatting

I recommend that this deletion discussion be reorganized into a combination of the survey itself with !votes and a discussion, into which the walls of comments can be moved. At least the walls of comments are labeled as such, but they distract from trying to determine whether the !votes form a WP:CONSENSUS. I can reorganize if desired. Robert McClenon (talk) 02:20, 31 March 2014 (UTC)

Support. I think you reorganizing would help clarify the flow of this AfD page greatly. --Lightbound talk 02:47, 31 March 2014 (UTC)
  • I won't express an opinion either way, but will say that the above structure will not pose a problem for any admin that is experienced at closing discussions. Really, it is pretty calm and reasonably formatted, even if done so in a somewhat unorthodox way. I would be more afraid of causing drama over reformatting, rather than the closer's ability to read it. Dennis Brown |  | WER 20:09, 31 March 2014 (UTC)
  • The article needs to be reformatted a lot more than this discussion does. Or better yet, wikified. Concerning this discussion, there are two questions we need to answer: 1) Does the concept "Friendly AI" exist? and 2) Is it notable? The Transhumanist 02:51, 1 April 2014 (UTC)[reply]