Talk:Machine Intelligence Research Institute


NPOV for Pakaran

I've taken a lot of stuff out of the article that seemed to be basically just handwaving and self-promotion. This is what it read like when I found it:

"The Singularity Institute for Artificial Intelligence is a non-profit organization that seeks to create a benevolent artificial intelligence capable of improving its own design. To this end, they have developed the ideas of seed AI and Friendly AI and are currently coordinating efforts to physically implement them. The Singularity Institute was created in the belief that the creation of smarter-than-human, kinder-than-human minds represents a tremendous opportunity to accomplish good. Artificial intelligence was chosen because the Singularity Institute views the neurological modification of human beings as a more difficult and dangerous path to transhuman intelligence."
"The Singularity Institute observes that AI systems would run on hardware that conducts computations at billions or trillions of times the characteristic rate of human neurons, resulting in a corresponding speedup of thinking speed. Transhuman AIs would be capable of developing nanotechnology and using it to accomplish real world goals, including the further enhancement of their own intelligence and the consensual intelligence enhancement of human beings. Given enough intelligence and benevolence, a transhuman AI would be able to solve many age-old human problems on very short timescales."

As it stands, that isn't a bad article, it's just that it isn't really suitable for an encyclopedia. It presents some things as fact that are clearly opinion. It makes contentious statements, such as that it originated the concept of "Seed AI" (astonishing for such a new organization--I read similar ideas in Von Neumann's book in the mid-seventies, and that had been written nearly thirty years before). The claim to be "coordinating efforts to physically implement" Seed AI and Friendly AI seems to rest on fundraising and writing a lot of papers about an extremely loosely defined programming language which seems to lack even an experimental implementation.

Wikipedia isn't for original research, it isn't for us to put up our pet ideas (however valid they may be). It's to catalog human knowledge from a neutral point of view. The article as it stood was in my opinion not so much an encyclopedia article as a promotional panegyric. --Minority Report 03:07, 23 Nov 2004 (UTC)

Ok, in the interest of admitting biases, I'm a financial donor to the SIAI. It's true that there have been holdups in beginning actual development, largely because there's a need to get all the theoretical underpinnings of Friendly AI done first.
That said, claiming that the SIAI is a "religion" rather than a group (which you may or may not agree with) is intrinsically PoV. --Pakaran (ark a pan) 03:55, 23 Nov 2004 (UTC)
I agree with most of your criticisms, Minority Report, and the article was not NPOV as it existed before. The statement that they are coordinating efforts to implement seed AI is quite valid, however. SIAI is developing a smaller, less ambitious AI program, although the primary objective of its research now is formalizing the theoretical framework for Friendly AI.
Also, using the phrase "quasi-religious" to describe an institution that claims to be entirely secular is highly misleading. SIAI has no affiliation with any religion.
I'm interested in your comments regarding von Neumann's work. I was not aware that von Neumann had speculated in this area. If you can find a source perhaps it should be mentioned at Seed AI. — Schaefer 05:02, 23 Nov 2004 (UTC)
I think my use of the term "quasi-religious" was an overstatement. I was trying to encapsulate the visionary aspect of this work, and the use of language which seems to owe more to religion than to engineering. I apologise if I also mischaracterized the Seed AI stuff; from looking around the site I saw a lot of hot air and little activity. I read a few books by Von Neumann in the late seventies, and the idea of having self-improving machines was very much his aim. I'm sorry I can't recall the specific book. I thought it might be The Computer and the Brain but a glance at the contents page on Amazon doesn't offer any clues. The idea was certainly in the air in the 1970s, long before Vinge's 1993 paper. --Minority Report 11:09, 23 Nov 2004 (UTC)
I've added basic information on the SIAI-CA and removed an erroneous middle initial. I also changed the first paragraph to reflect the fact that the SIAI actually does want to build software, rather than just talk about it, and to clarify that the 'Singularity' in the name refers to influencing the outcome of a technological singularity. --Michael Wilson


Merges

I have merged in information from the previously separate items on Emerson and Yudkowsky, which amounted to about a line of exposition and a few links. Those items now redirect to this item.

Yeah, I'd like that redirect to be removed. Actually, I'm removing it now. Eliezer Yudkowsky is wikified in many articles already. There is no reason to redirect an article about a person to their association's article. Biographical articles can be fleshed out over time, and as of now it *looks* like we don't have an article on Yudkowsky when in fact we did. A line would have been a good start for someone to write more. I'm making Eliezer Yudkowsky a bio-stub. --JoeHenzi 22:52, 21 Apr 2005 (UTC)

What is "reliably altruistic AI"?

Is it behavior that promotes the survival and flourishing of others at a cost to one's own? Wouldn't this require an AI with self-awareness and free will? But if an AI has free will, would it be moral to enslave it to serve the interests of others at the cost of its own interests? Or is this merely a nod at Asimov's science-fiction "Three Laws of Robotics"? Those are close to a robotic version of a policeman's duties, which may be seen as altruistic but may also be seen as fulfilling a contract for which one is compensated. Or does the statement merely envision a non-self-aware AI with an analog of what ethologists call biological altruism? Whatever SIAI has in mind, I think the article should either make it explicit or drop the sentence since, as it stands, it is difficult or impossible to know what it means. Blanchette 04:43, 25 September 2006 (UTC)[reply]

The SIAI web site spends many pages addressing that question. Unfortunately I don't think it can be concisely added to this article, which is already fairly long; interested parties will just have to follow the references. Perhaps someone could expand the 'promotion and support' section of the 'Friendly Artificial Intelligence' article to detail the SIAI's definition of the term better. --Michael Wilson

Michael, thanks for the hint that what the author of the phrase "reliably altruistic AI" had in mind was the same thing as "Friendly AI". A search of the SIAI website reveals that the phrase "reliably altruistic AI" is not used there, nor is the term "reliably altruistic" nor is "altruistic AI". So "reliably altruistic AI" looks like an attempt to define Friendly AI that leads one away from rather than closer to understanding SIAI's ideas. I have replaced it with "Friendly AI" and the curious will then understand that further information is available through the previous link to "Friendly artificial intelligence". --Blanchette 07:06, 22 November 2006 (UTC)[reply]

Notability

I just removed the notice about notability considering the institute has been written about in dozens of major publications. It's fairly obvious the notice doesn't belong. —Preceding unsigned comment added by 68.81.203.35 (talk) 15:21, 29 November 2008 (UTC)[reply]

Robotics attention needed

  • Update
  • Expand
  • Check sources and insert refs
  • Reassess once finished

Chaosdruid (talk) 08:30, 18 March 2011 (UTC)[reply]

Self-published papers?

Only two of the several linked papers are even slightly peer-reviewed. Should the others be linked? There is no evidence given that this work is noteworthy, either. If these extensive sections should be here, there needs to be evidence they're noteworthy and not effectively just an ad - David Gerard (talk) 11:53, 18 July 2015 (UTC)[reply]

When was SIAI->SI name change?

The SI->MIRI name change was January 2013. When was the SIAI->SI name change? I can't pin it down more closely than "some time between 2010 and 2012" - David Gerard (talk) 11:09, 10 April 2016 (UTC)[reply]

Neutrality?

User User:Zubin12 added a neutrality POV tag in this edit. However, the tag says "[r]elevant discussion may be found on the talk page," and the only discussion of neutrality issues on the talk page dates back to 2004. Per this guideline, the POV tag can be removed "[i]n the absence of any discussion." I'm going to remove the tag now, and if anyone feels the need to discuss the neutrality of the article, they can discuss it here first. --Gbear605 (talk) 00:29, 21 July 2018 (UTC)[reply]

Large amounts of bias present

The article is most likely written by those supportive of the organization and its mission, which is to be expected, but that has caused a large amount of bias to appear in the article. Not only is much of the terminology used in the article confusing and not standardized, but it also contained tons of tenuous connections, some of which I have removed.

The research section is incredibly confusing and next to impossible to follow for a layman, or even for somebody not familiar with the specific sub-culture associated with the organization. Additionally, coverage of criticism or controversy about the organization remains limited. For these reasons the article doesn't meet WP:NPOV standards. Zubin12 (talk) 00:49, 21 July 2018 (UTC)[reply]

Thanks for adding your reasoning for the tag. I'm not entirely convinced it needs to be there, but I'm okay with leaving it for now. Gbear605 (talk) 01:07, 21 July 2018 (UTC)[reply]
I don't see how any of it is biased or confusing at all. Could you give some examples? K.Bog 15:15, 28 July 2018 (UTC)[reply]
It is a blatant advertisement, full of sources by the organization and other primary sources, and quotes that are not encyclopedic. This is not an encyclopedia article Jytdog (talk) 15:40, 28 July 2018 (UTC)[reply]
I think that's completely false. The primary sources here are being used for straightforward facts just like WP:PRIMARY says; it's okay to cite research to say what the research says. The presence of primary sources doesn't make something an advertisement. And the quotes seem perfectly encyclopedic to me. K.Bog 16:06, 28 July 2018 (UTC)[reply]
Hmm, that being said, the research section does have some problems. So, I'll go ahead and fix it, and probably you will feel better about it afterwards. K.Bog 16:21, 28 July 2018 (UTC)[reply]
It is disgusting to see fancruft with shitty, bloggy sources on science topics. Video games, I understand more. This is just gross. Jytdog (talk) 17:04, 28 July 2018 (UTC)[reply]
Please keep your emotions out of it. If you're not capable of evaluating this topic reasonably then move on to other things. Plus, I was in the middle of major edits, as I noted already. It's not good etiquette to change the article at the same time. I'm going to return it to the version I am writing, because I was working on it first, and then incorporate your changes if they are still relevant and suitable. K.Bog 17:29, 28 July 2018 (UTC)[reply]
"Disgust" is more an opinion, and one quite appropriate to blatant fan editing. This needs some serious non-fan review, and scouring of primary sources - David Gerard (talk) 17:40, 28 July 2018 (UTC)[reply]
But it's not fan editing, and primary sources are acceptable in the contexts used here. If you believe it requires third party review then flag it for actual third party review - you don't get to claim that you are unbiased if you have an axe to grind, whether it's negative or positive. K.Bog 17:44, 28 July 2018 (UTC)[reply]
@Jytdog you can finish if you want, but you interrupted a major in-progress edit (this is the second time you did this to me, as I recall) and I'm going to revise it to my draft before looking at your changes. K.Bog 18:03, 28 July 2018 (UTC)[reply]
I wasn't aware that you were working over the page. That is what tags are for. Please communicate instead of edit warring. I will self revert. Jytdog (talk) 18:06, 28 July 2018 (UTC)[reply]
Thanks. I searched for the tag but couldn't remember what it was called. That's why I wrote it here on the talk page. K.Bog 18:11, 28 July 2018 (UTC)[reply]

Merge of changes

@User:Jytdog these are the significant differences between my version and your version:

  • I kept the summary quote from the AI textbook because it is an easy-to-understand general overview of the research. One person here believed the article was too technical. I don't generally agree, but this quote is good insurance in case many people do find it too technical.
  • I added Graves' article because it was published by a mainstream third party magazine and deals extensively with the subject matter.
  • I have revised/streamlined the information about forecasting to read better.
  • I have kept the AI Impacts info because it is referenced by reliable secondary sources.
  • I kept brief references to all the papers that have been published in journals or workshops. Since they were published by a third party, they are notable enough for inclusion, and they follow WP:Primary, as they are being used to back up easily verifiable information about the subject ("X works on Y"). With these inclusions we have enough material to preserve all four research subsections.

The other things that you changed are things that I agree to change. I finished the article to my current satisfaction. Let me know if there is a problem with any of this or if the merge is complete. K.Bog 19:40, 28 July 2018 (UTC)[reply]

There are still far too many primary or SPS refs. See below.

  • OK
  • bloggy but OKish
  • churnalism
  • primary/SPS

(note: sources by MIRI people are used as primary sources, where the content comments on them) Jytdog (talk) 20:04, 28 July 2018 (UTC)
Churnalism removed; I didn't notice it. Your list of primary/SPS is much too long because you are including a lot of separate organizations as well as authors. Bostrom is listed as an advisor on their website, not a member of the organization; if Russell is secondary then so is Bostrom. GiveWell and FLI are separate entities. The Humanist Hour is a separate entity. They are not 'MIRI people.' And if an outside group writes or publishes on this group, it's not a primary source, it's a secondary source. E.g., FLI is only a primary source if they talk about themselves.
Also, some of those primary sources are being used in concert with secondary sources. If a fact is cited by both a relevant primary source and a secondary source saying the same thing, what compels you to remove the primary source? Of course, it doesn't really matter to me, so I've gone ahead and removed those, as well as some others from your list. The majority of sources are secondary; however, there is no wiki policy that adjudicates how much of an article can be primary-sourced, as long as there are sufficient secondary sources. If an article can be made longer with appropriate use of primary sources, without being too long, then it's an improvement, because more accurate information is simply a good thing.
Moreover, the section is no longer written like an advertisement, so neither tag is warranted. K.Bog 20:58, 28 July 2018 (UTC)[reply]
@Jytdog: There is only a single self-published source here, the FLI report, which satisfies WP:SPS. There are only about a dozen primary sources (i.e. papers written by people at MIRI) - less than half of the sources in the whole article, and all of them are published by third parties, and otherwise in accordance with WP:Primary. So the article mainly relies on secondary sources, therefore the primary source tag is unwarranted, see? As for advertisement - is there any specific wording in it that sounds like an advertisement? K.Bog 04:02, 25 August 2018 (UTC)[reply]
I have no words. I may have some later. Jytdog (talk) 04:19, 25 August 2018 (UTC)[reply]

arbitrary break

Content like this:

He argues that the intentions of the operators are too vague and contextual to be easily coded.[1]

References

  1. ^ Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.
is still in the article. This is a primary source (a conference paper published on their own website, and branded even), and the content is randomly grabbing something out of it. Not encyclopedic. This is what fans or people with a COI do (they edit the same way). There are a bunch of other conference papers like this as well, used in the same way. Conference papers are the bottom of the barrel for scientific publishing. There are also still somewhat crappy blogs or e-zines like OZY and Nautilus.
I've explained again and again that published primary sources are perfectly encyclopedic, and OZY and Nautilus are both published secondary sources. Computer science is different from other fields: most CS work is done in workshops and conferences rather than journals, and they are not considered "bottom of the barrel", so perhaps you aren't equipped to know what is reputable or not in the field of computer science. I don't know if you've actually looked at that citation either; the content in this article is roughly summarizing the thesis. If you want there to be *more* detail in this article, that's fine - feel free to add it yourself, but that's clearly not a reason to take away any details. So, I'm at a loss to see what the problem is. Perhaps you should familiarize yourself with the use of academic sources elsewhere on Wikipedia, because this is exactly how we write things all the time. K.Bog 21:38, 25 August 2018 (UTC)[reply]
A different kind of bad:

In early 2015, MIRI's research was cited in a research priorities document accompanying an open letter on AI that called for "expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial".[1] Musk responded by funding a large AI safety grant program, with grant recipients including Bostrom, Russell, Bart Selman, Francesca Rossi, Thomas Dietterich, Manuela M. Veloso, and researchers at MIRI.[2] MIRI expanded as part of a general wave of increased interest in safety among other researchers in the AI community.[3]

References

  1. ^ Future of Life Institute (2015). Research priorities for robust and beneficial artificial intelligence (PDF) (Report). Retrieved 4 October 2015.
  2. ^ Basulto, Dominic (2015). "The very best ideas for preventing artificial intelligence from wrecking the planet". The Washington Post. Retrieved 11 October 2015.
  3. ^ Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. United States: Knopf. ISBN 978-1-101-94659-6.
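The cite error that originally rendered with reference 1 came from a deprecated, empty |coauthors= parameter in the underlying {{cite report}} markup. A cleaned-up version of that wikitext, as a sketch keeping only the populated parameters from the original, would look something like:

  {{cite report
   |author     = Future of Life Institute
   |authorlink = Future of Life Institute
   |date       = 2015
   |title      = Research priorities for robust and beneficial artificial intelligence
   |url        = http://futureoflife.org/static/data/documents/research_priorities.pdf
   |accessdate = 4 October 2015
  }}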
In the first sentence
a) the first citation is completely wrong, which I have fixed.
Sure. Technical problem; I didn't write it. Kudos to you for noticing. K.Bog 21:38, 25 August 2018 (UTC)[reply]
b) The quotation doesn't appear in the cited piece (which is not the open letter itself, but rather says of itself: "This article was drafted with input from the attendees of the 2015 conference The Future of AI: Opportunities and Challenges (see Acknowledgements), and was the basis for an open letter that has collected nearly 7000 signatures in support of the research priorities outlined here").
Sure, that was probably a quotation from some other source that got lost in the perpetual churn and hacking. This is the kind of problem that articles have when people start revising them without paying any attention to the basic process of writing content. K.Bog 21:38, 25 August 2018 (UTC)[reply]
c) The cited document doesn't mention MIRI - this content saying "MIRI's research was cited in a research priorities document..." is pure commentary by whoever wrote this, similar to the way the conference papers are used, discussed above. Again, we don't do this.
No, that is a straightforward statement of fact, which is different from commentary. I presume that we do make straightforward statements of fact all the time. K.Bog 21:38, 25 August 2018 (UTC)[reply]
In the second sentence:
a) The WaPo source doesn't mention the open letter. (I understand the goal here, but this is an invalid way to do it.)
Okay. Then rewrite it to "Musk funded". K.Bog 21:38, 25 August 2018 (UTC)[reply]
b) The following people named as getting money are not mentioned in the WaPo source: Russell, Selman, Rossi, Dietterich. However, Bostrom, Veloso, and Fallenstein at MIRI are mentioned. The WaPo piece also mentions Heather Roff Perkins, Owain Evans, and Michael Webb. But this list has nothing to do with MIRI, so what is this even doing here?
I don't have WaPo access, so I don't know. Again, I presume that the information was present across multiple sources, and got lost in one or more of your bouts of editing. K.Bog 21:38, 25 August 2018 (UTC)[reply]
The content is not even trying to summarize the source. This is editing driven by something other than the basic methods of scholarship we use here.
You don't summarize the source, you summarize the part of the source that is relevant to the subject matter of the article. Maybe you should think more about this sort of thing before throwing accusations around. K.Bog 21:40, 25 August 2018 (UTC)[reply]
The third sentence:
The source here is the one that actually is telling the whole story of this paragraph. The reference lacks a page number (another issue of basic scholarship).
"Basic scholarship"! My ebook lacks page numbers so I do not know which page it's on, but somehow you assume that I am bad at basic scholarship? That's rather arrogant on your part. Please do better in the future. K.Bog 21:38, 25 August 2018 (UTC)[reply]
It doesn't say that MIRI expanded per se; there is one sentence mentioning MIRI and it says "Major new AI safety donations enabled expanded research at our largest nonprofit sister organizations: the Machine Intelligence Research Institute in Berkeley, the Future of Humanity Institute in Oxford and the Centre for the Study of Existential Risk in Cambridge (UK)."
Expansion of research at a research group = expansion. It would be idiotic to bicker over this level of semantics. K.Bog 21:38, 25 August 2018 (UTC)[reply]
I have fixed the paragraph here. Jytdog (talk) 15:00, 25 August 2018 (UTC)[reply]

Mention of Nick Bostrom

Kbog, why are you edit-warring a tangential mention of Nick Bostrom in? If he's MIRI it's self-sourced puffery, and if he's not then it's tangential. Having lots of blue numbers after it - one of which is a Bill Gates interview on YouTube with no mention of MIRI - doesn't make it look cooler or something - David Gerard (talk) 08:17, 25 August 2018 (UTC)[reply]

That mention was in both my and Jytdog's versions of the article; clearly I am not trying to edit-war anything *into* the article. This is typical bold-revert-discuss, exactly how things are supposed to work. The relevance is that it is background for the expansion of interest in and funding for the organization. E.g., in the World War II article, the "Background" section has a mention of World War I, and that is not tangential. You are right that Gates is not relevant; I took him out of it. Puffery is non-neutral language, like "esteemed", "highly regarded", etc. - whereas this article uses plain factual language. Anyway, the current wording should make it more clear - I can now see how the previous wording might have made it appear out of place and gratuitous. K.Bog 08:28, 25 August 2018 (UTC)[reply]