Wikipedia talk:Wikipedia Signpost/Single/2019-01-31

Comments

The following is an automatically-generated compilation of all talk pages for the Signpost issue dated 2019-01-31. For general Signpost discussion, see Wikipedia talk:Signpost.

Arbitration report: An admin under the microscope (280 bytes · 💬)

Discuss this story

This snowman has an excellent hat! Nosebagbear (talk) 09:30, 31 January 2019 (UTC)[reply]

Discussion report: The future of the reference desk (1,626 bytes · 💬)

Discuss this story

  • @Pythoncoder and Kudpung: For what it's worth, the changes in CSD cats are doubly noted: here as well as in the News and notes. The text in the latter is from my summary here. ~ Amory (utc) 11:47, 31 January 2019 (UTC)[reply]
Your N&N one is more thorough so I took mine out. — pythoncoder  (talk | contribs) 14:04, 31 January 2019 (UTC)[reply]
  • "Grumbling about lack of adequate testing to ensure correct results before the well-foreseen event occurred"... That would be me.[1] :) --Guy Macon (talk) 23:46, 31 January 2019 (UTC)[reply]
  • The RfC on blocking policy just closed on 28 January 2019, but the most recent edit to Wikipedia:Blocking policy was on 26 January 2019, so the change called for by the RfC has yet to be made there. wbm1058 (talk) 20:00, 1 February 2019 (UTC)[reply]

Essay: How (7,155 bytes · 💬)

Discuss this story

  • I find it shameful that The Signpost lionized Jytdog in this manner. In last month's issue I expressed my satisfaction at seeing him gone. I suppose re-publishing this essay was an easy lay-up, forgetting that Jytdog isn't gone for no reason. It's only a matter of time until he creates a fresh-start account, anyway. Chris Troutman (talk) 07:19, 31 January 2019 (UTC)[reply]
  • I'm grateful to The Signpost for highlighting this essay. I particularly liked the part that reads "really, don't be a jerk and follow people around, bothering them". Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:54, 31 January 2019 (UTC)[reply]
    • [I've removed the outrageous personal attack here. Do not restore. Bishonen | talk 17:50, 1 February 2019 (UTC).] Qwirkle (talk) 16:37, 31 January 2019 (UTC)[reply]
”Outrageous personal attack?” Nonsense. Reader, look through the talk history and see. (Quickly, before someone covers his tracks with a revision deletion). Hyperbolic? Sure, the explicit reference to Godwin in the edit summary is a clue that that just might be deliberate.
What is outrageous is this attempt to polish the reputation of one of Wikipedia’s serial offenders just in time for his inevitable return. Qwirkle (talk) 15:15, 4 February 2019 (UTC)[reply]
  • I had to hunt for it (Incipient case mooted: Editor resigns) so I might as well share. – Athaenara 12:02, 31 January 2019 (UTC)[reply]
  • While I acknowledge that referencing Jytdog's leaving is, to an extent, inevitable, I do think the blue box user testimonials are over the top and distract from the actual essay. Especially given their placement further down the page. ZettaComposer (talk) 14:23, 31 January 2019 (UTC)[reply]
  • JYT was a mixed bag. On the one hand, he was an effective force for nipping the distortions of COI charlatans in the bud. On the other hand, he was a bully who embodied the old maxim that "the end justifies the means," and could not rein in his aggressive and biting behavior in a collaborative editing environment. He's certainly not one to be worshiped or emulated; neither was he wholly a villain. Ya gotta know where the line of appropriate behavior is and stay behind it -- anti-COI obsessiveness can become its own negative form of COI editing... Carrite (talk) 19:51, 31 January 2019 (UTC)[reply]
  • Ignoring much of the above, this is a solid expansion of WP:5P to be explainable in layman's terms. I'd be agreeable to moving the original essay to WP:Five pillars/Slightly longer version or something. --Izno (talk) 23:41, 31 January 2019 (UTC)[reply]
    • We'd need to fix the essay's egregious misrepresentations of policy first; for example, it has "We ask you... not edit content directly where you have a COI" where the policy actually says "you are strongly discouraged from editing affected articles directly". Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:28, 1 February 2019 (UTC)[reply]
      • Those seem like identical statements to me, and what's more, both phrases appear in the guidelines: "COI editors should not edit affected articles directly" (11th sentence) and "You are strongly discouraged from editing affected articles directly" (20th sentence). Regards,  Spintendo  23:03, 6 February 2019 (UTC)[reply]
    • It is something of a testament to Jytdog's presence on Wikipedia that in this matter I find myself in complete agreement with Chris Troutman and Carrite. Gamaliel (talk) 17:23, 1 February 2019 (UTC)[reply]
    • Now that you mention it, I found myself in the same dilemma as I almost never agree with Carrite, particularly on matters relating to COI and paid editing. Coretheapple (talk) 19:12, 4 February 2019 (UTC)[reply]
  • I agree with Chris troutman. This essay is shameful. Coretheapple (talk) 00:01, 3 February 2019 (UTC)[reply]
  • I will say it was disheartening to see this user being promoted in such a way. It would have been better not to highlight their work at this point. PackMecEng (talk) 16:03, 4 February 2019 (UTC)[reply]
  • I would think that dealing with plausible material from an editor who is/was disagreeable is one of Wikipedia's strong suits. If this essay could be useful, then it could be put wherever, in Wiki space, and a swarm of editors would buzz around it correcting its flaws, after which the original author would hardly recognize his own work, and his name wouldn't be anywhere on it except in the History. It wouldn't be the first time I had seen the names of dubious editors in the History of articles that I liked well enough to read and edit. Of course, then the question is, would it be useful? The essay gathers together various things which most of you probably already know, because they're already somewhere in Wiki space. Would the people who need most to read it, ever see it? I don't know, but it seems plausible. Bruce leverett (talk) 02:09, 6 February 2019 (UTC)[reply]
  • I am not looking over my shoulder any more. Editing is more fun and pleasant. Best Regards, Barbara 17:08, 27 February 2019 (UTC)[reply]

Featured content: Don't miss your great opportunity (1,547 bytes · 💬)

Discuss this story

This article is called "Don't miss your great opportunity". What would our great opportunity be? — Preceding unsigned comment added by Danleugers (talkcontribs) 15:31, 7 February 2019 (UTC)[reply]

The navy needs you in the WAVES! (featured article #8) Or if you look on the main WP:POST page, you'll see the blurb "Get yourself lost in 1730's Paris, and a wide range of other recently promoted content". - Evad37 [talk] 22:39, 7 February 2019 (UTC)[reply]
I still don't understand. Danleugers (talk) 00:55, 8 February 2019 (UTC)[reply]
It's just a bit of a joke, based on the image I selected for WAVES having those words. The "great opportunity" is to see all this newly-promoted featured content, including the super-detailed, super-high-resolution maps of 1730s Paris. Missing it isn't meant to be something to be literally worried about, as the featured content shown here will likely be around as long as Wikipedia is. - Evad37 [talk] 01:24, 8 February 2019 (UTC)[reply]

From the archives: An editorial board that includes you (507 bytes · 💬)

Discuss this story

lolwut? Gamaliel (talk) 18:36, 1 February 2019 (UTC)[reply]

  • Very odd that I'm not listed. Tony (talk) 08:01, 21 March 2019 (UTC)[reply]

Gallery: Let us build a memorial fit for such pain and suffering (3,797 bytes · 💬)

Discuss this story

Unnecessary drama — JFG talk 21:03, 31 January 2019 (UTC)[reply]
The following discussion has been closed. Please do not modify it.
  • Is there a reason why every single image on the page contains an Indian politician? Johnbod (talk) 16:34, 31 January 2019 (UTC)[reply]
    @Johnbod: Yes... please see the blurb - "A tour of some of the world's greatest memorials courtesy the Prime Minister of India." Should I mention this clearly in the article? DiplomatTesterMan (talk) 16:41, 31 January 2019 (UTC)[reply]
But it's three Indian PMs, and none from anywhere else, and this is not mentioned or explained in the lead bit. Johnbod (talk) 21:26, 31 January 2019 (UTC)[reply]
  • @Johnbod: If someone can write a Signpost article only on, say, one country or one person or one wikiproject etc... why not this. I think these are really nice images and I have also added a disclaimer at the bottom. And the tour aspect I think is like a story in a way. As far as I know The Signpost doesn't follow the same standards as Wikipedia articles do. I request you to reconsider the addition of this template here, which is inconsiderate. Regards. DiplomatTesterMan (talk) 16:47, 31 January 2019 (UTC)[reply]
    I would prefer this article is deleted rather than let the template stay here. I created this article with good intentions and if someone can only see propaganda they are mistaken. Modi and Nehru and Manmohan together is as neutral as possible, keeping in mind the tour aspect and the news factor. DiplomatTesterMan (talk) 16:52, 31 January 2019 (UTC)[reply]
  • @Bri:, @Kudpung:, is it possible to delete this article now? I really created this with the best of intentions. I don't want people making this seem like a mere and lame propaganda effort. DiplomatTesterMan (talk) 16:52, 31 January 2019 (UTC)[reply]
    I followed this up in the newsroom and teahouse and subsequently nominated for deletion. Regards. DiplomatTesterMan (talk) 18:47, 31 January 2019 (UTC)[reply]

Really impressed with this gallery. It is thought-provoking and interesting; I think it is a great example of what our editors can do when they want to be creative and to build something worthwhile about a theme. (I can think of many themes that would benefit from such a gallery.) Thanks for including it in the Signpost, I think it's one of the best editorial decisions made in a while. Risker (talk) 22:32, 31 January 2019 (UTC)[reply]

Thanks for the feedback. Kudos to DiplomatTesterMan on the selection of images (disclosure: I selected this submission for publication). The profundity of the individual confronting the weight of history, himself representing a billion more, struck me deeply. ☆ Bri (talk) 22:46, 1 February 2019 (UTC)[reply]
This is wonderful feedback. Thank you. I would also like to thank JohnBod for his insightful comment above related to how readers are understanding the article. Thank you for the comment. Regards. DiplomatTesterMan (talk) 20:26, 2 February 2019 (UTC)[reply]

Humour: See what some editors think is humour (1,560 bytes · 💬)

Discuss this story

I am not allowed to talk about such things. Best Regards, Barbara 02:05, 9 March 2019 (UTC)[reply]
  • Might Wikipedia be sued for the possible BLP violation of leaving the impression that a Sasquatch is a small man with a pointy head? I don't know whether Sasquatches can sue us for defamation, but small men with pointy heads presumably can. Tlhslobus (talk) 07:36, 6 February 2019 (UTC)[reply]
    • I have been dipping down into my vast knowledge of such things and was relieved to find out Bigfoot and Sasquatch are eligible for a drivers license in Portland. Best Regards, Barbara 20:22, 16 February 2019 (UTC)[reply]

In focus: The Collective Consciousness of Admin Userpages (8,819 bytes · 💬)

Discuss this story

  • Many thanks for the plug up there for my Wikipedia Signature Art Gallery.
I'd like to take this opportunity to exhibit the very first signature in the collection:
SonicChao talk contribs
The original was complete with links, of course, and was shared by an editor who retired several years ago.
I mentioned on a Wikipedia irc channel (circa November 2006) that I was impressed with some of the signatures I was seeing on various talk pages, and he told me about his, which he said other editors were telling him to stop using. I think that's tragic. – Athaenara 07:25, 31 January 2019 (UTC)[reply]
Athaenara, thanks for the comment. Recently I came across an interesting case with User:Flooded with them hundreds' signature and people having a problem with it. Can't remember where I saw the discussion going on but it was a long discussion. They had to change the signature or stop commenting on new users' pages... something like that. :D DiplomatTesterMan (talk) 19:24, 31 January 2019 (UTC)[reply]
@DiplomatTesterMan: I put some really egregious ones in User:Athaenara/Gallery/Beyond. – Athaenara 20:19, 4 February 2019 (UTC)[reply]
  • Thinking of admin userpage animals, I miss File:OstrichHead.JPG on User:Drmies. DMacks (talk) 14:27, 31 January 2019 (UTC)[reply]
    DMacks, I know I have missed out a lot of admins and all, trying to limit the size of the article as well as time constraints... but this one is a really good one which you pointed out. Shouldn't have missed the ostrich and the text underneath "I am an ostrich and I support these userboxen" on the page of User:Drmies :D Thanks. DiplomatTesterMan (talk) 19:19, 31 January 2019 (UTC)[reply]
  • I certainly don't feel like I've been a Wikipedian for 42.8% of my life! (has it really been that long?) —k6ka 🍁 (Talk · Contributions) 14:57, 31 January 2019 (UTC)[reply]
    K6ka, well, now it's 43%...? :) DiplomatTesterMan (talk) 19:19, 31 January 2019 (UTC)[reply]
    @DiplomatTesterMan: So accurate it's scary! —k6ka 🍁 (Talk · Contributions) 19:33, 31 January 2019 (UTC)[reply]
    Speaking of time on wikipedia, I'm totally fresh to this thing--probably little more than a week. K6ka, mind me asking how you got that data on your userspace? Coyote Codfish (talk) 02:34, 18 February 2019 (UTC)[reply]
    @Coyote Codfish: Well, there's a lot of stuff on my userpage, so you might want to be a little more specific as to what data you're referring to. (though I should admit, I did steal a lot of stuff from other people's userpages) —k6ka 🍁 (Talk · Contributions) 02:42, 18 February 2019 (UTC)[reply]
    @K6ka: The data as in, percentage of your life spent editing wikipedia; your 43% thingy. Coyote Codfish (talk) 00:39, 19 February 2019 (UTC)[reply]
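For anyone else wondering: the figure is plain arithmetic, the days since account registration divided by the days since birth, times 100. On-wiki it is usually computed live with date-math templates, but here is a minimal Python sketch of the same calculation (the helper name and the dates are purely illustrative, not K6ka's actual setup):

    from datetime import date

    def wiki_life_percentage(born, registered, today=None):
        """Percentage of a person's life spent as a registered Wikipedian."""
        today = today or date.today()
        return (today - registered).days / (today - born).days * 100

    # Illustrative dates only: an editor born on 2000-01-01 who registered
    # on 2012-01-01 had, as of this issue's date, spent about 37% of their
    # life as a Wikipedian.
    print(round(wiki_life_percentage(date(2000, 1, 1), date(2012, 1, 1),
                                     date(2019, 1, 31)), 1))  # 37.1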
  • For some reason, being a topic of a Signpost article gives me warm fuzzies. Although I like the astral landscape of Purico farther north even better. Jo-Jo Eumerus (talk, contributions) 20:47, 31 January 2019 (UTC)[reply]
    He he... I didn't make the cut. Either that, or my user page was too boring. I haven't altered it for at least five years, probably time for some humorous witticisms.  — Amakuru (talk) 21:17, 31 January 2019 (UTC)[reply]
    @Amakuru: No no, you made the cut actually :) I was going through my offline notes and I had put down your name in the word document. "My current username is a Kinyarwanda word, whose literal meaning is news." This line really was interesting :) But when trying to minimise the length I just had to leave things out, and given the time constraints, I was endlessly going through the userpages and needed to stop :D The userpages themselves were showing, and linked to, so many different parts of Wikipedia and ways of editing or seeing Wikipedia, which helped in the worldview aspect related to this place. Regards. DiplomatTesterMan (talk) 23:04, 31 January 2019 (UTC)[reply]
    @DiplomatTesterMan: Phew! That's a relief then  — Amakuru (talk) 23:23, 31 January 2019 (UTC)[reply]
    Well all I can say is good taste in images! ~ Amory (utc) 01:05, 1 February 2019 (UTC)[reply]
  • My goodness, anyone would think admins were really human! Nick Moyes (talk) 22:13, 31 January 2019 (UTC)[reply]
  • I like mine as raw as possible, for many reasons. That generated the only oppose in my RfA, of all things (I've met the user in person, he's a great guy and much more important to Wikimedia (specifically NYC) than me), but it works for me. Just in case you thought all admin userpages were especially informative. And, when I inevitably kick off (hopefully in a long time), I've put up an extremely strong editnotice for anyone thinking to edit it. The Blade of the Northern Lights (話して下さい) 03:08, 1 February 2019 (UTC)[reply]
  • Nice to see this as an actual topic in the signpost, looking forward to more like them. -- Amanda (aka DQ) 23:03, 4 February 2019 (UTC)[reply]
  • Just saw this—I'm glad someone else enjoyed my quip about the mop! But personally, I think the hatnote is my page's best feature. How many other admins have one of them?? --BDD (talk) 14:48, 1 August 2019 (UTC)[reply]
  • About the meaning of the '26' in admin 78.26's user name, our article on phonograph records claims that "where the mains supply was 60 Hz, the actual speed was 78.26 rpm." This fact is also mentioned at the top of the user's page. EdJohnston (talk) 18:20, 29 November 2019 (UTC)[reply]
  • Yup, it's true. I suppose if I move to, say, Liechtenstein, I'll have to change my user name to 77.92. How exciting! 78.26 (spin me / revolutions) 19:51, 29 November 2019 (UTC)[reply]
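For the curious, the arithmetic behind both figures: a synchronous motor turns at the mains frequency times 60 rpm (3600 rpm at 60 Hz, 3000 rpm at 50 Hz), and a gear reduction steps that down to the platter speed. A quick sketch, assuming the 46:1 and 38.5:1 reductions conventionally cited for 78 rpm turntables:

    def platter_rpm(mains_hz, gear_ratio):
        """Platter speed of a synchronous-motor turntable."""
        # Motor speed in rpm is the mains frequency (Hz) times 60 s/min,
        # divided by the gear reduction ratio.
        return mains_hz * 60 / gear_ratio

    print(round(platter_rpm(60, 46.0), 2))   # 78.26 rpm on 60 Hz mains
    print(round(platter_rpm(50, 38.5), 2))   # 77.92 rpm on 50 Hz mains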

In the media: The Signpost's investigative story recognized, Wikipedia turns 18 and gets a birthday gift from Google, and more editors are recognized (4,025 bytes · 💬)

Discuss this story

@Smallbones: Your article and the aftermath, about how WSJ and Newser did credit The Signpost whereas others did not, prompts me to ask the same question for Wikipedia. I was also creating another Wikipedia page, University of Farmington scam, just now when I stumbled on the same question: I saw that the Washington Post had said in their article that the Detroit Post had broken the story first, and I felt morally obliged to mention it in the article too. So the same question: should Wikipedia name the origin of investigative stories, even small ones? This got me thinking and prompted me to ask a question on idealab too just now. What do you think Smallbones? (This has nothing to do with supporting or opposing, merely trying to understand this situation personally and in a better way from someone who has experienced it firsthand) Regards. DiplomatTesterMan (talk) 15:51, 31 January 2019 (UTC)[reply]
It's an interesting question. I think that the Village Pump discussion has it more or less correct: Wikipedia is not news so there is *usually* no need to call out in the text who first reported it. Even so, putting the first news report as the 1st footnote is a good idea. There are times when I mention the first source in the text, but it is a 2-edged sword. If only 1 source is putting their reputation on the line and making a truly remarkable statement that will be notable in the long term, then our readers need to know that it's only 1 source, and they deserve credit or blame when they are right or otherwise. Once everybody is reporting the same thing, there's usually no reason to put the 1st source in the text of an encyclopedia.
What I really want to say here - one mention of last month's story in the WSJ is certainly enough for me. Other media were right to cite the WSJ because the WSJ did not just take my word on things. Via email they asked me some very detailed questions, and then they checked it out for themselves. There were a couple of things that I "knew" but couldn't check out in enough detail to add here. They were able to confirm them. The WSJ obviously has very good resources for fact checking, and when they checked out the story, that's the only thrill that mattered to me. Smallbones(smalltalk) 18:45, 31 January 2019 (UTC)[reply]
  • Congratulations to Smallbones. That article looked as though it required a lot of skilled background work, not to mention checking and re-checking (since a lot was at stake). Tony (talk) 11:01, 31 January 2019 (UTC)[reply]
    • +1 Nice work, Smallbones! Since the WSJ article is paywalled, here is the quote crediting the Signpost coverage: "Questions about Mr. Whitaker's claims to have been an Academic All-American were raised Monday on Wikipedia Signpost, an in-house publication for Wikipedia editors, by a user named Smallbones." -Pete Forsyth (talk) 04:09, 1 February 2019 (UTC)[reply]
  • Congratulations, Jim Henderson! The recognition is well earned! SeoMac (talk) 05:10, 5 February 2019 (UTC)[reply]

News and notes: WMF staff turntable continues to spin; Endowment gets more cash; RfA continues to be a pit of steely knives (24,896 bytes · 💬)

Discuss this story

  • It should also be noted that Gog the Mild is the first Military history WikiProject editor to earn both the Newcomer and Military Historian of the Year awards in the same year. For this honor he joins our current Lead Coordinator Peacemaker67 as one of only two editors to have earned both of these awards. For this accomplishment Gog the Mild and Peacemaker67 have received the Big Red One Badge and the title Primus Inter Pares. TomStar81 (Talk) 10:23, 31 January 2019 (UTC)[reply]
  • "where the population is starving and forced to eat garbage"—which implies everyone in the country is. Could the proposition be verified, please, especially the claim that everyone is forced to eat garbage? Tony (talk) 11:07, 31 January 2019 (UTC)[reply]
    • The proposition should be removed. It's not at all NPOV, and frankly very insulting, to describe the two main features of a country to be its "largest oil reserves in the world" and allegedly garbage-consuming population. Bilorv(c)(talk) 18:37, 31 January 2019 (UTC)[reply]
Bilorv, you are forgetting that The Signpost columns are not Wikipedia articles. That said, please have the courtesy to find out what you are talking about before insulting the magazine. Either that, or consider responding to the appeals for contributions. Kudpung กุดผึ้ง (talk) 07:28, 3 February 2019 (UTC)[reply]
I made no insult towards The Signpost, unlike you towards the people of Venezuela. I'm well aware it's not a Wikipedia article, but it should have some editorial standards. I'm also well aware of what's happening in Venezuela but describing its population as "forced to eat garbage" is ridiculous. Yes, some Venezuelans have been forced into that awful position but there are also plenty of Americans who dumpster dive because they have no other access to food. Bilorv(c)(talk) 13:43, 3 February 2019 (UTC)[reply]
Even if this is defended as the POV, by making such a generalization about an entire country, the POV comes off as defamatory (which is prohibited on Wikipedia); "Defamation, calumny, vilification, or traducement is the communication of a false statement that harms the reputation of, depending on the law of the country, an individual, business, product, group, government, religion, or nation". Likewise, The Signpost has its own content guideline that states, "Contributors should endeavor to avoid putting out material they know to be wrong or misleading." It might be worthwhile to have a Village Pump discussion, for clarification, about whether The Signpost may publish POV editorials on Wikipedia that violate these policies and guidelines; or if they are required to adhere to these guidelines and policies; or if they are required to follow their own guidelines and policies, and should the community be involved in setting these guidelines and their enforcement? Mkdw talk 20:51, 3 February 2019 (UTC)[reply]
  • I don't know why NPOV keeps getting thrown around in the reader comments. The Signpost makes no statement about neutrality. Bri.public (talk) 19:05, 31 January 2019 (UTC)[reply]
  • If that's what you think, you should get out of involvement with the Signpost. The standards on display here are appalling. Tony (talk) 00:22, 1 February 2019 (UTC)[reply]
If that's what you think, Tony1, you should either come back and write The Signpost yourself to your own rules, or be less rude in the comments section. If you are one of the editors who insist that Wikipedia standards apply here, then have the courtesy to apply them here too. Kudpung กุดผึ้ง (talk) 06:56, 3 February 2019 (UTC)[reply]
That's the kind of response we're becoming used to from those who've colonised what was once a respectable, trustable news outlet. See my comment at the Village Pump. Tony (talk) 07:41, 4 February 2019 (UTC)[reply]
As a news publication, The Signpost should strive for neutrality but it is by no means bound by NPOV, which is an article space policy. Opinionated articles that aren't listed as such are one thing though; inaccuracies such as this are another, and the latter should be avoided as much as possible. — pythoncoder  (talk | contribs) 19:13, 4 February 2019 (UTC)[reply]
  • "As documented on YouTube" has to be the most unintentionally hilarious sentence to ever appear in the Signpost. Gamaliel (talk) 14:45, 31 January 2019 (UTC)[reply]
  • Bit surprised to see I've been quoted here without any notice, and implied to have some sort of insider WMF knowledge without being asked about it. I have no idea of the circumstances regarding James' departure, but it baffles me that so many people see this departure and jump right over the most plausible explanations–that he was ready for a change after eight years at a company (practically a lifetime in the software world), or that he decided to take what appears to be an impressive (and probably better-paying) job opportunity–and assume that he was ousted or that there was something seedy happening behind the scenes. Slow news cycle, perhaps? GorillaWarfare (talk) 14:49, 31 January 2019 (UTC)[reply]
Why, GorillaWarfare, should you expect to be notified about being quoted on something you said quite publicly? You yourself clearly implied that you were privy to inside information. If you are now denying that, you are just as guilty of fuelling the rumours as anyone else on that forum that you contribute to. Kudpung กุดผึ้ง (talk) 07:38, 3 February 2019 (UTC)[reply]
Dude. It's called "common courtesy" and "journalism". You shouldn't have quoted her without letting her know, because it's clear that you took her comments out of context and the two of you have a history of less-than-stellar encounters. This is especially important in such a drama-fueling, poorly-researched, tabloid-style "article" like this.--Jorm (talk) 17:44, 3 February 2019 (UTC)[reply]
I don't expect to be notified, but a quick email to me to ask about my comment would have a) been courteous and b) saved you the awkwardness of having to be corrected by me in these comments. As for "fuelling rumours", it's a bit rich for you to accuse me of that after publishing this sensationalism. GorillaWarfare (talk) 21:07, 3 February 2019 (UTC)[reply]
Lord, you didn't even reach out to James Alexander before writing and publishing this?? GorillaWarfare (talk) 21:23, 4 February 2019 (UTC)[reply]
+1 on what both Jorm and GW say. It's not the quote itself that reads as if GW has insider information, but the introductory statement: "GorillaWarfare [...] appears to be best informed, and explains in one of her posts...". Bilorv(c)(talk) 02:26, 4 February 2019 (UTC)[reply]
+2 👍 Like (RIP Google+) on Jorm and GW's comments. Couldn't have said it better myself. — pythoncoder  (talk | contribs) 19:32, 4 February 2019 (UTC)[reply]
  • The most surprising thing is that K reads Wikipediocracy. WBGconverse 17:30, 31 January 2019 (UTC)[reply]
  • I've worked in big city IT companies with 25 to nearly 50% staff turnover, and yes, when it approached 50% the organisation was stressed. I've also been involved in public sector organisations where the staff turnover was below ten percent. If the Signpost is going to criticise the WMF for having a revolving door, it would make sense to quote what the staff turnover level is now and what it was during the troubled period a few years back. ϢereSpielChequers 15:16, 1 February 2019 (UTC)[reply]
WereSpielChequers, I don't believe that The Signpost is obliged to provide those kinds of in-depth stats; for one thing, the availability of its editors and contributors is so limited that it's either that or no article at all. I just remember that when I joined the project, there were 7 staff; nowadays there are over 300 (apparently not including the spin-off organisations) and the volunteers who provide the content that provides the donations that provide their salaries have a right to some transparency. The problems that high staff turnovers bring with them include an important loss of institutional memory and a long and steep learning curve for newcomers. The lack of any visible form of hierarchy or at least clear lines and levels of responsibility also exacerbates the situation and does not help the volunteer community to build bridges and gain confidence with the WMF. The new WMF website actually cloaks some of the employees under further levels of navigation away from the main staff page. Kudpung กุดผึ้ง (talk) 07:28, 3 February 2019 (UTC)[reply]
Dear Kudpung, having reread the article I think my beef was with the "WMF staff turntable continues to spin" headline, the actual article is just focused on three changes, two departures and an arrival. The growth from 7 to 300 is a different topic, and as a former WMUK employee one where I probably should listen rather than opine. ϢereSpielChequers 21:59, 3 February 2019 (UTC)[reply]
  • The issue here is presumably not the level of turnover from a corporate point of view, but more the concept of revolving door (politics). Wikipedia, like government, has been envisioned by many as a noble enterprise seeking fairness ... and such enterprises are seen as unmined resources by every capitalist with a pick-axe. When someone goes from working for the government of Singapore, not a free society, into Wikipedia, that should immediately raise questions in our minds. When someone comes out of Wikipedia into Twitter, we should wonder if they had any way to earn goodwill with the company first. I don't know whether these things mean anything in these particular cases -- I don't have an NSA-eye view of what lurks in the individual human soul, and if I did it wouldn't stop anything because I don't have their omnipotence either. All I know is that the utopia of free information on computers has rapidly degenerated into a dictatorship of machine ownership and control by a few people who corrupt everything, and Wikipedia is the least of what stands to be destroyed, however large that itself may be. In the end the planet itself will be passed through the flames to Moloch. Wnt (talk) 12:55, 2 February 2019 (UTC)[reply]
  • The insinuations regarding James' departure are completely unfounded, based on absolutely nothing other than wild accusations on Wikipediocracy. That blurb reads like a tabloid, not news, and it's about as accurate as one. ~ Rob13Talk 16:37, 2 February 2019 (UTC)[reply]
  • In comparison to the article below, the person's leaving "blurb" is NOT classy, it's downright tabloidy snark, jumping over "someone got a better job" to throw slime. Alanscottwalker (talk) 17:48, 2 February 2019 (UTC)[reply]
  • James continued to be involved with important community issues and meetings and remained the face and voice of the Trust and Safety team right up until his departure. The claims and implications in the piece are based merely upon rumors from Wikipediocracy and lack any sort of credibility. I am disappointed such a piece would even be allowed to be published. Mkdw talk 05:16, 3 February 2019 (UTC)[reply]
I should point out that I have no complaint about the Signpost including editorial pieces from a non-neutral point of view. However, I do care if these pieces are selected for publication and receive wide distribution if they lack credibility, especially when they are about people in our community. I saw a seemingly related discussion on Kudpung's user talk page from Bri. I look forward to the next Signpost issue explaining their "editorial policy on POV in News and notes" and specifically how they view their responsibility when publishing POV pieces that are not credible and seemingly lack integrity. If a piece concludes with "the reasons for Alexander's departure, and why he was not publicly thanked for his eight years' work remain unknown", it brings into question the policy to willingly publish rumours from Wikipediocracy as a POV editorial. Mkdw talk 06:03, 3 February 2019 (UTC)[reply]
Mkdw, I think the report above on Alexander's departure is evenly balanced. The WMF never mentioned the staff change; the only hint came from 1) the disappearance of Alexander from the staff list, and 2) a Wikipedia arbitrator claiming to have inside information which they posted on Wikipediocracy, which was apparently discovered by a Google search. When approached, a very senior (and very friendly) WMF source replied but declined to comment, and another did not respond at all. The conjecture is not of The Signpost's making, which leaves the question entirely open as to why Alexander left, whether he was lured by a better and/or more interesting offer, or had become disenchanted with the WMF, whatever; but why he was not thanked for his years of service, as most managers are when they leave, remains a mystery. Whatever his current situation is, we naturally wish him all the best. The rest is history - The Signpost moves on. Kudpung กุดผึ้ง (talk) 06:48, 3 February 2019 (UTC)[reply]
Moving where? To the gutter? Your "enquiring minds want to know" [2] [3] ending is among the most unethical trash there is. Alanscottwalker (talk) 14:18, 3 February 2019 (UTC)[reply]
It is not uncommon for people to leave the Wikimedia Foundation without an announcement being made. In fact, as I discovered while compiling my "Wikimedia timeline of events, 2014–2016", it happens fairly often. Do you also conjecture that the folks listed there who did not make or receive on-(public)-list departure emails left under suspicious circumstances? Where are their speculative Signpost articles? Or is James Alexander for some reason unique in receiving this treatment?
It also is ridiculous to claim that Wikimedia employees not commenting on the circumstances regarding someone's departure is somehow indicative of there being an issue, and not standard practice. GorillaWarfare (talk) 21:28, 3 February 2019 (UTC)[reply]
  • Like others here, I am a bit frustrated by the coverage of my departure. While you are right that a large public announcement wasn’t made on wikimedia-l or the like, I certainly did not keep it secret. Public announcements on those lists are increasingly rare nowadays (and, honestly, typically for people higher in the organization chart than I was); appropriate announcements were made in advance to those I worked with most closely, including ArbCom, the Functionaries list, CheckUsers, the stewards and the like. In addition I made a long and detailed post on Facebook where a large number of Wikimedians who I count as friends saw it and responded. I didn’t update my social media sites right away because I was waiting until I had actually started my new job (Jan 14th). The Signpost could have easily acquired that information and more by asking me directly. Both my personal username and email are well known and open. Like GorillaWarfare I received no outreach at any point.
Regarding my actual decision to leave, I’m very proud of the more than 8 years I put into the Foundation and the work I’ve done over that time to grow and professionalize our trust and safety program. While there is certainly more to be done (and more is being done!) I am confident that the team is in a good place to do it without me. After 8 years (as many have pointed out, an eternity in San Francisco terms), I wanted a bit of a new challenge. Twitter is not the first company to reach out to inquire about my working with them, but they were the most attractive. My conversations with them (and the past 3 weeks since I started) made it clear that they share my desire to try and balance the importance of free speech and transparency with safety and health online. Everyone I’m working with is there for the right reason.
I enjoyed a couple of weeks of vacation between roles, and the past couple weeks have been focused on my new job, but I have every intention to continue to be involved in the movement I love so much as a volunteer, just like I was before I started to work in it. My former colleagues at the Foundation have been generous and kind in supporting me in this new role, and I expect to continue to interact with them regularly - now in my new capacity. Oh and, while I stand by the Foundation’s policy of not detailing the reasons for behavioral investigations and actions, I also stand completely behind the “registration withdrawal” you mention. YouTube documentation (especially when it’s the recording of someone successfully trying to de-escalate an in-person situation) rarely tells the full story :). James of UR (talk) 22:08, 3 February 2019 (UTC)[reply]
  • Having complimented another piece in this issue of The Signpost, I'll now add my two cents into this one. The fact that Wikipedia has been blocked by a state-owned ISP is a big deal; however, the manner in which this section has been written "buries the lede". That Venezuelans are dietarily challenged is undoubtedly true - I could find plenty of well-respected sources that confirm it - but it is completely irrelevant to the authoritarian control of media in Venezuela. That control would have been an excellent focus of the article, and would have been entirely in keeping with the mission of The Signpost. I am disappointed that the writers went for the fluff to the point that it distracted from the issue.

    I'm even more disappointed in the skewed, sensationalistic writing about the departure of James Alexander. You wrote a story about someone without asking for their comment; that's well below the standard most people would feel The Signpost should strive to achieve. And you wrote a story that suggests something bad happened here, without any basis in fact. That's also well below the standard I think most of us expect. "News and Notes" can - and should - be written without sounding like a second-rate tabloid. You can do better, and I think we should probably expect better. Risker (talk) 17:17, 4 February 2019 (UTC)[reply]

Venezuela Org.

That statement by the Venezuela Wiki organization seems particularly classy, in what must be a very trying situation. Brave, even. Well done. You are indeed independent from the rest of us and the other Wikimedia organizations but still in the thoughts of many of us. Alanscottwalker (talk) 16:17, 2 February 2019 (UTC)[reply]

"Turntable continues to spin"[edit]

Shouldn't that be "turnstile"...? - wolf 23:04, 2 February 2019 (UTC)[reply]

Is "the company of the CANTV status" the right translation for "la empresa del Estado CANTV"?

Apokrif (talk) 12:33, 3 February 2019 (UTC)[reply]

No, the statement is a translation wreck.

The original statement from Wikipedia Venezuela is here (as Apokrif notes above). Their exact words are:

Wikimedia Venezuela, y usuarios de Wikipedia, nos han manifestado su imposibilidad de acceder a la enciclopedia libre a través del proveedor de servicios de Internet más importante de Venezuela, la empresa del Estado CANTV.

We ended up with:

... have told us their inability to access the free encyclopedia through the most important Internet service provider in Venezuela, the company of the CANTV status.

It looks like a Google translation. The correct translation is:

.. have told us their inability to access the free encyclopedia through the most important Internet service provider in Venezuela, the state-run company CANTV.

It is important to understand that Maduro controls communication (and elections) in Venezuela through control of CANTV, which is the state-run and state-owned telephone and internet provider (election results are transmitted over phone lines, and phone tapping is routine). It is not surprising that Wikipedia's article on CANTV has a deficient lead, and does not make this clear. I have fixed it.

I am relieved to see that we now have a Wikipedia Venezuela that speaks up-- in the early days of chavismo, there was clear state influence in the entire suite of Venezuela articles. SandyGeorgia (Talk) 20:48, 3 February 2019 (UTC)[reply]

News from the WMF: News from WMF (0 bytes · 💬)

Wikipedia talk:Wikipedia Signpost/2019-01-31/News from the WMF

Op-Ed: Random Rewards Rejected (71,414 bytes · 💬)

Discuss this story

  • I have nothing nice to say about CMU. While I was a grad student at Pitt, I reached out to a CMU professor about an on-wiki issue but they never took me up on my offer. Clearly, we as a community still have a problem with academics mucking about on wiki without having a proper sit-down with editors right in their own neighborhoods. Sad misconceptions like these underline the point I made toward WikiEd a few years ago when they ended the campus ambassador program. I guess we learn nothing. Chris Troutman (talk) 07:24, 31 January 2019 (UTC)[reply]
    • The frustrating thing is that there clearly are ethical ways of doing this sort of research. It's a pity that more thought wasn't put into methods beforehand. T.Shafee(Evo&Evo)talk 10:09, 31 January 2019 (UTC)[reply]
    • @Chris troutman: Small world, I was a grad student at Pitt too, though I actually did talk to prof. Kraut and it was a relatively constructive relationship (I helped them with their first wave of Wikipedia research, though sadly didn't manage to get myself credited...). I think the problem with this project is that it was too much focused on theory and too little on the benefits to our community. But, sadly, professors don't advance their careers solving Wikipedia/media problems; they do so by publishing papers, for which reviewers are more likely to complain about 'not enough theory' rather than 'not enough practical implications'. After all, that's academia. --Piotr Konieczny aka Prokonsul Piotrus| reply here 17:11, 1 February 2019 (UTC)[reply]
  • Were the barnstar bombers indef-blocked for abusing wikipedia for personal gains? - Altenmann >talk 07:49, 31 January 2019 (UTC)[reply]
  • "Random Rewards" reminds me of the badges participants in the The Wikipedia Adventure accumulate, ending up (having barely learned to sign their posts) with more than a dozen such badges on their user pages. Stolen valor! – Athaenara 08:00, 31 January 2019 (UTC)[reply]
  • Stolen valor is an interesting analogy, thanks for pointing that out Athaenara. Bri.public (talk) 19:01, 31 January 2019 (UTC)[reply]
  • For what it is worth, everybody recognizes that TWA awards are meaningless and values them as such. Barnstars are another matter entirely, seeing as they (usually) actually signify something. Compassionate727 (T·C) 19:31, 31 January 2019 (UTC)[reply]
  • We should grant retrospective honoris causa barnstars to Confucius, Omar Khayyam and Spinoza. And then claim that at least these three barnstars are recognizing something. Pldx1 (talk) 09:00, 31 January 2019 (UTC)[reply]
User:Herostratus/External Barnstars Already ahead of you. Herostratus (talk) 02:21, 3 February 2019 (UTC)[reply]
  • Well my ego has been sufficiently stroked on account of the story since the barnstar chosen as the lead image for the story is...The Press Barnstar, which I had suggested to the community some years back. I feel good about it :) Now if only I could actually earn it as opposed to have it simply materialize out of thin air... TomStar81 (Talk) 10:13, 31 January 2019 (UTC)[reply]
  • Bloody FaceBook now wants data on us? Poked and prodded is bad enough, but "FaceBook grant" says it all, and is all I need to know about CMU. Glad this was rejected.-- Dlohcierekim (talk) 10:41, 31 January 2019 (UTC)[reply]
  • "It will dilute the value of the barnstar" is the most ridiculous argument I've ever heard. It's a digital star that anyone can give for any reason- that someone else got one for a trivial reason does not devalue your own, any more than someone else getting a hug means one from your SO means less. An actual concern is that the researchers aimed to perform a study that affected, in however minor a way, a thousand people who did not explicitly consent to be in the study. I don't care if there's some buried term int he wikimedia TOS that legally indemnifies it- research on subjects without their consent is unethical, regardless of the scale of harm. --PresN 13:21, 31 January 2019 (UTC)[reply]
  • Well, just because the valuation of something is low (or perceived by some as low) does not mean that said value cannot be debased. That said, I agree with the main thrust of your comments as to what the major issues of significance here are. While it is not per se unethical to conduct research without informed consent under every single one of the various legal and institutional rubrics which define such matters, in this case (where there were non-anonymous subjects whose responses were to be tested and behaviour monitored), the approach was very obviously inappropriate in the extreme, as an ethical matter. Learning of this makes me very curious as to who is on Carnegie's institutional review board and how they possibly thought this was permissible behaviour for researchers. Indeed, to the extent any of these researchers are members of the APA, I wonder how they might feel about this attempted research, given it would have, to my eye, pretty flagrantly violated numerous provisions of the association's ethics code. I'm also curious whether the information culled, beyond being shared with the venture's partners, was also intended for use in publication with one of the APS journals, given the role Kraut serves as facilitator for the APS Wikipedia initiative. But putting the APA and APS to the side for the moment, and returning to the active parties here, this whole affair gives off an odour that does not reflect well on any of the institutions involved. It's an embarrassment that any researcher thought this could end well, and yet more indication of how disrespectful academics can be of Wikipedia and its community in particular--to say nothing of how laissez-faire they can be about their ethical responsibilities with regard to online research generally. I'd like to say I believe it likely that this affair would stain the reputation of the involved parties among serious researchers, but the truth is that I rather doubt it will. Snow let's rap 01:03, 1 February 2019 (UTC)[reply]
  • From an economic/monetary standpoint, it clearly does dilute the value of the Barnstar. Barnstars have a concrete value in that they signify the community's respect and thus lend prestige and authority to the recipient. Every instance of a barnstar created reduces the value of every instance that already exists; they would lend much more prestige and authority, for example, if only twelve people had them.[1] We accept the very small dilution every time someone hands out a barnstar because we consider the value the recipient gets out of it, particularly as a morale boost, and the value the community gets out of it, as a way of identifying reputable people, to be worth it.[2] It is harder to make that argument for thousands of random barnstars, especially in light of the fact that they would lose much of their value as a recognition: you would no longer be able to look at someone's user or usertalk page and think: "That person has a barnstar, I can trust him or her to be smart, competent and experienced" because a ton of people who are potentially none of those things now have them. Compassionate727 (T·C) 19:44, 31 January 2019 (UTC)[reply]

References

  1. ^ Your hug analogy fails because the random person's hug and your SO's hug are actually two different commodities whose value is not directly dependent on one another. The value of your SO's hug would diminish, however, if they gave hugs frequently and/or to many people versus if they only hugged you and only once every quarter. The same would also be true if you received hugs from many people, because other peoples' hugs are a substitute for your SO's hug. Of course, you may perceive your SO hugging you as different or special because they're your SO, which is why we typically don't use emotionally-laden things for examples in economics.
  2. ^ If the number of barnstars is not increasing as a percentage of the total number of users in the community, the relative value of the barnstar over time may not be decreasing at all. It would still be decreasing, however, in absolute terms.
  • Could you double-check that the paper was 455 pages? That's more of a book or a very lengthy, in-depth study, not a "paper". Liz Read! Talk! 16:23, 31 January 2019 (UTC)[reply]
  • @Liz: Without seeing the paper myself, I would guess there are more than a few pages consisting mostly or entirely of raw or uninterpreted data. Compassionate727 (T·C) 19:48, 31 January 2019 (UTC)[reply]
  • The paper in question is 10 pages long. That's a fairly common page limit in computer science conference papers. Maybe Kudpung could update the op-ed to reflect that it's not 455 pages? Cheers, Nettrom (talk) 16:24, 4 February 2019 (UTC)[reply]
  • Kudpung, you've taken some legitimate flak from the community for some of your Signpost work, but I just want to say that this op-ed piece was really well done. A to you for your work on this. – wbm1058 (talk) 21:31, 31 January 2019 (UTC)[reply]
  • So if the goal of this research was to "help Wikipedia retain editors and encourage them to do needed work", I have an idea for a better experiment. Start refilling some editors' coffee cups, and see what effect that has on their contributions. Also note the effect Trump's experiment with delayed Federal worker pay had on outcomes like the wait time at airport security checkpoints, etc. – wbm1058 (talk) 21:42, 31 January 2019 (UTC)[reply]
  • The relationship between Wikipedia and academia is an interesting one. I would have expected these communities to overlap in goals and values, but this is often not the case. Problems arise when folks come here to further their own academic interests instead of working to build an encyclopedia. In this case it is clear that the purported benefits to Wikipedia were secondary to the research agenda; if this proposal was an earnest effort to improve Wikipedia, the researchers would have worked with the community from the start to design a study that was consistent with our needs and values. Proposals from universities should be treated with the same skepticism as any other form of paid editing, but instead we seem to presume a certain level of altruism and good faith just because they are in the field of education. –dlthewave 05:32, 1 February 2019 (UTC)[reply]
  • As a retired academic researcher and Wikipedian, the problem simply is that the researcher (often a research student) and their advisors and any ethics review panel know little about Wikipedia or how it really works. They will not have encountered what Wikipedia is NOT. If the proposal had been to test the motivation value of handing out Employee of the Month awards to supermarket staff, it would be pretty obvious to all concerned that obtaining consent from supermarkets to participate would be required, as either physical access to staff or access to staff emails would be needed. The organisational barrier would be obvious. However, Wikipedia doesn’t exhibit the same obvious organisational barriers. It’s the free encyclopedia anyone can edit. Many people have no realisation that there is an organisation behind it or a community to be consulted. Even if that occurs to them, it is not obvious from our home page or links off it where they should be asking. For example, Wikipedia:Contact_us could be updated to have a heading “For researchers” to direct their enquiry somewhere useful. I note that some researchers do find their way to our research mailing list where they can get practical advice on design (and acceptability) of Wikipedia-related research. So we do have a good entry point for researcher enquiries there. Kerry (talk) 08:23, 1 February 2019 (UTC)[reply]
Kerry Raymond, I don't disagree in the slightest, but to be frank, any researcher working in the social and psychological sciences who is going to be doing research involving direct stimulus-response testing of individuals needs to have informed consent. That is research ethics 101. Believe me, I understand the complications that this creates for the research itself, particularly in the arena of social psychology, but there are reasons these principles were adopted by the scientific community in the last century, and those reasons weren't by any stretch of the imagination trivial concerns. The lack of institutional watchdogs such as you would find in using individuals as test subjects through their employment should not be treated as free license to utilize people as test subjects through open communities online, without obtaining consent. Ethics should not go out the window just because there isn't a sufficient presence to invoke practical liabilities--that's not the bedrock upon which ethical research should lie. And frankly, even a grad student not getting this is an embarrassment to the profession and a sign that their institution has flubbed the task of their basic education in this area. Taking shortcuts that cut through ethical barriers just because you are conducting research online is no more acceptable than trolling people because it's done in the anonymity of the internet. No researcher should feel one whit more comfortable conducting research in an online forum where that same experiment would be clearly unacceptable if conducted at a farmer's market. This isn't rocket science: the people one might be inclined to use as subjects online are still people, and it is still just as shady to exploit them by failing to get consent. And if the only thing keeping research in line were intermediaries with their own liabilities and legal limitations, and not the good ethical sense and training of the researchers themselves, things would get to a bad place fast, as indeed we have seen happen repeatedly in recent times, often involving one of the funding partners in this very research. It's bad enough that we have to worry about this kind of behaviour from social media players, marketing firms, and the political class, all of whom act with such disturbing impunity when it comes to privacy and consent. To allow academics to get in on that game without any sense of concern as to the implications... Snow let's rap 23:18, 1 February 2019 (UTC)[reply]
I too was shocked at what appeared to be blatant disregard for "informed consent". In spite of having all the same problems, Restivo/van de Rijt got published in what I assume to be a peer-reviewed journal. I looked at that paper and I found the following paragraph near the beginning:
This study's research protocol was approved by the Committees on Research Involving Human Subjects (IRB) at the State University of New York at Stony Brook (CORIHS #2011-1394). Because the experiment presented only minimal risks to subjects, the IRB committee determined that obtaining prior informed consent from participants was not required. Confidentiality of personally-identifiable information has been maintained in strict accordance with Human Subjects Committee requirements for privacy safeguards.
What does this mean for us? It means that as far as the above-mentioned Committees on Research Involving Human Subjects are concerned, it's OK to mess with people's heads, not to mention rending the social fabric, without telling them that they're part of an experiment. What's up with that? This looks like a bigger problem than just one naïve professor at CMU and his grad student. Bruce leverett (talk) 02:49, 4 February 2019 (UTC)[reply]
Yup--and while research institutions have been known to apply that "minimal risks" standard when culling data from pre-existing media, here the researchers were to have been directly experimenting with the subjects, providing stimuli and recording responses, and that has traditionally been seen by all institutions, professional associations, and researchers in good standing as a bright-line rule for when consent is required. Unfortunately, it would seem that the principle of social psychology (one that most any person in the contemporary world with web access is familiar with), whereby the consequences of improper behaviour online are seen as less consequential or "real" than the same conduct would be perceived to be offline, applies as much to many researchers with regard to their work as it does to random joes who drop their standards for appropriate conduct. Even though such researchers ought to be more on guard than most people about the irrationality and dangers of such a cognitive bias. Like I said, an absolute embarrassment to the profession and something that needs to be addressed. Someone should do a systematic review of that--a research topic of some actual consequence. Cripes, would I love to see the expression on one of these researchers' faces when, while at a conference, they realized they were being referenced obliquely in a breakdown of slipping ethical standards owing to an inability to resist contextual rationalization. What sweet irony that would be. Snow let's rap 09:02, 4 February 2019 (UTC)[reply]
I would love to have seen the human subjects research ethics application for this study. I wonder if a FOI request would get it? I've seen some questionable applications go through when 'commercialization' is mentioned. AugusteBlanqui (talk) 11:10, 4 February 2019 (UTC)[reply]
Well, Carnegie Mellon is a private university and thus would not typically be subject to FOIA requests directly, and while HS-IRBs are required to maintain minutes of their meetings and other documentation of their review of proposed research, they are not typically required to file these documents with OHRP or another federal oversight entity unless the agency requests it (for example, as part of a review)--and if the documents are not within the possession of such a federal agency, they typically cannot be reached by a FOIA. (There are possible exceptions where, as alluded to before, the research institution is a state entity or it used federal funds in the research). I suppose it's possible (maybe even probable) that when multiple parties sign on to an 'IRB of record' agreement (this is where the involved institutions agree to allow one IRB to investigate and authenticate compliance for joint research), if even one of the researchers involved is from a state institution (or arguably used federal funds on the research in even a trivial way), the documentation could be reachable by FOIA that way, even if the IRB in question was that of a private institution that did not use federal funds and did not file the documentation with a federal entity. But I just don't know the regulations that intimately to say for sure. My overall inclination is to say taking this approach could be an ordeal. However, some private institutions try to be more transparent than others, and I imagine some may be amenable to public requests. This is where one would begin investigating such an inquiry with regard to Carnegie Mellon. Snow let's rap 12:04, 4 February 2019 (UTC)[reply]
Snow Rise, here's a question: how would this research have ensured that no minors were used as test subjects? I mean, apparently CMU cares about this. Wikipedia-as-petri-dish seems to leave the door open for violations of child protection policies as far as informed consent goes, or general ethical guidelines for research. Might be worth the Wikimedia Foundation getting the word out. AugusteBlanqui (talk) 12:13, 4 February 2019 (UTC)[reply]
I don't see how they could have, honestly. In many (but not all) research situations involving informed consent, parents or other legal guardians can provide consent as proxies. Here though, the IRB decided it was fine to conduct this research without asking anyone for consent, whether directly or in a guardianship role. That actually raises an interesting question, because I note that Pennsylvania has a statute (Act 153) which requires all researchers likely to have contact with minors to register with three state entities. Now obviously the type of harm that statute seeks to protect against anticipates mostly in-person interactions, but looking at the statutory language itself, I see nothing that relieves the university of that responsibility when contact is restricted to online research. In any event, CMU's own internal child protection policy makes clear that "Programs and Activities Involving Minors" is defined as "any program, event, or activity involving one or more individuals under the age of 18 that is...[s]ponsored, funded and/or operated by any Carnegie Mellon administrative unit, academic unit, or student organization, regardless of location. This includes programs and activities conducted on-campus, off-campus, or remotely via the internet or other means of communication" [emphasis added]. I suppose if I were in a dialogue with the IRB, that would be a fruitful question to raise regarding their review of this research--whether all researchers who might reasonably have had contact with minors through this research had Act 153-compliant registration. Snow let's rap 12:44, 4 February 2019 (UTC)[reply]
BrownHairedGirl, SlimVirgin, I thought the two of you might be interested in some of the issues we are discussing here, particularly as we can't be sure this will be the last time we will see something of this sort. Thank you, btw, for providing a check here; we really rely on editors like you who volunteer time on both Meta and the local project as a first line of review of such matters, and you really came through for the community. Also, hi to you both--I hope you've been well? Snow let's rap 13:28, 4 February 2019 (UTC)[reply]
  • Dear fellows. Maybe we are the targets of an experiment about how proud these people are to mimic the star system of various military organizations, even down to crosses with Oak Leaves, Swords, and Diamonds (Schwertern und Brillanten). Smile, we are being studied. Pldx1 (talk) 10:02, 1 February 2019 (UTC)[reply]
  • How dreadful, to be a victim of a nefarious conspiracy of Facebook and Google to commit random acts of undeserved, unprovoked kindness. Jim.henderson (talk) 14:54, 1 February 2019 (UTC)[reply]
Random distribution of barnstars as part of a behavioral experiment is not an act of kindness though is it? AugusteBlanqui (talk) 17:00, 1 February 2019 (UTC)[reply]
We talk a lot about intent here on Wikipedia, such as in WP:NOTHERE. We are here to build an encyclopedia, and I think it is easy to underestimate and dismiss the number of obstacles that stand in the way of doing so, but the overall sum is considerable. Mkdw talk 06:17, 3 February 2019 (UTC)[reply]
  • Nothing would devalue a PhD degree more than if a post-grad student were not able to find anything better or more scientific to base his or her doctoral thesis on than Wikipedia barnstars. Kudpung กุดผึ้ง (talk) 09:10, 3 February 2019 (UTC)[reply]
@Kudpung: Amen to that. – Athaenara 02:43, 25 February 2019 (UTC)[reply]
  • Well we may be disgusted but we should hardly be surprised. Facebook and other habitual intruders-on-people's-lives are (perhaps) finally being shamed and brought to book, but the temptation of unlimited data of unprecedented precision will remain a powerful inducement to misbehave. We may need to be protected by more than angry talk pages; clearly tools could be made to detect such behaviour. Chiswick Chap (talk) 08:55, 4 February 2019 (UTC)[reply]
  • Thanks for your kind words, @Snow Rise, and for the note on my talk. I should point out that I don't monitor WP:VPM systematically, and might have missed the VPM discussion if I hadn't been alerted to it by @Vexations.
I appreciate the points raised above, and agree that there are issues around, for example, child protection. However, I personally don't think it's productive at this stage to get too locked into those details. My concern is that there should be high-level filters in place, and that those filters should cover issues such as addiction and child protection. But it makes little sense to me to start discussing the nature of the filters when there is no framework for any filtration system at all.
As I noted at the VPM discussion, I have a very low regard for the ethical controls at universities. They are now so heavily dependent on corporate funding that a corporate approach to ethics is hardwired into all their decision-making processes. The responses by @Diyiy and Robertekraut to my points at VPM about disclosure only underline the convergence between corporate ethics and those of contemporary academia.
So Wikipedia needs its own filters. But m:Research:Committee is dormant (or possibly extinct), and there seems to be nothing in its place. Instead we had this proposal brought to the community with the support of @Halfak (WMF), who is a previous research colleague of Diyiy and Robertekraut. Whatever view anyone takes of either the substantive or ethical merits of that research (see it at m:Research:The Rise and Decline), Halfak had a clear conflict of interest in assessing this research. Yet so far as I could tell from the VPM discussion, there was no other oversight of this project within the WMF.
That is clearly wrong. We need some framework for assessing research ethics either at the WMF or at en.wp, or both; yet we have neither.
I don't try to follow the internal politics of the WMF, so I have no idea whether the issues raised at the VPM discussion have led to discussions within the WMF; but I have seen nothing publicly about new structures or policy. I think that's a serious and astonishing omission, but it is how it is.
So it seems to me that en.wp needs to set up its own framework for screening research proposals. --BrownHairedGirl (talk) • (contribs) 05:55, 6 February 2019 (UTC)[reply]
@BrownHairedGirl: It would certainly be a start, though it would be a difficult situation for all involved if such a system were not built in lockstep with the WMF, which some under-informed researchers (unfamiliar with the organizational and legal complexities of this project and community) may presume is the only entity with whom they need to communicate such plans. Indeed, in this case, despite the fact that the research was to be carried out in this local community, it seems that no effort was made to seek input anywhere outside of Meta until after the proposal came under the scrutiny of rank-and-file editors. Incidentally, upon reviewing the proposal page at Meta with a closer eye, I noted for the first time today that we are weeks or months past when most of this project was supposed to have taken place, and that the first communications here (Dec. 18) came weeks after the stimulus portion of the experiment was to take place. Are we entirely certain that they did not proceed with any testing before their approach came under fire? I'd very much like to know the answer to that question.
Anyway, as I was saying, we're going to have real discord (I mean Knowledge Engine levels of animosity, disruption, and distrust) if the local community and the WMF don't operate as a unified front on an issue of this importance. But that shouldn't necessarily stop us from taking preliminary steps. I don't think we would have a difficult time rallying the community to create a policy which states that no research shall be conducted here that involves human behavioural testing of activity induced in any way by the study itself unless informed consent is sought from each user utilized in said study, and that failure to seek it is to be treated as refused consent for each such person.
Really, that needs to be in the Terms of Use to have full efficacy (one more reason we need the WMF to hear our concerns here; perhaps it does not hurt to bring WMF Legal into the conversation at this point). But, although it would be one of those very rare policies that is more precatory to outside players than useful for internal processes, creating a community consensus document as to that principle would at least have the impact of putting researchers on notice as to how such behaviour is likely to be regarded here: that would have uncertain effects with regard to later review of their research conduct by those entities (institutional or governmental) who are capable of engaging in oversight of their work at various levels. The professional and legal implications would be quite uncertain without a document that is more expressly legally operative (such as a new section/additional language in the ToU), but given the number of potential complications that might nevertheless arise from disregarding an express statement of this nature (with regard to their institutions, any professional associations to which they belong, the OHRP and other federal regulators, state Attorneys General, and their funders/commercial partners, to name a few interested entities), such a local policy might at least give researchers pause in the future about proceeding with human testing on this platform without first obtaining consent--since it seems we can't always trust them to exercise professional restraint in this regard without such a blunt statement. Snow let's rap 07:01, 6 February 2019 (UTC)[reply]
Whatever the potential legal consequences to a random 'independent' researcher who violates child protection policies by conducting behavioral research, WMF should have been well aware of child protection issues. If I conduct research in Ireland then I am required to certify whether or not it involves minors. If it does involve minors, or even worse if I admit that I would be unable to tell, then there is not a chance in hell the proposal would get through without informed consent being attached. Zero. A 'minimal risk to participants' rationale might work for adults--maybe. The WMF (and/or the researcher) would be vulnerable in multiple jurisdictions too--what passes for child protection in Pennsylvania might not work in the Netherlands, etc. I'm gobsmacked that WMF 'signed off' on this. Maybe child protection/informed consent was part of the original plan? AugusteBlanqui (talk) 10:47, 6 February 2019 (UTC)[reply]
"If I conduct research in Ireland then I am required to certify whether or not it involves minors. If it does involve minors, or even worse if I admit that I would be unable to tell, then there is not a chance in hell the proposal would get through without informed consent being attached. Zero."
It's meant to work the same way in regard to U.S. research: here are the relevant federal regulations on human testing as regards special protections for research involving minors. Note that both the assent of the child and the permission of a parent or guardian are required, and assent is defined expressly as follows: "Assent means a child's affirmative agreement to participate in research. Mere failure to object should not, absent affirmative agreement, be construed as assent." The IRB is also given the direct responsibility for ascertaining that those requirements are met. Which rather raises the question of whether the researchers here made clear in their application to the IRB that close to 25% of Wikipedians are below the required age of consent for a study of this nature (the regulations also indicate that the age of consent required is the age of consent in the jurisdiction where the research takes place). If not, it raises two unavoidable possibilities, neither one of them great: 1) they just didn't think about the ethical complications here long enough for this to occur to them, or 2) this obvious reality was known to them and they didn't disclose it. If they did in fact disclose this fact in their HS-IRB application, and the Board still let the research move forward, we're potentially talking about an even bigger problem of botched ethical controls at CMU. It's worth repeating here that Pennsylvania law also requires that any researcher having contact with children obtain a number of different certifications, and that CMU's own internal policies on this make clear that the requirement, insofar as the university is concerned, is meant to extend to those who will undertake such actions online. So I'd be very interested in knowing if these certifications were sought and granted for any party to this research who was going to (or did?) have interactions with our editors, and whether this information was presented in the IRB application or in any review meetings (also required under federal law).
And while these issues regarding minors are certainly very salient and troubling concerns with regard to the ethical controls here, I don't want to get so lost in the weeds of just one of the more eye-catching issues that it seems as if this research was ethically compliant with regard to all the other participants. Because I think it very much was not. Putting aside the question of informed consent for adults for a moment--I do not believe most researchers, institutions, or professional associations would view this as acceptable circumstances to proceed without consent, "low risk" or not, but we can table that for now--there are numerous other concerns, the most obvious of which is privacy. Experiments which test a subject's response to stimuli (especially those conducted without their consent or knowledge) are meant to be conducted with the utmost confidentiality; there are requirements under law, under the policies of particular research establishments, and under the conduct codes of professional associations. Here, there was absolutely zero possibility of their keeping the stimulus and response of the individual subjects confidential, since the testing was going to take place on what is arguably the world's single most open platform in existence, where every detail of those interactions would be freely viewable to anyone with an internet connection. It is mind-boggling to me that none of this sent up red flags to any of the players involved (the researchers, their university oversight, their financial partners) before this got to Meta and Wikipedia itself. Snow let's rap 21:30, 6 February 2019 (UTC)[reply]
The process should not be limited to an ethics review; it would also need to include community-level consent. Using an analogy raised by another commenter, imagine if this experiment was proposed to the management of a small employee-owned grocery store and it was discovered that the research was funded by a big-box chain. Even if the procedure was demonstrated to be completely harmless to the participants, it would obviously not be in the best interests of the community and would certainly be rejected.
In this case, a number of editors expressed that they felt uncomfortable with any research associated with Facebook. The researchers did not seem prepared to address these concerns and seemed to think that they only had to convince us that the risk to individual participants was low. We should develop a research approval procedure that involves both an ethics review and community approval, with the understanding that community approval may be withheld for any reason just as individual consent may be withheld for any reason. –dlthewave 14:56, 6 February 2019 (UTC)[reply]
Well, that's just a separate issue from what others have been focused on here--which is not to say it is a non-issue. But I will say that the researchers here did disclose their intentions with a Meta research page--a whole month before they intended to start their research...--though, from what I have seen they did not reach out to the local community until after concerns were raised at Meta (which was, I note with concern, well after they had planned to be already underway in their research). In any event, I share your concern at the cavalier attitude displayed with regard to Facebook being a part of this research, regardless of whether it was through a grant arrangement. If it were quite literally any other company in the world, these concerns would be lessened, but given the company's recent history on privacy issues regarding third party activities that have touched upon so many concerning behaviours, I don't think any concerns in this area can ever be described as mere hand-wringing. One way to address such concerns in the future is to make funding disclosures a requisite part of any research proposal presented at Meta, along with a requirement that such proposals be advertised in major community news spaces for each local project on which research will be conducted. Indeed, when you look at those proposals, they are laughably skimpy on the details regarding parties and oversight--and indeed, give only a partial accounting of the proposed research methodology itself. Snow let's rap 21:30, 6 February 2019 (UTC)[reply]
  • Perhaps some people should read again the contributions of @Robertekraut:, @Diyiy: at meta:Research:How role-specific rewards influence Wikipedia editors’ contribution and Wikipedia:Village pump: Research project: The effects of specific barnstars... or, perhaps, simply read them at least once. Both authors asked for permission and asked for feedback. Their intent was to select a sample of people worthy of a barnstar and then send them, or not, the said barnstar. Imo, the best method would have been to send a message that was clear about the sender: that would have avoided many side-discussions, while being endorsed by MLA@CMU would have been perceived as far more praiseworthy than endorsement by a random guy on a social network, therefore amplifying the effect (if any). One can also say that not perceiving beforehand that children don't like you playing with their pokemons... was another error. In any case, nothing was done, due to the ignited feedback. Pldx1 (talk) 13:38, 6 February 2019 (UTC)[reply]
"In any case, nothing was done, due to the ignited feed-back."
I hope that's true. It would be nice to have confirmation on that either way, given that the original proposed timeline had the researchers beginning testing on Dec. 1st, and as best I have seen, the first concerns about the research were not raised until some time after that. Snow let's rap 21:35, 6 February 2019 (UTC)[reply]
I hope that's true. It would be nice to have confirmation. Dear User:Snow Rise. If they say they haven't, then they haven't... except if they are a bunch of liars. What is your educated guess? In any case, it would be interesting to have a study of all the barnstars that were granted in a window of, say, 12 months, centered on Dec. 2018, in order to detect changes of behavior, if any. Or to have a more general study, to detect if there are monthly tendencies, or general trends, or what else. How many more theses!!! Pldx1 (talk) 12:10, 9 February 2019 (UTC)[reply]
I have to be honest: I can't really tell if you're being facetious or not. For my part, I wouldn't have had any reservation about believing them if they said they had not yet proceeded, despite the projected timeline. But they have never actually said as much, as far as I can see, in any of the related discussions, and that fact is what inspired my response to you above. However, I also trust that J-Mo would not be asserting below that they had not yet proceeded into human subject testing in such a factual and assuring manner unless he was privy to additional knowledge beyond what is present in the previous threads (which, again, do not provide clarity from either researcher as to this point).
Of course, I'm assuming a lot on good faith there--for example: 1) that Diyi and Robertkraut have not responded to pings here because they are feeling a little bruised by the whole affair and taking a wikibreak, and that they are not avoiding answering questions which they know we would not like the answers to and which they now realize could potentially have real professional consequences, 2) that J-Mo did not give assurances on a mere presumption of his, rather than predicating said assurances on additional inside knowledge that he was privy to that allows him to be certain they did not proceed with testing--but I still have enough AGF in me for each of these individuals to allow for that. I may not be blown away by every aspect of the ethical conduct of these researchers or the tone of the response of certain WMF staff members in responding to community concerns here, but my "educated guess" (as you put it) is that they would not complicate the situation further by being misleading (even through omission or assumption). Snow let's rap 15:36, 9 February 2019 (UTC)[reply]


I don't understand the purpose of this op ed. But I was involved in the discussion around this particular research proposal, I have experience performing and evaluating this kind of research, and I know the alleged perpetrators (or perhaps victims is a better term) well. So here are some facts, however unwelcome they may be to some (not all) of the people involved in this discussion, and in the related discussions on the VP and on Meta.

  1. First, to address the preceding comment: the study was not performed, and will not be performed. Robertkraut and Diyiy engaged actively and in good faith with members of English Wikipedia who expressed a range of concerns about the study, and as a result of that discussion decided not to perform the study, a decision which they conveyed promptly and through the appropriate channels. They are both competent, professional and ethical researchers and have done nothing to my knowledge that would suggest otherwise. Dr. Kraut is a founding partner for the Wikipedia Education Program, and a veteran researcher of Wikipedia and other online communities.
  2. In the case of the 2016 paper, "Supported in part by... a grant from Google" probably means that one or more of the graduate students involved had a research fellowship from Google at the time. Graduate students in technical fields are routinely funded by research grants from public and private institutions. In general, code, data, and research reports generated by researchers funded under Google (or Facebook, or NSF) grants are publicly available, and the choice of what research to perform is not dictated by the grantmaking entity. So a researcher or team might get $100k to fund 3 graduate students (tuition+stipend) for a year, based on a proposal that says something like "We will investigate the motivations of people who contribute to online communities", but Facebook/Google generally doesn't get to tell them what communities to investigate, or what research methods to use, and doesn't get special private access to their findings.
  3. It is not clear to me why Aaron Halfaker's name (and picture) are being called out here. To me, this has the appearance of an attempt to suggest that he is involved in some sort of unethical or otherwise nefarious activities to undermine Wikipedia, and is using his position within WMF to further those activities. If this is indeed what is being suggested, it is both incorrect and, frankly, kind of gross. Aaron Halfaker probably cares more about the wellbeing of English Wikipedia than any researcher you can name, and has done as much or more to further the goals of the project and benefit its participants—in both a volunteer and WMF staff capacity.
  4. IRBs assess potential for harm, according to evidence-based risk assessment criteria. Without knowing the details of CMU's IRB's response to Dr. Kraut's research proposal, I cannot comment directly on the issue of "how could the IRB let this happen?" But I can say that, to my knowledge, sending people templated expressions of gratitude is unlikely to cause them harm. Potential for online community disruption is out of the scope of IRBs, which is why we have our own documented processes for assessing potential for community disruption when we vet research proposals.
  5. Those processes are effective (as they were in this case) only insofar as the researchers are willing to comply with them (as they did in this case). However, when researchers are subjected to personal attacks and bad-faith accusations (as they were, by some editors, in this case) and drummed off Wikipedia, it undermines the authority of the process we ourselves created. A different, less professional and ethical, set of researchers who want to perform a study on Wikipedia may be less inclined to tell us about it after seeing how these researchers were treated. We as a community have very few effective protections against this kind of behavior. And we already have our hands full with legitimate vandalism, COI editors, and (at least potentially) coordinated attempts to sow disinformation in order to further the aims of state and non-state actors.
  6. The real clear and present danger that bad researchers present to Wikipedia is what they do with editors' non-public data. If someone is running a survey, or conducting interviews or otherwise collecting personal information about contributors, they need to have a clear statement of why that data is necessary to collect, how it will be used, how it will be securely stored, anonymized, etc., who has access, and how long it will be kept. Good faith, professional researchers will have clear answers to these questions. Ideally, their data practices should be verifiable by an external authority (IRBs are good at this). Bad faith researchers, or even simply naive researchers who didn't spend years in grad school, may not. Treat bad answers as red flags.
  7. Research benefits Wikipedia directly. We know what we know about the gender gap and the editor decline because of research. Intervention-style research can be an invaluable tool for figuring out how to address pressing issues like the gender gap, knowledge gaps, the editor decline, toxic cultures, vandalism, disinformation, editor burnout, and systemic bias. Some kinds of interventions don't work as well if everyone knows they're being 'intervened' with. Not having that knowledge can have a range of effects, from null/negligible to pronounced. And sometimes we don't like being 'intervened' with even if we believe the intervention isn't likely to be harmful. The potential benefits and risks of any particular intervention should be weighed based on a reasoned assessment of the nature, and scale, of both intended and unintended consequences. We have processes for that, but those processes depend entirely on good faith collaboration between researchers and community members.
  8. Research furthers Wikipedia's mission. In order to make the "sum of all human knowledge" available to everyone we need to understand how Wikipedia came to be, how it works, and even how it doesn't work. Researchers shouldn't expect that they can use "but we make science!" as a blanket excuse to do whatever they want, but the potential mission-aligned benefits of understanding this or that social or psychological feature of Wikipedia (and the people who write it) are valid points for consideration when weighing risk vs. reward.

Finally, an appeal: assume good faith of researchers who approach the Wikipedia community openly and honestly. Recognize that your own preconceived notions about what a researcher wants, or what affiliation and funding sources they have, may be incorrect or incomplete. Ask questions, but try to ask them like you'd interview a job candidate rather than like you'd interrogate a criminal suspect. You can't stop truly nefarious researchers, at least not in a systematic way. You can teach good faith researchers how to respect community norms. And you can tell them "no" and trust they will comply with community decisions. But treat them all as de-facto enemies, and you lose all the potential benefits of research while reaping none of the rewards and doing nothing to curb risks of individual harm or community disruption. J-Mo 23:51, 6 February 2019 (UTC)[reply]

Exceptionally long post: enter at risk of eye strain. Snow let's rap 04:15, 7 February 2019 (UTC)[reply]
J-Mo, I'll attempt to respond to your points in the order you have raised the issues, to the extent it is feasible while summarizing what I believe are the concerns that have been raised here.
1. First off, thank you for confirming the research did not proceed as planned before community concerns began to surface; based on the timeline presented in the Meta proposal and the date at which push-back began to develop from the community, that was not at all clear. I was hoping Robertkraut or Diyi would speak to that question, but given the strong sense of certainty you provide in your assurances, I assume you are privy to additional information (that was not made public in the Meta or VP discussion) as to where in their process they stopped. Therefore, if you are saying they ceased pursuing this project before any engagement in human subject testing, I'm sure we can of course take you at your word about that and put those concerns to bed. I would note, however, that this episode underscores a need for the local communities who are to be the subject of research to be directly informed of such proposals so that objections can be raised much sooner--rather than just before (or during) the actual research itself. If the goal is to solicit the community's feedback on the proposal, a page on Meta, absent promotion on the target project itself, is never going to be very effective in addressing concerns before they become urgent. That's the first thing that needs to change in our procedures.
That issue addressed, I must tell you that I nevertheless do not at this moment in time have as rosy an outlook as you do with regard to the professionalism and ethics displayed in how this research was approached. I'm going to hazard a guess here, based upon your previous comments in the VP discussion and your current assessment here, that you have only ever been a 'researcher' in the commercial/private meaning of that term. Because, had you ever undertaken research in the behavioural sciences in an academic setting, I believe you would better recognize why there are some serious questions here with regard to how these researchers approached issues such as informed consent and privacy protections for human subjects. IRB approval and hand-waving regarding "low risk" or not, I must tell you that approaching subjects in this fashion would not generally be seen as acceptable by most researchers in the social, behavioural, and psychological sciences--nor by most institutions and professional associations that provide oversight for such research. Indeed, going even farther, I believe this research, had it proceeded, could have run afoul of federal regulations (and potentially state statutes) governing the testing of human subjects--particularly with regard to privacy protections and (even more so) the use of underage subjects, for whom the assent of the subject and permission of their parent or guardian is always required (outside a handful of exceptions which do not apply here) and cannot be assumed. It is for exactly this reason that I wonder if the IRB was given all salient information here when making their determination, because I have a hard time seeing how they would approve this research if they knew that nearly a fourth of Wikipedia's editors are below the requisite age of independent consent that is relevant to this particular research.
You have spoken repeatedly in the previous discussions and here about other "potentially less ethical" researchers invading the project if we do not present a welcoming front to those willing to submit proposals. But it is worth noting that in every example you have provided thus far, the researchers in question at least made the individuals being approached aware of the fact that they were talking to a researcher, and sought their willing engagement with the process. While I agree that the examples you provide nevertheless present issues that we as a community (and individuals) should be concerned about, such voluntary procedures--those which use surveys and passive studies of previous (non-induced) data--are considered by oversight entities (both governmental and institutional) to be fundamentally different from the process of exposing a subject to a test stimulus and then observing their reaction. These types of experiments are generally classed separately and, even in the rare case where an exception for consent might be permissible, that exception is not made for minors, and there must be controls for the protection of the privacy of all subjects--something that would have been infeasible on a platform such as this. My main point under this first section of response is that the ethical questions raised here are not by any means trivial ones, and they aren't the type you should be eager to dismiss simply by repeatedly re-asserting that you personally think they are well balanced to achieve benefits with "low risk".
2. This is not really where my main concerns lie, and obviously I cannot speak on behalf of those who have raised these concerns. But I will say that your assertion that Facebook does not typically get privileged access to data in its grant agreements is by no means a universal principle--to be fair to you, you did throw in the "generally" there, but I think the general thrust of your statements in this area attempts to provide a degree of assurance that is undue given Facebook's historical (and indeed recent) practices--especially insofar as I presume that you have no particular knowledge as to what degree of data sharing was agreed to with regard to this particular grant. In fact, this is a big problem for us in general, and I don't see any reason why we should not require disclosures of both financial backing and data-sharing arrangements made by any researcher wishing to advance a proposal here; there's no reason they shouldn't be required to show the same level of transparency towards us as they do the review boards at their respective institutions. As a project, Wikipedia has as much skin in the game (including potential liabilities) as any party, and if researchers wish to avail themselves of this platform for their research, they can be up front with us about anything that might look like a conflict of interest or source of potential exposure for the privacy and personal data of our community members.
3. I'm not sure as to that myself. I presume (absent any information to the contrary) that the previous research used sourced data rather than direct human subject testing, and so it is not super relevant, other than Kudpung's stated purpose in showing a previous close working relationship between Mr. Halfaker and the researchers here. However, I suspect part of the reason this was raised is that it seemed as if the WMF's researchers were circling the wagons to insulate the study's researchers (and the proposal itself) from criticism. As someone who did not participate in that thread and am now on the outside looking in, I must tell you that it's very difficult to tell how much you and EpochFail were commenting as community members there and to what degree you were speaking in your WMF capacities, which I'm sure you will agree is potentially problematic. Further, there are places there where I would describe your comments as needlessly antagonistic towards expressed community concerns. I understand that this was obviously motivated by a desire to protect a pair of individuals whom you respect and whom you felt had acted in good faith. But the most ideal way of doing this is not to accuse others of bad faith, as you did during that discussion and elsewhere. I see no one in the entirety of that thread who seemed to be acting out of anything but concern for the project and its users, or in any other way which would entail "bad faith" as that term is usually used on this project (vandalism, trolling, gamesmanship, sockpuppetry, etc.). Even where they were focusing on issues that you and I may agree were not the most salient issues to contemplate (devaluation of the barnstar and so forth), I wouldn't say that they were completely irrelevant concerns for the community--and, in any event, the community members were obviously being sincere. I think that "bad faith" wording and some other comments represent poorly chosen language on your part that may have served to inflame perceptions of bias on the part of the WMF researchers in this situation.
4. You're right, we don't have access to the IRB's thinking, and in my view, that's another problem. Before we ever consider human testing research on this project again, we may very well consider requiring that exact information; IRBs are required by law to keep minutes of their research review meetings in a very prescribed format, and we could consider requiring that a copy of these documents be presented with any human subject proposals in the future. After all, we have our own ethical obligations to our community members, and I see no reason why we should have less access to the researchers' accounting of the ethical questions raised by their study and how they intend to control for them, for the purposes of making our own decision on whether to allow it to proceed.
5. & 6. I'm not sure just how effective our procedures are here; from where I am standing, they could do with some strengthening, and this situation demonstrates precisely why. I also feel like you are presenting us with a false choice here between relaxing protections to make "good actors" feel welcome and actively driving them underground. First off, if they are truly good actors as that term should be applied to behavioural researchers, they wouldn't be inclined to subvert our rules and normal ethical considerations based on how warm a welcome they receive. If a given researcher can't be trusted to comport themselves with our community rules and the mandates of their own profession with regard to ethical research, simply because they are concerned about being grilled here, and they would consider just ignoring our processes instead... then they certainly can't be trusted with the much more demanding responsibilities of protecting user confidentiality and seeking proper informed consent--and they therefore aren't the type of person we should be tailoring our approach towards in any event.
Also, I disagree that there's nothing to be done about the "bad actors". Where they are simply hoovering up information, of course we can't stop that, but there's also no reason to stop them, insofar as everyone who participates on this project agrees to allow their contributions and statements to be freely accessible and usable for almost all purposes. But where an outside researcher is trying to trigger a response, that kind of activity is going to leave a record, and people are going to notice suspicious patterns. If "nefarious" researchers attempt this without having their projects approved by the community and seeking informed consent, we can shut them down just like we would any other WP:disruptive user. And supposing that they did get past our guard and engage in shady behaviour, and they are academics: as soon as they publish or present findings, they can be reported at many different levels of oversight that will have potential professional complications for them--depending on the exact nature of their conduct. Commercial researchers, of course, are a little less amenable to such controls unless they do something blatantly illegal or which would bring them negative press. But commercial researchers (to the extent they come here, which I think is uncertain) are probably not likely to come through the approval process in any event, and there's no point in trying to adjust it to their whims.
7. & 8. Good faith is a two way street. I can't disagree with you that research is of vital importance to us, but it has to be approached in a non-disruptive and ethical fashion or else it will be a net negative to the project. And no researchers should ever be allowed to waltz through the front door to conduct whatever tests they want on our contributors, based solely on their own idiosyncratic analysis (not even when informed by their own IRB process) as to whether the risks and consequences outweigh the benefits. The community should always conduct its own analysis of that question, and should be afforded a high degree of transparency with regard to the researchers' intentions, methodologies, previous institutional reviews, the uses to which their data will be put (including especially with whom and in what way confidential information will be shared), and any potential conflicts of interest. And regardless of the answers to those questions, where their work involves treating our community members as test subjects, we should always, without a single exception, require that they get informed consent from anybody they wish to utilize in that fashion. For anyone who is unwilling to meet those requirements, WP:NOTLAB applies and accounts operating outside of our policies should be shut down, same as with any other disruptive user.
As to your final paragraph I agree with you thoroughly. I would only add that I don't think anyone has treated the researchers here as the "de-facto enemy". The concerns raised have been reasonable and in keeping with the objective of learning from this episode and designing more robust procedures that will benefit both researchers and the community. Nobody's objective, insofar as I have seen, is to "shame" anyone. But there are serious questions raised here as regards respecting the privacy and autonomy rights of volunteers to this project. They are necessary questions to contemplate whenever we consider approving research on this platform. Snow let's rap 04:15, 7 February 2019 (UTC)[reply]
Snow Rise I was unable to keep my response brief, and it felt weird to post another wall of text in the "comments" section of this Op Ed, so I decided to post it on your talkpage instead. Cheers, J-Mo 00:56, 11 February 2019 (UTC)[reply]
  • It is abundantly clear that this "experiment" represented disruptive behavior. Forcing editors to think about restrictions on IP barnstars has effects. Some of the disruption continues even here, with people talking about some kind of "filters" -- which would be truly turning a temporary plague of computers invading human behavior into a permanent case of the same disease. While it may pay off to be more watchful, I hope people will resist giving any more power to computers. The key thing for us to take home from this is that the Golden Age of Vandalism is long behind us. There was a time when people would transclude an HTML table with colored cells to put Goatse on the Main Page for the sheer joy of it. But now, many of our vandals are drawing paychecks. They have a purpose for seemingly mindless irrational behavior, and it may take a lot of imagination to try to riddle out what that purpose could possibly be. Wnt (talk) 15:59, 23 February 2019 (UTC) P.S. I've largely ignored J-Mo's comment above because he seems to describe only partial knowledge -- if the research did take place, he might simply not have heard about it, so it is not really a meaningful denial AFAICT.[reply]

There was actually a similar experiment at the German Wikipedia that showed that barnstars increase new editor retention. I wonder what the difference was between the two such that only one was allowed to go forward. In addition, there's also m:Research:Testing capacity of expressions of gratitude to enhance experience and motivation of editors. What determines whether or not a given experiment like this will be permitted? By the way, should this talk page be broken into sections? It seems to have gotten pretty long. Care to differ or discuss with me? The Nth User 16:15, 28 April 2019 (UTC)[reply]

Op-Ed: Random Rewards Rejected (71,414 bytes · 💬)[edit]

Discuss this story

  • I have nothing nice to say about CMU. While I was a grad student at Pitt, I reached out to a CMU professor about an on-wiki issue but they never took me up on my offer. Clearly, we as a community still have a problem with academics mucking about on wiki without having a proper sit-down with editors right in their own neighborhoods. Sad misconceptions like these underline the point I made toward WikiEd a few years ago when they ended the campus ambassador program. I guess we learn nothing. Chris Troutman (talk) 07:24, 31 January 2019 (UTC)[reply]
    • The frustrating thing is that there clearly are ethical ways of doing this sort of research. It's a pity that more thought wasn't put into methods beforehand. T.Shafee(Evo&Evo)talk 10:09, 31 January 2019 (UTC)[reply]
    • @Chris troutman: Small world, I was a grad student at Pitt too, though I actually did talk to Prof. Kraut and it was a relatively constructive relationship (I helped them with their first wave of Wikipedia research, though sadly didn't manage to get myself credited...). I think the problem with this project is that it was too much focused on theory and too little on the benefits to our community. But, sadly, professors don't advance their careers solving Wikipedia/media problems; they do so by publishing papers, for which reviewers are more likely to complain about 'not enough theory' than 'not enough practical implications'. After all, that's academia. --Piotr Konieczny aka Prokonsul Piotrus| reply here 17:11, 1 February 2019 (UTC)[reply]
  • Were the barnstar bombers indef-blocked for abusing Wikipedia for personal gain? - Altenmann >talk 07:49, 31 January 2019 (UTC)[reply]
  • "Random Rewards" reminds me of the badges participants in the The Wikipedia Adventure accumulate, ending up (having barely learned to sign their posts) with more than a dozen such badges on their user pages. Stolen valor! – Athaenara 08:00, 31 January 2019 (UTC)[reply]
  • Stolen valor is an interesting analogy, thanks for pointing that out Athaenara. Bri.public (talk) 19:01, 31 January 2019 (UTC)[reply]
  • For what it is worth, everybody recognizes that TWA awards are meaningless and values them as such. Barnstars are another matter entirely, seeing as they (usually) actually signify something. Compassionate727 (T·C) 19:31, 31 January 2019 (UTC)[reply]
  • We should grant retrospective honoris causa barnstars to Confucius, Omar Khayyam and Spinoza. And then claim that at least these three barnstars are recognizing something. Pldx1 (talk) 09:00, 31 January 2019 (UTC)[reply]
User:Herostratus/External Barnstars Already ahead of you. Herostratus (talk) 02:21, 3 February 2019 (UTC)[reply]
  • Well my ego has been sufficiently stroked on account of the story since the barnstar chosen as the lead image for the story is...The Press Barnstar, which I had suggested to the community some years back. I feel good about it :) Now if only I could actually earn it as opposed to have it simply materialize out of thin air... TomStar81 (Talk) 10:13, 31 January 2019 (UTC)[reply]
  • Bloody FaceBook now wants data on us? Being poked and prodded is bad enough, but "FaceBook grant" says it all, and is all I need to know about CMU. Glad this was rejected.-- Dlohcierekim (talk) 10:41, 31 January 2019 (UTC)[reply]
  • "It will dilute the value of the barnstar" is the most ridiculous argument I've ever heard. It's a digital star that anyone can give for any reason- that someone else got one for a trivial reason does not devalue your own, any more than someone else getting a hug means one from your SO means less. An actual concern is that the researchers aimed to perform a study that affected, in however minor a way, a thousand people who did not explicitly consent to be in the study. I don't care if there's some buried term int he wikimedia TOS that legally indemnifies it- research on subjects without their consent is unethical, regardless of the scale of harm. --PresN 13:21, 31 January 2019 (UTC)[reply]
  • Well, just because the valuation of something is low (or perceived by some as low) does not mean that said value cannot be debased. That said, I agree with the main thrust of your comments, as to what the major issues of significance here are. While it is not per se unethical to conduct research without informed consent under every single one of the various legal and institutional rubrics which define such matters, in this case (where there were non-anonymous subjects whose responses were to be tested and behaviour monitored), the approach was very obviously inappropriate in the extreme, as an ethical matter. Learning of this makes me very curious as to who is on Carnegie's institutional review board and how they possibly thought this was permissible behaviour for researchers. Indeed, to the extent any of these researchers are members of the APA, I wonder how they might feel about this attempted research, given it would have, to my eye, pretty flagrantly violated numerous provisions of the association's ethics code. I'm also curious whether the information culled, beyond being shared with the venture's partners, was also intended for use in publication in one of the APS journals, given the role Kraut serves as facilitator for the APS Wikipedia initiative. But putting the APA and APS to the side for the moment, and returning to the active parties here, this whole affair gives off an odour that does not reflect well on any of the institutions involved. It's an embarrassment that any researcher thought this could end well, and yet more indication of how disrespectful academics can be of Wikipedia and its community in particular--to say nothing of how laissez-faire they can be about their ethical responsibilities with regard to online research generally. I'd like to say I believe it likely that this affair would stain the reputation of the involved parties among serious researchers, but the truth is that I rather doubt it will. Snow let's rap 01:03, 1 February 2019 (UTC)[reply]
  • From an economic/monetary standpoint, it clearly does dilute the value of the Barnstar. Barnstars have a concrete value in that they signify the community's respect and thus lend prestige and authority to the recipient. Every instance of a barnstar created reduces the value of every instance that already exists; they would lend much more prestige and authority, for example, if only twelve people had them.[1] We accept the very small dilution every time someone hands out a barnstar because we consider the value the recipient gets out of it, particularly as a morale boost, and the value the community gets out of it, as a way of identifying reputable people, to be worth it.[2] It is harder to make that argument for thousands of random barnstars, especially in light of the fact that they would lose much of their value as a recognition: you would no longer be able to look at someone's user or usertalk page and think: "That person has a barnstar, I can trust him or her to be smart, competent and experienced" because a ton of people who are potentially none of those things now have them. Compassionate727 (T·C) 19:44, 31 January 2019 (UTC)[reply]

References

  1. ^ Your hug analogy fails because the random person's hug and your SO's hug are actually two different commodities whose value is not directly dependent on one another. The value of your SO's hug would diminish, however, if they gave hugs frequently and/or to many people versus if they only hugged you and only once every quarter. The same would also be true if you received hugs from many people, because other peoples' hugs are a substitute for your SO's hug. Of course, you may perceive your SO hugging you as different or special because they're your SO, which is why we typically don't use emotionally-laden things for examples in economics.
  2. ^ If the number of barnstars is not increasing as a percentage of the total number of users in the community, the relative value of the barnstar over time may not be decreasing at all. It would still be decreasing, however, in absolute terms.
  • You could double-check that the paper was 455 pages? That's more of a book or a very lengthy, in-depth study, not a "paper". Liz Read! Talk! 16:23, 31 January 2019 (UTC)[reply]
  • @Liz: Without seeing the paper myself, I would guess there are more than a few pages consisting mostly or entirely of raw or uninterpreted data. Compassionate727 (T·C) 19:48, 31 January 2019 (UTC)[reply]
  • The paper in question is 10 pages long. That's a fairly common page limit in computer science conference papers. Maybe Kudpung could update the op-ed to reflect that it's not 455 pages? Cheers, Nettrom (talk) 16:24, 4 February 2019 (UTC)[reply]
  • Kudpung, you've taken some legitimate flak from the community for some of your Signpost work, but I just want to say that this op-ed piece was really well done. A to you for your work on this. – wbm1058 (talk) 21:31, 31 January 2019 (UTC)[reply]
  • So if the goal of this research was to "help Wikipedia retain editors and encourage them to do needed work", I have an idea for a better experiment. Start refilling some editors' coffee cups, and see what effect that has on their contributions. Also note the effect Trump's experiment with delayed Federal worker pay had on outcomes like the wait time at airport security checkpoints, etc. – wbm1058 (talk) 21:42, 31 January 2019 (UTC)[reply]
  • The relationship between Wikipedia and academia is an interesting one. I would have expected these communities to overlap in goals and values, but this is often not the case. Problems arise when folks come here to further their own academic interests instead of working to build an encyclopedia. In this case it is clear that the purported benefits to Wikipedia were secondary to the research agenda; if this proposal was an earnest effort to improve Wikipedia, the researchers would have worked with the community from the start to design a study that was consistent with our needs and values. Proposals from universities should be treated with the same skepticism as any other form of paid editing, but instead we seem to presume a certain level of altruism and good faith just because they are in the field of education. –dlthewave 05:32, 1 February 2019 (UTC)[reply]
  • Speaking as a retired academic researcher and Wikipedian: the problem is simply that the researcher (often a research student), their advisors, and any ethics review panel know little about Wikipedia or how it really works. They will not have encountered what Wikipedia is NOT. If the proposal had been to test the motivation value of handing out Employee of the Month awards to supermarket staff, it would be pretty obvious to all concerned that obtaining consent from supermarkets to participate would be required, as either physical access to staff or access to staff emails would be needed. The organisational barrier would be obvious. However, Wikipedia doesn't exhibit the same obvious organisational barriers. It's the free encyclopedia anyone can edit. Many people have no realisation that there is an organisation behind it or a community to be consulted. Even if that occurs to them, it is not obvious from our home page or links off it where they should be asking. For example, Wikipedia:Contact_us could be updated to have a heading "For researchers" to direct their enquiry somewhere useful. I note that some researchers do find their way to our research mailing list, where they can get practical advice on the design (and acceptability) of Wikipedia-related research. So we do have a good entry point for researcher enquiries there. Kerry (talk) 08:23, 1 February 2019 (UTC)[reply]
Kerry Raymond, I don't disagree in the slightest, but to be frank, any researcher working in the social and psychological sciences who is going to be doing research involving direct stimulus-response testing of individuals needs to have informed consent. That is research ethics 101. Believe me, I understand the complications that this creates for the research itself, particularly in the arena of social psychology, but there are reasons these principles were adopted by the scientific community in the last century, and those reasons weren't by any stretch of the imagination trivial concerns. The lack of institutional watchdogs such as you would find in using individuals as test subjects through their employment should not be treated as free license to utilize people as test subjects through open communities online, without obtaining consent. Ethics should not go out the window just because there isn't a sufficient presence to invoke practical liabilities--that's not the bedrock upon which ethical research should rest. And frankly, even a grad student not getting this is an embarrassment to the profession and a sign that their institution has flubbed the task of their basic education in this area. Taking shortcuts that cut through ethical barriers just because the research is conducted online is no more acceptable than trolling people is acceptable because it's done in the anonymity of the internet. No researcher should feel one whit more comfortable conducting research in an online forum where that same experiment would be clearly unacceptable if conducted at a farmer's market. This isn't rocket science: the people one might be inclined to use as subjects online are still people, and it is still just as shady to exploit them by failing to get consent. And if the only thing keeping research in line were intermediaries with their own liabilities and legal limitations, and not the good ethical sense and training of the researchers themselves, things will get to a bad place fast, as indeed we have seen happen repeatedly in recent times, often involving one of the funding partners in this very research. It's bad enough that we have to worry about this kind of behaviour from social media players, marketing firms, and the political class, all of whom act with such disturbing impunity when it comes to privacy and consent. To allow academics to get in on that game without any sense of concern as to the implications... Snow let's rap 23:18, 1 February 2019 (UTC)[reply]
I too was shocked at what appeared to be blatant disregard for "informed consent". In spite of having all the same problems, Restivo/van de Rijt got published in what I assume to be a peer-reviewed journal. I looked at that paper and I found the following paragraph near the beginning:
This study's research protocol was approved by the Committees on Research Involving Human Subjects (IRB) at the State University of New York at Stony Brook (CORIHS #2011-1394). Because the experiment presented only minimal risks to subjects, the IRB committee determined that obtaining prior informed consent from participants was not required. Confidentiality of personally-identifiable information has been maintained in strict accordance with Human Subjects Committee requirements for privacy safeguards.
What does this mean for us? It means that as far as the above-mentioned Committees on Research Involving Human Subjects are concerned, it's OK to mess with people's heads, not to mention rending the social fabric, without telling them that they're part of an experiment. What's up with that? This looks like a bigger problem than just one naïve professor at CMU and his grad student. Bruce leverett (talk) 02:49, 4 February 2019 (UTC)[reply]
Yup--and while research institutions have been known to apply that "minimal risks" standard when culling data from pre-existing media, here the researchers were to have been directly experimenting with the subjects, providing stimuli and recording responses, and that has traditionally been seen by all institutions, professional associations, and researchers in good standing as a bright-line rule for when consent is required. Unfortunately, it would seem that the principle of social psychology (one that most any person in the contemporary world with web access is familiar with), whereby the consequences of improper behaviour online are seen as less consequential or "real" than the same conduct would be perceived to be offline, applies as much to many researchers with regard to their work as it does to random joes who drop their standards for appropriate conduct. Even though such researchers ought to be more on guard than most people about the irrationality and dangers of such a cognitive bias. Like I said, an absolute embarrassment to the profession and something that needs to be addressed. Someone should do a systematic review of that--a research topic of some actual consequence. Cripes, would I love to see the expression on one of these researchers' faces when, while at a conference, they realized they were being referenced obliquely in a breakdown of slipping ethical standards owing to an inability to resist contextual rationalization. What sweet irony that would be. Snow let's rap 09:02, 4 February 2019 (UTC)[reply]
I would love to have seen the human subjects research ethics application for this study. I wonder if a FOI request would get it? I've seen some questionable applications go through when 'commercialization' is mentioned. AugusteBlanqui (talk) 11:10, 4 February 2019 (UTC)[reply]
Well, Carnegie Mellon is a private university and thus would not typically be subject to FOIA requests directly, and while HS-IRBs are required to maintain minutes of their meetings and other documentation of their review of proposed research, they are not typically required to file these documents with the OHRP or another federal oversight entity unless the agency requests them (for example, as part of a review)--and if the documents are not within the possession of such a federal agency, they typically cannot be reached by a FOIA. (There are possible exceptions where, as alluded to before, the research institution is a state entity or it used federal funds in the research). I suppose it's possible (maybe even probable) that when multiple parties sign on to an 'IRB of record' agreement (this is where the involved institutions agree to allow one IRB to investigate and authenticate compliance for joint research), if even one of the researchers involved is from a state institution (or arguably used federal funds on the research in even a trivial way), the documentation could be reachable by FOIA that way, even if the IRB in question was that of a private institution that did not use federal funds and did not file the documentation with a federal entity. But I just don't know the regulations that intimately to say for sure. My overall inclination is to say taking this approach could be an ordeal. However, some private institutions try to be more transparent than others, and I imagine some may be amenable to public requests. This is where one would begin investigating such an inquiry with regard to Carnegie Mellon. Snow let's rap 12:04, 4 February 2019 (UTC)[reply]
Snow Rise, here's a question: how would this research have ensured that no minors were used as test subjects? I mean, apparently CMU cares about this. Wikipedia-as-petri-dish seems to leave the door open for violations of child protection policies as far as informed consent goes, or of general ethical guidelines for research. Might be worth the Wikimedia Foundation getting the word out. AugusteBlanqui (talk) 12:13, 4 February 2019 (UTC)[reply]
I don't see how they could have, honestly. In many (but not all) research situations involving informed consent, parents or other legal guardians can provide consent as proxies. Here, though, the IRB decided it was fine to conduct this research without asking anyone for consent, whether directly or in a guardianship role. That actually raises an interesting question, because I note that Pennsylvania has a statute (Act 153) which requires all researchers likely to have contact with minors to register with three state entities. Now obviously the type of harm that statute seeks to protect against mostly anticipates in-person interactions, but looking at the statutory language itself, I see nothing that relieves the university of that responsibility when contact is restricted to online research. In any event, CMU's own internal child protection policy makes clear that "Programs and Activities Involving Minors" is defined as any program, event, or activity involving one or more individuals under the age of 18 that is...[s]ponsored, funded and/or operated by any Carnegie Mellon administrative unit, academic unit, or student organization, regardless of location. This includes programs and activities conducted on-campus, off-campus, or remotely via the internet or other means of communication" [emphasis added]. I suppose if I were in a dialogue with the IRB, that would be a fruitful question to raise regarding their review of this research--whether all researchers who might reasonably have had contact with minors through this research had Act 153-compliant registration. Snow let's rap 12:44, 4 February 2019 (UTC)[reply]
BrownHairedGirl, SlimVirgin, I thought the two of you might be interested in some of the issues we are discussing here, particularly as we can't be sure this will be the last time we will see something of this sort. Thank you, btw, for providing a check here; we really rely on editors like you who volunteer time on both Meta and the local project as a first line of review of such matters, and you really came through for the community. Also, hi to you both--I hope you've been well? Snow let's rap 13:28, 4 February 2019 (UTC)[reply]
  • Dear fellows. Maybe we are the targets of an experiment about "how proud these people are to mimic the star system of various military organizations", even crosses with Oak Leaves, Swords, and Diamonds. Better smile, we are being studied. Pldx1 (talk) 10:02, 1 February 2019 (UTC)[reply]
  • How dreadful, to be a victim of a nefarious conspiracy of Facebook and Google to commit random acts of undeserved, unprovoked kindness. Jim.henderson (talk) 14:54, 1 February 2019 (UTC)[reply]
Random distribution of barnstars as part of a behavioral experiment is not an act of kindness though is it? AugusteBlanqui (talk) 17:00, 1 February 2019 (UTC)[reply]
We talk a lot about intent here on Wikipedia, such as in WP:NOTHERE. We are here to build an encyclopedia, and I think it is easy to underestimate and dismiss the number of obstacles that stand in the way of doing so, but the overall sum is considerable. Mkdw talk 06:17, 3 February 2019 (UTC)[reply]
  • Nothing would devalue a PhD degree more than if a post-grad student were not able to find anything better or more scientific to base his or her doctoral thesis on than Wikipedia barnstars. Kudpung กุดผึ้ง (talk) 09:10, 3 February 2019 (UTC)[reply]
@Kudpung: Amen to that. – Athaenara 02:43, 25 February 2019 (UTC)[reply]
  • Well we may be disgusted but we should hardly be surprised. Facebook and other habitual intruders-on-people's-lives are (perhaps) finally being shamed and brought to book, but the temptation of unlimited data of unprecedented precision will remain a powerful inducement to misbehave. We may need to be protected by more than angry talk pages; clearly tools could be made to detect such behaviour. Chiswick Chap (talk) 08:55, 4 February 2019 (UTC)[reply]
  • Thanks for your kind words, @Snow Rise, and for the note on my talk. I should point out that I don't monitor WP:VPM systematically, and might have missed the VPM discussion if I hadn't been alerted to it by @Vexations.
I appreciate the points raised above, and agree that there are issues around, for example, child protection. However, I personally don't think it's productive at this stage to get too locked into those details. My concern is that there should be high-level filters in place, and that those filters should include issues such as addiction and child protection. But it makes little sense to me to start discussing the nature of the filters when there is no framework for any filtration system at all.
As I noted at the VPM discussion, I have a very low regard for the ethical controls at universities. They are now so heavily dependent on corporate funding that a corporate approach to ethics is hardwired into all their decision-making processes. The responses by @Diyiy and Robertekraut to my points at VPM about disclosure only underline the convergence between corporate ethics and those of contemporary academia.
So Wikipedia needs its own filters. But m:Research:Committee is dormant (or possibly extinct), and there seems to be nothing in its place. Instead we had this proposal brought to the community with the support of @Halfak (WMF), who is a previous research colleague of Diyiy and Robertekraut. Whatever view anyone takes of either the substantive or ethical merits of that research (see it at m:Research:The Rise and Decline), Halfak had a clear conflict of interest in assessing this research. Yet so far as I could tell from the VPM discussion, there was no other oversight of this project within the WMF.
That is clearly wrong. We need some framework for assessing research ethics either at the WMF or at en.wp, or both; yet we have neither.
I don't try to follow the internal politics of the WMF, so I have no idea whether the issues raised at the VPM discussion have led to discussions within the WMF; but I have seen nothing publicly about new structures or policy. I think that's a serious and astonishing omission, but it is how it is.
So it seems to me that en.wp needs to set up its own framework for screening research proposals. --BrownHairedGirl (talk) • (contribs) 05:55, 6 February 2019 (UTC)[reply]
@BrownHairedGirl: It would certainly be a start, though it would be a difficult situation for all involved if such a system were not built in lockstep with the WMF, which some under-informed researchers (unfamiliar with the organizational and legal complexities of this project and community) may presume is the only entity with whom they need to communicate such plans. Indeed, in this case, despite the fact that the research was to be carried out in this local community, it seems that no effort was made to seek input anywhere outside of Meta until after the proposal came under the scrutiny of rank-and-file editors. Incidentally, upon reviewing the proposal page at Meta with a closer eye, I noted for the first time today that we are weeks or months past the time when most of this project was supposed to have taken place, and that the first communications here (Dec. 18) came weeks after the stimulus portion of the experiment was to have occurred. Are we entirely certain that they did not proceed with any testing before their approach came under fire? I'd very much like to know the answer to that question.
Anyway, as I was saying, we're going to have real discord (I mean Knowledge Engine levels of animosity, disruption, and distrust) if the local community and the WMF don't operate as a unified front on an issue of this importance. But that shouldn't necessarily stop us from taking preliminary steps. I don't think we would have a difficult time rallying the community to create a policy which states that no research shall be conducted here which involves human behavioural testing relating to activities in any way induced by the study itself, unless informed consent is sought from each user utilized in said study, and that failure to do so is to be treated as refused consent for each such person.
Really, that needs to be in the Terms of Use to have full efficacy (one more reason we need the WMF to hear our concerns here; perhaps it does not hurt to bring WMF Legal into the conversation at this point). But, although it would be one of those very rare policies that is more precatory to outside players than useful for internal processes, creating a community consensus document as to that principle would at least have the impact of putting researchers on notice as to how such behaviour is likely to be regarded here: that would have uncertain effects with regard to later review of their research conduct by those entities (institutional or governmental) which are capable of engaging in oversight of their work at various levels. The professional and legal implications would be quite uncertain without a document that is more expressly legally operative (such as a new section/additional language in the ToU), but given the number of potential complications that might nevertheless arise from disregarding an express statement of this nature (with regard to their institutions, any professional associations to which they belong, the OHRP and other federal regulators, state Attorneys General, and their funders/commercial partners, to name a few interested entities), such a local policy might at least give researchers pause in the future about proceeding with human testing on this platform without first obtaining consent. Since it seems we can't always trust them to exercise their professional conduct in this regard by way of their own restraint without making such a blunt statement. Snow let's rap 07:01, 6 February 2019 (UTC)[reply]
Whatever the potential legal consequences to a random 'independent' researcher who violates child protection policies by conducting behavioral research, WMF should have been well aware of child protection issues. If I conduct research in Ireland then I am required to certify whether or not it involves minors. If it does involve minors, or even worse if I admit that I would be unable to tell, then there is not a chance in hell the proposal would get through without informed consent being attached. Zero. A 'minimal risk to participants' rationale might work for adults--maybe. The WMF (and/or the researcher) would be vulnerable in multiple jurisdictions too--what passes for child protection in Pennsylvania might not work in the Netherlands, etc. I'm gobsmacked that WMF 'signed off' on this. Maybe child protection/informed consent was part of the original plan? AugusteBlanqui (talk) 10:47, 6 February 2019 (UTC)[reply]
"If I conduct research in Ireland then I am required to certify whether or not it involves minors. If it does involve minors, or even worse if I admit that I would be unable to tell, then there is not a chance in hell the proposal would get through without informed consent being attached. Zero."
It's meant to work the same way in regard to U.S. research: here are the relevant federal regulations on human testing as regards special protections for research involving minors. Note that both the assent of the child and the permission of a parent or guardian are required, and assent is defined expressly as follows: "Assent means a child's affirmative agreement to participate in research. Mere failure to object should not, absent affirmative agreement, be construed as assent." The IRB is also given the direct responsibility for ascertaining that those requirements are met. Which rather raises the question of whether the researchers here made clear in their application to the IRB that close to 25% of Wikipedians are below the required age of consent for a study of this nature (the regulations also indicate that the requisite age of consent is the age of consent in the jurisdiction where the research takes place). If not, it raises two unavoidable possibilities, neither one of them great: 1) they just didn't think about the ethical complications here long enough for this to occur to them, or 2) this obvious reality was known to them and they didn't disclose it. If they did in fact disclose this fact in their HS-IRB application, and the Board still let the research move forward, we're potentially talking about an even bigger problem of botched ethical controls at CMU. It's worth repeating here also that Pennsylvania law requires that any researcher having contact with children obtain a number of different certifications, and that CMU's own internal policies on this make clear that the requirement, insofar as the university is concerned, is meant to apply to those who will undertake such actions online. So I'd be very interested in knowing whether these certifications were sought and granted for any party to this research who was going to (or did?) have interactions with our editors, and whether this information was presented in the IRB application or in any review meetings (also required under federal law).
And while these issues regarding minors are certainly very salient and troubling concerns with regard to the ethical controls here, I don't want to get so lost in the weeds of just one of the more eye-catching issues that it seems as if this research was ethically compliant with regard to all the other participants. Because I think it very much was not. Putting aside the question of informed consent for adults for a moment--I do not believe most researchers, institutions, or professional associations would view these as acceptable circumstances to proceed without consent, "low risk" or not, but we can table that for a moment--there are numerous other concerns, the most obvious of which is privacy. Experiments which test a subject's response to stimuli (especially those conducted without their consent or knowledge) are meant to be conducted with the utmost confidentiality; there are requirements under law, under the policies of particular research establishments, and under the conduct codes of professional associations. Here, there was absolutely zero possibility of keeping the stimulus and response of the individual subjects confidential, since they were going to be taking place on what is arguably the world's single most open platform in existence, where every detail of those interactions would be freely viewable to anyone with an internet connection. It is mind-boggling to me that none of this sent up red flags for any of the players involved (the researchers, their university oversight, their financial partners) before this got to Meta and Wikipedia itself. Snow let's rap 21:30, 6 February 2019 (UTC)[reply]
The process should not be limited to an ethics review; it would also need to include community-level consent. Using an analogy raised by another commenter, imagine if this experiment was proposed to the management of a small employee-owned grocery store and it was discovered that the research was funded by a big-box chain. Even if the procedure was demonstrated to be completely harmless to the participants, it would obviously not be in the best interests of the community and would certainly be rejected.
In this case, a number of editors expressed that they felt uncomfortable with any research associated with Facebook. The researchers did not seem prepared to address these concerns and seemed to think that they only had to convince us that the risk to individual participants was low. We should develop a research approval procedure that involves both an ethics review and community approval, with the understanding that community approval may be withheld for any reason just as individual consent may be withheld for any reason. –dlthewave 14:56, 6 February 2019 (UTC)[reply]
Well, that's just a separate issue from what others have been focused on here--which is not to say it is a non-issue. But I will say that the researchers here did disclose their intentions with a Meta research page--a whole month before they intended to start their research...--though, from what I have seen they did not reach out to the local community until after concerns were raised at Meta (which was, I note with concern, well after they had planned to be already underway in their research). In any event, I share your concern at the cavalier attitude displayed with regard to Facebook being a part of this research, regardless of whether it was through a grant arrangement. If it were quite literally any other company in the world, these concerns would be lessened, but given the company's recent history on privacy issues regarding third party activities that have touched upon so many concerning behaviours, I don't think any concerns in this area can ever be described as mere hand-wringing. One way to address such concerns in the future is to make funding disclosures a requisite part of any research proposal presented at Meta, along with a requirement that such proposals be advertised in major community news spaces for each local project on which research will be conducted. Indeed, when you look at those proposals, they are laughably skimpy on the details regarding parties and oversight--and indeed, give only a partial accounting of the proposed research methodology itself. Snow let's rap 21:30, 6 February 2019 (UTC)[reply]
  • Perhaps some people should read again the contributions of @Robertekraut:, @Diyiy: at meta:Research:How role-specific rewards influence Wikipedia editors’ contribution and Wikipedia:Village pump: Research project: The effects of specific barnstars... or, perhaps, simply read them at least once. Both authors asked for permission and asked for feedback. Their intent was to select a sample of people worthy of a barnstar and then either send them the said barnstar or not. Imo, the best method would have been to send a message like Being clear about the sender would have avoided many side-discussions, while being endorsed by MLA@CMU would have been perceived as far more praiseworthy than being endorsed by a random guy on a social network, therefore amplifying the effect (if any). One can also say that not perceiving beforehand that children don't like you playing with their pokemons... was another error. In any case, nothing was done, due to the ignited feedback. Pldx1 (talk) 13:38, 6 February 2019 (UTC)[reply]
"In any case, nothing was done, due to the ignited feed-back."
I hope that's true. It would be nice to have confirmation on that either way, given that the original proposed timeline had the researchers beginning testing on Dec. 1st, and as best I have seen, the first concerns about the research were not raised until a time after that. Snow let's rap 21:35, 6 February 2019 (UTC)[reply]
I hope that's true. It would be nice to have confirmation. Dear User:Snow Rise. If they say they haven't, then they haven't... except if they are a bunch of liars. What is your educated guess? In any case, it would be interesting to have a study of all the barnstars that were granted in a window of, say, 12 months, centered on Dec. 2018, in order to detect changes of behavior, if any. Or to have a more general study, to detect if there are monthly tendencies, or general trends, or what else. How many more theses!!! Pldx1 (talk) 12:10, 9 February 2019 (UTC)[reply]
I have to be honest: I can't really tell if you're being facetious or not. For my part, I wouldn't have had any reservation about believing them if they said they had not yet proceeded, despite the projected timeline. But they have never actually said as much, as far as I can see, in any of the related discussions, and that fact is what inspired my response to you above. However, I also trust that J-Mo would not be asserting below that they had not yet proceeded into human subject testing in such a factual and assuring manner unless he was privy to additional knowledge beyond what is present in the previous threads (which, again, do not provide clarity from either researcher as to this point).
Of course, I'm assuming a lot on good faith there--for example: 1) that Diyi and Robertkraut have not responded to pings here because they are feeling a little bruised by the whole affair and taking a wikibreak, and that they are not avoiding answering questions which they know we would not like the answers to and which they now realize could potentially have real professional consequences, 2) that J-Mo did not give assurances on a mere presumption of his, rather than predicating said assurances on additional inside knowledge that he was privy to that allows him to be certain they did not proceed with testing--but I still have enough AGF in me for each of these individuals to allow for that. I may not be blown away by every aspect of the ethical conduct of these researchers or the tone of the response of certain WMF staff members in responding to community concerns here, but my "educated guess" (as you put it) is that they would not complicate the situation further by being misleading (even through omission or assumption). Snow let's rap 15:36, 9 February 2019 (UTC)[reply]


I don't understand the purpose of this op-ed. But I was involved in the discussion around this particular research proposal, I have experience performing and evaluating this kind of research, and I know the alleged perpetrators (or perhaps victims is a better term) well. So here are some facts, however unwelcome they may be to some (not all) of the people involved in this discussion, and in the related discussions on the VP and on Meta.

  1. First, to address the preceding comment: the study was not performed, and will not be performed. Robertkraut and Diyiy engaged actively and in good faith with members of English Wikipedia who expressed a range of concerns about the study, and as a result of that discussion decided not to perform the study, a decision which they conveyed promptly and through the appropriate channels. They are both competent, professional and ethical researchers and have done nothing to my knowledge that would suggest otherwise. Dr. Kraut is a founding partner for the Wikipedia Education Program, and a veteran researcher of Wikipedia and other online communities.
  2. In the case of the 2016 paper, "Supported in part by... a grant from Google" probably means that one or more of the graduate students involved had a research fellowship from Google at the time. Graduate students in technical fields are routinely funded by research grants from public and private institutions. In general, code, data, and research reports generated by researchers funded under Google (or Facebook, or NSF) grants are publicly available, and the choice of what research to perform is not dictated by the grantmaking entity. So a researcher or team might get $100k to fund 3 graduate students (tuition+stipend) for a year, based on a proposal that says something like "We will investigate the motivations of people who contribute to online communities", but Facebook/Google generally doesn't get to tell them what communities to investigate, or what research methods to use, and doesn't get special private access to their findings.
  3. It is not clear to me why Aaron Halfaker's name (and picture) are being called out here. To me, this has the appearance of an attempt to suggest that he is involved in some sort of unethical or otherwise nefarious activities to undermine Wikipedia, and is using his position within WMF to further those activities. If this is indeed what is being suggested, it is both incorrect and, frankly, kind of gross. Aaron Halfaker probably cares more about the wellbeing of English Wikipedia than any researcher you can name, and has done as much or more to further the goals of the project and benefit its participants—in both a volunteer and WMF staff capacity.
  4. IRBs assess potential for harm, according to evidence-based risk assessment criteria. Without knowing the details of CMU's IRB's response to Dr. Kraut's research proposal, I cannot comment directly on the issue of "how could the IRB let this happen?" But I can say that, to my knowledge, sending people templated expressions of gratitude is unlikely to cause them harm. Potential for online community disruption is out of the scope of IRBs, which is why we have our own documented processes for assessing potential for community disruption when we vet research proposals.
  5. Those processes are effective (as they were in this case) only insofar as the researchers are willing to comply with them (as they did in this case). However, when researchers are subjected to personal attacks and bad-faith accusations (as they were, by some editors, in this case) and drummed off Wikipedia, it undermines the authority of the process we ourselves created. A different, less professional and ethical set of researchers who want to perform a study on Wikipedia may be less inclined to tell us about it after seeing how these researchers were treated. We as a community have very few effective protections against this kind of behavior. And we already have our hands full with actual vandalism, COI editors, and (at least potentially) coordinated attempts to sow disinformation in order to further the aims of state and non-state actors.
  6. The real clear and present danger that bad researchers present to Wikipedia is what they do with editors' non-public data. If someone is running a survey, or conducting interviews or otherwise collecting personal information about contributors, they need to have a clear statement of why that data is necessary to collect, how it will be used, how it will be securely stored, anonymized, etc., who has access, and how long it will be kept. Good faith, professional researchers will have clear answers to these questions. Ideally, their data practices should be verifiable by an external authority (IRBs are good at this). Bad faith researchers, or even simply naive researchers who didn't spend years in grad school, may not. Treat bad answers as red flags.
  7. Research benefits Wikipedia directly. We know what we know about the gender gap and the editor decline because of research. Intervention-style research can be an invaluable tool for figuring out how to address pressing issues like the gender gap, knowledge gaps, the editor decline, toxic cultures, vandalism, disinformation, editor burnout, and systemic bias. Some kinds of interventions don't work as well if everyone knows they're being 'intervened' with. Not having that knowledge can have a range of effects, from null/negligible to pronounced. And sometimes we don't like being 'intervened' with even if we believe the intervention isn't likely to be harmful. The potential benefits and risks of any particular intervention should be weighed based on a reasoned assessment of the nature, and scale, of both intended and unintended consequences. We have processes for that, but those processes depend entirely on good faith collaboration between researchers and community members.
  8. Research furthers Wikipedia's mission. In order to make the "sum of all human knowledge" available to everyone we need to understand how Wikipedia came to be, how it works, and even how it doesn't work. Researchers shouldn't expect that they can use "but we make science!" as a blanket excuse to do whatever they want, but the potential mission-aligned benefits of understanding this or that social or psychological feature of Wikipedia (and the people who write it) are valid points for consideration when weighing risk vs. reward.

Finally, an appeal: assume good faith of researchers who approach the Wikipedia community openly and honestly. Recognize that your own preconceived notions about what a researcher wants, or what affiliation and funding sources they have, may be incorrect or incomplete. Ask questions, but try to ask them like you'd interview a job candidate rather than like you'd interrogate a criminal suspect. You can't stop truly nefarious researchers, at least not in a systematic way. You can teach good faith researchers how to respect community norms. And you can tell them "no" and trust they will comply with community decisions. But treat them all as de-facto enemies, and you lose all the potential benefits of research while reaping none of the rewards and doing nothing to curb risks of individual harm or community disruption. J-Mo 23:51, 6 February 2019 (UTC)[reply]

Exceptionally long post: enter at risk of eye strain. Snow let's rap 04:15, 7 February 2019 (UTC)[reply]
J-Mo, I'll attempt to respond to your points in the order you have raised the issues, to the extent it is feasible while summarizing what I believe are the concerns that have been raised here.
1. First off, thank you for confirming the research did not proceed as planned before community concerns began to surface; based on the timeline presented in the Meta proposal and the date at which push-back began to develop from the community, that was not at all clear. I was hoping Robertkraut or Diyi would speak to that question, but given the strong sense of certainty you provide in your assurances, I assume you are privy to additional information (that was not made public in the Meta or VP discussion) as to where in their process they stopped. Therefore, if you are saying they ceased pursuing this project before any engagement in human subject testing, I'm sure we can of course take you at your word about that and put those concerns to bed. I would note, however, that this episode underscores a need for the local communities who are to be the subject of research to be directly informed of such proposals so that objections can be raised much sooner--rather than just before (or during) the actual research itself. If the goal is to solicit the community's feedback on the proposal, a page on Meta, absent promotion on the target project itself, is never going to be very effective in addressing concerns before they become urgent. That's the first thing that needs to change in our procedures.
That issue addressed, I must tell you that I nevertheless do not at this moment in time have as rosy an outlook as you do with regard to the professionalism and ethics displayed in how this research was approached. I'm going to hazard a guess here, based upon your previous comments in the VP discussion and your current assessment here, that you have only ever been a 'researcher' in the commercial/private meaning of that term. Because, had you ever undertaken research in the behavioural sciences in an academic setting, I believe you would better recognize why there are some serious questions here with regard to how these researchers approached issues such as informed consent and privacy protections for human subjects. IRB approval and hand-waving regarding "low risk" or not, I must tell you that approaching subjects in this fashion would not generally be seen as acceptable by most researchers in the social, behavioural, and psychological sciences--nor by most institutions and professional associations that provide oversight for such research. Indeed, going even farther, I believe this research, had it proceeded, could have run afoul of federal regulations (and potentially state statutes) governing the testing of human subjects--particularly with regard to privacy protections and (even more so) the use of underage subjects, for whom the assent of the subject and the permission of their parent or guardian are always required (outside a handful of exceptions which do not apply here) and cannot be assumed. It is for exactly this reason that I wonder if the IRB was given all salient information here when making their determination, because I have a hard time seeing how they would approve this research if they knew that nearly a fourth of Wikipedia's editors are below the requisite age of independent consent that is relevant to this particular research.
You have spoken repeatedly in the previous discussions and here about other "potentially less ethical" researchers invading the project if we do not present a welcoming front to those willing to submit proposals. But it is worth noting that in every example you have provided thus far, the research in question at least made the individual being approached aware of the fact that they were talking to a researcher, and sought their willing engagement with the process. While I agree that the examples you provide nevertheless present issues that we as a community (and as individuals) should be concerned about, such voluntary procedures--those which use surveys and passive studies of previous (non-induced) data--are considered by oversight entities (both governmental and institutional) to be fundamentally different from the process of exposing a subject to a test stimulus and then observing their reaction. These types of experiments are generally classed separately and, even in the rare case where an exception for consent might be permissible, that exception is not made for minors, and there must be controls for the protection of the privacy of all subjects--something that would have been infeasible on a platform such as this. My main point under this first section of response is that the ethical questions that are at the very least raised here are not by any means trivial ones, and they aren't the type you should be eager to dismiss simply by repeatedly re-asserting that you personally think they are well balanced to achieve benefits with "low risk".
2. This is not really where my main concerns lie, and obviously I cannot speak on behalf of those who have raised these concerns. But I will say that your assertion that Facebook does not typically get privileged access to data in its grant agreements is by no means a universal principle--to be fair to you, you did throw in the "generally" there, but I think the general thrust of your statements in this area attempts to provide a degree of assurance that is undue given Facebook's historical (and indeed recent) practices--especially insofar as I presume that you have no particular knowledge as to what degree of data sharing was agreed to with regard to this particular grant. In fact, this is a big problem for us in general, and I don't see any reason why we should not require disclosures of both financial backing and data-sharing arrangements made by any researcher wishing to advance a proposal here; there's no reason they shouldn't be required to show the same level of transparency towards us as they do towards the review boards at their respective institutions. As a project, Wikipedia has as much skin in the game (including potential liabilities) as any party, and if researchers wish to avail themselves of this platform for their research, they can be up front with us about anything that might look like a conflict of interest or a source of potential exposure for the privacy and personal data of our community members.
3. I'm not sure as to that myself. I presume (absent any information to the contrary) that the previous research used sourced data rather than direct human subject testing, and so it is not super relevant, other than Kudpung's stated purpose in showing a previous close working relationship between Mr. Halfaker and the researchers here. However, I suspect part of the reason this was raised was because it seemed as if the WMF's researchers were circling the wagons to insulate the study's researchers (and the proposal itself) from criticism. As someone who did not participate in that thread and is now on the outside looking in, I must tell you that it's very difficult to tell how much you and EpochFail were commenting as community members there and to what degree you were speaking in your WMF capacities, which I'm sure you will agree is potentially problematic. Further, there are places there where I would describe your comments as needlessly antagonistic towards expressed community concerns. I understand that this was obviously motivated by a desire to protect a pair of individuals whom you respect and whom you felt had acted in good faith. But the most ideal way of doing this is not to accuse others of bad faith, as you did during that discussion and elsewhere. I see no one in the entirety of that thread who seemed to be acting out of anything but concern for the project and its users, or in any other way which would entail "bad faith" as that term is usually used on this project (vandalism, trolling, gamesmanship, sockpuppetry, etc.). Even where they were focusing on issues that you and I may agree were not the most salient issues to contemplate (devaluation of the barnstar and so forth), I wouldn't say that they were completely irrelevant concerns for the community--and, in any event, the community members were obviously being sincere. I think that "bad faith" wording and some other comments represent poorly chosen language on your part that may have served to inflame perceptions of bias on the part of the WMF researchers in this situation.
4. You're right, we don't have access to the IRB's thinking, and in my view, that's another problem. Before we ever consider human testing research on this project again, we may very well consider requiring that exact information; IRBs are required by law to keep minutes of their research review meetings in a very prescribed format, and we could consider requiring that a copy of these documents be presented with any human subject proposals in the future. After all, we have our own ethical obligations to our community members, and I see no reason why we should have less access to the researchers' accounting of the ethical questions raised by their study and how they intend to control for them, for the purposes of making our own decision on whether to allow it to proceed.
5. & 6. I'm not sure just how effective our procedures are here; from where I am standing, they could do with some strengthening, and this situation demonstrates precisely why. I also feel like you are presenting us with a false choice here between relaxing protections to make "good actors" feel welcome and actively driving them underground. First off, if they are truly good actors as that term should be applied to behavioural researchers, they wouldn't be inclined to subvert our rules and normal ethical considerations based on how warm a welcome they receive. If a given researcher can't be trusted to comport themselves with our community rules and the mandates of their own profession with regard to ethical research, simply because they are concerned about being grilled here, and they would consider just ignoring our processes instead... then they certainly can't be trusted with the much more demanding responsibilities of protecting user confidentiality and seeking proper informed consent--and they therefore aren't the type of person we should be tailoring our approach towards in any event.
Also, I disagree that there's nothing to be done about the "bad actors". Where they are simply hoovering up information, of course we can't stop that, but there's also no reason to stop them, insofar as everyone who participates on this project agrees to allow their contributions and statements to be freely accessible and usable for almost all purposes. But where an outside researcher is trying to trigger a response, that kind of activity is going to leave a record, and people are going to notice suspicious patterns. If "nefarious" researchers attempt this without having their projects approved by the community and seeking informed consent, we can shut them down just like we would any other WP:disruptive user. And supposing that they did get past our guard and engage in shady behaviour, if they are academics, then as soon as they publish or present findings, they can be reported at many different levels of oversight, with potential professional complications for them depending on the exact nature of their conduct. Commercial researchers, of course, are a little less amenable to such controls unless they do something blatantly illegal or which would bring them negative press. But commercial researchers (to the extent they come here, which I think is uncertain) are probably not likely to come through the approval process in any event, and there's no point in trying to adjust it to their whims.
7. & 8. Good faith is a two way street. I can't disagree with you that research is of vital importance to us, but it has to be approached in a non-disruptive and ethical fashion or else it will be a net negative to the project. And no researchers should ever be allowed to waltz through the front door to conduct whatever tests they want on our contributors, based solely on their own idiosyncratic analysis (not even when informed by their own IRB process) as to whether the risks and consequences outweigh the benefits. The community should always conduct its own analysis of that question, and should be afforded a high degree of transparency with regard to the researchers' intentions, methodologies, previous institutional reviews, the uses to which their data will be put (including especially with whom and in what way confidential information will be shared), and any potential conflicts of interest. And regardless of the answers to those questions, where their work involves treating our community members as test subjects, we should always, without a single exception, require that they get informed consent from anybody they wish to utilize in that fashion. For anyone who is unwilling to meet those requirements, WP:NOTLAB applies and accounts operating outside of our policies should be shut down, same as with any other disruptive user.
As to your final paragraph I agree with you thoroughly. I would only add that I don't think anyone has treated the researchers here as the "de-facto enemy". The concerns raised have been reasonable and in keeping with the objective of learning from this episode and designing more robust procedures that will benefit both researchers and the community. Nobody's objective, insofar as I have seen, is to "shame" anyone. But there are serious questions raised here as regards respecting the privacy and autonomy rights of volunteers to this project. They are necessary questions to contemplate whenever we consider approving research on this platform. Snow let's rap 04:15, 7 February 2019 (UTC)[reply]
Snow Rise I was unable to keep my response brief, and it felt weird to post another wall of text in the "comments" section of this Op Ed, so I decided to post it on your talkpage instead. Cheers, J-Mo 00:56, 11 February 2019 (UTC)[reply]
  • It is abundantly clear that this "experiment" represented disruptive behavior. Forcing editors to think about restrictions on IP barnstars has effects. Some of the disruption continues even here, with people talking about some kind of "filters" -- which would be truly turning a temporary plague of computers invading human behavior into a permanent case of the same disease. While it may pay off to be more watchful, I hope people will resist giving any more power to computers. The key thing for us to take home from this is that the Golden Age of Vandalism is long behind us. There was a time when people would transclude an HTML table with colored cells to put Goatse on the Main Page for the sheer joy of it. But now, many of our vandals are drawing paychecks. They have a purpose for seemingly mindless irrational behavior, and it may take a lot of imagination to try to riddle out what that purpose could possibly be. Wnt (talk) 15:59, 23 February 2019 (UTC) P.S. I've largely ignored J-Mo's comment above because he seems to describe only partial knowledge -- if the research did take place, he might simply not have heard about it, so it is not really a meaningful denial AFAICT.[reply]

There was actually a similar experiment at the German Wikipedia that showed that barnstars increase new editor retention. I wonder what the difference was between the two such that only one was allowed to go forward. In addition, there's also m:Research:Testing capacity of expressions of gratitude to enhance experience and motivation of editors. What determines whether or not a given experiment like this will be permitted? By the way, should this talk page be broken into sections? It seems to have gotten pretty long. Care to differ or discuss with me? The Nth User 16:15, 28 April 2019 (UTC)[reply]

Recent research: Ad revenue from reused Wikipedia articles; are Wikipedia researchers asking the right questions? (0 bytes · 💬)[edit]

Wikipedia talk:Wikipedia Signpost/2019-01-31/Recent research

Technology report: When broken is easily fixed (857 bytes · 💬)[edit]

Discuss this story

  • Normally I don't look at the tech stuff, since I don't use but one script to assist my editing, but I have to admire the thoroughness of this report. -Indy beetle (talk) 18:13, 31 January 2019 (UTC)[reply]
  • Concerning green redirects, there's also User:Anomie/linkclassifier.js which adds colour to many other link types, like links to things in deletion process (a pinkish red), or disambiguation pages (beige highlights) and the like. Headbomb {t · c · p · b} 00:05, 1 February 2019 (UTC)[reply]

Traffic report: Death, royals and superheroes (2,132 bytes · 💬)[edit]

Discuss this story

Royalty[edit]

Concerning the story about the wedding of the Duke & Duchess of Sussex: Harry's father (Charles) is not the heir to the throne; he's the heir apparent to the throne. GoodDay (talk) 21:54, 31 January 2019 (UTC)[reply]

Corrected, thanks. — JFG talk 02:16, 1 February 2019 (UTC)[reply]
Isn't heir apparent a kind of heir? In other words, saying "heir" is correct but less precise? ☆ Bri (talk) 03:14, 1 February 2019 (UTC)[reply]
Though it might seem weird to most, the heir in this situation is Queen Elizabeth II. Legally, an 'heir' is the person who has the position, whereas an 'heir apparent' is a person who will eventually have the position. GoodDay (talk) 17:56, 2 February 2019 (UTC)[reply]

Another boo-boo: British royals' ability to marry Catholics took effect in 2015, not 2013. GoodDay (talk) 18:00, 2 February 2019 (UTC)[reply]

Great summary[edit]

Hey authors of this report - great job. PMG (talk) 12:39, 1 February 2019 (UTC)[reply]

Elizabeth II can't change the succession[edit]

With all due respect to the commentator at the Elizabeth II entry: The Queen can't replace Charles with William as next-in-line. That decision belongs 'solely' to the UK & 15 other Parliaments. GoodDay (talk) 22:25, 2 February 2019 (UTC)[reply]

  • This report is the placebo I needed to see today. You guys are incredibly funny. Hey, I found another image of death for the next issue. I think it probably eats through its tiny nostrils. Best Regards, Barbara