Two heavy spam attacks on the English Wikipedia last month have been traced back to a researcher at a U.S. university, in an affair that is likely to add to existing debates about the ethics of Wikipedia research.
The first attack occurred on July 14, with several autoconfirmed accounts (example) inserting the message "Want to be inches larger?" in large letters on top of many different articles, linking to an online shop. In a blog post for computer security firm Sophos ("Wikipedia hacked - Footballers need help in bed?", a reference to the 2010 FIFA World Cup, one of the affected articles), Chester Wisniewski, a senior security advisor at the company, described the vandalism, noting that the advertised site had an unusual appearance: "Unlike the usual spam for penis pills and cheap Canadian drugs that uses a couple of 'medical professionals' to promote the site, this campaign uses a photo of a satisfied couple" (he included a screenshot, too). Wisniewski's observations were quoted in news reports about the attacks that appeared on Softpedia.com and on Spamfighter.com.
Following the attacks, Versageek blocked a number of other accounts with the rationale "abusing multiple accounts for spamming - checkuser block" and posted the following on the talk page of an established user, under the heading "Misdirected Testing?":
Checkuser results suggest that one of your linkspam related software tests may inadvertently be pointing to the English Wikipedia rather than test wiki. Please check your settings & adjust accordingly.
The account belongs to A. W., a doctoral student at the University of Pennsylvania's Department of Computer and Information Science. On his university home page, he states:
Currently, I work on the Quantitative Trust Management (QTM) project under the advisement of [I.L.], [S.K.], and [O.S.]. My recent research has been on spam mitigation techniques, the prevention of vandalism on Wikipedia, and spatio-temporal reputation.
W. is known to Wikipedians as the developer of STiki, a vandalism detection tool released earlier this year which relies on a "spatio-temporal analysis" of revision metadata and machine learning techniques. It has received praise from several of its users and was the topic of W.'s presentations at several conferences (Eurosec, Wikisym, Wikimania).
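STiki's actual features and model are not documented in this article; purely as an illustration of the general approach it describes, scoring edits from revision metadata alone (without examining the diff text) might be sketched like this, with entirely hypothetical features and weights:

```python
# Hypothetical sketch of metadata-only vandalism scoring, in the spirit of
# spatio-temporal tools like STiki. Features and weights are illustrative
# assumptions, NOT STiki's actual model.

def vandalism_score(edit):
    """Score an edit from metadata alone; higher means more suspicious."""
    score = 0.0
    if edit["anonymous"]:            # unregistered (IP) edits
        score += 0.4
    if edit["editor_age_days"] < 1:  # brand-new accounts
        score += 0.3
    if 0 <= edit["local_hour"] < 5:  # unusual local editing hours
        score += 0.1
    if edit["comment"] == "":        # no edit summary supplied
        score += 0.2
    return min(score, 1.0)           # clamp to [0, 1]

suspicious = {"anonymous": True, "editor_age_days": 0,
              "local_hour": 3, "comment": ""}
print(vandalism_score(suspicious))
```

In a real tool the weights would be learned by a classifier trained on labeled reverts rather than hand-set, and high-scoring edits would be queued for human review rather than reverted automatically.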
W.'s edits during the following days do not show a reaction to Versageek's note. On July 20, another heavy spam attack occurred, inserting a message on top of many articles that read "Congratulations! Wikipedia's one-billionth user. Click to collect your prize!". (Example of one of the autoconfirmed accounts used for the attack.) Many readers of Wikipedia appear to have been troubled by the message, judging from the questions about it in web fora and on Wikipedia's help desk. Some suspected a PC virus infection ("My sister was searching on wikipedia and the following text came up in big red letters: ..." ).
I have blocked this account (amongst others) for the recent issues with regards to recent tests done on Wikipedia's articles. Please contact the Arbitration Committee via email [...] at your earliest timeframe, to discuss this. SirFozzie (talk) 16:37, 21 July 2010 (UTC)
The contributions of one of the accounts blocked by SirFozzie show a rapid succession of edits to the Sandbox with the edit summary "an exploration into rate-limiting".
The ArbCom later confirmed to the Signpost that W. had carried out both attacks.
Resolving the affair
On August 11, ArbCom member Risker posted the following statement on W.'s talk page:
The Arbitration Committee has reviewed your block and the information you have submitted privately, and is prepared to unblock you conditionally. The conditions of your unblock are as follows:
You provide a copy of the code you used for your "research" to Danese Cooper, Chief Technical Officer and to any other developer or member of her staff whom she identifies. [Note - this step has been completed]
You review any future research proposals with the following groups: the wikiresearch-L mailing list <https://lists.wikimedia.org/mailman/listinfo/wiki-research-l>; the wikimedia-tech mailing list for any research relating in whole or in part to technical matters; and your faculty advisor and/or University's research ethics committee for any research that involves responses by humans, whether directly or as an indirect effect of the experiment. Please note that your recent research measured human responses to technical processes; you should be prepared to provide evidence that those aspects have been reviewed in advance of conducting any similar research.
[... T]his project [the English Wikipedia], the Wikimedia Foundation, or an inter-project group charged with cross-site research [to] be developed [...] may establish global requirements for research which may supersede the requirements in (2) above.
Any bots you develop for use on this project, whether for research or other purposes, must be reviewed by the Bot Approvals Group (WP:BAG) in advance of use, unless otherwise approved by the WMF technical staff.
You must identify all accounts that are under your control by linking them to your main account. The accounts used in your July 2010 research will remain blocked.
W. reacted to the unblock offer ten minutes later, stating:
"I agree to these conditions, and offer a sincere apology to the community.
As ArbCom clarified to the Signpost, condition 3 refers to the possibility that the English Wikipedia might develop a community process to oversee research, and to the Research Committee that the Wikimedia Foundation intends to form (see last week's Signpost coverage).
According to an RfC announcement about the introduction of the "Researcher" user rights group last June (see Signpost coverage), W. had applied for the new right at the time, but his application was put on hold by the Foundation's Deputy Director Erik Möller, who suggested it should be handled by the community.
W. agreed to answer several questions from the Signpost about the affair:
1. What were your motives for carrying out these edits?
An economic study of spamming behaviors on Wikipedia was conducted. That is, for a link addition (or group thereof), how many (1) see the link, (2) click the link (click-through), and (3) continue to make a purchase on the destination site (conversion). The net-profit of these sales can then be compared to the cost of making the link additions, and an economic argument made about such behaviors.
The experiments allowed us to obtain data that convincingly demonstrates (1) that Wikipedia is vulnerable to major spam attacks, which can be highly profitable to the perpetrators, and (2) that current protection mechanisms are insufficient. Having shown this, it was our intention to collaborate with WP/WM/WMF on solutions to prevent truly malicious attacks of this nature.
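The funnel W. describes, from link impressions through click-throughs to conversions, weighed against the cost of placing the links, can be sketched with a simple model. The numbers below are illustrative assumptions, not figures from the study:

```python
# Hypothetical spam-economics sketch of the funnel described above:
# impressions -> click-throughs -> conversions, versus placement cost.
# All input values are made-up examples, not data from the experiment.

def expected_profit(impressions, click_through_rate, conversion_rate,
                    revenue_per_sale, cost_per_link, links_placed):
    """Expected net profit of a spam campaign under a simple funnel model."""
    clicks = impressions * click_through_rate   # users who follow the link
    sales = clicks * conversion_rate            # users who go on to purchase
    revenue = sales * revenue_per_sale
    cost = links_placed * cost_per_link         # effort/expense of insertion
    return revenue - cost

# Example: 100,000 page views, 0.2% click-through, 1% conversion,
# $30 per sale, 50 link insertions costing $0.50 each.
print(expected_profit(100_000, 0.002, 0.01, 30.0, 0.50, 50))
```

The point of such a model is that even tiny click-through and conversion rates can yield a positive net profit when link placement on high-traffic pages is cheap, which is the economic argument the experiments set out to test.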
2. Why did you choose these particular forms of vandalism for your test?
To an end-user, we desired our experiments to appear consistent with what a truly malicious entity (i.e., a spammer) might attempt. In this manner, the click through and conversion rates we measured would be unbiased. Blatant link placement on popular articles permits many users to see the link -- even under the assumption it will be reverted seconds later. Vulnerabilities in Wikipedia make it trivial for users to obtain the privileges necessary to carry out such an attack.
Internal to the experiment, protections were taken to ensure no harm to participants (e.g., Wikipedia users). Our external links took users to an online business under our control, a pharmacy. The payment functionality of this pharmacy was disabled, and therefore could only measure an “intent to purchase.” Further, the IP addresses of our visitors were not stored (our goal was to measure their quantity).
3. Was one of your advisors ([I.L.], [O.S.], or [S.K.]) aware of these actions, and if yes, did he approve them?
[S.K.] was not aware of these experiments. [I.L.] and [O.S.] were aware of my motivations in these experiments, and support them.
4. Any other comments you would like to make on the issue?
Our decision to engage in active measurement involved many considerations. Primarily, more passive strategies were believed to be inappropriate. For example, a proxy-based redirection of existing spam was considered. But, the nature of existing spam events is such that statistics would not speak to the economics of a blatant strategy that targets popular pages. Further, a large quantity of such redirection events (somewhat disruptive) would have been required to obtain meaningful statistics.
Objective data could not be obtained without these experiments and their non-consenting participants. Attempting to have participants “opt in” or “de-briefing” them after their participation presents both technical and practical difficulties. Opt-in procedures would bias user behavior. Given the “pipeline” nature of experiments, ex-post facto “de-briefing” is difficult, and may have forced us to sacrifice user anonymity. Additionally, our pharmacy collected a minimal amount of data about visitors – a level consistent with what most major websites measure.
Some users have speculated that these experiments were the result of a mis-configuration of my anti-vandalism tool, STiki. I would like to clarify that this is not the case. STiki remains a safe tool, which is still under active development, and working hard to locate acts of vandalism on Wikipedia.
Finally, we apologize to the Wikipedia community for any disruption caused, and reinforce that our intentions were for the betterment of the community.
^ Kanich et al., "Spamalytics: An Empirical Analysis of Spam Marketing Conversion"
[Note: The names of the UPenn researchers have been redacted to initials for this article.]
The Signpost is written by editors like you — join in!