User:Reagle/QICs: Difference between revisions

From Wikipedia, the free encyclopedia

Design claim 3 caught my attention: "Recruiting new members from the social networks of current members increases the number of new members more than impersonal methods" (Kraut and Resnick, p. 186). In order for communities to continue to thrive, they must replace members on a somewhat frequent basis. Having face-to-face conversations is uncomfortable for many, and thanks to technological advances, we now have the ability to conceal this discomfort behind a "wall". Sharing information online is much easier and allows information to receive more attention and reach a larger audience. On sites such as LinkedIn, users can connect with individuals they may never have met; they can do so easily through virtual connections. In the text, the authors discuss sharing information from sites such as the ''New York Times'' and Costco Photo Center. By using sites such as these, members can easily share this information, thus bringing in a new array of members over a period of time. [[User:Nwells1229|Nwells1229]] ([[User talk:Nwells1229|talk]]) 22:31, 5 March 2015 (UTC)

As I started off the reading, Design Claim 2 (“Word-of-Mouth recruiting is substantially more powerful than impersonal advertising”) made me think about websites that people look to for opinions about companies, stores, and restaurants. People would rather hear about a company from a friend or trusted reviewer than from an advertisement. [http://www.angieslist.com/ Angie’s List], a paid subscription website dedicated to having local businesses reviewed by previous users, is unique in that people can go on this website to find service providers for things like handyman work, housecleaning, and pest control. The website prides itself on getting reviews about local services and makes members feel like their friends are giving them the advice. Angie’s List also relates to Design Claim 11 (providing potential new members with an accurate and complete picture of what the members experience will be once they join increases the fit of those who join) through its [http://www.angieslist.com/how-it-works.htm “How it Works” link]. Here it shows potential members why this website is better than free review sites and what members get out of subscribing. They also note that their data is certified, and that they “guard against providers and companies that try to report on themselves or competitors”, something sites like Yelp have been known to have issues with. Angie’s List is a great website when it comes to bringing on new members and showing them what they will get and how reliable it is.


== Mar 10 Tue - NO CLASS ==

Revision as of 22:54, 5 March 2015

Questions, Insights, Connections

Leave your question, insight, and/or connection for each class here. I don't expect this to be more than 3 or 4 sentences. Make sure it's unique to you if you can. For example:

  • Here is my unique question (or insight or connection).... And it is signed. -Reagle (talk) 19:54, 6 January 2015 (UTC)

Jan 13 Tue - Intro and Wikipedia

Jan 16 Fri - Persuasion

Kraut and Resnick mention that "Specific, challenging, and immediate goals stimulate higher achievement than do easy goals". Are the principles of reciprocation, consistency, social validation, liking, authority and scarcity more likely to influence an individual when their main motivation is intrinsic or extrinsic? It seems as though the idea that people are more likely to participate when there's evidence of others' participation can be explained by more than just social validation, but also scarcity. We know that scarcity can affect the value of both commodities and information, but what about social constructs? For example, if there is only so much that can be contributed to a project and you see your peers' contributions becoming more substantial than your own, you may want to contribute more to the project. The Johnny Cash Project is a prime example of how taking people's intrinsic motivations and providing them with easy-to-use tools for finding and tracking work that needs to be done can create collaborative and effective efforts in a community. Trevor O'Brien Jones--TWOBJ (talk) 22:22, 15 January 2015 (UTC)

On page 33, Kraut and Resnick's 11th claim is that "People are more likely to comply with requests if they come from others who are familiar to them, similar to them, are attractive, are of high status..." (33, Kraut and Resnick). Why is it that people are more likely to follow requests from people they are familiar with, like celebrities, than from anyone else? I believe that people feel an attachment to celebrities they admire or look up to and begin to imitate their actions. For example, if a celebrity advertises a moisturizing lotion, it will make more women want to buy/try the product than if a random person advertises lotion (Aveeno ad, Vaseline ad). Latifaak (talk) 23:32, 15 January 2015 (UTC) Latifa Alkhalifa

"Design Claim 12: People are more likely to comply with a request when they see that other people have also complied" (Kraut, p. 35). Does the number of people who complied affect how likely someone else is to comply as well? I feel as though it can help to a point if many people have also complied, but if too many have... motivation decreases again because your contribution might feel insignificant. For instance, I came to this page to see if anyone else had done a QIC for tomorrow's class - yes, two people have. I thought "okay I will too," but if 15 people had already posted, I might be less inclined to post, considering that is the majority of the class and therefore my QIC could easily get overlooked. -Enarowski (talk) 01:42, 16 January 2015 (UTC)

"In many discussion communities, it is the conversations that participants exchange with each other that provide benefits to others in the community" (Kraut & Resnick, 21). In this instance, as described by Kraut and Resnick as well as my peer in the previous QIC, it is clear that the contributions of other members within one's community do indeed benefit the community as a whole. Feeding off of one another, members within a community are encouraged to participate and contribute as they see their counterparts doing the same—therefore increasing one another's likelihood of making "public commitments" (Cialdini, 78). In the same vein as Elissa, I had first checked the QICs to find that no one had yet posted, leaving me feeling a bit reluctant to contribute just yet. After returning to the Wiki page later on, discovering the few new posts, and relating my own thoughts to theirs, I felt much more willing and able to share my thoughts in this post. -Kristinam 0330 (talk) 18:22, 16 January 2015 (UTC)

Jan 20 Tue - A/B testing

A/B testing has changed the way websites function and the way companies operate. In the past, the final decision on website layout and content had to be made by one person. There would be long meetings, and finally the "HiPPO" (highest paid person's opinion) would determine which version would go live. Now, there are endless possibilities for webpages. Almost every major webpage is running an A/B test. You and another person are most likely seeing different home pages, shopping carts, and landing pages. There's been a shift: instead of one person's opinion guiding a company, decisions are made by numbers. To make decisions, companies test and measure everything they do online. What's interesting is, there aren't many lessons or rules to follow. Companies don't take the time to figure out why users act one way or another. There's no need to worry about why people like the ottoman better if it appears on the left of a webpage or why they click on the throw rug more when it appears on the right. The answer to any question about A/B testing is "because it works." Listening to numbers without understanding why things work can be hard to swallow. In Brian Christian's article "The A/B Test: Inside the Technology That’s Changing the Rules of Business", he recognizes this fact and says "even if we accept that testing is useful in learning how to run a business, it’s hard to take the next step and accept that we won’t learn how to run our businesses at all." He also mentions that with the way A/B testing is moving, soon it will all be automated and the changes will be made automatically. You won't even know which layout or headline worked better because it will change without approval. In my own experience, this aspect of A/B testing has always been hard to grasp. I run tests, but I'm not really learning a lesson or gaining understanding.
I've run many A/B tests on my company blog using Hubspot's software to test which Calls-to-Action are clicked more, which landing pages are filled out, and various other factors. Sometimes I would get caught up with trying to figure out why certain things worked better than others, but in the end I just had to settle with "because it works." BrieShell47 (talk) 19:59, 18 January 2015 (UTC)
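For readers curious what "letting the numbers decide" looks like in practice, here is a minimal sketch of how the winner of an A/B test is often evaluated: a two-proportion z-test on click counts. The `ab_test` helper and the click/view counts below are purely illustrative, not taken from HubSpot or any particular testing tool:

```python
from statistics import NormalDist

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided, two-proportion z-test comparing click-through rates."""
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    # Pool the rates under the null hypothesis that A and B perform identically.
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    std_err = (pooled * (1 - pooled) * (1 / views_a + 1 / views_b)) ** 0.5
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: variant B's headline is clicked more often.
z, p = ab_test(clicks_a=120, views_a=2000, clicks_b=165, views_b=2000)
if p < 0.05:
    print(f"B wins (z={z:.2f}, p={p:.3f}): ship it, no 'why' required")
else:
    print("No significant difference: keep testing")
```

With these made-up counts the difference is statistically significant, which is exactly the point above: the test says which variant to ship without ever explaining why users preferred it.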

A/B testing gives companies the ability to truly see what their viewers are interested in. By looking at the data they receive based on their viewers, they can understand how something as minor as wording or color can affect what a viewer looks at on a website. However, each of their viewers is unique, so one may be more attracted to something on a page than another viewer and become more invested in a site (i.e., purchasing more, subscribing, etc.). Are websites able to change specific people's views based on what they usually do? For example, one viewer never chooses anything from the “recommendations for you” section of a page, but then purchases something similar when they navigate to another page. Would a website take that section off, but highlight those same items on pages the viewer does look at? From the reading, it appears that each person gets a different version of a normal website, but do they personalize it for each viewer once they see a trend based on that viewer? If they are able to make each viewer's experience unique to what they want, they may potentially gain more viewers than by just testing different versions. A/B testing is very helpful for many websites, but personalizing it to each viewer's individual needs may change things even more. --Sydneys92 (talk) 00:57, 19 January 2015 (UTC)

A/B testing has become a key factor in web development success. In his article, The A/B Test: Inside the Technology That's Changing the Rules of Business, Brian Christian explains how A/B testing can boost success with specific examples such as the 2008 Obama campaign. The goal for this campaign was to turn average users into subscribers of the site. Though A/B testing is not something that is publicized, it is the standard method for optimizing online sites and products. The way A/B testing works is by customizing sites to meet the needs of individual consumers. Based on the user feedback, the site is either changed or remains the same. This allows companies to create the best possible site without relying on the consumer for written feedback. The success of a site can be a result of simple changes such as color, key terms and font. However, without A/B testing, it is difficult to determine which combination consumers prefer. A/B testing is a relatively new method that is still evolving. Soon, site owners won't need to make physical changes; rather, the site will automatically make changes based on the consumer feedback. A/B testing makes the web a much safer place. There is no need to understand why consumers prefer certain layouts, and with this method, the work is done on its own to create the optimal sites for consumers. Nwells1229 (talk) 20:38, 19 January 2015 (UTC)

The banner data page was super interesting. I remember seeing those little messages both on Google and on Wikipedia (admittedly I never clicked on any of them) but I never realized just how much work goes into them. It is wild that a single word, such as "only," can make such a difference in the number of clicks a banner can get. It also makes me wonder what exactly a banner would have to say to get me to actually click on it when it is asking for money... -Enarowski (talk) 01:27, 20 January 2015 (UTC)

  • I think Elissa's point about how a single word can make such a difference relates to the science of persuasion we discussed in class on Friday. A/B testing allowed the Obama campaign to see what words and banners worked better for the campaign. By combining the data from A/B testing with other knowledge about persuasion, the campaign can draw conclusions and drive donations. I believe that this type of A/B testing could be useful for charities. They could test language in their online spaces and use the more productive language for their physical campaigns (e.g. letters, flyers, posters). This technique would be difficult for charities that cannot afford it, and what works online may not work on a page. The idea of using A/B testing in physical situations (e.g. floor plans for a store) blows my mind, but it's also crazy that such trivial things as wording, colors, and pictures can really dictate consumers' experiences. I just wonder which Google I am using. Nucomm23 (talk) 04:06, 20 January 2015 (UTC)
    • Nice response to another student and connection to previous class! -Reagle (talk) 17:28, 20 January 2015 (UTC)

I think the thing that stood out to me the most from the A/B reading was the slight change in the language on Obama's campaign site, which resulted in so many more people signing up! It's kind of crazy to think that the slight change in wording influenced so many people, especially when it comes to a political campaign. I think ideally, we like to think that we're all using logic and making well thought out decisions when we choose whether to support a political candidate or not. To find out that a simple manipulation of language can affect our decisions more than logic is really eye-opening. It makes me question all the language I see on websites, on advertisements, etc. I remember seeing the Wikipedia banners telling me that if every user donated $5 they'd reach their goal... and I thought "that's really cool. I can't afford to." And I didn't donate. But it seems a lot of other people were convinced to donate based on that banner... is there a different wording that would have convinced me to give $5? My gut says no, but I'd love if someone could prove me wrong. -SamDiamond88 (talk) 02:21, 20 January 2015 (UTC)

I found Wikipedia's Fundraising 2010/Banner Testing page very interesting. Going off of what Elissa posted earlier, it's kind of hard to believe that one word or the phrasing of a sentence can change the traffic to a site, the sign-ups to Barack Obama's campaign mailing list, or the amount of people that donate $5 to Wikipedia - you would think that, as humans, we get the message in whatever way it's said and either agree/choose to participate or don't. When reading through the different banners that Wikipedia tested to see which one worked best for donations, it makes sense that the banner "If everyone reading this donated $5" worked the best. I feel like it's human nature to want to be a part of something and be involved in this group that helps Wikipedia stay afloat. By advertising the somewhat small task of everyone donating $5, it makes readers think it's easy, and they might get the feel-good feeling afterwards. Shannclark (talk) 03:30, 20 January 2015 (UTC)

I have conflicting opinions about the A/B testing that is frequently used in businesses today. I do see how, from a business standpoint, using this method can lead to an economical and timely outcome when choosing the layout or design of a website. It is also very refreshing to think that not all of our activity online is being chosen by some business executive who predicts our (the consumer's) behavior. This technique does allow consumers to choose for themselves how these advertisements and websites are presented. However, on the flip side, it is easy to see how this method could be extremely unethical. Consumers are counted as a majority group, in that companies often use A/B testing to find an interface that responds well with the "majority" of people rather than considering society as a group of unique individuals. It would be accurate to equate A/B testing to subliminal messaging, since the consumer is unknowingly being tested and used for the economic gain of a certain business. Shouldn't these businesses, and especially political campaigns, be trying to entice consumers with their ideals and product, rather than aesthetic techniques? Where is the line drawn when it comes to taking advantage of consumers? This article has definitely encouraged me to take a closer look at what I am buying or supporting online! Nduryea (talk) 03:34, 20 January 2015 (UTC)

I also had the same thoughts as Nadia regarding the ethical concerns behind A/B testing when I was reading the article. Larger businesses have a significant advantage in their ability to conduct this type of testing and as a result have a distinct market advantage over those that can't compete. A counter-balance to this would ideally be more companies like Optimizely, which could provide affordable and effective testing to smaller-sized companies. I also wasn't sure how I felt about the idea of users being tested unknowingly - though the testing is harmless in nature, I've always had an uneasiness about the tracking of data without users knowing. Going past the ethical concerns, the article also got me wondering about how many A/B tests I have been exposed to in my own Internet experiences. I can think of a few occasions in which Facebook has applied changes to my page or account, yet friends of mine hadn't experienced the same changes. Even if this wasn't the case with Facebook, I would assume that it happens a lot more than I would think. I look forward to talking more about this testing tomorrow and listening to what others think about it. - Matt rodgers2 (talk) 04:38, 20 January 2015 (UTC)

Personally, I was not well informed about the A/B testing happening around web pages, and I feel it is an excellent structure. Technically the users are the ones making the decisions, and I like that. Individuals are being tested on their likes and dislikes, and then those results are shared to create a more appealing web page for users. The only thing I have been thinking is: is it ethical to play with an individual's mind, by changing their Google homepage to test the A/B platform, without their consent or approval first? I believe we individuals who use the web constantly should be aware that A/B experimentation is happening and that we might have been targets of the experimentation. I really do think it is a great idea to measure how much success a page would have, but I believe this process should be more transparent to the public, and the public should be more aware that this can happen at any time and won't cause any trouble. Iferrrerb (talk) 14:52, 20 January 2015 (UTC)

I can understand Matt's point that larger companies might have an advantage over smaller companies when it comes to resources for A/B testing, and if I had not just been on co-op with a tech startup, I would have assumed the same thing. However, today, inbound marketing and sales software like Hubspot offer services like A/B testing as part of the package, making it easy for even small start ups to alter the promotions and launch pages they send to their email list in order to get the most impressions (and subscriptions) possible. And in terms of caring about whether or not I have a different Facebook user page than my friends, I agree with Matt that it is annoying. One of the most recent changes Facebook made was making their mobile app's inbox an entirely separate app. I'm assuming this was created after A/B testing because users preferred to keep their inbox separate from their notifications, but I think it would be interesting to hear the class's feelings on this. --Kaylynn Nelson (talk) 16:27, 20 January 2015 (UTC)

My question is, why is A/B testing not more well known? From the article it seems that it is used across almost the entire broad spectrum of the internet, so why isn't it something that is more familiar to the average internet user? It is also amazing to me that so much is involved in it. In the article it is said that there is a huge amount of legwork involved in using A/B testing as well as implementing changes on sites and pages around the internet. In my opinion, I would have thought that, considering the great strides that coding and web mastery have made even in the past few years, this process would be easier. Nonetheless, it is still clearly evident that A/B testing is something that has genuinely piqued my interest and something that I will spend some time looking into on my own in the future. Tschn012 (talk) 18:15, 20 January 2015 (UTC)

Jan 23 Fri - Reddit

I remember my high school boyfriend used to be obsessed with Reddit. I would look past him at the desktop screen in his basement and wonder what the big deal was. He tried to show me around the community but claimed, "You probably won't understand, only Redditors really get it." And with that, I gave up on trying to understand why he spent countless hours upvoting, downvoting, posting, and commenting. Since then, I haven't really touched Reddit much. After reading the articles, I can see why it's a tough community to advertise on or make any drastic changes to. Its users are obsessed; I've seen it first-hand. If ads suddenly appeared on their organic, never-changing feeds, they'd likely do something drastic like leave en masse. Also, with the nature of its anonymous posting, it's hard to correctly target people with ads. One of the articles is titled "Can Reddit Grow Up?" I think this is a very valid question. Personally, I believe change on Reddit will be very difficult, but maybe if it is slowly and carefully implemented, it could be possible. Brianne Shelley (talk) 18:42, 23 January 2015 (UTC)

Similar to the previous post (Sorry, I can only see numbers on the signature, so I have no idea who said this), I am not a “Redditor”. I had never really even heard of Reddit until I got to college and all of the guys I was friends with were on it regularly, one even called it “the front page of the internet”. What I really did not find appealing, and still don’t, is the “online message board plucked from the 1990s” design, which makes it look very hard to navigate. The appeal of the website makes sense though- it gives people information on hundreds of thousands of topics all in one place. In the article “Can Reddit Grow Up”, Mike Isaac talks about how the website may begin to bring advertisements into it. People don’t really seem that thrilled about it, but would it really affect anything? The article says that it would look like a “native ad” to make it look like a regular Reddit conversation. I don’t think that it would make that much of a difference if the website doesn’t get any real changes, besides a link that can potentially be an ad. I think if Reddit allowed more ads like this, it would benefit them economically and it wouldn’t affect the readers at all. Sydneys92 (talk) 01:40, 22 January 2015 (UTC)

  • Many of you think Reddit is stale and childish, which I'm sure we'll talk about! -Reagle (talk) 18:28, 23 January 2015 (UTC)

I often consider Reddit to be one of the strongest internet communities out there. I've seen article upon article about Reddit users being able to photoshop pictures of a deceased relative, or turn up information from the deepest corners of the internet. I can absolutely understand the appeal of Reddit, although like Sydney, I don't consider myself a Redditor. Reddit allows you to waste time laughing through the "jokes" page, or get involved in discussing the theories of Serial in the podcast's subreddit. That being said, the subreddits discussed on the Controversial Reddit Communities wiki page (/r/jailbait or /r/creepshots) make me question Reddit users' freedom and the trust discussed in Yoni Applebaum's article. Reddit users' power can clearly be used for productive and insightful means, but the fact that users are allowed to interact on such subreddits for any extended period of time makes me lose a little trust in the site and its administrators. We often have to take a step back from the Internet and think about how what we post (i.e. Serial - it's real life, not just a murder mystery) can affect people's lives. An online community can be a bit consuming, and can make people forget that life goes further than Reddit. -Shannclark (talk) 04:31, 22 January 2015 (UTC)

Like Sydney, I had never heard of Reddit till I got to college. I remember taking a class in my sophomore year that touched on Reddit quite a fair bit. To be completely honest, I still don’t understand the purpose of Reddit… What do people use it for? It has a confusing and dated web interface (in my opinion). Therefore, you can see how shocked I was to find out how big the Reddit community is on the Internet. Putting my personal opinions aside, I suppose what makes Reddit such a successful online community is that every user has to abide by certain rules known as “reddiquette” when posting. Yet at the same time, the website still has a somewhat childish feel – visually and content-wise. “Discussions are often peppered with vulgar schoolyard humor,” Mike Isaac wrote in the New York Times article “Can Reddit Grow Up?” With that said, it might be a little difficult to take the website seriously. Nsiu (talk) 21:21, 22 January 2015 (UTC)

I found the article "How the Professor Who Fooled Wikipedia Got Caught by Reddit" to be extremely interesting. I took a Political Communication course last year and we talked a lot about how news is created. Many news stories become popular simply for their appeal. If the story is enticing with shocking and unbelievable scenarios, more people are going to be interested and create a buzz around the whole topic. This is exactly what these students did by fabricating these historical tales. It is funny to think too that we have now become so confident in Wikipedia as a news source. I rarely am skeptical of Wikipedia articles even though I have always known they can be edited by anyone! But while Wikipedia was the unreliable source, Reddit actually portrayed the truth. I liked the quote in the article that stated, "The hoax took months to plan but just minutes to fail." It shows how powerful these online communities can really be in terms of spreading news. Nduryea (talk) 18:42, 23 January 2015 (UTC)

  • make sure you sign! -Reagle (talk) 18:28, 23 January 2015 (UTC)

I think it’s incredibly brave of Reddit’s founders and administrators to have a policy of trying not to ban content even if they find it deplorable, but only when it possibly endangers the public or tries to game the system. Reddit’s environment of encouraged anonymity could explain in part why some of the more questionable subreddits occur. Although I find the outrage at users being outed as real people, and the idea that it’s a violation of their First Amendment rights, to be suspect. Is remaining anonymous when requested a prerequisite to free speech? Should it be? The article in The Atlantic made an interesting hypothesis as to Reddit’s success. Unlike Wikipedia or Facebook, “Reddit, by contrast, builds its strong community around the centralized exchange of information. Discussion isn't a separate activity but the sine qua non of the site.” If Wikipedia functioned more like Google Docs, where editors could track one another in real time, or had a comments section where users could post their questions and findings, would the community strengthen? TWOBJ (talk) 03:29, 23 January 2015 (UTC)Trevor

This is my first time hearing about Reddit. Even though I am not familiar with Reddit, I kind of know what it is about, because there is a similar website in my country. I believe a site like Reddit has its own value in the online world, even if it seems useless, outdated and childish. This kind of website provides a platform for internet users to gossip and exchange information. People enjoy posting topics and getting feedback from other users. It satisfies people's psychological needs somehow. And I agree with what Natassia said: as a successful online community, every user has to abide by the rules. But I am also curious: how has Reddit made a profit to support its operation for the past years? The article "Can Reddit Grow Up?" mentioned that Reddit does not follow the dominant business model of selling advertising on its site. How do they make money without selling ads? Yulu Lei (talk) 04:39, 23 January 2015 (UTC)Yulu

  • Good question, which we will discuss -Reagle (talk) 18:28, 23 January 2015 (UTC)

I am a casual Redditor, and would agree that there's a very strong Reddit community. I was pleased to hear that they couldn't be fooled by the internet hoax. There's a lot of great discussion and collaboration on that site, and I find it to be well-moderated, so it's neat to see how that results in a "smarter" community. I love the site, but I didn't realize they were trying to bulk up their advertising game, and I'm not sure that will go well... Redditors, as the articles suggest, are an overall intelligent and internet savvy group. I worry that large changes in advertising style may not only drive many Redditors off, but cause some sort of uproar in the community. And when a large number of people, with community ties, feel upset about something on the internet, you can get some bad results (i.e. harassment). I haven't had any negative experiences with other Redditors, but I can imagine some sort of collective uproar occurring if the company isn't careful. On the other side, I was trying to think of a nice way to implement advertising without upsetting the community... it's not an easy problem to solve. It would have to be something that Redditors can say "okay, I see what they're doing and why that makes sense. I can live with it." Are there any alternatives to bulking up the advertising? Is this the only route the company can take? How has the company been making money so far? Mixing ads with content seems risky with such alert consumers. -SamDiamond88 (talk) 15:24, 23 January 2015 (UTC)

  • They are still in the "red" as far as I know -Reagle (talk) 18:28, 23 January 2015 (UTC)

I've heard about Reddit from college classes but I never went on it or was curious to know what it is about. I've been going through it for a while now and I am still confused as to what it really is about, or how to use it. I really enjoyed the article "How the Professor Who Fooled Wikipedia Got Caught by Reddit". It was very interesting to read and to learn about the differences between the Wikipedia community and the Reddit community: how Wikipedia has a weak community but focuses on the exchange of information, whereas Reddit has a strong community and also focuses on the exchange of information. How can an online community form high levels of trust with its members? It's hard to form with all the ambiguity involved. The article "Can Reddit Grow Up?" explained more about Reddit, things that I didn't know and wouldn't know by searching their website; it was useful to read to understand the community. Latifaak (talk) 18:42, 23 January 2015 (UTC)

I, like Sam, am a casual Redditor. However, I've never looked into Reddit's background before. I found all of the articles very interesting and also a little terrifying. I have always been an advocate of free speech, but after reading the post on "Controversial Reddit Communities," I'm starting to realize that the line between what people can share anonymously and who should be outed or "doxxed" to discontinue their subreddits is blurred. The "/r/jailbait" subreddit was specifically alarming to me and made me cringe to think of young, naive girls who aren't aware they're the face or "butt" of a Reddit subcommunity. True, it's their fault for posting revealing pictures, but I blame a lack of knowledge, and those girls being unaware that everyone has access to their pictures, not just the cute boys in their school. I was personally happy when Michael Brutsch was doxxed for his /r/jailbait involvement because it advocated child pornography. Plus, he was also the moderator of the subreddit community /r/creepshots, through which any innocent woman could be subjected to indecent exposure. I think certain subreddit communities, like the ones aforementioned, deserve to be banned if they're harming the public's peace and ruining reputations. --Kaylynn Nelson (talk) 15:59, 23 January 2015 (UTC)

I found it interesting that the article "How the Professor Who Fooled Wikipedia Got Caught by Reddit" claimed that the reason Wikipedia was tricked by the students of T. Mills Kelly's George Mason University class was that it is a "weak community" that relies on good will and a lot of trust. On the other hand, the author claimed that Reddit "builds its strong community around the centralized exchange of information." At first I felt almost offended upon reading Yoni Appelbaum's words, as much of this course is based around Wikipedia and the online community that it forms, but then I felt that there was some truth to what he was saying. Although I do not agree with his remark that Facebook has many strong communities within it (we have discussed in class that many users do not feel "attached" in any way to the Facebook world, while there are people who identify as "Wikipedians"), it does seem true that Redditors feel a sense of community unlike any of the other online communities mentioned. While virtually anyone can try to write an article for Wikipedia or edit someone else's (as I have learned from this class), the Reddit community is one that feels so exclusive and elusive to outsiders and yet somehow so inclusive to its users. Before reading the articles today, I knew basically nothing about Reddit besides that it was a site where users share information about anything and everything, but now I get a sense of why a "false" article can go unnoticed on Wikipedia for days while being exposed on Reddit in minutes. It is incredible to me the way Redditors use this forum to find and uncover information, but also as a place to "bond" with users that most have never met in real life. -Jretrum (talk) 16:15, 23 January 2015 (UTC)


Yoni Appelbaum's telling of hoaxing events in "How the Professor Who Fooled Wikipedia Got Caught by Reddit" amazes me for two reasons: one being just how easy it was for Professor T. Mills Kelly's classes to completely fool the Internet and, two, just how easily a community of people is quick to be fooled. These notions immediately pushed my mind back to the ideas noted in our first reading, Cialdini's "Science of Persuasion." In Kelly's first successful prank, it seemed he and his students had covered all of their tracks—securing expert fact and evidence—leaving very little room for people to question the information. His students' second attempts, however, proved less secure and significantly more prone to communal backlash: "...it took just twenty-six minutes for a redditor to call foul, noting the Wikipedia entries' recent vintage. Others were quick to pile on, deconstructing the entire tale" (Appelbaum). Because in this case, one, then two, and three, four, five, six people openly expressed their doubts on this public, communal forum, increasingly more suspicion arose amidst the reddit population reading "Lisa Quinn's" post: a fine example of the process of social influence, no doubt. - Kristinam 0330 (talk) 18:11, 23 January 2015 (UTC)

  • nice connection -Reagle (talk) 18:28, 23 January 2015 (UTC)

Jan 30 Fri - NEU Special Collections

Feb 03 Tue - Gaming motivation

After reading your article about Revenge Rating, I've come to realize more about the complexity of online rating systems. Personally, I don't often go on sites that have online numerical rating systems, and when I do, I'm more of a "lurker" than a contributor. I can now more clearly see how online ratings can be gamed or manipulated. It was interesting to read about photo.net and its experimentation over the years with different rating systems. I took a look at the site to better understand your article and to see what it looks like today. Now they're back to comments only, which I think makes more sense for this type of website. Art really is hard to judge numerically because there are so many aspects to it, and interpretation certainly varies. Perhaps the only other form of rating that would make sense is allowing people to "like" a photo. This might generally work and prove useful; however, it can also be gamed. People who want to get their content to the top can crowdsource others to contribute their "likes". Conversely, people can also downvote or leave negative comments on other people's posts. Although the rating systems present on other websites are now more advanced than the original experiments on photo.net, it still makes me wonder how much of today's online ratings are gamed. If anything, it's a good idea to keep the nature of gaming systems in mind instead of blindly accepting all reviews presented as accurate. Brianne Shelley (talk) 19:53, 2 February 2015 (UTC)


I agree with Brianne; I too did not understand the complexity of rating and commenting and the effect it has on every post. My question is: how are there still so many ways for people to get around the system, even with CAPTCHAs or rules for commenting and rating in the community? Bengtsson, on page 11, talked about a scheme where you can only comment if you leave a rating below 5, but I agree with him that this will lead to overrating. I think he had good ideas, but the reading still made the point that people will begin to leave basic or two-word comments just to get their rating in. I do not feel that I actively rate or comment a lot in communities, but I do believe that obstacles would discourage me from doing so if I ever wanted to. I have heard about people buying followers for their profile, or likes for a specific picture, which makes the profile or the picture popular. I guess this is a new way for people who are desperate to make their posts popular, and there is no way for another user to know that the number of likes is fake; therefore, as the reading said, it makes them want to like it, since most of the people in their community did. Latifaak (talk) 21:46, 2 February 2015 (UTC)


Before reading this article, I wasn't aware of the site Photo.net. I found the article very interesting, but similarly to Brianne and Latifa, I did not fully understand the rating and commenting systems in place on the site. Throughout the article, I couldn't help but think about how similar the concept is to Instagram. However, when I visited the site, I did not find it to be user-friendly at all. Photo.net is the Instagram for the more computer-savvy and for professional photographers. I can't seem to understand why Philip Greenspun felt the need to make the rating system so complicated. The reason sites such as Tumblr and Instagram are so popular is their user-friendly systems. It is incredibly difficult to rate art, especially on a numerical scale. What is art to one person is not necessarily art to another. What's worse is that people started using manipulation and fake profiles to get their content to the top of the rankings. In the section titled "Tweak Critique", the author mentioned Ansel Adams and how he was noted for the extreme dedication he took in not only capturing an image on a negative but also producing a print. With digital photography, the conductor of the photo and the composer don't necessarily need to be the same person. The use of editing software gives individuals the opportunity to manipulate photos way beyond the negative. This article shows how desperate some "artists" are to get their work noticed, and how people within communities tend to favor similar works. Nwells1229 (talk) 23:28, 2 February 2015 (UTC)

In the age of Web 2.0, means for online social interaction have certainly come a long way. This proves especially true for photo-based platforms as we note their evolution across Photo.net, Flickr, Tumblr, and Instagram. Professor Reagle's essay on Photo.net brings to light the medium's original purpose: "...the premise of sharing and critique is that one learns to be a better photographer, that members are 'working to help each other improve.' People exchange tips, give each other critique and advice, and the very act of sharing is spoken of in terms of learning" (Reagle). With the development of the medium, however, came new motives for broadcasting one's images across the web. As Kraut examines, "Many members of online communities are motivated because either effort toward the task or successful completion of the task is intrinsically rewarding...Intrinsically motivated actions are ones that directly fulfill some basic desire" (Kraut, 41). Reiss's 16 motives very well embody these modern desires. For example, status, or the desire for social standing (including attention), as well as the desires for social contact and approval, are increasingly prominent as motivations for sharing one's content. More often than not, social media users are posting on their respective platforms not only to fulfill their need for "social contact" but also to reel in positive affirmation of themselves and their lives. On these various platforms, "status" is directly linked to numbers (e.g., views, likes, followers). Therefore, a post is only successful and rewarding if it receives a noteworthy number of likes. To the same extent, a user is only as successful as his or her number of followers. 
And, in the same way that the "top rated" pages skewed relevance on Photo.net, users on Instagram tend to be directed to well-received photos posted by well-received users, whether or not they are actually high quality, thus perpetuating the idea that "people are seduced by numbers" (Reagle). Kristinam 0330 (talk) 23:41, 2 February 2015 (UTC)

Like Natalie, as I was reading your article on Photo.net, I related it to Instagram. Although Photo.net might seem to cater to a more professional crowd (through its user interface), I think they are essentially doing the same thing. After all, both sites thrive on a rating/liking system that is easily susceptible to manipulation. Users are able to create additional accounts or get their friends to rate/like their photograph. Therefore, "top-rated" photographs on Photo.net may not actually be top-rated. Another way the system can be manipulated, especially on Instagram, is to buy your likes: there are websites that allow users to buy likes and comments on Instagram. Given all this, how can we trust whether a photo is well liked/rated or not? Another problem that arises with these online photo-sharing sites is that we do not know if the photographs are real or not. They could have been manipulated in Photoshop prior to uploading. Nsiu (talk) 00:02, 3 February 2015 (UTC)


I have been patiently waiting for the reading on intrinsic vs. extrinsic motivation since day one, and I was not disappointed with tonight's reading. I would like to challenge the idea of mandatory class participation using some of the arguments made by Kraut. First, having extrinsic motivation to do something decreases our own intrinsic motivation: "...one should be careful about providing rewards and other extrinsic motivations for activities people find intrinsically interesting, because doing so undermines their intrinsic interest in the task" (Kraut, p. 58). I read that to mean that by putting a lot of pressure on class participation for a good grade, students are motivated to participate for the sake of the grade instead of by their own personal interest in the topic at hand. Second, just as in online communities when people "karma whore" to increase their own rating, rewarding those who speak often in class leads people to speak often even if they have nothing to say: "...some users contribute many short and not very informative posts" (Kraut, p. 54). This was brought up in an earlier class, about how often we should try to speak - but if we are speaking just for the sake of speaking, is that actually a productive discussion? Can I write my user influence essay on this topic? -Enarowski (talk) 01:47, 3 February 2015 (UTC)

Before reading Kraut and your article about Revenge Ratings, I wasn't aware of all the intrinsic motivations people have before making the decision to rate or comment on a mere photograph. By performing that seemingly meaningless task, the rater gets something out of it, but not a tangible reward. So why does anyone bother? Even though I'm not a photo.net user, when I saw how many people in our class related the site to Instagram, I did the same and asked myself that question. Since I, for whatever reason, feel "accomplished" after getting likes on the photographs I publish on Instagram, I can understand. And I can further see why Photo.net grappled with changing its rating system for so long. Today, there are also revenge comments being made on Instagram: if Tyler doesn't "like" a friend's most recently uploaded photo, Tyler can expect that friend to ignore Tyler's latest photo in retaliation, to feel vindicated. These intrinsic feelings make me think that there is no way to rate photos completely free of bias or self-motivated reasons. --Kaylynn Nelson (talk) 01:56, 3 February 2015 (UTC)


I am quite shocked at how many psychological principles rating/liking systems apply, after reading these articles. The authors mention performance-contingent rewards and task-contingent rewards on page 53. My simple question is: which type of reward does Wikipedia use to motivate people? And how does Wikipedia evaluate people's contributions? I saw some + and - numbers showing on the contribution page; is that the system Wikipedia uses to evaluate performance? Comparing Photo.net and Instagram, I do not think Instagram can be categorized as a professional photo-evaluation platform; there are so many unprofessional users (like me) who use filters to make photos better. For a lot of highly rated Instagram users, their photos are more likely about fashion and lifestyle, and there are no shooting techniques to be discussed there. Personally, I do not even know who some of my followers are, so I do not think their likes and comments are valuable to me. Yulu Lei (talk) 15:22, 3 February 2015 (UTC)Yulu


I think it's safe to assume that almost everyone in the class has had experience with some sort of "liking" or "rating" system in an online setting. I think it has become such a common component of any internet community because it encourages user involvement. Rating systems allow users a chance to quantify their opinion on a given subject and let the publisher of the photograph know what that person thinks about it. It is a unique way for others to judge material, and though I agree with Kaylynn that there is no way to rate photos completely without bias or self-motivated reasons, I think it is a useful tool that can have positive and constructive outcomes. For example, it is an encouraging thought when someone applauds your photograph and provides insight for improvement. That being said, the problems highlighted in the readings in terms of manipulation and mate/revenge rating are something that should be acknowledged and understood by anyone involved with the rating process. Mate rating is definitely something I have experienced in the past, yet sometimes it's more out of guilt than an attempt to increase the rating of my mate. For example, if someone likes a photo of mine, especially one that I'm in, then I will probably like a photo of theirs in the future. It's not even that I consciously think to "repay" this person with a like or a positive rating; it just happens subconsciously later on. Therefore I realize that I don't rate every item on the same scale or with the same degree of thought. Given my own examples in combination with the reading, I tend not to take online rating systems to heart, yet I do acknowledge that I contribute to them and do see some usefulness in having them. I look forward to talking in more detail about what others think about this topic. Matt rodgers2 (talk) 15:33, 3 February 2015 (UTC)


After reading the Kraut chapters and your article, I was very impressed to see what really constitutes useful evaluation and feedback from users in an online community. Personally, I agreed with some of the fixes made on photo.net: for example, its being constituted by subscribers who had to pay for their membership, instead of being a free community that anyone can join. I believe that when users have to pay for their membership, they take things more seriously and benefit from the perks it brings. I feel photo.net was and still is an experimental platform from which we could learn enormously about users' consumption and behavior. Due to its several fixes and changes, we could study and learn, for example, why the "nude pictures" were getting more popularity than natural pictures, and so forth. I also could identify some of the user behaviors described in the article. I practice photography in my free time and have a Flickr page where I post all my pictures, and I have seen many times how people get a good rating on a picture just because of "karma whoring". I personally hate "karma whoring" and think a lot about how it can be stopped in social media, but it is very hard, because most of the time users are driven by popularity instead of the love of art or photography. I have really come to terms with the fact that the rating of a picture is just a number nowadays; to me it does not mean that one picture is better than another, just that it received more popularity, maybe because of "karma whoring" or some other technique. As Brianne said too, I don't often go on sites that have online numerical rating systems. Iferrrerb (talk) 15:56, 3 February 2015 (UTC)

    • I think the idea of karma on a site or any online app is really interesting. After talking about karma within communities like Reddit two classes ago, I downloaded the app called "Yik Yak," which has a Karma score. This score, along with karma whoring, can seem a bit ridiculous to anyone outside of the community. I too dislike the idea of karma whoring, but I can see how it really works for people within the online community. People would post on Yik Yak about their high karma scores, which would make me think "Why don't I have a score that high?" Even though I don't care about the app much, I still found myself thinking about the karma score, making it more understandable why people who are avid users care so much about a seemingly meaningless number. I can imagine on a photography site it would get annoying if people are posting low-quality pictures and getting tons of likes or comments. -Shannclark (talk) 16:43, 3 February 2015 (UTC)

While reading your "Revenge Rating and Tweak Critique at Photo.net", I couldn't help but wonder whether I too am involved and active on sites or applications that have been influenced by the "McDonaldization" effect of trying to quantify everything. I realized that sites I find myself on daily, such as Facebook, Instagram, and Pinterest, may not have users "rate" images on a scale of 1 to 7, but they too became quantifiable the minute they started having users "like" images. I agree with Brie's comment above that perhaps "liking" images would be more effective for Photo.net than having users try to quantify or rate an image based on "aesthetic and originality", because art is very much about how it is seen by the beholder, and seems much too subjective a medium to ever be effectively quantifiable. However, my question is: how is assessing the number of "likes" and "comments" on Facebook and Instagram much different from a rating system? Are users not still obsessing over the amount of positive feedback and reinforcement they get on these sites, and using the number as a way to justify how great a picture is, or how many friends he or she might have? I could also relate the "revenge rating" concept to friends of mine who have refused to "like" another's image because "that user did not like my most recent image." I am unbelievably thankful that these popular sites have not gotten to the point where users can rate other users' images on a numeric scale; however, I do believe that your statement that "people are seduced by numbers" is undeniably true. I wonder if, like photo.net, social networking sites like Facebook and Instagram may ever be able to go back to a time when the average user's motivation is not focused so largely on the sheer "number" of rewards and feedback they receive from others. - Jretrum (talk) 16:20, 3 February 2015 (UTC)

It seems like a lot of people are comparing Photo.net to Instagram. While I see the comparison, the rating systems for these sites seem very different to me in terms of context. The idea behind Photo.net was for photographers to get feedback, praise, and constructive criticism on their works (and to offer the same to others). The site strove to create a community of people with the common interests of looking at and analyzing photography as art. Instagram falls somewhere between Facebook and Reddit in terms of its status as a "community." We talked about how Reddit users identify themselves as Redditors, but Facebook users do not identify as Facebookers -- one of the reasons we deem Facebook to be more online network than online community. On Instagram, it's common for people to connect with both friends they know in real life (offline) and users they don't know offline. In fact, the majority of people I know use their Instagram to follow "unknown" users (people they've never met). Then again, I've never heard someone refer to herself as an "Instagrammer", and the platform doesn't foster a lot of communication, collaboration, or bond-strengthening between users (like Reddit does). In this way, I see Instagram as more of a network (like Facebook) and less of a community (like Reddit). I see Photo.net, though, as a community, fostering interaction and collaboration. I guess what all of this comes down to is that I think the rating system on Instagram is less relevant, and less important to analyze. It's a simple platform, not meant to really analyze the quality of photography as art, but simply to say "I liked this" or say nothing at all. In this case, the rating system seems acceptable and successful. Photo.net's rating system deserves more attention because they are trying to use it to enhance the community and to foster "good" interaction between users. 
Which brings me to my question -- can/should a ratings system be used at all to enhance/promote "high quality" interaction? I'm not sure it can, for many reasons, the main one being that of human nature to game the system. Latifa said that barriers in place to stop gaming the system may discourage her from using a site, or using its rating system at least. I think this would be true for many newcomers to an online community. Therefore, taking any measures to prevent gaming the system would likely have a negative impact on users' intrinsic motivation. But taking no measures to prevent gaming will obviously leave you open to "misconduct." It seems to me that in a community where the quality of the contribution is valued, a ratings system can only be detrimental. Instagram doesn't care about the quality of your contribution, therefore rating is appropriate. But Photo.net cares about quality and I don't see a way to implement ratings without destroying quality. -SamDiamond88 (talk) 16:59, 3 February 2015 (UTC)

After looking at your findings in "Revenge Rating and Tweak Critique at Photo.net," and at what Kraut and Resnick had to say, I can see why Philip Greenspun had such a difficult time coming up with a rating system that was both hard to game and pleasing to his target art-world audience. Art is exceptionally hard to define, let alone quantify, so when it comes to rating another's photographic artwork there's no easy way to define parameters. A few people have mentioned that the popularity of image-sharing sites like Instagram and Imgur comes from their user-friendly interfaces. However, that could simultaneously lead to the dilution of the artwork. I think Greenspun's idea for a merit-based system was on the right track. An Instagram-like system, where the most likes equals the most popular, doesn't seem to fit a professional photo-sharing site, especially because, with popular Instagrammers, a lot of the comments are people trying to link back to their own sites or accounts. Photographers want honest feedback, which is part of the problem with revenge ratings. Of course we want feedback, and I think we'd prefer it to be positive. Kraut and Resnick's 26th design claim is that "Rewards that are task contingent but not performance contingent lead to members gaming the system by performing the task with low effort" (p. 55). A good example of that claim is trying to make comments on low scores mandatory, as one user suggested: people just type nonsense, or mean-spirited jabs, to fill word requirements. And if you really want to encourage solid feedback, you would need a moderator in charge of making sure people leave good insight. On a large platform, the policing of that policy would become increasingly difficult.--TWOBJ (talk) 17:18, 3 February 2015 (UTC)

Like many of my peers, I too compared photo.net to Instagram. I think Instagram is a more widely known and used photo platform in today's generation. I have always been bothered by how much "likes" and "followers" matter to the people who post photos. Recently, I unfollowed a girl on Instagram whom I knew through a friend. I don't know her that well, and her photographs consisted mainly of photos of her face. Don't get me wrong, I have nothing against the occasional selfie, but in that moment I felt I had no need to keep having these pictures appear on my feed. This unfollow was promptly followed by a text from this girl asking why I would unfollow her. I was uncertain how she would ever know that, out of her 465 followers, it was ME who unfollowed her... but of course, as for most things, there's an app for that. I learned that you are able to download an app that notifies you when someone unfollows you. It bothers me that these apps and websites no longer have anything to do with photos or artistic prowess, but rather with a popularity contest. As if a large number of likes or followers somehow makes your photography superior to everyone else's. Ratings, likes, and followings are simply a way for people on these networks to find some sort of reassurance that their work is being valued. Nduryea (talk)

Feb 06 Fri - Kohn on motivation

Given the concept presented by Gratipay, along with certain themes addressed in Chad Whitacre's "Resentment," I couldn't help but think immediately of Gratipay's distant cousin, Kickstarter. While Kickstarter isn't necessarily based on motivating its patrons with the appeal of "gratitude," its crowdfunding roots are undoubtedly one and the same. That being said, as with Gratipay, one person or another is certainly bound to feel some type of resentment toward any given Kickstarter user. Personally, I was particularly furious with a certain Kickstarter, otherwise known as "Potato Salad". In this Kickstarter, a Mr. Zack Danger Brown sought a small $10 to fund his attempt to make potato salad: "Basically I'm just making potato salad. I haven't decided what kind yet" (Brown), as he describes the project. Since establishing his Kickstarter in August 2014, Brown has received $55,492 from 6,911 backers, to whom he has made various ridiculous promises for their monetary support. These include "POTATO MADNESS: Receive a potato-salad themed haiku written by me, your name carved into a potato that will be used in the potato salad, a signed jar of mayonnaise, the potato salad recipe, hang out in the kitchen with me while I make the potato salad, choose a potato-salad-appropriate ingredient to add to the potato salad, receive a bite of the potato salad, a photo of me making the potato salad, a 'thank you' posted to our website and I will say your name out loud while making the potato salad" for a $20 investment and "THE PLATINUM POTATO: Receive the recipe book, the shirt and the hat along with a bite of the potato salad, a photo of me making the potato salad, a 'thank you' posted to our website and I will say your name out loud while making the potato salad" for $110 or more. 
While Brown's endeavor was indeed initially entertaining, the fact that he has convinced so many people to shell out as much money as they have to make *potato salad* is ultimately upsetting, in my personal opinion. After all, $55,492 toward making potato salad is in itself ridiculous, not to mention that such an extensive amount of money could have been put toward something a bit more beneficial to the world as a whole. And "that's exactly the shit I find devaluing and dangerous about mixing market and social" (David Heinemeier Hansson). Kristinam 0330 (talk) 01:59, 5 February 2015 (UTC)

First, a very interesting (and funny) video on extrinsic reward variation with monkeys: https://www.youtube.com/watch?v=HL45pVdsRvE. I found compelling Kohn's discussion of how extrinsic motivators can eventually demotivate someone from doing a task or activity once the original motivator is no longer involved. The fact that paying people to do something they love can actually make them want to do it less is surprising to me. It would be interesting to hear Kohn's thoughts on the implications of varying extrinsic rewards. I can imagine that if one person was being paid less to do the same job as someone else, then they would not want to do it anymore, but how would this affect the person who is being paid more? Can the pay difference (or any difference between extrinsic rewards) be used as a motivator, or does it always bring rejection (like the monkey video showed)? -Shannclark (talk) 04:24, 5 February 2015 (UTC)

Kohn makes an interesting point in chapter 5, "Most of us, after all, can think of something we used to do just because we found it enjoyable - until we started getting paid for engaging in the activity, after which there was no way we would consider doing it for free. Somehow our intrinsic interest evaporated after rewards were introduced" (p. 71). As I was reading this, I was trying to think of how it relates to my daily life. Growing up, I had a book that helped with my spelling. Each page would teach me a new word. It was my favorite book, as it had pictures to aid me in remembering the words. I remember going crazy with my crayons and markers, coloring the pictures in and scribbling all over the page. Then one day, my dad decided to reward me with an M&M (in this case, getting paid) every time I spelled a word right. Soon after, I would only touch that book in hopes of getting an M&M. As soon as my dad stopped giving me M&M's, the book became obsolete. In doing so, my dad unknowingly killed my interest. Therefore, I wonder: should we never reward our children for doing something good? Doesn't rewarding people for doing something good come naturally? Can we reward someone for doing something good without killing his/her interest? Nsiu (talk) 14:10, 5 February 2015 (UTC)

  • Nsiu, good question that I hope you raise in class.-Reagle (talk) 18:40, 5 February 2015 (UTC)

After reading Kohn's take on extrinsic motivation, I began to think (like those who left a note above me) about my own life and how his teachings could apply to it. I started to think about how, especially at this point in my life, I'm told time and time again to take what I love to do and make it my career when I graduate. But, according to Kohn, this could potentially wreck my inner drive to do what I love. Because I've already had several co-ops and part-time jobs, I wondered if this had already happened to me. I love to read books and write poetry. However, now I barely read or write for pleasure. Because my jobs include a great deal of reading and writing for which I am compensated, Kohn would say that my "intrinsic interest evaporated." Looking at it myself, I could see that maybe this is the case: because I've been rewarded extrinsically for doing the things I enjoy, I no longer wish to do them without compensation. On the other hand, maybe I don't read and write for myself anymore because my creativity is exhausted from a long work day. I'm not sure if Kohn's take is right, but I can definitely see how it could be the potential cause of my declining personal reading and writing. Brianne Shelley (talk) 18:30, 5 February 2015 (UTC)

I am very interested in the long-term effects that come along with presenting people with extrinsic motivators. For example, the fact that getting paid or rewarded to do something you like, just once, can make you less interested in that same hobby for weeks is fascinating to me. So is the idea that if someone is given extrinsic motivators often, it can hinder their ability or interest in developing new hobbies when rewards are not presented. These two ideas make me wonder how much my parents used this when I was growing up and how it affected the way I behave as well as what I am interested in. It definitely raises the question of what is (and what is not) moral and ethical while rearing children and/or teaching them. -Enarowski (talk) 23:49, 5 February 2015 (UTC)

With my last co-op, I definitely experienced a decrease in my intrinsic interest. This article by Kohn made me think a lot about getting a job post-graduation. I learned from both my parents that the key to a successful career is doing something that you love, but this article makes me think that might not necessarily be true. I remember as a kid being rewarded for finishing everything on my plate, and how my dad used to pay me if I got A's in my classes. But over time, those rewards began to wear off. As an adult, I realized that finishing every scrap of food on my plate is not necessarily a healthy habit, and that getting paid to do well on a test or assignment was not helping my academic career. But the question I have is: if we are not motivated by reward, then what are we working toward? The reality is that we live in a materialistic, money-driven world. If we aren't paid for our jobs or accomplishments, then how will we support ourselves? I agree with Kohn that we should do things because we enjoy them, but work is not always fun and someone has to do it. Nwells1229 (talk) 23:12, 5 February 2015 (UTC)

Similarly to those above me, reading chapter 5 of Kohn’s book made me think about my own personal life and how rewards tend to kill interests. In the study Kohn discusses on page 70, with college students being paid or not being paid to finish a puzzle, he explains that those who had been paid spent less time and effort on the puzzle than those who hadn’t been paid. This reminded me of when I was younger and my mom bribed me with $100 to read 10 books over summer break. I hated reading (still kind of do). Even though that wasn’t exactly hard to do and I wanted the money, I still never really wanted to read. Having the extrinsic motivation of having to read, something that most people love to do, made the idea of reading even less appealing. To this day, I still have this issue. As a photographer, people always ask me to take photos for them and then pay me for my work. I love photography, I grew up loving photography, but after I photograph someone or an event, I don't want to do it again for a while. Exactly as the article states, “A single, one-time reward for doing something you used to enjoy can kill your interest for weeks”. It’s so unfortunate that people end up not wanting to do something they enjoy just because there is a reward attached to it. Sydneys92 (talk) 00:01, 6 February 2015 (UTC)

  • Sydneys92, that's the curse of the wedding photographer -Reagle (talk) 18:26, 6 February 2015 (UTC)

Recently, a friend of mine convinced me to download the fitness app MyFitnessPal in order to track my daily calorie intake and exercise. The point of the app is to keep me "on track" in terms of my fitness goals and diet regimen, while allowing me to connect and communicate with my "pals". In other words, I can use the app to help motivate myself and also to gain support from my friends who are also users. However, I quickly noticed that having to log each and every thing I ate became more of a chore than something I was interested in doing, and that receiving positive feedback from my friends, or seeing how well they were doing, only led to resentment. I was more intrinsically motivated to work out and eat healthy when I didn't have the chance to be the "healthiest" of my friends. (You can read about the effects of extrinsic and intrinsic motivation in relation to other fitness apps here: "Motivational Boosts to Fitness Behavior Modification"). I think that Kohn's reading can be applied to all aspects of life - the classroom, office, family, relationships, and mobile fitness apps too. It is astounding to me how everything from money incentives to offering a child an hour of television after finishing his homework can essentially diminish or terminate interest in the task at hand. My question is ultimately how can we as a society create intrinsic motivation without utilizing extrinsic rewards whatsoever? In other words, how can we bring up our children without promising the cookie after dinner, or teach in classrooms without promising a strong grade for participating? I feel that extrinsic motivation is so ingrained into society as we know it that it is nearly impossible to be rid of it altogether, but that we should look into ways to encourage intrinsic motivation without relying on extrinsic rewards. - Jretrum (talk) 01:53, 6 February 2015 (UTC)

The idea that extrinsic motivation can crowd out intrinsic motivation does seem almost counterintuitive. You would expect that adding a reward would only heighten one's motivation to perform the task. Still, I can understand the idea of an extrinsic motivation making an intrinsically motivated activity much less desirable. When an extrinsic motivator such as monetary compensation comes into play, it starts to quantify your performance. When someone starts to try and measure something you intrinsically enjoy, it starts to feel like work. There can be an added pressure to continually perform at the same or higher level. When an extrinsic motivation is introduced, you may not feel like you can live up to the expectations that have now been set. Gamification can cause feelings of resentment amongst members of a community, especially when the reason for coming to the community is an intrinsic motivation to help others. It explains why Gittip struggled with the idea of a leaderboard. It’s not an easy task to keep people interested without overwhelming them with an extrinsic motivation. Like we discussed in class, the reason Facebook is so successful is that it hits on almost every major intrinsic motivation. When I wrote copy for social media during my co-op, I experienced first-hand how adding a monetary component sucks the motivation out of wanting to do something.TWOBJ (talk) 04:00, 6 February 2015 (UTC)Trevor

I agree with Shannon and Natassia; I too was thinking, can we not reward our children for doing a good job? I somewhat agreed with Kohn's points, but I do have different thoughts about them. When I know there is a reward for completing a given task, I immediately think of it as a challenge, and then I decide if I want to participate in this challenge or not. If I do, I complete the task to the best of my capabilities, but usually I am not interested in the challenge, so I do not complete the task to the best of my capabilities. I remember on co-op my team would sometimes make challenges, and I was never interested in completing the task properly. It really demotivated me and made me resent my work. I do feel that if rewards are given to people without their knowledge, and after the task has been completed, it may not affect their future completion of that same task or their intrinsic motivation. A final point: if money kills intrinsic motivation for those who love to do their jobs, then why are we living in a world where money is so important? I know of people who work just for the money; it is what motivates them. Yes, there are others who do not care about money and love their job, but I'm sure that if they started to get paid less it would affect how much they love their job. I'm excited to talk further about this in class. Latifaak (talk) 13:23, 6 February 2015 (UTC)

Before the reading, I would have assumed that offering rewards for a specific action would increase the motivation to perform the task. But when I considered what Natalie said about applying this concept to her most recent co-op, I could better relate to and understand why this would be so. Despite the fact that I was getting paid hourly, I also noticed my intrinsic motivation waning, whereas at first I was very excited to work. I think that comes from the subtle feeling of knowing the work HAS to get done and that you NEED to get in at 8am and stay until a little after 5pm. I think what Kohn discovered with both the young school kids and "puzzling" adults is that when our jobs cause some of our freedoms (like time) to be taken away or limited, our once beloved jobs seem much less fun. My question is, where does guilt come into play? Despite the fact that my interest declined during my co-op, if I wasn't working to the best of my ability I did feel guilty and like I needed to step it up. So maybe it depends on the person or test subject and how self-aware or morally conscious they are? I think it would be interesting to hear people's thoughts on this in class. --Kaylynn Nelson (talk) 15:34, 6 February 2015 (UTC)


After reading Kohn’s chapter, I thought about my own personal experiences regarding rewards and interests, as Sydney did before me. I remember my grandmother used to bribe me with toys to eat my vegetables. At first I did not find my vegetables disgusting and actually enjoyed eating them, and because of the extrinsic motivation of getting more toys, I kept it up. But after a while, I stopped doing it even though there was a reward, because being made to do it so much bored me and killed my interest in eating any kind of healthy food. So I find it interesting how paying people to do a task can motivate them to do the task less.

I also found Shannon's monkey video very interesting and funny. Iferrrerb (talk) 15:39, 6 February 2015 (UTC)

Kohn’s statement “Do this and you will get that” reminds me of how I train my 5-month-old puppy by rewarding him with food; his desire for food never diminishes, and he does whatever I want him to do. Most of us have faith that rewards promote better performance. Kohn’s essay shatters that belief we used to hold. Based on what he says in the article, it seems like reward equals punishment, because reward devalues the original intrinsic motivation, and people become less interested in the activity. I agree with the statement; we all have similar experiences. However, I think praise can be an exception, even though it is a kind of extrinsic motivation as well. Kohn mentions: “Praise is essential but not necessary.” I do not 100% agree with that. Personally, I prefer to be praised; it is a kind of approval. I need approval to encourage me to perform better or to confirm that I am right, and I have not lost interest in anything. Therefore, I think praise is essential and necessary. What’s more, I am curious how educators understand or act on this argument after reading the article. Yulu Lei (talk) 15:40, 6 February 2015 (UTC)

Kohn brings up something that I have been thinking about for a few months myself. Over the summer a friend of mine had me try a new type of workout at a place called Flywheel. I really enjoyed it and went back a handful more times. I came to the point where I even applied to and started working there. As an employee I am able to work out there for free, which I thought was perfect because I enjoyed it so much. Now that I work there and am incentivized by my pay to do my job, I find that when I am done with my work I have little to no desire to work out there. When I read this chapter, it was really eye-opening to realize that I have absolutely felt this phenomenon myself. I have always wanted to find a career in something that I love and have a deeply embedded personal interest in, but now, having read Kohn, I may think twice before jumping head first into my "dream job." This also made me think: is anyone happy working in what they think is their "dream job"? Tschn012 (talk) 15:50, 6 February 2015 (UTC)

Cool vid, Shannon! This was an interesting topic to me, but I wasn’t too surprised at Kohn’s point regarding the loss of intrinsic interest as extrinsic rewards are introduced. Elissa mentioned the idea that “getting paid or rewarded to do something you like just once can make you less interested in that same hobby for weeks,” and the first thing that came to my mind when reading this was professional athletes. I’m not here to sympathize with anybody who makes millions of dollars playing a game, but I can think of a lot of different examples in which an athlete retires from a sport because “it just isn’t fun to them anymore.” They made a lot of money, yet somewhere along the line it just wasn’t the same. I think that with money, or any extrinsic reward, one’s involvement in any activity becomes more in-depth and requires a larger degree of attention. Conversely, as a kid I used to get paid a small amount of money to mow my lawn every week, but I disliked it and would try to avoid doing it. Now that I’m older and money is no longer a part of the deal, I find that I actually really enjoy this activity. I’m looking forward to hearing other examples of this unique dynamic within motivation. Matt rodgers2 (talk) 15:53, 6 February 2015 (UTC)

I agree and can connect with what Sydney said about "summer reading" as a kid. I think, as a child, I actually recognized what Kohn is talking about, although I didn't realize it. I used to love reading in elementary school. I often chose books instead of playing with toys in the classroom. When I got older and summer reading was required for school, my affinity for reading decreased with each passing year. I procrastinated on reading the required books for school... and I ended up just reading the bare minimum and then avoiding reading anything else. To this day, I struggle to make myself "read for pleasure" so as not to lose my love for reading... I wonder if we should get rid of summer reading in schools? Or would there be more detriment to this than we can foresee? - SamDiamond88 (talk) 16:48, 6 February 2015 (UTC)

Straying from the discussion on Kohn, a quote from Whitacre's article on resentment struck me: "I’ll work as hard for a candy bar as for a box of chocolates, until you tell me that the candy bar is worth 50¢ and the chocolates $5. Then I’ll resent you for wanting me to work so hard for so little, and I’ll slack off." This resentment works the same with extrinsic motivators. If I do something I enjoy and suddenly I receive a reward for it, when I go back to doing it for nothing, I'll resent the task instead of the person who gave me the reward in the first place. It's complicated and petty, but psychologically we want our interests to be pure. When we put a monetary value on an interest, it fades and becomes a job, something we have to do well instead of choosing to do well. I am still very baffled by how Gittip works, to be honest. Nucomm23 (talk)

Reading over some of the comments above, it seems as though many people are shocked by Kohn's argument that extrinsic motivation can sometimes do exactly the opposite of what it intends to do. Currently I am enrolled in a health communication course and we are looking at various health campaigns and what components make them effective or not. Recently, we looked specifically at vaccinations and why people chose not to get them. Health professionals were so frustrated with people who weren't getting vaccinated that they tried modifying their campaigns in various ways to get more people vaccinated. One strategy was to lower the cost. They lowered the vaccines from approximately $50 to $5 in hopes that the lowered cost would encourage more people to come in. However, the exact opposite happened. When the price was lowered, even fewer people were getting vaccinated! Because the price was so low, maybe people thought, "Do I really want/need this vaccine? Or am I just getting it because it is cheap?" I think this is a good example of what Kohn was talking about. When people choose to take action, it must be an intrinsic need, rather than an extrinsic source telling them how to act. Nduryea (talk) 17:32, 6 February 2015 (UTC)

An old coworker posted this right after I put in my QIC. [Do what you love because society is a trap and work is meaningless] I think this comedian is arguing that we have all become so extrinsically motivated that we can no longer find our intrinsic values. Nucomm23 (talk)

Feb 10 Tue - SNOW

At its pinnacle, the Internet encourages "ordinary people [to] share their own experiences, often at the intersection of banal and profound" (Reagle, 13). In fact, more often banal than not. This is without a doubt a result of comment culture and more specifically the ease with which it is achieved. "Reactive, short, and asynchronous" (Reagle, 14), a "comment" may be as simple as hitting a "like" or "retweet" button, expressing one's alleged feelings toward a web post. In the same vein as "HOTorNOT" and "Am I Ugly?", selfie culture is deeply motivated by comment culture which boosts—or diminishes—one's self-esteem based on the post's resulting comments or reactions. This is likely why so many inappropriate selfies—like those taken at Auschwitz, in the presence of suicide attempts, at funerals, and so on—exist on and are circulated around the web. Each and every one of these posts seeks to incite a particular reaction. St. Vincent's "Digital Witness", as seen on YouTube, comments on the newfound digital culture in which we find ourselves engulfed today, insisting on the notion that we share simply because we have an audience. In the song, singer Annie Clark claims, "This is no time for confessing." This lyric is interpreted as social media users over-sharing every banal moment of their lives to the point of insincerity in "an infinite stream of 'confessions' [without] enough time or attention for each one" (Genius): hence, the shortness of a comment. Ultimately, we must wonder, "why are you posting it — do other people really have to know?" (Genius). Kristinam 0330 (talk) 23:22, 10 February 2015 (UTC)

Feb 13 Fri - Ethics

Ethics in the study of online communities is, without question, a bit of a slippery slope. Users of a specific online community, like OkCupid for example, might expect to some degree that their personal information shared on the website could be subject to the domain's own research; they are unlikely to consider, however, the idea that outside researchers might seek to source information from their private profiles, collecting it as a part of their data. As a user of various social media, I would hope that my privacy is respected by such researchers. Mr. Kaufman of the Harvard research team completely violated this expectation in his own methods. Nevertheless, it seems his taking advantage of the Facebook privacy setting that says "You can't see me unless you're my friend," by way of his research assistants, does not violate Facebook's terms of service as it might Twitter's. Still, I fully disagree with Kaufman's tactics as they apply to one's personal social media. While perhaps the web as a whole is a public forum, an individual's personal information, especially when marked private, should indeed remain private. Kristinam 0330 (talk) 23:22, 10 February 2015 (UTC)

I was shocked that social networking sites, like Facebook and OkCupid, were using users' private profiles as data in studies. I am sure that most of us take steps to “protect our information” on Facebook by limiting who can view our interests and personal information. Despite our efforts in doing so, it is disturbing to hear that researchers were circumventing this by using research assistants (who had access to private user information) to collect data. However, as unethical as it may be, I think it might be interesting to find out the rationale behind it - why do people do unethical things? In the case of the Harvard study, were researchers planning to collect data unethically from the get-go, or was it out of convenience? Was public information not enough? Nsiu (talk) 21:59, 9 February 2015 (UTC)

Reading about ethics when researching and interacting with online communities made me think about the piece we read called "How the Professor Who Fooled Wikipedia Got Caught by Reddit." After reading Amy Bruckman's piece on teaching students how to study online communities ethically, I absolutely think that the students in T. Mills Kelly's course "Lying About the Past" were being unethical. I understand that there obviously is no set-in-stone right or wrong in online ethics, but I think Bruckman's rules for her class make sense. One of Bruckman's requirements that stood out to me was that "students openly describe themselves as researchers, and use no identity deception, even if that is common on the site." Kelly's students posed online under false identities to try to make people believe their fabricated stories. I do not think that this was ethical, and it would definitely not fly under Amy Bruckman's watch. Unfortunately, I don't think there will ever be one standard of Internet ethics for people to follow - most people just go by what they themselves think is morally right, and that variation can be huge. -Shannclark (talk) 22:25, 9 February 2015 (UTC)

  • Great response Shannclark, it's clear you read all the sources (most importantly Bruckman) carefully. -Reagle (talk) 13:43, 11 February 2015 (UTC)


As an Internet user, I was not surprised that sites are using our information and manipulating it or using it in studies. I do not think it is "right," but I don't necessarily expect anything else either - we are putting our information out there willingly. That being said, I think that the Harvard study and OkCupid went too far. In the Harvard study, the fact that students were analyzing the data of their peers seems wrong, especially if those students had marked their pages as private. In the OkCupid article, I think the A/B testing mentioned was okay, but when they took the step of lying to users about compatibility, that was going too far. This is our own fault, but we put faith in sites such as Facebook and OkCupid and expect that our privacy and interests will be honored (even though we all know that they might not be). I wonder if articles such as these change anything - will other sites continue to do semi-unethical tests, or at some point will they get the point that it bothers the public? If not, is there anything that can stop them from continuing to do this? -Enarowski (talk) 23:33, 9 February 2015 (UTC)

Like Shannon, Bruckman's requirement that "students openly describe themselves as researchers..." stood out to me too. I was overwhelmed reading about what the students in that class had to do. Just by exploring Wikipedia in this class, I can see how hard it is to fit into an online community. The reading by Amy Bruckman was interesting, and I did learn a lot about the differences between online communities. The reading on the Harvard researchers was very shocking; even though I know that my information can be accessed by others online, and I do think about what I am posting prior to posting it, I do not think of it as happening to me as I go about my day. Also, the Facebook reading mentioned that in the beginning he got students' profile pictures from their student IDs by hacking into the system. This made me wonder: how often does this happen? I think there should be stricter laws against this, since it is harmful to people. Personally, I am more private online, but I understand that the generations growing up online are so open with their lives that this can really harm them. Latifaak (talk) 19:01, 9 February 2015 (UTC)

I was struck primarily by the article that talks about the controversy over the Harvard data set. Sociologists at Harvard collected students' private information without their knowledge in order to study them. I believe there is no valid justification for the use and distribution of these students' private information without their consent, and this is clearly an ethical setback. I was also captivated by Kaufman's response when a reporter asked him why students were not informed that their information was being collected. Kaufman answered that alerting students risked "frightening people unnecessarily," and that "We all agreed that it was not necessary, either legally or ethically." I believe Kaufman's answer shows how the problem arises from the investigators and researchers themselves: if they do not think ethically, they are not going to conduct an experiment properly, in ways that don't jeopardize the subjects being studied. Sadly, it seems that researchers nowadays are more interested in gathering all the data possible, regardless of where the information came from. Also, as Latifaak said, the Facebook article made me think that there should be stronger rules or laws against collecting personal information online without proper consent. Iferrrerb (talk) 01:17, 10 February 2015 (UTC)

I agree with Elissa's comment that we should expect nothing less than researchers utilizing our public information in their studies. Likewise, I agree that just because the information is accessible doesn't make it okay to use in your studies. For instance, I have a Facebook profile that's set to private and I'm selective with the friends I have. I trust them and personally know all of them, so I know my personal information is secure. I'm sure there are many others out there who, like me, prefer their privacy, which was why I was so angered when reading Kaufman's defensive statement on why he didn't inform the 2009 Harvard class that he was compiling all their data. He said, "We all agreed that it was not necessary, either legally or ethically," but just the fact that he specifically mentions legal and ethical problems makes me even more sure that he knew the research he was doing crossed exactly those lines. This makes me believe that "strange distance" is a very real term that causes researchers to see people online as test subjects instead of actual people, and I think Kaufman needs to step back and honestly consider if he himself demonstrated it during his research. After reading the OkCupid article, I was very interested to learn that the myth of compatibility worked just as well as the actual matches that were created. However, they didn't do a long-term study to see if the matches created from the myth lasted. My assumption is that they didn't... I think it's legally wrong to have people pay for a service and to skew their results. I also think it's ethically wrong because the people are trusting this website to find love, and they start some form of a relationship with someone believing they're on their way to finding it. --Kaylynn Nelson (talk) 22:19, 10 February 2015 (UTC)

Online communities can be difficult for researchers to navigate ethically, especially since they are a relatively new field. After reading through the OkCupid article, it was clear that the first two experiments were done with A/B testing; the third experiment, however, was just unethical. Basically, OkCupid lied to its subscribers in order to collect data. Part of me thinks that when you sign up for these dating sites, you are just signing up to be analyzed for research purposes. These sites are so new that conducting studies with current users is the only way they are going to improve the systems. However, users pay for these services and shouldn't have to worry about their information being leaked or being subject to research studies. The findings from the OkCupid experiments were not surprising, though: the content in your profile is essentially worthless. The fact that people believe an algorithm can determine who they should be with is just ridiculous to me. Relationships should not be built on mathematical equations. Users sometimes put too much trust in these sites and in turn are just so vulnerable to the power of suggestion. Nwells1229 (talk) 23:10, 14 February 2015 (UTC)

Kaylynn, you criticized Kaufman's defensive statement, saying "just the fact that he specifically mentions legal and ethical problems makes me even more sure that he knew the research he was doing infringed and crossed the line of exactly those things." I was thinking almost the same thing when I read that part! He said precisely that he and his colleagues decided that making their subjects aware of the study was not legally or ethically necessary. The fact that they all "agreed" upon this suggests that they had one, if not more, conversations about the ethics. This also implies that it wasn't clear to them right off the bat whether it was ethical to do such a thing. Nowhere else would you be able to collect someone's personal information for study without their knowledge - why should it be any different on the Internet? Collecting data without subject consent almost seems worse when it comes to online communities because, as the Bruckman reading proved, there are so many ways in which a "disguised" participant in a study can be traced via the web to a real identity. Bruckman's students seem to be extremely careful about what they reveal about their subjects and how data could be used to trace subject identity -- and look how easy it was for someone to dig into the Harvard study and discover that they'd used the Harvard class of '09. As for the OkCupid reading... I'm not as confident in my stance on this one. It seems to me that a social site, especially a dating site like OkCupid, could A/B test for certain things related to its product and, as a result, end up with some data about human behavior/psychology. The OkCupid "findings" were interesting, but does the company have a right to share those findings? Does it have a right to show people false match percentages? The problem I see here is that the company could find some way to argue that the swapping of match percentages was an A/B test related to the success of its product... it's a gray area already, and I think a company could find ways to legally get around these things. Harvard was very clearly doing research on human subjects without consent. OkCupid, on the other hand, could be viewed as doing A/B testing for business purposes and coming up with human behavior data as a byproduct. I'm not entirely comfortable with it, but I don't see an easy way of legally preventing this in OkCupid's case. -SamDiamond88 (talk) 01:06, 11 February 2015 (UTC)

While reading about Facebook's user influence experiment, in which the social media site manipulated certain users' pages to display more negative or positive language without their knowledge, I was reminded of the reading we did on A/B testing, "The A/B test: Inside the technology that's changing the rules of business," and the discussion in class that followed. Although I do believe that Facebook's 2014 study on emotions was unethical, I wondered how much more unethical this type of manipulation is than certain types of A/B testing, such as changing the monetary value that certain customers see for products they are trying to buy online. For instance, I feel that I personally would be more angered to learn that the advertisements I am seeing on Facebook's page are offering me plane tickets for $50 more than the advertisements that my friend receives. I also was reminded, like Shannclark, of the reading "Lying About the Past," and agree with her that Bruckman would never support this type of deception in her classes' research. However, I also wondered how Bruckman accounts for deception amongst the users of the online communities her students study. In other words, she encourages her students to receive signed letters of consent and to conduct face-to-face and/or phone interviews. However, my concern would be that some members of these online communities are not who they appear to be online, and thus meeting them face-to-face might put her students in some dangerous and potentially unethical situations. - Jretrum (talk) 22:09, 11 February 2015 (UTC)

After reading all the material, I felt a bit surprised that I hadn't thought much before about the process of researching online communities. I've always believed that using information from private profiles is unethical, as in the case of the Harvard study done by Mr. Kaufman and the other Harvard sociologists. However, when it comes to researching public communities, I didn't know such an approval process was necessary. I thought, because these users are aware that they are publicly posting information for all to see, that their public information would be available for use. I agree with what Christian Rudder says in his OKCupid article: "if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work." After all, we're A/B tested on all the time! Also, all of my personal online profiles are private, but if I were to have a public one, I would assume that people could use it in any type of analysis. After reading the article Bruckman wrote about the process her past Online Communities classes had to go through, I started to realize that there are many steps that need to be taken in order to conduct ethical online research. Bruckman was definitely on top of things, especially because she started analyzing the Internet when it was still very young. I think, like Shannon mentioned, that it makes sense for a researcher to openly establish who they are. I also agreed with protecting people's identities, even if it's only an online username, and with not directly quoting people, because quotes make it easy to find them online. However, I was a little surprised to hear the students had to go through the IRB for approval, and to read that even studying open blogs was subject to ethical questioning. Brianne Shelley (talk) 19:26, 12 February 2015 (UTC)

"The only thing with more bugs than our HTML was our understanding of human nature" (OKCupid). I found the article on OKCupid fascinating! I found myself experiencing cognitive dissonance, because what they were doing is not ethical and pretty disturbing, but the information they got fully substantiated other social science theories in a way other experiments couldn't! The correlation between "looks" and "personality" demonstrates the "halo effect" - the idea that people who are more physically attractive are seen as more rewarding socially as well. It's somewhat satisfying to see a theory I learned play out so perfectly, but the fact that OKCupid is so nonchalant about people's *lives* is unethical and concerning. On the other hand, changing the rating systems and adding or taking away pictures is similar to A/B testing, so it may not be as unethical as it seems. I don't think any of the users would have wanted this article to be published, though, because, as Bruckman suggested, it shows OKCupid users in an "unflattering light." Studying online communities is definitely a slippery slope, and it's especially difficult since everything is technically public. The moment a person becomes the subject of research, ethical questions arise, even though they have chosen to go to a public forum and give away their information. Nucomm23 (talk)

Similarly to Bri, I was also kind of surprised that I had not thought about how researchers could be using our information, specifically information that we think is private. Granted, we always hear that nothing is really private anymore, especially on the Internet. But when researchers find ways to hack into this information to use you for research, I completely agree that it goes against Internet ethics. In the Bruckman article, she discusses how the students in her classes had to get legal consent to interview people, which is completely ethical - the people being interviewed knew their rights as subjects. However, in the Harvard research case with Facebook, random students were chosen, and researchers were able to get more information because the researchers were friends with them on Facebook. Yes, a profile technically becomes public if someone is friends with you and able to see what is being posted, but that still shouldn't mean researchers should use that "power" to mine the profile for data. I think if you are going to be part of some type of research, you at least have the right to know! 65.96.127.210 (talk) 02:27, 13 February 2015 (UTC)

I certainly concur with Jessica and Shannon that Bruckman would never be a proponent of this type of deception. However, I can't say I'd agree with the sentiment that Facebook and OkCupid are inherently evil corporations toying with humans for research. In fact, I would say they were both in the right for what they did. Let's look at the Facebook case. Sure, at first glance testing humans' emotions without their consent does seem a little ethically gray. But we have to remember that we volunteer to participate in working with these sites. Facebook didn't change or alter any of the information getting to you, just the type of information. Every time we go to a website, someone else is controlling the content that reaches us, often through A/B testing. Or at least that's what I've come to expect as an avid Internet user. So, at their core, are the tests run by Facebook and OkCupid completely outside the realm of A/B testing? I can't say that I agree with all of Bruckman's rules on how to engage with online communities, particularly, as Shannon mentions, the one that says they should identify themselves as researchers. Sometimes, in order to gain full exposure to something, you have to conceal yourself as a confederate. TWOBJ (talk) 03:49, 13 February 2015 (UTC) Trevor
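Several posts here compare these experiments to routine A/B testing. Mechanically, an A/B test just splits users into buckets and compares an outcome metric between them. The sketch below is a hypothetical illustration with made-up click rates; it is not any site's actual code:

```python
import random
from collections import defaultdict

def assign_variant(user_id, variants=("A", "B")):
    """Deterministically bucket a user by id (hash() is stable for ints;
    real systems would use a stable hash such as hashlib for string ids)."""
    return variants[hash(user_id) % len(variants)]

# Simulate an experiment where variant B has a (made-up) higher click rate.
random.seed(0)
true_rates = {"A": 0.10, "B": 0.12}
clicks = defaultdict(lambda: [0, 0])  # variant -> [clicks, impressions]

for user_id in range(10_000):
    v = assign_variant(user_id)
    clicks[v][1] += 1                      # count the impression
    if random.random() < true_rates[v]:    # simulated user behavior
        clicks[v][0] += 1                  # count the click

for v in sorted(clicks):
    c, n = clicks[v]
    print(f"variant {v}: {c}/{n} = {c / n:.3f} click-through")
```

With enough users, the observed rates converge on the underlying ones, which is why even a small manipulation like OkCupid's match-percentage swap can yield measurable behavioral data as a byproduct.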

Emily, I too had A/B testing in the back of my mind when reading these articles! It is definitely concerning to think that people go on OkCupid to find some sort of authentic human connection within a trusted site, but in reality there isn't much of a science to it. Although there are some statistics supporting their successful matches, they didn't convince me that this was some groundbreaking method for finding love. OkCupid markets itself as a love-connection site when in reality it's no better than Tinder! Although it does bother me in some ways, I don't believe it is unethical, simply because of how open OkCupid is about this. They came out with an article blatantly stating that they use OkCupid users as their test subjects, so it's not as if they are trying to keep this a secret. Anyone could find this article and learn about it, and I am certain many people have seen it, but it doesn't deter users from the site. I think people using the Internet assume that there is some sort of behind-the-scenes statistics and testing going on, and we are all aware it isn't just OkCupid. I find that OkCupid adheres to Bruckman's guidelines for studying online communities: OkCupid is open about conducting research, they never revealed confidential information or names, and they observed behaviors but did not actively participate in any connections made. Drawing from Bruckman's rules, OkCupid seems ethical to me. Nduryea (talk) 05:07, 13 February 2015 (UTC)

I agree with many of the comments that have been posted above about Internet privacy. I am not shocked that researchers would use people's private information for research. I do not agree with it, but people have to realize that if an individual puts anything on social media, there is a chance of it being seen by the public. Facebook, for example, has a variety of privacy settings. I choose to only share my information and photos with my friends. Although I have the highest privacy setting, employers and people I am not friends with still find a way to access my information. After reading the article about Harvard's privacy meltdown, I was bothered by Mr. Kaufman using Harvard students as research assistants to download the data. As the article points out, "The assistants' potentially privileged access should have triggered an ethical concern over whether each student truly intended to have their profile data publicly visible and accessible for downloading." Not only is Kaufman using private information for research, he is crossing a line by using Harvard students to retrieve that information. -HortonV (talk) 14:51, 13 February 2015 (UTC)

In agreement with a lot of the other comments on the topic of researching private information, I was not surprised by the extent to which researchers will go to find results. Elyssa made a great point when she noted that "we are putting our information out there willingly," and if someone wants to take the time to gather all of that data, I don't see any major problem with it. However, as HortonV (sorry, don't know your name) pointed out, I see an issue with Mr. Kaufman using students as assistants in order to find a way around invasions of privacy. It was also a bit eye-opening to me when Kaufman noted that all the researchers agreed not to inform students of the project, coming to a consensus that there were no ethical or moral issues with their research. Alerting students, he says, risked "frightening people unnecessarily." In looking at the backlash against the study, it seems that people were frightened in some sense by the fact that they had no idea what was being done with their information. I was torn reading this article and the OkCupid one, because I understand all the benefits that can result from these studies, yet the line of ethical and moral responsibility is thin. One last point I found worthy of thought: when OkCupid attempted to arrange dates without photos, they saw more successful conversations. However, when the photos returned, those conversations melted away. "The goodness was gone, in fact worse than gone. It was like we’d turned on the bright lights at the bar at midnight." I'm looking forward to discussing the implications that images can have in forming opinions of personalities and relationships. Matt rodgers2 (talk) 16:32, 13 February 2015 (UTC)

Feb 17 Tue - Relational commitment & Needs-based and lock-in

In the past I used a fitness site that allowed members to track their daily activity and calories, as well as interact on community forums and add other members to encourage them in their fitness goals. It seemed as though many members relied on the bonds-based aspect of the site because the encouragement from their "friends" helped them. Yet I was drawn to the site more in an identity-based way: I liked what the site allowed me to track, how the forums worked, and how it allowed me to reach my goals. Kraut and Resnick discuss how bonds-based communities can have a downside - if the people you have formed bonds with leave, you will be more likely to leave as well. But as a user who depended on the site - instead of on other users - I used it solely for the purpose of tracking my information, instead of as a platform to make friends. It is now much clearer to me why some people were so drawn to "friending" other users in this community while I was the type to ignore friend requests. -Enarowski (talk) 19:34, 16 February 2015 (UTC)


You can do better

Shannclark, Sydneys92, Nduryea, Tschn012, Matt rodgers2, and Yulu Lei. In the responses below I like the interaction, but you also don't want to give the appearance of only reading the most trivial of the readings and then piggy-backing on your peers. Was this reading the most important? If not, how does it relate to the other readings? What is your *novel* contribution?

I feel ambivalent about Facebook's user "lock-in" policy. On one hand, it makes the site seem way too strict - people who are using another social networking site will probably still use it regardless of whether or not Facebook lets them download and transfer their information to it. On the other hand, perhaps the fact that Facebook doesn't allow users to download this information (which they've spent years building up and storing on FB, probably their main social networking site) to competing social networks deters people from using another site where they would have to totally rebuild their profile, pictures, friends, etc. Although Google Plus was much "nicer" in terms of letting users leave, in the end this could be its fatal flaw - I feel like not many people use G+ anymore. I would be interested to know if there are any numbers reflecting how many people don't participate in other social networks because of FB's lock-in policies. Probably not, but it would be cool to know! -Shannclark (talk) 21:40, 16 February 2015 (UTC)

With the “Download Your Information” option that Facebook has made, you are able to take your data from one website so you can put it in another. It allows you to have an archive of your old photos and videos, wall posts, messages, chat conversations, and some of your friends’ names and emails. But I do not understand why this is such a big deal. I mean, I guess it’s nice that you can see what you have posted over the years, which is kind of like Timehop. But why would you want to bring that data into another website like Google Plus? I do not understand why people would want to do that. It’s like taking information from when we were on MySpace and bringing it to Facebook. Personally, I know I do not want a lot of the stuff I posted on MySpace to come to Facebook - it’s a lot of middle school and early high school posts that are completely irrelevant now. I also don’t personally care about the email addresses of my friends on these platforms, especially since a majority of my friends no longer use the email addresses they had when they created Facebook. Am I missing something? Do people really want to bring all of their old data onto other platforms? Sydneys92 (talk) 02:04, 17 February 2015 (UTC)

Similar to Sydney, I too have difficulty seeing the importance of the option to download information from Facebook. From my experience, Facebook is a platform where friends share pictures, stories, and plans. I have never used Facebook to find an email address or other important information; it simply isn't the platform I would use for this. Now that LinkedIn has become the website where professional and more pertinent information is exchanged, Facebook has become strictly social. I don't see why anyone would find it important to have this information saved. As a user, I actually feel safer knowing this information is not being sent to other websites. I think this is a strong privacy policy, since I don't believe any information on Facebook should be shared with anyone without the user's permission. While maybe it would be easier to begin a profile on another website if your information were already filled in, I think this policy gives the user much more control over where their information is being shared. I also agree with Shannon that Google Plus is scarcely used at all. I rarely come across someone who is a frequent user of Google Plus; however, it is now assumed that the majority of people you know will have a Facebook page. I do not think many people have much information stored on G+ that they want transferred anywhere else, so I think this download feature is sort of irrelevant. Nduryea (talk) 04:29, 17 February 2015 (UTC)

Much like many of my classmates, I too am not really sure why Facebook is making such a big deal over the ability to download your information. From what the article says, you are able to download your information as a zipfile backup. Though this is useful for the sake of keeping track of your own personal information, I fail to see how it has any practical use, even with the microformats the article talks about. Even with the information you are downloading in HTML format, there is very little use for it, considering that it is already online in a usable format on the Facebook webpage. Beyond having a personal backup of this information, I am curious as to why this is even a feature that people want. It is good to see that Facebook is open to letting people have a bit of control over their own information, but why are they really making this feature available? Are microformats really that important, to where we need features like this available to us as users? Tschn012 (talk) 14:22, 17 February 2015 (UTC)

After reading the three short articles, I am still a little confused about whether Facebook allows users to export their information or not. It seems like they do not allow users to download their personal information, but at the end of the article “Facebook policy now clearly bans exporting users data to competing social networks,” it says, “Facebook responded by saying the email addresses of friends are not a user’s own data, and therefore they don’t have the right to export them. Meanwhile, it allowed these same email addresses to be exported through Yahoo! Mail and some smaller third-party email services.” That last sentence is really confusing to me. What does it mean? What is their real policy? Like Sydney, I really don’t understand why Google+ created that feature; downloading past information and transmitting it to other platforms is useless (at least to me). It is only useful when I want to review or recall what I have posted in the past. Otherwise, it is not attractive to me. The Data Liberation battle between Facebook and Google+ is really childish. By the way, I really like the Google announcement video; it is funny and creative. Yulu Lei (talk) 15:30, 17 February 2015 (UTC)

Like the others who posted above me, I am confused about why Facebook does or does not allow its users to export their information. And I had to ask myself: if they did, why would I really care? What do we need this information for anyway? In a world that revolves around documenting everything online, we know that when we post something, it is out there for the public. So why do they need to put bans on what we can and cannot export? I also find it ironic that, in regards to the "Download Your Information" feature, they state "...there is not much that you, as a user, can do with the data, beyond looking through HTML webpages included in the download". This just goes to show that the feature is essentially useless. I guess maybe the point of all this is to create something like a virtual business card, so that you can pass on your own information through third-party sites such as Facebook. Nwells1229 (talk) 15:54, 17 February 2015 (UTC)

User:Nduryea and User:Sydneys92 both addressed an interesting point regarding how many people actually want their information to be carried over between social networking sites, but I have a slightly different opinion of the ability to connect account information. I agree that a lot of social networks, such as MySpace, are a product of the times and may not stay popular for a long period of time. When people decide to move on to another website, most of the time they are required to make a new account and enter in all of their information. People like to start fresh when moving to a new social network. That being said, I think it would be useful for all of my accounts from all time to be linked together under one online identity. People may not agree with this thought, but I feel like I've put out so much extraneous information on the Internet throughout the past 10 years, I would like to have the ability to access all of those accounts and have the option to delete or change my information. For example, my Facebook account is under a "yahoo.com" email address, but I no longer have access to that email address due to inactivity (I moved from Yahoo to Gmail about 5 years ago). I've tried to contact Yahoo and get my accounts connected, but it's virtually impossible, and now I have an email address and accounts under my name that I am not able to access. Overall, I understand people's resistance to having information saved and shared between sites, but if it is an inevitable outcome, then I would prefer to have the ability to delete all account information that pertains to me across all sites. I'm interested to hear more about how others feel about this, as connecting accounts has become such a common practice in online settings. Matt rodgers2 (talk) 16:30, 17 February 2015 (UTC)

I certainly agree with User:Tschn012 and those above who questioned the practical use of Facebook's zipfile in HTML format. I too wondered, first, how many users have taken advantage of this feature, and secondly, whether they found the zipfile helpful after downloading it. I was also curious as to how many avid Facebookers even know about this feature. I have been on the site for many years and had no idea that such a downloadable file existed. Even further, I was fascinated by the fact that Google+ is marketing itself as being more accessible than Facebook in letting its users essentially leave. Yes, it may appear to some that Google is being more accommodating than Facebook by letting its users take everything from email addresses to "stream" posts, but to others this information is a violation of their privacy, and virtually unnecessary. The only reasons I could see someone wanting such a downloadable file would be because Google+ and/or Facebook were being completely shut down, or because the user was leaving the site and still wanted access to everything that he or she had on it. In other words, I think User:Shannclark put it nicely when she said that the way in which Google+ is allowing its users to take all of their information and join another social network could potentially be its "fatal flaw". K&R discussed the three types of commitment that a user might feel toward an online community, and unfortunately, I do not know anyone who feels strongly committed to Google+. The "exclusive" network drew people in several years ago when users learned that they must receive an invite to join, but just as Pinterest got rid of its invitation feature, many users of other social media sites lost interest and motivation to commit to Google+. And when they decided to leave and join another site with less exclusivity and more chances to create strong interpersonal bonds, Google+ had made it all too easy. 
-- Jretrum (talk) 15:56, 17 February 2015 (UTC)

As social networks, I'd consider Facebook more successful than Google+ in terms of: the number of people I know who use it, widespread recognition of the network, and my own personal use of the network. The people at Google seem to be pretty damn smart, so why haven't they been able to design a social network that can compete with Facebook? Does it have more to do with user commitment or barriers to entry? The first thing I think about when I consider Google+ is that not many of my friends use it, so why would I start using it? But why did I start using Facebook? Facebook started with a heavily bonds-based commitment approach -- you sign up because your friends have signed up. But before all of my friends were on Facebook, people still signed up, because Facebook made it seem both attractive to connect with friends via their platform and easy to invite friends who weren't users (via e-invite). At this point, I would guess that most Facebook users feel a bonds-based commitment, in that if most of their friends left the network, they'd probably leave too. But, I also think that the original bonds-based commitment has likely morphed for some people into other types of commitment. Do some users now feel identity-based commitment? Maybe not to Facebook as a whole, but the network has been very successful in encouraging creation of sub-groups, which as the Kraut reading explained, make it easier for people to feel identity-based commitment. Lots of organizations use Facebook to promote causes. Some users may not feel committed to all of Facebook, but may feel very committed to the cause/purpose of one sub-group. Maybe other users feel some need-based commitment, in that if they leave Facebook they are forfeiting a large piece of their social communication/connection with their friends (you know that one friend who's not on Facebook anymore, and it's super annoying because you have to find another way to connect with her online?). 
In the articles we read, I got the impression that Google thinks Facebook is causing a barrier to entry for Google+ (that people are deterred from using Google+ because they can't easily transfer their personal data to create a profile). But Sydneys92, Nduryea and others have expressed that they don't really see much significance in being able to transfer their data from Facebook. So maybe it has more to do with the marketing of Google+, the perceived attractiveness of using Google+ to connect with friends. Maybe Google+ doesn't advertise well enough what makes them special - what makes them "better" than Facebook - to not only play into our bonds-based commitment, but foster it so that it morphs into other types of commitment. -SamDiamond88 (talk) 16:22, 17 February 2015 (UTC)

Normative commitment and needs-based commitment are very similar; I was a bit confused in the beginning. Normative commitment is when an individual is loyal to the group and feels an obligation to contribute to the community, whereas in needs-based commitment an individual may be committed to the community for the benefits they experience from it. For example, I may be committed to a health community not for the information I receive from it but for the social support, and this benefit is of such high importance to me that the cost of leaving the community is high. An example of normative commitment is Wikipedia: I have an obligation to contribute to the needs of Wikipedia and help out the community, and in return I get the satisfaction of being a Wikipedian and being part of the community. Normative commitment is more like reciprocity, whereas needs-based commitment is more like the member benefiting from the community in their own way. My question is: how many communities are there that a person can be committed to through needs-based commitment? I feel like it's so limited. How would a person benefit differently in a community? It's just so confusing. Latifaak (talk) 17:13, 17 February 2015 (UTC)

I was a little confused with what Kraut and Resnick had to say about normative commitment and needs-based commitment. The way I see it, normative commitment is when people feel obligated to be loyal and give back to the community they are part of. It thrives on the idea of reciprocity, where people feel obliged to help others who have helped them. For example, I depend on Yelp reviews quite a bit, and I feel obligated to give back, especially if the information I get is accurate. Contrastingly, needs-based commitment is when people feel a sense of attachment to the community because of the social benefits that come with being a part of it. For example, recovering alcoholics join Alcoholics Anonymous not to learn information but for social support and companionship. Therefore, people view other members as assets that produce value in the group, making it hard to leave. Am I right to assume so? Like User:Latifaak, I'm still having a hard time wrapping my head around needs-based commitment. Nsiu (talk) 17:16, 17 February 2015 (UTC)

Kraut proposes the paradox that if someone's true identity is hidden, their revelation of personal information will actually increase. I found this paradox fascinating, especially in the example provided about SparkPeople, an online community where people who used pseudonyms rather than their real names were more inclined to share intimate details such as their weight and struggles. I think this highlights an important point about online communities in general - the paradox of anonymity. Sometimes, in places such as SparkPeople, anonymity is a positive outlet for information, and in others, such as reddit or Wikipedia, it helps with community atmosphere. In other places, though, such as FormSpring (bringing it back to the MySpace days) or YikYak, anonymity is a negative part of the community because it leads to bullying and inappropriate behavior. Nucomm23 (talk)

I really pondered SamDiamond88's question about what makes Facebook a more popular means of communicating with friends than Google+, particularly because in my own experience I love Google products and the connectivity they allow. I do think the initial drive to join a network like Myspace or Facebook comes out of bonds-based commitment. I have a very low level of commitment to social networking sites to begin with, and if my friends didn't use them to engage with me, I probably wouldn't use them at all. But we did talk about all the intrinsic motivators that Facebook keys in on to make us want to come back again and again. We have that need to connect with the world around us, preferably with very low barriers to entry. I don't know much about Google+, but I think one major flaw is that it seems to be about the pure connection between people in your social circles, as opposed to Facebook, which can be much more about branding yourself or posting your opinions and ideals. Otherwise, Facebook users might get the same sense of weariness that I imagine Google+ users experience from having large networks. I would suspect that, at the opposite end of the spectrum, the lack of direct contact in social groups is the reason platforms like Myspace have failed. TWOBJ (talk) 18:15, 17 February 2015 (UTC)

Feb 20 Fri - Internet rules & CoC

Kraut and Resnick (2011) state that "having a rough consensus about normative behaviors can help the community achieve its mission" (p. 126). For example, Wikipedia depends on members writing from a neutral point of view, which supports the community's goal of producing a trustworthy encyclopedia. However, not everyone complies with a community's consensus standards of normative behavior, which opens the door to trolls and manipulators. Manipulators are extremely dangerous, especially on consumer-opinion platforms like Yelp. There have been many allegations that Yelp manipulated its website's reviews based on businesses' participation in its advertising programs. Regulating behavior may be a tough thing to do in any given online community. Kraut and Resnick (2011) believe that in order to limit manipulators, a community should have a system of algorithms that look for suspicious patterns (p. 135). However, that might be easier said than done. I feel it is worth looking into how manipulators can harm a community as a whole and perhaps coming up with other ways to be smarter about reading consumer reviews. Nsiu (talk) 20:36, 19 February 2015 (UTC)

In chapter 4, Kraut and Resnick (2011) offer the idea that "Communities differ on which behaviors are normative and which are not" (p. 125). They define acceptable behaviors as "normative," but who exactly decides what is appropriate and what isn't? Why do some communities have different standards than others? This chapter got me thinking about my Wikipedia article and what constitutes appropriate content. I remember in high school there was a small group of boys who would go on Wikipedia and add false information to random articles. So when Kraut and Resnick discussed the systems that detect suspicious information, it provided some sense of relief. I was interested to learn about the software projects discussed in the article HOWTO design a code of conduct for your community that seek to resolve harassment online. Even though these rules are not necessarily printed somewhere for users to easily view, it is important that these systems exist to eliminate online harassment and chaos. Nwells1229 (talk) 23:13, 19 February 2015 (UTC)

DNFTT (Do Not Feed The Troll) is new to me; I have never heard that acronym before. Funny how I can be on online communities that are full of trolls but have never actually seen or used it before. That being said, I was surprised to see all of the different ways that communities can moderate users' activity. I never really thought about how comments are moderated, so I found chapter 4 eye-opening. For instance, there is the idea of having to "pay" with an online currency... Are there any popular online communities that use this method? I can't think of any. -Enarowski (talk) 01:35, 20 February 2015 (UTC)

Codes of conduct are hard to define on any website, but I think the Ada Initiative does a good job of clearly outlining the details of creating your own. The Ada Initiative's best advice, to be specific, coincides with Kraut and Resnick's design claim 18: "Explicit rules and guidelines increase the ability for community members to know the norms, especially when it is less clear what others think is acceptable." The idea of clear guidelines and sticking to appropriate behaviors reminds me of an issue Instagram has run into in deleting a few photos that did not fit within normative forms of feminine beauty, including this story on Size Discrimination and this story on Pubic hair. Although there is no way to moderate or control what is acceptable across the internet, it is crazy to think of the graphic, disgusting things 4chan allows while Instagram can't allow non-typical beauty to be posted. I believe that codes of conduct are massively important, but in general I feel that the internet is a free place to explore, and if something offends you, you can stay away from that website. I think the Ada Initiative said it all: "The major weapon of harassers is arguing whether something is actually harassing. It is difficult to enforce a CoC if you have to have a month long nasty argument about whether it was violated. It burns out people like you." Harassment is not the same to all users. Nucomm23 (talk)

  • Nucomm23, nice connections between the readings! -Reagle (talk) 17:38, 20 February 2015 (UTC)

I've never been what you could consider an early adopter of social media platforms, not that I was all that late to the party either. My point is that I've never been one for setting the tone of how an online community or network functions. It's only just occurred to me how pivotal it is to lay out a well-vetted code of conduct and make sure your early adopters are sticking to it. For instance, had Facebook not taken the time to create its stringent code of conduct, it is possible, perhaps even likely if you believe the rules of the Internet as set forth by Anonymous, that Facebook would be nothing more than a repository for porn. Kraut and Resnick lay out the three ways people learn the norms of an online community: "1. Observing other people and the consequences of their behavior. 2. Seeing instructive generalizations or codes of conduct. 3. Behaving and directly receiving feedback" (141). It all comes back to the idea of social proof: we look to others for cues about what we should be doing in a similar situation. I can think of no place where the idea of monkey see, monkey do is more prevalent than the Internet. TWOBJ (talk) 07:37, 20 February 2015 (UTC)

  • TWOBJ nice connection and allusion to monkeys! -Reagle (talk) 17:38, 20 February 2015 (UTC)

I found the troll part to be interesting, along with DNFTT. I had never heard of it before we talked about it in class, but the book mentions that the community should try to ignore the troll (p. 135). How are the members of the community going to know who the troll is, or whether the person is a troll and not a member who just broke a norm? Communities have found smart ways to deal with members with bad behavior. Kraut and Resnick mention using gags and bans, applied in such a way that the member doesn't really know they are being banned. The example they give is limiting the info that a user sees and not letting others in the chat room see the user's posts, so the user thinks the other members are just ignoring him or her. The one about showing an error page is also a really smart technique. This got me thinking about Instagram and how easily they ban people for a day or two if they get reported. It also got me thinking about the online communities I am a part of, and whether I might ever have experienced this without knowing, or have seen it happen to a friend. Latifaak (talk) 15:27, 20 February 2015 (UTC)

Like Natassia discussed in her post, I find the discussion of manipulators on Yelp very intriguing. One design claim from Kraut and Resnick (2011) that stood out to me was design claim 6, which states "Filters or influence limits can reduce the damage of shill raters in recommender systems, but they do so at the cost of ignoring some useful information from honest raters" (p. 135). Yelp does a lot of this; I know they have a specific algorithm that ends up filtering out a lot of customer reviews, which can enrage companies, especially if positive reviews are filtered instead of negative ones. I think filters can both help and hinder a company, especially in Yelp's case. One California-based pizza restaurant got tired of Yelp's antics and offered 25% off any pizza to anyone who gave them a negative review online (Richmond Restaurant and Bad Yelp Reviews). This is against the normative behaviors for both Yelp users and restaurants, and Yelp quickly tried to remove all of the bad posts. Sometimes trying to control the trolls and manipulators can certainly backfire! -Shannclark (talk) 16:12, 20 February 2015 (UTC)

  • Shannclark, I argue in my book that Yelp ends up being a manipulator itself by way of its filters and ordering of consumer reviews. -Reagle (talk) 17:38, 20 February 2015 (UTC)

After reading the online articles and the design claims about regulating behavior in online communities, I came to realize I must not participate in many regulated online communities myself. I've never really seen any codes of conduct on websites that I use, although there's a chance the codes of conduct just aren't made obvious. I know that design claim 19 states "Prominently displayed guidelines may convey a descriptive norm that the guidelines are not always followed" (p. 149). But to what extent should they be hidden, and where should they be displayed? Going through Kraut and Resnick's design claims, it made sense to me that regulating behavior with codes of conduct could be beneficial to a community; I'm just not sure where the best place to display them on a website would be. Kraut and Resnick give the example of Reddit hosting its rules on one page but also weaving them into the community with fake articles and posts. Because I am not a Reddit user, I was a little confused as to how that worked. Brianne Shelley (talk) 16:38, 20 February 2015 (UTC)

The Kraut and Resnick chapter was very stimulating; I enjoyed reading it a lot. It was remarkable to me that online communities that are not regulated can often come to an end. After reading this chapter, I better understand the role that "trolls" play in online communities, and I liked design claims #2 and #5 a lot. In claim #2 the authors argue that redirecting inappropriate comments to other places creates less resistance than removing them. Most of the time, when trolls see that their post has been deleted from a site, they get even angrier and start posting even more inappropriate content, so I am in favor of this content being moved to another subpage, where community users won't read it, instead of being deleted. Claim #5, on the other hand, says that "reversion tools can limit the damage that disrupters can inflict in production communities." I believe this is another great way to regulate behavior, by recognizing trolls before they manipulate recommendations on sites such as TripAdvisor or MovieLens. The article HOWTO design a code of conduct for your community was interesting because I have always wondered who was in charge of penalizing users who behave inappropriately in online communities. Behind the written code of conduct there is a conference that functions as a judge in settling penalties for users. I also liked how they target harassers' behavior by explaining that the "major weapon of harassers is arguing if something is actually harassing." It is incredible to me how long arguments between strangers on the internet over whether something was inappropriate can get; I don't understand if they don't have anything else to do, or if people can really get that irritated over inappropriate behavior.
Nevertheless, I am in favor of having a code of conduct, because it feels as if something is protecting you, and violations can be punished. Iferrrerb (talk) 16:52, 20 February 2015 (UTC)

Feb 24 Tue - Compliance and norms

I was surprised by the outright disapproval and disdain people showed in Garfinkel's students' experiments when the students did not comply with the background expectancies of the conversation. I never really think about the content that is implied in a conversation, just the content of the actual conversation. When Garfinkel's students violated background expectancy norms by prompting acquaintances to explain what they meant by saying a certain thing, the acquaintances replied with lines such as "Are you kidding me? You know what I mean" (44) and "Look! I was just trying to be polite. Frankly, I don't give a damn how you are" (44). This relates to design claim 32, where Kraut and Resnick (2011) state "Peer reporting or automatic detection of violations increases the deterrent effect of sanctions" (164). Whether in an online community or in real life, people's automatic detection of violations of norms or background expectancies can deter people from breaking those norms. If people immediately reacted so negatively to my breaking of a norm or rule, it would certainly deter me from doing it again. -Shannclark (talk) 19:32, 23 February 2015 (UTC)

In her own act of social breaching, Chelsea Handler pushed the envelope this past October when she posted a photo to Instagram that violated not only Instagram's norms and Community Guidelines but also those of most people. As Ostrom explains, "collective choice leads to rules [that build] legitimacy and thus compliance with the rules" (Kraut, 151). This is reflected in our own society as we develop norms through the process of socialization. In Handler's case, sharing a photo that revealed her breasts naturally defied social norms and, of course, those of Instagram. While Handler did so mockingly (recreating a photo of a topless Vladimir Putin) and in an effort for equal rights between men and women, Instagram and select users were highly offended by her post because it did not follow normalized social expectations. Consequently, Instagram removed the image, alerting Handler of her offense. Disagreeing with the standard that a topless man, but not a woman, is socially accepted, she pushed back against Instagram and the discriminatory norm by posting the photo once more. Because Instagram's reactions did not align with her belief that the image was not offensive, and further violated her own rights as a human being, her "resentment and unwillingness to conform to the rules" (Kraut, 161) persisted. However, the image was removed yet again. Shortly thereafter, she shared the photo for a third time. Of course, Instagram continued to dismiss her arguments and justifications, deleting the picture for the third and last time. Kristinam 0330 (talk) 22:12, 23 February 2015 (UTC)

Kristinam 0330 - I like that you brought up the Chelsea Handler example, because there are a lot of elements in that instance of social breaching that relate to our reading. In this case, Handler specifically wanted to elicit a reaction, which actually makes her an exception to many of Kraut and Resnick's "rules" (or, design claims). The book suggests that identifying users by their real names and photos will decrease norm violations because people are sensitive to the impression they give off to others. Instagram is photo-based, so almost every user is identifiable (via their photos). I would assume that this decreases violations from most users, but Handler is a public figure that likes to stir the pot and make provocative statements. She may be more encouraged to breach norms because of her identifiability, since she has marketed herself as a "firecracker" of sorts. This likely also contributed to her retaliation. According to Kraut and Resnick, the fact that the sanction came from Instagram (as a legitimate source of authority) instead of a fellow user should have somewhat discouraged Handler from retaliation. Then again, it wasn't a graduated sanction -- Instagram removed the photo immediately, which is a pretty severe sanction, and might have been interpreted by Handler as a punishment that didn't "match" the crime. Kraut and Resnick also explained that a severe sanction for a perceived "smaller" crime can lead to debate and require justification from the party enacting the sanction (which seems to ring true in Handler's case). Handler re-posted the photo multiple times to express that she felt it was unfair of Instagram to take it down in the first place. Some might say Instagram could have avoided the retaliation by using a "lighter" sanction, maybe politely asking Handler to delete the photo herself. But I doubt that would have worked; as I said before, Handler intended to spark a reaction in people. 
She intended to make a provocative statement about our society in general. Kraut and Resnick's recommendations for encouraging people to follow norms wouldn't have worked on Handler because she specifically wanted to break those norms and cause a certain chaos -- in which case, does that make her a "troll" on Instagram? - SamDiamond88 (talk) 01:05, 24 February 2015 (UTC)

When I was reading Kraut and Resnick, design claim 23 really caught my attention. It states: "Face-saving ways to correct norm violations increases compliance." I feel like this design claim really makes sense; even if it's bad, I can see myself being more likely to comply with regulations if I didn't have to admit I'd knowingly done something wrong. At some point in my life I've knowingly broken the rules, and when I got caught but wasn't accused of knowingly breaking them, I was still embarrassed and never performed the act again. I feel like this is a powerful way to persuade community members to abide by the rules because they don't have to worry about defending their egos. I've also used this act of persuasion throughout my babysitting and ski instructing days. If a child broke the rules and I flat out told her she was wrong, she was more likely to argue, claim what she did was right, and often repeat the behavior to prove her point. However, if I told a child that I knew she didn't mean to break the rules but pointed out she had made a silly mistake, she would usually apologize and often not repeat the behavior. By giving people a way out, they will be more likely to comply with the rules because they won't feel the need to defend themselves. Brianne Shelley (talk) 01:21, 24 February 2015 (UTC)

I was surprised that some sites force new users to give out their credit card number or license number in order to sign up. While I understand that this is to make sure that people do not make multiple accounts, to avoid hackers, etc., I think that if I was trying to make an account for a site and that was required it would deter me. Maybe some people are willing to give out that information willingly, but I would view it as a red flag (especially now that I know all that I do about trolls and hackers) and automatically assume that the site was having problems with spam. This would further make me disinterested in the site and I would probably not end up making an account after all. I suppose if someone is dedicated to the idea of becoming a part of that community it would not be an issue... But I still argue that it might make many people less interested in making an account in the first place. I have definitely gotten into situations (not with online communities, but more for "free give-aways") where the company asks for my credit card number and at that point I always close the browser and do not go forward with the company. -Enarowski (talk) 03:00, 24 February 2015 (UTC)

When reading Kraut and Resnick, I couldn't help but think of trolls as I read a number of the design claims. Claim 21 states "verified identities and pictures reduce the incidence of norm violations." There are, however, many trolls who pride themselves on their actions and are recognized in their online community. They are not necessarily hidden or concealed and instead find joy in what they do. Another claim states, "Reputation systems, which summarize the history of someone's online behavior, encourage good behavior and deter norm violation." Again, this claim does not apply to trolls, who seem to thrive off of feedback from community members, whether it is positive or negative. I keep thinking back to a few classes ago when we were asked how we would stop a troll, and after reading this chapter, it seems as though it might be impossible. Trolling breaks many traditional online norms that most people adhere to. Whether it takes a warning or not, most people participating in an online community try not to act out of place. I can imagine trolls receive many violation warnings and face-saving notifications, yet they continue their harmful actions. So, I guess for me, the question still remains: how would you stop or prevent a troll? Nduryea (talk) 04:21, 24 February 2015 (UTC)

While reading through the list of social breaching experiments that we were asked to choose from for this class, I was reminded of Kraut and Resnick's Design Claims about compliance with norms in online communities. Some of the experiments, such as the "Facebook Picture Creeper", made me uncomfortable just thinking about having to "creep" on an acquaintance's old photos and leave comments, which is why I believe Kraut and Resnick's Design Claims 25 and 32 are especially pertinent to online communities where a person's true identity is known. Their Design Claim 25 states, "Verified identities and pictures reduce the incidence of norm violations" (2011, p. 156). I feel that if the "picture creeper" experiment were one in which I was brand new to the social network and did not personally know the person I was "creeping" on, I would feel much more comfortable violating the "norm" than on a site like Facebook, where an acquaintance can clearly see the identity of the person leaving the creepy comments. At the same time, I would be much more likely to perform such an experiment on a close friend of mine who would likely think that I was teasing her or be less likely to judge me for such behavior. Design Claim 32 states "Peer reporting or automatic detection of violations increases the deterrent effect of sanctions" (2011, p. 164). As the claim suggests, Facebook is not only a community in which one's true identity is shown but also a site on which all of a user's "friends" can see what happens on that user's page. In other words, my "creepy" comments have the potential to be seen by many other people, perhaps even many people I do not know. The experiment list, partnered with Kraut and Resnick's Design Claims, made me realize how important "saving face" is on social networks. We are potentially using these sites to project a desired identity or image of ourselves, and we do not want others thinking we violate social norms.
I also began to wonder what norms really are for sites like Facebook that attract such a broad audience today. For example, my mom puts up a "status update" and tags her location whenever she goes on vacation, while I think this is too invasive and unnecessary for someone my age. Is my mom less concerned with "saving face" online, or are norms specific to individual groups and demographics in online communities? - Jretrum (talk) 04:28, 24 February 2015 (UTC)

I saw a few people above touch upon K & R's Design Claim 25 and breaching involving verified identities and pictures. What came to my mind when reading Jessica's comment regarding these claims and the 'picture creeper' were false Facebook accounts. Over the years, I've received numerous friend requests from people whom I've never met nor seen and with whom I have zero mutual friends. Sometimes these individuals have only three friends in total when 'adding me', to accompany a few new wall posts and pictures. These people may comment on a picture of mine, yet their pages lack the long-term identifiers that typical Facebook users have, and I immediately disregard that person if I hadn't already. Kraut and Resnick's Design Claim 28 states 'increasing the benefits of participating with a long-term identifier increases the community's ability to sanction misbehavior.' In looking at this in combination with #25, I started thinking about what types of identifiers people need in order to gain credibility. On sites like Facebook, I think that one's friend network is the primary identifier for verification, along with interactions with others. From its conception, the site only allowed users with Harvard email addresses to join. Now Facebook is notorious for cheap pseudonyms and false accounts, though I like to think most users are able to distinguish between real and fake. I'd like to talk more in class about the reasons why online communities might encourage less verification (and thus fake pseudonyms) versus verifying each individual and enforcing true identities. Matt rodgers2 (talk) 15:09, 24 February 2015 (UTC)

After reading the Kraut and Resnick chapter, I was very captivated by claim #29: "Imposing costs for or preventing pseudonym switching increases the community's ability to sanction misbehavior." I believe this claim to be completely true, because of my own personal experience. Once I was working at a pet shop, and I knew a friend of mine who needed the work and also loved animals, so I thought I would talk to the manager of the store to see if he needed any more personnel. To make a long story short, my friend ended up working there for two years and did an excellent job. As Kraut and Resnick wrote, "A related strategy is to tie the reputation of existing members to new members whom they invite." This approach works because recruiting new employees via referrals from existing employees is superior to more formal recruiting methods. Why? Because both employees have an incentive: the first wants to look good in the company while bringing it an asset, and the second wants to succeed and make his sponsor look good by learning the appropriate behaviors, norms, and regulations that the community may impose. Iferrrerb (talk) 15:11, 24 February 2015 (UTC)

Design Claim 23 states that "Face-saving ways to correct norm violations increases compliance" (p. 153). This claim is interesting to me because I still think of this feature as something new to online communities. For example, people can post mean comments on a person's picture on Instagram, but the person has the ability to delete those comments to save themselves the embarrassment. The claim suggests that people will be less likely to violate norms when there are ways for others to save themselves from embarrassment, because the violation won't "pay" them anything; they will not see the point in posting a rude comment because they will not receive enough benefits to outweigh the cost. For example, if Instagram didn't have this feature of being able to delete any comment posted on one of your pictures, then people would be more likely to post rude comments. A person can still be anonymous on Instagram because they can use a pseudonym or a phrase as a name and not post pictures of themselves. The only way of identifying them would be through the email linked to their account, which nobody has access to other than the administrators. This goes with Design Claim 25, that "Verified identities and pictures reduce the incidence of norm violations." I can see how this is true, but I can also see how people can get around it. Latifaak (talk) 12:05, 24 February 2015 (UTC)

I believe that people are more inclined to disrupt societal norms when they are allowed to be anonymous. Kraut and Resnick (2011) state that "people often prefer not to reveal their true identities in order to preserve some separation of context between different aspects of their lives" (p. 158). In general, we are all very concerned about what others might think of us. Therefore, if users act under an alias or a pseudonym, they are more likely to do things they normally wouldn't do if their true identities were revealed. For example, several students refused to carry out Garfinkel's breaching experiment because they were simply afraid to do it. It might be because they did not want people (in Garfinkel's case, their parents) to judge them. In the case of Michael Brutsch (a.k.a. violentacrez), he acted under a pseudonym and caused a lot of disruption within the Reddit community. The fact that he assumed no one could tie his online self to his real self allowed him to disrupt the online community. Nsiu (talk) 17:12, 24 February 2015 (UTC)

When I first read Kraut and Resnick's Design Claim 25, the first thing that popped into my mind was a comparison of Twitter to Yik Yak. They're very alike in that they both have limited character space for a public post, people can comment on the posts, and people can "upvote" on Yik Yak or "favorite" on Twitter. The thing that makes them such distinct entities is that Twitter (more often than not) has the user's real name in their Twitter handle, which makes them identifiable, whereas Yik Yak is completely anonymous. I use both social platforms, but I prefer Yik Yak because I never feel afraid that one of my friends will judge me for what I publish. Not that I ever post anything mean, but sometimes I post personal silly thoughts or daily occurrences, and it's nice to talk about them without worrying whether my friends will "favorite" something I want to talk about. I found Matt's post on Design Claim 25 very interesting, because I didn't relate it to friend requests from false social accounts. He had a different, but still valid, take on it than I did, and I have also witnessed on Instagram both new and false accounts from people I've never met from around the world who would never say hi to me in person if they ran into me on the street. Although in a different way, these Instagram users are doing the same thing that I do with Yik Yak; we're both using anonymity to act outside the norm of how we would behave if our identities were known. --Kaylynn Nelson (talk) 17:51, 24 February 2015 (UTC)

I was very fascinated by the work done by Garfinkel. In "Studies in Ethnomethodology" he uses the term ethnomethodology to understand the methods individuals use to make sense of the culture they are a part of. His background in sociology served as a lens for developing this term. However, he felt that this was a one-sided method, since sociological research typically uses outside sources, as opposed to the individual, to describe situations. In plain terms, he studies the use of common sense. This article, as well as the assignment outlining the social breaching experiment, reminded me of an article about oversharing on social media I read a while ago in a Women's Health magazine. The article describes how social media has become a source of therapy and how we are sharing too much online. But who is to say what is "too much"? I know I get frustrated when my Facebook and Instagram friends share too many personal details online; however, I am probably guilty of this as well. But what happens when people stop sharing? We now live in an environment in which people feel safer sharing things online versus face-to-face. We rely on these communities to receive pertinent information, even if it does seem like too much at times. A specific area where many people tend to feel their friends share too much is health-related conversations. I recently dealt with this with my father's passing. Throughout his illness, I never posted anything on social media; however, my friends were shocked when I posted an obituary, saying they had no idea he was ill. Maybe I was too concerned with "saving face," but I didn't feel that my dad's personal health struggles needed to be broadcast across social media. Nwells1229 (talk) 18:03, 24 February 2015 (UTC)

After reading Kraut and Resnick's design claims, Design Claim 22 stuck out to me. The claim states, "Community influence on rule making increases compliance with the rules." The first thing that I thought of was social media and how its users create the guidelines for how people should behave on the site. For example, on a popular site such as Facebook, many users comment on others' photos, statuses, or in group messages. These comments can be supportive or they can be negative. If a user comments negatively or inappropriately, other users of Facebook have the right to step up and remind the user of the community norms and what is seen as appropriate behavior on Facebook. People who have been told by their community that their post was wrong tend to take down their posts and the comments that followed, if the option is available. If the option is not available, then the Facebook user will defend their point to the rest of the community. -HortonV (talk) 18:09, 24 February 2015 (UTC)

Garfinkel's theory focused on real-life experiments: in his study, he analyzed how people interact with each other based on unspoken background knowledge. This theory is appealing to me, because I had never thought about the importance of unspoken background context in my daily conversations with people. His study directly showed us how important social norms, or unspoken background knowledge, are in our daily lives. People live in a society with a lot of invisible social norms, and people usually comply with them. An online community should have online norms as well to regulate participants' behavior in order to create a healthy community. In the Kraut and Resnick reading, they provided several techniques to maintain online norms, regulate behavior, and deal with norm-breaking. For instance, Claim 25 states, "Verified identities and pictures reduce the incidence of norm violations," and Claim 26 states, "Reputation systems, which summarize the history of someone's online behavior, encourage good behavior and deter norm violations." Those two claims are commonly applied in many online communities. However, I have a question about the breaching experiment: based on what we were discussing about research ethics, does the breaching experiment itself break the rules of research ethics? Is it ethical? After all, the experimenters were not telling people they were conducting an experiment. Yulu Lei (talk) 18:12, 24 February 2015 (UTC)

A few people mentioned in their responses how anonymity changes our willingness to break societal norms. I would certainly agree with the idea that we are more likely to run amok when our identities are not exposed; in fact, it was one factor we discussed in terms of trolling behavior. People are far more likely to attempt to bring down a conversation if there won't be any serious repercussions for themselves. This idea puts a great deal of importance on Design Claim 27: "Prices, bonds, or bets that make undesirable actions more costly than desirable actions reduce misbehavior" (pg. 158). The price of violating the norms of a site or conversation has to go beyond kicking someone out or giving them a low reputation score. Kraut and Resnick discuss the idea of cheap pseudonyms. If a user can make a new account with a low barrier to entry, then nothing stops them from creating new accounts every time they are booted off or given a low score. After exploring Harold Garfinkel and the idea of ethnomethodology, I'm still trying to figure out where the border lies between social norms we don't violate because they are ingrained by society and rules we don't violate because they are law. TWOBJ (talk) 18:15, 24 February 2015 (UTC)

Garfinkel's sociological research argued that there are many commonplace interactions we take for granted when there is actually a lot more context and complexity going on, based on the relationship. This aspect reminds me of one of the principles of interpersonal communication: that all communication has both relational and contextual meaning. Garfinkel's research demonstrates this and provides data for understanding social interactions. His experiment did not put me at ease about performing the social breaching experiment. Garfinkel's research was in person and could be erased; all of my actions online are permanent and hard to reverse, which causes a lot more anxiety for me as a researcher. I have discovered that compliance with norms is important to me, and I find that Kraut and Resnick's designs would deter me from acting outside of norms. Nucomm23 (talk)

Feb 27 Fri - Governance and banning

Before reading your article, I was not aware of the depth of meaning the word consensus carries. At first I believe I was somewhat blind to the idea, and believed consensus was only about making a yes-or-no decision. But I was very wrong; this article showed me that "Achieving consensus requires serious treatment of every group member's considered opinion…. In the ideal case, those who wish to take up some action want to hear those who oppose it, because they count on the fact that the ensuing debate will improve the consensus". I understand that definition, but later the article talks about how consensus can also be achieved when the community changes its mind. How many times can a community change its mind before reaching a consensus? Correspondingly, after consensus is reached, who is in charge of making sure the decision process closes and that there is no further room for fighting? What I find impressive is that users might end up fighting for hours over the internet about the most ridiculous stuff; sometimes I feel people just want to harm communities and do not contribute anything positive to them. I also feel that in order for consensus to work well, it needs to happen in small groups of people who share the same interests and have a respectful, good-faith attitude. If the individuals involved in the consensus process are not there because they want to help, it's going to be hard to make an actual rational decision, because, as the Meatball consensus page states, as the size of the group increases, so do the chances of conflict between individuals and subgroups. Achieving consensus is not an easy task, but controlling the openness, scope, facilitation, and membership of a community might facilitate the process a little bit. Iferrrerb (talk) 21:09, 26 February 2015 (UTC)

Whenever you go on any social media platform, there are always ways for people to flag someone as being "inappropriate" or sending out spam. But I had never really thought about what happens after that. It was very interesting to read in these articles that people who say or post things categorized as spam or inappropriate could end up being banned from that community. But I'm curious as to what gives someone the authority to ban another member. I looked at the Arbitration Committee link discussed in the Wikipedia page to try to understand how people get put in the position to judge whether someone should be banned and to what extent. I found out that the judges, or Arbitrators, are volunteer users, usually experienced editors or admins, whom the community of editors elects to resolve these disputes. This is similar to the Tribunal group in League of Legends. But then how would that work for Facebook? Who would be considered an "editor" for Facebook to decide whether or not someone is allowed to say what they say? I looked around Facebook and found their Facebook Site Governance page. While this page was nice to see, it did not really say who gets to judge what is claimed inappropriate or not. I'm curious, does anyone know if there is a group of people like the Tribunal or the Wikipedia Arbitrators who pay attention to Facebook? Sydneys92 (talk) 23:06, 26 February 2015 (UTC)

I have come across cases of disambiguation on Wikipedia recently! When I was working in my sandbox (yes, I have settled on a topic finally!) I was linking to other pages on Wikipedia, and at one point I linked to the disambiguation page for press even though I was actually trying to link to a specific "press" page. With all of the "press" options laid out for me it was easy to pick the correct one, but when I was originally linking to the page I was blindly going through the steps, not even thinking about all of the different options. The idea of disambiguation is really important on Wikipedia for that reason (so many topics, with a limit on names) and I was pleased to see that your chapter about consensus referred to it so early on. The different ban options on the Wikipedia ban page also interested me; I was unaware of the number of different ban options (site vs. topic vs. article, etc.). -Enarowski (talk) 01:42, 27 February 2015 (UTC)

"Silence is one of the great challenges to successful consensus" (Reagle, 2010). Consensus in a fully "open" space such as Wikipedia seems impossible. I think of how many times I have been in a group where many people are silent, but then they are the first to complain. Reaching agreement among the hundreds of users debating a single topic is extremely difficult, and it's crazy to think of voting as evil. I feel as if this idea of consensus is where Wikipedia really struggles with "acting in good faith." It seems that when debates come up, it becomes "either put up or shut up," which doesn't seem to really be "good faith." Also, it does seem very trivial, but I can see the frustration with broken links, multiple articles, and disambiguation. The lack of consensus, if it results in inconsistency, can lessen the experience for those involved and those using Wikipedia. I am looking forward to hearing what Corey has to say about LoL, because the Tribunal FAQ states, "We believe in giving the community what it needs to define itself and that includes what is acceptable or unacceptable behavior. Any rules provided by Riot Games could unnecessarily influence the community." This statement is in direct contrast with the guidelines presented by the Ada Initiative, which said to be as clear and specific as possible. Is it possible to have multiple ways of reaching consensus? Is it possible to create a one-size-fits-all way of fighting trolls and bad behavior? Is true consensus possible, or will there always be a minority arguing louder than the rest? Nucomm23 (talk)

Who knew group consensus was so complicated! I appreciate Wikipedia's mission to let the users decide what is right rather than having the administrators decide, but there must be an easier way! It seems that there will always be issues on Wikipedia, whether a broken link or disambiguation. In the specific example of Buffy the Vampire Slayer, I was thinking it might be helpful to have some sort of template that Wikipedians could use when posting an article in a particular category, such as television shows. With a template, there would be certain criteria that must be filled out, and it would specify how the name of the article will be displayed. I know this might infringe on the idea that every user has the freedom to post and edit what they choose, but I think it would provide some helpful guidance to avoid disagreements. A pre-made template would not alter any content, but simply lay out a format so there is consistency in the way articles are written. I agree with Emily that group consensus on Wikipedia seems impossible, and I truly think there are ways to avoid conflict and the need for consensus altogether. Nduryea (talk) 05:42, 27 February 2015 (UTC)

I would certainly agree that the idea of reaching an ideal consensus where everyone agrees with a particular point of view seems highly unlikely and unreasonable. However, I do like the rules set forth in the League of Legends Tribunal. First off, it has very clear rules for what can land you in a tribunal case and the possible ramifications of being found guilty. I like that it clearly states there are no set rules for what ends up being reported, because it is up to the community to self-govern. One of the design claims was that self-reporting and governance allow communities to feel a greater sense of responsibility to the community. Wikipedia has the same self-governance policy when it comes to banning members from editing. Again, the rules for what constitutes ban- or block-worthy actions are a gray area. I think that because Wikipedia, similarly to League of Legends, has such an ambiguous code of conduct to begin with, they should be a little more concrete about what is grounds for being blocked or banned. I liked how League of Legends did offer to send communications to players that they were in danger of being subject to a tribunal. Sometimes just the warning that you are disrupting the community is enough to get members to straighten out and fly right. TWOBJ (talk) 05:58, 27 February 2015 (UTC)

Consensus appears to be the backbone of most online communities, but there seems to be a trend of an overarching judicial system being formed in order to make judgments when a majority view can't be distinguished. In fact, it reminds me of a lot of the American judicial system. For instance, although Wikipedia functions fundamentally on consensus, Wales formed the Arbitration Committee in 2004 to make decisions when a clear solution couldn't be reached. According to Reagle (2010), the "ArbCom" is elected "based on the results of advisory elections." Thus, experienced and credible Wikipedians are voted for in elections, just like politicians are voted into office. Similarly, after reviewing the rules of The Tribunal, I also see parallels with the American judicial system. According to the Tribunal system, cases against players who misbehave "are presented to random community members who use the Tribunal who then review the case files and render a judgment—pardon or punish." This random assignment of cases to community members is just like getting jury duty. And don't forget the judge in the courtroom, also known as "Player Support!" "Player Support then uses this information to help assign the right penalties to the right players." Thus, I rest my case (pun intended). --Kaylynn Nelson (talk) 14:41, 27 February 2015 (UTC)

When I was first reading your chapter, "The Challenges of Consensus," and then Wikipedia's banning policy, I became frustrated by Wikipedia's label as an "egalitarian" community that allows all members to edit and essentially manage the world's largest encyclopedia. I felt that a truly "egalitarian" society should be one in which all members are treated equally and also have equal status. However, the more I read about the difficulties of consensus, the more I realized that there might be a need for a higher level of authority to manage such disputes, such as the Arbitration Committee or The Tribunal in League of Legends. Although I still find it odd that founder Jimmy Wales can essentially change, edit, ban, or have the ultimate say over everything in Wikipedia, acting as the "benevolent dictator" as you hinted at the end of the chapter, I understand why a "leader" or "group of leaders" is ultimately necessary in almost any type of community. How else would disputes that have gone on for months, if not years, find some type of consensus? For example, I found it absurd that some people on Wikipedia find it necessary to have a separate article page for each episode in a TV series! Who needs to dive into each specific 20-minute episode of Buffy the Vampire Slayer, meaning who has the time to write each of those, and who is actually going to use Wikipedia to search for that information? At the same time, I am sure that at some point someone felt very passionately that there needed to be separate pages for each episode, and thus a dispute arose. I am not yet a committed Wikipedian who would argue my Buffy the Vampire Slayer episode dispute with other members of the community, but I do now understand why consensus can be so hard to reach, and why ultimately a team of leaders/head leader is essential to keeping an online community running. - Jretrum (talk) 15:32, 27 February 2015 (UTC)

The concepts central to these readings led me back to a musing in the previous chapter of Kraut & Resnick: "In thriving communities, a rough consensus eventually emerges about the range of behaviors the managers and most members consider acceptable" (125). With this in mind, it comes as no surprise that League of Legends player behavior was noted as being at an all-time high, despite the recent absence of The Tribunal. As Ostrom argues, "collective choice leads to rules that are better tailored to specific situations, [therefore building] legitimacy and thus compliance with the rules" (151). In other words, the development and regulation of norms by the players of the game—rather than a higher power—fosters the players' willingness to comply with the rules. Ostrom concludes, "Even if the group spends more time initially in discussion and comes to the same decision in the end as that made by an elite core, involving everyone in the decision-making process should result in long-term benefits" (151). Applying this theory to Wikipedia's issues with title consistency, it is clear that Wikipedians' frustrations were rooted in the fact that their voices were not being heard, feeling instead that the Wikipedia elite had exclusive say over the matter. So, as Jessica evaluates, is Wikipedia truly an egalitarian community in which all members have equal power? Kristinam 0330 (talk) 16:34, 27 February 2015 (UTC)

So far, self-governance seems to be effective in online communities where people feel identity-based commitment. I'm thinking of Wikipedia, Reddit, and even online RPGs (you could argue that people have many different types of commitment to an RPG, but I think identity-based is usually a strong one here). TWOBJ pointed out that self-governance makes members feel a greater sense of responsibility to the community. I agree with this, but only where identity-based commitment is present. Think about comparing Wikipedia to a 10th grade classroom (weird comparison, I know). 10th graders don't govern themselves, and have little to no say in the governing party (the teacher). How many 10th grade students really feel committed to their community of students? How many feel committed to the school's well-being? I argue that most 10th graders wouldn't "report" any classmate for misconduct or "bad" behavior in the classroom. For the most part, if the teacher doesn't notice/address an action, it goes unaddressed entirely. Alternatively, if you join Wikipedia and do something that violates the rules, you'll find that "random" users report you to the authorities, or take action to reprimand you themselves. They may not know you at all, and your action didn't affect them directly, but they identify with this community and they care about its well-being. Because the community is self-governed, and because there's a high level of identity-based commitment, members feel a responsibility to govern. A responsibility to the community. If you left a class of 10th graders and told them to govern themselves, I doubt they would do a great job, because although you've increased their responsibility, they still don't have identity-based commitment to that community. I think you need both to be strong in order for self-governance to work. - SamDiamond88 (talk) 16:37, 27 February 2015 (UTC)

It is amazing to me how much depth there is to what consensus means. I always understood its dictionary definition, but there is obviously so much more to what it can mean on the internet, especially in relation to governance. This is where I personally appreciated reading the information on the Tribunal. I do not play any games on PC, but I do play some on gaming consoles, where there is pretty much no governance at all. Console gaming is really just self-governed and relies heavily on consensus about what is right and wrong. Because I was so interested in League of Legends after I read the information on the Tribunal, I talked with a friend who is very involved in playing LoL. He told me that the fact that people rely not only on consensus and self-governance but on the Tribunal as well gives him more comfort that he and his friends on the game will all be treated and played against fairly. He told me that having the potential threat of the Tribunal lingering in the back of your head really makes you watch what you say and do in the game, and makes you treat others in a way that is appropriate for the game. The point he brought up about LoL that I found most interesting was that the people who play it do so because they love it, and are almost obsessive about it to a point, so if they were to be suspended or banned it would devastate them. To me this speaks volumes about the importance of governance and the Tribunal in League of Legends, and shows that it is an important part of the game and community. Tschn012 (talk) 17:41, 27 February 2015 (UTC)

As an internet user, I never thought online norms and rules were important, and I never read them before joining any social media sites. After reading these articles, I started to realize how important these norms and rules are to a community. They create a baseline that encourages users to act in the right manner. For large online communities, such as Wikipedia and LoL, norms and rules seem even more important and necessary. For long-term operation, these companies have to regulate thousands of participants' behavior by establishing clear and proper rules. Wikipedia, for example, has many banning policies, which are interesting to me, and LoL has the Tribunal system to monitor users. The good thing is that these two communities allow users to regulate themselves (through peer reporting: Wikipedia's active users or LoL players at level 20+) rather than having administrators make the decisions. This approach makes the community more fair and decreases player violations. It reminds me of Claim 32 in the Kraut reading: "Peer reporting or automatic detection of violations increases the deterrent effect of sanctions." Peer reporting is a useful method to maintain norms, and it makes users notice and apply these norms and rules. Yulu Lei (talk) 18:07, 27 February 2015 (UTC)

Mar 03 Tue - Community and collaboration

I strongly believe Wikipedia’s collaborative culture contributes to its success, in many ways. In particular, I think the Assume Good Faith (AGF) guideline (and the complementary, unofficial “assume stupidity” philosophy) may be extra helpful in combatting trolls. Trolls, as we’ve discussed in class, intend to disrupt a community -- to cause upset or outrage. A commonly used metaphor is that trolls want to light a fire, then sit back and watch it burn. In my mind, users practicing the AGF guideline are essentially saying, “Oh hi, I see you accidentally lit a fire over here, let me just put that out for you. There you go!” Can you imagine how completely unsatisfying that would be for a troll? What an utterly useless response to someone who wanted to see you get flustered or angry, who wanted to see you lash out. As a troll, even re-lighting that same fire is likely unappealing after such a “good faith” response. Re-lighting the flame will probably result in a similarly AGF response from either the same or another user. To go one step further, I imagine the users who practice (even informally) the “assume stupidity” concept would be even more frustrating for a troll. That user’s response is basically, “Hey there, I see you accidentally lit a fire here. I’ve squelched it for you, don’t worry. You probably don’t know what a fire is, so let me explain it to you and help you understand why fire can be bad.” It’s one thing to assume the troll didn’t mean to be rude, but it’s another to act as if the troll doesn’t even understand the concept of being rude or how his/her actions might have negative effects – imagine how insulting this would be to a troll who prides him/herself on cleverly igniting those fires. I assume that this takes a lot of the “fun” out of being a Wikipedia troll. 
I admit that some trolls may become infuriated, and try even harder to get a rise out of the Wikipedia community, but I expect that more often, they move on to other platforms and communities that they view as “easier” or “more fun” targets. - SamDiamond88 (talk) 20:37, 2 March 2015 (UTC)

I think that Sam's point about the "Assume Good Faith" guideline of Wikipedia discouraging trolls would seem to hold true in most circumstances, although I wonder whether it is obvious to non-troll users when a troll has posted, and thus whether the guideline is as effective as it could be. I do believe that Wikipedia has a good, generally positive culture compared to a lot of other sites, but I have some questions about it. As far as the collaborative aspect of the site goes, I think this truly makes the site what it is. In Reagle's ''Good Faith Collaboration'', he concludes, "In the case of the English Wikipedia, there is a collaborative culture that asks its participants to assume two postures: a stance of neutral point of view on matters of knowledge, and a stance of good faith toward one’s fellow contributors." I agree that most users adopt these two postures, but I question what Wikipedia's collaborative policy does for the site's reliability and validity. I trust Wikipedia, and generally take what I read on the site as fact, but no schools or courses will ever take Wikipedia as fact; I feel like this is largely due to its collaborative model. I wonder if and when Wikipedia will ever be taken completely seriously by academia. I hope sooner rather than later for the sake of my future papers!! -Shannclark (talk) 22:55, 2 March 2015 (UTC)

I found Chapter 8, "Conclusion: 'Commenterrible'?", to be a very interesting and relevant piece. We live in a community with a very strong presence online. Often, we turn to our online communities in order to receive validation or consent to do certain things. We have built up a system we can turn to for advice; however, when does it get to be too much? Liesegang brings up an important point when she discusses the obsession with turning to online reviews of products before we purchase them. I know I am guilty of this, and am sure plenty of others are as well. The thing we have to think about is who is actually writing those reviews, and can we trust them? Sometimes reviewers and commenters are people who are heavily invested in the review, but frequently they are just trolls who do not have the other members' best interests at heart. What really persuades us to buy certain items and eat at certain restaurants? In turn, many sites are banning the ability to comment or requiring users to pay or create an account to do so. These are barriers that help limit the number of comments that can be left on certain sites. Compare these principles to Wikipedia, a space where discouraging trolls and remaining unbiased are strictly enforced. This is why we generally trust the material we read on Wikipedia, maybe more so than reviews we find on Yelp. Nwells1229 (talk) 23:33, 2 March 2015 (UTC)

Reading the "Commenterrible" chapter by Professor Reagle made me think about all of the comments I have seen people write on websites. While reviews and comment sections can sometimes be helpful, like when it comes to deciding if you want to go to a restaurant or purchase a product, sometimes they are completely useless and end up causing more harm than good. As the chapter states, "Comment can be used to express hate or support, but I suspect that the deluge of hate leaves a much stronger impression than even the kindest expressions of encouragement." People find ways to bully and troll in comment sections to state their negative, unnecessary opinions. I really liked the idea of the "drama genre" of comment, because I feel like comment sections can be like the high school bully. There are people in high school who are mean to others just because they can be; everyone has dealt with them in some way or another. When people comment and attack someone by saying negative things, it causes unnecessary drama. I completely understand why so many people in the chapter talked about how they took out their comment sections because they weren't as good or productive as they thought they should be. Sydneys92 (talk) 01:20, 3 March 2015 (UTC)

While I understand the draw of getting rid of a comment section to decrease negativity, I question the point. No one is forced to read comments, and if you get drawn into a comment war, that is your own choice and you can leave at any time. Negative comments can take away from the positive aspects of an article/blog/picture/whatever, but again, everyone has the choice whether or not to read those comments. Maybe I didn't fully grasp the point of the readings and just got hung up on this one aspect... but this is all I can focus on now. If trolls are spamming the comment section, just ignore it. Trolls will continue to troll even if the comment section of one site is deleted... this is the internet we're talking about. Moderators can only do so much. I have had the experience of loving something I see online and then being disgruntled by the comments about it. But if the comments bother you that much, clearly the original piece didn't matter as much as you thought, or else you'd forget the negativity. -Enarowski (talk) 02:12, 3 March 2015 (UTC)

While reading Professor Reagle's chapter on comment culture, I was reminded of my experience in a previous co-op position. While at the co-op, I was asked to try to increase the number of links to the company's site by commenting on popular blogs with links back to our content. The idea was that our comments would drive blog readers to our site, and also encourage the bloggers to either collaborate with us or link to one of our articles in their content. The comments gained the company little traffic, and were treated more or less as "spam" by readers and bloggers alike, but I learned a lot about commenting nonetheless. For instance, on many blogs I was required to "log in" before posting a comment, which often required creating an entire user profile or linking to my Facebook page. Although the user profile process was often quick and harmless, I was much more hesitant to link to my Facebook page, as I was posting for a company's purposes and didn't want to be affiliated with their marketing tactics. This reminded me of Kraut and Resnick's Design Claim 25, which we discussed in a previous class: people are less likely to breach "norms" if they are on a site with verified identities. In other words, I think these bloggers' tactic of requiring commenters to link to their Facebook pages is a clever way to avoid much of the "commenterrible" culture that Professor Reagle discussed in his chapter. Although verified identities certainly do not stop every user from leaving horrendous comments, or the "spam-y" comments that I was required to post on the job, they certainly discourage the users who only post terrible things because other users have no idea who they are. However, as Professor Reagle points out in his Google+/YouTube example, the linking of "social networks" to otherwise anonymous comment communities can actually build a community of trolls who are extremely frustrated by the situation.
The distinction here is that the link to Facebook might be a successful tool for weeding out trolls on relatively small blog sites, but not on those such as YouTube. - Jretrum (talk) 04:15, 3 March 2015 (UTC)

FormSpring is a great example of anonymity gone wrong, so I was happy to see it in "Commenterrible." The hardest part about comments is the medium. Mark Frauenfelder, who began Boing Boing, says, "The subtleties of face-to-face communication are lost." Communication richness is one of the first ideas we learn as Communications majors, and it is a continual challenge for the online community to behave as a mirror of real life. I believe that comments can be useful and that reviews should be taken with a grain of salt (I personally "average" them in my head, and if there is a really poor one, I chalk it up to a bad day). Despite the "sludge" found in comments, Reagle proposes what the alternative might be if companies such as Google+ keep trying to change comment systems such as YouTube's: "I fear that the future of online comment will continue to move toward large commercial platforms in which people have little privacy and see mainly the posts of the likeminded, the popular, and those who pay to reach us—a neutered filter bubble that serves the ads rather than the users." Would we want this alternative instead of trolls? What price are we willing to pay to get rid of trolls? I believe that the internet should continue to be a place of differing opinions, comments, ideas, and information. I don't want everything to be controlled and linked with advertising. Nucomm23 (talk) 15:14, 3 March 2015 (UTC)

In Chapter 8, the focus is on solving the problem of the comment section. There is a fear that online comments will lead to a commercial industry in which people are paid to capture our attention. I never knew that the comment section of a website could become commercialized. I like that people are able to share their opinions on a given topic, and I would hate to have my view influenced by people getting paid to do exactly that. The problem with comment sections is that they are open to the public and users are free to share their feelings on a particular subject. Although this can be positive, trolls tend to ruin the comment section for others. For as long as online communities have existed and people have been able to share their thoughts in a comment section, trolls have been around. Although trolls may ruin the experience of commenting on topics that people are interested in sharing, the best thing to do is ignore them, because I don't think there are any measures that will make them go away.-HortonV (talk) 16:04, 3 March 2015 (UTC)

Before reading the chapter on collaboration, if asked what the difference was between being unbiased and being neutral, I would have replied that they're one and the same. However, after examining both Sanger's and Wales's views on the meaning behind the words, I suddenly see them as distant cousins rather than directly related synonyms. What the two words implied to users was debated in Wikipedia's early heyday. Sanger wished for the encyclopedia's policy to say "avoid bias," but Wales was adamant about the phrasing "neutral point of view." Sanger stated that writers should be instructed to write without bias because "the point isn’t merely to mention other views not favored by an article’s author; it is to write in such a way that one cannot tell what view is favored by the article’s author." Thus, Sanger was concerned that all "ideological flavor" would be omitted by users applying NPOV. Here, I'd have to side with Sanger, because NPOV makes me feel as though I shouldn't express any point of view. Although I believe personal points of view should be kept out of Wikipedia, certain beliefs of political and notable figures, as well as of whole sections of society, should be represented. So there's a fine line between being completely devoid of a point of view and deciding whose point of view matters and should be recorded. --Kaylynn Nelson (talk) 16:20, 3 March 2015 (UTC)

After reading chapter 3, and thinking back on our class discussions, the "Neutral Point of View" policy appears to be one of the most crucial and least easily attained aspects of Wikipedia. Though Professor Reagle notes that he focuses only on the English Wikipedia, throughout the world there exist so many contrasting viewpoints and ideologies that it is amazing to me that we as a whole are able to come to a certain degree of agreement in a community like Wikipedia. As Professor Reagle notes, the "Neutral Point of View" policy says of writing for the enemy that "the other side might very well find your attempts to characterize their views substandard, but it’s the thought that counts." This is an interesting notion to me in the sense that we tend to give some degree of leniency to alternate viewpoints in an online setting so long as there was an attempt at being neutral and unbiased. Compare this to real-life settings, in which non-neutral viewpoints are criticized harshly and aren't given much credibility, and I think we are able to see one of the successes of Wikipedia. Understanding differences between individuals is vital for collaboration, and if there didn't exist ways for users to interact, discuss, and correct one another, it would be near impossible ever to achieve neutrality. Lastly, one more note I'd like to add before class: 'AWWDMBJAWGCAWAIFDSPBATDMTD' is a crazy acronym. Matt rodgers2 (talk) 16:58, 3 March 2015 (UTC)

I really enjoyed reading the article on collaboration. Reading in depth about how Wikipedia strives to be as good as it can be was a nice sentiment and made me glad that I am editing something in this community. Something that really hit home for me was a quote from Wales that I felt was very realistic and also telling of how much he cares about Wikipedia as a community. He said, "We should simply strive to eliminate all the problems that we can, and remain constantly open to sensible revisions. Will this be perfect? Of course not. But it is all we can do *and* it is the least we can do." I appreciate this because he clearly understands that he cannot control what people do, and that you also cannot fix everything or make everyone happy. The fact that he says he will do what he can and always be open to further change is, I feel, fundamental to the success of Wikipedia. My question is: since this approach has proven successful for Wikipedia, why don't more online communities follow in its footsteps by dealing with issues collaboratively and being open about the information presented in the community? Some online communities are on the right track but still clearly see more issues than Wikipedia does, and I wonder why they don't reap the success that Wikipedia does. Hopefully people will begin to understand that working collaboratively and being open to different things when using the internet leads to a successful community. Tschn012 (talk) 18:20, 3 March 2015 (UTC)

The "Commenterrible" chapter by Professor Reagle was interesting to read. People comment daily on many different topics online, but no one thinks about who is behind these comments. Boing Boing contributor Xeni Jardin wrote, "Online Communities Rot without Daily Tending by Human Hands" (p. 362). This got me thinking of the thousands of comments a person reads every day, and her statement rings true: without such tending, online communities rot. It is absurd that we have reached a point where our online communities must be moderated because of all the negative comments that people post. The chapter also discusses how these communities may lead to teenage suicide. I remember watching the news with my mom when there was a story about a teenage boy who committed suicide because someone in his online community told him to do it and also told him how. I agree that "the future of online comment will continue to move toward large commercial platforms in which people have little privacy and see mainly the posts of the likeminded, the popular, and those who pay to reach us," since online communities will have to become heavily moderated because of all the drama going on now. Wikipedia, on the other hand, is a bit different because there is an assumption that people in that community are all "trying to do their best for the greater good of the community" (p. 16). I think this is what distinguishes it from other online communities. The members share a common goal, which is why they do not post negative comments to disrupt the community. As we discussed in a previous class, people may be part of a community for different reasons, and it is impossible to know why each member is there; this is why some people post these sorts of comments. Latifaak (talk) 17:50, 3 March 2015 (UTC)

I feel that what makes Wikipedia such a successful online community is that members are all working toward the same goal: to create the world's largest encyclopedia. Wikipedia makes it a point to clearly state its policies and guidelines for everyone to see. Members of this community are encouraged to put their personal opinions and biases aside and to always write from a neutral point of view. Doing so creates a sense of trust between members of the community. In other online communities, especially comment-based ones like Yelp, people are often very opinionated, and that can lead to problems with comments. Comments can motivate users to become better members, but they can also be disruptive and manipulative. For example, Yelp is often criticized for manipulating its reviews. This might be because members of that community are not working toward the same goal: business owners want to eliminate negative reviews, and consumers want to share their experiences. Perhaps the most important thing to note about what makes Wikipedia such a successful online community is its collaborative culture. Reagle (2012) defines it thus: "collaborative culture refers to a set of assumptions, values, meanings, and actions pertaining to working together within a community." Nsiu (talk) 17:50, 3 March 2015 (UTC)

Professor Reagle's chapters, "Good Faith Collaboration" and "Commenterrible," arrive at one underlying theme of a successful online community: its well-being depends significantly on its users' collaborative efforts. Asynchronous by nature, online communities encourage users to come together in one forum or another and share in a socially creative process. As Professor Reagle discusses, Wikipedia is a fine example of good-faith collaboration because its success relies so heavily on its users' ability to trust in and harmonize with one another as they engage in the collaborative creation of content. Similar to Wikipedia, Yik Yak places faith in the judgment of its users to regulate content and behavior as they engage one another in a social and asynchronous manner. By implementing an "up-vote, down-vote" tool on all posts and comments, Yik Yak quite successfully manages behavior: it prompts its users to up-vote exceptionally good posts and comments and, to the same extent, to vote inappropriate, offensive, or just plain bad posts and comments into oblivion and off of the feed (or "out of the herd," as Yik Yak calls it). Yakarma (Yik Yak's karma score) further reinforces good, social behavior, as it fluctuates depending on users' positive or negative engagement with the application and with fellow users' posts and comments. Ultimately, Yik Yak's methods promote positive engagement and good behavior while discouraging all kinds of harmful conduct. Kristinam 0330 (talk) 17:17, 3 March 2015 (UTC)
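The vote-threshold moderation described above can be sketched in a few lines. This is a hypothetical toy model, not Yik Yak's actual implementation: the class names, the removal threshold of -5, and the karma bookkeeping are all my own illustrative assumptions.

```python
# Toy model of vote-threshold moderation in the style of Yik Yak:
# a post's score is up-votes minus down-votes; when it falls to the
# threshold the post leaves the feed ("out of the herd"), and the
# author's karma tracks the votes their posts receive.
# All names and numbers here are illustrative assumptions.

REMOVAL_THRESHOLD = -5  # assumed value, for illustration only

class User:
    def __init__(self, name):
        self.name = name
        self.karma = 0  # rises and falls with votes on this user's posts

class Post:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.score = 0
        self.visible = True

    def vote(self, delta):
        """Apply an up-vote (+1) or down-vote (-1)."""
        self.score += delta
        self.author.karma += delta
        if self.score <= REMOVAL_THRESHOLD:
            self.visible = False  # community has voted it off the feed

feed = []
alice = User("alice")
post = Post(alice, "free coffee in the library!")
feed.append(post)

for _ in range(5):       # five down-votes push the post to the threshold
    post.vote(-1)

visible = [p for p in feed if p.visible]
print(len(visible), alice.karma)  # prints: 0 -5
```

The design point is that moderation here is distributed: no single moderator removes a post; the aggregate of small, cheap votes does.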

I agree with what Professor Reagle said in his comment-culture article: in commenting, "the goal of the game is not to give a correct answer but to provide an answer that corresponds with what others have said, what the 'survey says'" (362). Comments can be either positive or negative, and there are no fixed criteria for judging whether a comment is right or wrong. A comment section is like a "hot pot": you can put everything inside. I think the best side of comments is that people are able to learn what others are thinking and absorb different points of view, whether right or wrong. The downside is that you cannot prevent trolls or haters from commenting abusively. A friend of mine is a popular fashion blogger on Microblog (a Chinese Twitter) and Instagram, where she posts her outfits. Some people commented in a really rude manner, especially on her Microblog, and it caused conflicts among her followers. Finally, she decided to close comments on her Microblog. It is hard to filter negative comments online; closing comments is easy, but it does not mean we have eliminated the "weeds." They are still there, and you cannot control what they think or how they behave. For now, filtering negative comments is the biggest challenge in the online world. Yulu Lei (talk) 18:05, 3 March 2015 (UTC)
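A small sketch of why filtering is the hard part. This is entirely my own toy example, not any platform's actual system: a naive block-list catches only literal matches, so trivial rephrasings slip through while the underlying hostility remains.

```python
# Toy block-list comment filter, illustrating why simple keyword
# filtering under-blocks: only exact word matches are caught.
# The block-list contents are illustrative assumptions.

BLOCKED = {"ugly", "stupid"}

def allowed(comment: str) -> bool:
    """Return True if no blocked word appears in the comment."""
    words = comment.lower().split()
    return not any(w.strip(".,!?") in BLOCKED for w in words)

print(allowed("Love this outfit!"))     # prints: True
print(allowed("That outfit is ugly."))  # prints: False
print(allowed("What an uggly outfit"))  # prints: True — misspelling evades the filter
```

Real moderation systems layer trained classifiers and human review on top of this kind of matching precisely because, as the comment above says, closing or filtering comments does not eliminate the "weeds."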

When reading the chapters you wrote, the ideas of "Writing for the Enemy" and "Assume Good Faith" really stood out for me. I also like how you mentioned that the reason "Wikipedia’s culture and practice are compelling to [you] is that it has influenced the way [you] approach controversy and conflict beyond Wikipedia" and that "[you] have found these norms to be 'a great way to end an argument in real life.'" When I was reading the "Intersubjective Stance of Good Faith" section, I immediately thought about the ways I end and avoid arguments in real life. Although I'm pretty outgoing, I tend to be a passive person who constantly avoids conflict. The two major ways I stop or avoid an argument are, one, to think about the other person's thinking and to understand it even if I don't agree with it, and two, to assume they mean well and simply aren't knowledgeable (assume stupidity), that they simply hold a different opinion, or that I, myself, could be in the wrong. In the real world, people follow guidelines similar to Wikipedia's, but they tend to hold themselves more responsible for their actions because they can't hide behind a username. In online communities like Wikipedia, there isn't face-to-face conflict, and although a person may follow similar guidelines in real life, they might not be as inclined to do so online. It made me wonder: do I follow these guidelines online as much as I try to do offline? Although I try to, I feel that in general the shield of the internet lessens people's accountability. However, the fact that Wikipedia has these guidelines does lead the community in the right direction, especially when you have people from so many backgrounds working together. It's also important to keep in mind that although I'm not a fan of conflict, conflict isn't necessarily a bad thing (as you mentioned in this chapter). Brianne Shelley (talk) 18:18, 3 March 2015 (UTC)

Mar 06 Fri - Newcomer gateways

As exemplified in the Debian case, I think having higher barriers to entry for new members is essential to making them part of the community. Debian encourages its users to apply for membership status while warning that the process is strict and rigorous, alerting potential applicants that this is only to strengthen the community. When joining a community, we want to think that everyone else in it will be of equal worth and making solid contributions to our projects. What I like about Debian is that it allows everyone access to the project, but becoming a member earns you certain privileges, such as access to a private mailing list and, more importantly, voting. Having the ability to vote and directly influence a project gives new members weight equal, in some regard, to that of older members. Giving new members the sense that they can contribute just as much and are valued just as highly as existing members lets the community know that new members are always welcome, provided they can show they belong. The key insight on the becoming-a-new-member page is that "…becoming a Debian Developer grants you several important privileges regarding the project's infrastructure. Obviously this requires a great deal of trust in and commitment by the applicant." Signaling that you put trust in new members allows them to reciprocate that feeling. TWOBJ (talk) 17:23, 5 March 2015 (UTC)

“Design Claim 11: Providing potential new members with an accurate and complete picture of what the members’ experience will be once they join increases the fit of those who join” (Kraut and Resnick, p. 199). For example, Debian clearly outlines its expectations of every new member. Because these expectations are laid out up front, users can decide on their own whether they want to be a part of the project. The fact that Debian is an “open community” that welcomes everyone to use and help it also allows it to screen out poorly fitting newcomers. Moreover, those who use Debian on a regular basis may feel they should pay the community back by joining it. Nsiu (talk) 22:00, 5 March 2015 (UTC)

Design claim 3 caught my attention: "Recruiting new members from the social networks of current members increases the number of new members more than impersonal methods" (Kraut and Resnick, p. 186). For communities to continue to thrive, they must replace members on a somewhat frequent basis. Face-to-face conversation is uncomfortable for many, and thanks to technological advances, we now have the ability to conceal this discomfort behind a "wall". Sharing information online is much easier and allows information to receive more attention and reach a larger audience. On sites such as LinkedIn, users can connect with individuals they may have never met, and can do so easily through virtual connections. In the text, the authors discuss sharing information from sites such as the New York Times and Costco Photo Center. Through sites such as these, members can easily share information, bringing in a new array of members over time. Nwells1229 (talk) 22:31, 5 March 2015 (UTC)

As I started the reading, Design Claim 2 ("Word-of-mouth recruiting is substantially more powerful than impersonal advertising") made me think about the websites people look to for opinions about companies, stores, and restaurants. People would rather hear about a company from a friend or trusted reviewer than from an advertisement. Angie's List, a paid-subscription website dedicated to reviews of local businesses by previous customers, is unique in that people can go there to find service providers for things like handyman work, housecleaning, and pest control. The website prides itself on collecting reviews of local services and makes members feel as if their friends are giving them the advice. Angie's List also relates to Design Claim 11 ("Providing potential new members with an accurate and complete picture of what the members' experience will be once they join increases the fit of those who join") through its "How it Works" link. There it shows potential members why the website is better than free review sites and what members get out of subscribing. It also notes that its data is certified and that it "guard[s] against providers and companies that try to report on themselves or competitors", an issue sites like Yelp have been known to have. Angie's List is a great website when it comes to bringing on new members and showing them what they will get and how reliable it is.

Mar 10 Tue - NO CLASS

Mar 13 Fri - NO CLASS

Mar 17 Tue - Newcomer hazing

Mar 20 Fri - Debrief: Social breaching

Mar 24 Tue - Gratitude

Mar 27 Fri - RTFM

Mar 31 Tue - Bootstrapping a niche

Apr 03 Fri - Debrief: Wikipedia

Apr 07 Tue - Winner-take-all

Apr 10 Fri - Bootstrapping and critical mass

Apr 14 Tue - FOMO

Apr 17 Fri - Infocide