User talk:Reagle/QICs


Questions, Insights, Connections

Leave your question, insight, and/or connection for each class here. I don't expect this to be more than 4 to 6 sentences. Make sure it's unique to you. For example:

  • Here is my unique question (or insight or connection).... And it is signed. -Reagle (talk) 13:29, 6 August 2019 (UTC)

Be careful of overwriting others' edits or losing your own: always copy your text before saving in case you have to submit it again.


Sep 10 Fri - Intro and Wikipedia

...

Sep 14 Tue - Persuasion

...

Sep 17 Fri - Wikipedia and A/B testing

QIC1: Both the readings from today and last class discussed methods to increase participation in online communities. While Tuesday's readings were about broad psychological principles for marketing, today's readings on A/B testing focused on a specific strategy. While these methods can easily be applied to websites consciously designed to achieve a specific goal (like Obama's campaign site), I'm also curious as to how less structured communities promote participation.

Nazis and Norms discusses an aspect of Wikipedia that isn't as easily reducible to data-based or psychological design: its "good faith collaborative culture." While Wikipedia employs conscious techniques like A/B testing, how the culture of Wikipedia developed and the ways it impacts the site feel much less deliberately implemented to me. I think this culture of good faith collaboration can also be found in the FOSS movement, which greatly inspired Wikipedia and is crucial to the success of these accessible and open communities.

How did this good faith culture first arise, and how has it evolved or been enforced on Wikipedia over the years? Can such a culture be consciously implemented by the leadership of an online community the same way A/B testing can?

-- LatakiaHill (talk) 01:49, 17 September 2021 (UTC)


As I was reading the first article about A/B testing, the author made it seem like A/B testing was the "end-all" solution that could do pretty much anything. However, when I got to point three, I was caught off guard. It hadn't occurred to me that while A/B testing may lead to amazing results, it can greatly slow innovation and prevent large leaps. I think this is a key reason why A/B testing should not be used to fully automate the creation of websites and software. Another one of the counterarguments against A/B testing caught my eye: "There's no time to ask the question, and no reason to answer it. After all, what does it matter if you can get the right result? Keep testing, keep reacting, and save your philosophizing for the off-hours" (Christian, point 4). This sort of attitude towards work and craftsmanship rubs me the wrong way. I feel like understanding "why?" is a key part of developing a product.

On the Wikipedia page that listed the different banner ads and their effectiveness, I appreciated being able to see the slight visual differences between banners. It almost felt like getting a peek behind the scenes, as this is something we're normally totally unaware of.
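
To make the mechanics of such a banner test concrete, here is a minimal sketch of how two variants might be scored and compared; the banner names, impression counts, and click counts are invented for illustration, not taken from the actual fundraiser data.

```python
import math

# Hypothetical click data for two banner variants (all numbers invented).
banners = {
    "A (personal appeal)": {"impressions": 10_000, "clicks": 230},
    "B (plain text)": {"impressions": 10_000, "clicks": 180},
}

def click_rate(stats):
    return stats["clicks"] / stats["impressions"]

# Two-proportion z-test: is the difference in click-through rates larger
# than we would expect from random chance?
a, b = banners.values()
pooled = (a["clicks"] + b["clicks"]) / (a["impressions"] + b["impressions"])
se = math.sqrt(pooled * (1 - pooled) * (1 / a["impressions"] + 1 / b["impressions"]))
z = (click_rate(a) - click_rate(b)) / se

for name, stats in banners.items():
    print(f"{name}: {click_rate(stats):.2%} click-through")
print(f"z = {z:.2f} (|z| > 1.96 suggests a real difference at the 95% level)")
```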

Finally, the chapter we read from Good Faith Collaboration made me much more aware of the "Wiki" side of Wikipedia. Up to this point, I've only really seen Wikipedia from the "Encyclopedia side". This chapter also helped me to appreciate just how amazing it is that we even have such a thing as Wikipedia. Reading about the visions of people in the past made me realize that I take for granted this huge tool of global knowledge.

-Ortiz.da (talk) 01:59, 17 September 2021 (UTC)


Sep 21 Tue - Gaming motivation

According to "The scientists who make apps addictive", Silicon Valley's most successful technology firms use behavioral data to pump dopamine into us and keep us coming back to their products. But several of the psychologists who created this persuasive science are unhappy about how it is employed. This article makes me recall a movie, The Social Dilemma. The film offers a detailed look into how design in social media promotes addiction, manipulates the opinions, emotions, and behaviors of individuals, and propagates conspiracy theories and falsehoods to maximize profit. In the last century, Silicon Valley mainly produced hardware and tool software, such as Apple computers and Photoshop. They made money by selling tools to you. In this century, the main income depends on advertising. Is there a product? Yes: the product is the user, and everything--your time, attention, and privacy--is packaged and sold to advertisers. The only goal of these apps is to keep you constantly using them.

These products are not designed by psychologists working hard to protect children. They are designed to make the algorithm very good at recommending the next video to you, and very good at letting you take photos and add filters. These things are not only controlling where our attention goes, but also reaching deeper and deeper into the roots of the brain, taking away children's attention and self-worth. We have evolved a mechanism to care about how other people in our community evaluate us, and it is this very important mechanism that these apps tap into.

-QiruiLin.real (talk) 12:13, 20 September 2021 (UTC)

QiruiLin.real, good connection with pop culture and concerns. You might engage with a few more specifics/details from the reading. -Reagle (talk) 17:01, 21 September 2021 (UTC)

In the first reading from Kraut et al., the section on leaderboards made me think about a somewhat recent interaction I had with leaderboards. A game I was playing had a mode for trying to complete challenges as fast as possible, and there was a leaderboard for this mode. I was able to slowly improve my time, which was nice to see, but when I compared myself to the top scores and saw the strategy needed to achieve them, I did not feel as accomplished. The leaderboard took my focus away from personal growth, and instead made me feel like it wasn’t worth trying. This supports the claim that, “if others’ performance is seen as unattainably high, people may be discouraged from even trying” (Kraut et al., p. 50). Unlike local communities, online communities have the ability to easily compare us to people from around the world. While this can lead to some strong competition, it can also be highly demotivating.

Another interesting use case of leaderboards is with Discord bots that track user messages (such as MEE6). I’ve seen bots in a few servers that “rank” users based on how many messages they’ve sent. I can definitely see this as an incentive to be more active in a server, but it could also promote spam. Furthermore, at least in the few small cases I’ve seen, it seems like people don’t really care too much about their bot level, as there’s usually not much of a reward for having a high level.
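
As a rough illustration of how a ranking bot like that might work, here is a toy sketch of a message-count leaderboard; the usernames, data, and one-level-per-100-messages rule are invented for illustration and are not MEE6's actual formula.

```python
from collections import Counter

# One list entry per message sent (invented sample data).
messages = ["alice", "bob", "alice", "carol", "alice", "bob"]

counts = Counter(messages)

def level(message_count, per_level=100):
    """Toy leveling rule: one level per 100 messages sent."""
    return message_count // per_level

# Print the leaderboard from most to least active.
for rank, (user, n) in enumerate(counts.most_common(), start=1):
    print(f"#{rank} {user}: {n} messages (level {level(n)})")
```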

In the second reading, a line that really resonated with me was, “Product-makers have the ability to improve people’s lives, to find the points when people are in pain, and help them” (Eyal). I’m a strong believer that, while technology can be used in manipulative, malicious ways, it can also be used to bring people together in positive ways. These “pain points” that Eyal mentions could be anything from confusion on a topic, to loneliness and depression. Companies should not prey on weakness, but see it as an opportunity to help. Unfortunately, Harris points out that “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities”. What are some ways that the persuasive nature of technology can be used for good? How can companies put customer wellbeing first while still being successful?

-Ortiz.da (talk) 01:38, 21 September 2021 (UTC)

Ortiz.da, good connections and specifics. Check out the history of this page to see a few typos I fixed. Also, it's best to speak of authors directly rather than "first reading" and "second reading". -Reagle (talk) 17:01, 21 September 2021 (UTC)

In the reading "The scientists who make apps addictive", I felt that the scientist Fogg brought up a very important point about what it takes for an individual to change a behavior: "The person must want to do it, they must be able to, and they must be prompted to do it" (Fogg). I believe the most important part of his theory is the idea that individuals must be able to do it. You may have some of the most motivated individuals in the world, with a clear goal in mind, who are unable to complete that goal simply because they do not have the ability or access. Relating it directly to my current personal life: I have recently obtained my real estate license. I am looking for clientele and wish to help them in every way that I possibly can. I am highly motivated and believe that I can do it. However, at the moment I am unable to, because I do not have the resources.

Another great point brought up was the idea that if an individual has a great first encounter with a product or service, they are more likely to return to it. The examples given were champagne being served in business class, and the way a first interaction with an Apple product is always an experience. I believe this is the type of information all businesses should take note of. In my personal experience, these simple but important interactions are the reason I come back for more of a product. The idea will also help me build my own business in the future.

-dannyryan33 (talk) 9:12, 21 September 2021 (UTC)

dannyryan33, good engagement with readings and relevance to your aspirations. The prose is a bit bumpy and the sentence needs some refinement. (For example, the prose around your quotation of Fogg.) -Reagle (talk) 17:07, 21 September 2021 (UTC)

"Fogg’s Atlanta talk provoked strong responses from his audience, falling into two groups: either “This is dangerous. It’s like giving people the tools to construct an atomic bomb”; or “This is amazing. It could be worth billions of dollars.” The second group has certainly been proved right."

QIC2: While the huge success of addictive, time-consuming apps has proven the second group right, I think it has in one way proven the first group correct as well. The article also goes through the negative effects that apps such as Instagram and companies like Google have intentionally or unintentionally caused. The unwillingness to change this model, as Eyal notes, probably comes from how lucrative it is; if the only measure of an app's success is how much time the user spends on it, then any consideration of its real benefits or unintended consequences goes out the window. Life becomes a big Las Vegas that traps people in with quick psychological satisfaction. However, I'm not sure if I agree with the article's claim that this addiction is all voluntary, that "[y]ou can’t make people do something they don’t want to do." Is it really ethical and voluntary if users are kept in the dark on how they may be psychologically or financially harmed in subtle and hard-to-notice ways?

As someone who enjoys learning languages, this reminds me of the business model of the language learning app Duolingo. The owner of Duolingo has stated before that the goal isn't to teach people a language thoroughly, since most people drop out quickly, but to simply keep people engaged on the app: "A significant portion of our users use it because it’s fun and it’s not a complete waste of time[.]" This is done by having simple tests with lots of positive feedback when answered correctly, setting up leaderboards, having discussion boards and clear lists for people that want to contribute to the curriculum, and other tactics familiar from Kraut et al. Of course, despite Duolingo's lack of significant results, users are kept in the dark and are given the illusion that they are learning proportionally to the amount of time that they have put in.

Personally, I had the biggest realization of how platforms try to draw you in when I started using alternative front-ends to websites I used (Reddit, Twitter, YouTube). These front-ends removed all the tracking, JavaScript, and displays for recommended content, likes/upvotes, etc., so that each site was just the bare text or video. It made me realize that although I thought I only used something like YouTube or Reddit to keep up with the news, I was actually wasting a lot of time on things that were suggested to me under the guise that it was important for me to watch them. Now, with everything else stripped away, I can only use these sites the same way I use Wikipedia: as a way to search something up and reference it (like "how to do X youtube"), and I don't touch the site beyond that. It makes me personally feel more sane and also more conscious of how manipulative websites and apps can be.

--LatakiaHill (talk) 14:55, 21 September 2021 (UTC)

LatakiaHill, excellent engagement. You have a few typos (e.g., it's "article's"), and I recommend speaking of authors rather than "articles." -Reagle (talk) 17:07, 21 September 2021 (UTC)

Sep 24 Fri - Wikipedia project start

...

Sep 28 Tue - Kohn on motivation

QIC 3

Kohn’s perspective on motivation was quite new to me. I had never thought of rewards as being akin to punishment. As I read more, I became somewhat skeptical, as it seemed like every possible way of providing feedback was harmful according to Kohn. They do address this; however, I feel it came a bit too late, as the claims had already formed into a large “snowball” of sorts. It seemed like there was no way to realistically offer feedback in the way Kohn suggests. Furthermore, I even felt a bit attacked for simply liking praise (as most people do). I understand that rewards and praise can cause us to see what lies between us and them as undesirable, but Kohn’s strong denunciation felt like it was going a bit too far.

When talking about the negative effects of praise, Kohn brings up that “telling someone how good she is can increase the pressure she feels to live up to the compliment” (Kohn, p. 99). While this pressure could affect one’s future performance, it could also affect current performance. An instance that comes to mind: if I’m playing a competitive video game and people are cheering me on, I may start to focus more on what I’m doing (which could be a good thing, but not in this case), bringing me out of flow and making me anxious to perform well. Button presses become manual instead of automatic, and it’s easy to “choke up”.

Two final points I did agree with were that rewards can cause people to take fewer risks, and that unexpected rewards are better than those promised ahead of time. For the first point, my school writing comes to mind. If my papers are being graded strictly, then I feel like I can't experiment with unconventional writing styles or structures. On the latter point, I think I feel more appreciated (even sometimes embarrassed?) when I'm given a reward for something I wasn't expecting to be rewarded for. These types of rewards are far less common though.

-Ortiz.da (talk) 01:10, 28 September 2021 (UTC)


Danny QIC 2

In the article "Resentment", Chad Whitacre mentions a tactic for dealing with resentment. I first would like to challenge the definition of resentment that he used in the article: "Resentment is a negative emotion, an internalized anger at a wrong done to me" (Whitacre). I feel that resentment does not necessarily have to be a wrong done directly to an individual. For example, Whitacre immediately contradicts this definition with his example of resenting an individual because they make more money than he does. The individual making more money is not intentionally wronging the other person. For that reason, I feel that the definition being used is not the most accurate representation, based on the examples given.

The tactic that is used to address resentment is presented by Brad Blanton. His "Radical Honesty" approach involves directly contacting an individual and letting them know that you resent them. In a perfect world I can see how this would work and create valuable discussion and potential friendship in the end. However, I do not see this theory as being realistic. In today's world, few people are going to take that honesty well, and in the end it will most likely cause issues. I do see the difference between radical honesty and internet vitriol. Internet vitriol is more toxic because it is an attempt to shoot down an individual by publicly questioning them in front of an audience instead of individually. Overall, I believe this article had great points, but many points that can be challenged as well. It leads to the question: what is the most ideal definition of "resentment" in this context?

-dannyryan33 (talk) 09:23, 28 September 2021 (UTC)


Sunishka QIC 1

The article "Resentment" gave me a lot to think about. I don't think Chad Whitacre is necessarily wrong about his analysis of "Resentment". However, I do think his analysis is a very simplistic one. Emotions are a lot more complex than what Chad Whitacre has described and there are many different types of resentments a person could experience. I also think Brad Blanton's solution of "Radical Honesty" is a potentially destructive one. It assumes that people are not capable of critically thinking and dealing with their emotions without direct confrontation. This is dangerous because it puts the responsibility of the resolution on the other person instead of the person who is struggling with said emotion. This article reminded me of the quote "resentment is like drinking poison and waiting for the other person to die." Radical honesty sounds a bit like drinking poison to me. For example, telling someone you resent them for making more money "could" lead to constructive dialogue but could also lead to public humiliation. Unfortunately, in this context, the power and responsibility to absolve someone of their resentment rests in the hands of someone else. This will not always give the person feeling resentful deeper insight on their emotion. However, talking about it in a therapeutic, non-confrontational setting might be more beneficial.

I also checked out the website Brad Blanton has on "Radical Honesty". I got the impression that both parties in the "radical honesty" interactions were aware of the upcoming confrontation and were prepared for it. This might not be effective in a non-simulated, uncontrolled setting. This is probably why internet vitriol seems so harsh to Chad Whitacre. It is a more realistic representation of people's emotions, in comparison to Brad Blanton's "radical honesty".

While I don't like seeing hateful messages on the internet any more than the next person, the behavior Chad Whitacre describes, where "we’re used to people saying whatever they want, to whomever, without regard for the other person’s feelings", is actually well within a person's digital rights. I definitely think the online community should develop ways to reduce the emotional damage that occurs through cyberbullying and cyberhate, but I do not think "radical honesty" is it. For example, internet trolls exist to create chaos and discord online, and this mindset of feeding into their "resentment" is exactly what they want. Ignoring, blocking, and reporting hate to moderators is much more effective because it gives the person receiving the hate the agency to put an end to it.

-sunishka134 (talk) 11:11 am, 28 September 2021 (UTC)

Oct 01 Fri - Wikipedia @ 20 insights

Dario QIC 4

I think a central idea in all of the essays we read was that Wikipedia can be so many different things for different people: bad PR, a source of income, a place to learn, a place to socialize, a place to educate, a place to teach...


I didn't quite realize that people have found a way to make Wikipedia editing their job. I had figured that if there was a conflict of interest then you shouldn't edit, and that's that. However, William Beutler explained that it's a lot more complicated than that. I guess it makes sense that people would be willing to pay to have their Wikipedia page updated, since the page is probably the first link to come up when someone Googles them. One question I would have is what sort of quantifiable data there is on the effects of a bad Wikipedia page on an individual or business. Are there noticeably decreased sales, a lower likelihood to be hired, etc.? Personally, I wouldn't be interested in editing while having a COI. It's too fine a line for me to feel comfortable with, and I don't like the idea of "threading the needle" to avoid breaking rules while making a profit.


Jake's essay had a more poetic style to it. It's interesting that he found stability through Wikipedia. One thing this class is showing me is that Wikipedia is not just a boring, plainly formatted, black-and-white educational site. There seems to be a lot going on behind the scenes that most non-Wikipedians aren't aware of. I liked the description of Wikipedia that compared it to a leaf pile with many different contributors. It captures not only the hugely collaborative nature of the site, but also the different backgrounds of people, as well as the playful or celebratory nature that can sometimes arise. -Ortiz.da (talk) 02:14, 1 October 2021 (UTC)


QIC3: The three readings today got into the nitty-gritty of how Wikipedia deals with the many problems that come with its ambitious scope. Beutler describes conflicts of interest that arise from knowledge being mixed with money. Orlowitz raises questions about Wikipedia's supposed neutrality and its standards for reputable sources, and outlines the efforts to ensure marginalized perspectives are heard, to truly realize the Enlightenment ideal of the encyclopedia. Harrison covers the harrowing case of a Wikipedia editor being stalked simply for being a woman while covering controversial topics. All three readings show that Wikipedia's claim to neutrality and openness is constantly under siege from the same issues that all large organizations experience, including economic interests and bias and discrimination, both explicit and implicit.

The difference is that Wikipedia attempts to tackle these problems through open and user-centric practices. While not perfect, practices such as the edit request system allow Wikipedia to remain open and accessible without compromising its principles. The previous reading, Nazis and Norms, outlines three basic tenets of Wikipedia: Neutral Point Of View, No Original Research, and Verifiability. While I first assumed that these were conscious ideals from the inception of Wikipedia, it now seems like these concepts were developed and refined over time, as Beutler describes in recounting how COI has been dealt with over the years.

I am curious whether new guidelines or concepts have appeared over time, and if there are new goals or ideals that editors now follow either formally or informally. It is interesting to see how standards are developed and re-negotiated over time collectively by the community as Wikipedia deals with the pressures of the outside world.

--LatakiaHill (talk) 05:44, 1 October 2021 (UTC)


QIC 2

Today's three readings provide a very in-depth understanding of why Wikipedia works the way it does and what motivates people to work as Wikipedians. When writing papers in high school, teachers would always warn me against using Wikipedia as a source, citing the fact that "anyone can write in it" as a reason to avoid it. Because of this, I had always been skeptical of Wikipedia as a reliable source of information. In particular, I thought the article by Jake Orlowitz described the processes that go into authoring Wikipedia very well. Clay Shirky's quote in it, "what Wikipedia presages is a change in the nature of authority", and the following explanation that authors on Wikipedia do not maintain authority over what they write, but instead "vest authority in a visible process", definitely changed my perception of this. While I was aware that everything we write is evaluated, I did not know about the network of processes that go into it, for example, "targeted text-rejection filters" and bots that "seek out nuanced patterns of vandalism".

This actually made me really curious as to why other information outlets, especially newspapers, do not use a similar fact-checking process. Most of them fact-check themselves manually, either before or after publication.

At the beginning of the article, I was skeptical of Jake Orlowitz's description of Wikipedia: "It has its own body of common law, common sense, and common knowledge, which Enlightenment philosophers only dreamed about". However, by the time I reached the end of the article, I was inclined to agree with him, and I hope that other information outlets take on a similar position.

--sunishka134 (talk) 11:31, 1 October 2021 (UTC)


Oct 05 Tue - Ethics (interlude)

QIC 5 Dario

I had never really considered the problems of protecting the privacy of people when doing research related to online communities. I guess my initial thought would just be "well, they put it out there publicly, so anyone can see it anyway". If anything, studying comments and posts made in online communities seems extra convenient, since screenshots and quotes can be pulled from a page without even disrupting the environment. However, I do agree that in order to be ethical, researchers should strive to protect the privacy of those they study. Their subjects are willingly providing information about a topic; it wouldn't be right for their goodwill to result in harassment or loss of privacy. I also found it interesting that the researchers decided to treat pseudonyms as real names, though I agree with this decision. Just because an online username isn't easily traceable to a person in the real world doesn't mean it isn't, well, traceable. Someone who uses the same username on several platforms could have information they shared in one community leak into another with harmful effects.

The Facebook and OkCupid research wasn't very agreeable to me. Playing with people's emotions and possible relationships for the sake of gathering data does not seem very ethical. It also felt weird that the developers at OkCupid had percentages for how likely people were to be compatible. I would like to believe that relationships and friendships in the real world are determined by more than just statistics. Finally, the quote used to describe users' reactions when profile pictures were turned back on stood out to me as surprisingly fitting: "It was like we’d turned on the bright lights at the bar at midnight". While it's easy to blame the users described as being shallow or only interested in looks, I think most people would experience some level of shock the first time they saw a picture of a person they were messaging online. -Ortiz.da (talk) 03:48, 5 October 2021 (UTC)

Ortiz.da, great response! -Reagle (talk) 17:18, 5 October 2021 (UTC)

Oct 08 Fri - Regulation and pro-social norms

Pippa QIC 1

I really enjoyed reading through the 20 different claims about limiting the effects of bad behavior online (Kraut, pp. 125-150). It was interesting to me that there were so many different ways of dealing with trolls, manipulators, and bad actors in online communities! Kraut mentioned that social sanctions against these people don't always work, because they can have no impact or even make the behavior worse. As a result, online platforms need to find the system that works best for the content of their site. The claims that interested me the most were claims 7, 9, 15, and 19. Claim 7 stated that trolls who use their online presence to elicit reactions from other people are sometimes best dealt with when the entire community agrees to ignore them. While you do not want to normalize inappropriate behavior, sometimes giving the troll the reaction they are craving only makes things worse. I had never heard the phrase "Do Not Feed the Troll", but I thought it summed up the claim pretty nicely! Claim 9 reminded me of the issue of people's accounts being banned on TikTok. A lot of times, I will see someone on TikTok whose content the app deemed "inappropriate" or "against community guidelines" posting from a new or second account that has a different name. Like the claim says, banning people from platforms will only be effective if it's not easy for the person to create a new account and immediately re-join the community.

Claim 15 reminded me of when we talked in class about how displaying examples of negative behavior as a way of trying to decrease it can instead increase it, by making that behavior seem more common than it really is. Online community managers are faced with the decision of leaving some posts up while taking others down. Similarly, claim 19 states that how you display guidelines can also impact how people understand the norms of the platform or community. All of these claims made me consider how the apps I use control negative behavior to keep their users happy and following the guidelines in place.


QIC4: The main thing that I took away from this chapter of Kraut & Resnick is that while community rules should be strictly explicit, the enforcement of such rules should be communal and open. I often see complaints about an online community's mods being Nazis or having huge egos. It is important that moderators are seen as understanding fellow members of the community, and that enforcement is carried out in a transparent and consistent manner. This prevents resentment and retaliation, and keeps people from feeling detached from the moderators.

Reagle's paper on Wikipedia norms is fun because one of the first things I do when checking out an online community is read through humorous guides and commentaries that have emerged on the site over the years. After the first class of the course, I remember finding the No climbing the Reichstag dressed as Spider-Man article. While I enjoy articles like this, I wonder how many Wikipedians actually read and formulate their understandings of the norms of the site based on them. In my experience, this kind of article is usually written by seasoned members of the community to celebrate a culture that has long been well established.

Reagle's point on the use of humor to parody and discourage certain behavior is also interesting, since it shows how different communities deal with different kinds of unwanted behavior. One wiki-like site, the SCP Foundation, is a collaborative fiction-writing project that constantly deals with bad but enthusiastic writers who are also often underage. Because of this, many of the humorous guidelines are lists of writer stereotypes and cliche tropes found in their writing. At the same time, these satirical articles play into a mildly hostile community atmosphere that shows no mercy to poor writing. This can be intimidating for newcomers and discourage contributions. Similar to what Kraut & Resnick stress, for good-faith actors who just make poor contributions, minimizing the amount of attention they get while still giving them the chance to learn and grow is also important.

--LatakiaHill (talk) 02:42, 8 October 2021 (UTC)


Dario QIC 6

I appreciated the multi-directional approach to regulating user behavior that was proposed by Kraut et al.: limit damage, limit the amount of bad behavior, and encourage good behavior. Just focusing on one of these areas can only do so much, as humans can be stubborn and difficult to moderate. This perspective on regulating behaviors also reminded me a bit of our readings on punishment and rewards. Moderators of online communities can both punish bad behaviors, such as trolling, and reward good behaviors, such as assisting new players. It was interesting how the way in which rules are presented, including context, frequency, and wording, all plays a part in their effectiveness.

As I read through the Kraut et al. section, I was thinking of how I've seen the concepts practiced in communities that I've visited. For example, it is explained that Slashdot can hide negatively rated comments, and the same goes for heavily downvoted posts on Reddit. Also, the idea of moving conversations to a new location so as to not disrupt the main channel of discourse sounded very much like how Discord servers are organized. Go to pretty much any decently sized Discord server, and you're basically guaranteed to find a "misc" or "random" channel for keeping the general channel less spammed and more for actual discussion. Discord also has a "slow mode" that can be enabled for certain channels to limit the number of messages users can send in a period of time. This is like the "quotas" that Kraut et al. mention. Finally, as a small side note, I have never heard the term "gag" used to describe an action taken by a moderator to limit someone's chatting capabilities. I have always heard it referred to as "muting".
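
To illustrate the kind of quota being described, here is a toy sketch of a slow-mode-style cooldown; the 30-second window and the class design are invented for illustration, not Discord's actual implementation.

```python
import time

class SlowMode:
    """Toy slow mode: accept at most one message per user every `cooldown` seconds."""

    def __init__(self, cooldown_seconds=30):
        self.cooldown = cooldown_seconds
        self.last_accepted = {}  # user -> timestamp of their last accepted message

    def try_send(self, user, now=None):
        now = time.time() if now is None else now
        last = self.last_accepted.get(user)
        if last is not None and now - last < self.cooldown:
            return False  # rejected: the user is still cooling down
        self.last_accepted[user] = now
        return True

channel = SlowMode(cooldown_seconds=30)
print(channel.try_send("alice", now=0))   # True: first message is accepted
print(channel.try_send("alice", now=10))  # False: only 10 seconds have passed
print(channel.try_send("alice", now=45))  # True: the cooldown has elapsed
```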

The "Be Nice" article was, admittedly, a bit difficult for me to follow along with. However, the idea that "Unless there is strong evidence to the contrary, assume that people who work on the project are trying to help it, not hurt it" seems pretty unique to Wikipedia. It also seems like something that is very difficult to enforce/encourage of users. It's incredible that Wikipedia remains so open and collaborative while being based on norms such as this one. Maybe one reason for this is that, unlike Reddit, Instagram, or Facebook, there's a more central task people are working towards contributing to. Reading this article, I got the feeling that Wikipedia's norms are balancing on a very fine line. The norms are described as "playground rules" which, as we know from experience, are not always the most official. I think Wikipedia can function with a system like this because of it's well established nature, whereas a newly formed online community these days would have a hard time moderating users with a similar set of guidelines. -Ortiz.da (talk) 03:06, 8 October 2021 (UTC)[reply]

Ortiz.da, interesting, I didn't know Discord had a slow mode. -Reagle (talk) 17:06, 8 October 2021 (UTC)

Sunishka QIC 3

I enjoyed reading about Wikipedia's principles of "Good Faith" and "Neutral Point of View" in Be Nice. However, this belief does seem a little paradoxical to me, because in my time as a psychology major I have encountered numerous studies that appear to prove that true neutrality does not really exist. This makes me wonder whether it is actually possible for Wikipedia writers to feel absolutely nothing while writing or editing content. We certainly do not live in a neutral world. And isn't everything written, after all, a reflection of people's own evaluative reactions to the world?

I realize that putting one's good or bad feelings aside is feasible, but this is only possible if and when the persons in question are aware of their own feelings. And even if we decide to assume "Good Faith" and trust other Wikipedia editors to be objective when we are not, odds are that different Wikipedia writers might be putting their own interpretations (positive and negative) on Wikipedia until they all balance each other out and appear to be "neutral".

This is not me trying to pick out flaws in Wikipedia or assume that all Wikipedia writers are exhibiting "bad faith". However, I think the real reason Wikipedia appears neutral is the sheer number of interpretations and perspectives that it is made up of, rather than Wikipedians being truly "neutral". This is even acknowledged in the article, which refers to an essay, "Beware of the Tigers", and explains that "Wikipedians are cast as zoo keepers of sometimes difficult ideas."

However, I think the principle of "Neutral Point of View" sometimes requires Wikipedia writers to ignore their emotions, which might be more harmful, as it could lead to unconscious biases, compared with writers being more upfront about their stances and reaching a middle ground. The essay also acknowledges that there are numerous administrators on Wikipedia who seem to be ensuring that every writer gets a fair shot, so I do believe that the principle of "Good Faith" is not compromised.

-sunishka134 (talk) 11:21, 8 October 2021 (UTC)

Interesting response sunishka134, let's discuss in class. 🙂 -Reagle (talk) 17:06, 8 October 2021 (UTC)

Oct 12 Tue - Norm compliance and breaching

Dario QIC 7

One example of a platform making it harder for new accounts to join quickly would be Discord. Server owners can choose between several verification levels:

  • Unrestricted
  • Must have a verified email on their Discord account
  • Must also be registered on Discord for longer than 5 minutes
  • Must also be a member of this server for longer than 10 minutes
  • Must have a verified phone on their Discord account

This seems like a very effective way of preventing new account or bot spam in servers. It also keeps banned users from immediately rejoining with another account.
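
To make the cumulative structure of those levels concrete, here is a minimal sketch of how a server might apply them as a gate; the field names and logic are illustrative assumptions, not Discord's actual API.

```python
from dataclasses import dataclass

@dataclass
class Account:
    email_verified: bool
    phone_verified: bool
    account_age_minutes: float
    member_minutes: float  # time spent as a member of this server

def passes(account: Account, level: int) -> bool:
    """Each verification level adds one requirement on top of the previous ones."""
    checks = [
        True,                             # 0: unrestricted
        account.email_verified,           # 1: verified email
        account.account_age_minutes > 5,  # 2: account older than 5 minutes
        account.member_minutes > 10,      # 3: server member for over 10 minutes
        account.phone_verified,           # 4: verified phone
    ]
    return all(checks[: level + 1])

newcomer = Account(email_verified=True, phone_verified=False,
                   account_age_minutes=2, member_minutes=0)
print(passes(newcomer, 1))  # True: email is verified
print(passes(newcomer, 2))  # False: the account is only 2 minutes old
```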

The idea of cross community reputation systems was mentioned very briefly, but it caught my eye. This doesn't seem to be a feature in many (or any) online communities, though one could make the argument that a common Steam username between games is a case of this. Aside from being a logistical nightmare to coordinate between companies, having people use the same usernames across platforms may discourage people from joining many communities, as the appeal of anonymity would be far lessened. It could also deter users who are looking to have a new start, or want to keep their activity in different communities separate. It is interesting to note that many users voluntarily use the same username for multiple platforms. This could be out of convenience, fondness of the name, or for the sake of strengthening their online persona.

Another concept which I liked was the long term identifiers, which discourage users from deleting their accounts and making new ones. An example that came to mind could be things like beta items/titles that players receive in games for being early adopters. If these are tied to a user's account and can't be traded, then they tend to be highly sought after.

The readings on social breaching stood out to me as describing somewhat immoral ways to study things. As we discussed in class, the benefits of a study should outweigh the harms, which didn't seem to be the case in these social breaching experiments. Participants also weren't asked for consent, even though emotional distress was being inflicted on them. I think people should be careful doing such experiments so as not to cause the kind of emotional distress described in the Wikipedia article on social breaching. That being said, I did find the tic-tac-toe social breaching experiment particularly funny. A lot of the mentioned experiments seem like something you would see on a prank channel or show. However, I relate to the people who said that just getting themselves to do such strange, norm-violating things was very difficult and stressful. -Ortiz.da (talk) 01:16, 12 October 2021 (UTC)


QIC5: The design claims from Kraut and Resnick today mainly emphasize two ideas. The first is that rules are more likely to be followed if they are accepted and regulated by the community. The second is that communities should encourage long-term advantages and consequences, such as having a reputation system or making the initial attempt to join a community harder. I was once part of an online community where, in order to join, one had to read through pages of rules and guidelines and then answer a couple of questions to apply and be accepted. The catch was that there was a sentence hidden within the rules like "mention purple rain in your application". If you didn't include the passphrase (which was changed often), your application would be automatically denied. The application requirements themselves did not mention the phrase, and you would only be aware of it if you actually took the time to read the rules. I imagine this was a pretty effective way to weed out not just trolls, but also low-effort participants.
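
A screening step like the one described could be automated with something as simple as the following sketch; the passphrase, function names, and auto-deny wording are invented for illustration.

```python
# The phrase hidden in the rules, rotated often by the mods (invented example).
CURRENT_PASSPHRASE = "purple rain"

def screen_application(application_text: str) -> str:
    """Auto-deny applications that don't quote the phrase hidden in the rules."""
    if CURRENT_PASSPHRASE in application_text.lower():
        return "forwarded to moderators for review"
    return "automatically denied"

print(screen_application("I read every rule. Also: purple rain!"))  # forwarded
print(screen_application("I promise I read all the rules."))        # denied
```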

I am somewhat familiar with breaching experiments, since I did something similar for an Anthropology course, although it was not specifically ethnomethodology. Keeping in mind the Belmont principles, I think the crucial thing with breaching experiments is to understand what the experiment is trying to demonstrate and whether an intentional breach is necessary. For example, I of course know that if I were to walk up to someone eating and slap the food out of their hands, they would be mad. I don't need an experiment to know that. But seeing how people react to a stranger asking to take a bite of their food, while considering variables like the demographics of the people asking and being asked, might be insightful. It is also important to consider potential harm, since violating social norms can make someone very uncomfortable for a long time, even if it seems minor (like if an experimenter started chatting up someone at a urinal as a social breaching experiment).

I personally used breaching when learning about the QAnon community on 8chan. I basically posted random things, asked questions challenging or affirming others, and tried to see what sorts of posts people responded to the most, what invoked the most anger or support, and how people corrected me. I didn't think this was unethical, because the site was already one where trolling and hostile behavior were the norm; in many cases I broke the norms by acting nice and supportive! I think it helped me get a fuller picture of the social norms of a community that otherwise prides itself on how opaque it is to outsiders. --LatakiaHill (talk) 15:53, 12 October 2021 (UTC)


Daniel QIC 3

Speaking directly to Kraut's idea of design claims in regulating online behavior, I feel there is one case that is not mentioned that I personally am very curious about. What happens when someone complies with all of the rules and meets the moderator's requirements but is still clearly a troll in the online community? For example, in high school a group of individuals discovered a Facebook group labeled "The Dogs of Rockport", a completely harmless page where proud dog owners would post pictures of their dogs throughout the historic parts of Rockport. High school kids being high school kids, they joined this group, meeting the moderator's requirements and all regulations, and began posting pictures of things that were clearly not dogs. This obviously caused conflict in the group, but when the moderator messaged them and told them they would be kicked, they argued that they had not violated any of the stated rules and regulations. The online community failed because instead of going back to the drawing board, the moderator caved in and deleted the group. This also relates to the first half of the chapter and the saying "one bad apple spoils the whole barrel". It was terrible to ruin such a great group, but I can't help but understand their argument.

Kraut's design claim 27, "Prices, bonds, or bets that make undesirable actions more costly than desirable actions reduce misbehavior", is a completely valid claim. Sadly, the entire world revolves around money. Nobody wants to go out of their way to cause issues online if they are going to have to pay for their actions. For example, I grew up during what I believe was the most toxic of online communities: the pre-game lobby of Call of Duty in the years 2008-2013. There was a line that, if crossed, resulted in a punishment. If you were reported for violating the online rules (which means you really crossed the line in this specific community), you were banned for a minimum of 24 hours. For a teenager with an addiction to these games, that was a huge price to pay.

The violation of social norms was very interesting because it caught so many people off guard. People go with the flow in today's society and often already assume a response or action from other individuals. Therefore, when these norms are violated, the reaction to them is often comical, as the person does not know what to do. I can understand Dario's idea that the tests are immoral because they did not get consent. However, I believe that there was no true harm in the experiment, only a minor inconvenience. dannyryan33 (talk) 09:56, 12 October 2021 (UTC)


Oct 15 Fri - Community and collaboration

Pippa QIC 2

I found the Hill and Shaw reading interesting, particularly the part about the gender gap. I was astonished to learn that over 80% of Wikipedia editors in 2008 were men. While I knew that women were in the minority of Wikipedia editors, I had no idea that they were such a small percentage! The reading went on to say that even though the gap has gotten smaller, there is no doubt that Wikipedia editors are overwhelmingly male. The gender gap section raised a lot of interesting and relevant points, in my opinion. Firstly, I'm glad that there are Wikipedians who care about making the gap smaller and are focusing on making the platform more comfortable for people other than men. I think it's vital that a resource like Wikipedia is as inclusive and diverse as possible. Reading the gender gap statistic made me think about all the Wikipedia pages I've read before -- I wonder how many of them were written with a male-centered view. Would those articles be significantly different if they had been written by a woman? Obviously, an article written by a man is not necessarily less accurate or well-researched, but it definitely is concerning if men are heavily dominating the platform.

Secondly, the gender gap made me think about all of the Wikipedia and online norms we've talked about this semester. If those norms exist on platforms dominated by men, can we assume that some of them inadvertently discriminate against women? How do we fix that issue? As a woman who participates in online communities, I am always wary of how I engage with different forms of content and how I use my social media profiles. I don't think that any gender should be entitled to more power in any situation, but especially in a space as influential as Wikipedia. Again, I am happy that people are researching these issues, especially in regard to Wikipedia. As Hill and Shaw mention, "Wikipedia is the most influential and widely accessed free information resource on the internet as well as the most widely used information platform in human history" (Hill and Shaw, 2020). With this in mind, it is absolutely necessary that women are given the same opportunity to contribute to the platform as men.


Dario QIC 8

A concept that I had not thought of before is how the wording of "neutral point of view" makes having a "point of view" seem like a bad thing. I agree that a more fitting opposite to NPOV would be "biased", as bias is something that tends to have a negative connotation, while having a point of view is usually a good thing. Wikipedia editors seem to be encouraged to have points of view, but not to let those viewpoints morph into biased writing. The more I read about the norms, goals, and community of Wikipedia, the more I see it as a strange mix: something bordering on a way of life or religion, but also a place that does not promote one particular side over another. I liked how Reagle explains that the norms of Wikipedia have influenced his way of dealing with conflict in real life. I appreciate when online communities can have such positive, life-changing effects like this, even if they may be somewhat subtle or small. Another exploration of language that was interesting to me was the difference between respect and civility (it seems like understanding the core meanings of words is something that Wikipedians enjoy). Not everyone you meet on Wikipedia will merit your respect; that's just not realistic to expect. However, you can still try to be civil to everyone. I think this idea is something that we could apply to our non-virtual lives as well.

The article on Wikipedia as a "research laboratory" talked about how prevalent Wikipedia has become in our lives. I can't disagree with the statement that "an enormous portion of all successful internet searches would be failures if Wikipedia did not exist". Just the other day, I noticed someone on the T using Wikipedia to read about something. The site is all around us, every day, helping to win arguments or give quick summaries.

I know I would personally feel very unmotivated to continue contributing to Wikipedia if someone just reverted my edit or left a negative comment. I would probably think something along the lines of, "I was just trying to help. I guess they don't need my help though..." I would be much more receptive to the social or directional feedback messages that were shown in this study. These types of feedback would make me feel more connected to the community, and inspired to add something to it. -Ortiz.da (talk) 03:08, 15 October 2021 (UTC)


QIC6: Wikipedia's NPOV and good faith collaboration do not prevent conflicting opinions, but instead try to create a space where conflicting opinions can be civilly and productively discussed. Practices that assume good faith and welcome newcomers allow the community to remain largely functional when mistakes happen or arguments break out. It is interesting that Zhu et al. found that negative feedback does not decrease general motivation while encouraging more focal edits; I'm pretty sure that negative feedback would not be as effective on a site that didn't have the culture, policies, and guidelines of Wikipedia.

Learning more about the libertarian ideals of Wikipedia's early days and their influence on NPOV is interesting, especially after checking out what Larry Sanger is up to now. Sanger's principle of "avoid bias" is very different from the neutral point of view that seems to exist on Wikipedia today. "Avoid bias" means being as "objective" as possible and making the author's beliefs or argument opaque. NPOV seems closer to the idea of presenting different viewpoints and explicitly describing their arguments, values, and biases. It tries to present different viewpoints as neutrally and charitably as possible so that readers can hopefully come to their own conclusions. This approach seems to abandon the Objectivist ideal of one objective truth reached through pure reason, and it is interesting to see how the concepts of neutrality and bias have collaboratively developed over the years. Wikipedia is difficult to characterize as libertarian in many ways, being a non-profit collaborative project driven by volunteers under the fairly open Creative Commons license, but it probably wouldn't be what it is today without those founding principles.

--LatakiaHill (talk) 04:18, 15 October 2021 (UTC)


Sunishka QIC 4

The essay "The Most Important Laboratory for Social Scientific and Computing Research in History" did a great job of highlighting the positive aspects of Wikipedia as well as the aspects that need to do better. For examples, the gender discrepancies on articles viewed as "scholarly" is particularly troubling to me because I had always assumed online communities were less likely to be sexist (this was before I was made aware of incel communities and other problematic online communities). Benjamin Mako Hill and Aaron Shaw's essay also contained some fascinating information, such as the fact that Wikipedia has had a significant impact on world tourism and legal practice.

However, I thought the essay might have been overly optimistic at times. For example, the shift it describes, from educators forbidding Wikipedia use to encouraging students to participate on Wikipedia, is quite rare in my experience. Many instructors who have advised me against depending too much on Wikipedia still cite the rationale given in the article: that "Wikipedia’s open editing policy made its content inherently problematic, if not inherently incompatible, with formal institutions of teaching and learning." In some cases, I can relate to this concern. The quality of a Wikipedia article's editing is completely dependent on how effectively the page is moderated and cross-examined. This is (probably) why certain sections of Wikipedia are not as well developed as others. I can see how it would be problematic if a student cited a poorly edited page in their academic essay. The solution to this problem is also provided in the article: conduct more thorough inspection and editing on Wikipedia. I definitely think more students should take an active role in editing Wikipedia; however, I don't think the problem of the "reliability" of Wikipedia articles will go away any time soon. The difficulty with having such a large encyclopedia is that there will always be more pages to "correct," and because we are always learning new things, it will always be incomplete.

--sunishka134 (talk) 11:55, 15 October 2021 (UTC)


Oct 19 Tue - Reddit's challenges

QIC7: "In a perfect world, a thirty-four-year-old in soccer shoes wouldn’t have such fearsome power. In the world we live in, the least social-media executives can do is acknowledge that power."

While the higher-ups at Reddit claimed that Reddit's goal was to "become a universal platform for human discourse," it's clear that they did not foresee the great challenges of managing such a platform or the actual power that would come with it. While Chandrasekharan et al. (mentioned in Marantz's New Yorker piece) showed that the bans were effective in curtailing hate speech, the hectic process by which these bans were carried out speaks to the need to rethink how organization and moderation work online. Reddit alternatives like Gab and Voat never took off, but the belief that Reddit mods overextended their hand and handed out bans in an unfair and almost arbitrary manner persists. On every ideological side of Reddit, I remember seeing constant complaints along the lines of "Oh, X subreddit was banned just because they did this, but Reddit will still let Y subreddit exist?" Reddit suffered a lot of backlash from a lack of clear guidelines and transparency, such that when a subreddit was banned, other subreddits that violated the same rules were in the clear (“We can have those conversations in the future,” Ashooh said. “But we have to start somewhere.”)

I remember that most subreddits that were first put on quarantine (removed from r/all and threatened with a ban if the moderators could not prove that the sub had improved) did not attempt to re-orient themselves to Reddit's guidelines, but instead defiantly got worse. This was the case for both r/The_Donald and the left-leaning subreddit r/ChapoTrapHouse (both on the Wikipedia list of controversial Reddit communities). The r/ChapoTrapHouse appeal letter, written by the mod team, is dripping with snark (for example: "[W]e will make the CSS no longer change "Report" to "Snitch". With this change the report button will be unmolested"). The "Community Spotlight Program™", intended to showcase "the more positive and wholesome sides of r/ChapoTrapHouse!🙂", just meant that the mods pinned any thread that was titled "I shit in my pants" as a middle finger towards Reddit's "[expectation] to see notable and sustained community transformation before a successful appeal". These subs would rather have been banned as some symbolic gesture of free speech than listen to what they saw as unjust and unequal policies.

The more community-run efforts to self-moderate, as seen in r/place, have their own pros and cons, but more interestingly, they show a more dynamic way to organize online communities that draws much less backlash. The analysis of how "Creatives" and "Protectors" negotiated their goals and rallied communities to help was very fascinating. They were able to blend their own motivations for drawing on r/place with the motivations of others, finding different ways to mobilize group participation, including forming arbitrary teams based on things like color. Sudoscript, at the end of the article on r/place, gives an interesting insight: the destructive nature of the "void", when managed and reacted against by the rest of the community, found its place in the ecosystem as a way to tear down old structures and pave the way for new innovations. Finding a way to organically incorporate troublemakers so that their influence emerges as a positive one could be an interesting way to manage online communities (of course, painting pixels all black is very different in its destructiveness from, say, painting swastikas). Paying attention to how communities self-organize along certain hierarchies or ideas (like the nationalism of the US flag) could provide insight into how to manage and build online communities whose scale and social influence have seldom been seen in the past.

--LatakiaHill (talk) 18:17, 18 October 2021 (UTC)[reply]


Dario QIC 9

Something the article by Marantz pointed out that stuck with me was that, "To join Reddit, all you needed was a username that hadn’t been claimed yet. You could start as many anonymous accounts as you wanted, which gave rise to creativity, and also to mischief". I recently made a new Reddit account in preparation for my social breaching experiment, and realized that it is a bit odd that you can have as many accounts as you want associated with a single email address. While convenient and useful for separating online identities, I think it might be better to limit the number of accounts that can be associated with an email address to one or two, to help prevent spam.
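To make the idea concrete, here is a minimal sketch of what enforcing such a cap might look like on the backend. All names, the in-memory "database", and the cap of two are hypothetical; this is not Reddit's actual signup logic.

```python
# Hypothetical sketch: cap the number of accounts per email address.
MAX_ACCOUNTS_PER_EMAIL = 2

accounts_by_email = {}  # email -> list of usernames (stand-in for a real database)

def register(email: str, username: str) -> bool:
    """Allow a new account only if the email is still under the cap."""
    existing = accounts_by_email.setdefault(email, [])
    if len(existing) >= MAX_ACCOUNTS_PER_EMAIL:
        return False  # reject: too many accounts already tied to this email
    existing.append(username)
    return True
```

Something like this wouldn't stop throwaway email addresses, but it would at least raise the cost of mass account creation.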

Reading the Reddiquette page was actually surprising to me for several reasons. Firstly, I've been on Reddit for some time now, and I don't think I've ever actually visited and read the Reddiquette page. You can make a Reddit account, post, and browse Reddit without ever having read these rules, which I think is problematic. While a bit longer than just one or two lines, this page is definitely better than a long terms-of-service text block. I think it should be a requirement to have read the Reddiquette rules before continuing on to Reddit with a new account. I was also shocked at just how many of these rules are commonly broken on Reddit. Some, such as "Search for duplicates before posting" or don't "Make comments that lack content", are so ignored that reposts and cliche comments are basically ingrained into Reddit culture. Making people more aware of these rules could possibly help to clean up Reddit and lessen the aspects of the site that most people can unanimously agree are annoying (asking for votes...).

Reddit's r/place is fascinating to me. I think the data it provided has a lot of potential to be used to better understand online communities. It disappoints me how rude, explicit, or immature people can be online in cases like this, but I appreciate the good parts that came out of r/place: the coordination, the communities, the art, the stories. I think it would be interesting to run experiments similar to r/place with slightly modified conditions: maybe only black and white pixels are allowed? A bigger/smaller canvas? A longer time frame? Pre-defined teams?
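As a toy illustration (not based on Reddit's actual implementation; every parameter name and default here is invented), those modified conditions could be expressed as a configurable canvas model:

```python
from dataclasses import dataclass, field

@dataclass
class CanvasExperiment:
    """Toy model of an r/place-style shared canvas with adjustable conditions."""
    width: int = 1000
    height: int = 1000
    palette: tuple = ("black", "white")  # e.g., restrict to two colors
    cooldown_seconds: int = 300          # wait time between one user's placements
    duration_hours: int = 72
    pixels: dict = field(default_factory=dict)  # (x, y) -> color

    def place(self, x: int, y: int, color: str) -> bool:
        """Place a pixel if the color is allowed and the coordinates are in bounds."""
        in_bounds = 0 <= x < self.width and 0 <= y < self.height
        if color not in self.palette or not in_bounds:
            return False
        self.pixels[(x, y)] = color
        return True

# A black-and-white variant on a smaller canvas with a longer time frame:
experiment = CanvasExperiment(width=500, height=500, duration_hours=168)
experiment.place(10, 20, "black")
```

Each variation (palette, canvas size, cooldown, duration, teams) then becomes a single parameter to compare across runs.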



Daniel QIC 4

Reading the article "When Pixels Collide", I was amazed that so many anonymous individuals worked together to spark such a masterpiece of art. The meme "Dickbutt" that sprang up on the page was a masterpiece that only Reddit users would be able to accomplish. I also thought it was classic human nature that a territorial war began, fought by claiming ground with color in the image. It seems to be human nature that we always have to make a competition out of everything and turn it into a game. Given an online community with a blank page limited only by imagination, we as humans turn it into a territorial war.

In the "Controversial Reddit Communities" Wiki page I feel that Reddit (although not the intention) was the accessible dark web for everyday individuals. Only after a CNN report did the ceo start putting up restrictions on what is allowed to be posted on the pages. However, even though some of the topics are terrible and there should no longer be allowed on the internet Reddit still gave them a grace period in the form of a "Quarantine" to clean up their pages. Although I am a firm believer of no censorship and freedom of speech some of the pages and actions taken on Reddit should have been banned from the start. In my opinion the advertisement of underage nudity or support for terrorism does not deserve a quarantine period.

In the "Reddiquette" guidelines I think they are a great way to structure how people should behave on the page. Setting expectations on what is okay to post on Reddit without officially labeling them allows for the freedom that I believe makes Reddit so popular. I think that other social media sites should follow suite with these guidelines. For example, I have witnessed many controversial posts on Tik Tok that meet the criteria of the guidelines that should actually be taken down but flourished because they met guidelines. The action taken wasn't officially labeled. So therefore, it was not taken down at all. Where other posts that are completely innocent are taken down because of strict guidelines that they "Violate". Where the post can be completely innocent but because the rules are so strict the post is flagged by the algorithm.

--dannyryan33 (talk) 10:44 19 October 2021 — Preceding undated comment added 14:45, 19 October 2021 (UTC)[reply]


Hannah's Response to Dario (QIC 1)


Hi Dario,

I totally agree with your concerns about some of Reddit’s policies - the bot problem on Reddit is out of control, and half the posters on many subreddits I’m in are karma-farming bots. The whole karma system is really interesting and I wish Marantz had touched on it - I think it really impacts the way people interact on Reddit. Downvotes sometimes mean losing access to a comment section, posting privileges, or entire communities. Many of the subreddits I’m in have a karma requirement, meant to ensure the accounts posting are real people who have interacted on Reddit before. It’s almost reminiscent of that Black Mirror episode where the rating system determines every aspect of life, except in this case your ability to interact with others online is either hindered or encouraged depending on your Karma. That being said, I think the author did a fantastic job capturing why Reddit is so appealing - it feels authentic in a way that most other social media sites don’t. There’s no posing or filters or follower counts that matter - the anonymity breeds more authentic content. Am I The Asshole posts wouldn’t be nearly as honest if they were connected to the same account where someone posts pictures of their child. Overall, I really enjoyed the read and I think your concerns are totally valid.

Hannah VanillaPumpkin (talk) 16:28, 19 October 2021 (UTC)[reply]


Pippa QIC 3 The New York Times article about the subreddit r/Pizzagate hit close to home for me. Comet Ping Pong, the restaurant that Redditor conspiracists decided contained a child sex trafficking dungeon in its basement, is only a short walk from my home in DC. I passed it every day on the way to my high school, and happily attended multiple birthday parties there in middle and elementary school. In 2016, I was a sophomore in high school. I remember driving to school one morning and seeing Comet Ping Pong surrounded by caution tape and police cars. When I finally heard about how a crazy person had taken it upon himself to “investigate” a child sex trafficking ring he believed was being run by then-presidential candidate Hillary Clinton in Comet’s basement, I couldn’t believe it.

Learning about the various problematic Reddit communities this semester opened my eyes to the great irony of Reddit. Redditors expect their ideas to be respected just because they are shared by others. They preach about being an online community with strict rules and guidelines that users respect because they respect the platform. Then, not only will some of these users break these rules, they will go out into the world and do harm. The creators of r/Pizzagate felt like their voices had been unjustly silenced when their subreddit was banned. And yet, they saw no problem with one of their own community members entering a public restaurant with a semiautomatic rifle. I've never used Reddit, and I obviously understand that every platform has “good” and “bad” users. Still, I think something needs to be done to prevent the “bad” users from causing harm in the real world.

Favorite quote: “Is it possible to facilitate a space for open dialogue without also facilitating hoaxes, harassment, and threats of violence? Where is the line between authenticity and toxicity? What if, after technology allows us to reveal our inner voices, what we learn is that many of us are authentically toxic?”


Oct 22 Fri - Governance and banning[edit]

QIC8: Riot Games' research on League of Legends seems much more productive and ethical than the other cases we have seen before. Keeping the Belmont principles in mind, Riot's research on toxicity is more respectful (users are kept up to date with results and feedback), beneficial (it creates a healthier game environment), and justice-oriented (toxic game environments tend to target minorities and vulnerable groups on the internet).

The research itself also had an interesting point: most toxicity in LoL does not come from dedicated trolls, but from the "average person just having a bad day". Through priming, establishing norms, and providing concrete feedback when banning people, toxicity can be drastically reduced. While certain issues continue, I'm glad to see that they have decreased over the years; toxicity was a large reason why I do not play games like LoL.

Regarding the effectiveness of banning, I saw an interesting paper today on /r/all on Reddit: "Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter". While discussions about what sort of decision-making gets to determine who is banned are worthwhile, banning by itself seems to be effective at decreasing the overall toxicity of content seen by millions of followers on Twitter.

--LatakiaHill (talk) 02:07, 22 October 2021 (UTC)[reply]


Dario QIC 10

The parallel that was drawn between Wikipedia discussions and Quaker meetings was an interesting one. I like the idea of having silent moments before and after a meeting, as well as whenever things get too heated. This gives people time to think, reflect, and slow down.

Before reading the chapter on consensus, I had generally seen voting, polls, and consensus as closely related. I didn't realize that some Wikipedians see voting as "evil". Again, the theme of really understanding what a word means comes up here. A quote I liked that explained the difference between voting and consensus was, "Voting symbolizes, reinforces, and institutionalizes division…. while a decision by consensus includes everyone, reinforcing the unity of the group." I agree that voting can polarize a group; this is seen very clearly in American politics. Voting implies there are sides and splits people up to choose one, whereas consensus seems to be more targeted at the good of the group as a whole. It was interesting that while voting is disliked by Wikipedians, polling seems to be more accepted, as it can help facilitate discussion. A big part of Wikipedia is discussion, not just reaching a final decision, and this greater acceptance of polls reflects that.

The article on toxicity in League of Legends was a cool read. I appreciated the mix of games with psychology, online communities, and machine learning. I didn't know that Riot has an IRB (or that they ran experiments on such a large scale), and I'm glad they are so open about their research. I think this article is particularly relevant at the moment because very recently, the "all chat" in League was completely disabled in order to combat toxicity. This approach isn't a very glamorous one, as it just blocks off communication rather than trying to change certain player behaviors to be better.

Once again, word definitions are important when talking about banning and blocking on Wikipedia. It seems to me that "ban" and "block" mean something different on Wikipedia than in most other places online. I think of banning as a system-enforced way to prevent a user from being able to use a service, while a block is something an individual does to prevent interaction with a user. Wikipedia's approach is unique, but also seems difficult to enforce, especially if there are not a lot of moderators/admins to help do so. Does enforcing a ban really require individually watching a user to make sure they don't edit any pages?

That being said, I appreciate this in-between level of restriction, which limits the damage someone can do while not being a full-on mute/block. The fact that the community can decide to ban someone also seems to be a good idea.

- Ortiz.da (talk) 03:59, 22 October 2021 (UTC)[reply]


Pippa QIC 4 I found the Nature article on preventing toxic and harmful messages in League of Legends particularly interesting this week. While I've never played an online video game like that, I have many friends who do and who enjoy playing the collaborative, team-style games. Even as an outsider, I was well aware of the occasionally toxic culture of online video game communities before I took this class or read this article. Truthfully, it's part of the reason playing online video games never appealed to me. That being said, reading about Jeffrey Lin's efforts at Riot to prevent toxic behavior in League of Legends really impressed me. Sometimes it feels like the online world is too big and anonymous to really change, but that didn't stop Riot from trying to make things a little better. Reading this article after last week's discussion on Reddit changed my mind about how much progress can be made in changing the impact toxic people can have on others online. Previously, it seemed to me like it didn't even matter what happened to toxic online users because they would simply find a new platform or community to invade. After all, people who spout abuse and slurs online are not typically one-time offenders. Once they have gotten the attention and reaction they were looking for, they will keep seeking it. Riot used psychology to figure out how to change bad behavior without completely silencing the badly behaved individuals. I thought the introduction of the Tribunal and reform cards was genius! I agree that members of the community have to be part of creating the norms of the platform in order for them to be effective. Overall, I really enjoyed learning more about the research and its effectiveness at preventing toxic behavior.


Danny QIC 5

In the article "Can a video game company contain toxic behavior?" they mention the toxic behavior is actually a rare occurrence. These trolls account for less than 2% of the gameplay activity. I believe that this is the case because you are so dependent on the other users when playing the game of "League of Legends". If your team lacks communication or any sort of chemistry it is a guarantee that you will most likely lose. So in my opinion I do not think it was smart to use a game in the research that is so dependent on collaboration. I also want to make the point that personally these toxic players actually used to drive me to play more of the game. I thought it was extremely entertaining when I would antagonize and upset another toxic player. Maybe you can describe me as one of those "toxic players" but I personally think that it adds to the game and its appeal. It would be interesting to poll players to see generally speaking if they enjoy the toxicity and trolls in the game and see if they are entertained by the interaction.


"The Challenges of Consensus" is a great outline on how to reach a consensus even when you are presented with challenges. From my experience I would always resort to a democracy when making decisions. If the vast majority is in agreement with one another then the agreement should be met. If people who do not agree with the outcome and feel strong enough about their side of the argument. They should either excuse themselves from the group or start their own group and generate a following. I know the goal is to make sure that everyone stays within the group but sometimes you are unable to satisfy all needs, but it is important to satisfy the vast majority. dannyryan33 (talk) 11:14, 22 October 2021

Oct 26 Tue - Moderation[edit]

Dario QIC 11

One area that the Grimmelmann article touched on that I had never considered was the resource management aspect of moderation. If bots are filling up a server, or people are uploading tons of large files for no reason, then the servers of smaller online communities may get overwhelmed. I guess I may not always consider this a problem since I'm used to bigger social sites like Reddit, YouTube, or established games. Speaking of games, I have actually seen a case of a game suddenly increasing in popularity to exceed the capacity of its servers. The arena shooter Splitgate had a surge of popularity this year, which led the developers to implement a queue system for getting into the game in order to prevent servers from being overloaded (a minimal sketch of such a queue is below). In a case like this, where space for players is limited, keeping bots out of the servers is extra important. Another area where games relate to this article is how users have multiple different roles. Grimmelmann mentioned users flagging content as inappropriate, and a similar system is in place for many online games. Players can report others for various reasons: inappropriate chat, cheating, gameplay sabotage, etc. This makes players not just consumers of the content, but also moderators in a small sense.
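As a rough sketch of how such a join queue might work, assuming a fixed server capacity (the capacity number and function names are made up; this is not Splitgate's actual code):

```python
from collections import deque

SERVER_CAPACITY = 100  # hypothetical cap on concurrent players
active_players: set = set()
waiting: deque = deque()

def request_join(player: str) -> str:
    """Admit the player if there is room; otherwise put them in the queue."""
    if len(active_players) < SERVER_CAPACITY:
        active_players.add(player)
        return "joined"
    waiting.append(player)
    return f"queued at position {len(waiting)}"

def on_player_leave(player: str) -> None:
    """Free a slot and admit the next queued player, if any."""
    active_players.discard(player)
    if waiting:
        active_players.add(waiting.popleft())
```

The nice property of a queue like this is that it degrades gracefully: instead of crashing under load, the community just waits in line.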

Another new term for me was cacophony. Google defines the term as "a harsh discordant mixture of sounds", which at first seems a bit different from how Grimmelmann uses it. However, it actually makes sense: an overcrowding of content makes an ugly mix of posts that doesn't please anyone and makes it hard to pick out what the user really wants (similar to how individual sounds may be hard to pick out). Grimmelmann explained that filtering can help with this, calling it a less destructive form of deleting. If Reddit didn't filter its posts and was simply just the "new" post feed, it would be a lot less pleasant to browse. That being said, basic Reddit still lacks some filtering features, namely being able to block subreddits from your feed. The third-party app "Reddit is Fun" (or RIF) allows for this, which, in my experience, makes Reddit much better.
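Conceptually, that kind of client-side filter is very simple; here is a rough sketch (the post structure and subreddit names are hypothetical, not RIF's actual code):

```python
# Hypothetical feed filter: drop posts from subreddits the user has blocked.
blocked_subreddits = {"r/example_spam", "r/example_noise"}

posts = [
    {"subreddit": "r/python", "title": "A useful post"},
    {"subreddit": "r/example_spam", "title": "Noise"},
]

filtered_feed = [p for p in posts if p["subreddit"] not in blocked_subreddits]
# -> keeps only the r/python post
```

The content still exists, it's just not shown, which is exactly what makes filtering a less destructive form of deleting.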

I think a really important idea to keep in mind is that "[F]ostering 'a culture that encourages both personal expression and constructive conversation' is much more difficult." Communities should ideally be more than just neutral groups; they should have some sort of positive impact on their members' lives. Quickly banning users for small infractions and heavily filtering content is not only a lot of work, it also creates an atmosphere of fear in which users may very well just decide to leave. While it's easy to get caught up in the muting, banning, and reporting part of moderating, a good moderator also works towards promoting positive social interactions for the good of all community members.

Smaller online communities like FPF often have unique features not seen in the giant social media platforms. The idea of posting new content only once every 24 hours was intriguing to me. It seems like a mix of a daily mailing list and a "normal" discussion platform. This gives people time to think about what they've posted, which can be valuable. As Zuckerman and Rajendra-Nicolucci explain, "The slower pace encourages users to think more about what they’re saying and FPF has even had people contact them asking to retract a comment before it appeared the next day." I like this slower, more relaxed approach to an online community. Since there aren't as many people in a community like this compared to, say, a global subreddit, the number of posts each day would be a lot smaller, making the "daily digest" of posts easier to read through and less overwhelming.
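A toy sketch of that once-a-day model, assuming posts are buffered and released as a single digest (all names here are invented, not FPF's actual system):

```python
# Hypothetical daily-digest model: posts are held back and published in one
# batch, which also leaves a window to retract a comment before it appears.
pending: list = []

def submit(author: str, text: str) -> None:
    pending.append({"author": author, "text": text})

def retract(author: str) -> None:
    """Remove an author's pending posts before the digest goes out."""
    pending[:] = [p for p in pending if p["author"] != author]

def send_daily_digest() -> list:
    """Called once every 24 hours; publishes and clears the buffer."""
    digest = list(pending)
    pending.clear()
    return digest
```

-Ortiz.da (talk) 01:42, 26 October 2021 (UTC)[reply]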


QIC9: Grimmelmann breaks down the variables that go into moderation clearly and concisely. He argues that successful internet communities incorporate a balance of "top-down" and "bottom-up" moderation strategies. He specifically highlights how norm-setting is crucial to moderation and looks at Wikipedia and Metafilter as exemplars. In contrast, the LA Times wikitorial is a failure for its lack of moderation. Grimmelmann also presents Reddit as a mixed case, perhaps needing stricter moderation. While Grimmelmann focuses on the need for moderation, I'm curious if there are cases of *too much* moderation. I can think of a website losing an audience because of sudden shifts in policy (Tumblr), but not from strict moderation from the very start. It's probably harder to find examples of this, since if a community is too stifling, I guess it can't get popular or interesting enough to be noticed.

While Grimmelmann highlights norm-setting, FPF is a platform where strict moderation must work hard against the undesirable norms of the community. Posts are screened beforehand, and activity is slowed down. I wonder if this was really enough to change the norms of the local community. Although optimistic, even Rajendra-Nicolucci and Zuckerman aren't sure: "Is FPF replicable? Or is it a product of northern New England’s social norms and relative ethnic and cultural homogeneity? [...] A similar forum in New York City may be home to more profanity and casual conflict but that’s not necessarily an issue if it reflects the norms of the community it serves." If a community's "norms" are to harass or even hunt down others, can moderation change them, or will the platform just be ignored for less moderated alternatives?

--LatakiaHill (talk) 05:18, 26 October 2021 (UTC)[reply]


Sunishka QIC 5

I found the article on "THE VIRTUES OF MODERATION" by James Grimmelmann very informative. However, I wish the author had delved deeper into some of the ethical issues online moderators face, instead of mostly focusing on the positive aspects of online moderation. For example, Grimmelmann says that "excluding the heaviest users, for example, hurts productivity and openness while also reducing costs" but this does not address the fact that on some websites the heaviest users are usually the more controversial ones. Instead the moderator's tolerance towards users based on their popularity rather than their content is assumed to be an acceptable form of online policing.

People like Violentacrez were allowed to use Reddit because of the traffic they generated for the website. Similarly, I remember reading that one of the reasons Donald Trump was not banned from Twitter sooner was that Twitter was worried about the potential loss of market revenue if they banned him. Both of these cases featured the overuse of a platform in ways that Grimmelmann warned about: congestion and manipulation on Twitter by Donald Trump, and abuse on Reddit by Violentacrez. However, their actions remained unmoderated for a long time because they made the respective websites more popular. This is an ethical dilemma that needs to be addressed more often when we are talking about moderating online communities.

While most online communities have a mixture of manual and automated moderators, I think an increase in automated moderators would lead to a safer and more unbiased internet space. Since heavily moderating any space needs to be done while being cognizant of people's digital freedoms, I think we do still require manual moderators, just more unbiased ones.

--sunishka134 (talk) 11:52, 26 October 2021 (UTC)[reply]

Daniel QIC6

In the article "The Virtues of Moderation" Grimmelmann touches on the subject of "Infrastructure Capacity". When a cite is being used by a number of individuals it can often become congested and slow. This usually results in the community to become frustrated and demand for a larger capacity and infrastructure in the online community. The problem with this is that it usually comes at a cost in order to make these changes to the community. If the community has the ability to make these changes and pay for the infrastructure that is needed the community remains happy and the congestion goes away. If the community cannot afford to make these changes and charges its community members a price they have the chance of losing key parts of the community because they either do not want to pay or the fee that needs to be reached is not met and therefore the congestion continues. Resulting in angry community members and the potential for the community to break down. My question is to the communities that do not charge for usage. Communities as large as Instagram, Facebook, or even the Call of Duty game War-zone do not charge for usage. I cannot comprehend how these communities are able to afford to stay afloat. Although I know they make money through advertising and in game purchases etc, but the operation is so large they must be pulling in more money inn some way to keep the software engineers paid and the community un-congested. Lastly, from a business point of view, the benefits of charging for usage within the community would far outweigh the negatives. You may lose some players but the vast majority of individuals would be more than willing to pay for these platforms. Which inn return would yield huge profits.

In the article "It's not Always a Beautiful Day in the Neighborhood" I think the model of moderation in the application of FPF was ideal compared to the site of Nextdoor. On the platform of Nextdoor the moderators reacted to the posts and reviewed them only after it had been posted. Mostly when it is brought to the attention of the other users of the site. FPF is a perfect example of how to create a location that posts content that is relevant and useful to the community that is viewing it. They have a team that reviews every post prior to it being allowed on the page. Since the sites are similar in the sense that they work with local communities in the search for buying or a service I believe the model of FPF is ideal. When it comes to social media sites that are meant to spark up conversation or sharing ones experience I do not think that this would be the correct model. Freedom of speech comes into play in these scenarios and I am a fan of lack of filtration as well. However, in these two examples the FPF model is stronger because it suites the desired outcome. Just like freedom of speech is an amendment but you cannot yell "fire" in a movie theatre.

--dannyryan33 (talk) 11:58, 26 October 2021

Oct 29 Fri - Moderation and U.S. law/policy[edit]

QIC10: While early proponents of total internet freedom like Barlow believed that government regulation would only stifle free speech and promote inequality, we now know that free speech and moderation are not clear-cut and simple issues. Both Grimmelmann from last class' reading and Ziniti look at the case of TheDirty.com, and how the protection of Section 230 has been challenged and qualified. Government and private institutions must be careful when regulating harmful content while mitigating the consequences that organizations like the EFF warn of. While I personally disagree with broader libertarian ideals, Barlow's suspicion of regulation comes from legitimate concerns.

For example, Newton discusses how the regulation of online prostitution has inadvertently made the profession more dangerous for sex workers. While largely well-intentioned (although the voices and concerns of sex workers were probably never considered by legislators), crackdowns on things like sex trafficking can unintentionally lead to more extreme markets forming. Similarly, the regulations and government efforts of the War on Drugs have only led to ramped-up smuggling operations and the further destruction of already vulnerable groups. While I am also skeptical of delegating these markets into the hands of large corporations, moderation principles discussed in this class, such as promoting good-faith community norms and engaged moderation from members of the community, would help create a much healthier online environment.

--LatakiaHill (talk) 20:15, 28 October 2021 (UTC)[reply]


Pippa QIC5

I really enjoyed learning more about Section 230 and the debate on online communication moderation -- most of our class discussions around this issue have concluded that the lines are simply too blurred for any fair rules to be put in place. So, how is there an actual law about content moderation on the internet? To me, Section 230 is important because it prevents social media and internet platforms from being held accountable for what their users post and comment. Realistically, social media platforms like Twitter and Facebook are being used 24/7, 365, in almost every country on Earth. By that logic, it wouldn't make sense if, every time a user posted something offensive or inappropriate, the platform could face legal consequences. After all, there are a lot of bad people on the internet! However, someone should be held accountable for offensive comments and hate speech that gets posted. Those comments and posts can have real-life consequences and cause real damage to users. This, of course, is why most social media platforms have community guidelines that prevent bad behavior from going unpunished. Similarly to Erik, I don't generally agree with the Libertarian point of view that favors a lack of moderation online. However, there are certainly valid concerns when it comes to letting websites and platforms police how users interact on the internet and what users can and cannot post. Clearly, the lines are still blurred on this issue!

I thought the example from TheDirty.com was interesting because it showed how Section 230 can be interpreted differently depending on the specific context of the incident. One judge chose to hold the website accountable because the owner/editor was directly involved in spreading the false rumor about the cheerleader. In the other case, the website was not held accountable because it was not concluded that the website's owner was involved enough in the specific incident.

I was surprised to learn that a lot of politicians are working hard to change Section 230 so that certain types of posts would not be able to be taken down. While I don't necessarily agree with this idea, I can see how people could see that this is a violation of free speech. On the other hand, platforms like Twitter, Facebook, and Reddit are private companies that are legally allowed to police content on their websites however they want to. Overall, I think it's a really important debate and I think anyone using the internet should understand how laws like Section 230 impact their use of certain websites and platforms. I am definitely interested in seeing where the debate goes and I look forward to following any changes that may come.

Pippalenderking (talk) 01:10, 29 October 2021 (UTC)[reply]


Dario QIC 12

The "Declaration of the independence of cyberspace" was honestly kinda cringy. It felt like it was written by a teenager who was trying to sound cool. Some of the statements in it are questionably valid too. For example "You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces." The government literally made the internet, and a lot of government funded research has led to the creation of new internet technologies. Trying to claim that the government had no part in the formation of cyberspace is just wrong and ignorant. Another part that stood out to me as questionable was, "In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost." Obviously, there are costs to distributing things online. On the physical side of things, there's server and cable maintenance, hard drive space, processing power, electricity, etc. There are also costs to distributing things online that are less obvious. For example, once something is online, it's there forever. Finally, there can be costs to one's reputation and emotional/mental health.

I didn't realize that Section 230 is so short. I've heard a lot of talk about it, but I don't think I've ever read through the entire section, so I'm glad I did. The section was more readable than I expected, considering that it is a legal document. However, there's definitely a lot of room for it to be interpreted, even with the definitions provided at the end of the section. One of the findings in Section 230 is that the internet has already flourished with little regulation, which can serve as an argument for keeping internet regulation low. That being said, I think it would not make sense to keep internet regulations as scarce as they were when the internet was just starting up. The internet has changed so much since then, reaching all over the globe, and being used for things, good and bad, that were never considered at its start. -Ortiz.da (talk) 02:52, 29 October 2021 (UTC)[reply]


QIC 7 Daniel

In the article "A Declaration of Independence". I agree with Dario in saying that the article seemed to have been drafted by a teenager. In no way would an article of this nature be taken seriously by government officials. I can almost see this article as being flagged by the FBI and the creators to be put on a watch list. Stating that the internet is its own entity and that the government has no control over what occurs on the internet is an irrational claim. If you are using or accessing the internet on the soil of any of the named countries they therefore have the right to police what is occurring on their networks. The statement that caught my eye was; "We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before." They are talking about cresting their own civilization online using the networks and resources that the government in the end is supplying to them. In a way it's almost a treason act but it's not to that extent because in my opinion the message they wrote was laughable if seen by the government.

The written law; "47 U.S. Code § 230 - Protection for private blocking and screening of offensive material" explains all of the government funded programs and the resume as to why they have such power online. Looking at this after reading "A Declaration of Independence" further proves my point on their claims to be invalid. Basically everything is mentioned as to why the government is allowed to monitor what is occurring online. However, if I were to criticize the written laws. I believe that they designed it perfectly to be very vague so that there are no loop holes. Especially dealing with the internet individuals will definitely find a way to find a way to stretch the rules. This creates an argument for the user as well. If the rules are not descriptively laid out for them and something is flagged and taken down, I believe they are due a reasoning. Especially if the person has spent a lot of time and potential money on creating the content and it is taken down. The lack of detail in the law is both by design but personally I feel is too vague therefore I can see the argument from the creators.

-dannyryan33 (talk) 11:18, 29 October 2021 (UTC)[reply]


Sunishka QIC 6

As long as people have flaws, a world without any rules would lead to chaos. John Barlow's article "A Declaration of the Independence of Cyberspace" had no arguments, no facts, and zero evidence about the positives of a lawless digital space. Instead, he backed his entire essay with the statement "This is what I want." It reminded me of the many anti-maskers who ignore all of the evidence in front of them and all of the dangers that their choice brings, simply as an excuse to exercise their "freedom".

There was also a hilarious statement in this essay that made me laugh for a solid minute: "Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion." Does he really not think there is a world outside of his own computer screen? If physical coercion were an impossibility, why is Julian Assange (creator of WikiLeaks) being hunted down by the CIA? Why is he in court, waiting to see what they decide to do with his future? Even if someone were anonymous on the internet, it's still superficial anonymity: people leave digital fingerprints, such as their IP address, which can be traced by a teenager. Things people do online can and should have effects on their real lives. Certain digital freedoms such as the freedom of speech should be protected; however, the freedom to avoid any consequences of one's words and actions is unachievable. For example, trolls are free to bully and harass whoever they want, but the consequence is getting blocked or banned. The digital community has always had rules; John Barlow just never understood them.

He also makes a distinction between digital natives and digital immigrants (people who grew up before the internet was a regular thing). However, most digital natives do not see the internet as something "outside" of real life. It is a part of our everyday life and, like everything else in life, it is governed by rules and regulations. To pretend otherwise is borderline delusional and potentially dangerous. Most of the time, the people online who cry about losing their "freedom" are the ones involved in something unlawful or unethical.

-sunishka134 (talk) 11:30, 29 October 2021 (UTC)[reply]

Nov 02 Tue - Student nominated topic[edit]

Hannah Hostetter QIC 2 There were no readings assigned today, but since today is a student-nominated topic, I thought I'd share my thoughts on one of the topics that was brought up a few weeks ago.

COVID-19 vaccine misinformation has been spreading rapidly across Facebook, shared by concerned and misguided parents and older people (usually baby boomers). Unfortunately, the anti-vaccine movement existed long before COVID and the pandemic, but more and more news sources and political figures like Tucker Carlson and Fox News have been encouraging vaccine hesitancy for multiple reasons (none of which are legitimate). This creates an echo chamber where people are surrounded by conspiracies, lies, and misinformation. The web of vaccine hoaxes on Facebook is wide and extra sticky, even when warnings pop up that those claims have been proven false. I wonder how far the government can go to shut it down without hurting free speech.

VanillaPumpkin (talk) 15:44, 2 November 2021 (UTC)[reply]

Nov 05 Fri - Debrief: Social breaching[edit]

Pippa QIC 6 At first I was very excited about the idea of doing a fun social experiment on my friends and Instagram followers. I thought that my followers would have interesting responses to my off-topic comments and would find them both weird and funny! When it came time to get started on the project and begin commenting, I got really nervous. I was worried about what people would think of me and found myself deleting the comments before I had the chance to post them. I wasn't sure how my followers would react to seeing off-topic comments on their pictures -- maybe they wouldn't find them funny at all and would only find them weird! I'm not usually someone who gets embarrassed easily, but when it came to leaving an off-topic comment on someone's post, I was really insecure about what people would think of me. Would they unfollow me? Would they delete my comment? Would they think I was making fun of them or their post? Would they send me a DM asking if I was ok? In the end, I powered through and made enough comments to satisfy the requirement and chose to let my embarrassment and insecurity go. Nothing particularly exciting happened after my comments went public -- I got a response from a few people, but they were only responses to my questions without any significant reaction to how random the question was. Most people only liked the comment and didn't respond. In the end, after being so stressed that it would be embarrassing for me, I was disappointed that people's reactions hadn't been more exciting! Overall, I enjoyed the assignment and the results showed me that my followers really care about what their feeds look like -- so much that they ignore random content that doesn't fit in with the vibe! Pippalenderking (talk) 00:57, 5 November 2021 (UTC)[reply]

Pippalenderking I wasn't expecting a QIC for today as there are no readings to speak of, but you can still count it. -Reagle (talk) 16:13, 5 November 2021 (UTC)[reply]

Nov 09 Tue - Newcomer gateways[edit]

Sunishka QIC 7

The challenges of dealing with newcomers by Kraut et al. proposes a good formula for welcoming newcomers to a community and maintaining relationships with them. Their description, particularly of the halo effect, was quite relatable, especially as it is a phenomenon I've lately witnessed in myself. I sometimes assume brands are ethical just because they use buzzwords like "organic" and endorse some of the issues I care about, only to ultimately realize they are still very unethical in other ways.

I also liked their points about the discrimination between "valuable" and "unwanted" members. I expected this to be a bad point; however, the examples given in the text convinced me that certain groups are, and should remain, more exclusive to the people they were designed for, keeping out "spectators", "trolls", and other unwanted members. My main concern about closing off groups to others (especially online) is that it poses the potential risk of the members ending up in an echo chamber. For example, most subreddits or Facebook groups usually have little to no space for dissenting opinions and are usually composed of people who have the same thing to say. However, employing screening methods to weed out the spectators sounds like a plausible option.

Having said that, some of the "challenges" of dealing with newcomers came off as a bit too harsh to me. For example, stating that newcomers "ask redundant questions in discussion groups" or cause the "virtual death of fellow group members" does not seem like a valid criticism of people trying to be a part of an online community. These remarks infantilize newcomers and portray them as liabilities that need to be hand-held by the veteran members of the community. However, in my (albeit limited) experience with online communities, newcomers and older members rapidly form an equal connection (not including moderators) based on the shared factors that drew the newcomers into the online community.

Overall, I think this reading provided me with a better understanding of the labor that goes into policing and screening members of an online community. I also think the writers felt a little too strongly about the new online members' inexperience and were quick to interpret that as probable inexpertise. However, the studies and trends discussed in the reading, were both relevant and informative.

--sunishka134 (talk) 20:42, 8 November 2021 (UTC)[reply]

HANNAH'S RESPONSE TO SUNISHKA, QIC 3 I completely agree with you about the author's description of newcomers. While it can definitely be challenging acclimating to a new message board, chat server, or other online community, most people are pretty fast learners. I like to believe that everyone, including newcomers, can bring value to a conversation. Additionally, most message boards have rules, tags, and other information that educate the newcomer before they even make an error.

VanillaPumpkin (talk) 17:51, 8 November 2021 (UTC)[reply]

---

QIC11: Looking at which design claims by Kraut et al. are present on Dreddit's recruitment page, and which are not, shows Dreddit to be a community that values drawing the "right" newcomers rather than just as many as possible. For example, Dreddit does not seem to do active recruiting or show how many members it has. However, it does have strict requirements for joining that demand either external credentials (existing reddit and EVE accounts with a certain amount of in-game currency, or a member recommendation and referral) or a willingness to put in time and effort (the arts & crafts assignment). The goals and values of the community are clearly laid out so that players can self-select and decide if Dreddit is right for them. The niche and complex game requires dedication, and most EVE corporations would likely prefer motivated and effective users over even the average gamer.
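As a rough sketch, that screening could be expressed as a simple eligibility check; the field names and the ISK threshold below are placeholders I made up for illustration, not Dreddit's actual criteria:

```python
def eligible(applicant: dict) -> bool:
    """An applicant qualifies via external credentials, a member referral,
    or by putting in the effort (the arts & crafts assignment)."""
    has_credentials = (
        applicant.get("has_reddit_account", False)
        and applicant.get("has_eve_account", False)
        and applicant.get("isk_balance", 0) >= 5_000_000  # placeholder threshold
    )
    return (
        has_credentials
        or applicant.get("member_referral", False)
        or applicant.get("completed_assignment", False)
    )
```

Encoding the rules this explicitly is part of what lets players self-select before they ever apply.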

This comes with the trade-off that the community is likely to remain small; the Dreddit subreddit indicates that recruitment is now open to all players with reddit accounts, not just accounts that are active (I'm not sure which is more up to date, the Reddit post or the wiki). The low activity on the subreddit suggests that the community has either dwindled or has grown and detached from Reddit itself.

An interesting detour while I was browsing the Dreddit wiki was a page describing notable events the community was involved in within the game, which mentioned an EVE community run by the forum Something Awful (Goonwaffe). While both communities select players heavily, the difference between Dreddit's and Goonwaffe's wikis is very noticeable: while Dreddit's splash page humorously suggests that it is always recruiting, Goonwaffe's intro page has a large header: GoonWaffe is NOT a publicly recruiting corporation. The language is also much more hostile and unwelcoming ("If you consider yourself a "lurker" we do not want you, go away"), and the wiki introduction has more about what NOT to do than about what should be done. Potential newcomers are preemptively labeled as possibly being "unique butterflies" or "special snowflakes". I am curious about the effectiveness of this kind of recruitment versus the more welcoming attitude of Dreddit; in some cases, this kind of culture and initiation process can be appealing. --LatakiaHill (talk) 03:12, 9 November 2021 (UTC)[reply]


QIC 8 Daniel Looking into Kraut's "The Challenges of Dealing with Newcomers", I agree with four and a half of the five basic problems. The issue I have resides with the "Selection" section. I agree with the first part, which states that self-selection is important. To summarize, self-selection means the individual joins the community by choice, based on whether or not they are drawn to it. This is a completely fair way of selecting into a group. However, I do not agree with the second half of the section: "Or it may occur through screening, in which the community screens out undesirable members, while encouraging or selecting the others" (Kraut). I understand the idea behind wanting to keep a community clean and deter potential trolls. However, the potentially unethical decisions that could occur through this screening are what I feel is unnecessary. Individuals should be allowed access to these communities; however, since they are new, put them on a short leash and monitor them closely. If they act in a way that is not in the group's favor, pull the leash and close off access. The idea of screening for an online community does not sit well with me for this reason.

Drawing to the "Dreddit" guidelines of how to join the community it has a rather intense screening process. Although this may contradict my point about being against the screening process I feel that they have laid out the required credentials clearly and ethically. They have enabled a way to create a following that will only allow members into the community with credentials that make sense. I still feel everyone should have equal opportunity to join these online communities. They find a gateway away from this issue by if you do not meet the requirements for the screening, you are able to get a recommendation from someone that does have these qualifications. This is the most ethical example of a screening process that I can think of. A connection that I have to this sort of screening process is often dealt with through credit unions. Credit unions often require some sort of involvement in an existing group in order to gain their resources. However, you are able to obtain a recommendation from a current member and upon approval will be let into the union. I believe this is the best way of monitoring these groups. Dannyryan33 (talk) 15:07, 9 November 2021 (UTC)[reply]


Dario QIC 13

In the application guide for Dreddit, I noticed that one of the ways you can apply is by having a recommendation from a member. This reminded me of the "mentor" idea that was briefly discussed in class. If an experienced community member is held responsible for the actions of people they invite, they are more likely to invite individuals who are serious about being a part of the community, as they don't want to hurt their own reputation. Something that felt a bit off about the Dreddit application guide was that they claim the community is chill, but then proceed to write several paragraphs about how to join. This could potentially scare off newcomers. However, it makes sense that a community like this would have some requirements for joining, as Eve Online players can be very dedicated, and the conflicts they have can involve large amounts of real money.

The 7 Ages of Wikipedians article was kinda cool since I could see where I fit into it (as a WikiChild). However, I did see some problems with it. First, the amount of Wiki speak was kinda obnoxious and annoying. Second, and more importantly, there was a bit of snarky superiority here. It felt like the authors were trying to say "haha look at us, we're quirky. If you don't like it here you can leave". This attitude could be seen as unwelcoming to new Wikipedians. Online communities should be careful with how much of their inside jokes and humor they present to newcomers. That being said, it's unlikely that most newcomers to Wikipedia would be viewing this page, so it seems like more of a "joke about ourselves and the newbies" page.

Kraut et al. mentioned that interpersonal appeals can be highly effective in recruiting newcomers. I believe one reason for this, which didn't seem to be mentioned, is that being recruited through an interpersonal appeal means that, as a newcomer, you know you have an experienced member to help you get initiated into the community. The person who personally invited you can help answer all of your "dumb questions" and keep you from getting lost or overwhelmed. This can be especially useful in online game communities, where there are a lot of controls, interfaces, and skills to learn.

-Ortiz.da (talk) 16:00, 9 November 2021 (UTC)[reply]

Nov 12 Fri - Newcomer initiation[edit]

Daniel QIC 9

Speaking on Design Claim 17, "Entry barriers for newcomers may cause those who join to be more committed to the group and contribute more" (Kraut), I believe this is completely true and a very good model to follow. If people feel that they have a tie to a group, or have used their time or other resources to become part of it, they are substantially more likely to stay in the group.


We see Design Claim 17 in the experiment we looked into as well. I thought that "The effect of severity of initiation on liking for a group" was very interesting, and the results yielded exactly what I would expect. People who go through these severe initiations feel a sense of accomplishment: they were able to make it through something they feared, and the people joining alongside them grew closer in the end. I personally do not condone initiations, or in this sense a form of hazing, but I can see how the results showed them to be effective. The thought that everyone in the group has at some point experienced the same thing you have brings a sense of security; you are now able to relate to them personally. The authors suspect cognitive dissonance is the reason these initiations are so effective: mental discomfort, when experienced with others, can bring individuals closer together. Dannyryan33 (talk) 03:21, 12 November 2021 (UTC)[reply]

dannyryan33, be careful of "very interesting" -Reagle (talk) 18:21, 12 November 2021 (UTC)[reply]

QIC12: The article on cognitive dissonance and initiation was very interesting, and Kraut et al elaborate on its applications to online communities. However, as the other design claims elaborate, it is probably better to just have newcomers be welcomed and treated well than go through a harsh initiation process! Doing so sets a good precedent for newcomer behavior and is likely to have people stay for longer and contribute more. While Kraut et al in Design Claim 22 advocate for collective socialization tactics, they also note that places like Wikipedia largely lack these formal procedures. Although WikiEdu ran us through various tutorials, this is probably not the experience of the average Wikipedian.

Going off the table of the six dimensions of organizational socialization tactics, Wikipedia socialization is fairly individual, informal, and disjunctive unless newcomers opt in to certain groups like the Teahouse and various WikiProjects (assuming they know enough already to parse through wikis in the first place). However, there are pages that give clear guidance on the steps of writing an article, and informal concepts like WikiAge that give a sense for the sequential stages that a new editor might go through. I'm curious why formal socialization methods are not prevalent among communities like Wikipedia. Could it be push-back and lack of consensus over what the formal methods should look like? Or are they around, just not accessible and easily found?

--LatakiaHill (talk) 04:12, 12 November 2021 (UTC)[reply]

LatakiaHill be careful of "very interesting" -Reagle (talk) 18:21, 12 November 2021 (UTC)[reply]

QIC #4 Hazing is illegal in 44 states, and the majority of Greek life worldwide has to undergo anti-hazing training. As a member of a sorority here on campus (let's go Chi O!), reading about the harassment and general distrust of newcomers reminded me of how fraternities used to treat new members. Pledges still have to do things like wear a suit to class or clean up after large gatherings, but the danger and intense humiliation is no longer supposed to be a factor in the recruitment and initiation process. The way newcomers are sometimes treated in online communities is problematic in the same way that hazing is: humiliation is not a long-term draw for pledges or new community members, and it can create shockwaves of harm. Isolation as a newcomer can, as Kraut said, lead to a content desert later on because there are no more regular contributors. If the hazing is so intense it causes pledges to drop out of recruitment and initiation, that fraternity would lose an entire pledge class and be much weaker because of it. With the ease that online communities can be joined or left, places like Wikipedia risk failing completely if newcomers are treated badly and leave. VanillaPumpkin (talk) 14:11, 12 November 2021 (UTC)[reply]

VanillaPumpkin, what number is this? -Reagle (talk) 18:21, 12 November 2021 (UTC)[reply]

Pippa QIC 7 I really like the connection @VanillaPumpkin: made between newcomer initiation in online settings and hazing in the "real world". Every college student, whether they are a part of Greek life on campus or not, knows that hazing is a common practice in many fraternities and sororities on campuses all over the world. Though some forms of hazing can be harmless, there are many incidents in which people are seriously injured or even killed as a result of initiation activities. As Hannah points out, if newcomers do not like the way they are being treated, they have the option to leave the group. In online communities, users who don't see the benefits of membership outweighing the disadvantages might leave the group and encourage others to leave as well. Websites and online spaces that already struggle with recruitment and retention could be wiped out completely if they do not treat their newcomers with respect. In this way, newcomers have a certain amount of power in virtual communities. No matter what, it is always better for newcomers to be treated well because it will increase retention in the community.

As @LatakiaHill: mentioned, socialization on Wikipedia is generally pretty informal and individual, but there are exceptions for users who join specific communities on the site. As a newcomer to Wikipedia, I am also curious as to why there isn't a more formal socialization process, especially since there are a lot of specific rules and guidelines to follow as a user. This ties into our discussion on social breaches. It seems to me that if a user were to try to socialize in an uncommon way, thereby breaching the norms of the site, they might be informed of that rather quickly. In my experience, Wikipedians are eager to collaborate and share suggestions with other users. My own Wiki article was heavily commented on only a few days after it was published into the main space. Are socialization methods something Wikipedians discuss often? Pippalenderking (talk) 15:35, 12 November 2021 (UTC)[reply]


Sunishka QIC 8

I honestly think that people who undergo a severe initiation process trauma bond over the severity of it and extend that bond to the group itself. I've heard stories about forced drinking and alcohol poisoning way too many times to see any sort of silver lining in a severe initiation process, and this applies to online communities as well. I remember reading a story about an online group for students who had recently been accepted into Harvard: someone held an initiation process for these students that was so inappropriate that all of them were expelled before they could even start their first semester.

I know most frats, sororities, and even online communities are a lot more cognizant of their actions these days and that people can legally report hazing. However, I think sometimes the potential of actually having a bond with these people is what prevents most of those undergoing something traumatic from reporting it. As Aronson and Mills reported, the appeal of group membership literally results in cognitive dissonance, and the group members convince themselves that the initiation was not so bad.

There is also a trend of frats and sororities keeping their initiation process a secret. I know some people who are a part of these groups, and they won't tell me what their initiation process was even though it was almost 4 years ago. I honestly do not think this should be allowed because, not only does it allow cognitive dissonance to fester among the members, but it also prevents outsiders from figuring out whether what happened was actually okay.

--sunishka134 (talk) 11:15, 12 November 2021 (UTC) — Preceding unsigned comment added by Sunishka134 (talk • contribs)

Sunishka134, remember, a hyphen is not an em dash. -Reagle (talk) 18:21, 12 November 2021 (UTC)[reply]

Nov 16 Tue - RTFM[edit]

QIC13: As Reagle shows, the obligation to know in a geek community can offload newcomer questions, but it can also shield controversial issues worth re-discussing from being challenged. Questions like whether Python should use significant whitespace and tabs are tiring and annoying for older members to sift through, and the manual is then an authoritative way to shut down and redirect newcomers. But when newcomers are put off by possibly sexist or racist practices in a community, the manual's authority can then become a way to deflect and shut down opposition. Beyond RTFM, a similar practice is the expectation to lurk. This means that new users are not expected to ask questions, but to simply observe an online community, and only actively join once they have a sense of the culture and practices of the community. Like RTFM, users that ask questions or don't act in accordance with expectations are met with the phrase "lurk moar". Both these "obligation[s] to know" can be a way of obscuring sexism or racism as a natural part of an online community's culture, framing everything as just fine until newcomers barge in to demand change to a community they don't understand and are not a part of.

While I think it is less common elsewhere, I still see RTFM in more tech-y places. For example, the Linux distro that I use is known for its extensive wiki, and newcomers in the forums asking for help are often criticized for not reading the relevant wiki page first, or for not first figuring something out through a program's man page. This is prevalent enough that almost all posts in the newbie corner of the forum preemptively say that the poster has already RTFM.

The obligation to know, and the cultural capital of holding knowledge, still exist in more dispersed and less obvious forms. For example, on Twitter I often see phrases similar to "it's not my job to educate you" when progressive activists receive replies about what are seen as basic political concepts (maybe historically related to geek feminism?). More nefariously, alt-right online communities tend to value obscurity and multiple layers of irony in memes, and ridicule people that fail to parse what is genuine and what is false. Users are lauded for their ability to disguise racist beliefs behind irony in a way that can appear benign to the unsuspecting but obvious to those in the know, which can only be accomplished by extensive lurking.

--LatakiaHill (talk) 00:24, 16 November 2021 (UTC)[reply]

LatakiaHill, nice connections with lurking and alt-right irony. -Reagle (talk) 18:18, 16 November 2021 (UTC)[reply]

Sunishka QIC 9

This reading has a great Foucault quote, “we should admit, rather, that power produces knowledge”, which is also a great running theme throughout the essay. I want to expand on that and mention that when it comes to accessing knowledge on the internet, some newcomers have it harder than others. For example, newcomers with learning disabilities, physical disabilities, or visual/auditory impairments would have a harder time accessing FAQs or remembering them accurately when they do. Such issues should definitely be dealt with respectfully, with patience and common decency.

In discussions like this, I also think it is extremely vital to distinguish between "trolls" and "newcomers." Newcomers should not be subjected to the same standards as trolls; one of the reasons people could potentially leave organizations is because they are seen as an irritation rather than a useful part of the community. Even though trolls can be found among newcomers, they are not strictly a part of the group: they have no intention to undergo any kind of formal or informal indoctrination into the group (socialization or enculturation). It's also fairly easy to detect a troll, in comparison to someone who is genuinely a contributing member. I personally avoid interacting with group members who do not have any verifiable credentials (no profile picture, no other likes or comments, a fake name) because they are usually trolls. However, newcomers to a group usually have more account activity and take their internet accounts more seriously than trolls do. Additionally, I'm sure people can just ban/remove trolls without engaging, instead of being standoffish to every newcomer simply because they might be a potential troll.

It is also worth mentioning that most social media apps and online communities utilize bots these days that can answer newcomer questions as well as moderate any trolling that goes on in an online community. Therefore, I think the future of newcomers being able to successfully integrate into online communities and understand how they function is most likely a positive one.
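As a rough sketch of how such a bot might work under the hood (the keywords, canned answers, and function names below are my own illustrative assumptions, not any real platform's API), a simple keyword-matching FAQ responder in TypeScript could look something like this:

 // Minimal keyword-matching FAQ bot; every entry here is a hypothetical example.
 const faq: { keywords: string[]; answer: string }[] = [
   { keywords: ["sign", "signature"], answer: "Sign talk-page posts with four tildes." },
   { keywords: ["undo", "revert"], answer: "Use the page history to undo an edit." },
 ];
 
 function autoReply(question: string): string | null {
   const q = question.toLowerCase();
   const hit = faq.find((entry) => entry.keywords.some((k) => q.includes(k)));
   // On a miss, return null so a human can step in instead of brushing the newcomer off.
   return hit ? hit.answer : null;
 }
 
 console.log(autoReply("How do I sign my comments?")); // prints the tilde advice

A real moderation bot would need smarter matching and rate limiting, but even something this simple could absorb the most repetitive newcomer questions before anyone feels the need to reply "RTFM".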

P.S.: I'm glad this reading included "mansplaining" as one of the common issues with online communities; I have personally experienced a lot of this. Most of the time it is not even a question of people assuming I don't know what I am talking about, but rather, ignoring my comments/counterarguments and repeating their own incorrect opinions.

sunishka134 (talk) 10:52, 15 November 2021 (UTC)[reply]

sunishka134, I think the thing that is so difficult about "concern trolls" is it's difficult to perceive their intention. -Reagle (talk) 18:18, 16 November 2021 (UTC)[reply]

QIC 10 Daniel

Professor Reagle and Menning mention the idea of "RTFM". Online communities are very complex places. There are a lot of rules and even policies that may never be explicitly stated, and these rules and policies are often what newcomers ask about most. When questions related to these rules and policies arise, the moderators will say "RTFM": a very stern but direct way of telling the user that this was mentioned previously or should have been looked into prior to asking the question. Although I am a fan of this acronym, and I think questions that can easily be answered without help from others should be treated sternly, there is an argument on the other side as well. I know individuals are new to the community and therefore can be naive about the rules and policies. However, if a question is asked constantly, maybe the answer isn't as clear as the veterans may think. There has to be a reason for the creation of "Frequently asked questions" sections in online communities. I believe that these questions can be avoided from the start by creating a mandatory interactive tutorial about the topics that are often brought up.

I also agree with Sunishka when she says "I also think it is extremely vital to distinguish between 'trolls' and 'newcomers.'" These trolls obviously are not worth the moderators' time. However, these newcomers can potentially be huge influencers in the online community. When they are genuinely reaching out for help and are dismissed immediately, they may no longer be interested in the community, as "RTFM" can be very off-putting, especially if the question they were asking was a meaningful one.

My connection to "RTFM" is from a Facebook classifieds group where members sell exotic, specialized, or rare cars. I honestly had no idea what "RTFM" meant. Then I saw someone post a BMW M3, which is considered a specialized car and therefore is allowed to be posted. However, the comments were all flagging it because it wasn't their definition of "special" enough to be on the page. The page moderator then commented underneath with "RTFM", as it was a valid post. In this situation I understand the sternness from the moderator, as the rule is clearly labeled in the page's posting guidelines.

Dannyryan33 (talk) 15:39, 16 November 2021 (UTC)[reply]


Hannah's QIC #5

The RTFM concept is understandable, considering rudimentary knowledge is necessary for higher-level conversations. It's present in other aspects of everyday life. There are prerequisites to higher-level college courses because you can't start at the foundational level every time. Many jobs have skill tests and experience requirements in order to ensure the candidate possesses the knowledge necessary to succeed in the role. So RTFM, while harsh, is a necessary policy to ensure communities are not sidetracked by new members. In order to prevent the alienation of newcomers, I think all communities should take the same approach Wikipedia takes, where new members are greeted and educated by a designated individual or group before joining the general community. My sorority takes a similar approach, where new members learn ritual and history from a New Member Educator before becoming initiated sisters.

VanillaPumpkin (talk) 17:04, 16 November 2021 (UTC)[reply]


Pippa QIC 8

I found the idea of RTFM really interesting, especially as a newcomer on Wikipedia. Aside from Instagram and TikTok, I'm not a part of any online communities, so this idea was new to me. We talked about newcomer initiation and retention last week, and I think the RTFM manual is an interesting extension of that discussion. Overall, I think the concept of RTFM makes a lot of sense, especially in communities like Wikipedia with a lot of specific norms and rules. I mentioned last week in class that there are a lot of things I would not have known about Wikipedia without taking this course. I genuinely do not know how many of the specific rules and guidelines I would have picked up on without Prof. Reagle's guidance. I'm not a troll and I try to give people the benefit of the doubt in all situations, so I think I could have still been somewhat successful in my online interactions. Nevertheless, I would have struggled as a newcomer more than I did. When my article was first proposed for deletion, I wasn't sure if I would be dealing with other users positively or negatively. I got a lot of comments on my talk page that agreed that my article was not notable enough to be in the main space, but I felt a lot more confident having Prof. Reagle's help from the beginning. I can understand why online communities use mentor/mentee relationships in their online spaces. While I've never experienced an instance of RTFM, I can understand how dealing with that comment alone would be a bit uncomfortable. As we learned from the social breaching experiments, breaking "the rules" and breaching the norms can feel super weird! On the other hand, I think it's important for online communities to protect themselves from users who are not serious about being a contributing member. There are rules and guidelines for a reason, so I can understand why users could be frustrated when newcomers do not put in the effort to learn them. Pippalenderking (talk) 17:38, 16 November 2021 (UTC)[reply]

Nov 19 Fri - FOMO, growth hacking, and ethics[edit]

QIC14: Reagle calls into question whether anxieties of "missing out", captured by the acronym "FOMO", are really unique to our digital age. FOMO is shown to not be some kind of mental disease spread by technology, and the case of neurasthenia is brought up as another historical phenomenon that involves the same psychological concepts that FOMO does. While things like technology have changed how "media-prompted envy" and anxieties of belonging are understood, FOMO is not a unique danger imposed by technology. Instead of seeing FOMO and other new concepts as unprecedented and reflective of grand narratives of technology destroying everything, it's better to consider how the significance of one concept varies by time and culture.

For example, FOMO is described as a bad phenomenon, and /r/superstonk GME speculators do use FOMO to ridicule people that didn't join them, but they also use the term almost positively, as motivation to keep investing (you don't want to miss out on the next wave!). Likewise, beyond being a disease partially caused by technological innovations like steam power, neurasthenia (a personal interest of mine!) was also almost glorified by Beard (1881) as a unique symptom of the hard-working American Protestant ethic. For example, lazy Catholics could not suffer from this because they were burdened by the "machinery of religion" (pg. 125). He even stated that African-American slaves only experienced the disease after they were freed and tasted liberty, and because of this, some "expressed a wish to return to slavery" (pg. 127). While FOMO and neurasthenia both extend from similar psychological principles, they are used to further different agendas in different ways in their specific contexts: GME speculators invoke FOMO consciously to get more people to join them, while discussion of neurasthenia's causes propped up old American ideals of liberty and independence. Something like the anxiety of belonging can be both universal and also vary in its significance across time and place, and careful investigation can tease these many meanings out. While I mentioned how FOMO's use has changed in a current context, I'm also curious whether the anxiety of (not) knowing what your friends are doing has changed. Personally, I feel like that anxiety has overall decreased for most people, contrary to worries of it getting worse.

--LatakiaHill (talk) 06:45, 19 November 2021 (UTC)[reply]


QIC 11 Daniel

Reagle addresses the idea that "FOMO" is specific to, and potentially generational because of, the constant use of online platforms. This raises the question of whether the saying will slowly disappear with age, and whether a new acronym describing the concept will arrive to replace it. Personally, I do not think this acronym will disappear. I believe the acronym is deeply engraved into our minds. Since the acronym was first introduced in 2004 by Patrick James McGinnis, all it has done is grow with the increased use of networking. The acronym is applicable to almost any context as well. As Eric brings up, we see it used in stock trading. People hate the idea that others are making a potentially massive capital gain, so they join in out of FOMO.


As Eric also mentions, I believe that FOMO has grown beyond what it was originally invented for. Its use has expanded past online communities and into everyday language as well. Relating back to online communities, my take is that the reason people post on these platforms is to inflict FOMO on others. People want to obtain attention from others by spreading information those others might not have, or, to put it simply, to "show off". FOMO would not exist if people did not go out of their way to post a potential "flex" on how they are living their lives.

Dannyryan33 (talk) 15:38, 19 November 2021 (UTC)[reply]


Sunishka QIC 10

I found today's reading on "dark patterns" very interesting. I had heard about some of these issues, such as TurboTax hiding its free tax filing service, but I assumed this was the problem of a single corrupt website rather than a systemic problem with online service providers. The fact that 95% of the applications people use contain dark patterns suggests that maybe we need an unbiased third party regulating the marketing patterns of these websites (not suggesting we regulate the internet in general, just the commercial platforms that are offering goods and services).

Although "incorporating ethics" is a great start, I believe we need to go further than that. In this reading, the suggested answer is to urge communities to "set standards for themselves". The flaw with this solution is that legality and morality don't often go hand in hand, so just calling something unethical usually results in corporations getting better at concealing their unethical behavior rather than outright stopping it. However, labeling a marketing strategy as a fraudulent crime, as was done in the case of TurboTax, is a much better way to hold such corporations accountable. Therefore, I think what consumers need is greater support from legal/financial systems. Additionally, a good preventative step would be to have measures in place that safeguard the interests of consumers who have been duped into doing business with unethical organizations. While some of these do exist, it's much easier to scam consumers online and get away with zero accountability.

I don't think we should wait for corporations to solve this problem on their own, because we would probably be waiting forever. Instead, I think legal procedures should make it impossible for corrupt corporations to profit when they employ such unethical business strategies.

sunishka134 (talk) 11:22, 19 November 2021 (UTC)[reply]

sunishka134 be careful of "very interesting" -Reagle (talk) 17:51, 19 November 2021 (UTC)[reply]

Pippa QIC 9 As someone who frequently and consistently uses social media platforms like Instagram and TikTok, reading about FOMO was nothing new. The "Fear of Missing Out" is alive and well in online spaces, especially on social media. At Northeastern, FOMO occurs frequently on LinkedIn: students will post about their career and academic successes freely. Whenever I go on LinkedIn, I'm faced with the idea that maybe everyone around me is smarter, richer, and more successful than I am. On Instagram, it's normal for users to post about fun vacations, parties with friends, exciting adventures, and even mundane daily activities. These posts can make viewers feel left out, jealous, and self-conscious. I try not to let posts like these get me down and I like to celebrate other people's accomplishments, especially my friends'. That being said, it's easy to assume that someone's life is perfect when you only look at their social media page or involvement in an online space. Like Danny, I think that FOMO will remain in our vocabulary for a long time because it's a concept that everyone can relate to. Even if you don't participate in an online community, you can relate to being impacted by your peers' accomplishments and wanting to experience cool things because you see other people doing it. I think it's a really interesting concept because it's experienced so easily! All it takes is seeing one picture to feel like your life isn't as good as someone else's -- kinda crazy! Pippalenderking (talk) 17:56, 19 November 2021 (UTC)[reply]


hannah QIC6

My original QIC on dark patterns disappeared when I went to make an edit, but I'll try to hit all the points again.

- Unregulated capitalism encourages the kind of profit-driven mindset that creates manipulative and exploitative practices, like manufacturing a false sense of urgency in consumers and the workforce. Countdown pop-ups are obviously deceptive since the sale won't actually end (see the sketch below), and I think blatant lies should be illegal (aren't they already?), but other gamified techniques, like those mentioned in connection to Uber, are also harmful.
- Ethics should be at the forefront of technology and innovation; otherwise we'll have a dystopian future on our hands. Profit-driven mindsets should be switched out for people-driven approaches, like what DuckDuckGo does with its anonymity practices and living wage for all of its employees.
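To make the countdown deception concrete, here is a minimal hypothetical sketch (the names and values are my own assumptions, not taken from any real site) of how a "false urgency" timer lies: the deadline is recomputed on every page load, so the sale never actually ends.

 // "False urgency" dark pattern: the deadline is recomputed on every
 // page load, so the countdown never truly expires. Illustrative only.
 const SALE_WINDOW_MS = 15 * 60 * 1000; // every visitor sees "15 minutes left"
 
 function fakeDeadline(): number {
   // An honest countdown would use one fixed, server-stored deadline.
   return Date.now() + SALE_WINDOW_MS;
 }
 
 function renderBanner(deadline: number): string {
   const msLeft = Math.max(0, deadline - Date.now());
   const minutes = Math.floor(msLeft / 60000);
   const seconds = Math.floor((msLeft % 60000) / 1000);
   return `Hurry! Sale ends in ${minutes}m ${seconds}s`;
 }
 
 console.log(renderBanner(fakeDeadline())); // always shows nearly the full window

An honest implementation would persist a single deadline server-side; the whole trick lives in that one recomputed deadline.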

VanillaPumpkin (talk) 18:02, 19 November 2021 (UTC)[reply]

Nov 23 Tue - Gratitude[edit]

QIC 7 Hannah

I thought the article about giving thanks on Wikipedia was so fascinating. The data on having given and not received vs. having received but never given thanks on Wikipedia was really insightful in terms of gratitude in different cultures. Persian Wikipedia had more people giving than receiving, while other languages (I have a feeling they're Western languages) had more people receiving than giving. That could mean that Persian culture values giving thanks and gratitude more than receiving accolades from the community, while Western culture, as an individualist culture, values receiving thanks more. I could be reading too much into it, just a thought. VanillaPumpkin (talk) 14:40, 23 November 2021 (UTC)[reply]

VanillaPumpkin Were you surprised that different cultures had different ways of expressing gratitude? I thought it was interesting too, but wasn't necessarily surprised! I think cultural norms have a really big impact on social interactions -- probably more than we think because they're typically unconscious. How do you think your own culture impacts how you express gratitude? Pippalenderking (talk) 16:08, 23 November 2021 (UTC)[reply]
Pippalenderking, you can only count one QIC per class. -Reagle (talk) 17:19, 23 November 2021 (UTC)[reply]

Pippa QIC 10

The study Indebtedness and Reciprocity in Online Exchange was a really interesting read! I enjoyed it because while writing my Wikipedia article, I saw examples of a lot of the topics mentioned in the reading. For example, when I first published my article in the main space, I thanked other users for using their time to help edit and offer advice. By demonstrating gratitude from the very beginning, I set myself up to receive positive responses from other editors. As a newbie on Wikipedia, I wanted to make sure that I was respectful of editors who had much more experience in the community. I didn't want other editors to assume that I thought of myself as better than anyone else, so I did my best to reply to comments with humility and gratitude. As a result, most of the people who responded to my request for edits wrote positive messages and were very polite. Even when users were proposing my article for deletion, I thanked them for taking the time to educate me on the notability guidelines! I think these expressions of gratitude helped other editors recognize that I had positive intentions in the community.

A quote that stood out to me from the study is: "Prior sociological and social psychological research shows that individuals tend to seek favors or assistance from those who are most similar to them (e.g., individuals in similar situations within a given community), even if the assistance is not as valuable as that which could be obtained from someone else." I thought this was interesting because it implies that people in online communities with similar individuals might have more advice exchanges than communities that are larger and less specific. The authors suggest that communities designed to clearly indicate who is similar to whom empower individuals toward proactive behaviors. In my experience on Wikipedia, I mostly asked for assistance from other sports-minded editors because I knew they would understand the context of my article. Whether they are the "best" editors or not, I turned to them because we have similar interests. With Wikipedia's talk pages and biographies, it is easy to see which editors have similar interests to you! Pippalenderking (talk) 16:03, 23 November 2021 (UTC)[reply]


Sunishka QIC 11

The good thing about online communities is that when someone joins a community, they usually know the rules of the community, and sometimes this includes the rules of reciprocity. For example, I'm a member of a Facebook group where we give away things we no longer use for free. I joined it this summer to get rid of kitchen equipment I no longer needed since I was moving apartments. You can't charge people for anything, so the person who took the kitchen equipment didn't pay me, but that doesn't mean they owe me anything, because the group requires members to give away anything they don't want or need. I think joining a group which has the rules of reciprocity written down on its page reduces this feeling of debt and also makes it clear what is expected of the members. After all, reciprocity involves making mutually beneficial exchanges with other people, and as long as both parties are satisfied with giving away things for free and receiving said items, the exchange works. In the particular group I'm referring to, I believe there is an underlying presumption that if any member requires something, someone else will offer it, just as the member has for others in the past. So there is an assumption of "good faith" of sorts.

I think this can be applied to the Wikipedia article on giving thanks as well. Since people commenting are specifically told not to comment just to say thanks, people might appear more rude if they decide to violate the guidelines and say thanks anyway. Usually, if I am in a group where it is against guidelines (explicitly or implicitly) to reply to people with just a "thank you" comment, I include a reaction instead (such as liking the original response).

sunishka134 (talk) 11:34, 23 November 2021 (UTC)[reply]

Nov 30 Tue - Wikipedia in the news[edit]

Sunishka QIC 12

Last month I wrote a QIC about the reasons why I didn't believe true "neutrality" on Wikipedia could exist, and I think today's reading, "One Woman’s Mission to Rewrite Nazi History on Wikipedia", ties into the same concept of implicit (and in some cases explicit) biases. The citation problem reminded me of a trend I've noticed in online echo chambers where certain members only cite sources that support or glorify their own personal beliefs, irrespective of whether the citation comes from a respectable source or not. When these sources are investigated further, their claims are usually disproven. Since Wikipedia places a lot of importance on "neutral editing", I think certain Wikipedians found a loophole in this system and realized that if the information they are adding comes from a manipulative source, the editor can avoid being blamed for it. Maybe they are not even doing this intentionally; most people inadvertently end up in an echo chamber at some point or another. And people can get manipulated into believing biased narratives.

This is why Wikipedia might need to put as much emphasis on "trust" as it does on "good faith" and "neutrality". By "trust" I mean that if the sources on Wikipedia cannot be trusted to report the facts in an unbiased manner, then neither can the Wikipedians citing those sources. For example, on one occasion Coffman explains that “historians have a uniformly negative view of Nebe and his motives”. Historians are generally more trustworthy sources than internet bloggers. This makes Coffman's claims more reliable than those of her dissenters, and she should therefore be considered the more reliable editor.

sunishka134 (talk) 12:05, 30 November 2021 (UTC)[reply]

sunishka134, like trust(worthy), Wikipedians do focus a lot on Wikipedia:Reliable sources. -Reagle (talk) 18:33, 30 November 2021 (UTC)[reply]

---

QIC 12 Daniel

Looking into the article "How Wikipedia Became a Battleground for Racial Justice", Wikipedia's neutral point of view content policies were challenged by the killing of George Floyd. The title of the original article was changed multiple times, from the original "Death of George Floyd" to a more biased (in the moment) title of "Killing of George Floyd". The change was originally turned down because it went against Wikipedia's guidelines on bias, but it was later allowed due to the factual background behind the claim as the case progressed. The article had many areas that were questioned by editors and taken down based on the type of language being used to describe the events. Wikipedia has grown from this specific article and its approach to handling racial justice. The idea that these guidelines of neutrality may be altered shows how progressive Wikipedia is willing to be, especially as it pertains to these circumstances.

As we also see in the article about the coronavirus, the pandemic has caused Wikipedia trouble in producing articles, as the topic is also controversial. It is a topic that people are very opinionated about, and it is a difficult area to remain neutral on: information is constantly changing, facts can no longer be facts, and as the pandemic progresses, information is altered and added at such speed that it is hard to moderate and keep up with. It also raises the question of the qualifications of who is making edits and writing these articles. Personally, I think that the qualifications of the writers, as well as the sources where the information is obtained, are the most vital parts of any information given in an article. I think it is overlooked when discussing these articles as to who is actually writing them. On a topic as serious as the COVID-19 pandemic, it may be in Wikipedia's interest to have people in the field of medicine constantly making edits and updating the page in order to ensure the most accurate information is being broadcast.

Overall, neutrality can be a difficult thing to enforce. When the topic being discussed is controversial and has many moving parts, it is very difficult to moderate. I believe Wikipedia has done a great job in moderating these topics by being very progressive while holding true to its guidelines. Dannyryan33 (talk) 16:32, 30 November 2021 (UTC)[reply]

---

Pippa QIC 11 The WIRED article titled "One Woman's Mission to Rewrite Nazi History on Wikipedia" described how Ksenia Coffman edited articles on Wikipedia that she felt glorified Nazi officials and misrepresented history during and after the Holocaust. Coffman is very strategic when editing the articles she feels are Nazi glorification, and she does a significant amount of research before she makes her edits. Before I read the entire article, I knew that Coffman would probably face backlash on Wikipedia for her edits. However, I was not expecting her to have to fight so hard just to make the changes she thinks are best. The WIRED article mentions instances where there is blatant glorification of Nazi officials, but Coffman's suggestions are still disputed. I found the article really interesting because it took a lot of hard work for Coffman to get her edits taken seriously, and I felt as though a lot of the other Wikipedians mentioned did not understand how important the edits were. I appreciated that she was unapologetic and strong, but still followed the guidelines of Wikipedia. To me, she is a great editor because she uses the Wikipedia guidelines to justify her edits and remains polite in interactions with other users. Overall, the article examines how history should be written about on Wikipedia and touches on a lot of interesting topics including censorship, reliable sourcing, and editing power. I would be curious to know if there are editors like Coffman in other areas of history and whether they have also found it difficult to make their edits stick. Pippalenderking (talk) 17:15, 30 November 2021 (UTC)[reply]

---

Hannah QIC 8 Responding to User:Pippalenderking: I completely share your shock over the backlash Coffman received for removing glorification of Nazis on Wikipedia. The older I get, the more often I realize that not everyone shares the "Nazis bad" sentiment I assumed was universal. With the rise of the far-right during and after Trump's presidency, we've all been more exposed to modern fascist ideology that praises Nazis, or at the very least glorifies some aspects of the regime. Coffman did everything right: she used credible sources, pointed out specific issues, and fixed them in a polite way compliant with Wiki rules. The fact that she received such backlash proves that Nazi ideology and admiration is, unfortunately, still alive and well in the modern era.

Dec 07 Tue - Infocide[edit]

...

QIC 13 Daniel

Infocide is known to have causes that are often negative, and it leads to the "suicide" of the user and the contributions that they have made to the page or community. Yes, I believe that online communities can be very stressful spaces, and you may need a break from them, which can lead to infocide. However, I also believe that people choose to remove themselves because they want to escape this virtual world. I know a lot of people that have deleted all of their social media not because of their mental state but because they felt they were wasting their time. I can completely agree with this mindset, and I credit these individuals for doing so, because the natural world should be appreciated more. Long story short, I think that infocide is not always due to a negative interaction with an online community but can be a way to gain a better appreciation for the physical world, leaving out the drama and judgement that they could potentially face in these online communities.

Looking at the retired Wiki page, I thought it was rather strange. It was almost as if this was a graveyard for individuals that had once contributed to the community and now no longer do so. I think it's odd that this is monitored. If an individual wants to retire from the site, why must it be mentioned on the page? Unless I am missing the point and these people wish to be remembered, but I think this is a prime example of why people do "retire". There is no privacy, and everything is recorded online. I can see how this can be exhausting over time.

Lastly, the WikiBreak page is also odd to me. Why must a user disclose why they are not contributing at the moment? Do people really care? It seems to be intrusive in a way. Dannyryan33 (talk) 15:32, 7 December 2021 (UTC)[reply]

I thought the various responses on Wikipedia’s wiki break page were interesting. Personally, when I take a social media break I just delete the app and assume that anyone who needs to contact me has my phone number already. I think the need to explain why and for how long Wikipedians are leaving demonstrates the intense sense of obligation and attachment Wikipedians have to their community. They really do feel like they owe their community an explanation, and some options even encourage others to reach out and remotivate the user. Even the process of leaving the community gives a lot of insight into what it’s like to be a part of it. VanillaPumpkin (talk) 16:49, 7 December 2021 (UTC)[reply]

QIC 13 Sunishka

I think infocide is a phenomenon that is bound to happen at one point or another, as technology and the online communities it supports are continually developing and replacing one another. Wikipedia, for example, may be the largest online encyclopedia, but there are several websites and apps where individuals can access and share information. However, I don't think this is a bad thing. In my experience, most people who delete or quit using one social media account usually do so because they have discovered a better platform that fits their needs or because they have decided that they have better things to do with their time. I'm not sure I agree with equating "deleting your social media" with "digital suicide", since the internet is going to be around for years (possibly forever) and the possibility of return is always present. For example, I have deleted my Facebook account, re-created it, and managed to find the exact same community of people. I also think equating it with "suicide" carries a negative connotation that might discourage social media users from deleting their accounts. But as Danny mentions, if deleting one's social media can improve their mental health or help them manage their time better, it is something that should be encouraged. Personally, when I take a break from social media I usually use the term "social media detox" and inform anyone who might be concerned about me that I'm on a digital detox. I think replacing this with "infocide" would have been a lot more alarming to community members who might have been less supportive of the decision.

sunishka134 (talk) 12:20, 7 December 2021 (UTC)[reply]

Pippa QIC12 I had never heard of infocide before, but I am not surprised that it exists! I can understand being tired of being online or becoming frustrated with how much time and effort is being put into an online presence. Like Danny, I can understand how being a part of an online community could be stressful. Firstly, there are a lot of norms, rules, and guidelines for users to follow. Secondly, interacting with other users can be toxic, especially when users act as if there are no consequences to their actions online. As we've discussed in class, online communities can breed extremely toxic behaviors that are unsustainable for other users to be around frequently. Still, the decision to remove your online presence can be a big deal for users who have been online for a very long time and who have established an online persona in a community.

After learning more about the Wikipedia community and spending time on the site, I can understand how some users would become frustrated with the sometimes endless debates that occur on talk pages. It can require a lot of time and effort to keep up with the different interactions, and some people are not able to maintain this in their daily life. The Wikipedia "graveyard" of retired individuals was really interesting to look at. For Wikipedia to keep track of all of the retired users is pretty cool considering how many users are on the site and have been on the site since the beginning. I'm definitely curious as to why all of these people left! Pippalenderking (talk) 17:27, 7 December 2021 (UTC)[reply]