Cyber racism

From Wikipedia, the free encyclopedia

"Cyber racism" is a term coined by Les Back in 2002[1] to describe racism online, particularly on white supremacist websites. The term encompasses racist rhetoric distributed through computer-mediated means and marked by some or all of the following characteristics: ideas of racial uniqueness, nationalism and common destiny; racial supremacy, superiority and separation; conceptions of racial otherness; and an anti-establishment worldview.

Racist views are common, and often more extreme, on the Internet because of the anonymity it offers.[2][3] In a 2009 book addressing common misconceptions about white supremacy online, its threats to today's youth, and possible strategies for navigating an Internet where so much information, including hate speech and other offensive content, is easily accessible, City University of New York associate professor Jessie Daniels argued that the number of white supremacist sites online was rising, especially in the United States after the 2008 presidential election.[4]

Cyber racism has been understood as more than individual racist acts displayed online. Its connections to institutional racism have been noted in the work of Jessie Daniels, a professor of sociology at Hunter College.[5] In her 2009 book Cyber Racism, Daniels writes that "white supremacy has entered the digital era", challenging the idea that technology is "inherently democratizing".[6] Yet, according to Ruha Benjamin, researchers have concentrated on "how the Internet perpetuates or mediates racial prejudice at the individual level rather than analyze how racism shapes infrastructure and design."[6] Benjamin adds that it is important to investigate "how algorithms perpetuate or disrupt racism…in any study of discriminatory design."[6]

In her article "The Algorithmic Rise of the 'Alt-Right'",[7] Daniels explains how algorithms "speed up the spread of White supremacist ideology" by producing search results that reinforce cyber racism.[8] She posits that algorithms direct alt-right users to sites that echo their views, allowing them to connect and build communities on platforms that place little to no restriction on speech, such as Reddit and 4chan. Daniels points to the internet searches of Dylann Roof, a white supremacist, as an example of how algorithms perpetuate cyber racism: his search for "black on white crime" directed him to racist sites that reinforced and strengthened his views.[8] Moreover, Harvard professor Latanya Sweeney has found that algorithmically generated online advertisements are more likely to suggest arrest records for African American-sounding names than for Caucasian-sounding names.[9]

Furthermore, the popularity of sites used by alt-right communities has allowed cyber racism to garner attention from mainstream media. For instance, the alt-right claimed the "Pepe the Frog" meme as a hate symbol after mixing "Pepe in with Nazi propaganda" on 4chan.[8][10] The association gained major attention on Twitter after a journalist tweeted about it; alt-right users considered this a "victory" because it brought their ideology into public discussion.

Though there have been studies of, and strategies for, thwarting and confronting cyber racism at the individual level, few studies expand on how cyber racism's roots in institutional racism can be combated.[11] A larger literature on cyber racism's relationship with institutional racism would open new avenues for research on combating it at a systemic level.[6]

Although some tech companies have taken steps to combat cyber racism on their platforms, most are hesitant to act for fear of limiting free speech.[8] A Declaration of the Independence of Cyberspace, a document declaring the internet a place free from control by "governments of the industrial world",[12] continues to influence and reflect the views of Silicon Valley.

According to the Australian Human Rights Commission, cyber racism involves online activity that can include "jokes or comments that cause offence or hurt; name-calling or verbal abuse; harassment or intimidation, or public commentary that inflames hostility towards certain groups".[13] Racism online can have the same effects as offensive remarks made offline.[14]

Laws

Australia

In Australia, cyber racism is unlawful under section 18C of the Racial Discrimination Act 1975 (Cth). Because it involves a misuse of telecommunications equipment, it may also be criminal under section 474.17 of the Criminal Code Act 1995 (Cth).[15] State laws in each Australian state make racial vilification unlawful, and in most states serious racial vilification is a criminal offence. These laws generally extend to cyber racism: for example, section 7 ("Racial vilification unlawful") and section 24 ("Offence of serious racial vilification") of the Racial and Religious Tolerance Act 2001 (Vic) both explicitly state that the conduct in question may include use of the Internet.[16]

Yahoo! case

In May 2000, the League Against Racism and Anti-Semitism (Ligue Internationale Contre le Racisme et l'Antisémitisme, LICRA) and the Union of French Jewish Students (UEJF) brought an action against Yahoo! Inc., which hosted an auction website selling items of Nazi paraphernalia, and Yahoo! France, which provided the link used to access that content.[17]

References

  1. ^ Back, L. (2002). "Aryans Reading Adorno: Cyber-culture and Twenty-first Century Racism". Ethnic and Racial Studies. 25(4): 628–651.
  2. ^ Manfred, Tony (24 May 2012). "Why Is The Internet So Racist?". Business Insider. Retrieved 2 July 2013.
  3. ^ Younge, Gary (12 July 2012). "Who thinks about the consequences of online racism?". The Guardian. Retrieved 2 July 2013.
  4. ^ "Cyber Racism: Race and Technology". WordPress.com. 2013. Retrieved 16 April 2014.
  5. ^ Benjamin, Ruha (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Polity Press. ISBN 9781509526406. OCLC 1115007314.
  6. ^ a b c d Benjamin, Ruha (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK. ISBN 9781509526390. OCLC 1078415817.
  7. ^ Daniels, Jessie (2018). "The Algorithmic Rise of the 'Alt-Right'". Contexts. 17(1): 60–65. doi:10.1177/1536504218766547.
  8. ^ a b c d Daniels, Jessie (February 2018). "The Algorithmic Rise of the 'Alt-Right'". Contexts. 17: 60–65. doi:10.1177/1536504218766547 – via SAGE Publications.
  9. ^ Sweeney, Latanya. "Discrimination in Online Ad Delivery". heinonline.org. Retrieved 2019-11-18.
  10. ^ Nuzzi, Olivia (2016-05-26). "How Pepe the Frog Became a Nazi Trump Supporter and Alt-Right Symbol". The Daily Beast. Retrieved 2019-11-18.
  11. ^ Jakubowicz, Andrew; et al. (2017). Cyber Racism and Community Resilience: Strategies for Combating Online Race Hate. ISBN 9783319643885. OCLC 1026787955.
  12. ^ Barlow, John Perry (January 20, 2016). "A Declaration of the Independence of Cyberspace". Electronic Frontier Foundation.
  13. ^ "What is Cyber-Racism". Australian Human Rights Commission. 2014. Archived from the original on 14 July 2014. Retrieved 7 June 2014.
  14. ^ "Racism. No Way: Cyber Racism". NSW Government, Education and Communities. 2013. Retrieved 16 April 2014.
  15. ^ "OHPI Submission on Racial Discrimination and S 18C". Online Hate Prevention Institute. 2014. Retrieved 7 June 2014.
  16. ^ "Racial and Religious Tolerance Act 2001 (Vic) Sect 24". AUSTLII. 2013. Retrieved 7 June 2014.
  17. ^ "France bans internet Nazi auctions". BBC News. 2000. Retrieved 8 May 2014.
