Disinformation attack
Disinformation attacks are the intentional dissemination of false information, with the end goal of misleading, confusing, or manipulating an audience.[1] Disinformation attacks may be executed by state or non-state actors to influence domestic or foreign populations. These attacks are commonly employed to reshape attitudes and beliefs, drive a particular agenda, or elicit certain actions from a target audience.[2][3]
Disinformation attacks can be employed through traditional media outlets, such as state-sponsored television and radio channels.[4] However, disinformation attacks have become increasingly widespread and potent with the advent of social media. Digital tools such as bots, algorithms, and AI technology are leveraged to spread and amplify disinformation and to micro-target populations on online platforms like Instagram, Twitter, Facebook, and YouTube.[5] Because of these characteristics, advocates have called for disinformation attacks to be formally classified as a cyber threat.[6]
Disinformation attacks can pose threats to democracy in online spaces, to the integrity of electoral processes (as in the 2016 United States presidential election), and to national security in general.[7]
Defense measures include machine learning applications that can flag disinformation on platforms, fact-checking and algorithmic adjustment systems, and collaboration between private social media companies and governments in creating solutions and sharing key information.[3] Educational programs are also being developed to teach people how to better discern between facts and disinformation online.[8]
Disinformation attack methods
Traditional media outlets
Traditional media channels can be utilized to spread disinformation. For example, Russia Today is a state-funded news channel that is broadcast internationally. It aims to boost Russia's reputation abroad while depicting Western nations, such as the U.S., in a negative light. It notably covers negative aspects of the U.S. and presents conspiracy theories intended to mislead and misinform its audience.[4]
Social media
Perpetrators primarily use social media as the medium for spreading disinformation, leveraging a variety of tools to carry out disinformation attacks, such as bots, algorithms, deep fake technology, and psychological principles.
- Bots are automated agents that can produce and spread content on online social platforms. Many bots can engage in basic interactions with other bots and humans. In disinformation attack campaigns, they are leveraged to rapidly disseminate disinformation and infiltrate digital social networks. Bots can produce the illusion that a piece of information is coming from a variety of different sources; in doing so, disinformation attack campaigns make their content seem believable through repeated and varied exposure.[9] By flooding social media channels with repeated content, bots can also alter algorithms and shift online attention to disinformation content.[3]
- Algorithms are leveraged to amplify the spread of disinformation. Algorithms filter and tailor information for users and modify the content they consume.[10] A study found that recommendation algorithms can act as radicalization pipelines because they surface content based on user engagement levels, and users are drawn to radical, shocking, and click-bait content.[11] As a result, extremist, attention-grabbing posts can garner high levels of engagement through algorithms. Disinformation campaigns may leverage algorithms to amplify their extremist content and sow radicalization online (see the sketch after this list).[12]
- A deep fake is digital content that has been manipulated to appear authentic. Deep fake technology can be harnessed to defame, blackmail, and impersonate. Because deep fakes are cheap and efficient to produce, they can be used to spread disinformation more quickly and in greater volume than human-created content. Disinformation attack campaigns may leverage deep fake technology to generate disinformation concerning people, states, or narratives, weaponizing it to mislead an audience and spread falsehoods.[13]
- Human psychology is also applied to make disinformation attacks more potent and viral. Psychological phenomena, such as stereotyping, confirmation bias, selective attention, and echo chambers, contribute to the virality and success of disinformation on digital platforms.[9][14] Disinformation attacks are often considered a type of psychological warfare because of their use of psychological techniques to manipulate populations.[15]
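The interplay between bot activity and engagement-based ranking described above can be illustrated with a minimal Python sketch. The posts, engagement counts, and ranking rule are hypothetical simplifications for illustration, not any platform's actual algorithm:

```python
# Toy model of a social feed: each post maps to an engagement count.
posts = {
    "balanced news report": 40,
    "sensational false claim": 5,
}

def rank_feed(posts):
    """Engagement-based ranking: the most-engaged posts surface first."""
    return sorted(posts, key=posts.get, reverse=True)

print("Before bot activity:", rank_feed(posts))

# A botnet of 100 automated accounts repeatedly engages with the false post,
# simulating the "flooding" behavior described above.
for _ in range(100):
    posts["sensational false claim"] += 1

print("After bot activity: ", rank_feed(posts))
# The false claim now outranks the accurate report: the ranking rule itself
# has amplified the botnet's repeated exposure.
```

Because the ranking rule rewards raw engagement regardless of its source, a modest number of automated accounts is enough to push fabricated content above accurate reporting.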
Examples
Domestic voter disinformation attacks
Domestic voter disinformation attacks are most often employed by autocrats aiming to cover up electoral corruption. Voter disinformation includes public statements that assert local electoral processes are legitimate and statements that discredit electoral monitors. Public-relations firms may also be hired to execute specialized disinformation campaigns, including media advertisements and behind-the-scenes lobbying. For example, state actors employed voter disinformation attacks to re-elect Ilham Aliyev in the 2013 Azerbaijani presidential election. They restricted electoral monitoring, allowing only certain groups, such as allies from ex-Soviet republics, to observe the electoral process. Public-relations firms were also hired to push the narrative of an honest and democratic election.[16]
Russian campaigns
- A Russian operation known as the Internet Research Agency (IRA) spent thousands of dollars on social media ads to influence the 2016 U.S. presidential election. These political ads leveraged user data to micro-target certain populations and spread misleading information to them, with the end goal of exacerbating polarization and eroding public trust in political institutions.[17] The Computational Propaganda Project at Oxford University found that the IRA's ads specifically sought to sow mistrust towards the U.S. government among Mexican Americans and discourage voter turnout among African Americans.[18]
- Russia Today, the state-funded news channel discussed above, has served as a platform to disseminate propaganda and conspiracy theories concerning Western states such as the U.S.[4]
- During the Russo-Ukrainian War of 2014, Russia combined traditional combat warfare with disinformation attacks in its offensive strategy. Disinformation attacks were leveraged to sow doubt and confusion among enemy populations, intimidate adversaries, erode public trust in Ukrainian institutions, and boost Russia's reputation and legitimacy. This hybrid warfare allowed Russia to exert physical and psychological dominance over target populations during the conflict.[19]
Other notable campaigns
An app called “Dawn of Glad Tidings,” developed by Islamic State members, assists the organization's efforts to rapidly disseminate disinformation on social media. When users download the app, they are prompted to link it to their Twitter accounts and to grant it permission to tweet from their personal accounts. This allows automated tweets to be sent from real user accounts and helps create trends across Twitter that amplify Islamic State disinformation at an international scale.[18]
Ethical concerns
- There is growing concern that Russia could employ disinformation attacks to destabilize certain NATO members, such as the Baltic states. States with highly polarized political landscapes and low public trust in local media and government are particularly vulnerable to disinformation attacks.[20] Russia may employ disinformation, propaganda, and intimidation to coerce such states into accepting Russian narratives and agendas.[18]
- Disinformation attacks can erode democracy in the digital space. With the help of algorithms and bots, disinformation and fake news can be amplified, users' content feeds can be tailored and limited, and echo chambers can easily develop.[21] In this way, disinformation attacks can breed political polarization and alter public discourse.[22]
- During the 2016 U.S. presidential election, Russian influence campaigns employed hacking techniques and disinformation attacks to confuse the public on key political issues and sow discord. Experts worry that disinformation attacks will increasingly be used to influence national elections and democratic processes.[7]
Defense measures
Federal
The Trump Administration backed initiatives to evaluate blockchain technology as a potential defense mechanism against internet manipulation. A blockchain is a decentralized, secure database that can store and protect transactional information. Blockchain technology could be applied to make data transport more secure in online spaces and Internet of Things networks, making it difficult for actors to alter or censor content and carry out disinformation attacks (a minimal sketch of the underlying tamper-evidence property follows).[23]
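The tamper resistance referred to above comes from chaining cryptographic hashes: each block commits to the hash of the block before it, so altering any stored record invalidates every subsequent hash. The following is a minimal sketch of that property, assuming a simple single-machine hash chain rather than a full distributed ledger; all names and records are illustrative:

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    record = json.dumps({"index": index, "data": data, "prev": prev_hash})
    return hashlib.sha256(record.encode()).hexdigest()

def build_chain(records):
    """Chain records so that every block commits to all earlier blocks."""
    chain, prev = [], "0" * 64  # placeholder hash for the genesis block
    for i, data in enumerate(records):
        digest = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": digest})
        prev = digest
    return chain

def verify(chain):
    """Recompute every hash; any edit to stored content breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash(block["index"], block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["article v1 published", "correction issued"])
print(verify(chain))             # True: the record is intact
chain[0]["data"] = "rewritten"   # an attacker silently alters history
print(verify(chain))             # False: tampering is detectable
```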
"Operation Glowing Symphony" in 2016 was another federal initiative that sought to combat disinformation attacks. This operation attempted to dispel ISIS propaganda in social media channels. However, it was largely unsuccessful: ISIS actors continued to disseminate propaganda on other unmonitored online platforms.[24]
Private
Private social media companies have engineered tools to identify and combat disinformation on their platforms. For example, Twitter uses machine learning applications to flag content that does not comply with its terms of service and to identify extremist posts encouraging terrorism. Facebook and Google have developed a content hierarchy system in which fact-checkers can identify and de-rank possible disinformation and adjust algorithms accordingly (a simplified sketch of this flag-and-de-rank pattern follows).[7] Many companies are also considering procedural legal systems to regulate content on their platforms. Specifically, they are considering appellate systems: posts may be taken down for violating terms of service and posing a disinformation threat, but users can contest this action through a hierarchy of appellate bodies.[12]
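The internal systems described above are proprietary and not public; the following is a minimal sketch of the general flag-and-de-rank pattern, assuming a toy scikit-learn text classifier. The training examples, threshold, and function names are illustrative assumptions, not any platform's actual pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny labeled corpus (1 = previously fact-checked as false). A real system
# would train on large, professionally fact-checked datasets.
train_texts = [
    "officials confirm the vote count after a routine audit",
    "peer reviewed study finds a modest treatment effect",
    "secret memo proves the election was stolen by shadowy elites",
    "miracle cure suppressed by the government, share before it is deleted",
]
train_labels = [0, 0, 1, 1]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(
    vectorizer.fit_transform(train_texts), train_labels
)

def flag_and_derank(feed, threshold=0.5):
    """Score each post and sort the feed so likely disinformation sinks;
    flagged posts would be routed to fact-checkers, not deleted outright."""
    scores = classifier.predict_proba(vectorizer.transform(feed))[:, 1]
    ranked = sorted(zip(feed, scores), key=lambda pair: pair[1])
    return [(text, score, score >= threshold) for text, score in ranked]

feed = [
    "leaked memo proves a secret plot by global elites",
    "city council approves new budget after public hearing",
]
for text, score, flagged in flag_and_derank(feed):
    label = "FLAGGED" if flagged else "ok"
    print(f"{score:.2f} {label:8} {text}")
```

In practice, posts flagged this way would be reviewed by human fact-checkers and de-ranked rather than removed automatically, consistent with the appeal processes described above.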
Collaborative measures
Cybersecurity experts claim that collaboration between the public and private sectors is necessary to successfully combat disinformation attacks. Cooperative defense strategies include:
- The creation of "disinformation detection consortiums" where stakeholders (e.g., private social media companies and governments) convene to discuss disinformation attacks and develop mutual defense strategies.[3]
- Sharing critical information between private social media companies and the government, so that more effective defense strategies can be developed.[25][3]
- Coordination among governments to create a unified and effective response against transnational disinformation campaigns.[3]
Education and awareness
In 2018, the European Commission gathered a group of experts to produce a report with recommendations for teaching digital literacy. Proposed digital literacy curricula familiarize students with fact-checking websites such as Snopes and FactCheck.org, aiming to equip them with the critical thinking skills needed to discern between factual content and disinformation online.[8]
See also
- Disinformation
- Propaganda
- Russian web brigades
- Media manipulation
- Deepfake
- Algorithm
- Psychological warfare
- Social media
- Fake news
- Internet Research Agency
References
- ^ Fallis, Don (2015). "What Is Disinformation?". Library Trends. 63 (3): 401–426. doi:10.1353/lib.2015.0014. hdl:2142/89818. ISSN 1559-0682. S2CID 13178809.
- ^ Collado, Zaldy C.; Basco, Angelica Joyce M.; Sison, Albin A. (2020-06-26). "Falling victims to online disinformation among young Filipino people: Is human mind to blame?". Cognition, Brain, Behavior. 24 (2): 75–91. doi:10.24193/cbb.2020.24.05. S2CID 225786653.
- ^ a b c d e f Frederick, Kara (2019). "The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight". Center for a New American Security.
- ^ a b c Ajir, Media; Vailliant, Bethany (2018). "Russian Information Warfare: Implications for Deterrence Theory". Strategic Studies Quarterly. 12 (3): 70–89. ISSN 1936-1815. JSTOR 26481910.
- ^ Katyal, Sonia K. (2019). "Artificial Intelligence, Advertising, and Disinformation". Advertising & Society Quarterly. 20 (4). doi:10.1353/asr.2019.0026. ISSN 2475-1790. S2CID 213397212.
- ^ Caramancion, Kevin Matthe (March 2020). "An Exploration of Disinformation as a Cybersecurity Threat". 2020 3rd International Conference on Information and Computer Technologies (ICICT): 440–444. doi:10.1109/ICICT50521.2020.00076. ISBN 978-1-7281-7283-5. S2CID 218651389.
- ^ a b c Downes, Cathy (2018). "Strategic Blind–Spots on Cyber Threats, Vectors and Campaigns". The Cyber Defense Review. 3 (1): 79–104. ISSN 2474-2120. JSTOR 26427378.
- ^ a b Glisson, Lane (2019). "Breaking the Spin Cycle: Teaching Complexity in the Age of Fake News". Portal: Libraries and the Academy. 19 (3): 461–484. doi:10.1353/pla.2019.0027. ISSN 1530-7131. S2CID 199016070.
- ^ a b Kirdemir, Baris (2019). "Hostile Influence and Emerging Cognitive Threats in Cyberspace". Centre for Economics and Foreign Policy Studies.
- ^ Sacasas, L. M. (2020). "The Analog City and the Digital City". The New Atlantis (61): 3–18. ISSN 1543-1215. JSTOR 26898497.
- ^ Brogly, Chris; Rubin, Victoria L. (2018). "Detecting Clickbait: Here's How to Do It / Comment détecter les pièges à clic". Canadian Journal of Information and Library Science. 42 (3): 154–175. ISSN 1920-7239.
- ^ a b Heldt, Amélie (2019). "Let's Meet Halfway: Sharing New Responsibilities in a Digital Age". Journal of Information Policy. 9: 336–369. doi:10.5325/jinfopoli.9.2019.0336. ISSN 2381-5892. JSTOR 10.5325/jinfopoli.9.2019.0336. S2CID 213340236.
- ^ "Weaponised deep fakes: National security and democracy on JSTOR". www.jstor.org. Retrieved 2020-11-12.
- ^ Buchanan, Tom (2020-10-07). Zhao, Jichang (ed.). "Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation". PLOS ONE. 15 (10): e0239666. Bibcode:2020PLoSO..1539666B. doi:10.1371/journal.pone.0239666. ISSN 1932-6203. PMC 7541057. PMID 33027262.
- ^ Thomas, Timothy L. (2020). "Information Weapons: Russia's Nonnuclear Strategic Weapons of Choice". The Cyber Defense Review. 5 (2): 125–144. ISSN 2474-2120. JSTOR 26923527.
- ^ Merloe, Patrick (2015). "Election Monitoring Vs. Disinformation". Journal of Democracy. 26 (3): 79–93. doi:10.1353/jod.2015.0053. ISSN 1086-3214. S2CID 146751430.
- ^ Crain, Matthew; Nadler, Anthony (2019). "Political Manipulation and Internet Advertising Infrastructure". Journal of Information Policy. 9: 370–410. doi:10.5325/jinfopoli.9.2019.0370. ISSN 2381-5892. JSTOR 10.5325/jinfopoli.9.2019.0370. S2CID 214217187.
- ^ a b c Prier, Jarred (2017). "Commanding the Trend: Social Media as Information Warfare". Strategic Studies Quarterly. 11 (4): 50–85. ISSN 1936-1815. JSTOR 26271634.
- ^ Wither, James K. (2016). "Making Sense of Hybrid Warfare". Connections. 15 (2): 73–87. doi:10.11610/Connections.15.2.06. ISSN 1812-1098. JSTOR 26326441.
- ^ Humprecht, Edda; Esser, Frank; Van Aelst, Peter (July 2020). "Resilience to Online Disinformation: A Framework for Cross-National Comparative Research". The International Journal of Press/Politics. 25 (3): 493–516. doi:10.1177/1940161219900126. ISSN 1940-1612. S2CID 213349525.
- ^ Peck, Andrew (2020). "A Problem of Amplification: Folklore and Fake News in the Age of Social Media". The Journal of American Folklore. 133 (529): 329–351. doi:10.5406/jamerfolk.133.529.0329. ISSN 0021-8715. JSTOR 10.5406/jamerfolk.133.529.0329. S2CID 243130538.
- ^ Unver, H. Akin (2017). "Politics of Automation, Attention, and Engagement". Journal of International Affairs. 71 (1): 127–146. ISSN 0022-197X. JSTOR 26494368.
- ^ Sultan, Oz (2019). "Tackling Disinformation, Online Terrorism, and Cyber Risks into the 2020s". The Cyber Defense Review. 4 (1): 43–60. ISSN 2474-2120. JSTOR 26623066.
- ^ Brown, Katherine A.; Green, Shannon N.; Wang, Jian “Jay” (2017). "Public Diplomacy and National Security in 2017: Building Alliances, Fighting Extremism, and Dispelling Disinformation". Center for Strategic and International Studies (CSIS).
- ^ White, Adam J. (2018). "Google.gov: Could an alliance between Google and government to filter facts be the looming progressive answer to "fake news"?". The New Atlantis (55): 3–34. ISSN 1543-1215. JSTOR 26487781.