Social bot

From Wikipedia, the free encyclopedia

A social bot (also: socialbot or socbot), or troll bot, is an agent that communicates more or less autonomously on social media, often with the task of influencing the course of discussion and/or the opinions of its readers.[1][2] It is related to chatbots but mostly uses only rather simple interactions, or no reactivity at all. The messages it distributes (e.g. tweets) are mostly either very simple or prefabricated by humans, and it often operates in groups and in various configurations of partial human control (hybrid).[3] It is typically used to advocate certain ideas, support campaigns, or aggregate other sources, either by acting as a "follower" and/or by gathering followers itself. In this very limited respect, social bots can be said to have passed the Turing test.[4][5] Where social media profiles are expected to belong to humans, social bots constitute fake accounts. The automated creation and deployment of many social bots against a distributed system or community is one form of Sybil attack.

Social bots appear to have played a significant role in the 2016 United States presidential election,[6][7] and their history appears to go back at least to the 2010 United States midterm elections.[8] It is estimated that 9–15% of active Twitter accounts may be social bots,[9] and that bots made up 15% of the Twitter population active in the discussion of the US presidential election. At least 400,000 bots were responsible for about 3.8 million tweets, roughly 19% of the total volume.[6]
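The figures above are internally consistent, as a back-of-the-envelope check shows (the calculation below is illustrative, not from the cited study):

```python
# If 3.8 million bot tweets were roughly 19% of the total election-related
# volume, the implied total is about 20 million tweets, and the 400,000
# identified bots would have averaged ~9.5 tweets each.
bot_tweets = 3_800_000
bot_share = 0.19
bot_accounts = 400_000

total_tweets = bot_tweets / bot_share
tweets_per_bot = bot_tweets / bot_accounts

print(round(total_tweets))  # 20000000
print(tweets_per_bot)       # 9.5
```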

Twitterbots are already well-known examples, but corresponding autonomous agents have also been observed on Facebook and elsewhere. Social bots are now equipped with, or can generate, convincing internet personas that are quite capable of influencing real people,[10][4][11] although they are not always reliable.[12]

Besides being able to (re)produce or reuse messages autonomously, social bots[13] share many traits with spambots, particularly their tendency to infiltrate large user groups.[14]

Using social bots is against the terms of service of many platforms, notably Twitter[15] and Instagram.[16] However, a certain degree of automation is intended by design, since the platforms make social media APIs available.

The legal regulation of social bots is becoming more urgent for policy makers in many countries. However, because it is difficult to recognize social bots and to separate them from "eligible" automation via social media APIs, it is currently unclear how regulation could be framed or enforced. In any case, social bots are expected to play a role in shaping public opinion by acting autonomously as incessant, never-tiring influencers.[17][18]


Lutz Finger identifies the following immediate uses for social bots:[19]

  • foster fame: having an arbitrary number of (unrevealed) bots as (fake) followers can help simulate real success
  • spamming: having advertising bots in online chats is similar to email spam, but a lot more direct
  • mischief: e.g. signing up an opponent under a large number of fake identities and spamming the account, or helping others discover this, in order to discredit the opponent
  • bias public opinion: influence trends by countless messages of similar content with different phrasings[20]
  • limit free speech: important messages can be pushed out of sight by a deluge of automated bot messages
  • phishing for passwords or other personal data

The effects of all these uses resemble, and can support, the methods of traditional psychological warfare and information warfare.


The first generation of bots could sometimes be distinguished from real users by their often superhuman capacity to post messages around the clock (and at massive rates). Later developments have succeeded in imprinting more "human" activity and behavioral patterns on the agents. To detect social bots unambiguously, a variety of criteria must be applied together using pattern-detection techniques, some of which are:[11]

  • cartoon figures as user pictures
  • pictures of random real users captured as profile pictures (identity fraud)
  • reposting rate
  • temporal patterns[21]
  • sentiment expression
  • followers-to-friends ratio[22]
  • length of user names
  • variability in (re)posted messages
  • engagement rate (like/followers rate)
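Several of these criteria can be approximated with simple heuristics. The sketch below is illustrative only: the `Account` fields and the signals computed are assumptions for the example, not part of any published detector, and real systems combine many such features statistically rather than applying fixed thresholds.

```python
from dataclasses import dataclass

@dataclass
class Account:
    followers: int          # accounts following this user
    friends: int            # accounts this user follows
    tweets: int             # total posts
    account_age_days: int
    likes_received: int

def heuristic_signals(a: Account) -> dict:
    """Compute three of the signals listed above as raw numbers."""
    return {
        # superhuman posting rates stand out
        "posts_per_day": a.tweets / max(a.account_age_days, 1),
        # many bots follow far more accounts than follow them back (or vice versa)
        "followers_to_friends": a.followers / max(a.friends, 1),
        # purchased or automated followings produce very little engagement
        "engagement_rate": a.likes_received / max(a.followers, 1),
    }

# Example: an account posting 500 times a day with almost no engagement
suspicious = Account(followers=20000, friends=50, tweets=150000,
                     account_age_days=300, likes_received=40)
print(heuristic_signals(suspicious))
```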

Botometer[23] (formerly BotOrNot) is a public web service that checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. The system leverages over a thousand features.[24][9] An active method that worked well for detecting early spam bots was to set up honeypot accounts that posted obviously nonsensical content, which was then mindlessly reposted (retweeted) by bots.[25] However, recent studies[3] show that bots evolve quickly and that detection methods must be updated constantly, because otherwise they may become useless after a few years.
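The honeypot approach can be sketched in a few lines. The account and post identifiers below are invented for the example: the idea is simply that no human would plausibly share the trap content, so any account that reposts it can be flagged.

```python
# Posts published by the honeypot accounts: deliberately nonsensical
# content that a human would have no reason to share.
honeypot_post_ids = {"hp-001", "hp-002"}

# (account_id, reposted_post_id) pairs, e.g. harvested from a retweet stream
observed_reposts = [
    ("alice",    "news-17"),
    ("bot_4711", "hp-001"),
    ("bob",      "news-17"),
    ("bot_4711", "hp-002"),
    ("spam_99",  "hp-002"),
]

# Flag every account that reposted honeypot content
flagged = sorted({acct for acct, post in observed_reposts
                  if post in honeypot_post_ids})
print(flagged)  # ['bot_4711', 'spam_99']
```

As the surrounding text notes, this only catches bots that repost indiscriminately; more selective bots evade it, which is one reason detection methods age quickly.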

References


  1. ^ Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (July 2016). "The Rise of Social Bots". Communications of the ACM. 59 (7): 96. doi:10.1145/2818717. Retrieved 27 February 2020.
  2. ^ Rao, Sanjeev; Verma, Anil Kumar; Bhatia, Tarunpreet (2021-12-30). "A review on social spam detection: Challenges, open issues, and future directions". Expert Systems with Applications. 186: 115742. doi:10.1016/j.eswa.2021.115742.
  3. ^ a b Grimme, Christian; Preuss, Mike; Adam, Lena; Trautmann, Heike (2017). "Social Bots: Human-Like by Means of Human Control?". Big Data. 5 (4): 279–293. arXiv:1706.07624. doi:10.1089/big.2017.0044. PMID 29235915. S2CID 10464463.
  4. ^ a b "What is socialbot? - Definition from". Retrieved 2016-12-16.
  5. ^
  6. ^ a b Bessi, Alessandro; Ferrara, Emilio (2016). "Social Bots Distort the 2016 US Presidential Election Online Discussion". First Monday. 21 (11).
  7. ^ Shao, Chengcheng; Giovanni Luca Ciampaglia; Onur Varol; Kaicheng Yang; Alessandro Flammini; Filippo Menczer (2018). "The spread of low-credibility content by social bots". Nature Communications. 9 (1): 4787. arXiv:1707.07592. Bibcode:2018NatCo...9.4787S. doi:10.1038/s41467-018-06930-7. PMC 6246561. PMID 30459415.
  8. ^ Ratkiewicz, Jacob; Michael Conover; Mark Meiss; Bruno Gonçalves; Alessandro Flammini; Filippo Menczer (2011). "Detecting and Tracking Political Abuse in Social Media". Proc. 5th International AAAI Conf. on Web and Social Media (ICWSM).
  9. ^ a b Varol, Onur; Emilio Ferrara; Clayton A. Davis; Filippo Menczer; Alessandro Flammini (2017). "Online Human-Bot Interactions: Detection, Estimation, and Characterization". Proc. International AAAI Conf. on Web and Social Media (ICWSM).
  10. ^ Alessandro Bessi and Emilio Ferrara (2016-11-07). "Social bots distort the 2016 U.S. Presidential election online discussion". First Monday.
  11. ^ a b Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (2016). "The Rise of Social Bots". Communications of the ACM. 59 (7): 96–104. arXiv:1407.5225. doi:10.1145/2818717. S2CID 1914124.
  12. ^ "China kills AI chatbots after they start praising US, criticising communists". Yahoo! News. August 5, 2017.
  13. ^ Hossain, Yousuf; Hossain, Ishan Arefin; Banik, Mridul; Arefin Hossain, Ishan; Chakrabarty, Amitabha (June 2018). "Embedded System based Bangla Intelligent Social Virtual Robot with Sentiment Analysis". 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (IcIVPR). Kitakyushu, Japan: IEEE: 322–327. doi:10.1109/ICIEV.2018.8641023. hdl:10361/10174. ISBN 978-1-5386-5163-6. S2CID 61809488.
  14. ^ Ferrara, Emilio (2018). "Measuring social spam and the effect of bots on information diffusion in social media". Complex Spreading Phenomena in Social Systems. Computational Social Sciences. pp. 229–255. arXiv:1708.08134. doi:10.1007/978-3-319-77332-2_13. ISBN 978-3-319-77331-5. S2CID 36750281.
  15. ^ "Automation rules". Retrieved 2018-11-15.
  16. ^ "Terms of Use • Instagram". Retrieved 2018-11-15.
  17. ^ "How robots could shape Germany's political future". The Local. 21 November 2016. "Social Bots" were the sinister cyber friend in the US elections who didn't actually exist. Could they also shape how Germans vote next year?
  18. ^ "The rise of no".
  19. ^ Lutz Finger (Feb 17, 2015). "Do Evil - The Business Of Social Media Bots".
  20. ^ Frederick, Kara (2019). "The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight". Center for a New American Security. Cite journal requires |journal= (help)
  21. ^ Mazza, Michele; Stefano Cresci; Marco Avvenuti; Walter Quattrociocchi; Maurizio Tesconi (2019). "RTbust: Exploiting Temporal Patterns for Botnet Detection on Twitter". In Proceedings of the 10th ACM Conference on Web Science (WebSci '19). arXiv:1902.04506. doi:10.1145/3292522.3326015.
  22. ^ "How to Find and Remove Fake Followers from Twitter and Instagram : Social Media Examiner".
  23. ^ "Botometer".
  24. ^ Davis, Clayton A.; Onur Varol; Emilio Ferrara; Alessandro Flammini; Filippo Menczer (2016). "BotOrNot: A System to Evaluate Social Bots". Proc. WWW Developers Day Workshop. arXiv:1602.00975. doi:10.1145/2872518.2889302.
  25. ^ "How to Spot a Social Bot on Twitter". 2014-07-28. Social bots are sending a significant amount of information through the Twittersphere. Now there’s a tool to help identify them
