Social bot

From Wikipedia, the free encyclopedia
Software agent that communicates on social media
Not to be confused with social robot.

A social bot refers to a fully or partially automated social media account designed to perform most regular users' actions, such as liking, posting content, and chatting with other users.[1] Although their levels of autonomy vary, and often include a human-in-the-loop, social bots can use artificial intelligence to perform social media actions and can use large language models to mimic human dialogue. Social bots can operate alone or in groups that coordinate messaging as part of a network of coordinated inauthentic behavior.[2] Social bots are often used to perform ad fraud by artificially boosting viewership and engagement metrics[3] and to spread disinformation on social media.[4]

Uses


Social bots are used for a large number of purposes on a variety of social media platforms, including Twitter, Instagram, Facebook, and YouTube. One common use of social bots is to inflate a social media user's apparent popularity, usually by artificially manipulating their engagement metrics with large volumes of fake likes, reposts, or replies. Social bots can similarly be used to artificially inflate a user's follower count with fake followers, creating a false perception of a larger and more influential online following than is the case.[5] The use of social bots to create the impression of a large social media influence allows individuals, brands, and organizations to attract a higher number of human followers and boost their online presence. Fake engagement can be bought and sold in the black market of social media engagement.[6]

Corporations typically use automated customer service agents on social media to affordably manage high levels of support requests.[7] Social bots are used to send automated responses to users’ questions, sometimes prompting the user to private message the support account with additional information. The increased use of automated support bots and virtual assistants has led to some companies laying off customer-service staff.[8]

Social bots are also often used to influence public opinion. Autonomous bot accounts can flood social media with large numbers of posts expressing support for certain products, companies, or political campaigns, creating the impression of organic grassroots support.[9] This can create a false perception of the number of people who support a certain position, which may also have effects on the direction of stock prices or on elections.[10][11] Messages with similar content can also influence fads or trends.[12]

Many social bots are also used to amplify phishing attacks. These malicious bots are used to trick a social media user into giving up their passwords or other personal data. This is usually accomplished by posting links that claim to lead to news articles but in fact direct users to malicious websites containing malware.[13] Scammers often use URL shortening services such as TinyURL and bit.ly to disguise a link's destination domain, increasing the likelihood that a user clicks the malicious link.[14] The presence of fake social media followers and high levels of engagement helps convince the victim that the scammer is in fact a trusted user.
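
The disguise offered by a shortened link can be undone by following its redirect chain before visiting the page. The sketch below is a minimal illustration in Python using the requests library; the shortened URL shown is hypothetical, and the approach is a generic one rather than a tool of any particular platform.

    import requests  # third-party HTTP client

    def resolve_short_url(short_url: str, timeout: float = 10.0) -> str:
        """Follow redirects from a shortened link and return the final URL."""
        # A HEAD request walks the redirect chain without downloading the page body.
        response = requests.head(short_url, allow_redirects=True, timeout=timeout)
        return response.url

    # Hypothetical shortened link; its real destination is only revealed after expansion.
    print("Final destination:", resolve_short_url("https://bit.ly/example-link"))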

Social bots can be a tool for computational propaganda.[15] Bots can also be used for algorithmic curation, algorithmic radicalization, and/or influence-for-hire, a term that refers to the selling of an account on social media platforms.

History


Bots have coexisted with computer technology since the earliest days of computing. Social bots have their roots in the 1950s with Alan Turing, whose work on machine intelligence led to the development of the Turing test. The following decades saw further progress towards the goal of creating programs capable of mimicking human behavior, notably with Joseph Weizenbaum's creation of ELIZA.[16] Considered to be one of the first chatbots, ELIZA could simulate natural conversations with human users through pattern matching. Its most famous script was DOCTOR, a simulation of a Rogerian psychotherapist programmed to chat with patients and respond to questions.[17]

With the growth of social media platforms in the early 2000s, these bots could be used to interact with much larger user groups in an inconspicuous manner. Early instances of autonomous agents on social media could be found on sites like MySpace, with social bots being used by marketing firms to inflate activity on a user's page in an effort to make them appear more popular.[18]

Social bots have been observed on a large variety of social media websites, with Twitter being one of the most widely observed examples. The creation of Twitter bots is generally against the site's terms of service when used to post spam or to automatically like and follow other users, but some degree of automation using Twitter's API may be permitted if used for "entertainment, informational, or novelty purposes."[19] Other platforms such as Reddit and Discord also allow for the use of social bots as long as they are not used to violate policies regarding harmful content and abusive behavior. Social media platforms have developed their own automated tools to filter out messages that come from bots, although they cannot detect all bot messages.[20]

Legal regulation

Image caption: Twitter bots posting similar messages during the 2016 United States elections

Due to the difficulty of recognizing social bots and separating them from "eligible" automation via social media APIs, it is unclear how legal regulation can be enforced. Social bots are expected to play a role in shaping public opinion by autonomously acting as influencers. Some social bots have been used to rapidly spread misinformation, manipulate stock markets, influence opinion on companies and brands, promote political campaigns, and engage in malicious phishing campaigns.[21]

In the United States, some states have started to implement legislation in an attempt to regulate the use of social bots. In 2019, California passed the Bolstering Online Transparency Act (the B.O.T. Act), which makes it unlawful to use automated software to appear indistinguishable from a human for the purpose of influencing a social media user's purchasing and voting decisions.[22] Other states such as Utah and Colorado have passed similar bills to restrict the use of social bots.[23]

The Artificial Intelligence Act (AI Act) in the European Union is the first comprehensive law governing the use of artificial intelligence.[24] The law requires transparency in AI to prevent users from being tricked into believing they are communicating with another human. AI-generated content on social media must be clearly marked as such, preventing social bots from using AI in a manner that mimics human behavior.[25]

Detection


The first generation of bots could sometimes be distinguished from real users by their often superhuman capacity to post messages. Later developments have succeeded in imprinting more "human" activity and behavioral patterns in the agents. With enough bots, it might even be possible to achieve artificial social proof. To unambiguously detect social bots, a variety of criteria[26] must be applied together using pattern detection techniques (a short sketch combining several of these signals follows the list), some of which are:[27]

  • profile pictures showing cartoon figures, or random real users' pictures captured for the purpose (identity fraud)
  • reposting rate
  • temporal posting patterns[28]
  • sentiment expression
  • followers-to-friends ratio[29]
  • length of user names
  • variability in (re)posted messages
  • engagement rate (likes-to-followers ratio)
  • analysis of the time series of social media posts[30]
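
The sketch below illustrates how several of these criteria can be combined into a simple heuristic score. It is only an illustration in Python: the account fields and thresholds are invented for the example and do not come from any published detector or platform API.

    from dataclasses import dataclass

    @dataclass
    class Account:
        """Hypothetical snapshot of a social media account's public metadata."""
        followers: int
        friends: int            # accounts this user follows
        posts_per_day: float
        repost_fraction: float  # share of posts that are reposts/retweets
        likes_received: int

    def bot_score(acct: Account) -> float:
        """Combine a few heuristic signals into a rough 0..1 'bot-likeness' score."""
        score = 0.0
        # Very high posting rates are hard for humans to sustain.
        if acct.posts_per_day > 50:
            score += 0.3
        # Accounts that mostly repost others' content rather than writing their own.
        if acct.repost_fraction > 0.9:
            score += 0.3
        # Following far more accounts than follow back is a common spam pattern.
        if acct.friends > 0 and acct.followers / acct.friends < 0.1:
            score += 0.2
        # Low engagement relative to follower count suggests purchased followers.
        if acct.followers > 1000 and acct.likes_received / acct.followers < 0.01:
            score += 0.2
        return min(score, 1.0)

    suspect = Account(followers=4000, friends=90000, posts_per_day=120,
                      repost_fraction=0.97, likes_received=30)
    print(f"bot-likeness score: {bot_score(suspect):.2f}")  # 1.00 for this invented account

Real systems such as Botometer combine far more features and use trained classifiers rather than hand-picked thresholds.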

Social bots are becoming increasingly difficult to detect and understand. Their human-like and constantly changing behavior, together with the sheer volume of bots across every platform, contributes to the challenge of removing them.[31] Social media sites like Twitter are among the most affected, with CNBC reporting that up to 48 million of the 319 million users (roughly 15%) were bots in 2017.[32]

Botometer[33] (formerly BotOrNot) is a public web service that checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. The system leverages over a thousand features.[34][35] One method for detecting early spam bots was to set up honeypot accounts that post nonsensical content, which may get reposted (retweeted) by bots.[36] However, bots evolve quickly, and detection methods have to be updated constantly, or they may become useless after a few years.[37] One method uses Benford's law, which predicts the frequency distribution of significant leading digits, to detect malicious bots online; this approach was first introduced at the University of Pretoria in 2020.[38] Another method is artificial-intelligence-driven detection, with sub-categories such as active learning loop flow, feature engineering, unsupervised learning, supervised learning, and correlation discovery.[31]
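
As a rough illustration of the Benford's law idea, the sketch below compares the leading-digit distribution of a set of counts (here, hypothetical follower counts) with Benford's expected distribution using a chi-squared statistic. It is a schematic of the general technique, not the published University of Pretoria method.

    import math

    # Benford's expected probability for each leading digit 1..9.
    BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    def leading_digit(n: int) -> int:
        return int(str(abs(n))[0])

    def benford_chi_squared(counts) -> float:
        """Chi-squared distance between observed leading-digit frequencies and Benford's law."""
        values = [c for c in counts if c > 0]
        observed = {d: 0 for d in range(1, 10)}
        for v in values:
            observed[leading_digit(v)] += 1
        n = len(values)
        chi2 = 0.0
        for d in range(1, 10):
            expected = BENFORD[d] * n
            chi2 += (observed[d] - expected) ** 2 / expected
        return chi2  # large values suggest the counts deviate from Benford's law

    # Hypothetical follower counts of the accounts a suspect user follows.
    follower_counts = [12, 1340, 922, 18, 27500, 31, 44, 150, 2, 1800000]
    print(f"chi-squared vs. Benford: {benford_chi_squared(follower_counts):.2f}")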

Some bot operations work together in a synchronized way. For example, ISIS used Twitter to amplify its content through numerous orchestrated accounts, which pushed items onto trending news lists,[39] thus further amplifying the selected news to a larger audience.[40] Such synchronized bot accounts can be used as a tool of propaganda as well as of stock market manipulation.[41]
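
Synchronized posting of near-identical messages is one detectable trace of such coordination. The sketch below is a simplified illustration: it assumes a plain list of (account, text, timestamp) records, which is not any platform's actual data format, and flags messages pushed by several distinct accounts within a short time window.

    from collections import defaultdict

    def normalize(text: str) -> str:
        """Crude normalization: lowercase and collapse whitespace."""
        return " ".join(text.lower().split())

    def find_coordinated(posts, min_accounts=3, window_seconds=600):
        """posts: iterable of (account_id, text, unix_timestamp) tuples.
        Returns messages posted by many distinct accounts within a short window."""
        by_text = defaultdict(list)
        for account, text, ts in posts:
            by_text[normalize(text)].append((ts, account))
        flagged = []
        for text, items in by_text.items():
            items.sort()
            accounts = {acct for _, acct in items}
            span = items[-1][0] - items[0][0]
            if len(accounts) >= min_accounts and span <= window_seconds:
                flagged.append((text, len(accounts), span))
        return flagged

    # Hypothetical example: three accounts pushing the same message within two minutes.
    sample = [
        ("bot_a", "Breaking: Stock XYZ set to soar!", 1000000),
        ("bot_b", "breaking: stock XYZ set to soar!", 1000050),
        ("bot_c", "Breaking:  stock xyz set to soar!", 1000120),
        ("human_1", "Nice weather today", 1000200),
    ]
    print(find_coordinated(sample))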

Platforms


Instagram


Instagram reached a billion monthly active users in June 2018,[42] and of those accounts, an estimated 10% were being run by automated social bots. While malicious propaganda-posting bots are still popular, many individual users use engagement bots to propel themselves to a false virality, making them seem more popular on the app. These engagement bots can like, watch, follow, and comment on users' posts.[43]

Around the time the platform reached the 1 billion monthly user mark, Facebook (the parent company of Instagram and WhatsApp) planned to hire 10,000 people to provide additional security for its platforms, including combatting the rising number of bots and malicious posts.[44] Due to increased security on the platform and the detection methods used by Instagram, some botting companies have reported issues with their services, because Instagram imposes interaction limit thresholds based on past and current app usage, and many payment and email platforms deny the companies access to their services, preventing potential clients from purchasing them.[45]

Twitter

Main article: Twitter bot

Twitter's bot problem stems from the ease of creating and maintaining bot accounts. The simplicity of account creation and the many APIs that allow for complete automation of accounts have led large numbers of organizations and individuals to use these tools to push their own agendas.[32][46] CNBC reported that as many as 48 million of Twitter's 319 million users in 2017, roughly 15%, were bots.[32] As of July 7, 2022, Twitter claimed that it removes 1 million spam bots from its platform every day.[47]

Some bots are used to automate scheduled tweets, download videos, set reminders, and send warnings of natural disasters.[48] These are examples of bot accounts, but Twitter's API allows real accounts (of individuals or organizations) to use certain levels of bot automation and even encourages such use to improve user experiences and interactions.[49]
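
As an illustration of this kind of sanctioned automation, the following sketch posts a single automated status update through the Twitter API using the third-party tweepy library (version 4.x assumed); the credentials are placeholders and the example is not taken from Twitter's own documentation.

    import tweepy  # third-party Twitter/X API client

    # Placeholder credentials; a real application obtains these from the developer portal.
    client = tweepy.Client(
        consumer_key="YOUR_API_KEY",
        consumer_secret="YOUR_API_SECRET",
        access_token="YOUR_ACCESS_TOKEN",
        access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
    )

    # Post an automated status update, e.g. a scheduled announcement or weather warning.
    response = client.create_tweet(text="Automated reminder: heavy rain expected tonight.")
    print("Created tweet with id:", response.data["id"])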

Meta


In 2025, Meta announced it would be creating an AI product that helps users create AI characters on Instagram and Facebook, allowing these characters to have bios and profile pictures and to generate and share "AI-powered content" on the platforms.[50][51][52] Bot accounts managed by Meta began to be identified by the public around January 1, 2025,[53][54] with social media users noting that they appeared to be unblockable by human accounts and carried blue ticks indicating they had been verified by Meta as trustworthy profiles.[55]

SocialAI


SocialAI, an app launched on September 18, 2024, was created for the sole purpose of chatting with AI bots, without any human interaction.[56] Its creator is Michael Sayman, a former product lead at Google who also worked at Facebook, Roblox, and Twitter.[57] An article on the Ars Technica website linked SocialAI to the dead Internet theory.[58]


References

  1. ^Diaz Ruiz, Carlos (March 14, 2025). "Bots Talking to Bots: Synthetic Media, AI-Generated Content, and the "Dead Internet" Conspiracy Theory".Market-Oriented Disinformation Research: Digital Advertising, Disinformation and Fake News on Social Media (1 ed.). London: Routledge. p. 198.doi:10.4324/9781003506676-9.ISBN 978-1-003-50667-6.
  2. ^Keller, Franziska B.; Schoch, David; Stier, Sebastian; Yang, JungHwan (March 3, 2020)."Political Astroturfing on Twitter: How to Coordinate a Disinformation Campaign".Political Communication.37 (2):256–280.doi:10.1080/10584609.2019.1661888.ISSN 1058-4609.
  3. ^Lindquist, Johan (May 26, 2021), Woolgar, Steve; Vogel, Else; Moats, David; Helgesson, Claes-Fredrik (eds.),"Good Enough Imposters: The Market for Instagram Followers in Indonesia and Beyond",The Imposter as Social Theory, Bristol University Press, pp. 269–292,doi:10.51952/9781529213102.ch012,ISBN 978-1-5292-1310-2, retrievedNovember 11, 2025
  4. ^Diaz Ruiz, Carlos (April 1, 2025)."Disinformation on digital media platforms: A market-shaping approach".New Media & Society.27 (4):2188–2211.doi:10.1177/14614448231207644.ISSN 1461-4448.
  5. ^Zhou, Liying; Jin, Fei; Wu, Banggang; Chen, Zhi; Wang, Cheng Lu (March 1, 2023)."Do fake followers mitigate influencers' perceived influencing power on social media platforms? The mere number effect and boundary conditions".Journal of Business Research.158 113589.doi:10.1016/j.jbusres.2022.113589.ISSN 0148-2963.
  6. ^Omena, J. J., Chao, J., Pilipets, E., Kollanyi, B., Zilli, B., Flaim, G., ... & Del Nero, S. (2019)."Bots and the black market of social media engagement."Digital Methods Initiative Wiki. Retrieved March 20, 2025.
  7. ^Xu, Anbang; Liu, Zhe; Guo, Yufan; Sinha, Vibha; Akkiraju, Rama (May 2, 2017)."A New Chatbot for Customer Service on Social Media".Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. CHI '17. New York, NY, USA: Association for Computing Machinery. pp. 3506–3510.doi:10.1145/3025453.3025496.ISBN 978-1-4503-4655-9.
  8. ^"Rogers lays off customer-service staff in multiple provinces".The Globe and Mail. February 20, 2025. RetrievedMarch 20, 2025.
  9. ^"The influence of social bots".www.akademische-gesellschaft.com. RetrievedMarch 1, 2022.
  10. ^"What is a social media bot? | Social media bot definition".www.cloudflare.com. RetrievedMarch 20, 2025.
  11. ^Hu, Charlotte."How AI Bots Could Sabotage 2024 Elections around the World".Scientific American. RetrievedMarch 20, 2025.
  12. ^Frederick, Kara (2019).The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight (Report). Center for a New American Security.
  13. ^Shafahi, Mohammad; Kempers, Leon; Afsarmanesh, Hamideh (December 2016). "Phishing through social bots on Twitter".2016 IEEE International Conference on Big Data (Big Data). pp. 3703–3712.doi:10.1109/BigData.2016.7841038.ISBN 978-1-4673-9005-7.
  14. ^Padmanabhan, Sankar; Maramreddy, Prema; Cyriac, Marykutty (2020)."Spam Detection in Link Shortening Web Services Through Social Network Data Analysis". In Raju, K. Srujan; Senkerik, Roman; Lanka, Satya Prasad; Rajagopal, V. (eds.).Data Engineering and Communication Technology. Advances in Intelligent Systems and Computing. Vol. 1079. Singapore: Springer Nature. pp. 103–118.doi:10.1007/978-981-15-1097-7_9.ISBN 978-981-15-1097-7.
  15. ^Haile, Yirgalem A (December 22, 2024)."The theoretical wedding of computational propaganda and information operations: Unraveling digital manipulation in conflict zones".New Media & Society 14614448241302319.doi:10.1177/14614448241302319.ISSN 1461-4448.
  16. ^Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (June 24, 2016)."The rise of social bots".Commun. ACM.59 (7):96–104.arXiv:1407.5225.doi:10.1145/2818717.ISSN 0001-0782.
  17. ^Bassett, Caroline (December 1, 2019)."The computational therapeutic: exploring Weizenbaum's ELIZA as a history of the present".AI & Society.34 (4):803–812.doi:10.1007/s00146-018-0825-9.ISSN 1435-5655.
  18. ^Errett, Joshua (June 12, 2008)."Robots invade MySpace - NOW Magazine".NOW Toronto. RetrievedMarch 24, 2025.
  19. ^"Rules for Posting automated Tweets with Twitter Bots - Digital Inspiration".digitalinspiration.com. RetrievedMarch 24, 2025.
  20. ^Efthimion, Phillip; Payne, Scott; Proferes, Nicholas (July 20, 2018)."Supervised Machine Learning Bot Detection Techniques to Identify Social Twitter Bots".SMU Data Science Review.1 (2).
  21. ^Gorwa, Robert; Guilbeault, Douglas (2020)."Unpacking the Social Media Bot: A Typology to Guide Research and Policy".Policy & Internet.12 (2):225–248.arXiv:1801.06863.doi:10.1002/poi3.184.ISSN 1944-2866.
  22. ^Julius Cerniauskas (June 11, 2024)."The legal and ethical implications of sharing the web with bots".TechRadar. RetrievedMarch 24, 2025.
  23. ^"State Lawmakers Propose Regulating Chatbots".multistate.ai. RetrievedMarch 24, 2025.
  24. ^Butt, Junaid Sattar (March 2024)."Analytical Study of the World's First EU Artificial Intelligence (AI) Act, 2024".International Journal of Research Publication and Reviews.5 (3):7343–7364.doi:10.55248/gengpi.5.0324.0914 – via ResearchGate.
  25. ^"Filling social media with indistinguishable AI-bots is illegal with EU AI Act".Security, Privacy & Tech Inquiries. January 4, 2025. RetrievedMarch 24, 2025.
  26. ^Dewangan, Madhuri; Rishabh Kaushal (2016)."SocialBot: Behavioral Analysis and Detection".International Symposium on Security in Computing and Communication.doi:10.1007/978-981-10-2738-3_39.
  27. ^Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (2016)."The Rise of Social Bots".Communications of the ACM.59 (7):96–104.arXiv:1407.5225.doi:10.1145/2818717.S2CID 1914124.
  28. ^Mazza, Michele; Stefano Cresci; Marco Avvenuti; Walter Quattrociocchi; Maurizio Tesconi (2019). "RTbust: Exploiting Temporal Patterns for Botnet Detection on Twitter".In Proceedings of the 10th ACM Conference on Web Science (WebSci '19).arXiv:1902.04506.doi:10.1145/3292522.3326015.
  29. ^"How to Find and Remove Fake Followers from Twitter and Instagram : Social Media Examiner".
  30. ^Weishampel, Anthony;Staicu, Ana-Maria; Rand, William (March 1, 2023)."Classification of social media users with generalized functional data analysis".Computational Statistics & Data Analysis.179 107647.doi:10.1016/j.csda.2022.107647.ISSN 0167-9473.S2CID 253359560.
  31. ^abZago, Mattia; Nespoli, Pantaleone; Papamartzivanos, Dimitrios; Perez, Manuel Gil; Marmol, Felix Gomez; Kambourakis, Georgios; Perez, Gregorio Martinez (August 2019). "Screening Out Social Bots Interference: Are There Any Silver Bullets?".IEEE Communications Magazine.57 (8):98–104.Bibcode:2019IComM..57h..98Z.doi:10.1109/MCOM.2019.1800520.ISSN 1558-1896.S2CID 201623201.
  32. ^abcNewberg, Michael (March 10, 2017)."As many as 48 million Twitter accounts aren't people, says study".CNBC. RetrievedNovember 22, 2022.
  33. ^"Botometer".
  34. ^Davis, Clayton A.; Onur Varol; Emilio Ferrara; Alessandro Flammini; Filippo Menczer (2016). "BotOrNot: A System to Evaluate Social Bots".Proc. WWW Developers Day Workshop.arXiv:1602.00975.doi:10.1145/2872518.2889302.
  35. ^Varol, Onur; Emilio Ferrara; Clayton A. Davis; Filippo Menczer; Alessandro Flammini (2017)."Online Human-Bot Interactions: Detection, Estimation, and Characterization".Proc. International AAAI Conf. on Web and Social Media (ICWSM).
  36. ^"How to Spot a Social Bot on Twitter". technologyreview.com. July 28, 2014.Social bots are sending a significant amount of information through the Twittersphere. Now there's a tool to help identify them
  37. ^Grimme, Christian; Preuss, Mike; Adam, Lena; Trautmann, Heike (2017). "Social Bots: Human-Like by Means of Human Control?".Big Data.5 (4):279–293.arXiv:1706.07624.doi:10.1089/big.2017.0044.PMID 29235915.S2CID 10464463.
  38. ^Mbona, Innocent; Eloff, Jan H. P. (January 1, 2022)."Feature selection using Benford's law to support detection of malicious social media bots".Information Sciences.582:369–381.doi:10.1016/j.ins.2021.09.038.hdl:2263/82899.ISSN 0020-0255.S2CID 240508186.
  39. ^Giummole, Federica; Orlando, Salvatore; Tolomei, Gabriele (2013)."Trending Topics on Twitter Improve the Prediction of Google Hot Queries".2013 International Conference on Social Computing. IEEE. pp. 39–44.doi:10.1109/socialcom.2013.12.ISBN 978-0-7695-5137-1.S2CID 15657978.
  40. ^Badawy, Adam; Ferrara, Emilio (April 3, 2018)."The rise of Jihadist propaganda on social networks".Journal of Computational Social Science.1 (2):453–470.arXiv:1702.02263.doi:10.1007/s42001-018-0015-z.ISSN 2432-2717.S2CID 13122114.
  41. ^Sela, Alon; Milo, Orit; Kagan, Eugene; Ben-Gal, Irad (November 15, 2019)."Improving information spread by spreading groups".Online Information Review.44 (1):24–42.doi:10.1108/oir-08-2018-0245.ISSN 1468-4527.S2CID 211051143.
  42. ^Constine, Josh (June 20, 2018)."Instagram hits 1 billion monthly users, up from 800M in September".TechCrunch. RetrievedNovember 24, 2022.
  43. ^"Instagram Promotion Service (Real Marketing) – UseViral". August 15, 2021. RetrievedNovember 24, 2022.
  44. ^"Instagram's Growing Bot Problem".The Information. July 18, 2018. RetrievedNovember 24, 2022.
  45. ^Morales, Eduardo (March 8, 2022)."Instagram Bots in 2021 — Everything You Need To Know".Medium. RetrievedNovember 24, 2022.
  46. ^Gilani, Zafar; Farahbakhsh, Reza; Crowcroft, Jon (April 3, 2017)."Do Bots impact Twitter activity?".Proceedings of the 26th International Conference on World Wide Web Companion - WWW '17 Companion. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee. pp. 781–782.doi:10.1145/3041021.3054255.ISBN 978-1-4503-4914-7.S2CID 33003478.
  47. ^Dang, Sheila; Paul, Katie (July 7, 2022)."Twitter says it removes over 1 million spam accounts each day".Reuters. RetrievedNovember 23, 2022.
  48. ^Azhar, Huzaifa (December 10, 2021)."10 Best Twitter Bots You Should Follow in 2022 - TechPP".techpp.com. RetrievedNovember 24, 2022.
  49. ^"Twitter's automation development rules | Twitter Help".help.twitter.com. RetrievedNovember 24, 2022.
  50. ^Murphy, Hannah; Criddle, Cristina (December 27, 2024)."Meta envisages social media filled with AI-generated users".Financial Times. RetrievedJanuary 1, 2025.
  51. ^Herrman, John (December 31, 2024)."Meta's Big Bet on Bots".Intelligencer. RetrievedJanuary 2, 2025.
  52. ^Kevin Okemwa (December 30, 2024).""They'll have bio's and profile pictures": Meta is gearing up to unleash a wave of AI-powered personas, set to redefine Facebook's social engagement".Windows Central. RetrievedJanuary 2, 2025.
  53. ^"Threads".www.threads.net. RetrievedJanuary 2, 2025.
  54. ^Sato, Mia (January 3, 2025)."Meta's AI-generated bot profiles are not being received well".The Verge. RetrievedFebruary 12, 2025.
  55. ^Growcoot, Matt (January 6, 2025)."Meta Purges AI-Generated Facebook and Instagram Accounts Amid Backlash".PetaPixel. RetrievedFebruary 12, 2025.
  56. ^Davis, Wes (September 17, 2024)."SocialAI: we tried the Twitter clone where no other humans are allowed".The Verge. RetrievedFebruary 12, 2025.
  57. ^Ghosh, Shona."The 24-year-old whiz kid who was hired by Mark Zuckerberg then Google is leaving to work at Roblox".Business Insider. RetrievedFebruary 12, 2025.
  58. ^Edwards, Benj (September 18, 2024).""Dead Internet theory" comes to life with new AI-powered social media app".Ars Technica. RetrievedFebruary 12, 2025.
