Algorithmic radicalization

From Wikipedia, the free encyclopedia
Radicalization via social media algorithms

Algorithmic radicalization is the concept that recommender algorithms on popular social media sites, such as YouTube and Facebook, drive users toward progressively more extreme content over time, leading to the development of radicalized extremist political views. These algorithms record user interactions, such as likes, dislikes, and time spent watching content, in order to generate an endless stream of media designed to sustain user engagement. Echo chamber channels have been shown to deepen the polarization of users by reinforcing their media preferences and validating their existing beliefs.[1][2][3][4]
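
As an illustration of the mechanism described above, the following is a minimal sketch, in Python, of an engagement-optimized recommender. The field names, weights, and scoring rule are invented for illustration and do not reflect any platform's actual system.

    # Hypothetical sketch: rank candidate topics by signals mined from a user's
    # interaction history. Weights and fields are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Interaction:
        topic: str
        liked: bool
        watch_seconds: float

    def topic_affinity(history: list[Interaction]) -> dict[str, float]:
        """Aggregate likes and watch time into a per-topic engagement score."""
        scores: dict[str, float] = {}
        for event in history:
            signal = event.watch_seconds / 60.0 + (2.0 if event.liked else 0.0)
            scores[event.topic] = scores.get(event.topic, 0.0) + signal
        return scores

    def rank_candidates(candidates: list[str], history: list[Interaction]) -> list[str]:
        """Order candidate topics by predicted engagement, highest first."""
        affinity = topic_affinity(history)
        return sorted(candidates, key=lambda t: affinity.get(t, 0.0), reverse=True)

Because the only objective in such a sketch is predicted engagement, whatever a user has lingered on, however extreme, is what gets surfaced next.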

Algorithmic radicalization remains a controversial phenomenon, as it is often not in the best interest of social media companies to remove echo chamber channels.[5][6] The extent to which recommender algorithms are actually responsible for radicalization remains disputed, and studies have reached contradictory conclusions about whether algorithms promote extremist content.

Social media echo chambers and filter bubbles


Social media platforms learn the interests and likes of users and modify their feeds accordingly to keep them engaged and scrolling, a phenomenon known as a filter bubble.[7] An echo chamber forms when users encounter beliefs that magnify or reinforce their own and cluster into a group of like-minded users in a closed system.[1] Echo chambers spread information without exposure to opposing beliefs and can lead to confirmation bias. According to group polarization theory, an echo chamber can potentially push users and groups toward more extreme, radicalized positions.[8] According to the National Library of Medicine, "Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives. Furthermore, when polarization is high, misinformation quickly proliferates."[8]
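
A toy simulation, under deliberately simplified assumptions (opinions on a one-dimensional −1 to +1 scale, a feed that only surfaces items near the user's current view, and an engagement boost for the most extreme item shown), can illustrate the drift toward extremes that group polarization theory predicts. All parameters here are hypothetical.

    # Toy filter-bubble model: only like-minded items are shown, the most
    # extreme of them gets the top slot, and the user's view shifts toward it.
    # Parameters are illustrative, not empirical.
    import random

    def simulate_drift(steps: int = 200, bubble_width: float = 0.3,
                       learning_rate: float = 0.05, seed: int = 1) -> float:
        rng = random.Random(seed)
        opinion = 0.1  # start mildly positive on a -1..+1 scale
        for _ in range(steps):
            items = [rng.uniform(-1.0, 1.0) for _ in range(20)]
            # Filter bubble: only items close to the current view get through.
            shown = [x for x in items if abs(x - opinion) < bubble_width]
            if not shown:
                continue
            # Engagement optimization: the most extreme item shown ranks first,
            # and the user's opinion moves a small step toward it.
            top = max(shown, key=abs)
            opinion += learning_rate * (top - opinion)
            opinion = max(-1.0, min(1.0, opinion))
        return opinion

    print(simulate_drift())  # ends far from the neutral starting point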

By site


Facebook


Facebook's algorithm focuses on recommending content that makes the user want to interact. It ranks content by prioritizing popular posts from friends, viral content, and sometimes divisive content. Each feed is personalized to the user's specific interests, which can lead users into an echo chamber of troublesome content.[9] Users can find the list of interests the algorithm uses on the "Your ad Preferences" page. According to a Pew Research study, 74% of Facebook users did not know that list existed until they were directed to it during the study.[10] It is also relatively common for Facebook to assign political labels to its users. In recent years,[when?] Facebook has used artificial intelligence to change the content users see in their feeds and what is recommended to them. A leaked document known as The Facebook Files revealed that Facebook's AI system prioritizes user engagement over everything else, and that controlling these AI systems has proven difficult.[11]
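
A hypothetical sketch of the kind of engagement-first ordering described above, in which popular posts from friends, widely shared items, and heavily commented (often divisive) posts float to the top. The fields and weights are invented and are not Facebook's actual ranking model.

    # Illustrative only: a toy feed ranker driven purely by engagement signals.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author_is_friend: bool
        reactions: int
        shares: int
        comments: int        # long, heated threads often track divisive posts

    def engagement_score(post: Post) -> float:
        score = 1.0 * post.reactions + 3.0 * post.shares + 2.0 * post.comments
        if post.author_is_friend:
            score *= 1.5     # personalization boost for friends' posts
        return score

    def build_feed(posts: list[Post]) -> list[Post]:
        """Return posts ordered by predicted engagement, highest first."""
        return sorted(posts, key=engagement_score, reverse=True)

Nothing in such a rule distinguishes constructive engagement from outrage; it simply maximizes interaction.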

In an August 2019 internal memo leaked in 2021, Facebook admitted that "the mechanics of our platforms are not neutral",[12][13] concluding that optimizing for engagement is necessary to reach maximum profits. To increase engagement, algorithms have found that hate, misinformation, and politics are instrumental for app activity.[14] As the memo put it, "The more incendiary the material, the more it keeps users engaged, the more it is boosted by the algorithm."[12] According to a 2018 study, "false rumors spread faster and wider than true information... They found falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster. This effect is more pronounced with political news than other categories."[15]

YouTube


YouTube has been around since 2005 and has more than 2.5 billion monthly users. Its content discovery systems use a user's personal activity (videos watched, favorited, and liked) to direct them to recommended content. YouTube's recommendation algorithm accounts for roughly 70% of what users watch and drives people toward certain content.[16] According to a 2022 study by the Mozilla Foundation, users have little power to keep unsolicited videos out of their recommendations, including videos featuring hate speech, livestreams, and similar content.[17][16]

YouTube has been identified as an influential platform for spreading radicalized content. Al-Qaeda and similar extremist groups have been linked to using YouTube for recruitment videos and engaging with international media outlets. A research study published in American Behavioral Scientist examined "whether it is possible to identify a set of attributes that may help explain part of the YouTube algorithm's decision-making process".[18] The study found that the presence of radical keywords in a video's title factored into whether YouTube's algorithm recommended extremist content. In February 2023, in the case of Gonzalez v. Google, the question at hand was whether Google, the parent company of YouTube, is protected from lawsuits claiming that the site's algorithms aided terrorists by recommending ISIS videos to users. Section 230 generally protects online platforms from civil liability for content posted by their users.[19]

Multiple studies have found little to no evidence that YouTube's algorithms steer users toward far-right content if they are not already engaged with it.[20][21][22]

TikTok


TikTok recommends videos to each user's 'For You Page' (FYP), making every user's page different. Owing to the nature of the algorithm behind the app, TikTok's FYP has been linked to showing users more explicit and radical videos over time, based on their previous interactions on the app.[23] Since TikTok's inception, the app has been scrutinized for misinformation and hate speech, as those forms of media usually generate more interactions for the algorithm.[24]
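
A toy feedback loop can illustrate how interaction-driven recommendation narrows over time: each fully watched video boosts its topic's weight, and the next recommendation is drawn in proportion to those weights. The topics, weights, and simulated user behavior below are hypothetical.

    # Rich-get-richer sketch of an interaction-driven "For You" feed.
    # Topics, weights, and the simulated user's behavior are illustrative only.
    import random

    def simulate_fyp(rounds: int = 50, seed: int = 7) -> dict[str, float]:
        rng = random.Random(seed)
        weights = {"sports": 1.0, "music": 1.0, "politics": 1.0}
        for _ in range(rounds):
            topics = list(weights)
            # Recommend a topic in proportion to its current weight.
            pick = rng.choices(topics, weights=[weights[t] for t in topics])[0]
            # Suppose the user watches "politics" clips to the end and skips the rest.
            weights[pick] += 1.0 if pick == "politics" else 0.1
        return weights

    print(simulate_fyp())  # the favored topic quickly dominates the recommendation mix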

Various extremist groups, including jihadist organizations, have used TikTok to disseminate propaganda, recruit followers, and incite violence. The platform's algorithm, which recommends content based on user engagement, can expose users to extremist content that aligns with their interests or interactions.[25][failed verification]

In 2022, TikTok's head of US Security stated that "81,518,334 videos were removed globally between April – June for violating our Community Guidelines or Terms of Service" as part of efforts to cut back on hate speech, harassment, and misinformation.[26]

Studies have noted instances in which individuals were radicalized through content encountered on TikTok. For example, in early 2023, Austrian authorities thwarted a plot against an LGBTQ+ pride parade that involved two teenagers and a 20-year-old who were inspired by jihadist content on TikTok. The youngest suspect, 14 years old, had been exposed to videos created by Islamist influencers glorifying jihad. These videos led him to further engagement with similar content, eventually resulting in his involvement in planning an attack.[25]

Another case involved the arrest of several teenagers in Vienna, Austria, in 2024, who were planning to carry out a terrorist attack at a Taylor Swift concert. The investigation revealed that some of the suspects had been radicalized online, with TikTok being one of the platforms used to disseminate extremist content that influenced their beliefs and actions.[25]

Self-radicalization

See also: Radicalization
An infographic from the United States Department of Homeland Security's "If You See Something, Say Something" campaign, a national initiative to raise awareness of homegrown terrorism and terrorism-related crime.

The U.S. Department of Justice defines 'lone-wolf' (self-radicalized) terrorism as "someone who acts alone in a terrorist attack without the help or encouragement of a government or a terrorist organization".[27] Lone-wolf terrorism has been on the rise through social media outlets on the internet and has been linked to algorithmic radicalization.[28] In online echo chambers, viewpoints typically seen as radical are accepted and quickly adopted by other extremists,[29] and forums, group chats, and social media reinforce those beliefs.[30]

References in media


The Social Dilemma

Main article: The Social Dilemma

The Social Dilemma is a 2020 docudrama about how the algorithms behind social media enable addiction and can manipulate people's views, emotions, and behavior to spread conspiracy theories and disinformation. The film repeatedly uses terms such as 'echo chambers' and 'fake news' to illustrate psychological manipulation on social media, which in turn leads to political manipulation. In the film, the character Ben falls deeper into a social media addiction as the algorithm determines that his profile has a 62.3% chance of long-term engagement. More videos flow into Ben's recommended feed, and he becomes increasingly immersed in propaganda and conspiracy theories, growing more polarized with each video.

Proposed solutions


Weakening Section 230 protections

Main article: Section 230

Section 230 of the Communications Decency Act states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".[31] Section 230 shields online platforms from liability for third-party content, such as illegal activity by a user.[31] However, critics argue that this approach reduces a company's incentive to remove harmful content or misinformation, and that the loophole has allowed social media companies to maximize profits by pushing radical content without legal risk.[32] This claim has itself been criticized by proponents of Section 230: before its passage, courts had ruled in Stratton Oakmont, Inc. v. Prodigy Services Co. that moderation in any capacity made content providers liable as "publishers" of the content they chose to leave up.[33]

Lawmakers have drafted legislation that would weaken or remove Section 230 protections over algorithmic content. House Democrats Anna Eshoo, Frank Pallone Jr., Mike Doyle, and Jan Schakowsky introduced the "Justice Against Malicious Algorithms Act" in October 2021 as H.R. 5596. The bill died in committee,[34] but it would have removed Section 230 protections for service providers whose personalized recommendation algorithms knowingly or recklessly deliver content that contributes to physical or severe emotional injury.[35]


References

  1. ^ab"What is a Social Media Echo Chamber? | Stan Richards School of Advertising".advertising.utexas.edu. November 18, 2020. RetrievedNovember 2, 2022.
  2. ^"The Websites Sustaining Britain's Far-Right Influencers".bellingcat. February 24, 2021. RetrievedMarch 10, 2021.
  3. ^Camargo, Chico Q. (January 21, 2020)."YouTube's algorithms might radicalise people – but the real problem is we've no idea how they work".The Conversation. RetrievedMarch 10, 2021.
  4. ^E&T editorial staff (May 27, 2020)."Facebook did not act on own evidence of algorithm-driven extremism".eandt.theiet.org. RetrievedMarch 10, 2021.
  5. ^"How Can Social Media Firms Tackle Hate Speech?".Knowledge at Wharton. RetrievedNovember 22, 2022.
  6. ^"Internet Association – We Are The Voice Of The Internet Economy. | Internet Association". December 17, 2021. Archived fromthe original on December 17, 2021. RetrievedNovember 22, 2022.
  7. ^Kaluža, Jernej (July 3, 2022)."Habitual Generation of Filter Bubbles: Why is Algorithmic Personalisation Problematic for the Democratic Public Sphere?".Javnost – the Public, Journal of the European Institute for Communication and Culture.29 (3):267–283.doi:10.1080/13183222.2021.2003052.ISSN 1318-3222.
  8. ^abCinelli, Matteo; De Francisci Morales, Gianmarco; Galeazzi, Alessandro; Quattrociocchi, Walter; Starnini, Michele (March 2, 2021)."The echo chamber effect on social media".Proceedings of the National Academy of Sciences of the United States of America.118 (9): –2023301118.Bibcode:2021PNAS..11823301C.doi:10.1073/pnas.2023301118.ISSN 0027-8424.PMC 7936330.PMID 33622786.
  9. ^Oremus, Will; Alcantara, Chris; Merrill, Jeremy; Galocha, Artur (October 26, 2021)."How Facebook shapes your feed".The Washington Post. RetrievedApril 12, 2023.
  10. ^Atske, Sara (January 16, 2019)."Facebook Algorithms and Personal Data".Pew Research Center: Internet, Science & Tech. RetrievedApril 12, 2023.
  11. ^Korinek, Anton (December 8, 2021)."Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files".Brookings. RetrievedApril 12, 2023.
  12. ^ab"Disinformation, Radicalization, and Algorithmic Amplification: What Steps Can Congress Take?".Just Security. February 7, 2022. RetrievedNovember 2, 2022.
  13. ^Isaac, Mike (October 25, 2021)."Facebook Wrestles With the Features It Used to Define Social Networking".The New York Times.ISSN 0362-4331. RetrievedNovember 2, 2022.
  14. ^Little, Olivia (March 26, 2021)."TikTok is prompting users to follow far-right extremist accounts".Media Matters for America. RetrievedNovember 2, 2022.
  15. ^"Study: False news spreads faster than the truth".MIT Sloan. March 8, 2018. RetrievedNovember 2, 2022.
  16. ^ab"Hated that video? YouTube's algorithm might push you another just like it".MIT Technology Review. RetrievedApril 11, 2023.
  17. ^"YouTube User Control Study – Mozilla Foundation".Mozilla Foundation. September 2022. RetrievedNovember 12, 2024.
  18. ^Murthy, Dhiraj (May 1, 2021). "Evaluating Platform Accountability: Terrorist Content on YouTube".American Behavioral Scientist.65 (6):800–824.doi:10.1177/0002764221989774.S2CID 233449061.
  19. ^Root, Damon (April 2023)."Scotus Considers Section 230's Scope".Reason.54 (11): 8.ISSN 0048-6906.
  20. ^Ledwich, Mark; Zaitsev, Anna (March 2, 2020)."Algorithmic extremism: Examining YouTube's rabbit hole of radicalization".First Monday.25 (3).arXiv:1912.11211.doi:10.5210/fm.v25i3.10419.
  21. ^Hosseinmardi, Homa; Ghasemian, Amir; Clauset, Aaron; Mobius, Markus; Rothschild, David M.; Watts, Duncan J. (August 10, 2021)."Examining the consumption of radical content on YouTube".Proceedings of the National Academy of Sciences.118 (32) e2101967118.arXiv:2011.12843.Bibcode:2021PNAS..11801967H.doi:10.1073/pnas.2101967118.PMC 8364190.PMID 34341121.
  22. ^Chen, Annie Y.; Nyhan, Brendan; Reifler, Jason; Robertson, Ronald E.; Wilson, Christo (September 2023)."Subscriptions and external links help drive resentful users to alternative and extremist YouTube channels".Science Advances.9 (35) eadd8080.arXiv:2204.10921.Bibcode:2023SciA....9D8080C.doi:10.1126/sciadv.add8080.PMC 10468121.PMID 37647396.
  23. ^"TikTok's algorithm leads users from transphobic videos to far-right rabbit holes".Media Matters for America. October 5, 2021. RetrievedNovember 22, 2022.
  24. ^Little, Olivia (April 2, 2021)."Seemingly harmless conspiracy theory accounts on TikTok are pushing far-right propaganda and TikTok is prompting users to follow them".Media Matters for America. RetrievedNovember 22, 2022.
  25. ^abc"TikTok Jihad: Terrorists Leverage Modern Tools to Recruit and Radicalize". The Soufan Center. August 9, 2024. RetrievedAugust 10, 2024.
  26. ^"Our continued fight against hate and harassment".Newsroom | TikTok. August 16, 2019. RetrievedNovember 22, 2022.
  27. ^"Lone Wolf Terrorism in America | Office of Justice Programs".www.ojp.gov. RetrievedNovember 2, 2022.
  28. ^Alfano, Mark; Carter, J. Adam; Cheong, Marc (2018)."Technological Seduction and Self-Radicalization".Journal of the American Philosophical Association.4 (3):298–322.doi:10.1017/apa.2018.27.ISSN 2053-4477.S2CID 150119516.
  29. ^Dubois, Elizabeth; Blank, Grant (May 4, 2018)."The echo chamber is overstated: the moderating effect of political interest and diverse media".Information, Communication & Society.21 (5):729–745.doi:10.1080/1369118X.2018.1428656.ISSN 1369-118X.S2CID 149369522.
  30. ^Sunstein, Cass R. (May 13, 2009).Going to Extremes: How Like Minds Unite and Divide. Oxford University Press.ISBN 978-0-19-979314-3.
  31. ^ab"47 U.S. Code § 230 – Protection for private blocking and screening of offensive material".LII / Legal Information Institute. RetrievedNovember 2, 2022.
  32. ^Smith, Michael D.; Alstyne, Marshall Van (August 12, 2021)."It's Time to Update Section 230".Harvard Business Review.ISSN 0017-8012. RetrievedNovember 2, 2022.
  33. ^Masnick, Mike (June 23, 2020)."Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act". RetrievedApril 11, 2024.
  34. ^"H.R. 5596 (117th): Justice Against Malicious Algorithms Act of 2021".GovTrack. RetrievedApril 11, 2024.
  35. ^Robertson, Adi (October 14, 2021)."Lawmakers want to strip legal protections from the Facebook News Feed".The Verge.Archived from the original on October 14, 2021. RetrievedOctober 14, 2021.