AI slop (known simply as slop) is digital content made with generative artificial intelligence that is lacking in effort, quality, or meaning, and that is produced in high volume as clickbait to gain advantage in the attention economy or to earn money.[1][4][5][6] It is a form of synthetic media usually linked to monetization in the creator economy of social media and online advertising.[7] Coined in the 2020s, the term has a pejorative connotation similar to spam.[4] "Slop" was selected as the 2025 Word of the Year by both Merriam-Webster and the American Dialect Society.[8][9]
AI slop has been variously defined as "digital clutter", "filler content prioritizing speed and quantity over substance and quality",[10] and "shoddy or unwanted AI content in social media, art, books, and search results".[11] Jonathan Gilmore, a philosophy professor at the City University of New York, describes the material as having an "incredibly banal, realistic style" that is easy for the viewer to process.[12]
As early large language models (LLMs) and image diffusion models accelerated the creation of high-volume but low-quality text and images, journalists and social media users began debating an appropriate term for the influx of material. Proposed terms included "AI garbage", "AI pollution", and "AI-generated dross".[5] Early uses of the term "slop" as a descriptor for low-grade AI material apparently came in reaction to the release of AI image generators in 2022. Its early use has been noted among 4chan, Hacker News, and YouTube commentators as a form of in-group slang.[11]
The British computer programmer Simon Willison is credited with being an early champion of the term "slop" in the mainstream,[1][11] having used it on his personal blog in May 2024.[13] However, he has said it was in use long before he began pushing for the term.[11]
The term gained increased popularity in the second quarter of 2024 in part because of Google's use of its Gemini AI model to generate responses to search queries,[11] and the large quantities of slop on the internet were widely criticized in media headlines during the fourth quarter of 2024.[1][4]
According to an academic article by Cody Kommers and five other scholars published in January 2026, AI slop has "so far resisted formal definition."[14] Although they argue it is impossible to precisely describe a boundary between slop and non-slop, Kommers et al. identify three "prototypical properties" that characterize AI slop: superficial competence, asymmetric effort, and mass producibility.[14]
Beyond these family resemblances, there are many different kinds of AI slop. Three main "dimensions of variance", or ways in which AI slop can vary, are its instrumental utility (why was it created?), its level of personalization (is it so specific that it interests only one person or a small friend group?), and its level of surrealism, where some AI slop is "ludicrously implausible" while other slop is more realistic.[14]
The Italian artist and writer Francesco D'Isa argues that the production of mediocre, boring, and derivative works of art is not an exclusive trait of artificial intelligence but one shared by all forms of culture. He points out that for each work of art generally considered a masterpiece, there are many forgotten and unremarkable works, and that classics are only exceptions that happened to prevail. He has stated that "the majority of human production has always been slop. Mediocrity is not a bug of technology; it is the baseline of culture." He also argues that the fear of AI is merely the first step in a typical trend for technological advances in media, starting with panic, continuing to adaptation, and ending with incorporation into the cultural norm.[15]

AI image and video slop have proliferated on social media in part because such content can generate revenue for its creators on Facebook and TikTok, with Facebook most notably affected. This incentivizes individuals from developing countries to create images that appeal to audiences in the United States, which attract higher advertising rates.[16][17][18]
The journalist Jason Koebler speculated that the bizarre nature of some of the content may be due to the creators using Hindi, Urdu, and Vietnamese prompts (languages which are underrepresented in the model's training data), or using erratic speech-to-text methods to translate their intentions into English.[16]
Speaking to New York magazine, a Kenyan creator of slop images described giving ChatGPT prompts such as "WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK [sic]", and then feeding those generated prompts into a text-to-image AI model such as Midjourney.[4]
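This two-stage workflow (a chat model mass-producing prompts, each of which is then fed to an image model) can be illustrated with a minimal Python sketch. The sketch assumes OpenAI's official Python client; because Midjourney has no official public API, DALL-E stands in for the image-generation step, and the model names and prompt wording are illustrative rather than drawn from the reporting.

    # Minimal sketch of the two-stage slop pipeline described above.
    # Assumes the openai Python package (>=1.0); DALL-E stands in for
    # Midjourney, which has no official public API. Model names and
    # prompt wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Stage 1: ask a chat model for a batch of engagement-bait image prompts.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Write 10 picture prompts likely to get high "
                       "engagement on social media, one per line.",
        }],
    )
    prompts = [line.strip()
               for line in chat.choices[0].message.content.splitlines()
               if line.strip()]

    # Stage 2: feed each generated prompt to a text-to-image model.
    for prompt in prompts:
        image = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
        print(image.data[0].url)  # URL of the generated image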
AI-generated images of plants and plant care misinformation have proliferated on social media.[19][20] Online retailers have used AI-generated images of flowers to sell seeds of plants that do not actually exist.[20] Many online houseplant communities have banned AI-generated content but struggle to moderate the large volumes of content posted by bots.[20]
Facebook spammers have been reported as AI-generating images of Holocaust victims with fake stories; in reality, only a handful of historical photographs were taken at Auschwitz. The posters were described as "slop accounts", and the Auschwitz Memorial museum called the images a "dangerous distortion".[21] History-focused Facebook groups have also been inundated with AI-generated "historical" photos.[22]
Slopper, a pejorative slang term derived from "AI slop", was coined in 2025 to describe someone who is overly reliant on generative AI tools like ChatGPT.[23][24]
Online meme content has taken up the trend of using AI-generated material to fool and entertain viewers. Some social media users have built massive followings on AI-generated images by fooling viewers, a strategy popular with those seeking an easy income online: of the hundreds of posts published weekly, only one needs to gain traction to reward the quickly made content.[25] Some creators are frustrated that their hard work is being stolen by AI-generated content. The artist Michael Jones,[26] who carves animal sculptures from wood with a chainsaw, found his style copied by AI-generated images in which other people posed beside the sculptures and claimed to have made them. Jones stated that AI slop is "a huge issue for carvers all over the world who are sadly missing out on the rightful credit exposure to their work..."[25]
According to a 2025 report by the video creation and editing company Kapwing, Korea ranks first worldwide in AI slop consumption. Experts say the phenomenon is driven by the country's rapid embrace of new technology.[27][28]
In January 2026, YouTube CEO Neal Mohan stated that reducing slop and detecting deepfakes were priorities for YouTube in 2026.[29]

In August 2024, The Atlantic noted that AI slop was becoming associated with the political right in the United States, who were using it for shitposting and engagement farming on social media, with the technology offering "cheap, fast, on-demand fodder for content".[30]
AI slop is frequently used in political campaigns in an attempt to gain attention through content farming.[31] In 2025, in the first five months of Donald Trump's second term as US president, Trump posted several AI-generated images of himself on official government social media accounts, such as images of him as the pope or as a muscular man brandishing a lightsaber.[32] In August 2024, Trump posted a series of AI-generated images on his alt-tech social media platform, Truth Social, portraying fans of the pop singer Taylor Swift in "Swifties for Trump" T-shirts, as well as an AI-generated image of Swift appearing to endorse Trump's 2024 presidential campaign. The images originated from the conservative Twitter account @amuse, which posted numerous AI slop images leading up to the 2024 United States elections that were shared by other high-profile figures within the Republican Party, such as Elon Musk, who has publicly endorsed generative AI.[33] In 2025, Wired described Donald Trump as "The first AI slop President", noting his frequent use of AI-generated images and videos in public messaging. The magazine highlighted examples such as AI depictions of Trump as a fighter pilot and as a religious figure, arguing that his reliance on low-quality generative content marked a new phase in political communication.[34]
In the aftermath of Hurricane Helene in 2024, Republican influencers such as Laura Loomer circulated on social media an AI-generated image of a young girl holding a puppy in a flood, and used it as evidence of the failure of President Joe Biden to respond to the disaster.[35][3] The Republican activist Amy Kremer shared the image while acknowledging it was not genuine.[36][37]
The initial version of the Make Our Children Healthy Again Assessment of children's health issues, released by a commission of cabinet members and officials of the Trump administration led by US Department of Health and Human Services Secretary Robert F. Kennedy Jr., reportedly cited nonexistent and garbled references generated using AI.[38][39]
In response to the No Kings protests in October 2025, Trump posted a video depicting himself flying a fighter jet and releasing feces on crowds of demonstrators, including Harry Sisson, a Democratic influencer.[40]
In the midst of disruptions to food stamp distribution during the 2025 US government shutdown, anonymous social media users began using OpenAI's Sora AI model to post slop videos of "welfare queens" complaining, stealing, and rioting in supermarkets; many comments on the videos appeared unaware that they were AI-generated, or acknowledged that they were AI-generated but nonetheless useful in pushing a narrative of widespread welfare fraud.[41][42]
A study by the analytics company Graphika found that the governments of Russia and China have been using AI-generated slop as propaganda, including "spamouflage" campaigns linked to China that featured AI-generated fake influencers. These videos often focused on divisive topics, aiming to cause disruption beyond the ostensible subject of the content.[43]
In February 2025, Donald Trump shared an AI-generated video on Truth Social and Instagram depicting a hypothetical Gaza Strip after a Trump takeover. The video's creator claimed it was made as political satire.[44]
During the Gaza War, AI-generated media was used to exaggerate support for both sides, and to evoke sympathy using fake images of suffering civilians.[45] Because of content restrictions in generative AI, these images and videos rarely depict people wounded in battle, instead focusing on damage to buildings. Fake images of attacks were used to avoid accidentally providing intelligence to enemies.[46]

In November 2024, Coca-Cola used AI to create three commercials as part of its annual holiday campaign. The videos were immediately met with backlash from both casual viewers and artists;[47] the animator Alex Hirsch, creator of Gravity Falls, criticized the company's decision not to employ human artists to create the commercial.[48] In response to the negative feedback, the company defended its decision to use generative AI, stating that "Coca-Cola will always remain dedicated to creating the highest level of work at the intersection of human creativity and technology".[49] Coca-Cola continued to use AI-generated commercials for its 2025 holiday campaign.[50]
During the 2025 holiday season, McDonald's Netherlands released an AI-generated Christmas advertisement titled It's the Most Terrible Time of the Year, which was met with substantial backlash. The advert was seen as cynical for portraying Christmastime as "the most terrible time of the year". In response, the company turned off comments on YouTube and later removed the initial upload of the video from public view, though reuploads of the original remained public on the site.[51]
In March 2025, Paramount Pictures was criticized for using AI scripting and narration in an Instagram video promoting the film Novocaine.[52] The ad uses a robotic AI voice in a style similar to low-quality AI spam videos produced by content farms. A24 received similar backlash for releasing a series of AI-generated posters for the 2024 film Civil War. One poster appears to depict a group of soldiers in a tank-like raft preparing to fire on a large swan, an image which does not resemble the events of the film.[53][54]

In March 2025, Activision posted on platforms such as Facebook and Instagram various AI-generated advertisements and posters for nonexistent video games such as "Guitar Hero Mobile", "Crash Bandicoot: Brawl", and "Call of Duty: Zombie Defender", which many labeled as AI slop.[55] Activision later stated that the posts were intended to survey interest in possible titles.[56] The Italian brainrot AI trend was widely adopted by advertisers as an attempt to appeal to younger audiences.[57]
During Super Bowl LX in 2026, the vodka brand Svedka aired an AI-generated commercial featuring two robots, apparently male and female, drinking Svedka while dancing at a club.[58] The ad was met with heavy criticism from Super Bowl viewers, and many took to social media to express their disdain.[citation needed] Svedka trained the AI on TikTok dances for the robots' dancing.[citation needed]

Fantastical promotional graphics for the 2024 Willy's Chocolate Experience event, which took place in Glasgow, Scotland, were characterized as "AI-generated slop"[59] and misled audiences into attending an event held in a lightly decorated warehouse. Tickets were marketed through Facebook advertisements showing AI-generated imagery, with no genuine photographs of the venue.[60]
In October 2024, thousands of people were reported to have assembled for a non-existent Halloween parade in Dublin as a result of a listing on an event-listings aggregation website, MySpiritHalloween.com, which used AI-generated content.[61][62] The listing went viral on TikTok and Instagram.[63] A similar parade had previously been held in Galway, and Dublin had hosted parades in prior years, although there was no parade in 2024.[62] One analyst characterized the website, which appeared to use AI-generated staff pictures, as likely using AI "to create content quickly and cheaply where opportunities are found".[64] The site's owner said that "We asked ChatGPT to write the article for us, but it wasn't ChatGPT by itself." In the past the site had removed non-existent events when contacted by their venues, but in the case of the Dublin parade the site owner said that "no one reported that this one wasn't going to happen". MySpiritHalloween.com updated its page to say that the parade had been "canceled" when it became aware of the issue.[65]
Online booksellers and library vendors now carry many titles written by AI that are not curated into collections by librarians. The digital media provider Hoopla, which supplies libraries with ebooks and downloadable content, offers generative-AI books with fictional authors and dubious quality, which cost libraries money when checked out by unsuspecting patrons.[66]
Users of Amazon Kindle and other ebook platforms have reported concerns about the increased output of novels seemingly created artificially. The author Jane Friedman stated she had to report 29 novels in one week that were created by AI but used her name and likeness. Amazon responded by limiting Kindle authors to publishing three novels per day to combat the issue.[67]
In February 2023, the science fiction magazine Clarkesworld temporarily closed short story submissions after receiving massive amounts of AI spam, which editor Neil Clarke attributed to people from outside the speculative fiction community trying to make easy money.[68] Clarke expressed worry that this trend would result in higher barriers to entry for new authors.[69]
As of 2024, under Canadian and United States copyright law, books created by AI cannot be copyrighted. Such books are largely considered plagiarism; non-fiction works in particular cannot qualify for copyright due to the hallucinations and biases of LLMs.[67]
Call of Duty: Black Ops 6 includes assets generated by AI. Since the game's initial release, many players have accused Treyarch and Raven Software of using AI to create in-game assets, including loading screens, emblems, and calling cards. A particular example was a loading screen for the zombies game mode that depicted "Necroclaus", a zombified Santa Claus with six fingers on one hand, an image which also had other irregularities.[70] The previous entry in the franchise, Call of Duty: Modern Warfare III, was also accused of selling AI-generated cosmetics.[71] In February 2025, Activision disclosed Black Ops 6's usage of generative AI to comply with Valve's policies on AI-generated or assisted products on Steam, stating on the game's Steam product page that "Our team uses generative AI tools to help develop some in game assets."[72] Call of Duty: Black Ops 7, the most recent entry in the Call of Duty franchise as of January 2026, continued the usage of AI-generated content with Studio Ghibli-styled calling cards that have been criticized by fans.[73]
In 2024, Rovio Entertainment released a demo of a mobile game called Angry Birds: Block Quest on Android. The game featured AI-generated images for loading screens and backgrounds.[74] It was heavily criticized by players, who called it shovelware and disapproved of Rovio's use of AI images.[75][76] It was eventually discontinued and removed from the Play Store.
According to a 2025 report by AI and Games, about 20% of games published on Steam in 2025 include an "AI disclosure", indicating they use generative AI tools for some assets.[77] The same report explicitly uses the term "AI shovelware problem", arguing that many of these games rely heavily on AI-generated visuals (textures, images, dialogue, story elements, etc.), often with mixed or poor quality.

Some films have received backlash for including AI-generated content. The film Late Night with the Devil was notable for its use of AI, which some criticized as AI slop.[78][79] Several low-quality AI-generated images were used as interstitial title cards, with one image featuring a skeleton with inaccurate bone structure and poorly generated fingers that appear disconnected from its hands.[80]
Some streaming services such as Amazon Prime Video have used AI to generate posters and thumbnail images in a manner that can be described as slop. A low-quality AI poster was used for the 1922 film Nosferatu, depicting Count Orlok in a way that does not resemble his look in the film.[81] A thumbnail image for 12 Angry Men on Amazon Freevee used AI to depict 19 men with smudged faces, none of whom bore any resemblance to the characters in the film.[82][83] Additionally, some viewers have noticed that many plot descriptions appear to be generated by AI, which some have characterized as slop. One synopsis briefly listed on the site for the film Dog Day Afternoon read: "A man takes hostages at a bank in Brooklyn. Unfortunately I do not have enough information to summarize further within the provided guidelines."[84]
In one case, Deutsche Telekom removed a series from its media catalog after viewers complained about the poor quality and monotony of the German voice dubbing (translated from the original Polish), which was found to have been done with AI.[85]
In some cases, large volumes of AI-generated tracks have been uploaded with the aim of manipulating streaming platform royalties. For example, in September 2024 the US musician Michael Smith was charged after using hundreds of thousands of AI-generated songs and bots to generate more than US$10 million in royalties.[86] In June 2025, Deezer estimated that as much as 70% of streams of AI-generated tracks on its platform were fraudulent, highlighting concerns about mass low-quality output competing with human-made music.[87]
In April 2024, Pink Floyd held a fan-made music video contest for tracks from The Dark Side of the Moon to celebrate the album's 50th anniversary. Among the ten chosen entries, the music video for the track "Any Colour You Like" was noted for being made using the Stable Diffusion software, prompting online backlash from fans against the band and the video's submitter for its AI-generated nature as well as its perceived low quality.[88][89][90]
In November 2024, Kanye West released an unexpected music video for the song "Bomb" from Vultures 2, featuring AI-generated versions of his daughters, North West and Chicago West. The video shows the children racing through a desert in futuristic Cybertruck-like vehicles, with both providing vocals: North partly in Japanese and Chicago in a freestyle. The release drew widespread debate, with many fans criticizing West's increasing reliance on AI, specifically referring to the project as "AI slop" and expressing discomfort with the use of AI replicas of his children.[91]
In July 2025, numerous media outlets reported on accusations that an indie band named The Velvet Sundown, which in only a few weeks had amassed over 850,000 listeners on Spotify, was AI-generated. Critics observed that there were no records of any performances by the band, that individual band members had no social media presence, and that their promotional images appeared to be fake.[92] Deezer's AI detection tool flagged the band's music as being 100% AI-generated.[93] Rolling Stone (which described The Velvet Sundown as "obviously fictional") had reported that a spokesperson for the band named Andrew Frelon had admitted that their music was an "art hoax" generated using the AI tool Suno; Frelon later stated that his statement was itself a hoax and that he had no connection to the band.[94][92] Nevertheless, within a week, the Velvet Sundown's artist biography on Spotify had been updated to say that the band was "a synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence", intended as "artistic provocation".[95][96] By this time, the band had over one million monthly listeners on Spotify.[97] According to a former Spotify employee, the high listener count has two likely causes: first, Spotify now accepts payments to boost playlist placement, and second, playlists are increasingly selected by algorithms rather than humans.[96][98]
AI has also been used to impersonate established musicians and release music on major streaming services under their names without their knowledge, presumably to make money from streaming royalties. In August 2025, a number of Americana and folk-rock musicians including Jeff Tweedy, Father John Misty, and Blaze Foley (who died in 1989) were impersonated, as were some established US Christian musicians and metalcore bands. The fake releases all had similar AI-generated cover art and were credited to the same three record labels. Many listed "Zyan Maliq Mahardika" as a songwriter, indicating that the impersonations had a single source. Spotify removed the tracks, stating that they "violated our policy against impersonating another person or brand".[99]
In November 2025, the AI-generated song "Walk My Walk" topped Billboard's country digital song sales chart in the United States with a mere 3,000 sales.[100][full citation needed] The song is by the AI artist Breaking Rust, who amassed over 4.5 million Spotify listeners across their songs. Promotion for "Walk My Walk" was accompanied by an AI slop video shared on Instagram depicting a cowboy walking into the sunset.[101] In the same month, an AI-generated song named "We Are Charlie Kirk", made in tribute to the assassination of Charlie Kirk two months earlier, went viral on TikTok and ranked #1 on Spotify's viral songs chart.[102] That month in the Netherlands, the AI-generated far-right anti-immigrant protest song "Wij zeggen nee, nee, nee, tegen een AZC" ("We say no, no, no to an asylum shelter") peaked at No. 5 on the Dutch Single Top 100.[103]

Generative AI has been used to write articles which have been published both by low-quality research paper mills and in reputable scientific journals.[104][105] In 2024, a peer-reviewed article containing a generated image of a rat with absurdly large genitals, accompanied by nonsensical text and diagrams, was retracted by Frontiers in Cell and Developmental Biology after drawing attention from scientists on social media.[106][107]
David Berry has used the term scholarslop[108][109] to refer to AI-generated administrative discourse in universities, or quasi-academic texts. Glen Berman argues that AI-generated articles being published in scholarly journals are an "epistemic carcinogen" that poses a major risk to the knowledge ecosystem.[110]
In January 2026, the open-source command-line utility cURL ended its bug bounty program through HackerOne after receiving a large number of false vulnerability reports.[111] In an announcement on its official GitHub page, the project stated that rewards would no longer be offered for reported vulnerabilities. Daniel Stenberg, founder and lead developer of cURL, shared examples of fake, AI-generated bug reports submitted through the program, including one that claimed a critical vulnerability that did not actually exist. Many of these reports referenced nonexistent changelogs, included code snippets that did not match real function signatures, or were the result of simple user errors. Stenberg explained that the influx of AI-generated reports placed a significant burden on cURL's security team and that ending the program was an "attempt to reduce the noise."[112]
A Harvard Business Review study, conducted in conjunction with Stanford University and BetterUp, found that employees were using AI tools to create low-effort "workslop" that created more work for their colleagues.[113] Within the timeframe of the study, 40% of participating employees received some form of workslop, with each incident taking an average of two hours to resolve.[114] BetterUp defines workslop as "AI-generated content that looks good, but lacks substance".[114] The study appears to be the first use of the term "workslop", and it also uses "workslopped" as a verb.[115]