Artificial intelligence visual art

From Wikipedia, the free encyclopedia
(Redirected from Artificial intelligence art)
Not to be confused with Generative art.
Visual media created with AI
"Artificial intelligence art" redirects here. For other forms of AI-generated content, see Natural language generation, Music and artificial intelligence, and Text-to-video model.

Impressionistic image of figures in a futuristic opera scene
Théâtre D'opéra Spatial (Space Opera Theater; 2022), an award-winning image made using generative artificial intelligence

Artificial intelligence visual art, or AI art, is visual artwork generated (or enhanced) through the use of artificial intelligence (AI) programs, most commonly text-to-image models (T2I or TTI). Automated art has been created since ancient times, and artists began making art with artificial intelligence shortly after the field was founded in the 1950s. Throughout its history, AI has raised many philosophical concerns related to the human mind, artificial beings, and what can be considered art in human–AI collaboration. Since the 20th century, people have used AI to create art, some of which has been exhibited in museums and won awards.[1]

During the AI boom of the 2020s, text-to-image models such as Midjourney, DALL-E, and Stable Diffusion became widely available to the public, allowing users to quickly generate imagery with little effort.[2][3] Commentary about AI art in the 2020s has often focused on issues related to copyright, deception, defamation, and its impact on more traditional artists, including technological unemployment.

History

See also: History of artificial intelligence and Timeline of artificial intelligence

Early history

Maillardet's automaton drawing a picture

Automated art dates back at least to the automata of ancient Greek civilization, when inventors such as Daedalus and Hero of Alexandria were described as designing machines capable of writing text, generating sounds, and playing music.[4][5] Creative automatons have flourished throughout history, such as Maillardet's automaton, created around 1800 and capable of producing multiple drawings and poems.[6]

Also in the 19th century, Ada Lovelace wrote that "computing operations" could potentially be used to generate music and poems.[7][8] In 1950, Alan Turing's paper "Computing Machinery and Intelligence" asked whether machines can mimic human behavior convincingly.[9] Shortly after, the academic discipline of artificial intelligence was founded at a research workshop at Dartmouth College in 1956.[10]

Since its founding, AI researchers have explored philosophical questions about the nature of the human mind and the consequences of creating artificial beings with human-like intelligence; these issues had previously been explored by myth, fiction, and philosophy since antiquity.[11]

Artistic history

Karl Sims' Galápagos installation allowed visitors to evolve 3D animated forms.

Since the founding of AI in the 1950s, artists have used artificial intelligence to create artistic works. These works were sometimes referred to as algorithmic art,[12] computer art, digital art, or new media art.[13]

One of the first significant AI art systems is AARON, developed by Harold Cohen beginning in the late 1960s at the University of California, San Diego.[14] AARON uses a symbolic rule-based approach to generate technical images in the era of GOFAI programming; Cohen developed it with the goal of being able to code the act of drawing.[15] AARON was exhibited in 1972 at the Los Angeles County Museum of Art.[16] From 1973 to 1975, Cohen refined AARON during a residency at the Artificial Intelligence Laboratory at Stanford University.[17] In 2024, the Whitney Museum of American Art exhibited AI art from throughout Cohen's career, including re-created versions of his early robotic drawing machines.[17]

Karl Sims has exhibited art created with artificial life since the 1980s. He received an M.S. in computer graphics from the MIT Media Lab in 1987 and was artist-in-residence from 1990 to 1996 at the supercomputer manufacturer and artificial intelligence company Thinking Machines.[18][19][20] In both 1991 and 1992, Sims won the Golden Nica award at Prix Ars Electronica for his videos using artificial evolution.[21][22][23] In 1997, Sims created the interactive artificial evolution installation Galápagos for the NTT InterCommunication Center in Tokyo.[24] Sims received an Emmy Award in 2019 for outstanding achievement in engineering development.[25]

Example of Electric Sheep by Scott Draves

In 1999, Scott Draves and a team of several engineers created and released Electric Sheep as a free software screensaver.[26] Electric Sheep is a volunteer computing project for animating and evolving fractal flames, which are distributed to networked computers that display them as a screensaver. The screensaver used AI to create an infinite animation by learning from its audience. In 2001, Draves won the Fundacion Telefónica Life 4.0 prize for Electric Sheep.[27][unreliable source?]

In 2014, Stephanie Dinkins began working on Conversations with Bina48.[28] For the series, Dinkins recorded her conversations with BINA48, a social robot that resembles a middle-aged black woman.[29][30] In 2019, Dinkins won the Creative Capital award for her creation of an evolving artificial intelligence based on the "interests and culture(s) of people of color."[31]

In 2015, Sougwen Chung began Mimicry (Drawing Operations Unit: Generation 1), an ongoing collaboration between the artist and a robotic arm.[32] In 2019, Chung won the Lumen Prize for her continued performances with a robotic arm that uses AI to attempt to draw in a manner similar to Chung.[33]

Edmond de Belamy, created with a generative adversarial network in 2018

In 2018, Christie's in New York held an auction of artificial intelligence art at which the AI artwork Edmond de Belamy sold for US$432,500, almost 45 times its estimate of US$7,000–10,000. The artwork was created by Obvious, a Paris-based collective.[34][35][36]

In 2024, the Japanese film generAIdoscope was released. The film was co-directed by Hirotaka Adachi, Takeshi Sone, and Hiroki Yamaguchi, and all of its video, audio, and music were created with artificial intelligence.[37]

In 2025, the Japanese anime television series Twins Hinahima was released. The anime was produced with AI assistance, which was used to convert photographs into anime illustrations that were later retouched by art staff; most of the remaining elements, such as characters and logos, were hand-drawn with various software.[38][39]

Technical history


Deep learning, characterized by multi-layer network structures that loosely mimic the human brain, rose to prominence in the 2010s, causing a significant shift in AI art.[40] The deep learning era has produced several main families of generative models: autoregressive models, diffusion models, generative adversarial networks (GANs), and normalizing flows.

In 2014, Ian Goodfellow and colleagues at Université de Montréal developed the generative adversarial network (GAN), a type of deep neural network capable of learning to mimic the statistical distribution of input data such as images. A GAN pits a "generator", which creates new images, against a "discriminator", which judges whether those images pass for real data.[41] Unlike previous algorithmic art that followed hand-coded rules, generative adversarial networks could learn a specific aesthetic by analyzing a dataset of example images.[12]
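The adversarial setup can be made concrete with the standard GAN loss functions. The following is a minimal sketch (NumPy only; function names are illustrative), showing that the discriminator's binary cross-entropy loss is low when it separates real from generated images and high when the generator fools it:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy: push D(real) toward 1 and D(fake) toward 0.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating generator objective: push D(fake) toward 1,
    # i.e. make generated images pass for real.
    return -np.mean(np.log(d_fake))

# A discriminator that separates real from fake incurs a low loss...
confident = discriminator_loss(d_real=np.array([0.9]), d_fake=np.array([0.1]))
# ...while one fooled by the generator incurs a higher loss.
fooled = discriminator_loss(d_real=np.array([0.5]), d_fake=np.array([0.5]))
print(confident < fooled)  # True
```

Training alternates between minimizing these two losses, which is what drives the generator toward the statistical distribution of the training images.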

In 2015, a team at Google released DeepDream, a program that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia.[42][43][44] The process creates deliberately over-processed images with a dream-like appearance reminiscent of a psychedelic experience.[45] Later, in 2017, a conditional GAN learned to generate the 1,000 image classes of ImageNet, a large visual database designed for use in visual object recognition software research.[46][47] By conditioning the GAN on both random noise and a specific class label, this approach enhanced the quality of image synthesis for class-conditional models.[48]

Autoregressive models were used for image generation, such as PixelRNN (2016), which generates an image one pixel at a time with a recurrent neural network.[49] Soon after the Transformer architecture was proposed in Attention Is All You Need (2017), it was used for autoregressive generation of images, but without text conditioning.[50]

The website Artbreeder, launched in 2018, uses the models StyleGAN and BigGAN[51][52] to allow users to generate and modify images such as faces, landscapes, and paintings.[53]

In the 2020s, text-to-image models, which generate images based on prompts, became widely used, marking yet another shift in the creation of AI-generated artworks.[2]

Example of an image made with VQGAN-CLIP (NightCafe Studio, March 2023)
Example of an image made with Flux 1.1 Pro in Raw mode (November 2024); this mode is designed to generate photorealistic images

In 2021, building on the influential generative pre-trained transformer architecture used in the large language models GPT-2 and GPT-3, OpenAI released a series of images created with the text-to-image AI model DALL-E 1,[54] an autoregressive generative model with essentially the same architecture as GPT-3. Later in 2021, EleutherAI released the open-source VQGAN-CLIP,[55] based on OpenAI's CLIP model.[56] Diffusion models, generative models used to create synthetic data based on existing data,[57] were first proposed in 2015,[58] but only surpassed GANs in image quality in early 2021.[59] The latent diffusion model was published in December 2021 and became the basis for the later Stable Diffusion (August 2022), developed through a collaboration between Stability AI, the CompVis Group at Ludwig Maximilian University of Munich, and Runway.[60]

In 2022, Midjourney[61] was released, followed by Google Brain's Imagen and Parti, which were announced in May 2022, Microsoft's NUWA-Infinity,[62][2] and the source-available Stable Diffusion, released in August 2022.[63][64][65] DALL-E 2, a successor to DALL-E, was beta-tested and released, with the further successor DALL-E 3 following in 2023. Stability AI has a Stable Diffusion web interface called DreamStudio,[66] plugins for Krita, Photoshop, Blender, and GIMP,[67] and the Automatic1111 web-based open-source user interface.[68][69][70] Stable Diffusion's main pre-trained model is shared on the Hugging Face Hub.[71]

Ideogram, released in August 2023, is known for its ability to generate legible text.[72][73]

In 2024, Flux was released. This model can generate realistic images and was integrated into Grok, the chatbot used on X (formerly Twitter), and Le Chat, the chatbot of Mistral AI.[3][74][75][76] Flux was developed by Black Forest Labs, founded by the researchers behind Stable Diffusion.[77] Grok later switched to its own text-to-image model, Aurora, in December of the same year.[78] Several companies have also integrated AI image-generation models into image-editing products: Adobe has released the AI model Firefly and integrated it into Premiere Pro, Photoshop, and Illustrator,[79][80] and Microsoft has publicly announced AI image-generator features for Microsoft Paint.[81] Examples of text-to-video models of the mid-2020s include Runway's Gen-4, Google's VideoPoet, and OpenAI's Sora, which was released in December 2024.[82][83]

In 2025, several models were released. GPT Image 1 from OpenAI, launched in March 2025, introduced new text rendering and multimodal capabilities, enabling image generation from diverse inputs such as sketches and text.[84] Midjourney v7 debuted in April 2025 with improved text prompt processing.[85] In May 2025, Flux.1 Kontext by Black Forest Labs emerged as an efficient model for high-fidelity image generation,[86] while Google's Imagen 4 was released with improved photorealism.[87]

Tools and processes


Approaches


There are many approaches used by artists to develop AI visual art. When text-to-image is used, AI generates images based on textual descriptions, using models like diffusion or transformer-based architectures; users input prompts and the AI produces corresponding visuals.[88][89] When image-to-image is used, AI transforms an input image into a new style or form based on a prompt or style reference, such as turning a sketch into a photorealistic image or applying an artistic style.[90][91] When image-to-video is used, AI generates short video clips or animations from a single image or a sequence of images, often adding motion or transitions; this can include animating still portraits or creating dynamic scenes.[92][93] When text-to-video is used, AI creates videos directly from text prompts, producing animations, realistic scenes, or abstract visuals; this is an extension of text-to-image that focuses on temporal sequences.[94]

Imagery

Example of the usage of ComfyUI for Stable Diffusion XL. Users can adjust the variables (such as CFG scale, seed, and sampler) needed to generate an image.

There are many tools available to the artist working with diffusion models. Artists can define both positive and negative prompts, and can choose whether to use VAEs, LoRAs, hypernetworks, IP-Adapters, and embeddings/textual inversions. They can tweak settings such as the guidance scale (which balances creativity and prompt adherence), the seed (to control randomness), and upscalers (to enhance image resolution), among others. Additional influence can be exerted before inference by means of noise manipulation, while traditional post-processing techniques are frequently applied after inference. Users can also train their own models.
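The guidance scale is implemented in many diffusion samplers as classifier-free guidance, which blends an unconditional and a prompt-conditioned noise prediction at each denoising step. A minimal sketch of the blending step (NumPy, with illustrative names; in practice both predictions come from the denoising network):

```python
import numpy as np

def apply_guidance(eps_uncond, eps_cond, guidance_scale):
    # Move the noise prediction away from the unconditional estimate and
    # toward the prompt-conditioned one; larger scales follow the prompt
    # more literally at the cost of variety.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

eps_u = np.array([0.2, -0.1])   # prediction without the prompt
eps_c = np.array([0.5,  0.3])   # prediction with the prompt
print(apply_guidance(eps_u, eps_c, 1.0))  # scale 1 reproduces the conditioned prediction
print(apply_guidance(eps_u, eps_c, 7.5))  # a typical default, extrapolating past it
```

Scale 0 ignores the prompt entirely; values well above 1 extrapolate past the conditioned prediction, which is why very high scales tend to produce oversaturated, literal images.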

In addition, procedural "rule-based" image generation techniques have been developed, utilizing mathematical patterns, algorithms that simulate brush strokes and other painterly effects, and deep learning models such as generative adversarial networks (GANs) and transformers. Several companies have released applications and websites that allow users to focus exclusively on positive prompts, bypassing the need to manually configure other parameters. There are also programs capable of transforming photographs into stylized images that mimic the aesthetics of well-known painting styles.[95][96]

There are many options, ranging from simple consumer-facing mobile apps to Jupyter notebooks and web UIs that require powerful GPUs to run effectively.[97] Additional functionalities include "textual inversion", which lets a model learn a user-provided concept (such as an object or a style) from a few images; novel art can then be generated from the word or words assigned to the learned, often abstract, concept.[98][99] Models can also be extended or fine-tuned, for example with DreamBooth.

Impact and applications


AI has the potential for a societal transformation, which may include enabling the expansion of noncommercial niche genres (such as cyberpunk derivatives like solarpunk) by amateurs, novel entertainment, fast prototyping,[100] increased accessibility of art-making,[100] and greater artistic output per unit of effort, expense, or time,[100] for example via generating drafts and image components (inpainting). Generated images are sometimes used as sketches,[101] low-cost experiments,[102] inspiration, or illustrations of proof-of-concept-stage ideas. Additional functionality or improvements may also relate to post-generation manual editing (i.e., polishing), such as subsequent tweaking with an image editor.[102]

Prompt engineering and sharing

See also: Prompt engineering § Text-to-image

Prompts for some text-to-image models can also include images, keywords, and configurable parameters, such as an artistic style, which is often invoked via key phrases like "in the style of [name of an artist]" in the prompt,[103] or via the selection of a broad aesthetic or art style.[104][101] There are platforms for sharing, trading, searching, forking/refining, or collaborating on prompts for generating specific imagery from image generators.[105][106][107][108] Prompts are often shared along with images on image-sharing websites such as Reddit and AI art-dedicated websites. A prompt is not the complete input needed for the generation of an image; additional inputs that determine the generated image include the output resolution, random seed, and random sampling parameters.[109]
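The role of the random seed can be illustrated directly: the initial latent noise a diffusion sampler starts from is fully determined by the seed, so repeating a generation with identical prompt, parameters, and seed reproduces the same image, while changing the seed changes it. A minimal sketch (NumPy; the shape and function name are illustrative):

```python
import numpy as np

def initial_latent(seed, shape=(4, 64, 64)):
    # Deterministically derive the starting noise tensor from the seed;
    # everything downstream of this noise is reproducible.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

same = np.array_equal(initial_latent(42), initial_latent(42))
different = np.array_equal(initial_latent(42), initial_latent(43))
print(same, different)  # True False
```

This is why shared prompts are usually accompanied by the seed and sampler settings: the prompt alone does not pin down the output.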

Related terminology


Synthetic media, which includes AI art, was described in 2022 as a major technology-driven trend that will affect business in the coming years.[100] Harvard Kennedy School researchers voiced concerns about synthetic media serving as a vector for political misinformation after studying the proliferation of AI art on the X platform.[110] Synthography is a proposed term for the practice of generating images that are similar to photographs using AI.[111]

Impact


Bias

Further information: Algorithmic bias

A major concern raised about AI-generated images and art is sampling bias in model training data leading to discriminatory output from AI art models. In 2023, University of Washington researchers found evidence of racial bias in the Stable Diffusion model, with images generated for a "person" corresponding most frequently with images of males from Europe or North America.[112]

Sampling bias in AI training data has been documented for some time: in 2017, researchers at Princeton University used AI software to analyze associations among over 2 million words, finding that European names were viewed as more "pleasant" than African-American names, and that the words "woman" and "girl" were more likely to be associated with the arts than with science and math, "which were most likely connected to males."[113] Generative AI models typically work from user-entered word-based prompts, especially in the case of diffusion models, and this word-related bias can lead to biased results.
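Studies of this kind measure such associations with cosine similarities between word embeddings (word-embedding association tests). A toy sketch of the core measure, with hand-made two-dimensional vectors standing in for learned embeddings:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b):
    # Positive when `word` is, on average, closer to attribute set A than to B.
    return np.mean([cosine(word, a) for a in attrs_a]) - \
           np.mean([cosine(word, b) for b in attrs_b])

# Toy embeddings: an "arts" direction vs. a "science" direction.
arts    = [np.array([1.0, 0.0])]
science = [np.array([0.0, 1.0])]
woman   = np.array([0.9, 0.2])   # illustrative vector leaning toward "arts"

print(association(woman, arts, science) > 0)  # True
```

In a real embedding space learned from web text, such score differences are what reveal the biases the Princeton researchers reported.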

Generative AI can also perpetuate harmful stereotypes regarding women. For example, Lensa, an AI app that trended on TikTok in 2023, was known to lighten black skin, make users thinner, and generate hypersexualized images of women.[114] Melissa Heikkilä, a senior reporter at MIT Technology Review, shared the findings of an experiment using Lensa, noting that the generated avatars did not resemble her and often depicted her in a hypersexualized manner.[115] Experts suggest that such outcomes can result from biases in the datasets used to train AI models, which can contain imbalanced representations, including hypersexual or nude imagery.[116][117]

In 2024, the AI image generator of Google's chatbot Gemini was criticized for racial bias, with claims that Gemini deliberately underrepresented white people in its results.[118] Users reported that it generated images of white historical figures such as the Founding Fathers, Nazi soldiers, and Vikings as other races, and that it refused to process prompts such as "happy white people" and "ideal nuclear family".[118][119] Google later apologized for "missing the mark" and took Gemini's image generator offline for updates.[120] This prompted discussions about the ethical implications[121] of representing historical figures through a contemporary lens, leading critics to argue that these outputs could mislead audiences about actual historical contexts.[122] In addition to well-documented representational issues such as racial and gender bias, some scholars have pointed out deeper conceptual assumptions that shape how AI-generated art is perceived. For instance, framing AI strictly as a passive tool overlooks how cultural and technological factors influence its outputs; others suggest viewing AI as part of a collaborative creative process, where both human and machine contribute to the artistic result.[123]

Copyright

Further information: Artificial intelligence and copyright

Legal scholars, artists, and media corporations have considered the legal and ethical implications of artificial intelligence art since the 20th century. Some artists use AI art to critique and explore the ethics of using gathered data to produce new artwork.[124]

In 1985, intellectual property law professor Pamela Samuelson argued that US copyright should allocate algorithmically generated artworks to the user of the computer program.[125] A 2019 Florida Law Review article presented three perspectives on the issue. In the first, artificial intelligence itself would become the copyright owner; to do this, Section 101 of the US Copyright Act would need to be amended to define "author" as a computer. In the second, following Samuelson's argument, the user, programmer, or artificial intelligence company would be the copyright owner; this would be an expansion of the "work for hire" doctrine, under which ownership of a copyright is transferred to the "employer". In the third, copyright assignments would never take place, and such works would be in the public domain, as copyright assignments require an act of authorship.[126]

In 2022, coinciding with the rising availability of consumer-grade AI image generation services, popular discussion renewed over the legality and ethics of AI-generated art. A particular topic is the inclusion of copyrighted artwork and images in AI training datasets, with artists objecting to commercial AI products using their works without consent, credit, or financial compensation.[127] In September 2022, Reema Selhi, of the Design and Artists Copyright Society, stated that "there are no safeguards for artists to be able to identify works in databases that are being used and opt out."[128] Some have claimed that images generated with these models can bear resemblance to extant artwork, sometimes including the remains of the original artist's signature.[128][129] In December 2022, users of the portfolio platform ArtStation staged an online protest against the non-consensual use of their artwork in datasets; this raised the profile of opt-out services such as "Have I Been Trained?", and some online art platforms promised to offer their own opt-out options.[130] According to the US Copyright Office, artificial intelligence programs are unable to hold copyright,[131][132][133] a decision upheld at the federal district court level as of August 2023, following the reasoning of the monkey selfie copyright dispute.[134]

OpenAI, the developer of DALL-E, has its own policy on who owns generated art: it assigns the right and title of a generated image to the creator, meaning the user who entered the prompt owns the image, along with the right to sell, reprint, and merchandise it.[135]

In January 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that companies are legally required to obtain the consent of artists before training neural networks on their work, and that these companies infringed on the rights of millions of artists by training on five billion images scraped from the web.[136] In July 2023, U.S. District Judge William Orrick was inclined to dismiss most of the claims filed by Andersen, McKernan, and Ortiz, but allowed them to file a new complaint.[137] Also in 2023, Stability AI was sued by Getty Images for using its images in training data.[138] A tool built by Simon Willison allowed people to search 0.5% of the training data for Stable Diffusion V1.1, i.e., 12 million of the 2.3 billion instances from LAION 2B. Artist Karen Hallion discovered that her copyrighted images were used as training data without her consent.[139]

In March 2024, Tennessee enacted the ELVIS Act, which prohibits the use of AI to mimic a musician's voice without permission.[140] A month later, Adam Schiff introduced the Generative AI Copyright Disclosure Act, which, if passed, would require AI companies to submit copyrighted works in their datasets to the Register of Copyrights before releasing new generative AI systems.[141] In November 2024, a group of artists and activists shared early access to OpenAI's unreleased video generation model, Sora, via Hugging Face. The action, accompanied by a statement, criticized the exploitative use of artists' work by major corporations.[142][143][144]

On June 11, 2025, Universal Pictures (owned by Comcast) and The Walt Disney Company filed a copyright infringement lawsuit against Midjourney.[145] The suit described Midjourney as "a bottomless pit of plagiarism".[145]

Deception


As with other types of photo manipulation since the early 19th century, some people in the early 21st century have been concerned that AI could be used to create content that is misleading and can damage a person's reputation, such as deepfakes.[146] Artist Sarah Andersen, who previously had her art copied and edited to depict Neo-Nazi beliefs, stated that the spread of hate speech online can be worsened by the use of image generators.[139] Some also generate images or videos for the purpose of catfishing.

AI systems can create deepfake content, which is often viewed as harmful and offensive. The creation of deepfakes poses a risk to individuals who have not consented to them.[147] This mainly refers to deepfake pornography used as revenge porn, where sexually explicit material is disseminated to humiliate or harm another person. AI-generated child pornography has been deemed a potential danger to society due to its unlawful nature.[148]

  • Pseudomnesia: The Electrician won Boris Eldagsen one of the categories in the Sony World Photography Awards competition.
  • A 2023 AI-generated image of Pope Francis wearing a puffy winter jacket fooled some viewers into believing it was an actual photograph, and went viral on social media platforms.
  • Journalist Eliot Higgins' Midjourney-generated image depicting President Donald Trump getting arrested was posted on Twitter and went viral.[149]
  • One of the seven AI-generated images used as figures in the now-retracted paper "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway" (Figure 1, "Spermatogonial stem cells, isolated, purified and cultured from rat testes").

After winning one of the categories of the 2023 Sony World Photography Awards "Creative" open competition, Boris Eldagsen stated that his entry was actually created with artificial intelligence. Photographer Feroz Khan commented to the BBC that Eldagsen had "clearly shown that even experienced photographers and art experts can be fooled".[150] Smaller contests have been affected as well; in 2023, a contest run by author Mark Lawrence as Self-Published Fantasy Blog-Off was cancelled after the winning entry was allegedly revealed to be a collage of images generated with Midjourney.[151]

In May 2023, on social media sites such as Reddit and Twitter, attention was given to a Midjourney-generated image of Pope Francis wearing a white puffer coat.[152][153] Additionally, an AI-generated image of an attack on the Pentagon went viral as part of a hoax news story on Twitter.[154][155]

In the days before the March 2023 indictment of Donald Trump as part of the Stormy Daniels–Donald Trump scandal, several AI-generated images allegedly depicting Trump's arrest went viral online.[156][157] On March 20, British journalist Eliot Higgins generated various images of Donald Trump being arrested or imprisoned using Midjourney v5 and posted them on Twitter; two images of Trump struggling against arresting officers went viral under the mistaken impression that they were genuine, accruing more than 5 million views in three days.[158][159] According to Higgins, the images were not meant to mislead, but he was banned from using Midjourney services as a result. As of April 2024, the tweet had garnered more than 6.8 million views.

In February 2024, the paper "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway" was published using AI-generated images. It was later retracted from Frontiers in Cell and Developmental Biology because the paper "does not meet the standards".[160]

To mitigate some deceptions, OpenAI developed a tool in 2024 to detect images generated by DALL-E 3.[161] In testing, this tool accurately identified DALL-E 3-generated images approximately 98% of the time, and it is also fairly capable of recognizing images that have been visually modified by users after generation.[162]

Income and employment stability

Further information: Workplace impact of artificial intelligence and Technological unemployment

As generative AI image software such as Stable Diffusion and DALL-E continues to advance, concerns have grown about the problems these systems pose for creativity and artistry.[139] In 2022, artists working in various media raised concerns about the impact that generative artificial intelligence could have on their ability to earn money, particularly if AI-based images started replacing artists working in the illustration and design industries.[163][164] In August 2022, digital artist R. J. Palmer stated that "I could easily envision a scenario where using AI, a single artist or art director could take the place of 5–10 entry level artists... I have seen a lot of self-published authors and such say how great it will be that they don't have to hire an artist."[129] Scholars Jiang et al. state that "Leaders of companies like Open AI and Stability AI have openly stated that they expect generative AI systems to replace creatives imminently."[139] A 2022 case study found that AI-produced images created by technology like DALL-E caused some traditional artists to be concerned about losing work, while others use it to their advantage and view it as a tool.[147]

AI-based images have become more commonplace in art markets and search engines because AI-based text-to-image systems are trained on pre-existing artistic images, sometimes without the original artist's consent, allowing the software to mimic specific artists' styles.[139][165] For example, Polish digital artist Greg Rutkowski has stated that it is more difficult to search for his work online because many of the images in the results are AI-generated specifically to mimic his style.[64] Furthermore, some training databases on which AI systems are based are not accessible to the public.

The ability of AI-based art software to mimic or forge artistic style also raises concerns of malice or greed.[139][166][167] Works of AI-generated art, such as Théâtre D'opéra Spatial, a text-to-image AI illustration that won the grand prize in the August 2022 digital art competition at the Colorado State Fair, have begun to overwhelm art contests and other submission forums meant for small artists.[139][166][167] The Netflix short film The Dog & the Boy, released in January 2023, received backlash online for its use of artificial intelligence art to create the film's background artwork.[168] In the same vein, Disney released Secret Invasion, a Marvel TV show with an AI-generated intro, on Disney+ in 2023, causing concern and backlash over the idea that artists could be made obsolete by machine-learning tools.[169]

AI-generated art at the entrance of a school

AI art has sometimes been seen as a potential replacement for traditional stock images.[170] In 2023, Shutterstock announced a beta test of an AI tool that can regenerate partial content of other Shutterstock images. Getty Images and Nvidia partnered to launch Generative AI by iStock, a model trained on Getty's and iStock's photo libraries using Nvidia's Picasso model.[171]

Power usage

In this 1923 comic, H. T. Webster humorously imagines the life of a cartoonist in 2023, when machines powered by electricity can produce and execute ideas for cartoons.

Researchers from Hugging Face and Carnegie Mellon University reported in a 2023 paper that generating one thousand 1024×1024 images using Stable Diffusion's XL 1.0 base model requires 11.49 kWh of energy and generates 1,594 grams (56.2 oz) of carbon dioxide, which is roughly equivalent to driving an average gas-powered car a distance of 4.1 miles (6.6 km). Comparing 88 different models, the paper concluded that image-generation models used on average around 2.9 kWh of energy per 1,000 inferences.[172]
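A back-of-envelope sketch (using only the batch figures quoted above, not code from the paper) converts the reported per-1,000-image totals into per-image values:

```python
# Figures reported in the 2023 Hugging Face / Carnegie Mellon paper
# for Stable Diffusion XL 1.0 generating 1,000 images at 1024x1024.
ENERGY_PER_1000_IMAGES_KWH = 11.49  # kilowatt-hours per 1,000 images
CO2_PER_1000_IMAGES_G = 1594        # grams of CO2 per 1,000 images

# Divide by the batch size to get per-image values.
energy_per_image_kwh = ENERGY_PER_1000_IMAGES_KWH / 1000   # 0.01149 kWh
energy_per_image_wh = energy_per_image_kwh * 1000          # 11.49 Wh
co2_per_image_g = CO2_PER_1000_IMAGES_G / 1000             # 1.594 g

print(f"~{energy_per_image_wh:.2f} Wh and ~{co2_per_image_g:.2f} g CO2 per image")
```

That is, each image accounts for roughly 11.5 watt-hours of energy and about 1.6 grams of carbon dioxide under the paper's measurement conditions.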

Analysis of existing art using AI


In addition to the creation of original art, research methods that use AI have been developed to quantitatively analyze digital art collections. This has been made possible by the large-scale digitization of artwork in the past few decades. According to Cetinic and She (2022), using artificial intelligence to analyze existing art collections can provide new perspectives on the development of artistic styles and the identification of artistic influences.[173][174]

Two computational methods, close reading and distant viewing, are the typical approaches used to analyze digitized art.[175] Close reading focuses on specific visual aspects of one piece. Some tasks performed by machines in close reading methods include computational artist authentication and analysis of brushstrokes or texture properties. In contrast, through distant viewing methods, the similarity across an entire collection for a specific feature can be statistically visualized. Common tasks relating to this method include automatic classification,object detection,multimodal tasks, knowledge discovery in art history, and computational aesthetics.[174] Synthetic images can also be used to train AI algorithms forart authentication and to detect forgeries.[176]

Researchers have also introduced models that predict emotional responses to art. One such model is ArtEmis, a large-scale dataset paired with machine learning models. ArtEmis includes emotional annotations from over 6,500 participants along with textual explanations. By analyzing both visual inputs and the accompanying text descriptions from this dataset, ArtEmis enables the generation of nuanced emotional predictions.[177][178]

Other forms of AI art


AI has also been used in arts outside of visual arts. Generative AI has been used to create music, as well as in video game production beyond imagery, especially for level design (e.g., for custom maps) and creating new content (e.g., quests or dialogue) or interactive stories in video games.[179][180] AI has also been used in the literary arts,[181] such as helping with writer's block, inspiration, or rewriting segments.[182][183][184][185] In the culinary arts, some prototype cooking robots can dynamically taste, which can assist chefs in analyzing the content and flavor of dishes during the cooking process.[186]

See also


References

  1. ^Todorovic, Milos (2024)."AI and Heritage: A Discussion on Rethinking Heritage in a Digital World".International Journal of Cultural and Social Studies.10 (1):1–11.doi:10.46442/intjcss.1397403. Retrieved4 July 2024.
  2. ^abcVincent, James (24 May 2022)."All these images were generated with Google's latest text-to-image AI".The Verge. Vox Media.Archived from the original on 15 February 2023. Retrieved28 May 2022.
  3. ^abEdwards, Benj (2 August 2024)."FLUX: This new AI image generator is eerily good at creating human hands".Ars Technica. Retrieved17 November 2024.
  4. ^Noel Sharkey (4 July 2007),A programmable robot from 60 AD, vol. 2611, New Scientist,archived from the original on 13 January 2018, retrieved22 October 2019
  5. ^Brett, Gerard (July 1954), "The Automata in the Byzantine "Throne of Solomon"",Speculum,29 (3):477–487,doi:10.2307/2846790,ISSN 0038-7134,JSTOR 2846790,S2CID 163031682.
  6. ^kelinich (8 March 2014)."Maillardet's Automaton".The Franklin Institute.Archived from the original on 24 August 2023. Retrieved24 August 2023.
  7. ^Natale, S.; Henrickson, L. (2022). "The Lovelace Effect: Perceptions of Creativity in Machines". White Rose Research Online. Retrieved 24 September 2024, from https://eprints.whiterose.ac.uk/182906/6/NMS-20-1531.R2_Proof_hi%20%282%29.pdf
  8. ^Lovelace, A. (1843). Notes by the translator. Taylor's Scientific Memoirs, 3, 666-731.
  9. ^Turing, Alan (October 1950)."Computing Machinery and Intelligence"(PDF). Retrieved16 September 2024.
  10. ^Crevier, Daniel (1993).AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. p. 109.ISBN 0-465-02997-3.
  11. ^Newquist, HP (1994).The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think. New York: Macmillan/SAMS. pp. 45–53.ISBN 978-0-672-30412-5.
  12. ^abElgammal, Ahmed (2019). "AI Is Blurring the Definition of Artist".American Scientist.107 (1): 18.doi:10.1511/2019.107.1.18.ISSN 0003-0996.S2CID 125379532.
  13. ^Greenfield, Gary (3 April 2015)."When the machine made art: the troubled history of computer art, by Grant D. Taylor".Journal of Mathematics and the Arts.9 (1–2):44–47.doi:10.1080/17513472.2015.1009865.ISSN 1751-3472.S2CID 118762731.
  14. ^McCorduck, Pamela (1991).AARONS's Code: Meta-Art. Artificial Intelligence, and the Work of Harold Cohen. New York: W. H. Freeman and Company. p. 210.ISBN 0-7167-2173-2.
  15. ^Poltronieri, Fabrizio Augusto; Hänska, Max (23 October 2019)."Technical Images and Visual Art in the Era of Artificial Intelligence".Proceedings of the 9th International Conference on Digital and Interactive Arts. Braga Portugal: ACM. pp. 1–8.doi:10.1145/3359852.3359865.ISBN 978-1-4503-7250-3.S2CID 208109113.Archived from the original on 29 September 2022. Retrieved10 May 2022.
  16. ^"HAROLD COHEN (1928–2016)".Art Forum. 9 May 2016. Retrieved19 September 2023.
  17. ^abDiehl, Travis (15 February 2024)."A.I. Art That's More Than a Gimmick? Meet AARON".The New York Times.ISSN 0362-4331. Retrieved1 June 2024.
  18. ^"Karl Sims - ACM SIGGRAPH HISTORY ARCHIVES".history.siggraph.org. 20 August 2017. Retrieved9 June 2024.
  19. ^"Karl Sims | CSAIL Alliances".cap.csail.mit.edu.Archived from the original on 9 June 2024. Retrieved9 June 2024.
  20. ^"Karl Sims".www.macfound.org.Archived from the original on 9 June 2024. Retrieved9 June 2024.
  21. ^"Golden Nicas".Ars Electronica Center. Archived fromthe original on 26 February 2023. Retrieved26 February 2023.
  22. ^"Panspermia by Karl Sims, 1990".www.karlsims.com.Archived from the original on 26 November 2023. Retrieved26 February 2023.
  23. ^"Liquid Selves by Karl Sims, 1992".www.karlsims.com. Retrieved26 February 2023.
  24. ^"ICC | "Galápagos" - Karl SIMS (1997)".NTT InterCommunication Center [ICC].Archived from the original on 14 June 2024. Retrieved14 June 2024.
  25. ^"- Winners".Television Academy.Archived from the original on 1 July 2020. Retrieved26 June 2022.
  26. ^Draves, Scott (2005)."The Electric Sheep Screen-Saver: A Case Study in Aesthetic Evolution". In Rothlauf, Franz; Branke, Jürgen; Cagnoni, Stefano; Corne, David Wolfe; Drechsler, Rolf; Jin, Yaochu; Machado, Penousal; Marchiori, Elena; Romero, Juan (eds.).Applications of Evolutionary Computing. Lecture Notes in Computer Science. Vol. 3449. Berlin, Heidelberg: Springer. pp. 458–467.doi:10.1007/978-3-540-32003-6_46.ISBN 978-3-540-32003-6.S2CID 14256872.Archived from the original on 7 October 2024. Retrieved17 July 2024.
  27. ^"Entrevista Scott Draves - Primer Premio Ex-Aequo VIDA 4.0".YouTube. 17 July 2012.Archived from the original on 28 December 2023. Retrieved26 February 2023.
  28. ^"Robots, Race, and Algorithms: Stephanie Dinkins at Recess Assembly".Art21 Magazine. 7 November 2017. Retrieved25 February 2020.
  29. ^Small, Zachary (7 April 2017)."Future Perfect: Flux Factory's Intersectional Approach to Technology".ARTnews.com.Archived from the original on 12 September 2024. Retrieved4 May 2020.
  30. ^Dunn, Anna (11 July 2018)."Multiply, Identify, Her".The Brooklyn Rail.Archived from the original on 19 March 2023. Retrieved25 February 2025.
  31. ^"Not the Only One".Creative Capital.Archived from the original on 16 February 2020. Retrieved26 February 2023.
  32. ^"Drawing Operations (2015) – Sougwen Chung (愫君)". Retrieved25 February 2025.
  33. ^"Sougwen Chung".The Lumen Prize. Retrieved26 February 2023.
  34. ^"Is artificial intelligence set to become art's next medium?".Christie's. 12 December 2018.Archived from the original on 5 February 2023. Retrieved21 May 2019.
  35. ^Cohn, Gabe (25 October 2018)."AI Art at Christie's Sells for $432,500".The New York Times.ISSN 0362-4331.Archived from the original on 5 May 2019. Retrieved26 May 2024.
  36. ^Turnbull, Amanda (6 January 2020)."The price of AI art: Has the bubble burst?".The Conversation.Archived from the original on 26 May 2024. Retrieved26 May 2024.
  37. ^Cayanan, Joanna (13 July 2024)."Novelist Otsuichi Co-Directs generAIdoscope, Omnibus Film Produced Entirely With Generative AI".Anime News Network.Archived from the original on 4 March 2025. Retrieved4 March 2025.
  38. ^Hodgkins, Crystalyn (28 February 2025)."Frontier Works, KaKa Creation's Twins Hinahima AI Anime Reveals March 29 TV Debut".Anime News Network.Archived from the original on 28 February 2025. Retrieved4 March 2025.
  39. ^"サポーティブAIとは - アニメ「ツインズひなひま」公式サイト" [What's Supportive AI? - Twins Hinahima Anime Official Website].anime-hinahima.com (in Japanese). Retrieved4 March 2025.
  40. ^"What Is Deep Learning? | IBM".www.ibm.com. 17 June 2024. Retrieved13 November 2024.
  41. ^Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014).Generative Adversarial Nets(PDF). Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014). pp. 2672–2680.Archived(PDF) from the original on 22 November 2019. Retrieved26 January 2022.
  42. ^Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015)."DeepDream - a code example for visualizing Neural Networks". Google Research. Archived fromthe original on 8 July 2015.
  43. ^Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015)."Inceptionism: Going Deeper into Neural Networks". Google Research. Archived fromthe original on 3 July 2015.
  44. ^Szegedy, Christian; Liu, Wei; Jia, Yangqing; Sermanet, Pierre; Reed, Scott E.; Anguelov, Dragomir; Erhan, Dumitru; Vanhoucke, Vincent; Rabinovich, Andrew (2015). "Going deeper with convolutions".IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7–12, 2015. IEEE Computer Society. pp. 1–9.arXiv:1409.4842.doi:10.1109/CVPR.2015.7298594.ISBN 978-1-4673-6964-0.
  45. ^Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015)."DeepDream - a code example for visualizing Neural Networks". Google Research. Archived fromthe original on 8 July 2015.
  46. ^Reynolds, Matt (7 April 2017)."New computer vision challenge wants to teach robots to see in 3D".New Scientist.Archived from the original on 30 October 2018. Retrieved15 November 2024.
  47. ^Markoff, John (19 November 2012)."Seeking a Better Way to Find Web Images".The New York Times.
  48. ^Odena, Augustus; Olah, Christopher; Shlens, Jonathon (17 July 2017)."Conditional Image Synthesis with Auxiliary Classifier GANs".International Conference on Machine Learning. PMLR:2642–2651.arXiv:1610.09585.Archived from the original on 16 September 2024. Retrieved16 September 2024.
  49. ^Oord, Aäron van den; Kalchbrenner, Nal; Kavukcuoglu, Koray (11 June 2016)."Pixel Recurrent Neural Networks".Proceedings of the 33rd International Conference on Machine Learning. PMLR:1747–1756.Archived from the original on 9 August 2024. Retrieved16 September 2024.
  50. ^Parmar, Niki; Vaswani, Ashish; Uszkoreit, Jakob; Kaiser, Lukasz; Shazeer, Noam; Ku, Alexander; Tran, Dustin (3 July 2018)."Image Transformer".Proceedings of the 35th International Conference on Machine Learning. PMLR:4055–4064.
  51. ^Simon, Joel."About".Archived from the original on 2 March 2021. Retrieved3 March 2021.
  52. ^George, Binto; Carmichael, Gail (2021). Mathai, Susan (ed.).Artificial Intelligence Simplified: Understanding Basic Concepts -- the Second Edition. CSTrends LLP. pp. 7–25.ISBN 978-1-944708-04-7.
  53. ^Lee, Giacomo (21 July 2020)."Will this creepy AI platform put artists out of a job?".Digital Arts Online.Archived from the original on 22 December 2020. Retrieved3 March 2021.
  54. ^Ramesh, Aditya; Pavlov, Mikhail; Goh, Gabriel; Gray, Scott; Voss, Chelsea; Radford, Alec; Chen, Mark; Sutskever, Ilya (24 February 2021). "Zero-Shot Text-to-Image Generation".arXiv:2102.12092 [cs.LG].
  55. ^Burgess, Phillip."Generating AI "Art" with VQGAN+CLIP".Adafruit.Archived from the original on 28 September 2022. Retrieved20 July 2022.
  56. ^Radford, Alec; Kim, Jong Wook; Hallacy, Chris; Ramesh, Aditya; Goh, Gabriel; Agarwal, Sandhini; Sastry, Girish; Askell, Amanda; Mishkin, Pamela; Clark, Jack; Krueger, Gretchen; Sutskever, Ilya (2021). "Learning Transferable Visual Models From Natural Language Supervision".arXiv:2103.00020 [cs.CV].
  57. ^"What Are Diffusion Models?".Coursera. 4 April 2024.Archived from the original on 27 November 2024. Retrieved13 November 2024.
  58. ^Sohl-Dickstein, Jascha; Weiss, Eric; Maheswaranathan, Niru; Ganguli, Surya (1 June 2015)."Deep Unsupervised Learning using Nonequilibrium Thermodynamics"(PDF).Proceedings of the 32nd International Conference on Machine Learning.37. PMLR:2256–2265.arXiv:1503.03585.Archived(PDF) from the original on 21 September 2024. Retrieved16 September 2024.
  59. ^Dhariwal, Prafulla; Nichol, Alexander (2021)."Diffusion Models Beat GANs on Image Synthesis".Advances in Neural Information Processing Systems.34. Curran Associates, Inc.:8780–8794.arXiv:2105.05233.Archived from the original on 16 September 2024. Retrieved16 September 2024.
  60. ^Rombach, Robin; Blattmann, Andreas; Lorenz, Dominik; Esser, Patrick; Ommer, Björn (20 December 2021),High-Resolution Image Synthesis with Latent Diffusion Models,arXiv:2112.10752
  61. ^Rose, Janus (18 July 2022)."Inside Midjourney, The Generative Art AI That Rivals DALL-E".Vice.
  62. ^"NUWA-Infinity".nuwa-infinity.microsoft.com.Archived from the original on 6 December 2022. Retrieved10 August 2022.
  63. ^"Diffuse The Rest - a Hugging Face Space by huggingface".huggingface.co.Archived from the original on 5 September 2022. Retrieved5 September 2022.
  64. ^abHeikkilä, Melissa (16 September 2022)."This artist is dominating AI-generated art. And he's not happy about it".MIT Technology Review.Archived from the original on 14 January 2023. Retrieved2 October 2022.
  65. ^"Stable Diffusion". CompVis - Machine Vision and Learning LMU Munich. 15 September 2022.Archived from the original on 18 January 2023. Retrieved15 September 2022.
  66. ^"Stable Diffusion creator Stability AI accelerates open-source AI, raises $101M".VentureBeat. 18 October 2022.Archived from the original on 12 January 2023. Retrieved10 November 2022.
  67. ^Choudhary, Lokesh (23 September 2022)."These new innovations are being built on top of Stable Diffusion".Analytics India Magazine.Archived from the original on 9 November 2022. Retrieved9 November 2022.
  68. ^Dave James (27 October 2022)."I thrashed the RTX 4090 for 8 hours straight training Stable Diffusion to paint like my uncle Hermann".PC Gamer.Archived from the original on 9 November 2022. Retrieved9 November 2022.
  69. ^Lewis, Nick (16 September 2022)."How to Run Stable Diffusion Locally With a GUI on Windows".How-To Geek.Archived from the original on 23 January 2023. Retrieved9 November 2022.
  70. ^Edwards, Benj (4 October 2022)."Begone, polygons: 1993's Virtua Fighter gets smoothed out by AI".Ars Technica.Archived from the original on 1 February 2023. Retrieved9 November 2022.
  71. ^Mehta, Sourabh (17 September 2022)."How to Generate an Image from Text using Stable Diffusion in Python".Analytics India Magazine.Archived from the original on 16 November 2022. Retrieved16 November 2022.
  72. ^"Announcing Ideogram AI".Ideogram.Archived from the original on 10 June 2024. Retrieved13 June 2024.
  73. ^Metz, Rachel (3 October 2023)."Ideogram Produces Text in AI Images That You Can Actually Read".Bloomberg News. Retrieved18 November 2024.
  74. ^"Flux.1 – ein deutscher KI-Bildgenerator dreht mit Grok frei".Handelsblatt (in German).Archived from the original on 30 August 2024. Retrieved17 November 2024.
  75. ^Zeff, Maxwell (14 August 2024)."Meet Black Forest Labs, the startup powering Elon Musk's unhinged AI image generator".TechCrunch.Archived from the original on 17 November 2024. Retrieved17 November 2024.
  76. ^Franzen, Carl (18 November 2024)."Mistral unleashes Pixtral Large and upgrades Le Chat into full-on ChatGPT competitor".VentureBeat. Retrieved11 December 2024.
  77. ^Growcoot, Matt (5 August 2024)."AI Image Generator Made by Stable Diffusion Inventors on Par With Midjourney and DALL-E".PetaPixel. Retrieved17 November 2024.
  78. ^Davis, Wes (7 December 2024)."X gives Grok a new photorealistic AI image generator".The Verge.Archived from the original on 12 December 2024. Retrieved10 December 2024.
  79. ^Clark, Pam (14 October 2024)."Photoshop delivers powerful innovation for Image Editing, Ideation, 3D Design, and more".Adobe Blog.Archived from the original on 30 January 2025. Retrieved8 February 2025.
  80. ^Chedraoui, Katelyn (19 October 2024)."Every New Feature Adobe Announced in Photoshop, Premiere Pro and More".CNET.Archived from the original on 5 February 2025. Retrieved8 February 2025.
  81. ^Fajar, Aditya (28 August 2023)."Microsoft Paint will use AI in Windows update 11".gizmologi.id. Retrieved8 February 2025.
  82. ^"OpenAI teases 'Sora,' its new text-to-video AI model".NBC News. 15 February 2024.Archived from the original on 15 February 2024. Retrieved28 October 2024.
  83. ^"Sora".Sora.Archived from the original on 27 December 2024. Retrieved27 December 2024.
  84. ^Mehta, Ivan (1 April 2025)."OpenAI's new image generator is now available to all users".TechCrunch.Archived from the original on 10 June 2025. Retrieved12 June 2025.
  85. ^"Midjourney launches its new V7 AI image model that can process text prompts better".Engadget. 4 April 2025. Retrieved12 June 2025.
  86. ^"Introducing FLUX.1 Kontext and the BFL Playground".Black Forest Labs. 29 May 2025. Retrieved12 June 2025.
  87. ^Wiggers, Kyle (20 May 2025)."Imagen 4 is Google's newest AI image generator".TechCrunch.Archived from the original on 20 May 2025. Retrieved12 June 2025.
  88. ^Wu, Yue (6 February 2025)."A Visual Guide to How Diffusion Models Work".Towards Data Science.Archived from the original on 13 March 2025. Retrieved12 June 2025.
  89. ^"Text-to-image: latent diffusion models".nicd.org.uk. 30 April 2024. Retrieved12 June 2025.
  90. ^"Image-to-Image Translation".dataforest.ai.Archived from the original on 19 May 2025. Retrieved12 June 2025.
  91. ^"What Is Image-to-Image Translation?".Search Enterprise AI. Retrieved12 June 2025.
  92. ^"Unlocking AI: The Evolution of Image to Video Technology".JMComms. 26 May 2025. Retrieved13 June 2025.
  93. ^Digital, Hans India (3 June 2025)."The Small Business Advantage: Leveraging Image-to-Video AI for Big Impact".www.thehansindia.com. Retrieved13 June 2025.
  94. ^"AI Video Generation: What Is It and How Does It Work?".www.colossyan.com.Archived from the original on 18 April 2025. Retrieved12 June 2025.
  95. ^"A.I. photo filters use neural networks to make photos look like Picassos".Digital Trends. 18 November 2019.Archived from the original on 9 November 2022. Retrieved9 November 2022.
  96. ^Biersdorfer, J. D. (4 December 2019)."From Camera Roll to Canvas: Make Art From Your Photos".The New York Times.Archived from the original on 5 March 2024. Retrieved9 November 2022.
  97. ^Psychotic, Pharma."Tools and Resources for AI Art". Archived fromthe original on 4 June 2022. Retrieved26 June 2022.
  98. ^Gal, Rinon; Alaluf, Yuval; Atzmon, Yuval; Patashnik, Or; Bermano, Amit H.; Chechik, Gal; Cohen-Or, Daniel (2 August 2022). "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion".arXiv:2208.01618 [cs.CV].
  99. ^"Textual Inversion · AUTOMATIC1111/stable-diffusion-webui Wiki".GitHub.Archived from the original on 7 February 2023. Retrieved9 November 2022.
  100. ^abcdElgan, Mike (1 November 2022)."How 'synthetic media' will transform business forever".Computerworld.Archived from the original on 10 February 2023. Retrieved9 November 2022.
  101. ^abRoose, Kevin (21 October 2022)."A.I.-Generated Art Is Already Transforming Creative Work".The New York Times.Archived from the original on 15 February 2023. Retrieved16 November 2022.
  102. ^abLeswing, Kif."Why Silicon Valley is so excited about awkward drawings done by artificial intelligence".CNBC.Archived from the original on 8 February 2023. Retrieved16 November 2022.
  103. ^Robertson, Adi (15 November 2022)."How DeviantArt is navigating the AI art minefield".The Verge.Archived from the original on 4 January 2023. Retrieved16 November 2022.
  104. ^Proulx, Natalie (September 2022)."Are A.I.-Generated Pictures Art?".The New York Times.Archived from the original on 6 February 2023. Retrieved16 November 2022.
  105. ^Vincent, James (15 September 2022)."Anyone can use this AI art generator — that's the risk".The Verge.Archived from the original on 21 January 2023. Retrieved9 November 2022.
  106. ^Davenport, Corbin."This AI Art Gallery Is Even Better Than Using a Generator".How-To Geek.Archived from the original on 27 December 2022. Retrieved9 November 2022.
  107. ^Robertson, Adi (2 September 2022)."Professional AI whisperers have launched a marketplace for DALL-E prompts".The Verge.Archived from the original on 15 February 2023. Retrieved9 November 2022.
  108. ^"Text-zu-Bild-Revolution: Stable Diffusion ermöglicht KI-Bildgenerieren für alle".heise online (in German).Archived from the original on 29 January 2023. Retrieved9 November 2022.
  109. ^Mohamad Diab; Julian Herrera; Musical Sleep; Bob Chernow; Coco Mao (28 October 2022)."Stable Diffusion Prompt Book"(PDF).Archived(PDF) from the original on 30 March 2023. Retrieved7 August 2023.
  110. ^Corsi, Giulio; Marino, Bill; Wong, Willow (3 June 2024)."The spread of synthetic media on X".Harvard Kennedy School Misinformation Review.doi:10.37016/mr-2020-140.
  111. ^Reinhuber, Elke (2 December 2021)."Synthography–An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography". Google Scholar.Archived from the original on 10 February 2023. Retrieved20 December 2022.
  112. ^Milne, Stefan (29 November 2023)."AI image generator Stable Diffusion perpetuates racial and gendered stereotypes, study finds".UW News.
  113. ^Hadhazy, Adam (18 April 2017)."Biased bots: Artificial-intelligence systems echo human prejudices".Office of Engineering Communications - Princeton University.Archived from the original on 10 July 2018. Retrieved13 November 2024.
  114. ^Fox, V. (11 March 2023). "AI Art & the Ethical Concerns of Artists". Beautiful Bizarre Magazine. Retrieved 24 September 2024, from https://beautifulbizarre.net/2023/03/11/ai-art-ethical-concerns-of-artists/
  115. ^Heikkilä, Melissa."The viral AI avatar app Lensa undressed me—without my consent".MIT Technology Review. Retrieved26 November 2024.
  116. ^Lamensch, Marie."Generative AI Tools Are Perpetuating Harmful Gender Stereotypes".Centre for International Governance Innovation. Retrieved26 November 2024.
  117. ^Birhane, Abeba; Prabhu, Vinay Uday (1 July 2020). "Large image datasets: A pyrrhic win for computer vision?".2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 1536–1546.arXiv:2006.16923.doi:10.1109/WACV48630.2021.00158.ISBN 978-1-6654-0477-8.S2CID 220265500.
  118. ^abRobertson, Adi (21 February 2024)."Google apologizes for "missing the mark" after Gemini generated racially diverse Nazis".The Verge.Archived from the original on 21 April 2024. Retrieved20 April 2024.
  119. ^Crimmins, Tricia (21 February 2024)."Why Google's new AI Gemini accused of refusing to acknowledge the existence of white people".The Daily Dot.Archived from the original on 8 May 2024. Retrieved8 May 2024.
  120. ^Raghavan, Prabhakar (23 February 2024)."Gemini image generation got it wrong. We'll do better".Google.Archived from the original on 21 April 2024. Retrieved20 April 2024.
  121. ^"Unmasking Racism in AI: From Gemini's Overcorrection to AAVE Bias and Ethical Considerations | Race & Social Justice Review". 2 April 2024.Archived from the original on 29 August 2024. Retrieved26 October 2024.
  122. ^"Rendering misrepresentation: Diversity failures in AI image generation".Brookings.Archived from the original on 3 October 2024. Retrieved26 October 2024.
  123. ^Tao, Feng (4 March 2022)."A New Harmonisation of Art and Technology: Philosophic Interpretations of Artificial Intelligence Art".Critical Arts.36 (1–2):110–125.doi:10.1080/02560046.2022.2112725.ISSN 0256-0046.Archived from the original on 23 August 2022. Retrieved13 April 2025.
  124. ^Stark, Luke; Crawford, Kate (7 September 2019)."The Work of Art in the Age of Artificial Intelligence: What Artists Can Teach Us About the Ethics of Data Practice".Surveillance & Society.17 (3/4):442–455.doi:10.24908/ss.v17i3/4.10821.ISSN 1477-7487.S2CID 214218440.Archived from the original on 7 October 2023. Retrieved26 October 2023.
  125. ^Pamela, Samuelson (1985)."Allocating Ownership Rights in Computer-Generated Works".U. Pitt. L. Rev.47: 1185.Archived from the original on 22 June 2023. Retrieved27 October 2022.
  126. ^Victor, Palace (January 2019)."What if Artificial Intelligence Wrote This? Artificial Intelligence and Copyright Law".Fla. L. Rev.71 (1):231–241.Archived from the original on 13 August 2021. Retrieved27 October 2022.
  127. ^Chayka, Kyle (10 February 2023)."Is A.I. Art Stealing from Artists?".The New Yorker.ISSN 0028-792X. Retrieved6 September 2023.
  128. ^abVallance, Chris (13 September 2022).""Art is dead Dude" - the rise of the AI artists stirs debate".BBC News.Archived from the original on 27 January 2023. Retrieved2 October 2022.
  129. ^abPlunkett, Luke (25 August 2022)."AI Creating 'Art' Is An Ethical And Copyright Nightmare".Kotaku.Archived from the original on 14 February 2023. Retrieved21 December 2022.
  130. ^Edwards, Benj (15 December 2022)."Artists stage mass protest against AI-generated artwork on ArtStation".Ars Technica.Archived from the original on 14 July 2023. Retrieved21 December 2022.
  131. ^Recker, Jane."U.S. Copyright Office Rules A.I. Art Can't Be Copyrighted".Smithsonian Magazine.Archived from the original on 21 February 2023. Retrieved11 January 2023.
  132. ^"You can't copyright AI-created art, according to US officials".Engadget. 13 December 2022.Archived from the original on 31 May 2023. Retrieved1 January 2023.
  133. ^"Re: Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise"(PDF).Archived(PDF) from the original on 24 July 2023. Retrieved16 February 2023.
  134. ^Cho, Winston (18 August 2023)."AI-Created Art Isn't Copyrightable, Judge Says in Ruling That Could Give Hollywood Studios Pause".Hollywood Reporter. Retrieved19 August 2023.
  135. ^"Can I sell images I create with DALL·E?". OpenAI Help Center. Retrieved 11 November 2024, from https://help.openai.com/en/articles/6425277-can-i-sell-images-i-create-with-dall-e. Archived 11 November 2024 at the Wayback Machine.
  136. ^Vincent, James (16 January 2023). "AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit". The Verge.Archived from the original on 9 March 2023. Retrieved14 February 2023.
  137. ^Brittain, Blake (19 July 2023)."US judge finds flaws in artists' lawsuit against AI companies".Reuters.Archived from the original on 6 September 2023. Retrieved6 August 2023.
  138. ^Korn, Jennifer (17 January 2023)."Getty Images suing the makers of popular AI art tool for allegedly stealing photos".CNN.Archived from the original on 1 March 2023. Retrieved22 January 2023.
  139. ^abcdefgJiang, Harry H.; Brown, Lauren; Cheng, Jessica; Khan, Mehtab; Gupta, Abhishek; Workman, Deja; Hanna, Alex; Flowers, Johnathan; Gebru, Timnit (8 August 2023)."AI Art and its Impact on Artists".Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. ACM. pp. 363–374.doi:10.1145/3600211.3604681.ISBN 979-8-4007-0231-0.S2CID 261279983.Archived from the original on 28 September 2023. Retrieved20 September 2023.
  140. ^Rosman, Rebecca (22 March 2024)."Tennessee becomes the first state to protect musicians and other artists against AI".NPR.
  141. ^Robins-Early, Nick (9 April 2024)."New bill would force AI companies to reveal use of copyrighted art | Artificial intelligence (AI) | The Guardian".amp.theguardian.com. Retrieved13 April 2024.
  142. ^Elwes, Jake; CROSSLUCID; Vettier, Aurèce; Rauh, Maribeth (26 November 2024)."Art in the Cage of Digital Reproduction".Art in the Cage of Digital Reproduction. Art in the Cage Collective.Archived from the original on 10 February 2025. Retrieved7 February 2025.
  143. ^Murgia, Madhumita; Criddle, Cristina (26 November 2024)."OpenAI's text-to-video AI tool Sora leaked in protest by artists".Financial Times.Archived from the original on 30 January 2025. Retrieved7 February 2025.
  144. ^Spangler, Todd (27 November 2024)."OpenAI Shuts Down Sora Access After Artists Released Video-Generation Tool in Protest: 'We Are Not Your PR Puppets'".Variety. Retrieved7 February 2025.
  145. ^abChmielewski, Dawn (11 June 2025)."Disney, Universal sue image creator Midjourney for copyright infringement".Reuters. Retrieved11 June 2025.
  146. ^Wiggers, Kyle (24 August 2022)."Deepfakes: Uncensored AI art model prompts ethics questions".TechCrunch.Archived from the original on 31 August 2022. Retrieved15 September 2022.
  147. ^abParra, Dex (24 February 2023)."CASE STUDY: The Case of DALLE-2".University of Texas at Austin, Center for Media Management.Archived from the original on 8 December 2023. Retrieved8 December 2023.
  148. ^Beahm, Anna (12 February 2024)."What you need to know about the ongoing fight to prevent AI-generated child porn".Reckon News.Archived from the original on 7 March 2024. Retrieved7 March 2024.
  149. ^Higgins, Eliot (21 March 2023)."Making pictures of Trump getting arrested while waiting for Trump's arrest". Archived fromthe original on 20 April 2023 – via Twitter.
  150. ^"Sony World Photography Award 2023: Winner refuses award after revealing AI creation".BBC News. 17 April 2023. Retrieved16 June 2023.
  151. ^Sato, Mia (9 June 2023)."How AI art killed an indie book cover contest".The Verge.Archived from the original on 19 June 2023. Retrieved19 June 2023.
  152. ^Novak, Matt."That Viral Image Of Pope Francis Wearing A White Puffer Coat Is Totally Fake".Forbes.Archived from the original on 28 May 2023. Retrieved16 June 2023.
  153. ^Stokel-Walker, Chris (27 March 2023)."We Spoke To The Guy Who Created The Viral AI Image Of The Pope That Fooled The World".BuzzFeed News.Archived from the original on 28 May 2023. Retrieved16 June 2023.
  154. ^Edwards, Benj (23 May 2023)."Fake Pentagon "explosion" photo sows confusion on Twitter".Ars Technica.Archived from the original on 2 July 2024. Retrieved2 July 2024.
  155. ^Oremus, Will; Harwell, Drew; Armus, Teo (22 May 2023)."A tweet about a Pentagon explosion was fake. It still went viral".Washington Post.Archived from the original on 28 May 2023. Retrieved2 July 2024.
  156. ^Devlin, Kayleen; Cheetham, Joshua (25 March 2023)."Fake Trump arrest photos: How to spot an AI-generated image".Archived from the original on 12 April 2024. Retrieved24 February 2024.
  157. ^"Trump shares deepfake photo of himself praying as AI images of arrest spread online".The Independent. 24 March 2023.Archived from the original on 28 May 2023. Retrieved16 June 2023.
  158. ^Garber, Megan (24 March 2023)."The Trump AI Deepfakes Had an Unintended Side Effect".The Atlantic.Archived from the original on 18 May 2024. Retrieved21 April 2024.
  159. ^Lasarte, Diego (23 March 2023)."As fake photos of Trump's "arrest" went viral, Trump shared an AI-generated photo too".Quartz (publication).Archived from the original on 21 April 2024. Retrieved21 April 2024.
  160. ^Guo, Xinyu; Dong, Liang; Hao, Dingjun (2024). Kumaresan, Arumugam (ed.)."Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway".Frontiers in Cell and Developmental Biology.12.doi:10.3389/fcell.2024.1386861.ISSN 2296-634X.
  161. ^Whitwam, Ryan (8 May 2024)."New OpenAI Tool Can Detect Dall-E 3 AI Images With 98% Accuracy".ExtremeTech.Archived from the original on 26 May 2024. Retrieved26 May 2024.
  162. ^"OpenAI's new tool can detect images created by DALL-E 3". 7 May 2024.Archived from the original on 14 January 2025. Retrieved15 November 2024.
  163. ^King, Hope (10 August 2022)."AI-generated digital art spurs debate about news illustrations".Axios.Archived from the original on 18 December 2022. Retrieved2 October 2022.
  164. ^Salkowitz, Rob (16 September 2022)."AI Is Coming For Commercial Art Jobs. Can It Be Stopped?".Forbes.Archived from the original on 2 October 2022. Retrieved2 October 2022.
  165. ^Inie, Nanna; Falk, Jeanette; Tanimoto, Steve (19 April 2023)."Designing Participatory AI: Creative Professionals' Worries and Expectations about Generative AI".Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. ACM. pp. 1–8.arXiv:2303.08931.doi:10.1145/3544549.3585657.ISBN 978-1-4503-9422-2.S2CID 257557305.Archived from the original on 28 September 2023. Retrieved20 September 2023.
  166. ^abRoose, Kevin (2022)."An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy".The New York Times.Archived from the original on 2 September 2022. Retrieved1 October 2022.
  167. ^ab"An AI-Generated Artwork Won First Place at a State Fair Fine Arts Competition, and Artists Are Pissed".Vice. Retrieved15 September 2022.
  168. ^Chen, Min (7 February 2023)."Netflix Japan Is Drawing Ire for Using A.I. to Generate the Background Art of Its New Anime Short".Artnet.Archived from the original on 2 December 2023. Retrieved2 December 2023.
  169. ^Pulliam, C. (27 June 2023)."Marvel's Secret Invasion AI credits should shock no one".The Verge. Retrieved 26 August 2024, from https://www.theverge.com/2023/6/27/23770133/secret-invasion-ai-credits-marvel. Archived 11 November 2024 at theWayback Machine.
  170. ^Tolliver-Walker, Heidi (11 October 2023)."Can AI-Generated Images Replace Stock?".WhatTheyThink.Archived from the original on 26 May 2024. Retrieved26 May 2024.
  171. ^David, Emilia (8 January 2024)."Getty and Nvidia bring generative AI to stock photos".The Verge.Archived from the original on 26 May 2024. Retrieved26 May 2024.
  172. ^Luccioni, Alexandra Sasha; Jernite, Yacine; Strubell, Emma (2024). "Power Hungry Processing: Watts Driving the Cost of AI Deployment?".The 2024 ACM Conference on Fairness, Accountability, and Transparency. pp. 85–99.arXiv:2311.16863.doi:10.1145/3630106.3658542.ISBN 979-8-4007-0450-5.
  173. ^Cetinic, Eva; She, James (31 May 2022)."Understanding and Creating Art with AI: Review and Outlook".ACM Transactions on Multimedia Computing, Communications, and Applications.18 (2):1–22.arXiv:2102.09109.doi:10.1145/3475799.ISSN 1551-6857.S2CID 231951381.Archived from the original on 22 June 2023. Retrieved8 April 2023.
  174. ^abCetinic, Eva; She, James (16 February 2022). "Understanding and Creating Art with AI: Review and Outlook".ACM Transactions on Multimedia Computing, Communications, and Applications.18 (2): 66:1–66:22.arXiv:2102.09109.doi:10.1145/3475799.ISSN 1551-6857.S2CID 231951381.
  175. ^Lang, Sabine; Ommer, Bjorn (2018)."Reflecting on How Artworks Are Processed and Analyzed by Computer Vision: Supplementary Material".Proceedings of the European Conference on Computer Vision (ECCV) Workshops.Archived from the original on 16 April 2024. Retrieved8 January 2023 – via Computer Vision Foundation.
  176. ^Ostmeyer, Johann; Schaerf, Ludovica; Buividovich, Pavel; Charles, Tessa; Postma, Eric; Popovici, Carina (14 February 2024)."Synthetic images aid the recognition of human-made art forgeries".PLOS ONE.19 (2): e0295967.arXiv:2312.14998.Bibcode:2024PLoSO..1995967O.doi:10.1371/journal.pone.0295967.ISSN 1932-6203.PMC 10866502.PMID 38354162.
  177. ^Achlioptas, Panos; Ovsjanikov, Maks; Haydarov, Kilichbek; Elhoseiny, Mohamed; Guibas, Leonidas (18 January 2021). "ArtEmis: Affective Language for Visual Art".arXiv:2101.07396 [cs.CV].
  178. ^Myers, Andrew (22 March 2021)."Artist's Intent: AI Recognizes Emotions in Visual Art".hai.stanford.edu.Archived from the original on 15 October 2024. Retrieved24 November 2024.
  179. ^Yannakakis, Georgios N. (15 May 2012). "Game AI revisited".Proceedings of the 9th conference on Computing Frontiers. pp. 285–292.doi:10.1145/2212908.2212954.ISBN 978-1-4503-1215-8.S2CID 4335529.
  180. ^"AI creates new levels for Doom and Super Mario games".BBC News. 8 May 2018.Archived from the original on 12 December 2022. Retrieved9 November 2022.
  181. ^Katsnelson, Alla (29 August 2022)."Poor English skills? New AIs help researchers to write better".Nature.609 (7925):208–209.Bibcode:2022Natur.609..208K.doi:10.1038/d41586-022-02767-9.PMID 36038730.S2CID 251931306.
  182. ^"KoboldAI/KoboldAI-Client".GitHub. 9 November 2022.Archived from the original on 4 February 2023. Retrieved9 November 2022.
  183. ^Dzieza, Josh (20 July 2022)."Can AI write good novels?".The Verge.Archived from the original on 10 February 2023. Retrieved16 November 2022.
  184. ^"AI Writing Assistants: A Cure for Writer's Block or Modern-Day Clippy?".PCMAG.Archived from the original on 23 January 2023. Retrieved16 November 2022.
  185. ^Song, Victoria (2 November 2022)."Google's new prototype AI tool does the writing for you".The Verge.Archived from the original on 7 February 2023. Retrieved16 November 2022.
  186. ^Sochacki, Grzegorz; Abdulali, Arsen; Iida, Fumiya (2022)."Mastication-Enhanced Taste-Based Classification of Multi-Ingredient Dishes for Robotic Cooking".Frontiers in Robotics and AI.9: 886074.doi:10.3389/frobt.2022.886074.ISSN 2296-9144.PMC 9114309.PMID 35603082.