This page is within the scope of WikiProject Images and Media, a project which is currently considered to be inactive.
The project page associated with this talk page is an official policy on Wikipedia. Policies have wide acceptance among editors and are considered a standard for all users to follow. Please review policy editing recommendations before making any substantive change to this page. Always remember to keep cool when editing, and don't panic.
The text as was (allegedly) developed through the RfC: This text mostly comes from the closing summary of Wikipedia:Requests for comment/AI images#Relist with broader question: Ban all AI images? That RFC asked editors only "Should AI-generated images be banned from use in articles? (Where "AI-generated" means wholly created by generative AI, not a human-created image that has been modified with AI tools.)" and did not ask anyone to !vote on or even suggest any particular text.
The relevant part of the closing statement from that, BTW, says this:
"We have decided that, subject to common sense and with a number of exceptions, most images wholly generated by AIshould not be used on en.wiki. Obvious exceptions include articles about AI, and articles about notable AI-generated images. There may need to be some less-obvious exceptions, and we need more thought and community input about AI enhancement of an image originally generated by a human, and about using AI to generate simple images such as Venn diagrams. The community objects particularly strongly to AI-generated images (1) of named people, and (2) in technical or scientific subjects such as anatomy and chemistry. There is also well-articulated concern about the use of AI that's been trained on copyrighted content, which sits poorly with Wikipedia's strict attitude to fair use.
Now that we've made this decision, there's an opportunity for a group of interested editors to draft an essay with a view to presenting it to the community for ratification as a guideline."
I find that AIIMAGES does not represent the RFC's closing summary in several ways. A non-exhaustive list of differences includes:
It oversimplified the result in ways that reject AI-generated images more strongly than the closing statement supports (e.g., "subject to common sense and with a number of exceptions" has been omitted).
It omitted the statement about "using AI to generate simple images such as Venn diagrams", which the closing statement did not intend to reject.
It put the (well, a biased version of the) closing statement into a policy, despite the closing statement saying that (a) it needed to be presented to the community for ratification and (b) it should be proposed as a guideline, not a policy.
I therefore think that we should not take this text (or even the closing summary itself) as holy writ; they are neither of them text that was developed by the community or approved in an RFC (or even a talk-page discussion).
Positive purpose: The positive purpose that I see is that it's stupid to have editors argue – and I believe they will – that a very simple diagram is acceptable if I draw it with paper and pencil and acceptable if I draw it in graphics software, but that exactly the same thing magically becomes very, very bad if AI generates it. Editors have to WP:Use common sense, as the closing statement (but not AIIMAGES) said, and I believe that not arguing over who made a very simple diagram counts as using common sense. WhatamIdoing (talk) 04:01, 3 September 2025 (UTC)
Yeah, there needs to be room for the use of AI in generating simple diagrams that are strongly directed by a prompt (e.g. one I just tested is [1]). When the product of the AI is just simple shapes and text that, even if generated directly by a human, would not be copyrightable, that should be fine; the AI here is only reducing the time it takes to do the drawing. It's when we ask the AI model to do something that would likely be copyrightable if it were made by hand by a human, with or without the support of a non-AI computer program like Photoshop, that we have a problem. We don't want those types of images. Masem (t) 04:10, 3 September 2025 (UTC)
@Masem, I think that copyright is not quite the right standard. I think it should not only meet the WP:PERTINENCE standard ("Images should look like what they are meant to illustrate, regardless of whether they are authentic") but also do so at a glance. This means, e.g., that the article 9999 (number) doesn't get an AI-generated image showing 9999 dots, because it takes too long to count them up and see if the image actually shows the correct number of dots. But the article 9 (number) could have an AI-generated image showing nine dots, because anybody smart enough to edit Wikipedia should also be smart enough to count a single-digit number of dots in an image. WhatamIdoing (talk) 17:33, 3 September 2025 (UTC)
I don't see why having editors argue over whether something is simple and a diagram is a positive outcome. You have just posted here about the cost/benefit of discussing 144 examples; how many issues prompted this proposal? Are editors not using common sense in the way you assert? Is there a collection of simple diagrams we have been unable to use? (As an aside, I just asked Gemini to help illustrate pi, and it mislabelled the diameter and sort of mislabelled the circumference, so I really doubt the tool is well-aligned with the purported purpose.) CMD (talk) 04:32, 3 September 2025 (UTC)
If I asked a random person in the street to "illustrate pi" they might well get it wrong. Should we ban "people" as well? I'm not sure you've demonstrated anything fundamental other than that some current AI tools have limitations and that the quality of prompts is hugely influential in the quality of output. Clearly the fact that you reviewed the results, and that you'd have been the one taking responsibility for uploading any image and inserting it into an article, means we do actually have a functioning quality mechanism. If AI had generated an image that is equivalent to one you'd have drawn (but perhaps lacked the artistic ability or tooling to achieve), what really would have been the problem? Indeed, why on earth would one feel the need to mention it?
I think that RFC is a classic example of asking an outrageous binary question and voting on it, which very obviously only gets unqualified supports from knee-jerk reactionary voters. Anyone thinking more about it saw problems with the proposal. I don't see how that's a good starting point for generating P&G text. It was imo a waste of time. The community needs to explore how AI images can be used and where they can't, or where there are often problems. IMO the closing comment should have been "You asked the wrong question (for which there was very clearly no consensus for support). Please let's discuss specifics and not get into a vote for quite some time, until it appears clear there's a likely consensus on this specific or that specific." --Colin°Talk 08:39, 3 September 2025 (UTC)
The community has been exploring it for a few years at this point, resulting in the multiple RfCs on different aspects of the issue. CMD (talk) 09:10, 3 September 2025 (UTC)
A simple diagram of a happy face, reportedly from ChatGPT. Would it really be terrible if this were used in Happy face?
Why "simple": Because I don't think that the community wants to see either medium- or high-complexity images from AI. We don't trust it. We don't want to go to a lot of work to see whether the image is accurate. We want to be able to see at a glance, with no special skills, that it's correct: A smiley face is a smiley face.File:Bi koma bost laurden.png shows the concept of 2.5 out of 4.File:Decision tree example.png shows what a flow chart looks like.File:Orange County Commission, 2025–2027.svg shows five blue dots and one red.
I understand why the words are there; the issue is that the words are flexibly interpretable. Which thus informs the rest of my prior comment and its unanswered questions. CMD (talk) 02:57, 4 September 2025 (UTC)
Most ordinary words are flexibly interpretable, including words like wholly generated by AI. If I give it a prompt, is that "wholly generated"? I'd say so, but a wikilawyer could "flexibly interpret" the wording differently. I think that, in practice, most editors are capable of figuring out whether something is a "very simple diagram". WhatamIdoing (talk) 20:57, 4 September 2025 (UTC)
While some simple diagrams might merit inclusion in the wiki, simple diagrams as a class definitely don't fall into the category of "obvious exception". I also think there is no point in even having this conversation unless we are actually experiencing a problem with simple diagrams that ought to be included getting removed from the wiki simply because they are AI-generated. --LWG talk 22:04, 3 September 2025 (UTC)
I disagree. The existing text does not match the RFC results; it does not align with common sense; it does not match the views of the community. Why should we leave known-wrong text in this policy and wait for a dispute to arise?
I am curious if you could give an example of a simple diagram that you believe should be included in a Wikipedia article, but that you would reject if you learned it was AI-generated, solely on the grounds that it was AI-generated. WhatamIdoing (talk) 22:11, 3 September 2025 (UTC)
"Why should we leave known-wrong text in this policy and wait for a dispute to arise?" It's not known-wrong, it's believed by you to be wrong. I think it's fine. As far as I can tell it has been working for its intended purpose and hasn't resulted in negative effects so far.
"I am curious if you could give an example of a simple diagram that you believe should be included in a Wikipedia article, but that you would reject if you learned it was AI-generated, solely on the grounds that it was AI-generated." I cannot give an example of such a diagram. So far I have not seen any simple AI-generated diagram that I thought should be included in a Wikipedia article, but if I ever saw one, I would not advocate for its removal solely on the grounds that it was AI-generated. As a general rule, I don't advocate for the removal of content that I think is beneficial to the wiki, since that would be a waste of time. --LWG talk 04:04, 4 September 2025 (UTC)
"As far as I can tell it has been working for its intended purpose and hasn't resulted in negative effects so far." Text that says you can't do something results in ... nothing happening. WAID is complaining that this "nothing" is way too wide in scope and includes lots of images that could improve the project but aren't being created or added to articles. I'm not sure how you can determine that this is "working" when one of you says the "nothing" should be "this big" and the other says no, the "nothing" should be "that big". If the "nothing" is too big, you simply won't know about all the great images you aren't getting. You can't be confident it is "working" at all.
I think for such simple diagrams this prejudice against AI is just silly. Creators will simply not admit to using AI. You won't be able to tell then whether it is working or not. All you do is miss out on the images being correctly described and categorised on Commons. --Colin°Talk 07:28, 4 September 2025 (UTC)
When I say "it's working" I mean that beforeWP:AIIMAGES there were a large number of low quality and some outright disinformational AI-generated images on enwiki, and I wasn't sure where the community consensus lay on including them. Now, when I see an AI image that is subjectively bad or of unclear accuracy, I just remove it and link this policy. The potential drawback of this policy is that high-quality AI-generated images might be removed when they should be kept. But I have yet to see a single example of this happening, so I agree with CMD that this is a not actually a serious problem.
As for "the creator will simply not admit to using AI", I share that concern. In the RFC that led toWP:AIIMAGES I advocated against an outright ban in favor of mandating that AI images be clearly labeled as AI-generated. The community eventually landed on a stronger stance, but still fell short of an outright ban and allowed for common sense exceptions. So far that seems to be working well so I see no reason to complicate things. --LWGtalk14:29, 4 September 2025 (UTC)[reply]
But if the high-quality AI-generated images were uploaded to Commons by editors who didn't admit they were AI-generated, because they knew en:wp was weirdly prejudiced, then that would also explain why you aren't seeing examples. I don't think your "mandating that AI images be clearly labelled as AI-generated" could possibly work, especially for simple stuff. How could you possibly tell the above happy face was AI-generated unless someone says so?
And if people didn't bother contributing high-quality AI-generated images at all, because they knew en:wp was weirdly prejudiced, then that would also explain why you aren't seeing examples. It isn't just that high-quality images "might be removed". You seem to be saying you haven't actually seen any "common sense exceptions", which makes me concerned, as people clearly thought there should be some. --Colin°Talk 15:09, 4 September 2025 (UTC)
It's possible that negative attitudes toward AI on enwiki prior to the RFC somehow scared away all the people with good AI images without deterring the people with bad AI images. A simpler explanation is that AI is not currently an effective way to produce high-quality images in the vast majority of cases. If the technology improved, I might support using AI images in certain cases to produce illustrations like this to be included in an article like Simul-climbing with a caption like "diagram of a simul-climbing setup generated by GPT-9000". But that is hypothetical at this point, and the community consensus is currently to take a harder line.
It sounds like "it's working" means "I can revert images easily, in a way that minimizes the risk that I'll have to discuss this and find a consensus".
This means that if I were to add File:Carita feliz.png to Smiley#Ideogram history, with a caption that says smiley faces don't always show the outline of the head, your goal would be to remove it as quickly and simply as you can, because AI is bad and wrong and banned by policy. Questions like "Does this image improve the article?" or "Should we use some common sense?" or "Should this be discussed on the article's talk page?" are things that would require time and effort from you, so they are undesirable. The most important point is that getting rid of AI should be quick and easy for you. Right? WhatamIdoing (talk) 21:05, 4 September 2025 (UTC)
As a user from Commons who deals with a lot of AI-generated images: AI-generated diagrams (infographics, flowcharts, schematics, maps, graphs... etc) tend to be of very low quality. They often contain:
Inappropriate or gibberish text
Fabricated data, especially in graphs and maps
Missing, incorrectly placed, or duplicated labels
Arrows or other markers pointing to the wrong place, or in the wrong direction (especially in flowcharts)
Objects or text laid out in incoherent ways, like text disappearing off the edge of the image
Misplaced information in tabular layouts, as if the "author" had lost track of where they were in the table
Extremely simple images, like the smiley face above, are easy enough for a user to create themselves without the assistance of an image model. I see no compelling reason to make an exception on the basis of simplicity. Omphalographer (talk) 22:35, 17 October 2025 (UTC)
There is a lot of noise here and not a lot of compelling examples, so I will offer one up: File:Pinwheel scheduling.svg, as used to illustrate Pinwheel scheduling. I used AI to generate the grey shading of the stylized gear wheel in this image because AI drawing skills for this sort of thing are better than mine, despite their own imperfections (the original AI-generated drawing had noticeably irregular gear teeth that I cleaned up manually). Beyond that, the image is indeed a simple diagram (rather than something intended to be interpreted as a photorealistic image or artwork) and all of the actual information content of the image (the lettering, coloring, and sequence of letters on the gear wheel) was drawn by me afterwards. The AI part is more decorative, to convey the idea of a gear wheel, than actual diagrammatic information.
Is that the sort of diagram that you would like to block from being used?
I don't think there is any good reason for a simplicity exception. If an image is as simple as a smiley face or a circle, it takes trivial effort for human editors to create it, and indeed we almost certainly have better human-created alternatives. I think that adding an exception for "very simple diagrams" generated with AI will simply create confusion. Cases like David Eppstein's creation of Pinwheel scheduling.svg seem to be relatively reasonable exceptions, and I don't think any reasonable person would seek to remove them from the wiki (although I do wish that the generative AI usage was disclosed in the description). My worry with a simplicity exception is that it solves no problem and creates a needless loophole (i.e., well-meaning people will upload "very simple diagrams" with obvious problems, thinking it is allowed under policy when obviously it is not). The talk of "prejudice" against generative AI is very silly. Personally, the problem I have with generative AI is that opening the door to it risks flooding the wiki with useless crap. No-one is seeking to remove things like parliamentary diagrams or smiley faces from the wiki, because they work for their intended purpose. It would be trivial to simply use the diagram tool on Toolforge, or even just get out a pen and paper and draw a smiley face yourself, but in instances like this I think the only problem with these images is that writing an exception for them opens the door for more insidious generative AI images. As LWG says, I think there's no point in having the argument unless it solves a real problem, and I am very skeptical of having this argument if the aim is to open enWP up to more generative AI images. LivelyRatification (talk) 21:29, 21 October 2025 (UTC)
AI-generated drawing of a hair style
On the one hand, if we don't mention an exception for simple diagrams, then I worry – with IMO good cause – that we will get perfectly good, obviously correct images removed from articles.
But if we do mention it, then you worry – IMO also with good cause – that garbage will get put in articles, and then maybe someone will demand that the garbage be kept because some AI-generated images are allowed, and therefore this AI-generated image is required.
I wonder what you think of the image here. It's a hair style. Specifically, most women of a certain age, from a Western/English-speaking country, can tell you at a glance that it's a Long bob (haircut). There is no equivalent photo in any of the relevant articles. Do you think this image is "useless crap"? Does it have any "obvious problems"? If I put this in an article, would you instantly revert it? WhatamIdoing (talk) 21:39, 21 October 2025 (UTC)
I'll first preface this by saying I obviously don't think that the image you posted is a "very simple diagram", in that it would take some effort for a human to replicate it. (I am almost certain you agree with me on this.)
Supposing an instance where someone added a relatively unproblematic diagram of a generic human's face or hairstyle or whatever that was generated by AI, and there was no other free-use alternative available, I would remove it because I am very concerned about using depictions of humans generated by AI. Obviously there are some utilities for generative AI, it can create some helpful images like in the instance you've posted, but I think the risks of misleading readers and establishing a precedent for the use of AI-generated images to depict humans is quite troubling.
But probably what I would do in the exact instance you describe is not simply remove the image, but search for a free-use replacement, of which there are many (three separate files linked). Hell, I am a woman with a (somewhat messy) long bob haircut; if I could be bothered I'd get a friend to take a photo of the back of my head.
I don't think anyone here is seeking to argue over the inclusion of an AI-generated smiley face. I think that the stubborn editor who hates generative AI, even at the expense of valuable content for the wiki, does not meaningfully exist. Both in your hypothetical and for the actual policy change you propose, there are reasonable, human-created alternatives that could be used without any significant loss for the reader. I think that there are editors who are concerned about the misleading potential of generative AI (and the potential copyright violations) who might be seeking to remove images that are unhelpful for the wiki. I, personally, am fine with sacrificing some high-quality generative AI images in the name of accuracy and trust, but that is not the issue at hand here. I fear that this proposal seeks to solve a problem which does not exist. LivelyRatification (talk) 22:21, 21 October 2025 (UTC)
Does it matter how this image was made?
I have some concerns about photo-realistic images of fake people. Even if it looks typical (e.g., doesn't have a strange number of fingers), it might accidentally happen to be a Look-alike for a real person, which feels unfair to them.
I have many fewer concerns about sketch-like drawings of people. (Nobody's going to mistake a stick figure for a real person.)
The problem that I'd like to solve here is:
What we have written in the policy is a more anti-AI statement than the actual RFC closing statement. This is inaccurate, and inaccurate = bad.
We really shouldn't be concerned about the provenance of some simple images (simple diagrams, rather than photo-realistic images, or images – like the bob above – that are nearly photo-realistic). For example, this image illustrates the concept of 1%. Does it matter if it's made in Photoshop or by an AI tool? I don't think so, and yet we do have editors who read WP:AIIMAGES: "Most images wholly generated by AI should not be used in mainspace...other categories of exceptions may arise through further community discussion. Community members have largely rejected making exceptions merely because an image lacks obvious errors" and say "Well, we don't care if it's accurate. AI images shouldn't be used in articles except for articles like Artificial intelligence visual art, there hasn't been an RFC to overturn this, so if it's AI, it's outta here!" We can't really stop mindless rule followers from following rules; we have to give them more accurate rules.
My fear here is basically scope creep. The RfC closing does say "we need more thought and community input about AI enhancement of an image originally generated by a human, and about using AI to generate simple images such as Venn diagrams". I would not be opposed to allowing Venn diagrams, circles, simple drawings etc. that are generated by AI or with the assistance of generative AI, or more likely trouting any editors who seek to remove a drawing of two overlapping circles because it was generated by AI. But I do object to using near-photorealistic images to depict concepts or images of generic people. I don't object to telling off editors who seek to remove a smiley face made using generative AI; what I do object to is adding exceptions to the "no generative AI images on Wikipedia" policy. And ultimately, if the diagrams are very simple, then it would be trivial effort for an alternative image to be created and used instead. Perhaps there is no reason why generative AI-created images shouldn't be used, but is there a particularly compelling argument as to why they should be used? It might be unfair in a sense, but ultimately it is an unfairness I am fine with in the name of preventing AI images from proliferating on the wiki. LivelyRatification (talk) 09:23, 22 October 2025 (UTC)
If we actually have a "no generative AI images on Wikipedia" policy, then why would you be telling off editors who seek to remove a smiley face made using generative AI? Do you normally tell editors off for enforcing policies? WhatamIdoing (talk) 19:21, 22 October 2025 (UTC)
The closing comment in the RfC stated the newly created policy would be subject to common sense and with a number of exceptions. I think exceptions for "very simple diagrams" and the like would fall under "common sense". Dronebogus (talk) 12:19, 26 October 2025 (UTC)
You may think that the closure states that it would be "subject to common sense and with a number of exceptions", but AIIMAGES says nothing like that. Instead it states that "Obvious exceptions include articles about AI and articles about notable AI-generated images; other categories of exceptions may arise through further community discussion." There are no exceptions listed there for simple images and no exception for common sense. That is the whole reason we are having this discussion: because the text in AIIMAGES does not match the sense of the RFC that created it. —David Eppstein (talk) 19:35, 26 October 2025 (UTC)
I think we do have a clear consensus that the policy text does not conform to the RFC summary nor thus reflect any community consensus. At the very least we should be workshopping how to include "subject to common sense and with a number of exceptions" rather than stonewalling that idea any further.
I note someone above said "[t]he talk of 'prejudice' against generative AI is very silly". Be cautious about labelling someone's comments as "silly" when they patently are not. Prejudice rests on two things. The first is "an adverse judgment or opinion formed beforehand or without knowledge of the facts." Banning an "AI generated image" is, by that definition, absolutely a prejudice, as one has not considered (knowledge) the image itself, and whether it is useful for the article and accurate in what it portrays. The second aspect of most definitions of prejudice is that it is unreasonable. Now that's within the bounds of editor opinion, and I don't think it is at all civil to say another editor is being "silly" for having the opinion that banning images based on the tool used to create them is "unreasonable".
Let me give you an example. Wikipedia is a free content encyclopaedia. That means our content has to be free to reuse, and part of that means media types that have a patented component are not permitted to be uploaded to Commons. We miss out on the latest technology that Apple, Amazon and Disney use to deliver UHD HDR images to your TV, for example. But I knew some Commoners who went further and believed the media should only be created by free content software (running on Linux, of course). That using commercial software like those from Adobe was harmful. They actually wanted us to restrict the media to only that generated by free software. In their eyes, this would force image makers and photographers to use this free software, and win-win, it would become better as more people used it, and superior to the commercial stuff, which was sinful. I thought they were "silly". But today, I see people arguing about the "tool" used to create an image, and am just shaking my head again.
People saying it is trivial to use the diagram tool on Toolforge to draw a smiley face, or who think their pencil drawing would be accepted in an article, remind me of those same open-source fanatics who thought it was "trivial" to master the Linux command line to process your camera's RAW files, or who with a straight face suggested normal humans might master something with the unlikely name of GIMP. This was in the days when you downloaded Linux and it didn't come with WiFi drivers because they were patented, and you had to FTP them separately from some server in Iceland.
I'm also seeing shades of when "proper" photographers with DSLRs looked down on the "crap" produced and uploaded by the untalented masses with their mobile phones. The availability of a tool to generate something that appears "good enough" encourages a new wave of contributors who lack the appropriate self-filtering and critical eye that someone who previously laboured to produce an image would have learned from experience. These are people problems, not tool problems. --Colin°Talk 09:08, 29 October 2025 (UTC)
You know what's easier than creating an image in software, or even creating an image with an AI tool? Using an image that someone else already created and already uploaded. The demand isn't just that editors create images without using AI; the demand is also that all editors refuse to use existing images that were created and uploaded by someone else and have been tagged as AI-generated, even for something as simple as a smiley face. WhatamIdoing (talk) 16:17, 29 October 2025 (UTC)
This policy section (WP:IMGSIZE) contains two interesting statements:
The lead image in an infobox should not impinge on the default size of the infobox.
Therefore, it should be no wider than upright=0.9 (equivalent to 228px).
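For reference, the difference between preference-relative and fixed sizing looks roughly like this in wikitext (a sketch only; Example.jpg is a placeholder file name):
<!-- Relative sizing: scales with the reader's base thumbnail size -->
[[File:Example.jpg|thumb|upright=0.9|Caption]]
<!-- Fixed sizing: always 228px wide, regardless of the reader's preference -->
[[File:Example.jpg|thumb|228px|Caption]]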
The first statement, the idea of correlating infobox size and image size, is based on this December 2018 edit, whose edit summary said:
tweak wording re infobox/lead image. Per intent of Moxy, this puts all info re lead image/infobox in one place in a separate section where it can be easily identified
As far as I can tell, this is related to Wikipedia talk:Image use policy/Archive 15#Policy dictating size? from February 2019. It's curious that it came in response to a couple of editors saying the policy should not tell us what size things should be – that's what the MOS is for – but that was still simply the status quo since this July 2011 edit.
The second statement, the mention of 228px, was added in this April 2025 edit, whose edit summary said:
This is not intended as a permanent change; discussion is still ongoing at talk and we should continue working out a new consensus there. The wording can be adjusted after this happens.
The January 2024 discussion at the Village pump proposals page included numerous mentions of infoboxes and visual examples of the Earth article infobox. The consensus found there was in the exact same context as this policy section.
This policy page has 845 watchers, of which 30 visited recently. The village pump proposals page has 3,736 watchers, of which 294 visited recently. That is a difference of almost 5x in watchers and almost 10x in recent visitors.
It's not in the spirit of WP:Consensus for the much smaller forum to so casually override a broader community consensus decision.
It wasn't a discussion that happened so long ago that it would be inherently obsolete. Even the option of moving further to 300px, for which no consensus was found there, was supported by more people than in the last discussion here (9 support, 8 oppose).
So, we need to match this policy section's numbers with the idea that 250px does not impinge on the default size of the infobox. Is that upright=1? --Joy (talk) 23:00, 17 October 2025 (UTC)
The default size of an infobox is 22em, unless the specific infobox template forcibly overrides that. At 22em, there is sufficient room for a 228px-wide image, and anything larger will cause the box to stretch, which we don't want to happen. Note that images in infoboxes are not considered thumbnails, unlike images generally placed inline within the text. Masem (t) 12:35, 18 October 2025 (UTC)
I don't think anyone actually cares much about a 10% difference in infobox widths (250px is less than 10% wider than 228px). Most people won't even notice it. WhatamIdoing (talk) 18:14, 18 October 2025 (UTC)
What does this number 22em really mean, and what is the rationale for it? Where is this documented? Why is it more important to think about stretching than to address the consensus of editors about what sort of images they told us they want to see there?
With regard to the idea of images in infoboxes not being "considered" thumbnails, the discussion that produced the aforementioned consensus involved numerous mentions of infoboxes, as well as looking at examples of a top infobox image. It's not clear who you would be referring to in this consideration. --Joy (talk) 08:30, 19 October 2025 (UTC)
"em" is the pixel width of the "m" character at the default font size; it's not documented well as you have to dig into the template pages, the Lau module codes, and other discussions to find this (I had to to answer this). I can't tell you when or where it was decided for that but that's been the way for well over a decade.
The reason infobox images are not thumbnails is that their size is fixed, whereas thumbnail sizes are set based on the user's size preference. Masem (t) 12:36, 19 October 2025 (UTC)
I wasn't asking about the em mechanism, but rather about the significance of 22em – that is, why it would be substantially better than e.g. 21em or 23em or whatever. If it's not properly documented at all, I can't imagine that we have to treat this magic number as a sacred cow now. It seems more likely that someone tried it, it seemed fine, and it stuck.
They're thumbnails in the sense that they're a smaller picture that allows for a click to open up a larger picture – at least that's been the organic meaning of the thumb parameter on images here for as long as I can remember. How is this difference between fixed and variable relevant to implementing this change? Do you expect that so much empty space will be left around the other elements of infoboxes that readers will complain, or something like that? --Joy (talk) 13:03, 19 October 2025 (UTC)
That 22em is to make sure the infobox is shown at a comfortable width for nearly all non-mobile interface screens. Exactly how it was selected is something I can't easily find in the talk page archives, but I know that's been the standard for years. And specialized infoboxes can specify larger sizes (like {{Infobox NFL game}}, which uses a 27em width rather than 22em). But in relation to your last question, I think yes, the size was selected to prevent too much white space in the fields below the image. I can see from past discussions on the talk page of MOS:INFOBOX that it may have been 25em in the past, but editors broadly settled on 22em as the appropriate width for all infoboxes, since it was essential that infobox appearance be standardized. Masem (t) 13:13, 19 October 2025 (UTC)
Okay, so that was a dead end, because people brought up reasons to interpret the image size discussion as not specifically talking about infoboxes and infobox sizes. --Joy (talk) 10:23, 21 October 2025 (UTC)
If the policy is not describing best practice, then it's not a good policy and should be changed. Everyone consistently ignoring policy is a pointless situation. --Joy (talk) 12:43, 21 October 2025 (UTC)
It's not that infobox images aren't "considered" thumbnails; they actually aren't thumbnails. Here's three versions, all set to the same size:
So, going back to the original contention – this policy section ends up being too strict to deal well with the practical reality of our articles. I came here after observing discussions about max size at {{Infobox Australian place}}, and it struck me as weird that we make a lot of them smaller trying to observe a policy I never saw enforced anywhere else.
We have a lot of articles where portrait images are most appropriate on top of the infobox, such as biographies, where this narrower baseline (228px) seems to make sense; otherwise we just risk having either too much height or a lot of useless information on the sides of people's faces. If there's naturally less information on the right and left sides of these pictures, we shouldn't encourage having them wider. It makes sense in context, so it's fine.
But we also have a lot of articles where landscape images are on top of the infobox, such as places, where this narrower baseline naturally restricts the amount of information we convey to the reader. Sure, sometimes there will be useless panorama images where you see something pointless. And sure, a lot of editors seem to have a tendency to stuff an entire useless gallery in there. But sometimes there are classical landscape images there that contain reasonable information on the sides, and they shouldn't be crammed into a narrow space. Editors habitually extend them to the entire width of the infobox, or the pictures end up smaller than they could and should be. For example:
Those were just some megacities that immediately popped into my head, but we don't have to look at these possible outliers - it happens elsewhere, too:
I gave up after a few minutes of searching and constantly finding these examples. I'd have thought waterfalls and rivers would be a nice example where you can easily replace a landscape image with portrait, but even as I found examples of that, I also got the impression that it wouldn't do a lot of them justice, let alone others.
The way this policy is written, it sounds like we want to encourage editors to use portrait images. That seems like a fairly arbitrary, uninspired choice that doesn't actually have organic consensus. --Joy (talk) 11:30, 21 October 2025 (UTC)
As I mentioned, you *can* change the width at the specific subtemplate that uses Infobox, like the NFL one that I pointed out. And in fact, per {{Infobox settlement/styles.css}}, it's already set to be 23em wide instead of 22em. You should probably check with the appropriate wikiproject and suggest making that larger if you want to include more landscape-oriented photos in the infobox. Masem (t) 12:13, 21 October 2025 (UTC)
I will also add that having a montage of images in the infobox, like with most of those city pictures, defeats the point of an infobox – it's supposed to rapidly summarize information about a topic. Including one or two relevant images to best illustrate it is helpful, but making it a gallery that takes a whole page to scroll past to see the info is not appropriate. Masem (t) 12:19, 21 October 2025 (UTC)
As I said, that doesn't change the fact that single pictures in these cases need some width to not be tiny. IOW, even if we magically decided to reduce all those infoboxes to a comparatively tiny number of pictures, odds are we would still run into a preponderance of landscape pictures, not portraits. --Joy (talk) 12:45, 21 October 2025 (UTC)
Maybe to clarify – it's not that these pictures then 'impinge on the default size of the infobox'; rather, editors are by and large just fine letting those picture width choices define the width of the infobox, rather than making sure some space is left on the sides. --Joy (talk) 13:17, 21 October 2025 (UTC)
What I am saying is that it is possible to change the CSS to make the infobox wider to support landscape photos, but that change for a specific infobox template should be discussed at the appropriate WikiProject page or at a central location. The width is overridable at the CSS level, so you just need the consensus to change it for that template, and for something like infobox settlement it would make sense. However, we would not change it for all infoboxes, as most that use images are using square or portrait-based aspect ratios, so narrower is better. Masem (t) 13:44, 21 October 2025 (UTC)
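For concreteness, a minimal sketch of what such a per-template override can look like, using the |bodystyle= parameter of the base {{Infobox}} (the 23em value mirrors the settlement example above; the other parameters are placeholders, and newer templates typically do the same thing in an attached TemplateStyles sheet instead):
{{Infobox
| bodystyle = width: 23em; <!-- widens this template beyond the 22em default -->
| title     = Example
| label1    = Field
| data1     = Value
}}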
That is all fine and well, but this policy wording still remains a problem even if we do that. If the infobox is always meant to be the thing that decides the width rather than the picture, and if the folks over there at infobox X don't want to widen the infobox any more than the width of the top images (which is basically the status quo), then their infobox image usage is still not compliant with this policy. --Joy (talk) 16:15, 21 October 2025 (UTC)
That I can fix, and adjust to be clear that that size is for the default infobox width of 22em, but that some infobox templates allow for larger sizes. That way, it should be clear that with an infobox that allows for larger widths you are not constrained to 228px. Masem (t) 16:28, 21 October 2025 (UTC)
This is better, thanks, but I still think we should put the 'impinge' claim in context, to avoid the appearance of "this is the rule, but also it can not be a rule." :) --Joy (talk) 16:48, 21 October 2025 (UTC)
We are using "should" so that it is not a hard rule (in contrast to the requirements for non-free images), but I'm not sure how we soften that up further. Masem (t) 17:05, 21 October 2025 (UTC)
That meaning of "should" is not clear to some editors. We have some editors who think "should" is just a polite way of saying "must".
It might be helpful to remind editors that lead images do not need to be inside the infobox, and that if they need a larger or wider image, they can put it above the infobox. WhatamIdoing (talk) 17:38, 21 October 2025 (UTC)
I've seen it before. Usually, I "correct" it so the image is inside the infobox. But if it were a particularly wide image, I think I'd leave it alone. WhatamIdoing (talk) 21:20, 21 October 2025 (UTC)
Looking at the list of these popular infoboxes, there's a big contingent of various biographies, where rules for portrait will make sense, and a big contingent of various places, where rules for landscape will make sense. Anything with a mapframe or a location map in the infobox is likely in the latter group as well; that's another ballpark of 1.2M–2M (sometimes it's only one, sometimes it's both, sometimes it's neither but the top image is an image map; it's hard to gauge). With about 5M infoboxes total using InfoboxImage, that's already enough to conclude that at least a significant minority is not addressed by this policy wording. --Joy (talk) 13:08, 22 October 2025 (UTC)
I think the issue is that if you want to have more space for landscape images, they must still meet the confines of the infobox, but there is the ability to establish a larger width at the template level for that. However, it is still the case that the bulk of infobox images are best served with portrait-type aspect ratios, for which 22em will be the default, and thus going larger than 228px is not helpful. Masem (t) 13:28, 22 October 2025 (UTC)
I don't see a rationale in this statement. Where do we see that meeting these confines is the best practice for landscape images? What is this claim based on? --Joy (talk) 13:34, 22 October 2025 (UTC)
Again, this just isn't the case in practice. Editors have habitually set infobox parameters to e.g. size 300px or upright 1.23, and the infobox templates silently accommodated the extra pixels, and that was it. --Joy (talk) 14:10, 22 October 2025 (UTC)
The question is, how often are people using images larger than the confines of the infobox? If there are only a few instances, that's likely not practice, and it may be a problem to actually do that. Masem (t) 15:02, 22 October 2025 (UTC)
Look at some of the examples I posted above. I wasn't looking for them for more than ten minutes; it's not uncommon. --Joy (talk) 17:35, 22 October 2025 (UTC)
What we do want are consistent infobox widths among the same set of articles, so if editors are frequently using larger widths than the infobox has set as default, that default should be changed. It is okay for one or two infoboxes of a given type to set widths, but if many of them are using wider images, it's better to expand the default size to accommodate that, rather than force a size. This is both for standardization and also to help with the different ways readers might have set up their browsers. In terms of language here, we are still using "should", so there is nothing preventing one from specifying an image larger than the infobox (it will scale). Masem (t) 18:33, 22 October 2025 (UTC)
Who is "we", and why do "we" believe that consistent infobox width is a big deal? I certainly don't care about this. I'd like the infobox width to not look silly (e.g., a very wide image resulting in a huge amount of horizontal whitespace in the columns underneath the image), but I don't care whether the infobox is a different width when clicking from article A to B to C.WhatamIdoing (talk)18:45, 22 October 2025 (UTC)[reply]
It seems to me that infobox image width should be determined by the following, applied in order:
Encyclopedia-wide thumbnail size, as determined by community-wide consensus, then
Scaled by a fixed factor for all infoboxes, determined by community-wide consensus, then
Optionally scaled by a fixed factor for a given infobox template, determined by relevant consensus (depending on the template, this may be project-wide or Wikiproject-level), then
Optionally scaled by a fixed factor for a specific article, determined by relevant consensus (article talk or Wikiproject), then
Scaled by the user's thumbnail preference.
For (purely illustrative) example: the 250px thumbnail size, scaled 0.9x for all infoboxes, scaled 1.2x for all infoboxes of that type, no article-specific scaling, scaled 300/250=1.2x for a user with 300px preference, would result in 250*0.9*1.2*1.2 = 324px width.
This would mean that only the already-defined encyclopedia-wide thumbnail size is a specific pixel value; everything else is a scaling factor. Step 4 would be intended to be very rare, used only for edge cases (such as a very vertical image), and maximum scaling factors could be placed on steps 3 and 4 to prevent ridiculous widths. The result would be that a given reader would see (almost all) infoboxes of the same type at the same width, and all default-scaled infoboxes would be the same width. Pi.1415926535 (talk) 19:34, 22 October 2025 (UTC)
Also, maybe infobox image size scaling should be a user-settable preference separate from the existing thumbnail size [scaling]. I suspect that logged-in editors care a lot more about infobox width than most logged-out readers do. Pi.1415926535 (talk) 19:39, 22 October 2025 (UTC)
We would rather editors avoid pixel-perfect layouts, letting the page stylesheets handle image sizes. That's just good HTML practice, and avoids catering to a specific layout that may look good on one screen or browser but terrible on another. So using relative widths and upright settings is what we want to encourage editors to use. So when it comes to the infobox, we shouldn't be trying to mess about with the image size, and if there's a class of infobox that normally uses more landscape images, the infobox class should be expanded in the template stylesheet rather than trying to force each template instance to a random size. Masem (t) 20:44, 22 October 2025 (UTC)
I think that's reasonable for the general case, but editors need to be able to accommodate the special case, too.
Imagine an infobox for which 50% of the images are 16:9 aspect ratio landscapes, and 50% of them are 2:3 portraits. Should we make the infobox wide to accommodate the landscape ones, in which case the portraits will be bigger than we want, or the other way around, and now the landscapes are too small?
If there is a non-trivial number of cases where a wider image is desired, it is better to make a new template (or possibly add an argument to the existing one?) to enable a wider template. Like, I could see such a flag on the settlement template, as skylines of major cities will generally be a landscape image, while for small towns a photo of a major building or landmark will likely be portrait. Specifying size beyond that should only be for exceptional cases, and not something haphazardly used across numerous cases. Masem (t) 21:28, 22 October 2025 (UTC)
Maybe the best thing to do would be to have a general argument for all infoboxes, specifying |image_size=landscape or |image_size=portrait. That would be easier for a lot of editors than guessing about a reasonable pixel-based width. WhatamIdoing (talk) 21:48, 22 October 2025 (UTC)
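To illustrate the idea, usage under that proposal might look like this (the |image_size= keyword values are hypothetical, not an existing parameter; the template would map each keyword to a preset width):
{{Infobox settlement
| name          = Example City
| image_skyline = Example skyline.jpg
| image_size    = landscape <!-- hypothetical keyword: template applies its wider preset -->
}}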
One problem that I see is that many of those city articles are using {{multiple image}}, which as best I can tell requires a width parameter to be set. It would be far better, if a montage of images is being used, for them to be combined into a single image, which would be able to have dynamic sizing. Masem (t) 22:29, 22 October 2025 (UTC)
What you're saying sounds logical to me on the face of it, but I've seen so, so many examples of #4 that this all seems more like wishful thinking.
Even if we were to collectively decide that the new best current practice is to stop doing that, it would require a huge cleanup effort to track down all the various size=XYZ parameters that people have been sprinkling in over the last few decades or so. We should think carefully before we commit a million volunteer hours to cleaning this all up, even if it seems very messy.
The option of trying to care much less about consistency in this regard is absolutely reasonable, because we have little proof that the bulk of the existing inconsistencies bother readers.
Based on what's been said recently, I've updated the language so that it should be clear that allowing a larger image to expand the infobox is fine, but just not preferred. Getting to a place where a forced size is not required will take far more effort, and this at least reflects what has in practice been done here. Masem (t) 22:37, 22 October 2025 (UTC)
WRT [2], I like that you removed the overly broad 'impinge' claims. Thank you.
I still don't think we have much reason to believe there is broad consensus that infobox width consistency between pages is necessary. When articles about world cities differ in infobox width, and nobody even bothers to notice, despite huge readership, we can't really properly claim it's a known issue, let alone that the policy should advocate something about it. --Joy (talk) 09:16, 26 October 2025 (UTC)
The issue with the city ones is that if they are using the multiple image template, they can't get away from specifying a px width for that. But as I have also said, I think having that many images as the image part of the infobox defeats the purpose of the infobox, which is supposed to give all the relevant info at a glance; having to scroll down past a number of images defeats that. But that goes beyond this discussion. Masem (t) 12:16, 26 October 2025 (UTC)
I think that the "consistency between pages" line could be removed. Unlike (apparently) Joy, I have noticed that some infoboxes are different widths, but he is correct that nobody complains about it.WhatamIdoing (talk)20:21, 26 October 2025 (UTC)[reply]
There was a pump proposal back in July on the question of updating the AI section of this policy to also cover AI upscaling, where AI software is used to add detail and resolution to low-quality historical images. It ran until September 5, was never formally closed, and was automatically archived without action.
As an uninvolved admin, I would readily conclude that the discussion decidedly had support for blocking the use of AI upscaling in such cases (outside of demonstrating it in articles dealing with the topic of AI upscaling), so it should likely be added. Masem (t) 22:56, 31 October 2025 (UTC)
What is policy regarding a user-created illustration of a person? I'm referring to Talk:Emi Koyama; an editor uploaded an illustration they made of this person based on video of them; they seem to be part of a project (in Spanish / Portuguese) to illustrate articles about women/trans/nonbinary persons that lack photos. I'm concerned about demonstrating that it is an accurate depiction of the person. 331dot (talk) 23:49, 22 November 2025 (UTC)
This sounds like a weird violation of the spirit of WP:V and WP:BLP. If we know we're in the 21st century, and we know we have easy access to photography, then that is the standard for reproducing the appearance of a person. Employing other methods just makes things more complex for the readers for no obvious benefit. --Joy (talk) 12:00, 27 November 2025 (UTC)
Note also that this policy's section on "Diagrams and other images" already includes:
Additionally, user-made images may be wholly original. In such cases, the user-made image should be primarily serving an educational purpose, and not as a means of self-promotion of the user's artistic skills.
An illustration of a person is an original work, but not really wholly original given the moral rights of the subject; but even disregarding that matter for a moment, it's easy enough to say that the illustration is inherently a promotion of artistic skills. --Joy (talk) 12:03, 27 November 2025 (UTC)
The intent here seems to be to provide images of women/LGBTQ persons who don't have copyright-compliant images available; but the way to do that is to solicit them (especially in the case of Emi Koyama, who has her contact information on her website), not create their own. 331dot (talk) 12:52, 27 November 2025 (UTC)
I asked the user who uploaded the illustration of Emi Koyama to come here and/or ask their leaders to, but they aren't interested in pressing their case. I've also interacted with the second user, who added their illustration to the article mentioned by Nikkimaria. 331dot (talk) 16:07, 27 November 2025 (UTC)
Per RfC: adding ban on AI-redrawn "enhanced" images
I propose adding the following to the WP:AIGI section:
Images whose details have been substantially regenerated by generative AI (e.g., in order to create a colorized or higher-resolution image) should not be used, subject to the same exceptions as apply to images fully generated by AI. Generally, the original version of the image should be used instead.
Additionally, since this case is subject to the consensus mentioned above, the following change should be made to the existing text:
Marginal cases (such as major AI enhancement or if an AI-generated image of a living person is itself notable) are subject to case-by-case consensus.
I do support it, although I have a small reservation about how the proposed addition suggests that the original version should "[g]enerally" be used, but doesn't specify when it might not. I'm guessing that would be when other, non-AI restorations are available? ChaoticEnby (talk · contribs) 16:18, 4 December 2025 (UTC)
When other restorations are available is one scenario where we might not use the original. Another is if the AI restoration is itself notable, as we'd use either that version or both versions, depending on context (something comparable to Ecce Homo (García Martínez and Giménez) comes to mind for the latter). Or it could be that neither version is beneficial in the circumstances, and it's also possible that the original may not be available under an appropriate license. I'm undecided whether I think it would be beneficial to list some or all of these, but if they are listed it should be clear they are examples and not an exhaustive list, as other cases will exist that need individual consideration. Thryduulf (talk) 17:36, 4 December 2025 (UTC)
What I meant, and I hope this is clear, is that the exceptions to the rule on this category of image would be parallel to those for wholly AI-generated images. So, indeed, if the manipulated image is the subject of the article, or if the article is about the process itself, or if the image is specifically newsworthy, then it would be used – exceptions to the general case. D. Benjamin Miller (talk) 19:29, 4 December 2025 (UTC)
Not opposed if a clarification is needed, but AI colouring already falls under the general MOS:COLORIZED. Upscaling would fall spiritually under the same preference for original photos. CMD (talk) 01:40, 4 December 2025 (UTC)
The change in order is certainly no issue for me. I do think, however, that "regenerated" is preferable to "modified", as I think this may make it a bit clearer what we are targeting (though I understand we mean the same thing). The reasons I think the term "regenerated" is a bit clearer are:
It is parallel to the use of the term "wholly generated" in WP:AIGI.
It helps refer specifically to what we call generative AI, as opposed to the broader use of "AI" as a buzzword. For example, very light denoising algorithms are now sometimes being called "AI," as are the really heavy regenerators we want to avoid here. Hence "substantially regenerated." Effectively, if any part of the image appears to be replaced using generative AI, that would qualify. (I guess "substantially modified" could convey the same meaning, but that feels less specific to me.)
I'm worried that "regenerated" will lead to people trying to wikilawyer their way around this, saying "no, it's not 'regenerated', it's just the same image but better!". "Substantially modified by generative AI" should be sufficient to prevent this, while also not banning basic image editing tools, many of which are now marketed as "AI".Toadspike[Talk]22:48, 4 December 2025 (UTC)[reply]
I support the intent of this, but like Toadspike I have issues with "regenerated". I understand and half agree with D. Benjamin Miller, but not completely enough to fully support this as is (yet?). I think part of the issue might be that colourising an image that never had colour in the first place isn't "regenerating" anything, and not all upscaling generates new information. I've seen it argued (not on Wikipedia) that information was "extrapolated" rather than "generated" (the term "regenerated" wasn't used in that discussion); whether that is a meaningful difference was a key part of the disagreement. I'm wondering whether it would be better to split this into two parts (whether bullets or sentences I'm not sure), one dealing with things that unarguably involve the (re)generation of elements of an image and one dealing with modifications such as colourisation. In the majority of cases we don't want either, but it might help make the language simpler and it might make discussion about exceptions easier. Thryduulf (talk) 17:49, 4 December 2025 (UTC)
In a certain sense, all forms of upscaling involve the creation of new information by some means (whether by deterministic extrapolation or by use of a generative AI model which draws new details from whole cloth).
Now, even putting aside AI stuff for the moment, most upscaling is probably not desirable for other reasons (in part technical). Likewise, colorization is already almost universally avoided in WP articles (because it is speculative in nature, even if done manually). But I had never really seen people apply old-style upscalers to pictures here, whereas I have seen people feed images through AI enhancement software (which is what led me to make the original RfC).
The examples I gave in the original RfC probably illustrate what I'm talking about better than a written description can. There is a certain type of generative AI "upscaler" or "restorer" which produces images with, at least at this point, some pretty noticeable visual characteristics (look at the textures in the examples I give there). That is the issue I saw on various articles from a handful of misguided users.
In the future, it is possible that more convincing fakes might be generated, which look more like real photos (though I doubt it, because these generators are trained on present-day images, which often have obvious differences from historical photographs). So when it comes to trying to describe the general case, it gets a bit more difficult.
Here is what I think is the crux of the matter. If a "restorer" is replacing the genuine details of an image with a version that is noticeably different, then it is replacing genuine detail with faked detail (now usually obvious, though maybe less so). These details are supposed to correspond with those in the original image, which is why I used the word "regenerated." But you're right that with something like colorization that this is less appropriate.
So perhaps it is clearer to say where details have been "regenerated or added" by AI. This would cover cases where the detail has perhaps been lost (upscaling) and where it was never captured (colorization). Thoughts? D. Benjamin Miller (talk) 19:50, 4 December 2025 (UTC)
I support the intent of this, but I worry that as AI-"enhanced" features become commonplace, it is going to become difficult both for uploaders to recognize whether they have used AI to add details to their photos, and for photographers to avoid taking photographs that are immediately modified by AI to add details. We do not want to outlaw photos taken on modern devices, but that could be the effect of a ban that is not clear about the distinction between modification of older images and the unavoidable use of AI features in the default image-processing pipeline of newly created images. —David Eppstein (talk) 22:58, 4 December 2025 (UTC)
A photo taken on a device automatically using AI would mean the photograph so modified would nonetheless be the 'original image' per se, so we can probably have some wording noting that the upscaling of most concern is secondary modification. This has already been an issue on Commons; the case I remember is where photographs of some South Korean celebrities were much sharper around the face than the rest of the image. I suppose in an ideal world exif data would indicate which photos were taken with phone models that apply automatic upscaling or similar. CMD (talk) 03:37, 5 December 2025 (UTC)
Support, sharing the concerns over "regenerated". One common meaning of that word is the replacement of something that has been removed or lost (e.g. erasing a person from a photo and redrawing the background to fill the space), and that's quite rare in the AI editing that we see. It also suggests that MOS:IMAGES's "usually appropriate to de-speckle or remove scratches from images" may not apply when performed by AI, as it can be a regenerative process.
I think "detail added" is clearer, and gets to the nub of the issue. An AI removing a large scratch from a photo of empty blue sky seems tolerable, as there was likely nothing there to begin with, but it removing the same damage from a crowd scene and adding new people to fill the gap would be creating detail that didn't previously exist. And the most common use case for AI upscaling on Wikipedia - blowing up a low quality photo and rendering higher resolution facial features - is unambiguously adding new details.Belbury (talk)10:37, 9 December 2025 (UTC)[reply]