Wikipedia:Village pump (proposals)

From Wikipedia, the free encyclopedia
"WP:PROPOSE" redirects here. For proposing article deletion, seeWikipedia:Proposed deletion andWikipedia:Deletion requests.
Discussion page for new proposals

The proposals section of the village pump is used to offer specific changes for discussion. Before submitting:

Discussions are automatically archived after remaining inactive for 7 days.

Centralized discussion
For a listing of ongoing discussions, see the dashboard.

RFC: What should be done about unknown birth/death dates


With the implementation of Module:Person date, all |birth_date= and |death_date= values in infoboxes (except for deities and fictional characters) are now parsed, and the age is automatically calculated when possible.
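For readers unfamiliar with the module, the parse-then-calculate behaviour described here can be sketched roughly as follows. This is an illustrative Python sketch only — the real Module:Person date is written in Lua for Scribunto, and its actual parsing rules are far more elaborate; the function names here are mine, not the module's.

```python
from datetime import date
import re

MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]

def try_parse(value):
    """Parse a 'DD Month YYYY' infobox date; return None when it can't be parsed."""
    m = re.match(r"^(\d{1,2}) ([A-Za-z]+) (\d{4})$", value.strip())
    if not m or m.group(2) not in MONTHS:
        return None  # values like 'Unknown', '19??' or 'c. 1910' fall through
    try:
        return date(int(m.group(3)), MONTHS.index(m.group(2)) + 1, int(m.group(1)))
    except ValueError:
        return None  # e.g. '31 February 1900'

def age(birth, death):
    """Completed years between two dates."""
    return death.year - birth.year - ((death.month, death.day) < (birth.month, birth.day))
```

When `try_parse` returns None for both dates, no age can be calculated — which is exactly the set of values the tracking category discussed below is collecting.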

With this implementation, it was found that there are a large number of cases (currently 4537) where the birth/death date is set to Unk, Unknown, ? or ##?? (such as 19??). Full disclosure: Module:Person date was created by me, and because of an issue early on I added a number of instances of |death_date=Unknown in articles a few weeks ago. (I had not yet been informed about the MOS I link to below; that's my bad.)

Per MOS:INFOBOX: If a parameter is not applicable, or no information is available, it should be left blank, and the template coded to selectively hide information or provide default values for parameters that are not defined.

There is also the essay WP:UNKNOWN which says, in short, Don't say something is unknown just because you don't know.

So the question is what to do about these values? Currently Module:Person date is simply tracking them and placing those pages in Category:Pages with invalid birth or death dates (4,537). It has been growing by the minute since I added that tracking. Now I am NOT proposing that this sort of tracking be done for every parameter in every infobox... There are plenty of cases of |some_param=Unknown, but with this module we have a unique opportunity to address one of them.

I tried to find a good case where the |death_date= truly is Unknown, but all the cases I could think of use |disappeared_date= instead. (See Amelia Earhart for example.)

The way I see it, there are a few options:
  • Option A - Essentially do nothing. Keep the tracking category but make no actual changes to the pages.
  • Option B - Implement a {{preview warning}} that would say This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. (Obviously open to suggestions on better language.)
  • Option C - Take B one step further and actually suppress the value. Display a preview warning that says This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. It will not be displayed when saved. then display nothing on the page. In other words, treat |death_date=Unknown the same as |death_date=. (Again open to suggestions on better language for the preview warning.)
  • Option D - Some other solution, please explain.

Thanks in advance! --Zackmann (Talk to me/What I been doing)23:43, 21 October 2025 (UTC)[reply]

Discussion (birth/death unknown)

  • We definitely shouldn't be using things like "Unk" or "?" - if we want to say this is not known we should explicitly say "Unknown". Should we ever say "unknown" though? Yes, but for births only when we have reliable sources that explicitly say the date is unknown to a degree that makes values like "circa" or "before" unhelpful - even "early 20th Century" is more useful imo than "unknown". "Unknown" is better than leaving it blank when we have a known date of birth but no known date of death (e.g. Chick Albion). I'm not sure how this fits into your options.Thryduulf (talk)00:24, 22 October 2025 (UTC)[reply]
    Agreed. There are cases where no exact date is given but MOS:INFOBOX and WP:UNKNOWN do not apply because the lack of known date can be sourced reliably. If the module cannot account for this, I really think only option A is acceptable. —Rutebega (talk)18:15, 22 October 2025 (UTC)[reply]
    @Rutebega and Thryduulf: So I can very easily make it so that |..._date=Unknown<ref>... is allowed but just plain |..._date=Unknown is not. That is just a matter of tweaking the regular expression; not hard to do at all. That being said (mostly for curiosity's sake), can you give me an example of a page where the lack of known date can be sourced reliably? Every case I could think of (and I really did try to find one) either has a relevant |disappeared_date= (so you don't need to specify that |death_date=Unknown) or you can at least provide approximate dates (i.e. {{circa|1910}}, 1620s or 12th century).Zackmann (Talk to me/What I been doing)18:23, 22 October 2025 (UTC)[reply]
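For what it's worth, the tweak described here — flag a bare placeholder, but let a cited "Unknown<ref>...</ref>" through — could look something like the following. This is a Python sketch of the idea only; the real module runs as Lua on Scribunto and its actual patterns differ, and the function name is mine.

```python
import re

# Bare placeholders to flag: "Unk", "Unknown", "?", "19??", etc.
PLACEHOLDER = re.compile(r"^\s*(unk(nown)?|\?+|\d{1,3}\?\?)\s*$", re.IGNORECASE)

def is_invalid_date(value):
    """True for an unsourced placeholder; a cited 'Unknown<ref>...</ref>' is allowed."""
    # Strip <ref>...</ref> pairs and self-closing <ref .../> tags first.
    stripped = re.sub(r"<ref[^>]*>.*?</ref>|<ref[^>]*/>", "", value,
                      flags=re.IGNORECASE | re.DOTALL)
    if stripped != value:
        return False  # a <ref> was attached, so let the value through
    return bool(PLACEHOLDER.match(value))
```

Usage: `is_invalid_date("Unknown")` and `is_invalid_date("19??")` would be flagged, while `is_invalid_date("Unknown<ref>Smith 2020</ref>")` and `is_invalid_date("c. 1910")` would not.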
    Metrodora isn't quite date unknown, but the only fixed date we have is the manuscript which preserves her text (c. 1100 AD), and her floruit has been variously estimated between the first and sixth centuries AD. Of course, so little is known for certain about Metrodora that every single infobox field would be "unknown" were it filled in, and therefore there's little point having an infobox at all.
    Corinna's dates are disputed: she was traditionally a contemporary of Pindar (thus born late 6th century and active in the fifth century BC) but some modern scholars argue for a third-century date. If the article had an infobox, a case could be made for listing her floruit as "unknown", "disputed", "5th–3rd century BC", "before 1st century BC" (the date of the first source to mention her), or for omitting it entirely.
    I'm open to convincing about how these cases should be handled; my inclination is that any historical figure where the date fields alone need this much nuance is probably a bad fit for an infobox, but the size of Category:Pages with invalid birth or death dates suggests that not everybody agrees with me!Caeciliusinhorto-public (talk)08:56, 23 October 2025 (UTC)[reply]
    @Caeciliusinhorto-public: thanks for some real examples. I think your point that so little is known that infoboxes don't make sense is a good one... If there were other info that made sense to have in an infobox, I think the dates would still be able to be estimated (even if the range is hundreds of years). You could still put |birth_date=5th-3rd century BC or, of course, just leave it blank! Leaving it blank to me implies that it is Unknown, though it does leave ambiguous whether it is Unknown because no editor has taken the time to figure it out or whether it is Unknown because the person lived some 2,200 years ago and we have no real way of knowing when they were born...Zackmann (Talk to me/What I been doing)09:05, 23 October 2025 (UTC)[reply]
  • This is above my pay grade, but can you give us an idea of how much "it has been growing by the minute"? The scale of those additions may inform our view as to how best to deal with it.Lukewarmbeer (talk)16:34, 22 October 2025 (UTC)[reply]
    @Lukewarmbeer: so this is mostly a caching issue. I don't think very many new instances of this are being created each day, it just takes a while for the code to propagate. I really don't have an objective way of saying how many new instances are being created daily...Zackmann (Talk to me/What I been doing)17:13, 22 October 2025 (UTC)[reply]
    FWIW, about 15% of our biographies of living people have unknown birthdates (based on a count by category I did in 2023). I would assume that deceased biographies are perhaps more likely to miss this data, so we're looking at a number in the low hundreds of thousands? Not all of those will have infoboxes, of course.Andrew Gray (talk)20:39, 22 October 2025 (UTC)[reply]
    @Andrew Gray: when you say have unknown birthdates do you mean "no birthdates are given"? Because that is NOT what we are talking about here... We are talking about |birth_date=Unknown, where someone has specifically stated that the date is Unknown, not just left it blank.Zackmann (Talk to me/What I been doing)20:42, 22 October 2025 (UTC)[reply]
    @Zackmann08 ah, right - I think I misunderstood, apologies. If the module does nothing when the birthdate field is blank or missing, that sounds good.
    I think the simple tracking category for non-date values sounds fine for now.Andrew Gray (talk)20:52, 22 October 2025 (UTC)[reply]
  • Perhaps the problem is the multiple meanings of "Unknown". Some may have filled it meaning "nobody knows about the early life of this historical guy, only that he became relevant during the X events, already an adult", and others "unknown because I don't know". We may make it so that "Unknown" has the same effect as an empty field, and require a special input for people with truly unknown dates. And note that any biography after whatever point birth and death certificates became ubiquitous should be treated as the second case.Cambalachero (talk)14:09, 23 October 2025 (UTC)[reply]
  • Option D The variant on option C where it's permitted iff there's a citation seems like a good solution to me. By a similar argument to WP:ALWAYSCITELEAD, I think a citation should always be required to assert that someone's date of death is outside the scope of human knowledge. From WP:V we should always cite material that is likely to be challenged, and I think the assertion that someone's date of death is "unknown" falls well within that scope; in particular I myself will always challenge it if unsourced.lp0 on fire ()16:32, 23 October 2025 (UTC)[reply]
    I think whether someone's date of birth or death being unknown falls into the category of material that is likely to be challenged is partly a factor of when and where they were born, the time, place and manner of their death, and how much we know about them generally. It is not at all surprising to me that we don't know the date of birth or death of a 3rd century saint or 18th century enslaved person, or when a Peruvian athlete who competed in the 1930s died; we do need a citation to say that we only know the approximate date of death for Dennis Ritchie and Gene Hackman.Thryduulf (talk)16:50, 23 October 2025 (UTC)[reply]
    Do you think the citation always needs to be inside the infobox? Our article about Metrodora has a couple of paragraphs about which century she might have lived in. There's no infobox at the moment, but if we added one, would you insist that the citations be duplicated into the infobox?WhatamIdoing (talk)18:40, 24 October 2025 (UTC)[reply]
  • Option D Allow Unknown but not other abbreviations. Require citations for dates. Rationale: looking at the Sven Aggesen article, it's easy to see that "Unknown" is helpful because it's communicating that the person is dead. In my opinion it's still stating a fact. So Unknown should be allowed; "?" should not. It seems like dates of birth and death should always be cited. Thanks for your work on this!!Dw31415 (talk)17:54, 23 October 2025 (UTC)[reply]
    • In the case of Sven Aggesen I think we could reasonably expect a reader to infer from "born: 1140? or 1150?" that he is probably dead! In the case of people born recently enough that there might be confusion, I can't imagine there are many cases where both (a) they are known to be dead and (b) their date of death is known so imprecisely that we don't have a more useful value than "unknown" for the infobox.Caeciliusinhorto (talk)20:35, 23 October 2025 (UTC)[reply]
  • Option A - needs more study - The category seems flawed; the concern seems more a flaw in the process or concept of the template itself. Looking at a few pretty random clickings from Category:Pages with invalid birth or death dates (4,537), I see that perhaps flagging them as bad is an indication that when the context is historical, or the article is a short stub, we just should not expect modern and detailed precision. And there was at least one simple typo, to remind me that articles are imperfect.
  • Carlos Altés 3 Sept 1907 to unknown -- there obviously is a death, but that the death date is unknown is perhaps a correct statement of fact.
  • Georgios Anitsas born 1891, died unknown -- well it's a stub article about a 1924 Olympics shooter based on two sports cites.
  • Æthelbald of Mercia King of Mercia died 757 - the death would be known from the succession, though exact day not so much, and the birth before rising to Kingship even less so.
  • Martín de Andújar Cantos born 1602, died unknown - another stub article from one art source
  • Vicente Albán born 1725, died unknown - and a typo on birthplace "Viceroyalty if New Gramada"
  • Po Aih Khang King of Panduranga died 1622, born ? - the death would be known from the succession, though again the exact day not so much, and the person is only known from historical chronicles, so the birthdate being a question mark seems like an informal ask for someone who knows to put in ...
In the actual instances: Option B seems a nonstarter since the pages already exist and such a flag seems meaningless; Option C suppression seems in many cases to hide the simple fact of what is not known, or what is only known to the year without an exact day; and Option D ... I don't have a fix for the cases other than to say 'needs more study' and/or 'things are about as good as can be done with what is shown so just leave it'. CheersMarkbassett (talk)19:35, 13 November 2025 (UTC)[reply]
@Markbassett: not really sure how the concept of the template itself is flawed... Again, per MOS:INFOBOX and WP:UNKNOWN we should not be putting Unknown in the infobox... Your comments don't really address that... You give a few examples where the date should supposedly be inferred, but don't address the underlying issue here...Zackmann (Talk to me/What I been doing)19:40, 13 November 2025 (UTC)[reply]
I think it's complicated and depends on context; this RFC was missing too many questions and too many cases to start trying to draw a conclusion.
  • The simple list of 'Category:Pages with invalid birth or death dates' has many different situations and various templates -- and some of the fields might well be a good usage or the best that can be expected. Needs considerably more study; maybe the category needs to look at things by-template and by-era, for example, or maybe separate out those that are from very short articles with fewer than 4 cites.
  • See my remark for Carlos Altés - "that the death date is unknown is perhaps a correct statement of fact."
If the date is not known and not knowable - does that mean the template 'Infobox football biography' was a bad one to use, or that the template is incomplete? Per template guidance: "Do NOT use this template when the person's exact date of death is disputed or unknown; consider death year and age instead."
Does this mean that template death year and age needs to add guidance for when the year is unknown? Should date fields have some text values allowed as options to distinguish 'nobody knows' from 'unknown from limited cites' from 'someone please put in a value'?
  • I could ask similarly if 'Infobox royalty' date fields should allow simple year-of values or specify some text values as options, because the template Birth date defaults to Birth year and no further, but in just these few examples I'm seeing that centuries-ago kings often seem to have no known year of birth.
Perhaps the category list is showing a few thousand places of questions more than issues -- if you broke it out by which template is used from Wikipedia:List of infoboxes, it might be reduced to mostly just a few where birth-date is an issue, or perhaps it would emerge that the template birth-year needs a mod. I don't know, but I think nobody knows without considerably more study -- and meanwhile no change. CheersMarkbassett (talk)20:46, 13 November 2025 (UTC)[reply]
Again, Markbassett, you have not bothered to read the beginning of this RFC where the following is clearly stated...
Per MOS:INFOBOX: If a parameter is not applicable, or no information is available, it should be left blank, and the template coded to selectively hide information or provide default values for parameters that are not defined.
There is also the essay WP:UNKNOWN which says, in short, Don't say something is unknown just because you don't know.
so on Carlos Altés the |death_date= in the Infobox should be left blank per the MOS...Zackmann (Talk to me/What I been doing)21:52, 13 November 2025 (UTC)[reply]
Umm, obviously you're making a claim without the ability to know, but I did read that -- and then looked further at the unstated and perhaps unseen flaws by looking at some specific cases in Category:Pages with invalid birth or death dates (4,537) and chased thru a couple of various Infobox templates with a subfield of birth-date etcetera. Though I don't know why the templates' count of issues is small when yours is big, I do know that 'it's complicated'. There's a lot of different situations and different infoboxes, and maybe the wrong infobox was used or maybe the wrong fill was used or maybe, just maybe, this is just too superficial and generic a study so far to start trying for conclusions. I didn't propose INVALID RFC, but suggested it needs a deeper look and made a couple suggestions. I am not excluding that perhaps the infoboxes need to address an area that's not going well and edit there -- in which case removing the indication would be a bad thing. CheersMarkbassett (talk)22:26, 13 November 2025 (UTC)[reply]
  • Option D - estimate to a few decades of precision Wikipedia is unusual for broadly covering global human history. Because of this, unusually as compared to other publications, readers browse biographies not even knowing a person's century or country. Consider the American Civil War veteran Francis A. Bishop. Is this person American, born in the 1800s, and male? The article lacks sources to establish such things, but still, this kind of demographic information is important for categorizing people in Wikipedia. If we can determine a person's century of birth then that is helpful, and if we can narrow it to within a few decades then that also is helpful. Placing biographies in time is critical, and even when we lack an explicit source to WP:V the claim, then I favor doing the WP:OR to place this person into visibility in categories and data structures. Bluerasberry(talk)20:06, 13 November 2025 (UTC)[reply]
  • The documentation for Module:person date is really only useful to the person who wrote it. It's very difficult to figure out where this fits in the infobox ecosystem. But my best guess is that it is only invoked if templates such as {{Death date and age}} and {{Birth date}} are used in the infobox. These templates ONLY support dates in the Gregorian calendar. The earliest possible Gregorian date was 15 October 1582. The discussion above makes reference to a number of examples from antiquity. These articles should not be using any of these date templates. I can't see how mentioning these people in this discussion makes sense.Jc3s5h (talk)20:27, 13 November 2025 (UTC)[reply]
    User:Jc3s5h for the record you can set |birth_date= to ANY value... It has long been preferred that you use a template such as {{birth date and age}}, but with the creation of Module:Person date even THAT is no longer necessary for modern, Gregorian calendar dates. The real issue here is what to do with dates that claim to be Unknown. I would argue there is not an example where the date is COMPLETELY unknown. While you may not know the EXACT date, you at least know a decade or a century the person was alive. You can simply say |birth_date=6th century or |death_date={{circa|610}}...Zackmann (Talk to me/What I been doing)20:33, 13 November 2025 (UTC)[reply]

I can't understand your reply without a more complete context. If I have the following:

{{Infobox Christian leader
| type = Pope
| birth_date = c. 530
| birth_place = [[Blera]], [[Eastern Roman Empire]]
| death_date = 22 February 606 (aged 75–76)
}}

It would seem to me such an infobox would not invoke Module:Person date and so would not be a suitable example for this discussion.Jc3s5h (talk)20:49, 13 November 2025 (UTC)[reply]

@Jc3s5h: you are correct, if you use the code you provided, it would NOT invoke Module:Person date. My point is that if you had |death_date=Unknown, I have yet to find a case where that cannot be replaced with SOME information. We may not know the exact date, or even the exact year, but you should be able to replace Unknown with c. 123 or 15th century and thus resolve the problem of it appearing in the category. I have yet to find a page where there is literally NO CLUE about when the person lived, not even a century.
The root of the question is: for those pages that DO use Unknown, should we display some sort of {{preview warning}} message to editors that essentially says "Hey, this isn't a valid value, you need to put SOMETHING (a decade, a century, a 'circa') or (per MOS:INFOBOX) simply leave it blank". This is the goal of Options B & C. Hope that helps... -Zackmann (Talk to me/What I been doing)21:59, 13 November 2025 (UTC)[reply]

RfC: Aligning community CTOPs with ArbCom CTOPs


Should the community harmonize the rules that govern community-designated contentious topics (which are general sanctions authorized by the community) with WP:CTOP? If so, how? 19:55, 22 October 2025 (UTC)

Background

Before 2022, the contentious topics process (CTOP) was known as "discretionary sanctions" (DS). Discretionary sanctions were authorized in a number of topic areas, first by the Arbitration Committee and then by the community (under its general sanctions authority).

In 2022, ArbCom made a number of significant changes to the DS process, including renaming it to contentious topics and changing the set of sanctions that can be issued, awareness requirements, and other procedural requirements (see WP:CTVSDS for a comparison). But because the community's general sanctions are independent of ArbCom, these changes did not automatically apply to community-authorized discretionary sanctions enacted before that date.[a]

In an April 2024 RfC, the community decided that there should be clarity and consistency regarding general sanctions language and decided to rename community-authorized discretionary sanctions to "contentious topics". However, the community did not reach consensus on several implementation details, most prominently whether the enforcement of community CTOPs should occur at the arbitration enforcement noticeboard (AE) instead of the administrators' noticeboard (AN), as is now allowed (but not required) by ArbCom's contentious topics procedure.[b]

Because of the lack of consensus, no changes were made to the community-designated contentious topics other than the naming. As a result, there currently exist 24 ArbCom-designated contentious topics and 7 community-designated contentious topics, and the rules between the two systems differ as documented primarily at WP:OLDDS.

Questions:
  • Question 1: Should the community align the rules that currently apply in community-designated contentious topics with WP:CTOP, mutatis mutandis (making the necessary changes) for their community-designated nature?
  • Question 2: Should the community authorize enforcement of community contentious topics at AE (in addition to AN, where appeals and enforcement requests currently go)?
Implementation details:

In either case above, all existing community CTOPs would be amended by linking to the new information page to document the applicable provisions.

If question 1 fails, no changes would be made.

Notes

  1. ^ WP:GS/SCW&ISIL, WP:GS/UKU, WP:GS/Crypto, WP:GS/PW, WP:GS/MJ, and WP:GS/UYGHUR follow WP:OLDDS. WP:GS/ACAS was enacted after December 2022 and therefore follows the current ArbCom contentious topics procedure.
  2. ^ Specifically, AE may consider "requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community." – Wikipedia:Arbitration Committee/Procedures § Noticeboard scope 2

Survey (Q1&Q2)

The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
Yes x2 per WP:SNOW.voorts (talk/contributions)23:21, 3 November 2025 (UTC)[reply]

  • Yes to both questions. For almost three years now, we have had two different systems called "contentious topics" but with different rules around awareness, enforcement, allowable restrictions, etc. In fact, because WP:GS/ACAS follows the new CTOP procedure but without AE enforcement, we actually have three different systems. We should take this chance to make the process meaningfully less confusing. There is no substantive reason why the enforcement of, for example, WP:GS/UYGHUR and WP:CT/AI should differ in subtle but important ways.
    As for using AE, AE is designed for and specialized around CTOP enforcement requests and appeals. AE admins are used to maintaining appropriate order and have the benefit of standard templates, word limits, etc., while AN or ANI are not specialized around this purpose. As a result of WP:CT2022, ArbCom now specifically allows AE to hear requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community. We should take them up on the offer as Barkeep49 first suggested at the previous RfC.
    FYI, I am notifying all participants in the previous RfC, as this RfC is focused on the same topic. Best,KevinL (akaL235·t·c)19:57, 22 October 2025 (UTC)[reply]
  • Yes to both - I don't see a downside to this standardization, and it would appear to both make the system as a whole easier to understand, and allow admins to take advantage of the automated protection logging bot for the currently-GS topics.signed,Rosguilltalk20:01, 22 October 2025 (UTC)[reply]
  • Yes to both. The CTOP system is complicated even without these three different regimes and confuses almost everyone involved. AE can be a great option for reducing noise in discussions, compared to AN.—Femke 🐦 (talk)20:20, 22 October 2025 (UTC)[reply]
  • Yes to both as standardization can help clarify confusion especially among newcomers about contentious topics.Aasim (話すはなす)20:29, 22 October 2025 (UTC)[reply]
  • Yes to both but as I said in the previous RFC, if we're going to go in this direction, we should also be moving towards a process where the community eventually takes over older ArbCom-imposed CTOPs, especially in areas where the immediate on-wiki disruption that required ArbCom intervention has mostly settled down but the topic itself remains indefinitely contentious for off-wiki reasons. ArbCom was intended as the court of last resort for things the community failed to handle; it's not supposed to create policy. Yet currently, huge swaths of our most heavily-trafficked articles are under perpetual ArbCom sanctions, which can only be modified via appeal to ArbCom itself, and which are functionally the same as policy across much of the wiki. This isn't desirable; when ArbCom creates long-term systems like this, we need a way for the community to eventually assume control of them. We need to go back to treating ArbCom as a court of last resort, not as an eternal dumping ground for everything controversial, and unifying ArbCom and community sanctions creates an opportunity to do so by asking ArbCom to agree to (with the community's agreement to endorse them) convert some of the older existing ArbCom CTOPs into community ones. --Aquillion (talk)20:51, 22 October 2025 (UTC)[reply]
  • Yes to both per nom. Consistency is great, and eliminating the byzantine awareness system (where you need an alert every 12 months) is essential.WP:AE is a miracle of a noticeboard (how is the noticeboard with the contentious issues the relatively tame one?), and we as a community should take advantage of ArbCom's offer to let us use it. Best,HouseBlaster (talk • he/they)22:10, 22 October 2025 (UTC)[reply]
  • Yes to both. This is a huge step in the right direction.Toadspike[Talk]22:16, 22 October 2025 (UTC)[reply]
  • Yes to both, and a full-throated "yes" for using AE in particular. The other noticeboards are not fit for purpose with respect to handling CTOP disruption.Vanamonde93 (talk)22:24, 22 October 2025 (UTC)[reply]
  • Yes to both – This has been a mess for more than a decade. Harmonising the community and ArbCom general sanctions regimes will cut red tape, and eliminate confusion over which rules apply in any given case. I am also strongly in favour of allowing community sanctions to be enforced at WP:AE. Previously, there were numerous proposals to create a separate board for community enforcement, such as User:Callanecc/Essay/Community discretionary sanctions, but all failed to go anywhere. In my opinion, the most important aspect of community sanctions (as opposed to ArbCom sanctions) is that the community authorises them, and retains control over their governance. Enforcement at AE does nothing to reduce the community's power to enact sanctions; if anything, it will ensure that these regimes are enforced with the same rapidity as ArbCom sanctions. It would be foolish to not take advantage of ArbCom's offer to allow us to use their existing infrastructure.Yours, &c.RGloucester23:54, 22 October 2025 (UTC)[reply]
  • Yes to both. I was in favor of this during the March 2024 RfC but was reluctant to push it too hard since I was then on ArbCom. I am no longer on ArbCom and thus can freely and fully support this thoughtful and wise proposal for the same reasons I hinted at in the previous discussion. Best,Barkeep49 (talk)02:00, 23 October 2025 (UTC)[reply]
  • Yes to both, and future changes to either sanction procedure should be considered for both. Not to be unduly repetitive of others above, but the system is more complex than it needs to be. AE as an additional option is a positive.CMD (talk)04:38, 23 October 2025 (UTC)[reply]
  • Yes to both, and thank you to L235 for working on this. Like RGloucester, I'd worked on this previously, so am definitely supportive.Callanecc (talkcontribslogs)07:11, 23 October 2025 (UTC)[reply]
  • Yes to both, per my comment in the 2024 RfC. It is not reasonable to expect new editors to familiarize themselves with multiple slightly different sanctions systems that emphasize procedural compliance. — Newslinger talk08:20, 23 October 2025 (UTC)[reply]
  • Yes to both. Let's not make the CT system more complicated and impenetrable than it needs to be already; consistency can only be good here.Caeciliusinhorto-public (talk)09:07, 23 October 2025 (UTC)[reply]
  • Yes to both, long overdue. ~ Jenson (SilverLocust💬)16:38, 23 October 2025 (UTC)[reply]
  • Yes to both, with the same caveats as Aquillion.lp0 on fire ()16:39, 23 October 2025 (UTC)[reply]
  • Yes to both for consistency.ChaoticEnby (talk ·contribs)16:59, 23 October 2025 (UTC)[reply]
  • Yes to both. We already have overlapping CSes (Arbcom-imposed) and GSes (community-imposed) - A-A and KURD, at least, where the community chose to impose stricter sanctions on a topic area than ArbCom mandated (in both of those cases, the community chose to ECR the topic area). This has caused confusion for me as an admin a few times, for a regular user it can only be more so. Harmonizing the restrictions, with the only difference being who imposed them, can only make sense. -The BushrangerOne ping only20:02, 23 October 2025 (UTC)[reply]
  • Yes andYes - The same procedures should apply to topics that the ArbCom has found to be contentious as to topics which the community has found to be contentious. The differences have only caused confusion.Robert McClenon (talk)20:56, 23 October 2025 (UTC)[reply]
  • Yes to both. CTs (whether issued by ArbCom or the community) should be treated the same regardless of who issued them.JuniperChill (talk)11:04, 24 October 2025 (UTC)[reply]
  • Yes to both: A long time coming. This centralization will clean up so much unnecessary red tape. —EarthDude (Talk)13:26, 24 October 2025 (UTC)[reply]
  • No I understand what Arbcom is per WP:ARBCOM and it seems to be a reasonably well-organised body with good legitimacy due to it being elected. But what's the community? Per WP:COMMUNITY and Wikipedia community, it seems to be any and all Wikipedians, and this seems quite amorphous and uncertain. Asking such a vague community to do something is not sensible. In practice, I suppose the sanctions were cooked up at places like WP:ANI, which is a notoriously dysfunctional and toxic forum. That's not a sensible place to get anything done.
I looked at one of these community sanctions as an example, and it was some special measure for conflict about units of measurement in the UK: WP:GS/UKU. Now I'm in the UK and so might easily run afoul of this, but this is the first I've heard of it being an especially hot topic. And I've been actively editing for nigh on 20 years. Our general policies about edit-warring, disruption and tendentious editing seem quite adequate for such an issue, and so WP:CREEP applies. That sanction was created over 10 years ago and so should be expired rather than harmonised. The other general sanctions concern such topics as Michael Jackson, who died 16 years ago, and that too seems quite dated.
So, I suggest that all the general sanctions be retired. If problems with those topics then recur, fresh sanctions can be established using the newWP:CTOP process and so we'll then all be on the same page.
Andrew🐉(talk)16:24, 25 October 2025 (UTC)[reply]
I will note that policy assigns to the community theprimary responsibility to resolve disputes, and allows ArbCom to intervene inserious conduct disputes thecommunity has been unable to resolve (Wikipedia:Arbitration/Policy § Scope and responsibilities) (emphasis added). That is to say, ArbCom's role is to supplement the community when the community's efforts are unsuccessful. I think that's why there should besome harmonized community CTOP process that can be applied for all extant community CTOPs. I understand that it may be time to revisit some of the community-designated CTOPs, which I support – when I was on ArbCom, I was a drafter for theWP:DS2021 initiative which among other thingsrescinded old remedies from over half a dozen old cases. But that seems to be a different question than whether to harmonize the community structure with ArbCom's. Best,KevinL (akaL235·t·c)14:32, 29 October 2025 (UTC)[reply]

A motion to revoke authorisation for this sanctions regimewas filed at the administrators' noticeboard on 17 April 2020. The motion did not gain community consensus. 09:47, 22 April 2020 (UTC)
— Wikipedia:General sanctions/Units in the United Kingdom#Motion

At this time there is no consensus to lift these sanctions, with a majority opposed. People are concerned that disputes might flare up again if sanctions are removed: Give them an inch and they will take a kilometer ...
— User:Sandstein00:00, 26 November 2020 (UTC)

Aaron Liu (talk)02:16, 31 October 2025 (UTC)[reply]
  • Yes to both - If we are to have two systems with the same name, we should avoid differences in the rules. I say this because if the rules are different, then a user will need to be aware of who designated an area as a contentious topic before reporting or handling reports. For example, if we had the two systems use the same rules but different reporting pages (with no overlap on what pages can be used), then I expect that users will mistakenly post to the wrong pages.DreamyJazztalk to me |my contributions20:56, 1 November 2025 (UTC)[reply]
The discussion above is closed.Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Question 3. How should we handle logging of community contentious topics?

  1. Use Arbitration Enforcement Log (WP:AELOG)
  2. Create a new page such as Wikipedia:Contentious topics/Log, which can be separated into two sections: one for community sanctions, and one that transcludes WP:AELOG
  3. Create a new page such as Wikipedia:General sanctions/Log, which would only log enforcement actions for community contentious topics (subpages would be years)
  4. Continue logging at each relevant page describing the community contentious topics (Wikipedia:General sanctions/Topic area), and if 2 or 3 are chosen, the page would transclude these relevant pages.

— Preceding unsigned comment added by Awesome Aasim (talkcontribs)20:42, 22 October 2025 (UTC)[reply]

  • 2+3+4 as proposer. One of the problems I do notice is that loading WP:AELOG does take a lot of time because the page has a lot of enforcement actions. The advantage of 2 is having a single page that can be quickly searched.Aasim (話すはなす)20:42, 22 October 2025 (UTC)[reply]
    BTW, except for 1, the other options are not mutually exclusive. If option 1 is chosen, options 2-4 are irrelevant. I am not asking people to pick one and be done; people can choose any combination.Aasim (話すはなす)21:32, 23 October 2025 (UTC)[reply]
  • 2 > 1 – Both ArbCom and community CT are forms of general sanctions (see my incomplete essay on the subject); the only distinction is who authorises them. For this reason, '3' does not make sense. Eliminating the sprawling log pages that currently exist for community-authorised regimes should be a priority if our goal is to eliminate red tape, therefore '4' does not make sense either. That leaves me with 2, which allows for a centralised log for both forms of sanctions. I am perfectly fine with creating subpages as needed, but centralisation is paramount in my mind.Yours, &c.RGloucester00:02, 23 October 2025 (UTC)[reply]
  • I support option 4. I think it continues to make sense to log individual actions for a given topic area to the corresponding subpage ofWikipedia:General sanctions.isaacl (talk)01:12, 23 October 2025 (UTC)[reply]
    In the past, concerns have been raised about making it clear whether the enacting authority is the arbitration committee or the community. Thus I do not feel option 1 is the best choice.
    Regarding searching: I feel the typical use case is to search for actions performed within a specific topic area. If necessary, Wikipedia search with a page prefix criterion can be used to search multiple subpages.isaacl (talk)16:19, 23 October 2025 (UTC)[reply]
    I will note that having the log pages as subpages of Wikipedia:General sanctions (rather than a more tailored page) makes searchability much harder, which is why scripts like WP:SUPERLINKS don't surface community CTOP enforcement entries even though they do surface WP:AELOG entries. Best,KevinL (akaL235·t·c)16:21, 23 October 2025 (UTC)[reply]
  • 2 I am in favor of fewer, larger pages because they are easier to find and to search. If a searcher needs to confirm that somethingisn't there, for example, fewer pages, even if very large, are much easier to work with.Darkfrog24 (talk)13:33, 23 October 2025 (UTC)[reply]
  • 1 - in keeping with the spirit for Q1 and Q2, the whole point here is to merge everything into a single system that is simpler to follow. We already have a practice of splitting off subpages when specific sections in the log get too large.signed,Rosguilltalk13:52, 23 October 2025 (UTC)[reply]
  • 2 as a first choice, as centralization is helpful, but the currentWP:AELOG is ultimately an ArbCom page and shouldn't have jurisdiction over community sanctions. I agree with Rosguill's point about splitting off subpages, and I presume this would be encouraged to a greater extent here. I could also be convinced by1 (to avoid an unnecessary transclusion, although it should be made clear that it isn't an ArbCom-only page anymore) or by a temporary3 (to avoid a lag spike until the main subpages are sorted out).ChaoticEnby (talk ·contribs)17:04, 23 October 2025 (UTC)[reply]
    Actually, I'm realizing that 2 doesn't help with centralization compared to 3, and creates a bit of an inconsistency between some topics being directly logged there and others being transcluded. Count3 as my first choice, with the possibility of a combined log transcluding both for reference.ChaoticEnby (talk ·contribs)19:34, 23 October 2025 (UTC)[reply]
  • 1 > 3 > 4, but my actual preference is todelegate this to a local consensus of those who are involved in implementing this. 1 is my preference, like Rosguill, because centralizing where the existing logs live promotes simplicity and would avoid the need for admins to check which types of CTOPs are which (one goal I have is for the community CTOPs and ArbCom CTOPs to feel almost identical). Not to mention, it would preserve compatibility with tools likeWP:SUPERLINKS that check AELOG but not other pages. The biggest hurdle in my mind is that #1 would require ArbCom approval, which I think is likely but not certain (given that ArbCom allows AE for community CTOPS, why not AELOG?). Best,KevinL (akaL235·t·c)19:29, 23 October 2025 (UTC)[reply]
  • 3, but there is a nuance: include the recently-changed bit about protections being automatically logged, as part of a unified page at Wikipedia:Arbitration enforcement log/Protections. Protections for the "overlapping" CT/GS regions (A-A and KURD) are already logged there (as, technically, they fall under both), so this would make, and keep, things simple. -The BushrangerOne ping only20:04, 23 October 2025 (UTC)[reply]
  • 5 There should be one system, not two. As noted above, thecommunity is too amorphous and uncertain to be the basis for this.Andrew🐉(talk)17:28, 25 October 2025 (UTC)[reply]
  • 1 > 2 These should be standardized as much as possible. It's already the most confusing and obfuscated system of policies on Wikipedia; we should strive to eliminate as much confusion and pointless red tape as possible. Apart from where actions are logged, there are now pretty much no practical differences between ArbCom and community CTOPs: they are imposed by different bodies, enforced identically, and logged in different places. I agree with others that these systems should feel identical; this would have the additional advantage of makingAquillion's vague long-term proposal, to have old ArbCom topics "expire" into community ones if deemed no longer pertinent, seem like a realistic option.lp0 on fire ()22:49, 10 November 2025 (UTC)[reply]

Discussion (CTOP)

  • Comment I understand the functional difference between an AE sanction and an AN sanction is that an AE sanction can be removed only by a) the exact same admin who placed it, called the "enforcing admin", or b) a clearly-more-than-half majority of AE admins at an AE appeal, while a sanction placed at AN can be removed by c) any sufficiently convinced admin acting alone. To give an example of how this would change things, I found myself in a situation in which I was indefinitely blocked at AE and then the enforcing admin left Wikipedia, which removed one of my options for lifting a sanction. Some of our fellow Wikipedians will think making it easier to get a sanction lifted is a good thing and others will think it's a bad thing, but we should be clear about that so we can all make our decision. Am I correct about how these changes would affect those seeking to have sanctions removed?Darkfrog24 (talk)13:31, 23 October 2025 (UTC)[reply]
    @Darkfrog24: I think this is incorrect. As it stands now, restrictions imposed under community CTOPs are only appealable to the enforcing administrator or to AN (see, e.g., WP:GS/Crypto, which says Sanctions imposed may be appealed to the imposing administrator or at the appropriate administrators' noticeboard.). Q1 is about aligning the more subtle but still important differences between community CTOPs and ArbCom CTOPs, while Q2 is about adding AE as a place (but not changing the substantive amount of agreement needed) for enforcement requests and appeals. Best,KevinL (akaL235·t·c)13:43, 23 October 2025 (UTC)[reply]
    Thanks, KevinL. I will ponder this and make my decision.Darkfrog24 (talk)13:50, 23 October 2025 (UTC)[reply]
  • Comment: Is there any way that we could implement the semi-automated logging process that is used for page protection of CTOPS here? Is there any expectation that if any of these options were chosen, that process would revert to manual?SWATJesterShoot Blues, Tell VileRat!18:17, 23 October 2025 (UTC)[reply]
    Pinging @L235 whose bot is in charge of that – for the Twinkle integration of the CTOP logging, I'm currently working on a pull request that would work for both.ChaoticEnby (talk ·contribs)19:08, 23 October 2025 (UTC)[reply]
    I bet the bot could be adapted to whichever option the community opts for!KevinL (akaL235·t·c)19:24, 23 October 2025 (UTC)[reply]
  • Comment – If we are to create a separate log for community-authorised contentious topics, as in alternative 3, it should not be a subpage of Wikipedia:General sanctions. 'General sanctions' is a broad category that includes ArbCom sanctions, and also non-contentious-topics remedies such as the extended confirmed restriction. This is a recipe for confusion. Please consider an alternative naming scheme.Yours, &c.RGloucester00:19, 24 October 2025 (UTC)[reply]
    The title can always be different. The title I named was just an example title to explain the purpose of the question.Aasim (話すはなす)01:30, 24 October 2025 (UTC)[reply]
    The extended confirmed restriction is a separate kind of general sanction, not part of contentious topics. Nothing in this discussion should apply to community-imposed extended confirmed restrictions.Yours, &c.RGloucester23:39, 4 November 2025 (UTC)[reply]

RFC: New GA quick fail criterion for AI


Should the following be added to the 'Immediate failures' section of thegood article criteria?

6. It contains obvious evidence ofLLM use, such as AI-generated references or remnants of AI prompt.

Proposed after discussion atWikipedia talk:Good articles#AI.Yours, &c.RGloucester10:08, 26 October 2025 (UTC)[reply]

Survey (GA quick fail)

  • Support – Articles that contain obvious evidence of unreviewed AI use are evidence of acompetence issue on the part of their creator that is not compatible with the GA process. Having reviewers perform a spot check forobvious signs of AI use will help militate against the recent problem whereby AI-generated articles are being promoted to GA status without sufficient review.Yours, &c.RGloucester10:08, 26 October 2025 (UTC)[reply]
  • Support. Hardly needs saying. The use of AI is fundamentally contrary to the process of encyclopaedic writing.AndyTheGrump (talk)10:14, 26 October 2025 (UTC)[reply]
    Strong Support: No article of real quality would ever show obvious signs of AI use.CabinetCavers (talk)20:10, 10 November 2025 (UTC)[reply]
  • Support This is an excellent proposal to help stop Wikipedia falling into absolute disreputeBillsmith60 (talk)10:21, 26 October 2025 (UTC)[reply]
  • SupportBillsmith60 (talk)10:22, 26 October 2025 (UTC)[reply]
    Billsmith60, presumably you didn't mean to enter two supports?Mike Christie (talk -contribs -library)12:41, 26 October 2025 (UTC)[reply]
    Sorry about that Mike. Was on my phone for this and it is always temperamentalBillsmith60 (talk)01:02, 27 October 2025 (UTC)[reply]
  • Support Per nomination. This would not prohibit AI use per se, but would rule out promoting any low effort usage of AI. AI use in this manner could be argued to be a failure of GA criteria 1 and 2 as well, but explicitly stating as such will give a bit more weight to reviewers' decisions. --Grnrchst (talk)10:45, 26 October 2025 (UTC)[reply]
  • Oppose Per comment in discussion.Rollinginhisgrave (talk |contributions)10:47, 26 October 2025 (UTC)[reply]
  • Support Per nom.Vacant0(talkcontribs)10:54, 26 October 2025 (UTC)[reply]
  • Oppose per comment in discussionIAWW (talk)11:05, 26 October 2025 (UTC)[reply]
  • Oppose. GAs should pass or fail based only and strictly on the quality of the article. If there are AI-generated references then they either support the article text or they don't; if they don't, then the article already fails criterion 2 and the proposal is redundant. If the reference does verify the text it supports, then there is no problem. If there are left-over prompts then it already fails criterion 1 and so this proposal is redundant. If the AI-generated text is a copyright violation, then it's already an immediate failure and so the proposal is redundant. If the generated text is rambly, non-neutral, veers off topic, or has similar issues, then it already fails one or more criteria and so this proposal is redundant.Thryduulf (talk)12:20, 26 October 2025 (UTC)[reply]
    As I see it, this proposal as-written is actually quite limited in scope and is not doing anything beyond saving resources. Obvious unreviewed AI use will not meet all criteria, but at the moment a reviewer of the GAN is still expected to do a full review. This proposal if passed would effectively codify that obvious AI is considered (by consensus of users of the GA process) to mean the article has insurmountable issues in its current state and should be worked on first before a full review.Kingsif (talk)14:07, 26 October 2025 (UTC)[reply]
  • Support, we can't afford wasting precious reviewer time (a very scarce resource) on stuff with fake references. —Kusma (talk)12:29, 26 October 2025 (UTC)[reply]
    If there are fake references then it's already a fail for verifiability. This proposal does not save any additional reviewer time.Thryduulf (talk)12:42, 26 October 2025 (UTC)[reply]
    It turns a fail into a quick fail, which of course saves reviewer time. —Kusma (talk)12:49, 26 October 2025 (UTC)[reply]
    See my comment below in the discussion section.Thryduulf (talk)12:57, 26 October 2025 (UTC)[reply]
  • Oppose per IAWW and Thryduulf. All issues arising from AI use are already covered by other criteria, and there are legitimate uses of AI, which should not be prohibited.Kovcszaln6 (talk)12:33, 26 October 2025 (UTC)[reply]
  • Oppose. I sympathize with the intent of this RfC but it's the state of the article, not the process by which it got there, that GA criteria should address.Mike Christie (talk -contribs -library)12:39, 26 October 2025 (UTC)[reply]
  • Oppose. I agree with this in spirit, but I don't think it would be a useful addition. If a reviewer spots blatant and problematic AI usage (e.g. AI-generated references), almost all would quickfail the article immediately anyway. I can't imagine this proposal saving any additional reviewer time or reducing the handful of flawed articles that slip through that process. But if a nominator used AI for something entirely unproblematic and left an edit summary saying something like "used ChatGPT to change table formatting" or "fixed typos identified by ChatGPT", that would be obvious evidence of LLM usage and yet clearly doesn't warrant a quickfail.MCE89 (talk)12:52, 26 October 2025 (UTC)[reply]
    Hmm, if “content” was in the proposed text somewhere, would that assuage your legitimate use thoughts?Kingsif (talk)14:15, 26 October 2025 (UTC)[reply]
    I think that would be slightly better, but I still don't really see what actual problem this proposal is trying to solve. If an article consists of unreviewed or obviously problematic LLM output and contains things like fake references, reviewers aren't going to hesitate to quickfail it (and potentially G15 it) already. I don't see any signs that GAN is currently overwhelmed by AI-generated articles that reviewers just don't have the tools to deal with. And given that lack of a clear benefit, I'm more worried about the potential for endless arguments about process rather than content in the marginal cases (e.g. can an article be quickfailed if the creator discloses that they used ChatGPT to help copyedit? What if they say they've manually verified and rewritten the LLM output? What is the burden of proof to say that LLM usage is "obvious", e.g. could I quickfail an article solely based on GPTZero?)MCE89 (talk)15:11, 26 October 2025 (UTC)[reply]
    About the problems, I have a lot of thoughts and happy to discuss, perhaps we should move it to the section below? I also assume and hope people take obvious to mean obvious: if it’s marginal, it’s not obvious. Genuine text/code leftovers from copypasting LLM output is obvious, having to ask a different AI isn’t.Kingsif (talk)15:29, 26 October 2025 (UTC)[reply]
  • Oppose largely per Thryydulf, except that I don't believe that AI content necessarily violates criterion 1. AI style is often recognisable but if it's well-written then I wouldn't care and we should investigate if the sources were not hallucinated. Fake references (as opposed to incomplete/obscure/not readily available references) should be an instafail reason.Szmenderowiecki (talk)13:13, 26 October 2025 (UTC)[reply]
  • Support Per my comments in discussion and here. I also see no objection that couldn’t be quelled by the proposed text already having the qualifier “obvious”: the proposal includes benefit of the doubt, even if I personally would take it much further.Kingsif (talk)14:12, 26 October 2025 (UTC)[reply]
  • Support per Kingsif and following the spirit of the guidance contained in WP:HATGPT and adjacentlyWP:G15.Fortuna,imperatrix14:36, 26 October 2025 (UTC)[reply]
  • Support. On a volunteer-led project, it is an insult to expect a reviewer to engage with the extruded output of a syntax generator and not the work of a human volunteer. I am not interested in debating this; please don't ping me to explain that I'm being a Luddite in holding this view. ♠PMC(talk)14:52, 26 October 2025 (UTC)[reply]
  • Oppose per MCE89. LLM use isn't necessarily problematic (even if it often is), and the proposed wording would discourage people from disclosing LLM use in their edit summaries.Anne drew (talk ·contribs)15:28, 26 October 2025 (UTC)[reply]
  • Weak support -- Did not realize this discussion had been ongoing; I noped out because I was frankly way too exhausted to Sisypheanly re-explain things I had already tried to explain. Anyway, I don't object to these criteria per se, but this is a really low bar. What I would really support is mandatory disclosure of any AI use, because if AI was used then the spot-checking that is required in GA review is not going to be nearly enough. Nor is the problem really fake sources anymore; the problem is "interpretations" of sources that might not seem worth checking if you don't know what AI text sounds like, but if you do know what AI text sounds like, are huge blaring alarms that the text is probably wrong. Here's an example (albeit for a Featured Article and not a Good Article). All the sources were real, but the text describing the sources was fabricated. And it took me about 15 minutes to zero in on the references that were likely to have issues because I know how LLMs word things; without AI disclosure, reviewers are likely to spot-check the wrong things (as happened here).Gnomingstuff (talk)17:34, 26 October 2025 (UTC)[reply]
  • Weak oppose While I fully agree with the intent of this proposal, in practice I am concerned that this is subject to misuse by labeling anything as "AI". I agree with Thryduulf and others that any sort of poorly done AI use (which is almost all of it) will already be failable per the existing GA criteria. I share others' concern about the proliferation of AI generated articles and reviews but I'm not convinced this is the solution.Trainsandotherthings (talk)18:37, 26 October 2025 (UTC)[reply]
  • Weak oppose per my comments at WT:GAN. I also agree that LLM-generated articles are problematic, but the existing criteria already cover most of what's proposed - for instance, evidence of persistent failed verification is already a valid reason to quickfail. I'm concerned that a reviewer would use an LLM detector to check an article, the detector would incorrectly say that the article is AI, and the reviewer would fail it on that basis. AI detectors are notoriously unreliable - you can run a really old document, like the United States Declaration of Independence, through an AI detector to see what I'm talking about. (Edit - I would support changing WP:GACR criterion 3 - It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags - to list {{AI-generated}} as an example of a template that would merit a quickfail, since AI articles can already be quickfailed under that criterion. 13:01, 27 October 2025 (UTC))Epicgenius (talk)20:18, 26 October 2025 (UTC)[reply]
    Wrong. I don't use AI detectors, butthe best ones achieve ~99% accuracy. The Declaration of Independence is one of the worst possible counterexamples -- no shit, a famous English-language public domain text is all over the training data?Gnomingstuff (talk)02:24, 27 October 2025 (UTC)[reply]
    They have high numbers ofboth false positives and false negatives. See, for instance,this study:Looking at the GPT 3.5 results, the OpenAI Classifier displayed the highest sensitivity, with a score of 100%, implying that it correctly identified all AI-generated content. However, its specificity and NPV were the lowest, at 0%, indicating a limitation in correctly identifying human-generated content and giving pessimistic predictions when it was genuinely human-generated. GPTZero exhibited a balanced performance, with a sensitivity of 93% and specificity of 80%, while Writer and Copyleaks struggled with sensitivity. The results for GPT 4 were generally lower, with Copyleaks having the highest sensitivity, 93%, and CrossPlag maintaining 100% specificity. The OpenAI Classifier demonstrated substantial sensitivity and NPV but no specificity.
    The link you provided saysannotators who frequently use LLMs for writing tasks excel at detecting AI-generated text. This is abouthuman writers detecting AI, not AI detectors detecting AI. That is not what I am talking about. Other studies likethis one state that human reviewers have significant numbers of false positivesand false negatives when detecting AI:In Gao et al.’s study, blind human reviewers correctly identified 68% of the AI-generated abstracts as generated and 86% of the original abstracts as genuine. However, they misclassified 32% of generated abstracts as real and 14% of original abstracts as generated.Epicgenius (talk)02:57, 27 October 2025 (UTC)[reply]
    The study also contains a chart comparing the performance of automatic AI detectors such as Pangram, GPTZero, and Binoculars. As you would have noticed if you read it fully.Gnomingstuff (talk)16:03, 27 October 2025 (UTC)[reply]
    If you'd read to the conclusion you'd seeWhile AI-output detectors may serve as supplementary tools in peer review or abstract evaluation, they often misclassify texts and require improvement. The limitations section also notes that paraphrasing the AI output significantly decreases the detection rate. This clearly indicates they are not fit for the purpose they would be used for here - especially when the false positive rate is sometimes over 30%. We absolutely cannot afford to tell a third of users that their submission was rejected because they used AI when they didn't actually use AI.Thryduulf (talk)17:42, 27 October 2025 (UTC)[reply]
    Seconding what Thryduulf said. I've said my piece, though, so I won't belabor it any further. –Epicgenius (talk)01:38, 28 October 2025 (UTC)[reply]
  • Support It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems. Even without a rule, if you see evidence of AI, say so in the review, so that everyone can see the AI rabbit hole has been found. Nobody is obligated to go down that warren; note it and pass it by. Heck, make some warning templates or essays, so future reviewers understand their obligation. It should take 10+ hours to correctly verify an AI article, as it requires reading all sources and understanding the topic in depth. --GreenC20:52, 26 October 2025 (UTC)[reply]
    It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems: they don't have to at the moment. If there are problems, the review is already failed regardless of whether or not the problems result from AI use. If there are no problems, then whether AI was used is irrelevant.Thryduulf (talk)20:57, 26 October 2025 (UTC)[reply]
    Also, I should note that if a reviewer finds so many issues that the article requires 10+ hours to fix, it is already acceptable to quickfail based on these other issues. GA is supposed to be a lightweight process; reviewers already can fail articles if they find things like failed verification or issues needing maintenance banners, and determine that the issues can't be reasonably fixed within a week or so. The proposed GA criterion is well-intentioned, but I think focusing on themeans of writing the articles, rather than theends, is not the correct way to go about it.Epicgenius (talk)22:40, 26 October 2025 (UTC)[reply]
    With AI you don't even know errors exist. It took me 7 days once to find all the problems in an AI-generated article. It turned out to have a reasonable-sounding but nationalistic bent, supported by errors of omission. How do you know this without research on the topic? This is why so many are against AI: it's incredibly difficult to debug. Normally a nationalistic writer is easy to spot, but AI is such a good liar that not even the operators realize what it is doing. Not to say AI is impossible to use correctly, with a skilled, disciplined, and intellectually honest operator. — GreenC23:41, 26 October 2025 (UTC)[reply]
    I agree, and based on some known AI model biases, any controversial topic (designated or just by common sense) should probably have AI use banned completely.Kingsif (talk)23:50, 26 October 2025 (UTC)[reply]
    I do see, and agree with, the point that you would have to very carefully examine all claims in an article that is suspected of containing AI content. However,WP:GAQF criterion 3 (It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags ) already covers this. If an article is suspected of containing AI, and thus deserves (or has){{AI-generated}}, it is already eligible for a quick fail under QF criterion 3. –Epicgenius (talk)02:53, 27 October 2025 (UTC)[reply]
    Epicgenius, yeah that sounds right. Maybe somewhere in the GA rules there could be a reminder about adding{{AI-generated}} if AI is discovered during the vetting process. Then QF#3 takes effect. — GreenC06:27, 27 October 2025 (UTC)[reply]
    That's fair, and I can agree with adding it to QF#3. –Epicgenius (talk)13:02, 27 October 2025 (UTC)[reply]
  • Weak oppose While I am against use of AI in GA and the GANR process, I think this is a somewhat misguided proposal, as it covers things that would already fall under the quick fail criteria and does not actually identify the scope of the issues (i.e. what is considered obvious evidence of AI use?). I would be able to support a non-redundant and more detailed proposal, but it would need to be more fleshed out than this.IntentionallyDense(Contribs)02:18, 27 October 2025 (UTC)[reply]
    The proposal clearly identifies 'obvious evidence of AI use' as AI-generated references, such as those that can be detected by Headbomb's script, and remnants of AI prompt.Yours, &c.RGloucester04:03, 27 October 2025 (UTC)[reply]
    So you want people to quick-fail a nomination on the basis of @Headbomb's script, about which the documentation for the script says it "is not necessarily an issue ("AI, find me 10 reliable sources about Pakistani painterSadequain Naqqash")". That sounds like a bad idea to me.WhatamIdoing (talk)06:20, 27 October 2025 (UTC)[reply]
    I think I have made my stance on LLM use in relation to good articles very clear. You are free to object as you see fit.Yours, &c.RGloucester07:12, 27 October 2025 (UTC)[reply]
    Your wording says6. It contains obvious evidence of LLM use, such as AI-generated references or remnants of AI prompt. This tells me that you can’t have AI prompts in your writing and no AI-generated references. Okay… so both of those would be covered by the current criteria. It didn’t mention the Headbomb script. And I definitely would not support any quickfail criterion that relies on a user script, especially when the script states “This is not a tool to be mindlessly used.” Also, on what basis of the HB script are we quick failing?
    The current proposal tells me nothing about what is considered suspicious for AI usage beyond the existing quick fail criteria. It gives me no guidance as a reviewer as to what may be AI unless it is blatantly obvious.IntentionallyDense(Contribs)13:18, 27 October 2025 (UTC)[reply]
    WP:AISIGNS is a good start.Gnomingstuff (talk)20:15, 28 October 2025 (UTC)[reply]
    I agree, we do have some pretty solid parameters around what is a red flag for AI, but I believe any proposal around policy/guidelines for AI needs to incorporate those and lay out what that looks like before we take action on it. I would just like an open conversation on what editors think the signs of AI use are; if we can gain some consensus around indications of AI, then it will be a lot easier to implement policy on how to deal with those indications.
    My main issue with this proposal is that it completely skipped that first step of gaining consensus about what the scope of the problem is and jumped to implementing measures to resolve said problem that we have not properly reached consensus on.IntentionallyDense(Contribs)20:26, 28 October 2025 (UTC)[reply]
  • Oppose. If a GAN is poorly written, it fails the first criterion. If references are made up, it fails the second criterion. If the associated prose does not conform with the references, then it fails the second criterion.We shouldn't be adding redundant instructions to the good article criteria. I don't want new reviewers to be further intimidated by a long set of instructions.Steelkamp (talk)04:53, 27 October 2025 (UTC)[reply]
  • Oppose, as I pointed out in other discussions, and as many have already pointed out here, GA criteria should be focused on the result, not the process. But besides that, I see a high anti-AI sentiment in those discussions and fear that if those proposals are approved, they will be abused.Cambalachero (talk)13:16, 27 October 2025 (UTC)[reply]
  • Support. Playing whack-a-mole with AI is a problematic time sink for good-faith Wikipedia editors because of the disparity in how much time it takes an AI-using editor to make a mess and how much time it takes the good-faith editors to figure it out and clean it up. This is especially problematic in GA where even in the non-AI cases making a review can be very time consuming with little reward. The proposal helps reduce this time disparity and by doing so helps head off AI users from gaming the system and clogging up the nomination queue, already a problem. —David Eppstein (talk)17:29, 27 October 2025 (UTC)[reply]
    Please rewrite that. By writing "good-faith Wikipedia editors" meaning editors who do not use AI, you are implying that those who do are acting in bad faith.Cambalachero (talk)17:37, 27 October 2025 (UTC)[reply]
    "Consensus-abiding Wikipedia editors" and "editors who either do not know of or choose to disrespect the emerging consensus against AI content" would be too unwieldy. But I agree that many new editors have not yet understood the community's distaste for AI and are using it in good faith. Many other editors have heard the message but have chosen to disregard it, often while using AI tools to craft discussion contributions that insist falsely that they are not using AI. I suspect that the ones who have reached the stage of editing where they are making GA nominations may skew more towards the latter than the broader set of AI-using editors. AGF means extending an assumption of good faith towards every individual editor unless they clearly demonstrate that assumption to be unwarranted. It does not mean falsely pretending the other kind of editor does not exist, especially in a discussion of policies and procedures intended to head off problematic editing. —David Eppstein (talk)18:31, 27 October 2025 (UTC)[reply]
  • Support. People arguing that any article containing such things would ultimately fail otherwise are missing the point. The point is to make it aninstant failure so further time doesn't need to be wasted on it - otherwise, people would argue eg. "oh that trace of a prompt / single hallucinated reference is easily fixed, it doesn't mean the article as a whole isn't well-written or passesWP:V. There, I fixed it, now continue the GA review." One bad sentence or one bad ref isn't normally an instant failure; but in a case where it indicates that the article was poorly-generated via AI, it should be, since it means the entire article must be carefully reviewed and, possibly, rewritten before GA could be a serious consideration. Without that requirement, large amounts of time could be wasted verifying that an article is AI slop. This is especially true because the purpose of existing generative AI is to create stuff that looks plausible at a glance - it will oftennot be easy to demonstrate that it isa long way from meeting any one of the six good article criteria, wasting editor time and energy digging into material that had little time and effort put into it in the first place. That's not a tenable situation; once there is evidence that an article was badly-generated with AI, the correct procedure is to immediately terminate the GA assessment to avoid wasting further time, and only allow a new one once there is substantial evidence that the problem has been addressed by in-depth examination and improvement. Determining whether an articleshould pass or fail based only and strictly only on the quality of the article is a laborious, time-intensive process; it is absolutely not appropriate to demand that an article be given that full assessment once there's a credible reason to believe that it's AI slop. 
That's the entire point of the quickfail criteria - to avoid wasting everyone's time in situations where a particular easily-determined criterion means it is glaringly obvious that the article won't pass. --Aquillion (talk)19:46, 27 October 2025 (UTC)[reply]
    Bravo, Aquillion! You explained my rationale for this proposal better than I could have done. I am much obliged.Yours, &c.RGloucester23:55, 27 October 2025 (UTC)[reply]
  • Support in principle, although perhaps I'd prefer such obvious tells in the same criterion as the copyvio one. Like copyvio, the problems might not be immediately apparent, and like copyvio, the problems can be a headache to fix. LLM problems are possibly even much more of a timesink; checking through and potentially cleaning up LLM stuff is not a good use of reviewer time. This QF as proposed will only affect the most blatant signals that LLM text was not checked, which has its positives and negatives but is worth noting when thinking about the proposal.CMD (talk)01:34, 28 October 2025 (UTC)[reply]
  • Support. Deciding on the accuracy and relevance of every LLM's output is not sustainable on article talkpages or in articles. Sure, it could produce something passable, but there is no way to be sure without unduly wasting reviewer time. They're designed to generate text faster than any human being can produce or review itand designed in such a way as to make fake sources or distorted information seem plausible.--MattMauler (talk)19:34, 28 October 2025 (UTC)[reply]
  • Support: per Aquillion, whose reasoning matches my own thoughts exactly.fifteen thousand two hundred twenty four (talk)21:12, 28 October 2025 (UTC)[reply]
  • Support AI is killing our planet (to an even worse extent than other technologies) and we need to strongly discourage its use.JuxtaposedJacob(talk) | :) | he/him |00:46, 29 October 2025 (UTC)[reply]
  • Oppose This proposal is far too broad. This would mean that an article with a single potentially hallucinated reference (that may not have even been added by the nominator) would be quickfailed. Nope.voorts (talk/contributions)01:19, 29 October 2025 (UTC)[reply]
    If an article has even a single hallucinated reference, it should be quickfailed, as that means the nominator has failed to do the bare minimum of due diligence.CaptainEekEdits Ho Cap'n!19:28, 2 November 2025 (UTC)[reply]
    That's why I saidpotentially hallucinated. I'm worried this will be interpreted by some broadly and result in real, but hard to find, sources being deemed hallucinated. Also, sometimes editors other than the nominator edit an article in the months between nomination and review. We shouldn't penalize such editors with a quickfail over just one reference that they may not have added.voorts (talk/contributions)19:34, 2 November 2025 (UTC)[reply]
  • Comment Here's alist of editors who have completed a GAN review.voorts (talk/contributions)01:29, 29 October 2025 (UTC)[reply]
    "Whoever wants to know a thing has no way of doing so except by coming into contact with it, that is, by living (practicing) in its environment. ... If you want knowledge, you must take part in the practice of changing reality. If you want to know the taste of a pear, you must change the pear by eating it yourself.... If you want to know the theory and methods of revolution, you must take part in revolution. All genuine knowledge originates in direct experience." –Mao Zedong
    Editors who have never done a GA review or who have done very few should consider that they may not have adequate knowledge to know what GAN reviewers want/need as tools. It seems to me like a lot of support for this is a gut reaction against any AI/LLM use, and I don't think that's a good way to make rules.voorts (talk/contributions)15:48, 29 October 2025 (UTC)[reply]
    I like the way you’ve worded this, as it is my general concern as well. While I’m not too high up on that list, I’ve done 75-ish reviews and have never encountered AI usage. I know it exists and do see it as a problem; however, I don’t feel it deserves such a hurried reaction to create hard-and-fast rules. I would much prefer we take the time to properly flesh out a plan to deal with these issues that involves community input from a range of experiences and reviewers on the scope of the problem, how we should deal with it, and to what extent.IntentionallyDense(Contribs)20:32, 29 October 2025 (UTC)[reply]
    Yes. Recently, I've noticed a lot of editors rushing to push through new PAGs without much discussion or consideration of the issues beforehand. It's not conducive to good policymaking.voorts (talk/contributions)20:35, 29 October 2025 (UTC)[reply]
    I echo this sentiment. In my 100+ reviews done in the last year I have only had a few instances where I suspected AI use, and I can't think of any that had deep rooted issues clearly caused by AI.IAWW (talk)22:10, 29 October 2025 (UTC)[reply]
    This rule would’ve been useful years ago when we had users who really wanted to contribute but couldn’t write well enough, their primitive chat bot text was poor and they were unable to fix it, and keeping a review open to go through everything was the response because they didn’t want to close it and insisted it just needed work. As gen AI use is only increasing, addressing the situation before it gets that bad is a good thing.Kingsif (talk)19:13, 30 October 2025 (UTC)[reply]
    Cool, but I am easily highest up that list (which doesn’t count the reviews I did before it, or took over after) of everyone in this discussion, so your premise is faulty.Kingsif (talk)19:07, 30 October 2025 (UTC)[reply]
    I don't think my premise is faulty. I never said everyone who does GAN reviews needs to think the same way, nor do I believe that, and I see that you and other experienced GAN reviewers disagree with me. My point was that editors who have never done one should consider whether they have enough knowledge to make an informed opinion one way or the other.voorts (talk/contributions)19:12, 30 October 2025 (UTC)[reply]
    While you didn’t speak in absolutes, your premise was based on suggesting that the people who disagree with you aren’t aware enough. Besides being wrong, you must know it was unnecessary and rather unseemly to bring it up in the first place: this is a venue for everyone to contribute.Kingsif (talk)19:19, 30 October 2025 (UTC)[reply]
    That wasn't my premise. I just told you what my premise is and I stand by it. I felt like it needed to be said in this discussion because AI/LLM use is a hot button issue and we should be deliberative about how we handle it on wiki. If editors who have never handled a GAN review want to ignore me, they can. As you said, anyone can participate here.voorts (talk/contributions)19:51, 30 October 2025 (UTC)[reply]
    Forgive me for disagreeing with your point, then, but I don’t think it even really requires editing experience in general to have an opinion on “should we make people waste time explaining why gen AI content doesn’t get a Good stamp or just let them say it doesn’t”Kingsif (talk)20:11, 30 October 2025 (UTC)[reply]
    Fair enough. No need to apologize. I'm always open to disagreement.voorts (talk/contributions)20:39, 30 October 2025 (UTC)[reply]
  • Oppose. The proposal lacks clarity in definitions and implementation, and the solution is ill-targeted to the problems raised in this and the preceding discussion. Editors have stated that the rationale for new quick fail criteria is to save time. On the other hand, editors have said it takes hours to verify hallucinated references, and editors disagree vehemently about the reliability of subjective determinations of AI writing or use of AI detectors. Others have stated that it is already within the reviewer's purview to quick fail an article if they determine that too much time is required to properly vet the article. It is not clear how reviewers will determine that an article meets the proposed AI quick fail criterion, how long this will take, or that a new criterion is needed to fail such articles. Editors disagree about which signs of AI writing are "obvious" and as to whether all obvious examples are problematic. The worst examples would fail anyway, and seemingly without requiring hours to complete the review, so again it is unclear that this new criterion addresses the stated problem. Editors provided examples of articles with problematic, (allegedly) AI-generated content that have passed GA. New quick fail criteria would not address these situations where the reviewer apparently did not find the article problematic while another felt the problems were "obvious". Reviewers who are bad at detecting AI writing or don't verify sources or whatever the underlying deficit is won't invoke the new quick fail criterion and won't stop AI slop from attaining GA status.—Myceteae🍄‍🟫 (talk)01:42, 29 October 2025 (UTC)[reply]
  • Support in the strongest possible terms. This is half practical, and half principle: the principle being thatLLM/AI has no place on Wikipedia. Yes, there may be some, few, edge-cases where AI is useful on Wikipedia. But one good apple in a barrel of bad apples does not magically make the place that shipped you a barrel of bad apples a good supplier. For people who want an LLM-driven encyclopedia,Grokipedia is thataway →. For people who want an encyclopedia actually written by and for human usage,the line must be drawn here. -The BushrangerOne ping only01:53, 29 October 2025 (UTC)[reply]
  • Oppose per IntentionallyDense and because detecting AI generation isn't always "obvious", and becausethe nom's proposed method for detecting LLM use to generate the article's contents will also flag people who use (e.g.) ChatGPT as a web search engine without AI generating even a single word in the whole article. Also: if you want to make any article look very suspicious, then spam?utm_source=chatgpt.com at the end of every URL. The "AI detecting" script will light up every source on the page as being suspicious, because it's not actually detecting AI use; it's detecting URLs with some referral codes. Imight support adding{{AI generated}} to the list of other QF-worthy tags.WhatamIdoing (talk)02:09, 29 October 2025 (UTC)[reply]
  • Oppose. If it'sWP:G15-level, G15 it (no need to quickfail). Otherwise, we shouldn't go down the rabbit hole of unprovable editor behaviour and should focus on the actual quality of the article in front of us. If it has patently non-neutral language or several things fail verification, it can already be quick-failed as being a long way from the criteria.~ L 🌸 (talk)07:01, 29 October 2025 (UTC)[reply]
  • Oppose per MCE89. If you use an LLM to generate text and then use it as the basis for creating good, properly verified content, who cares? It's not as if a reviewer has to check every single citation — if you find one that's nonexistent, that alone should be sufficient to reject the article. Stating "X is Y"<ref>something</ref>, when "something" doesn't say so or doesn't even exist, is a hoax, and any hoax means that the article is a long way from meeting the "verifiable with no original research" criterion. And if we encounter "low effort usage of AI", that's certainly not going to pass a GA review. And why should something beinstantly failed just because you believe that it's LLM-generated? Solidly verifying that something is automatically written — not just a high suspicion, but solidly demonstrating — will take more work than checking some references, and as Whatamidoing notes, it's very difficult to identify LLM usage conclusively; we shouldn't quick-fail otherwise good content just because someone incorrectly thinks that it was automatically written. I understand that LLMs tend to use em-dashes extensively. I've always used them a lot more than the average editor does; this was the case even when I joined Wikipedia 19 years ago, long before LLMs were a problem this way.Nyttend (talk)10:44, 29 October 2025 (UTC)[reply]
  • Support per PMC, David Eppstein and Aquillion. A lot of opposes to me look like they are completely missing the point of a useful practical measure over irrelevant theoretical concerns. I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria.Choucas0 🐦‍⬛💬📋15:25, 29 October 2025 (UTC)[reply]
    I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria. I've reviewed a lot of GAs and oppose this because it's vague and a solution in search of a problem. I see that you've completed zero GAN reviews.voorts (talk/contributions)15:44, 29 October 2025 (UTC)[reply]
    You are entitled to your opinion, but so am I, and I honestly do not see what such a needlessly acrimonious answer is meant to achieve here. The closer will be free to weigh your opposition higher than my support based on experience, but in the meantime that does not entitle you to gate-keep and belittle views you disagree with because you personally judge them illegitimate.Choucas0 🐦‍⬛💬📋15:58, 29 October 2025 (UTC)[reply]
    You are entitled to your opinion. But when your opinion is based on the fact that something is insulting to a group to which I belong, I am entitled to point out that you're not part of that group and that you're not speaking on my behalf. I don't see how it'sacrimonious orgate-keep[ing] orbelittl[ing] to point out that fact.voorts (talk/contributions)16:06, 29 October 2025 (UTC)[reply]
    That is not what my opinion is based on (the first half of my comment pretty clearly is), and I did not mean to speak on anyone's behalf; I apologize if it was not clearer, since it is something that I aim to never do. I consider being exposed to raw LLM output insulting to anyone on this site, so I hope what I meant is clearer now. On another hand,your comment quoting Mao Zedong immediately after your first answer to me clearly shows that you do intend to gate-keep this discussion at large, so you will forgive me for being somewhat skeptical and not engaging further.Choucas0 🐦‍⬛💬📋16:31, 29 October 2025 (UTC)[reply]
    I'm not sure how pointing out that editors should think before they opine on something with which they have little to no experience is a form of gatekeeping. That's why I didn't say "those editors can't comment" in this discussion. It's a suggestion that people stop and think about whether they actually know enough to have an informed opinion.voorts (talk/contributions)16:44, 29 October 2025 (UTC)[reply]
  • Oppose per LEvalyn. Like WhatamIdoing, I would rather treat{{AI generated}} as reason to quick-fail under GA Criteria 1 and 2.ViridianPenguin🐧 (💬)15:35, 29 October 2025 (UTC)[reply]
    I've seen a couple people suggest this, and... I don't really get how this is different at all? Anything under the proposed criterion can be tagged as AI-generated already, this would just be adding an extra step.Gnomingstuff (talk)20:25, 31 October 2025 (UTC)[reply]
  • Oppose. If it has remnants of a prompt, that's alreadyWP:G15. If the references are fake, that's alreadyWP:G15. If it's not that bad, further review is needed and it shouldn't be QF'd. If AI-generated articles are being promoted to GA status without sufficient review, that means the reviewer has failed to do their job. Telling them their job is now also to QF articles that have signs of AI use won't help them do their job any better - theyalready didn't notice it was AI-generated. --asilvering (talk)15:56, 29 October 2025 (UTC)[reply]
  • Oppose. The article should be judged on its merits and its quality, not the manner or methods of its creation. The judgment should be based only on its quality. Any AI-generated references will fail criterion 2. If the AI-generated text is a copyright violation, it would be an instant failure as well. We don't need to write new rules for things that are forbidden in the first place anyway. Another concern for me is the term "obvious". While there may be universal agreement that some AI slop is obviously AI ("This article is written for your request...", "Here is the article..."), some might not be obvious to other people. The use of em-dashes might not be an obvious sign of AI use, as some ESL writers might use them as well. The term "obvious" is vague and it will create problems. Obvious AI slop can be dealt with via G15 as well.SunDawnContact me!02:55, 30 October 2025 (UTC)[reply]
  • Support - too much junk at this point to be worthwhile. Readers come here exactly because it is written by people and not Grokipedia garbage. We shouldn't stoop to that level.FunkMonk (talk)13:38, 30 October 2025 (UTC)[reply]
  • Support If there is an obvious trace of LLM use in the article and you are the creator, then you have no business being anywhere near article creation. If you are the nominator, then you have failed to apply a basic level of due diligence. Either way, the article will have to be gone over with a fine-tooth comb, and should be removed from consideration. --Elmidae(talk ·contribs)13:54, 30 October 2025 (UTC)[reply]
  • Support Per nom. LLM-generated text has no place on Wikipedia.The Morrison Man (talk)13:59, 30 October 2025 (UTC)[reply]
  • Support GAN is not just a quality assessment – it also serves as a training ground for editors. LLM use undermines this; using LLMs just will not lead to better editors. As a reviewer, I refrain from reading anything that is potentially AI generated, as it is simply not worth my time. I want to help actual humans with improving their writing; I am not going to pointlessly correct the same LLM mistakes again and again, which is entirely meaningless. LLM use should be banned from Wikipedia entirely. --Jens Lallensack (talk)15:59, 30 October 2025 (UTC)[reply]
  • Oppose. The Venn diagram crossover of "editors who use LLMs" and "editors who are responsible enough to be trusted to use LLMs responsibly" isincredibly narrow. It would not surprise me if 95% of LLM usage shouldn't merely be quickfailed, but actively rolled back. That said, just because most editors cannot be trusted to use it properly does not mean it is completely off the table - using an LLM to create a table in source from given input is fine, say. Additionally, AI accusations can prove a "witch hunt" where just because an editor's writing style includes em-dashes or bold, it gets an AI accusation - even though real textbooks may often also use bolding and em-dashes and everything too! If a problematic LLM article is found, it can still be quick-failed on criterion 1 (if the user wrote LLM-style rather than Wikipedia-style) or criterion 2 (if the user used the LLM for content without triple-verifying everything to real sources they had access to). We don't need a separate criterion for those cases.SnowFire (talk)18:40, 30 October 2025 (UTC)[reply]
  • Oppose – either the issues caused by AI make an articlea long way from meeting any one of the six good article criteria, in which case QF1 would apply, or they do not, in which case I believe a full review should be done. With the current state of LLMs, any article in the latter category will be one that a human has put significant work into. Some editors would dislike reviewing these nominations, but others are willing; I think makingWP:LLMDISCLOSE mandatory would be a better solution.jlwoodwa (talk)04:20, 31 October 2025 (UTC)[reply]
    I would also fully support mandatory LLM disclosureIAWW (talk)08:58, 31 October 2025 (UTC)[reply]
    But wouldn't those reviewers that are possibly willing to review an LLM generated article be primarily those that use LLMs themselves, have more trust in them, and probably even use them for their review? A situation where most LLM-generated GAs are reviewed by LLMs does not sound healthy. --Jens Lallensack (talk)12:00, 31 October 2025 (UTC)[reply]
    I think that's a stretch. I've used an LLM to create two articles, but wouldn't trust it to review an article against GAN criteria.ScottishFinnishRadish (talk)12:15, 31 October 2025 (UTC)[reply]
    LLM usage is a scale. It is not as black-and-white as those who use LLMs vs those who don't. I am of the opinion that LLMs should only be used in areas where their error rate is less than humans. In my opinion LLMs pretty much never write adequate articles or reviews, yet they can be used as tools effectively in both.IAWW (talk)13:22, 31 October 2025 (UTC)[reply]
  • Oppose - Redundant.Stikkyyt/c11:53, 31 October 2025 (UTC)[reply]
  • Oppose. GA is about assessing the quality of the article, not about dealing with prejudice toward any individual or individuals. If the article is bad (poorly written, biased, based on rumour rather than fact, with few cites to reliable sources), it doesn't matter who has written it. Equally, if an article is good (well written, balanced, factual, and well cited to reliable sources), it doesn't matter who has written it, nor what aid(s) they used. Lets assess the content not the contributor.SilkTork (talk)12:15, 31 October 2025 (UTC)[reply]
  • Oppose. Focuses too much on the process rather than on the end result. Also, the vagueness of 'obvious' lays the ground for after-the-event arguments on such things as "I already know this editor uses LLMs in the background; the expression 'stands as a ..' appears, and that's an obvious LLM marker".MichaelMaggs (talk)18:20, 31 October 2025 (UTC)[reply]
    I already know this editor uses LLMs in the background
    How is this not a solid argument?Gnomingstuff (talk)00:57, 3 November 2025 (UTC)[reply]
  • Support per Aquillion.Nikkimaria (talk)18:50, 1 November 2025 (UTC)[reply]
  • Support GA is a mark of quality. If you read something and you can obviously tell it is AI, that does not meet our standards of quality. Florid language, made-up citations, obvious formatting errors a human wouldn't make - whatever it is that indicates clear AI use, that doesn't meet our standards. Could we chalk that up to failing another criterion? Maybe. But it's nice to have a straightforward box to check to toss piss-poor AI work out - and to discourage the poor use of AI.CaptainEekEdits Ho Cap'n!19:34, 2 November 2025 (UTC)[reply]
  • Support In my view, if someone is so lazy that they generate an entire article to nominate without actually checking if it complies with the relevant policies and guidelines, then their nomination is not worth considering. Reviews are already a demanding process, especially nowadays. Why should I or anyone else put in the effort if the nominator is not willing to also put in the effort?Lazman321 (talk)03:10, 3 November 2025 (UTC)[reply]
    This proposal would impact those people, but it wouldalso speedily fail submissions by people who use (or are suspected of using) LLMs but whodo put in the effort to check that the LLM output complies with all the relevant policies and guidelines. For example:
    • Editor A uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support but doesn't remove the associated LLM metadata from the URL. This nomination is speedily failed, despite being unproblematic.
    • Editor B uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support, and removes the associated LLM metadata from the URL. This nomination is speedily failed if someone knows or suspects that an LLM was used, it is accepted if someone doesn't know or suspect LLM use, despite the content being identical and unproblematic.
    • Editor D finds a source without using an LLM, verifies that that source exists, is reliable, and supports the statement it is intended to support. This nomination is accepted, even though the content is identical in all respects to the preceding two nominations.
    • Editor D adds a source, based on a reference in an article they don't know is a hoax without verifying anything about the source. The reviewer AGFs that the offline source exists and does verify the content (no LLMs were used so there is no need to suspect otherwise) and so the article gets promoted.
    Please explain how this benefits readers and/or editors.Thryduulf (talk)04:36, 3 November 2025 (UTC)[reply]
    I am nitpicking, but you got two "Editor D" there.SunDawnContact me!00:55, 4 November 2025 (UTC)[reply]
    Whoops, the second should obviously be Editor E (I changed the order of the examples several times while writing it, obviously I missed correcting that).Thryduulf (talk)01:23, 4 November 2025 (UTC)[reply]
  • Oppose "Obvious" is subjective, especially if AI chatbots become more advanced than they are now and are able to speak in less stilted language. Furthermore, either we ban all AI-generated content on Wikipedia, or we allow it anywhere, this is just a confusing half-measure. (I am personally in support of a total ban, since someone skilled enough to proofread the AI and remove all hallucinations/signs of AI writing would likely just write it from scratch, it doesn't save much time). Or if it did, they'd still avoid it out of fear of besmirching their reputation given the sheer amount of times AI is abused.ᴢxᴄᴠʙɴᴍ ()11:31, 3 November 2025 (UTC)[reply]
    A total ban of AI has not gained consensus, in part because there are few 'half-measures' in place that would be indicative that there is a widespread problem. The AI image ban came only after a BLP image ban, for example.CMD (talk)11:53, 3 November 2025 (UTC)[reply]
    Adding hypocritical half-measures just to push towards a full ban would be "disrupting Wikipedia to make a point". As long as it's allowed, blocking it in GAs would make no sense. It's also likely that unedited AI trash will be caught by reviewers anyway because it's incoherent, even before we get to the AI criterion.ᴢxᴄᴠʙɴᴍ ()15:46, 4 November 2025 (UTC)[reply]
    I'm not sure where the hypocrisy is in the proposal. Whether reviewers will catch unedited AI trash is also not affected by the proposal, the proposal provides a route for action following the catch of said text.CMD (talk)16:01, 4 November 2025 (UTC)[reply]
  • Support - I think at some point LLMs like to cite Wikipedia whenever they spit out an essay or any kind of info on a given topic. Then an editor will paste this info into the article, which the AI will cite again, and Wikipedia articles will basically end up ouroboros'd.User:shawtybaespade (talk)12:01, 3 November 2025 (UTC)[reply]
  • Support Aquillion's comment above expresses my view very well.Stepwise Continuous Dysfunction (talk)20:11, 3 November 2025 (UTC)[reply]
  • Support. Obviously a sensible idea.Stifle (talk)21:15, 3 November 2025 (UTC)[reply]
    There is extensive explanation above of why this is not a good proposal, so this comment just indicates you haven't read anything of the discussion, which is something for the closer to take note of.Thryduulf (talk)21:25, 3 November 2025 (UTC)[reply]
    I urge everyone not to make inferences about what others have read. The wide diversity of opinions makes it clear that different editors find different arguments compelling, even after reading all of them.isaacl (talk)23:58, 3 November 2025 (UTC)[reply]
    If Stifle had read and thought about any of the comments on this page it would be extremely clear that it is not "obviously" a sensible idea. Something that is "obviously" a sensible proposal does not get paragraphs of detailed explanation about why it isn't sensible from people who think it goes too far and from those who think it doesn't go far enough.Thryduulf (talk)01:27, 4 November 2025 (UTC)[reply]
  • Oppose I feel the scope that this criterion would cover is already made redundant by the other criteria (see Thryduulf's !vote). Additionally, I am concerned that this will raise false positives for those whose writing style is too close to what an LLM could generate.Gramix13 (talk)23:02, 3 November 2025 (UTC)[reply]
  • Support, per the views expressed by Aquillion. Furthermore, I would rather not see Wikipedia become a Grokipedia.Lf8u2 (talk)01:41, 4 November 2025 (UTC)[reply]
    Grokipedia is uncontrolled AI slop where no one can control the content (except for Elon Musk and his engineers). Wikipedia's current rules are enough to stop such a travesty without adding this quickfail category. GA criteria #1 and #2 are more than enough to stop the AI slop, and G15 is still there as well. No need to put rules on top of other rules.SunDawnContact me!04:37, 4 November 2025 (UTC)[reply]
  • Oppose Per the statements made above and in the discussion that the failures of AI (hallucination) are easily covered by criteria 1 and 2. But, additionally, because I am not confident that AI is easily detected. AI-detector tools are huge failures, and my own original works on other sites have been labeled AI in the past when they're not. So I personally have experience being accused of using AI when I know my work is original, all because I use em-dashes. And since AI is only going to improve and become even harder to detect, this criterion is most likely going to be used to give false confidence to over-eager reviewers ready to quick-fail based on a hunch. Terrible idea.--v/r -TP01:40, 4 November 2025 (UTC)[reply]
  • Support after consideration. I do not love how the guideline is currently written - I think all criteria for establishing "obvious" LLM use should be defined. However, I would rather support and revise than oppose. Seeing multiple frequent GA reviewers !vote support also suggests there is a gap with the current QF criteria.NicheSports (talk)04:58, 4 November 2025 (UTC)[reply]
  • Support, the fact that people are starting to write like AI/bots/LLMs means that "false positives" will be detecting (in some cases) users who are too easily influenced by what they are reading. Let's throw those babies out with the bathwater.Abductive (reasoning)05:16, 4 November 2025 (UTC)[reply]
    It's literally the opposite of that. LLMs and GenAI are trained on human writing. They mimic human writing. Not the other way around. And are you suggesting banning users for writing in similar prose to the highly skilled published authors that LLMs are trained on? What the absolute fuck?!?--v/r -TP15:21, 4 November 2025 (UTC)[reply]
    TheWashington Post says "It’s happening: People are starting to talk like ChatGPT", with the subheading "Unnervingly, words overrepresented in chatbot responses are turning up more in human conversation."Abductive (reasoning)06:20, 5 November 2025 (UTC)[reply]
    LLMs are trained on highly skilled published authors? Pull the other one, it's got bells on. I didn't know highly skilled published authors liked to delve into things with quite so many emojis.Cremastra (talk ·contribs)15:26, 4 November 2025 (UTC)[reply]
    Yes, LLMs are trained on published works. Duh.--v/r -TP00:47, 5 November 2025 (UTC)[reply]
    Yeah, I know that. Dial down the condescension. But they're trained on all published works, including plenty of junk scraped from the internet. Most published works aren't exactly Terry Pratchett-quality either.Cremastra (talk ·contribs)00:56, 5 November 2025 (UTC)[reply]
    You want me to dial down the condescension on a request that anyone whose prose is similar to that of the material the AI is trained on, including published works, be banned? Did you read the top level comment that I'm being snarky to?--v/r -TP00:59, 5 November 2025 (UTC)[reply]
    I did, and it isn't relevant here. What's relevant is your misleading claim that AI writing represents the best-quality writing humanity has to offer and is acceptable to be imitated. In practice, it can range from poor to decent, but rarely stellar.Cremastra (talk ·contribs)01:07, 5 November 2025 (UTC)[reply]
    First off - I made no such claim. I said AI is trained on some of the best quality writing humanity has to offer. Don't put words in my mouth. Second off - Even if I did, calling for a ban on users who contribute positively because their writing resembles AI is outrageous. Get your priorities straight or don't talk to me.--v/r -TP22:17, 5 November 2025 (UTC)[reply]
    Modern LLMs are trained on very large corpuses, which include everything from high-quality to low-quality writing. And even if one were trained exclusively on high-quality writing, that wouldn't necessarily mean its output is also high-quality. But I agree that humans picking up speech patterns from LLMs doesn't make them incompetent to write an encyclopedia.jlwoodwa (talk)22:30, 5 November 2025 (UTC)[reply]
  • Support per Aquillion and Lf8u2; I assume most editors would not want Wikipedia to become Grokipedia, a platform of AI slop. LLMs such as Grok and ChatGPT write unencyclopedically or unnaturally and cite unreliable sources such as Reddit.Alexeyevitch(talk)07:55, 4 November 2025 (UTC)[reply]
    Users !opposed to this proposal are not supportive of AI slop or a 'pedia overrun by AIs. It's just a bad proposal.--v/r -TP15:22, 4 November 2025 (UTC)[reply]
We can either wait for the 'perfect' proposal, which may never come, or try something like this, so as to have some recourse. It has been years since ChatGPT arrived. If there are some problems that arise with this criterion in actual practice, they can be dealt with by modifying the criterion through the usual Wikipedia process of trial and error. The point is that there is value merely in expressing Wikipedia's stance on AI in relation to good articles. I hope you can understand that users who support this proposal think something is better than nothing, which is the current state of affairs.Yours, &c.RGloucester22:02, 4 November 2025 (UTC)[reply]
There is already something. 1) That sources in a GA review are verified to support the content, and 2) That it follows the style guide. What does this new criterion add that isn't already captured by the first two?--v/r -TP00:48, 5 November 2025 (UTC)[reply]
Adding this criterion will make clear what is already expected in practice: namely, that editors should not waste reviewer time by submitting unreviewed LLM-generated content to the good articles process, as Aquillion wrote above. It is true that the other criteria may be able to be used to quick-fail LLM-generated content. This is also true of articles with copyright violations, however, which could logically be failed under 1 or 3, but have their own quick-fail criterion, 2. I would argue that the purpose of criterion 2 is equivalent to the purpose of this new, proposed criterion: namely, to draw a line in the sand. The heart of the matter is this: what is the definition of a good article on Wikipedia? What does the community mean when it adds a good article tag to any given article? Adding this criterion makes clear that, just as we do not accept copyright violations, even those that are difficult to identify, like close paraphrasing, we brook no slapdash use of LLMs.Yours, &c.RGloucester01:34, 5 November 2025 (UTC)[reply]
I disagree that the quickfail criterion, as proposed, would make that clear. Not all obvious evidence of LLM use is evidence of unreviewed LLM use.jlwoodwa (talk)01:43, 5 November 2025 (UTC)[reply]
Any 'successful' use of LLMs, if there can be such a thing, should leave no trace behind in the finished text. If the specified bits of prompt or AI-generated references are present, that is evidence that whatever 'review' may have been conducted was insufficient to meet the expected standard.Yours, &c.RGloucester07:25, 5 November 2025 (UTC)[reply]
If someone verifies that an article's references exist and support the claims they're cited for, I would call that a sufficient review of those references, whether or not there areUTM parameters remaining in the citation URLs.jlwoodwa (talk)07:47, 5 November 2025 (UTC)[reply]
No, because not only the references would need to be checked. If such 'obvious evidence' of slapdash AI use is present, the whole article will need to be checked for hallucinations, line by line.Yours, &c.RGloucester07:53, 5 November 2025 (UTC)[reply]
Shit flow diagram andMalacca dilemma have obvious evidence of LLM use, in that my edit summaries make it clear I used an LLM, and I've disclosed that on the talk page. Is that sufficient for a quickfail?ScottishFinnishRadish (talk)11:45, 5 November 2025 (UTC)[reply]
Unfortunately I think it might be. Malacca dilemma for example, claims a pipeline has been "operational since 2013" using a source published in 2010 (and that discusses October 2009 in the future tense); in fact most of that paragraph seems made up around the bare bones of the potential for these future pipelines being mentioned. I assume the llm is drawing from other mentions of the pipelines somewhere in its training data, or just spinning out something plausible. Another llm issue is that when actually maintaining the transfer of source content into article prose, such as the first paragraph of background, it can be quite CLOPpy.CMD (talk)11:59, 5 November 2025 (UTC)[reply]
That's actually found in Chen (in a table on the China-Myanmar oil pipeline via Kunming), but with 2013 given as the planned operational date: "Construction of the two pipelines will begin soon and is expected to be completed by 2013. China National Petroleum Corporation (CNPC), the largest oil and gas company in China, holds 50.9 per cent stake in the project, with the rest owned by the Myanmar Oil and Gas Enterprise (MOGE). (Sudha, 2009)"
I've wikilinked the article on the pipeline and added another source for becoming operational in 2013. That was definitely an issue, but would you call that a quick fail?ScottishFinnishRadish (talk)12:12, 5 November 2025 (UTC)[reply]
What's found in Chen are parts of the text, the bare bones I mention above. The rest of the issues with that paragraph remain, and it is the presence of many of these issues, especially with the way llms work by producing words that sound right whether actual information or not, that is the problem.CMD (talk)12:35, 5 November 2025 (UTC)[reply]
Oppose as WP:CREEP—bad GA noms should be failed for being bad, not specifically for using AI.– Closed Limelike Curves (talk)01:23, 6 November 2025 (UTC)[reply]

1) That AI usage is bad and this proposal addresses it. I think all editors here agree that uncritical use of AI is an increasing problem facing Wikipedia. What editors don't agree on is that this proposal effectively addresses the issue in a way that doesn't introduce new problems. 2) This will save editor time when reviewing for GAR. I find this very unconvincing: I haven't seen any examples or explanations of an article for which this guideline would save editor time; rather, it would magnify debates over whether or not particular aspects of an article are "evidence of AI usage". Editors argue that there is a class of articles that would be quickfailed under this guideline but are currently not quickfailed, and that this detracts from editor time. Whether or not this class of articles exists (something I'm skeptical of given the lack of examples), the application of the guideline would not effectively reduce editor time because there is no clear way to apply it - the lack of specificity in the proposal defeats its functionality. In general, I haven't seen any evidence that there is a problem here. If a problem exists with how GA reviews function, that evidence is still forthcoming. I agree with editors who point out that this guideline is a solution in search of a problem.Katzrockso (talk)22:38, 12 November 2025 (UTC)[reply]

  • Support per RGloucester's reasoning. AI is clearly the next big thing, and we have to make sure that its hallucinations, including making up fake sources, which can be incredibly dangerous, do not get in the way of good articles.

Discussion (GA quick fail)

[edit]
  • I sometimes use AI as a search engine and link remnants are automatically generated. I'd rather not face quickfail for that. I'm also not seeing how the existing criteria are not sufficient; if links are fake or clearly don't match the text, that is already covered under a quickfail as being a long way from demonstrated verifiability. Can a proponent of this proposal give an example of an article they would be able to quickfail under this that they can't under the current criteria?Rollinginhisgrave (talk |contributions)10:47, 26 October 2025 (UTC)[reply]
    The purpose of this proposal is to draw a line in the sand, to preserve the integrity of the label 'good article', and make clear where the encyclopaedia stands.Yours, &c.RGloucester12:55, 26 October 2025 (UTC)[reply]
    In a nutshell the difference is that with AI-generated text, every single claim and source must be carefully checked, and not just for the source's existence; GA only requires spot-checking a handful. The example I gave above was an FA, not a GA, but it's basically the same thing.Gnomingstuff (talk)17:56, 26 October 2025 (UTC)[reply]
    Thank you for this example, although I'm not sure how it's applicable here, as it wouldn't fall under "obvious evidence of LLM use". At what point in seeing edits like this are you invoking the QF?Rollinginhisgrave (talk |contributions)21:23, 26 October 2025 (UTC)[reply]
    The combination of "clear signs of text having been written by AI..." plus "...and there are multiple factual inaccuracies in that text." Or in other words, obvious evidence (#1) plus problems that suggest that the output wasn't reviewed well/at all (#2).Gnomingstuff (talk)02:20, 27 October 2025 (UTC)[reply]
    I've spent some time thinking about this. Some thoughts:
    • What you describe as obviously AI is very different to what RGloucester describes here, which makes me concerned about reading any consensus for what proponents are supporting.
    • I would describe what you encountered at La Isla Bonita as "possible/probable AI use", not "obvious", and your description of it as "clear" is unconvincing, especially when put against cases where prompts are left in etc.
    • If I encountered multiple substantial TSI issues like that and suspected AI use, I would be more willing to quickfail, as I would have less trust in the text's verifiability. I would want other reviewers to feel emboldened to make the same assessment, and I think it's a problem if they are not currently willing to do so because of how the QF criteria are laid out.
    • I see no evidence that this is actually occurring.
    • I think that the QF criteria would have to be made broader than proposed ("likely AI use") to capture such occurrences, and I would like to see wording which would empower reviewers in that scenario but would avoid quickfails where AI use is suspected but only regular TSI issues exist (for those who do not review regularly, almost all spot checks will turn up issues with TSI).
    Rollinginhisgrave (talk |contributions)17:49, 29 October 2025 (UTC)[reply]
    Not a fan of RGloucester's criteria tbh, I don't feel like references become quickfail-worthy just because someone used ChatGPT search, especially given that AI browsers now exist.
    As far as the rest this is why I !voted weak support and not full support -- I'm not opposed to quickfail but it's not my preference. My preference is closer to "don't promote until a lot more review/rewriting than usual is done."Gnomingstuff (talk)05:00, 1 November 2025 (UTC)[reply]
    It's correct that every single claim and source needs to be carefully checked, but it needs to be checked by theauthor, not the GA reviewer. The spot check is there to verify the author did their part in checking.– Closed Limelike Curves (talk)01:23, 6 November 2025 (UTC)[reply]
  • What's "obvious evidence of AI-generated references"? For example, I often use the automatic generation feature of the visual editor to create a citation template. Or I might use a script to organise the references into the reflist. The proposal seems to invite prejudice against particular AI tells, but these include things like using an em-dash, and so are unreliable.Andrew🐉(talk)10:53, 26 October 2025 (UTC)[reply]
    Yeah, it's poorly written. "Obvious evidence of AI-generated references" in this context means a hallucination of a reference that doesn't exist.Viriditas (talk)02:43, 28 October 2025 (UTC)[reply]
    If the article cites a nonexistent reference, that should be grounds for failure by itself.– Closed Limelike Curves (talk)01:16, 6 November 2025 (UTC)[reply]
  • What about something similar toWP:G15, for example6. It contains content that could only plausibly have been generated by large language models and would have been removed by any reasonable human review.Kovcszaln6 (talk)10:59, 26 October 2025 (UTC)[reply]
  • This would be the first GA criterion that regulates the workflow people use to write articles rather than the finished product, which doesn't make much sense because the finished product is all that matters. Gen AI as a tool is also extremely useful for certain tasks; for example, I use it to search for sources I may have missed (it is particularly good at finding multilingual sources), to add row scopes to tables to comply with MOS:DTAB, to double-check that table data matches the source, and to check for any clear typos/grammar errors in finished prose.IAWW (talk)11:05, 26 October 2025 (UTC)[reply]
    It’s irrelevant to this discussion, but I don’t think it’s right to call something “extremely useful” when the tasks are layout formatting, source-finding, and copy editing - skills you can and should develop for yourself. You will get better the more you try, and when even just pretty good, you will be better than a chatbot. You also really don’t need gen AI to edit tables; there are completely non-AI tools to extract datasets and add fixed content in fixed places, tools that you know won’t throw in curveballs at random.Kingsif (talk)14:24, 26 October 2025 (UTC)[reply]
    Well, "extremely useful" is subjective, and in my opinion it probably saves me about 30 mins per small article I write, which in my opinion justifies the adjective. I still do develop all the relevant skills myself, but I normally make some small mistakes (like for example putting a comma instead of a full stop), which AI is very good at detecting.IAWW (talk)14:55, 26 October 2025 (UTC)[reply]
    You still don’t need overconfident error-prone gen AI for spellcheck. Microsoft has been doing it with pop ups that explain why your text may or may not have a mistake for almost my whole life.Kingsif (talk)15:02, 26 October 2025 (UTC)[reply]
    GenAI is just faster and easier to use for me.IAWW (talk)16:15, 26 October 2025 (UTC)[reply]
    Well yes, if you consider speed and ease of use to be more important than accuracy, generative AI is probably the way to go...AndyTheGrump (talk)21:09, 26 October 2025 (UTC)[reply]
    @AndyTheGrump is there any evidence that the uses to which IAWW puts generative AI result in a less accurate output than doing it manually?Thryduulf (talk)21:11, 26 October 2025 (UTC)[reply]
    I have no idea how accurately IAWW can manually check spelling, grammar etc. That wasn't the alternative offered however, which was to use existing specialist tools to do the job. They can get things wrong too, but rarely in the making-shit-up tell-them-what-they-want-to-hear way that generative AI does.AndyTheGrump (talk)21:40, 26 October 2025 (UTC)[reply]
    Generative AI can do that in certain situations, but something like checking syntax doesn't seem like one of those situations. Anyway, if the edits IAWW makes to Wikipedia are accurate and free of neutrality issues, fake references, etc., why does it matter how that content was arrived at?Thryduulf (talk)21:48, 26 October 2025 (UTC)[reply]
    'If' is doing a fair bit of work in that question, but ignoring that, it wouldn't, except in as much as IAWW would be better off learning to use the appropriate tools, rather than using gen AI for a purpose other than that it was designed for. I'd find the advocates of the use of such software more convincing if they didn't treat it as if it was some sort of omniscient and omnipotent entity capable of doing everything, and instead showed a little understanding of what its inherent limitations are.AndyTheGrump (talk)23:02, 26 October 2025 (UTC)[reply]
    I clearly don't treat it as an omniscient and omnipotent entity, and I welcome any criticism of my work.IAWW (talk)08:05, 27 October 2025 (UTC)[reply]
    To me - and look, as much as it's a frivolous planet-killer, I am not going to go after any individual user for non-content AI use, but I will encourage them against it - if we assume there are no issues with IAWW's output, my main concern would be the potential regression in IAWW's own capabilities for the various tasks they use an AI for, and how this could affect their ability to contribute to the areas of Wikipedia they frequent. E.g. if you are never reviewing your own writing and instead letting AI clean it up, will your ability to recognise in/correct grammar and spelling deteriorate, and therefore your ability to review others' writing? That, however, would be a personal concern, and something I would not address unless such an outcome became serious. As I said, with this side point, I just want to encourage people to develop and use these skills themselves.Kingsif (talk)23:21, 26 October 2025 (UTC)[reply]
    why does it matter how that content was arrived at? Value? Morality? If someone wants ChatGPT,it's over this way. We're an encyclopedia. We have articles with value written by people who care about the articles. LLM-generated articles make a mockery of that. Why would you deny our readers this? I genuinely can't understand why you're so pro-AI. Do you not see how AI tools, while they have some uses, are completely incompatible with our mission of writing good articles?Cremastra (talk ·contribs)01:57, 28 October 2025 (UTC)[reply]
    Once again, Wikipedia is not a vehicle for you to impose your views on the morality of AI on the world. Wikipedia is a place to write neutral, factual encyclopaedia articles free of value judgements - and that includes value judgements about tools other people use to write factual, neutral articles.Thryduulf (talk)02:17, 28 October 2025 (UTC)[reply]
    Your refusal to take any stance on a tool that threatens the value of our articles is starting to look silly. As I say here, we take moral stances on issues all the time, and LLMs are right up our alley.Cremastra (talk ·contribs)02:28, 28 October 2025 (UTC)[reply]
    That LLMs are a tool that threatens the value of our articles is your opinion, seemingly based on your dislike of LLMs and/or machine learning. You are entitled to that opinion, but that does not make it factual.
    If an article is neutral and factual then it is neutral and factual regardless of what tools were or were not used in its creation.
    If an article is not neutral and factual then it is not neutral and factual regardless of what tools were or were not used in its creation.Thryduulf (talk)02:52, 28 October 2025 (UTC)[reply]
    You missed two: If an article is not neutral and factualand was written by a person, you can ask that person to retrace their steps in content creation (if not scan edit-by-edit to see yourself) so everyone can easily identify where the inaccuracies originated and fix them. If an article is not neutral and factual and you cannot easily trace its writing process, it is hard to have confidence in any content at all when trying to fix it.Kingsif (talk)03:01, 28 October 2025 (UTC)[reply]
    It’s irrelevant to this discussion, but I don’t think it’s right to call a calculator “extremely useful” when the tasks are division, exponentiation, and root-finding - skills you can and should develop for yourself.– Closed Limelike Curves (talk)01:18, 6 November 2025 (UTC)[reply]
Nonconstructive.FaviFake (talk)05:26, 12 November 2025 (UTC)[reply]
The following discussion has been closed.Please do not modify it.
  • Are you being deliberately obtuse and pulling back a week-ended thread just to make aWP:POINT? Tut tut. And all for an incorrect "nootice" too. Humans are inherently better at determining source usefulness and copyediting than a computer will ever be. Computers, however, were literally created to do routine but lengthy mathematical equations.Kingsif (talk)20:42, 10 November 2025 (UTC)[reply]
    Are you being deliberately obtuse and pulling back a week-ended thread just to make aWP:POINT?
    I missed the date stamp on this message, and the conversation is still going on further upthread. Please remembercivility.– Closed Limelike Curves (talk)18:12, 11 November 2025 (UTC)[reply]
    Honestly, I think it's very interesting you think someone questioning your pointless-besides-provocation comment is less civil than (regardless of date) yourself making the pointless-besides-provocation comment.Kingsif (talk)21:34, 11 November 2025 (UTC)[reply]
    I find it interesting you continue to use a condescending tone after calling a colleague "obtuse" and shushing them like a child. Lots of things are interesting; some are even so interesting they might get administrators' attention.Cremastra (talk ·contribs)21:54, 11 November 2025 (UTC)[reply]
    That was an honest comment, mate (very literally noted as such), and even re-reading I don't see where you find condescension at all. But should I take your mimicry to be a deliberate use of condescension if that is how you apparently believe it's used? If so, much like CLC's, your comment has an edge of provocation with no discussion value. Why?Kingsif (talk)23:14, 11 November 2025 (UTC)[reply]
  • Now, relevantly, this proposal clearly does not regulate workflow, only the end product. It only refers to the article itself having evidence of obvious AI generation in its actual state. Clean up after your LLMs and you won’t get caught and charged 😉Kingsif (talk)14:28, 26 October 2025 (UTC)[reply]
    The "evidence" in the end product is being used to infer things about the workflow, and the stuff in the workflow is what the proposal is targeting.IAWW (talk)14:50, 26 October 2025 (UTC)[reply]
    Y’all know I think gen AI is incompatible with Wikipedia and would want to target it, but I don’t think this proposal does that. If there are AI leftovers, that content at least needs human cleanup, and that shouldn’t be put on a reviewer. That’s no different to identifying copyvio and quickfailing, saying a nominator needs to work on it rather than sink time into a full review.Kingsif (talk)14:59, 26 October 2025 (UTC)[reply]
  • Regarding "fake references", I can see the attraction in this being changed from a slow fail to a quick fail, but before it can be a quick fail there needs to be areliable way to distinguish between references that are completely made up, references that exist but are inaccessible to (some) editors (e.g. offline, geoblocked, paywalled), references that used to be accessible but no longer are (e.g. linkrot), and references with incorrect details (e.g. typos in URIs/dois/ISBNs/titles/etc).Thryduulf (talk)12:56, 26 October 2025 (UTC)[reply]
    If you cannot determine if a reference that doesn’t work is AI or not, then it’s not obvious AI and this wouldn’t apply…Kingsif (talk)14:09, 26 October 2025 (UTC)[reply]
    I think this is the problem: The proposal doesn't say "a reference that doesn’t work". It says "AI-generated references". Now maybe @RGloucester meant the kind of ref that's completely fictional, rather than real sources that someone found by using ChatGPT as a type of cumbersome web search engine, but that's not clear from what's written in the proposal.
    This is a bit concerning, because there have been problems with citations that people can't check since before Wikipedia's creation – for example:
    • Proof by reference to inaccessible literature: The author cites a simple corollary of a theorem to be found in a privately circulated memoir of the Slovenian Philological Society, 1883.
    • Proof by ghost reference: Nothing even remotely resembling the cited theorem appears in the reference given.
    • Proof by forward reference: Reference is usually to a forthcoming paper of the author, which is often not as forthcoming as at first.
    – and AI is adding to the traditional list the outright fabrication of sources: "Proof by non-existent source: A paper is alleged to exist, except that no such paper ever existed, and sometimes the alleged author and the alleged journal are made-up names, too". These are all problems, but they need different responses in the GA process. Made-up sources should be WP:QF #1: "It is a long way from meeting any one of the six good article criteria" (specifically, the requirement to cite real sources). A ghost reference is a real source, but what's in the Wikipedia article {{failed verification}}; depending on the scale, that's a surmountable problem. A forward reference is an unreliable source, but if the scale is small enough, that's also a surmountable problem. Inaccessible literature is not grounds for failing a GA nom.
    If this is meant to be "most or all of the references are to sources that actually don't exist (not merely offline, not merely inconvenient, etc.)", then it can be quick-failed right now. But if it means (or gets interpreted as) "the URL says ?utm=chatgpt", then that's not an appropriate reason to quick-fail the nomination.WhatamIdoing (talk)06:10, 27 October 2025 (UTC)[reply]
    Perhaps a corollary added to existing crit, saying that such AI source invention is a QF, would be more specific and helpful. I had thought this proposal was good because it wasn’t explicitly directing reviewers to “this exact thing you should QF”, but if there are reasonable concerns (not just the ‘but I like AI’ crowd) that the openness could instead confuse reviewers, then adding explicit AI notes to existing crit may be a better route.Kingsif (talk)16:05, 27 October 2025 (UTC)[reply]
  • Suggestion: change the fail criterion to read "obvious evidence of undisclosed LLM use". There are legitimate uses of LLMs, but if LLM use is undisclosed then it likely hasn't been handled properly and shouldn't be wasting reviewers' time, since more than a spot-check is required as explained byGnomingstuff.lp0 on fire ()09:17, 27 October 2025 (UTC)[reply]
    My concern here is that these in practice are basically the same thing:WP:LLMDISCLOSE is not mandatory, so almost all LLM use is undisclosed, even when people are doing review.Gnomingstuff (talk)16:08, 27 October 2025 (UTC)[reply]
    It would also be so hard to implement making it mandatory, in practice. Heavy rollout means some users may not even know when they’ve used it. Left google on AI mode (or didn’t turn it off…)? Congrats, when you searched for a synonym you “used” an LLM.Kingsif (talk)16:12, 27 October 2025 (UTC)[reply]
  • Any evidence of LLM use? Does that include disclosed LLM use in article development/creation? See Shit flow diagram and Malacca dilemma for examples. Should both of those quick-fail based only on LLM use? ScottishFinnishRadish (talk) 11:10, 27 October 2025 (UTC)[reply]
    I took evidence to mean things in the article. I hope no reviewer would extend the GA crit to things not reviewed in the GAN process - like an edit reason or other disclosure. I can see the concern that this wording could allow or encourage them to, now that you bring it up.Kingsif (talk)15:56, 27 October 2025 (UTC)[reply]
    A difficult part of workshopping any sort of rule like this is you have to remember not everyone who uses it will think the same way you do, or even the way the average person does. What I'd hate to see happen is we pass something like this and then have to come back multiple times to edit it because of people using it as license to go open season on anything they deem AI, evidence or no evidence. I don't mean to suggestyou would do anything like that, Kingsif, butsomeone out there probably will.Trainsandotherthings (talk)01:52, 28 October 2025 (UTC)[reply]
    I didn't think you were suggesting so ;) As noted, I agree. As much as obviousshould mean obvious and evidenceshould be tangible evidence, and the spirit of the proposalshould be clear... I still support it, as certainly less harmful than not having something like it, but I can see how even well-intentioned reviewers trying to apply it could go beyond this limited proposal's intention.Kingsif (talk)01:59, 28 October 2025 (UTC)[reply]
  • I mentioned this above in my !vote, but isn't this already covered by WP:GAQF #3 ("It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags")? Any blatant use of AI means that the article deserves {{AI-generated}} and, as such, already is QF-able. All that has to be done is to modify the existing QF criterion 3 to make it explicit that AI generation is a rationale that would cause QF criterion 3 to be triggered. – Epicgenius (talk) 01:44, 28 October 2025 (UTC)[reply]
    To keep it short, isn't QF3 just a catch-all for "any clean-up issues that might not completely come under 1 & 2", such that theoretically both those quickfail conditions come under it and are unnecessary? But they're important enough to get their own coverage? Then we ask whether unmonitored gen AI is more or less significant than the GA criteria and copyvio. Kingsif (talk) 02:06, 28 October 2025 (UTC)[reply]
  • Suggestion: combining the obvious use of AI with evidence that the submission falls short of any of the other six GA criteria (particularly criterion 2). Many of the current Opposes reflect a sentiment that this policy would encapsulate too much: instead of reflecting the state of the article, it punishes those who use AI in their workflow. This suggestion would cover a quickfail of articles with AI-hallucinated references (so, for instance, if a reviewer notes a source with a ?utm_source=chatgpt.com tag and determines that the sentence is not verifiable, they can quickfail it); however, this suggestion limits the quickfail potential for people who use AI, review its outputs, and put work into making sure it meets the guidelines for a Wikipedia article. Staraction (talk | contribs) 07:41, 30 October 2025 (UTC)[reply]
    We already have this: it'sWP:QF #1.Kovcszaln6 (talk)08:23, 30 October 2025 (UTC)[reply]
    Sorry, I don't think I worded the tqi part well. I mean that, if there is obvious use of AI and any evidence at all of a hallucinated source, unverified citation, etc., the reviewer is allowed to quickfail.
    If this still isWP:QF #1, then I sincerely apologize for wasting everybody's time.Staraction (talk |contribs)08:39, 30 October 2025 (UTC)[reply]
    I see. I support this.Kovcszaln6 (talk)08:45, 30 October 2025 (UTC)[reply]

RfC: Should edit filter managers be allowed to use the "revoke autoconfirmed" action in edit filters?



An edit filter can perform certain actions when triggered, such as warning the user, disallowing the edit, or applying a change tag to the revision. However, there are lesser-known actions that aren't currently used on the English Wikipedia, such as blocking the user for a specified amount of time, desysopping them, and something called "revoke autoconfirmed". Contrary to its name, this action doesn't actually revoke anything; it instead prevents the user from being "autopromoted", i.e. automatically becoming auto- or extended-confirmed. This restriction can be undone by any EFM at any time, and automatically expires in five days provided the user doesn't trigger that action again. Unlike block and desysop (called "degroup" in the code), this option is enabled for use on enwiki, but has seemingly never been used at all.
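For context, filter rules are written in the AbuseFilter condition language. A sketch of what such a rule's conditions might look like is below; this is purely illustrative (the pattern is a placeholder, and this is not any actual enwiki filter):

```
/* Illustrative sketch only, not a real filter: match mainspace edits by
   users who are not yet extended-confirmed that add a known abuse pattern.
   The actions taken on a match (warn, disallow, tag, "revoke autoconfirmed",
   etc.) are not part of the rule text; they are selected separately in the
   filter's configuration. */
!("extendedconfirmed" in user_groups) &
page_namespace == 0 &
added_lines irlike "placeholder-abuse-pattern"
```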

Fast forward to today, and we have multiple abusers and vandalbots gaming extended confirmed in order to vandalize or edit contentious topics. One abuser in particular has caused an edit filter to be created for them, which is reasonably effective in slowing them down, but it still lets them succeed if left unchecked. As far as I'm aware, the only false positive for this filter was triggered by PaulHSAndrews, who has since been community-banned. In theory, setting this filter to "revoke autoconfirmed" should effectively stop them from being able to become extended confirmed. Some technical changes were recently made to allow non-admin EFMs to use this action, but since it has never been used, I was told to request community consensus here.

So, should edit filter managers be allowed to use the "revoke autoconfirmed" action in edit filters?ChildrenWillListen (🐄 talk,🫘 contribs)05:04, 28 October 2025 (UTC)[reply]

Survey (edit filters)


Discussion (edit filters)


In general, this proposal seems highly dangerous, and policy shouldn't change. Just go to WP:STOCKS and you'll find some instances in which misconfigured filters prevented edits by everyone; imagine if those filters had also included provisions to block or revoke rights from affected editors. However, the proposal seems to be talking about a filter for one particularly problematic user; I could support a proposal to make an exemption for egregious cases, but I think such an exemption should always be discussed by the community, so that the suggested reconfiguration is the result of community consensus. Nyttend (talk) 10:51, 29 October 2025 (UTC)[reply]

RfC: Increase the frequency of Today's Featured Lists



Increase the frequency of Today's Featured Lists from 2 per week to 3 or 4 per week, either on a trial basis, with the option to expand further if sustainable, or without a trial at all.Vanderwaalforces (talk)07:02, 2 November 2025 (UTC)[reply]

Background

Right now, Today's Featured List only runs twice a week, on Mondays and Fridays. The problem is that we've built up a huge (and happy?) backlog: there are currently over 3,400 Featured Lists that have never appeared on the Main Page (see category). On top of that, according to our Featured list statistics, we're adding about 20 new Featured Lists every month, which works out to around 4 to 5 a week. At the current pace of just 2 per week, it would take forever to get through what we already have, and the backlog will only keep growing.

Based on prior discussion at WT:FL, I can say we could comfortably increase the number of TFLs per week without running out of material. Even if we went up to 3 or 4 a week, the rate at which new lists are promoted would keep things stable and sustainable. Featured Lists are among our highest-quality content, yet they get less exposure than WP:TFAs or WP:POTDs, so trust me, this isn't about numbers, nor is it about FL contributors being jealous (we could just be :p). Giving them more space would better showcase the work that goes into them. We could run a 6‑month pilot, then review the backlog impact, scheduling workload, community satisfaction, etc.

Of course, there are practical considerations. Scheduling is currently handled by Giants2008, the FL director, and increasing the frequency would mean more work, which I think could be handled by having one of the FL delegates (PresN and Hey man im josh) OR another experienced editor help with scheduling duties. Vanderwaalforces (talk) 07:02, 2 November 2025 (UTC)[reply]

Options
  • Option 1: Three TFLs per week (Mon/Wed/Fri)
  • Option 2: Four TFLs per week (e.g., Mon/Wed/Fri/Sun)
  • Option 3: Every other day, with each TFL staying up for two days (this came up at the WT:FL discussion, although it might cause imbalance compared with other featured content durations)
  • Option 4: Three TFLs per week (Mon/Wed/Fri) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
  • Option 5: Four TFLs per week (e.g., Mon/Wed/Fri/Sun) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
  • Option 6: Retain the status quo

Discussion (TFLs)

  • Generally supportive of an increase, if the increase has the support of Giants2008, PresN, and Hey man im josh. Could there be an elaboration on the potential main page balance? TFL seems to slot below the rest of the page, without the columnar restrictions.CMD (talk)10:01, 2 November 2025 (UTC)[reply]
    @Chipmunkdavis Per the former, yeah, I totally agree, which is why I suggested earlier that one of the FLC delegates could help share the load; alternatively, an experienced FLC editor or someone familiar with how FL scheduling works could assist. Per the latter, nothing changes actually; the slot for TFL remains the same, viewers only get to see more FLs than under the status quo. It might fascinate you that some editors do not know that we have TFLs (just like TFAs) on English Wikipedia, either because they have never viewed the Main Page on a Monday/Friday or something else. Vanderwaalforces (talk) 17:06, 2 November 2025 (UTC)[reply]
  • Support Option 2 with the Monday list also showing on Tuesday, the Wednesday list also showing on Thursday and the Friday list also showing on Saturday— Precedingunsigned comment added byEasternsahara (talkcontribs)16:28, 2 November 2025 (UTC)[reply]
  • Option 1, for two main reasons: (1) there is no reason to rush into larger changes (we can always make further changes later), and (2) FL topics tend to be more limited and I think it's better to space out similar lists (e.g., having a "List of accolades received by <insert movie/show/actor>" every other week just to keep filling slots would get repetitive). Stronglyoppose any option that results in a TFL being displayed for 2 days; this would permanently push POTD further down, break the patterns of the main page (no other featured content is up for more than 1 day), and possibly cause technical issues for templates meant to change every day.RunningTiger123 (talk)18:08, 2 November 2025 (UTC)[reply]
  • Option 1 – Seeing the notification for this discussion pop up on my talk page really made me take a step back and ponder how long I've been active in the FL process (and my mortality in general, but let's not go there). I can't believe I'm typing this, but I've been scheduling lists at TFL for 13 years now. That's a long time to be involved in any one process, as this old graphic makes even more clear. Where did the time go? Anyway, I agree with RunningTiger that immediately pushing for 4+ TFLs per week, when we may not have enough topic diversity to support that amount, would do more harm than good, but I think enough lists are being promoted through the FL process to support an increase to three TFLs weekly. In addition, I agree with RT that we don't need to be running lists over multiple days when none of the other featured processes do.
    While I'm here, I do want to address potential workload issues. My suggestion is that, presuming the delegates have the spare time to take this on, each of us do one blurb per week. With the exception of the odd replaced blurb once in a blue moon, I've been carrying TFL by myself for the vast majority of the time I've been scheduling TFLs (over a decade at this point). If I take a step back and ignore the fact that I'm proud to have had this responsibility for the site for this many years (and that the train has been kept on the tracks fairly well IMO), it really isn't a great idea for the entire process to have been dependent on the efforts of a single editor for that long. I just think it would be a good sign of the strength of the TFL process for a rotation of schedulers to be introduced. Also, in the event of an emergency we would have a much better chance of keeping TFL running smoothly with a rotation. Of course, this part can be more thoroughly hammered out at TFL, but I did want to bring it up in case the wider community has any thoughts.Giants2008 (Talk)01:42, 4 November 2025 (UTC)[reply]
  • Option 1, and I'd be willing to do some TFL scheduling. --PresN15:59, 4 November 2025 (UTC)[reply]
  • Option 1, though I would support any permanent increase to the frequency of TFLs as long as the coords or other volunteers have the capacity for that.Toadspike[Talk]20:13, 4 November 2025 (UTC)[reply]
  • Option 4, let's see if some backlog can be cleared and evaluate the workload.BlueRiband►01:00, 5 November 2025 (UTC)[reply]
  • Option 1, sounds like the best option to me at this stage. –Ianblair23(talk)12:26, 12 November 2025 (UTC)[reply]
  • Option 1, Slow changes are better. Also, this doesn't explicitly need to be a pilot (opt4) since we can always switch back to the status quo ante if unforeseen problems crop up. -MPGuy2824 (talk)14:20, 12 November 2025 (UTC)[reply]
  • Option 1, in agreement with others. I would be open to an increase in frequency after some time, with input from editors involved in TFLs about the impact of the initial change. —Myceteae🍄‍🟫 (talk)20:15, 12 November 2025 (UTC)[reply]

Option 1, but I think that this should be trialled and then returned for further discussion based on how it affects the backlog and the wider response to the change.JacobTheRox(talk | contributions)20:45, 20 November 2025 (UTC)[reply]

Based on your response, you should choose option 4. -MPGuy2824 (talk)04:35, 21 November 2025 (UTC)[reply]

Proposal to speed COI edit requests


When a new COI edit request is posted, it appears in Category:Wikipedia conflict of interest edit requests. When a volunteer starts to address the request, it can be tagged with the {{started}} template. But we still have to click through to each request on the talk page to see whether it has been tagged with "started" yet. It would save time if the presence of the started template triggered some kind of visual alert on the category page. Currently, a lot of real estate and color coding goes to showing that an article is edit-protected, but that has very little impact on most editors handling these requests. Instead, if a field could be used to simply say "started" or "new" (default), it would make it easier for volunteers to clear the queue by highlighting new requests that aren't already being worked on by someone else. STEMinfo (talk) 23:46, 4 November 2025 (UTC)[reply]

You're talking aboutUser:AnomieBOT/COIREQTable, which is transcluded on the category page, right?jlwoodwa (talk)01:40, 5 November 2025 (UTC)[reply]
@Jlwoodwa: Yes - I didn't know there was another location for the queue. On the link you shared, there's even more empty space, so it seems there would be room to put in a "started" icon, or the word "started" in a started column, to help the volunteers. STEMinfo (talk) 00:07, 8 November 2025 (UTC)[reply]
@STEMinfo, when was the last time you had an actual problem with wasted work because someone else was answering the same request that you picked?
There are usually about 200 open requests on that page, and I would be surprised if there were even 10 editors using the list (the cat gets about 20 page views per day). I estimate the odds of a conflict as being significantly less than 1% per article chosen, especially if you're picking an article that isn't one of the very top or very bottom entries.WhatamIdoing (talk)18:25, 12 November 2025 (UTC)[reply]
@WhatamIdoing: Every time I read a request and find it's been started and is being worked on, I feel I've wasted time. And the queue continues to grow. If it were shorter, there wouldn't be an issue. I suppose I could just jump to the bottom every time I click on a request to see what's new, but a simple modification to the queue showing "started" would be more efficient. I usually try to work the oldest ones first, but might respond to a newer one if it's well written and/or uncontroversial. I don't always remember which ones I've clicked on before, so a tag would help. STEMinfo (talk) 08:44, 20 November 2025 (UTC)[reply]

BlankMediaWiki:Userlogout-temp-moreinfo


I do not believe this message, which appears when a temporary account attempts to exit its session, is necessary. The wikilinks in the message are currently broken due to T409630, and no good-faith user would believe that it is OK to disrupt Wikipedia, evade a block or ban, or avoid detection or sanctions. The exit-session dialogue is already cluttered enough, and the message can come across as assuming bad faith. Ca talk to me! 13:15, 8 November 2025 (UTC)[reply]

Pinging translator @K6kaCatalk to me!13:19, 8 November 2025 (UTC)[reply]
You can do such a thing? We should just get rid of that "feature", which has probably already been abused by vandals.ChildrenWillListen (🐄 talk,🫘 contribs)13:32, 8 November 2025 (UTC)[reply]
We have disabled system messages before; simply replacing them with a- is usually enough to hide them. As for the message itself, I'm all for simplifying interface messages (as long as they're still informative enough) so I have no major issues with this message being hidden for us. —⁠k6ka🍁 (Talk ·Contributions)14:05, 8 November 2025 (UTC)[reply]
I'm talking about the logout button offered for temp accounts.ChildrenWillListen (🐄 talk,🫘 contribs)14:07, 8 November 2025 (UTC)[reply]
Ah yes, that feature wasn't too well documented. Yes, users of temporary accounts can use the "End Session" button to essentially log out of their temporary account (forever), no cookie-clearing required. I suppose there is a concern that it could be used for abuse, but it's not like a warning message would stop determined malice anyway. —⁠k6ka🍁 (Talk ·Contributions)16:48, 8 November 2025 (UTC)[reply]
At a minimum, I support disabling the "Exit session" feature for blocked temporary accounts. Even if this only stops less determined vandals, removing the feature would still reduce the anti-vandalism workload. — Newslinger talk16:15, 10 November 2025 (UTC)[reply]
The only qualm I have about disabling the feature is that, when using a TA, it adds an obnoxious gray bar at the top. Ca talk to me! 23:43, 10 November 2025 (UTC)[reply]
I agree that being "logged in" to a temporary account offers a worse visual experience than being logged out. As someone who spends a lot more time reading than editing, I'll log out of a temporary account after making an edit to get back to normal.~2025-32801-03 (talk)11:24, 11 November 2025 (UTC)[reply]
Hey @~2025-32801-03, this is interesting - why exactly do you believe that the temp account experience is worse for reading? Could you elaborate? Thank you!SGrabarczuk (WMF) (talk)01:30, 15 November 2025 (UTC)[reply]
Hi @SGrabarczuk (WMF), in addition to the grey bar mentioned above, there's also a sticky header, which by its nature reduces the amount of content on screen. The main menu and tools sidebars also open on each page visit instead of staying closed throughout a session. There's no settings menu for TAs that I'm missing anywhere, is there? I suppose needing to reapply them for each new account would be frustrating in itself, but maybe that would encourage longtime temporary users to create permanent accounts. @SGrabarczuk (WMF) (Forgot to sign)~2025-34084-82 (talk)11:09, 17 November 2025 (UTC)[reply]
Thanks @~2025-34084-82, I'm glad to hear back from you. Let's unpack this:
  • Sticky header - well, it's there to make it easier to use some tools and reduce scrolling. Some people dislike it, but it's proven to be useful, so it will stay.
  • "The main menu and tools sidebars also open on each page visit" - this is a bug; unfortunately not so easy to fix.
  • "There's no settings menu for TAs (...) needing to reapply them" - what would you like to see in such a menu? what do you mean by them?
Thank you!SGrabarczuk (WMF) (talk)11:27, 17 November 2025 (UTC)[reply]
Well, another temp user here to say I hit “end session” every time because the giant banner telling me to make an account is annoying to see on every page. You’ll see a lot of additional churn.~2025-34811-10 (talk)01:23, 21 November 2025 (UTC)[reply]
Support – makes no senseFaviFake (talk)17:20, 11 November 2025 (UTC)[reply]
Pinging @SGrabarczuk (WMF), because this smells like the kind of thing that would be created for legal compliance.WhatamIdoing (talk)18:29, 12 November 2025 (UTC)[reply]
@K6ka I think there is a mild consensus here to delete. The message also serves to give vandals ideas. Ca talk to me! 13:33, 20 November 2025 (UTC)[reply]
Agreed.FaviFake (talk)16:04, 20 November 2025 (UTC)[reply]
The page has been blanked. The message now shown to users ending their session looks like this:File:Enwiki temp account end session dialog as of November 2025.png. —⁠k6ka🍁 (Talk ·Contributions)23:45, 20 November 2025 (UTC)[reply]
Thanks!Catalk to me!00:01, 21 November 2025 (UTC)[reply]

Template:Map


Hi! Some days ago, with a group of users at eswiki, we created a module + template for easily creating colored maps. I just imported it to enwiki (see Template:Map, Module:Map and Module:Map/world). Here's a very basic example of the wikitext followed by the output:

{{Map| countries= Brazil, Mexico, Egypt, China, Australia}}

See the template documentation for more examples and features. The template not only makes it easy to create maps (no more need to upload SVGs to Commons) but also toupdate said maps, and towatch said updates.
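For instance, a call that sets a custom color might look roughly like the sketch below. Note that the `color` parameter name here is an assumption for illustration only; the template documentation has the actual syntax.

```wikitext
{{Map| countries= Brazil, Mexico, Egypt | color= #2a6fdb }}
```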

We're continuing development at a fast pace, but would love to hear your ideas, needs and feedback too. Cheers!Sophivorus (talk)13:27, 19 November 2025 (UTC)[reply]

Before you usurp a redirect, please check all incoming links to ensure that it is not being used or linked to with a different meaning. –Jonesey95 (talk)18:05, 19 November 2025 (UTC)[reply]
I did! I might have missed something though.Sophivorus (talk)23:11, 19 November 2025 (UTC)[reply]
This is awesome!! Are you planning to build similar templates for sub-national boundaries (for example, US states)?Anne drew (talk ·contribs)18:21, 19 November 2025 (UTC)[reply]
Yes! The module is already prepared for it, actually. It's just a matter of creating and curating the data for each specific new map. We'll get to the US soon enough! ;-)Sophivorus (talk)23:12, 19 November 2025 (UTC)[reply]
Looks good, thanks.Ymblanter (talk)22:04, 19 November 2025 (UTC)[reply]
First of all, yes, brilliant idea! Secondly, what boundaries are you using in the case of disputes? Which countries are "recognized" by this template? It sounds silly, but could be a real concern in the case of India/Pakistan, Israel/Palestine, Abkhazia, Transnistria, Somalia, Ukraine/Russia, etc. It looks to me like "lines of actual control" are used in Kashmir, and that the West Bank appears but not Gaza. —Ganesha811 (talk)12:16, 20 November 2025 (UTC)[reply]
@Ganesha811 Hm, that's a tough one. Could you help us by creating a topic about this issue atModule talk:Map?Sophivorus (talk)12:47, 20 November 2025 (UTC)[reply]
Done. —Ganesha811 (talk)12:50, 20 November 2025 (UTC)[reply]
What a Brilliant Idea Barnstar
I love this suggestion, thank you for it!Wikieditor662 (talk)06:25, 20 November 2025 (UTC)[reply]
PS for suggestions, I'd recommend adding ways to change the color, if you haven't already.Wikieditor662 (talk)06:25, 20 November 2025 (UTC)[reply]
Looks like that's already in the documentationKatzrockso (talk)12:29, 20 November 2025 (UTC)[reply]
It's a really cool idea, but I'm not sure if the current API is the best design. I think it'd be better to have color1/countries1, color2/countries2, etc. For complex maps it would make it a lot easier to see which countries are which and to update the colours.  novovtalkedits03:57, 24 November 2025 (UTC)[reply]
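As a purely hypothetical sketch of the numbered-parameter scheme suggested above (these parameters do not exist in the current template), a multi-colour map might then read:

```wikitext
{{Map
| color1 = green  | countries1 = Brazil, Mexico
| color2 = orange | countries2 = Egypt, China, Australia
}}
```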
@Mir Novov Interesting idea! Could you bring it over toModule talk:Map, please? Thanks!Sophivorus (talk)11:26, 25 November 2025 (UTC)[reply]
They already have a few days ago...Wikieditor662 (talk)16:25, 25 November 2025 (UTC)[reply]

Change "From Wikipedia, the free encyclopedia" to the short description

  • We know the article is from Wikipedia, so that's mostly redundant text.
  • Replacing it with the short description of an article would offer the reader more information and allow them to glean what the article is about when the lede is very long
  • In the case of no short description, keep "From Wikipedia, the free encyclopedia"

OmegaAOLtalk?10:19, 20 November 2025 (UTC)[reply]

Support - better to have some information than none.GarethBaloney (talk)13:21, 20 November 2025 (UTC)[reply]
Interesting idea, although I don't think I can support it.
That tagline has never been my favorite, since roughly 95% of readers are going to misinterpret "free". Still, it's an opportunity to establish our brand. It's also a place where we recently agreed to add good/featured status, which would be overridden with this.
I also have concerns about putting the short description there. Short descriptions were developed for the express purpose of helping readers disambiguate among search results. I've never been happy with the decision to display them on the mobile version of articles, because they are generally redundant to the lead sentence, and displaying them on desktop as well would make that issue worse. If the lede is very long and doesn't establish the topic up front, then that's an issue to fix with the article, not something to be addressed via the tagline. Sdkb talk 15:34, 20 November 2025 (UTC)[reply]
TBF, many featured articles have quite long leads. Although I didn't know about the article status addition. Maybe we can add it before? For example:
Act of Accord
==================
1460 act of the Parliament of England • A featured article from Wikipedia, the free encyclopedia OmegaAOL talk? 16:11, 20 November 2025 (UTC)[reply]
This raises the question of using every article status. Maybe just GA and FA-class articles? I don't think non-editors would really understand the ranking system that's used. Maybe hovering over FA and GA gives a short blurb about what an FA and GA is.PhoenixCaelestisTalk //Contributions16:48, 20 November 2025 (UTC)[reply]
I think that this is the plan already.OmegaAOLtalk?16:57, 20 November 2025 (UTC)[reply]
@Sdkb also, if you are unhappy with the mobile design, try using the Monobook skin on mobile instead. It has a responsive mobile design that is basically just the full-featured desktop site in an accessible, phone-usable format. Vector or Vectornew doesn't have a responsive design though so you can only do this with monobook.OmegaAOLtalk?16:59, 20 November 2025 (UTC)[reply]

New contributions footer for temporary accounts


Please readMediaWiki talk:Sp-contributions-footer#Implement the following code on MediaWiki:Sp-contributions-footer-temp?. Thanks.Codename Noreste (discusscontribs)14:36, 21 November 2025 (UTC)[reply]

Temporary accounts


Temporary accounts should be allowed to create their own user page. Of course, after a few days of inactivity the page would be automatically deleted. It would help new users see what they can do on Wikipedia, and would also allow clarifications and introductions instead of dumping them on the talk page. ~2025-34140-84 (talk) 01:39, 22 November 2025 (UTC)[reply]

I tried creating your userpage and it seems to work. It seems that users with normal accounts can create temporary account userpages. But you can't?✨ΩmegaMantis✨blather01:49, 22 November 2025 (UTC)[reply]
@~2025-34140-84 replied on my talk page and said it was because I was an AC user, which can create all user pages.✨ΩmegaMantis✨blather01:52, 22 November 2025 (UTC)[reply]
There's still no mechanism for deleting the page when the temp account goes away. The page you just created will exist forever or until MfD, whichever comes first. ―Mandruss  2¢. IMO.01:54, 22 November 2025 (UTC)[reply]
Then it should automatically be deleted. To save space and so other users who later adopt the IP feel like they have a fresh start.~2025-34140-84 (talk)01:58, 22 November 2025 (UTC)[reply]
I will propose it at MfD. This is a verytrouty moment for me and I apologize.
Also, it seems you have put your political views on your de facto userpage. You're welcome to do so, but the primary focus for a user account and page is for edits, notWP:SOAP.✨ΩmegaMantis✨blather02:02, 22 November 2025 (UTC)[reply]
Disagree. The purpose of a Userpage is for the user to introduce themselves.SmokeyJoe (talk)09:25, 22 November 2025 (UTC)[reply]
I believe that the deletion (or maybe just blanking) is better suited forWP:BOTREQ. Currently, there are bots which blank out IP talk pages after a certain duration.~/Bunnypranav:<ping>03:18, 22 November 2025 (UTC)[reply]
Ooh, good suggestion! Would you mind doing it -- I've already screwed up quite a bit on this matter, including with my deletion proposal. 😅...sigh✨ΩmegaMantis✨blather03:34, 22 November 2025 (UTC)[reply]
Temporary account holders with any desire to have a Userpage shouldWP:Register.SmokeyJoe (talk)09:23, 22 November 2025 (UTC)[reply]
+1fifteen thousand two hundred twenty four (talk)14:20, 23 November 2025 (UTC)[reply]
Agree. Given the ephemeral nature of temporary accounts, giving them userpages will result in large numbers of abandoned pages piling up.  novovtalkedits01:23, 24 November 2025 (UTC)[reply]


RFC: Disable unregistered editing and require all editors to register

THAT'S BAIT
Nothing to see here, folks.asilvering (talk)04:03, 24 November 2025 (UTC)[reply]

The following discussion is closed.Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.



Should everyone be required to create an account prior to editing the English Wikipedia?ColdNerd (talk)03:27, 24 November 2025 (UTC)[reply]

  • Option 1: Status quo
  • Option 2: Run a 6-month trial (similar to ACTRIAL) and review results afterwards
  • Option 3: Disable unregistered editing now

ColdNerd (talk)03:27, 24 November 2025 (UTC)[reply]

At the previous discussion, as well as [[4]], there appears to be significant discussion about and support for restricting or terminating unregistered edits. Therefore, I have opened a formal RFC.ColdNerd (talk)03:27, 24 November 2025 (UTC)[reply]

  • Oppose (Option 1). In my short time on Wikipedia, I've done mostly anti-vandalism work. I revert a lot of IPs. However, they usually get reverted quickly, and shout out to the team maintaining edit filters. I also see a lot of gnome-like work by IPs; just someone who noticed a typo, and wanted to fix it. Because I think the consequences of allowing unregistered editing are mitigated pretty well, and that IPs do improve Wikipedia, I want to oppose this proposition.win8x (talk)03:43, 24 November 2025 (UTC)[reply]

Discussion

[edit]

@ColdNerd: Do you have any evidence that things have become worse since the introduction of temporary accounts? I am not seeing an uptick in misbehaviour at this point. Anons are still doing useful or harmless work, and there are now more targeted ways to give feedback to a logged-out user. The biggest problem for me so far is that it is not easy to copy the temporary account name in the source editor, as it uses special markup.Graeme Bartlett (talk)03:54, 24 November 2025 (UTC)[reply]

It's becoming harder to report vandals properly, and the new rules about disclosing IPs of temporary accounts are a nightmare. Also, I don't like the idea of anyone with 300 edits being able to see IPs. Do you want one of Icewhiz's sockpuppets to get temporary account IP viewer? If you really are concerned about evidence, we could run a trial.ColdNerd (talk)03:57, 24 November 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Have Twinkle stop notifying blocked users of deletion nominations?

[edit]
Tracked in github.com
Issue #2249

This is a perennial request. I think I've seen it at WT:TWINKLE three different times. I'd like to get wider feedback before making a change:

Under what circumstances should Twinkle inform a page creator that one of their pages has been nominated for deletion? Example notification.

  • Option A - Post a deletion notification for both unblocked and blocked authors. (status quo)
  • Option B - Post a deletion notification for unblocked authors only. Skip posting to blocked authors. (proposed change)

Note that this will apply to every kind of XFD: articles, templates, categories, redirects, files, etc. This may also apply to PRODs and CSDs. This may apply to both nominations and instant deletions. Note that there is a small subset of blocked users that are also deceased. –Novem Linguae(talk)09:37, 25 November 2025 (UTC)[reply]

Survey - Have Twinkle stop notifying blocked users of deletion nominations?

[edit]
  • Option A - I filed the above pro forma because I am a Twinkle maintainer. I actually prefer the status quo. 1) if the user ends up unblocked soon, they may want to know what is happening to their articles, 2) talk page watchers may also want to keep an eye on that user's articles, and 3) it's been the status quo for a very long time. –Novem Linguae(talk)09:37, 25 November 2025 (UTC)[reply]
  • Option A - We want blocked users to return and make helpful contributions. We should give them the tools to do so and not withhold information that would be useful when they're unblocked.~2025-36374-38 (talk)10:49, 25 November 2025 (UTC)[reply]
  • Option A. I follow the talk page of a user who hasn't edited in years to keep track of the articles they have created getting nominated for deletion, as most of them can be saved. I am sure this is a reasonable use case for blocked users too.Katzrockso (talk)10:56, 25 November 2025 (UTC)[reply]
  • Option A per above. Twinkle should be notifying more users about potential deletions (e.g. major contributors), not fewer.Thryduulf (talk)12:01, 25 November 2025 (UTC)[reply]
  • Option C - I agree with most of the considerations expressed by the thus far unanimous A position. However, there are some cases, like editors blocked for UPE, or in other cases of indefs for serious abuse, where it would be nice to have the option not to send an automatic notification, thus my suggestion would be to have this be an optional toggle rather than always/never notifying blocked editors.signed,Rosguilltalk14:26, 25 November 2025 (UTC)[reply]
    @Rosguill there's already a toggle for notifying or not. I think it's in practice mostly used to avoid spamming notifications when there's a ton of connected nominations from the same creator at once. An option could be to change the default to don't notify for users in certain categories or that have certain block templates on their page. There might be something of the kind already but couldn't find it with a quick look at the code if so.Trialpears (talk)16:30, 25 November 2025 (UTC)[reply]
    I guess that shows how long it's been since I've actually nominated an article for deletion myselfsigned,Rosguilltalk16:32, 25 November 2025 (UTC)[reply]
    I was also going to suggest the categories idea. Does anyone have some specific categories in mind to automatically exclude? Sockpuppets, retired, deceased?Category:Wikipedians who opt out of template messages? Although the downside of that would be that talk page watchers would not be able to see and participate in those deletion discussions. –Novem Linguae(talk)17:19, 25 November 2025 (UTC)[reply]
    Globally locked accounts maybe? I feel like there’s a subset of LTAs that we’d want to minimize contact with, but I don’t think they’re already denoted with any particular exclusive categorysigned,Rosguilltalk17:28, 25 November 2025 (UTC)[reply]
  • Option A per points (1) and (2) in Novem's !vote. I don't mind an option to not notify, but it shouldn't be the default.Sdkbtalk16:34, 25 November 2025 (UTC)[reply]
  • Option B - User:Ritchie333/Don't template the retirees applies in force. In practice, when a prolific user is blocked (or stops editing for any other reason), the result is that the talk page gets taken over by a nearly endless number of templates, the vast majority of which don't lead to any useful action.* Pppery *it has begun...16:53, 25 November 2025 (UTC)[reply]
  • Option A per Katzrockso. While I kind of lean towards B, personally, Katzrockso makes a good argument about TPS getting the notifications as well. And if someone is blocked for a month and a notification is missed, that's not ideal either. --SarekOfVulcan (talk)16:59, 25 November 2025 (UTC)[reply]
  • I'm unsure but I do feel that option B should definitely not apply to editors on short term blocks. I'd probably support it for indefed editors. That said, what's the problem with their notifications stacking up anyway? It's not like many people are going to see them. I do worry that somebody could game option B. Imagine some troll who goads an editor into catching a short block and then uses a sock or meat account to nominate all that editor's articles for deletion in the hope that they won't notice. OK, sure, they can do that anyway without Twinkle, but let's not make it easier for them. --DanielRigal (talk)19:07, 25 November 2025 (UTC)[reply]
    I agree. For example we regularly block people and delete their spam articles. Some users then get renamed and rehabilitated. They'd like to know that their article has been deleted.Secretlondon (talk)22:32, 25 November 2025 (UTC)[reply]
  • If it has to be A or B then I guess A, as I think we should certainly template people who are temporarily blocked for edit warring or similar, and it would be detrimental if we didn't.Secretlondon (talk)22:34, 25 November 2025 (UTC)[reply]
  • Option B, but only for indefinite blocks. Most users who receive an indefinite block don't return or start block evading. On the flipside, we expect temporarily blocked users to return constructively. –LaundryPizza03 (d)17:12, 26 November 2025 (UTC)[reply]
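The toggle/category idea discussed in the Option C sub-thread above could be sketched roughly as follows. This is a hypothetical helper, not Twinkle's actual code: the function name, option names, and `blockInfo` shape are all illustrative. In practice the block data would come from the MediaWiki API (`action=query&list=blocks`), which reports an indefinite block with `expiry: "infinity"`.

```javascript
// Hypothetical decision helper for the "skip notifying blocked authors" idea.
// blockInfo: null/undefined if the author is not blocked, otherwise an object
// with an `expiry` field as returned by the MediaWiki API (list=blocks).
function shouldNotifyCreator(blockInfo, options) {
  options = options || { skipIndefinite: true };
  if (!blockInfo) {
    return true; // not blocked: always notify
  }
  // Temporary blocks (e.g. 31 hours for edit warring): still notify,
  // since the user is expected to return.
  if (blockInfo.expiry !== 'infinity') {
    return true;
  }
  // Indefinitely blocked: notify only if the toggle says so.
  return !options.skipIndefinite;
}
```

The toggle default (`skipIndefinite: true` here) is exactly the point under dispute in the survey; flipping it to `false` reproduces the status quo (Option A) while still letting nominators opt out per nomination.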

Remove icons from alert and notification bells for some skins

[edit]

The alert and notification bell icons, at least as seen in Vector 2010, Monobook, Cologne Blue, and Modern, are blurry at the scale of those older skins and do not fit their design, since the icons were added long after those skins were created. The example of Monobook before (http://bayimg.com/baHhPAAbh) and after (http://bayimg.com/cAhHAaABH) deleting the icon classes in DevTools shows that the result is both more consistent with the rest of the design and more readable.

They do work nicely in Vector 2022, Minerva, and Timeless, so they should be retained there.OmegaAOLtalk?07:56, 26 November 2025 (UTC)[reply]
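Until such a change is made in the skins themselves, individual editors could approximate the "after" screenshot with user CSS (e.g. in Special:MyPage/monobook.css). This is only a sketch: it assumes the Echo badges keep their current `mw-echo-notifications-badge` class, and unlike deleting the classes in DevTools it only suppresses the icon image, so selectors may need adjusting per skin.

```css
/* Hypothetical user CSS to hide the blurry Echo bell/tray icon images
   in older skins. Assumes the badges still carry the
   mw-echo-notifications-badge class. */
.mw-echo-notifications-badge {
  background-image: none !important;
}
```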

