I do not have all the answers, but I will do my best to answer questions if you have them. If you think I am wrong, I am open to being convinced otherwise by logically supported, evidence-based, succinct reasons. And as I state on my user page, if you have a plant from the Southern Rocky Mountains or surrounding countryside that needs an article, I will give it a higher priority on my list of things to edit. —🌿MtBotany
On 18 April 2025, Did you know was updated with a fact from the article Leucocrinum, which you recently created, substantially expanded, or brought to good article status. The fact was... that the spring-blooming wildflower common starlily (pictured) develops its seeds underground? The nomination discussion and review may be seen at Template:Did you know nominations/Leucocrinum. You are welcome to check how many pageviews the nominated article or articles got while on the front page (here's how, Leucocrinum), and the hook may be added to the statistics page after its run on the Main Page has completed. Finally, if you know of an interesting fact from another recently created article, then please feel free to suggest it on the Did you know talk page.
The second round of the 2025 WikiCup ended on 28 April at 23:59 UTC. To reiterate what we said in the previous newsletter, we are no longer disqualifying contestants based on how many points (now known as round points) they received. Instead, the contestants with the highest round-point totals now receive tournament points at the end of each round. These tournament points are carried over between rounds, and can only be earned if a competitor is among the top 16 round-point scorers at the end of each round. This table shows all competitors who have received tournament points so far. Everyone who competed in round 2 will advance to round 3 unless they have withdrawn or been banned.
Round 2 was quite competitive. Four contestants scored more than 1,000 round points, and eight scored more than 500 points (including one who has withdrawn). The following competitors scored at least 800 points:
History6042 (submissions) with 1,088 round points from four featured lists about Michelin-starred restaurants, nine good articles and a good topic mostly on Olympic-related subjects, seven ITN articles, and dozens of reviews
Gog the Mild (submissions) with 1,085 round points from three FAs, one GA, and four DYKs on military history, as well as 18 reviews
Arconning (submissions) with 887 round points, mostly from four FLs, six GAs, and seven DYKs on Olympic topics, along with more than two dozen reviews
In addition, we would like to recognize Generalissima (submissions) for her efforts; she scored 801 round points but withdrew before the end of the round.
The full scores for round 2 can be seen here. During this round, contestants have claimed 13 featured articles, 20 featured lists, 4 featured-topic articles, 138 good articles, 7 good-topic articles, and more than 100 Did You Know articles. In addition, competitors have worked on 19 In the News articles, and they have conducted nearly 300 reviews.
Remember that any content promoted after 28 April but before the start of Round 3 can be claimed in Round 3. Invitations for collaborative writing efforts or any other discussion of potentially interesting work are always welcome on the WikiCup talk page. Remember, if two or more WikiCup competitors have done significant work on an article, all can claim points. If you are concerned that your nomination—whether it is at good article candidates, a featured process, or anywhere else—will not receive the necessary reviews, please list it on Wikipedia:WikiCup/Reviews Needed. If you want to help out with the WikiCup, feel free to review one of the nominations listed on Wikipedia:WikiCup/Reviews Needed. Questions are welcome on Wikipedia talk:WikiCup, and the judges are reachable on their talk pages. Good luck! If you wish to start or stop receiving this newsletter, please feel free to add or remove your name from Wikipedia:WikiCup/Newsletter/Send. MediaWiki message delivery (talk) 01:02, 29 April 2025 (UTC)
The Core Contest has now ended! Thank you for your interest and efforts. Make sure that you include both a "start" and "improvement diff" on the entries page. The judges will begin deliberating shortly and announce the winners within the next few weeks. Cheers from the judges, Femke, Casliber, Aza24. – Aza24 (talk) 02:53, 1 June 2025 (UTC)
If you wish to start or stop receiving news about The Core Contest, please add or remove yourself from the delivery list.
Thanks. I might do a DYK about the moths; it seems like the most interesting fact about the plants. But more importantly, it was good to do the review and see how the GA process works. 🌿MtBotany (talk) 18:30, 6 June 2025 (UTC)
The winners of the 2025 Core Contest have been announced 🎉. A great turnout, with an impressive variety of articles and laudable improvements. The judges (Aza24, Femke and Casliber) would like to thank everybody who joined and congratulate the winners.
First place (and a prize of £120) goes to Phlsph7 (talk·contribs) for his systemic overhaul of the Political Philosophy article. What was once an unwieldy entry—dominated by a sprawling history section of nearly three dozen subsections!—is now an accessible and well-structured survey of a complex and often polarizing subject. We particularly commend Phlsph7's global, inclusive, and comprehensive approach. He has once again demonstrated exceptional skill in handling core topics with clarity and balance.
Second place (and a prize of £100) goes to Dracophyllum (talk·contribs) for their outstanding work on both Trunk and Flower. The former was reimagined from a ~200-word stub into a richly detailed and impeccably sourced overview—an effort truly worthy of its dedicatee, the late and much-missed Vami IV. Meanwhile, their improvements to the Flower article transformed an already strong entry into an exceptional one, now well on its way to passing FAC.
Third place (and a prize of £80) goes to Vigilantcosmicpenguin (talk·contribs) for his major development of the Niamey article. The entry now proudly stands among the finest city articles on Wikipedia—from thirty scattered references to nearly 400 high-quality academic sources. We particularly commend his inclusion of numerous French-language sources and his thoughtfully comprehensive approach to the topic.
Sammi Brie (submissions) with 1,055 round points, mostly from television station articles, including 27 good articles and 9 good topic articles
Everyone who competed in round 3 will advance to round 4 unless they have withdrawn. This table shows all competitors who have received tournament points so far, while the full scores for round 3 can be seen here. During this round, contestants have claimed 4 featured articles, 16 featured lists, 1 featured picture, 9 featured-topic articles, 149 good articles, 27 good-topic articles, and more than 90 Did You Know articles. In addition, competitors have worked on 18 In the News articles, and they have conducted more than 200 reviews.
Remember that any content promoted after 28 June but before the start of Round 4 can be claimed in Round 4. Invitations for collaborative writing efforts or any other discussion of potentially interesting work are always welcome on the WikiCup talk page. Remember, if two or more WikiCup competitors have done significant work on an article, all can claim points. If you are concerned that your nomination—whether it is at good article candidates, a featured process, or anywhere else—will not receive the necessary reviews, please list it on Wikipedia:WikiCup/Reviews Needed. If you want to help out with the WikiCup, feel free to review one of the nominations listed on Wikipedia:WikiCup/Reviews Needed. Questions are welcome on Wikipedia talk:WikiCup, and the judges are reachable on their talk pages. Good luck! If you wish to start or stop receiving this newsletter, please feel free to add or remove your name from Wikipedia:WikiCup/Newsletter/Send. MediaWiki message delivery (talk) 14:49, 29 June 2025 (UTC)
MtBotany, to avoid wikidrama potentially upsetting other contestants, I've moved this here.
How do I withdraw my entries from the contest? As it apparently is about using LLMs to generate "content", I want nothing to do with this. 🌿MtBotany (talk) 04:40, 29 June 2025 (UTC)
No, it isn't about using LLMs to generate content. In fact, I was close to banning them altogether, but because some editors might use them for finding sources and organization, it is difficult to ban them completely without disrupting people who are still writing the bulk of the content themselves. It is in the rules that editors can be disqualified from the contest if found to be using LLMs and creating problematic content. The contest is about improving the largest body of content globally within 4 weeks. If we have a rule that you can only win the top prize once or twice in a year, it will give others a chance to win if we hold these contests for long enough. I'm disappointed, MtBotany, that you want to remove your work on this, when I've just given you an Editor of the Week for STEMM again. ♦Dr. Blofeld 12:48, 29 June 2025 (UTC)
I did not choose very good words to express what I think about this. I will let them stand so that other editors can see my mistake. I am withdrawing because I am strongly against any LLM content being added to Wikipedia. It devalues the project and does not respect the readers. As such, I must withdraw from the contest because continuing to participate could make it seem like losing a contest is my reason for being so passionately against the use of LLMs on Wikipedia. I will continue to work for Wikipedia in destubbing articles, as this is just something I do regardless of whether a contest is on or not, but in order to be a good advocate for human editing of Wikipedia I think I should not participate any further. 🌿MtBotany (talk) 13:38, 29 June 2025 (UTC)
I understand your feelings, MtBotany, but the world is becoming increasingly AI-driven; in fact, I think Wikipedia would run the risk of becoming obsolete at some point if it doesn't join the AI world. I don't know how long it will take before most editors are using them in some way; I would think we're a number of years away from it, but I believe Wikimedia is working on AI tools to assist editors. Yes, it seems very sad from a human content-creation perspective, but AI has huge potential and will continue to greatly improve, and I think it's the future of encyclopedia content in some way whether we like it or not. It has a long way to go still before it's reliable, and at present it creates a mess, at least in the article subjects I tried it with. You can still participate in this as an editathon and add your articles to the main page list, but I can remove you from the entries page if you are happy for me to do so. ♦Dr. Blofeld 13:56, 29 June 2025 (UTC)
If you are correct that LLMs/AI models are the future of Wikipedia, then I have no future at this project. I hope that you are wrong about the future and the inevitable use of LLMs on the project, as it will not help Wikipedia but instead turn it into just another content farm. The use of LLMs to produce text is as much a threat to the mission of Wikipedia as were the well-meaning suggestions in the early days that it should be ad-supported to keep it "free".
I realize that even the editors who are using the LLMs come at this from wanting to do good work for the project. They are not villains. However, the writing or editing of text is about more than just producing something. It is about understanding and learning, things that AI models are incapable of doing because they are statistically likely parrots and do not actually learn anything.
Because I am going to be trying to figure out how to draw a firmer line against the use of LLMs on the project, I think it would muddle things were I to continue to participate on any level. I also do not want the contest to stop for me alone, but seeing that this is acceptable to the sponsors of the contest made me realize the size and urgency of this problem for Wikipedia. I also think that all editors who do not use LLMs to generate their text should withdraw to make it clear to the organizers that this is not acceptable. I also think you are doing your best and doing great work that is being undercut by the rules you have been given.
I appreciate and respect your views, MtBotany. Yes, I think we will have AI tools on here in a few years' time to assist editors, particularly with sourcing, and AI will become so advanced that it will be running many things in the world and become trustworthy enough, at least for assisting editors. In some ways AI is the biggest threat to humanity; if you are very wary of it, you're intelligent. Life will never be the same again. Yes, people using LLMs are still putting four to five hours of work into it a day. They seem to be doing a very good job and are expert in the subject. It is hard to outlaw when it is used responsibly, and STEMM is a big focus of the contest. The hardest-working manual editors will be well rewarded in this if they put in hard work, as they were in the last one. There will likely be a change to the way prizes are awarded next time, perhaps a daily prize to the hardest-working editor, not necessarily based on raw output and obvious use of LLMs. Hard work will continue to be rewarded and editors have a fair chance to win things, so don't worry about it. We will find a way to keep it fair to editors not using them. ♦Dr. Blofeld 20:00, 29 June 2025 (UTC)
I've emailed you some further thoughts on this and what the issue is, MTB. If you still want me to remove your entries from the contest, I will do so. ♦Dr. Blofeld 14:54, 29 June 2025 (UTC)
To be clear, the sponsors of the contest are against irresponsible use of LLMs as much as I am, and they support editors being disqualified if they are using them poorly. The issue is that there are few people doing articles in the STEM field, and it is difficult to outlaw LLMs if they are genuinely improving a lot of articles in this field, are used by an expert, and are not creating a content problem. I agree in a way, but it is trickier to support in a contest context. I mentioned it before, and it seemed like multiple contestants were happy with their use being open, particularly for things like finding sources. I think the solution is to change the prize system next time to reward a few editors, perhaps on a daily basis, not necessarily based on just sheer output but on a range of factors, and make it very fair to anybody so hard work is immediately rewarded regardless of the tools they are using. You're always welcome to contribute to it as an editathon anyway. ♦Dr. Blofeld 21:12, 29 June 2025 (UTC)
I am not nearly so concerned about a contest as I am about what the rules of the contest say about what is valued on Wikipedia and how this conflicts with how people view Wikipedia.
I knew that I would not win. It is like participating in the WikiCup or the Core Contest: it sharpens my skills, makes me a better editor, and can be amusing. But I also know I am very busy with many other things in life and am too prone to being distracted. And I thought that I, somewhere in the middle of the pack, would help hold up some really amazing editors making Wikipedia better. Losing to them is an honor. There is value in an editor who can work fast and efficiently at a task, and I want to celebrate that. If they are doing the writing.
Even when an LLM does not make a factual mistake, inadvertently closely paraphrase, or violate synth, it always produces bad writing: moments where I stop and go, "What am I reading here?" The platitudes, the padding, the pseudo-profound bullshit. I do use LLM translation such as DeepL when I am looking at a paper or a page and have no idea what it says. But I would never use even a great one to translate a page or section from the German Wikipedia for use here.
Side note: One of my happiest moments researching was in using DeepL to translate a French source as part of a chase down a chain of sources. It was all to remove or verify one claim from Ailanthus altissima. The French source pointed to a paper that pointed to a USDA source that pointed at a thesis... that did not contain the claim. Someone at the USDA made a mistake, misremembering or misinterpreting what the thesis said about the age of the trees, and it got repeated and then accurately cited to the later French source, which was innocently wrong. It was all far too much work, but the claim sounded "funny" and I wanted to understand it fully, and it may have persisted longer because of the language barrier.
I treat machine translation like any other source, where I must write it in my own words to avoid moral plagiarism more than legal plagiarism. Having looked at some of what is being produced, I need to do more research, but right now I think that the resulting articles are worse than the stubs they replaced. I think this is a threat to the trust that people put in Wikipedia, just as using LLMs to write papers threatens the foundations of academic writing. If a person is a bad or slow writer but a good researcher, they need to partner with a good human writer or work on their skills. I think that there needs to be a total ban on using any LLM to generate content that will be added to articles. They need to be tools, not replacements for thinking and understanding. The way they are being used is not responsible. 🌿MtBotany (talk) 22:47, 29 June 2025 (UTC)
I can't see anything wrong with this expansion for instance. The content looks good IMO. You think that's bad quality and worse than the stub? ♦Dr. Blofeld 12:00, 30 June 2025 (UTC)
Yes. That is garbage. Let's start by looking at the Taxonomy section.
But then it jumps back to 1981 without explaining. Was the genus originally in another family? The name is really similar as well; is it named for the same person?
"The family name Vezdaeaceae was originally proposed by Poelt and Vězda in 1981; the proposal, however, did not meet the formal requirements of theInternational Code of Nomenclature: Poelt and Vězda omitted aLatin (or otherwise Code-compliant) description and did not designate thetype genus, so under Articles 32.1(c) and 36.1 the name was notvalidly published."
And that paragraph is... technically correct, but really hard to follow. It uses a convoluted structure and may be using filler phrases to bulk it out. Let's actually go to the source for that second paragraph and see what Record Details: Vezdaeaceae Poelt & Vězda, Biblthca Lichenol. 16: 3 (1981) in the Index Fungorum actually says.
Nomenclatural comment:
Nom. inval., Arts 32.1(c), 36, 39.1 (Melbourne)
So what am I to make of that? It seems like the LLM has taken the definition of Nom. inval. and just dumped it into the sentence, but it does not know for which reason the name was invalid. If I had access to the next source, Syllabus of Plant Families - A. Engler's Syllabus der Pflanzenfamilien Part 1/2: Ascomycota (not available on ProQuest with the Wikipedia Library subscription), I'd check that as well to see which reason it was invalid. I'll keep digging, but that's enough of that one for now.
Here is one I looked at today from their expanded list: Corticifraga nephromatis. In the opening section it says, "Corticifraga nephromatis is one of four species of Corticifraga known to occur in Alaska." This is cited to Spribille et al. 2023. Looking in that source, that statement is not there. Instead it says, "Four species of Corticifraga D. Hawksw. & R. Sant. are known from SE Alaska. Corticifraga fuckelii (Rehm) D. Hawksw. & R. Sant. and C. peltigerae (Fuckel) D. Hawksw. & R. Sant. are common on species of Peltigera, and recently C. scrobiculatae Pérez-Ort. was described from Lobarina scrobiculata (Spribille et al. 2010) and discovered in GLBA during the present survey. Corticifraga chugachiana Zhurb. was described from the Chugach National Forest as one of the few lichenicolous fungi to occur on Lobaria oregana (Zhurbenko 2007). Corticifraga nephromatis (Fig. 16) is the second species of the genus growing on Nephroma Ach. Corticifraga santessonii Zhurb." Emphasis added by me.
LLMs always do stuff like this, because they don't understand or know anything. They're just probability chaining.
Checking just one of the sources cited by Spribille et al. 2023, I find there are at least six species native to Alaska.
Moving on. The next bad statement is from the description: "its cream-colored disc generally sits flush with the surrounding thallus". This is also sourced to Spribille et al. 2023, but what the source actually says is, "disc usually at the same level as the thallus surface or slightly raised,". Once again, the LLM is not able to know anything; it just latches onto "usually", substitutes "generally", and ignores the rest of it.
I'm going to keep going because I hope to present a strong case to convince Esculenta to stop, or else convince enough editors to just completely ban this. 🌿MtBotany (talk) 23:02, 1 July 2025 (UTC)
I've noticed and greatly appreciated your work on Colorado's penstemon species. I just went up the Cataract Lake Loop Trail for the columbines but also encountered a few species (and presumably some hybrids) of beardtongue. If I sent you a folder of images I took, would you be interested in seeing if any are worth adding to articles? Best, ~Pbritti (talk) 05:18, 2 July 2025 (UTC)
@Pbritti I would be delighted to take a look at the photos, though I'm not yet confident in my ability to identify some of the more cryptic species from photographs alone, much less a hybrid. I am confident, though, of knowing my own limits and not making a mistake out of overconfidence. 🌿MtBotany (talk) 03:22, 6 July 2025 (UTC)
Oh, splendid! I'll send those over sometime tomorrow and will be available any time next week should you wish to ping me with a question/ID! Best, ~Pbritti (talk) 03:25, 6 July 2025 (UTC)
The fourth round of the 2025 WikiCup ended on 29 August. The penultimate round saw three contestants score more than 800 points:
BeanieFan11 (submissions) with 1,175 round points, mainly from sports-related articles, including 17 good articles, 27 did you know articles, and 9 in the news articles
AirshipJungleman29 (submissions) with 854 round points, mostly from a high-scoring featured article on the Indian leader Rani of Jhansi and two good articles, in addition to 13 featured and good article reviews
Everyone who competed in Round 4 will advance to Round 5 unless they have withdrawn. This table shows all competitors who have received tournament points so far, while the full scores for Round 4 can be seen here. During this round, contestants have claimed 9 featured articles, 12 featured lists, 98 good articles, 9 good topic articles, more than 150 reviews, nearly 100 did you know articles, and 18 in the news articles.
In advance of the fifth and final round, the judges would like to thank every contestant for their hard work. As a reminder, any content promoted after 29 August but before the start of Round 5 can be claimed in Round 5. In addition, note that Round 5 will end on 31 October at 23:59 UTC. Awards at the end of Round 5 will be distributed based on who has the most tournament points over all five rounds, and special awards will be distributed based on high performance in particular areas of content creation (e.g., most featured articles in a single round).
Invitations for collaborative writing efforts or any other discussion of potentially interesting work are always welcome on the WikiCup talk page. Remember, if two or more WikiCup competitors have done significant work on an article, all can claim points. If you are concerned that your nomination—whether it is at good article candidates, a featured process, or anywhere else—will not receive the necessary reviews, please list it on Wikipedia:WikiCup/Reviews Needed. If you want to help out with the WikiCup, feel free to review one of the nominations listed on Wikipedia:WikiCup/Reviews Needed. Questions are welcome on Wikipedia talk:WikiCup, and the judges – Cwmhiraeth (talk·contribs), Epicgenius (talk·contribs), Frostly (talk·contribs), Guerillero (talk·contribs) and Lee Vilenski (talk·contribs) – are reachable on their talk pages. Good luck!
This is my 10,000th edit to the English-language Wikipedia and my 12,837th to all Wiki projects according to Xtools. Right now 74.1% of my edits are to the mainspace. I have been concentrating my efforts on improving the project lately and avoiding discussions as much as is reasonable. I am fairly happy with my work, though right now I am becoming worried about the future of the project given the current fad for LLM-generated texts that some editors are using to extend articles. 🌿MtBotany (talk) 21:00, 14 September 2025 (UTC)
Congratulations on your edit milestone. You are one of the best recently active editors in terms of producing well-developed articles about plant species. Plantdrew (talk) 01:37, 15 September 2025 (UTC)
Hi @MtBotany, I am active at WP:AIC and most of the work I do here is identifying and cleaning up problematic LLM-generated content. I noticed your comment at Special:Diff/1319143834 and wanted to follow up, as in my experience the user who added the lichen articles is the only editor I have seen on Wikipedia using LLMs in compliance with policy. I came across their work a few weeks ago and personally checked every sentence in one of their articles and saw no issues. This single-handedly changed my perspective about LLMs, and while I still support a ban on LLM-generated content as described here, I have also considered an additional user right that would allow approved, highly experienced editors to use LLMs under certain restrictions. But in your comment above you said that you found many subtle problems with one of the lichen articles, which would have implications for my suggested policy. Could you point me to one of the articles that did have issues? I would like to take a look at it. NicheSports (talk) 01:50, 28 October 2025 (UTC)
@NicheSports: I might have gotten unlucky, but the first one I dug into was one suggested by Dr. Blofeld when discussing my withdrawing, specifically this diff. It produced some garbage text: "The family name Vezdaeaceae was originally proposed by Poelt and Vězda in 1981; the proposal, however, did not meet the formal requirements of the International Code of Nomenclature: Poelt and Vězda omitted a Latin (or otherwise Code-compliant) description and did not designate the type genus, so under Articles 32.1(c) and 36.1 the name was not validly published." That is technically correct, but also not actually saying anything; it is a mashup of the Code and the fact that the name is listed as invalid in the Index Fungorum. Then I looked at Corticifraga nephromatis and at that time found many subtle errors, for example there being at least six species of the lichen in Alaska, not the four the LLM wrote. You can see what I started to investigate on this talk page towards the end of the section Withdrawing. 🌿MtBotany (talk) 02:10, 28 October 2025 (UTC)
It looks like Esculenta fixed the errors I noticed on 2 July. So they are doing the right thing in cleaning up after the LLM, but I don't see the point of using one if a human then needs to check every single statement for little errors. You might as well just write it yourself from the first. 🌿MtBotany (talk) 02:23, 28 October 2025 (UTC)
I already noticed that. I confirmed in the source that 6 species are present in Alaska (on page 164, not 168 as the reference indicates; although potentially it was referenced to a different publication with different page numbers? Who knows). I went back to Corticifraga nephromatis and saw the number there was correct, but then checked the history and saw that they had updated it. This is sufficient for me to strike my comment at Special:Diff/1318947003. I have zero tolerance for unambiguous source-to-text integrity errors introduced via LLM hallucination. Thank you for raising this. NicheSports (talk) 02:30, 28 October 2025 (UTC)
I agree. Humans make mistakes as well. Heaven knows I've made a few in edits I've published to the mainspace, but I've learned from them and my mistake rate is fairly low.
And if an actual real, live human keeps making the same mistakes over and over, eventually they get put under restrictions or blocked from editing. I've been involved in a few of those as well, where I think everyone could tell the editor meant well but just would not stop adding information without sources, or sourced to things that no one else could locate. That was a bit of a hard one for me because I did like some of the facts the editor would dig up in good sources. We all just wanted them to stop with the unsourced speculation, but the editor kept doing it for years. 🌿MtBotany (talk) 02:55, 28 October 2025 (UTC)
It looks like p. 168 of the 527-page PDF of Spribille et al. 2023 is numbered 164, which may explain the discrepancy. The original version of the article was sourced solely to Spribille et al. 2020, which states that "Four species of Corticifraga D. Hawksw. & R. Sant. are known from SE Alaska," slightly different from the article's statement that the subject is one of "four species of Corticifraga known to occur in Alaska". The 17 June 2025 edit (which I assume was based on trained LLM output) didn't change this, but the 1 July 2025 edit, which looks like a small manual edit, changed the reference for that statement to Spribille et al. 2023 without updating the count of species to six.
It's hard for me to tell what is or isn't LLM error here (well, I assume the PDF pagination thing is), because this seems very close to the sort of small error a human could easily make, the sort of thing that gets shaken out in a rigorous peer review at FAC or the like. On the one hand, I'm very interested in the details of the pipeline that's producing these lichen articles, because it does seem to have been pretty successful at weeding out hallucinations, and there's a prospect of building out various areas of the encyclopedia on a less-than-multigenerational time scale. On the other hand, this does make me wonder about the social stability of this. Destubathon aside, editors in general seem to have more "fun" writing their own text than checking each other's work. If a supervised LLM can knock out an article for every species in an order in a short time frame, does our talent pool have the patience to check it for error? We've always struggled to maintain our content review (vs content creation) processes; what happens when content review is the main thing to be done in certain topics? Choess (talk) 17:00, 1 November 2025 (UTC)
I agree with your questions about this, @Choess, and about whether LLM generation could be useful or harmful on balance.
In addition to the LLM errors issue, humans often have passion for the subject they are writing about and can figure out when and how to "sneak" that past the standards of neutrality. Would an LLM have put the long Denis Diderot rant/commentary into the article about Urena lobata? Or the long passage from the journal of John Charles Frémont into Valeriana edulis? Both add color to articles that might otherwise be kinda boring to non-plant enthusiasts, but in a way that is compliant with Wikipedia standards and practices. (Though they are both currently on my list of articles that desperately need more work.)
It was in a simpler form, but this kind of article generation makes me think of the early bot experiments on the Swedish Wikipedia. They have a bot-generated stub for every single then-valid plant, but most of the time when I have looked at them they languish without improvement due to a lack of interest. The plant stubs are much the same here on the English-language version, but the fact that a human created them often shows an interest in a species, which is useful to me as a signal about what to work on first or next. So my answer is, "Let's not do much of this yet. Better a redlink than an LLM-generated article." We can change our collective opinion to allow it later, but I think it would be hard to unring the bell. 🌿MtBotany (talk) 18:19, 1 November 2025 (UTC)
BeanieFan11 (submissions) with 1,035 round points, mostly from 19 good articles and 21 did you know articles about athletes
vigilantcosmicpenguin (submissions) with 819 round points, mostly from 13 good articles and 11 did you know articles about a wide range of topics from abortion topics to African cities
TheNuggeteer (submissions) with 508 round points from 9 good articles, 4 good topic articles and 6 did you know articles mainly about Philippines topics, along with 19 good article reviews
The final round was very productive, and contestants had 2 featured articles, 4 featured lists, 106 good articles, 5 good topic articles, 178 article reviews, 76 did you know articles, and 9 in the news articles. Altogether, Wikipedia has benefited greatly from the activities of WikiCup competitors all through the contest. Well done everyone!
The top eight scorers will receive awards shortly. The following special awards will be made, based on high performance in particular areas of content creation. These prizes are awarded to the competitor who scored the highest in any particular field during the competition.
Gog the Mild (submissions) wins the featured article prize, with 12 featured articles total, and the featured topic prize, with 9 featured topic articles in total
AirshipJungleman29 (submissions) wins the featured picture prize, submitting the only featured picture in the entire contest during round 3
History6042 (submissions) wins the featured content reviewer prize, with 127 featured content reviews. He will also share the ITN prize, with 20 in the news articles in total.
BeanieFan11 (submissions) wins the good article prize, with 100 good articles total, and the DYK prize, with 147 did you know articles in total. He will also share the ITN prize, with 20 in the news articles in total.
Next year's competition will begin on 1 January. You are invited to sign up to participate. The WikiCup is open to all Wikipedians, both novices and experienced editors, and we hope to see you all in the 2026 competition. Until then, it only remains to once again congratulate our worthy winners, and thank all participants for their involvement!
Hello, MtBotany. Per your request, your account has been granted temporary-account-viewer rights. You are now able to reveal the IP addresses of individuals using temporary accounts that are not visible to the general public. This is very sensitive information that is only to be used to aid in anti-abuse workflows. Please take a moment to review Wikipedia:Temporary account IP viewer for more information on this user right. It is important to remember:
Access must not be used for political control, to apply pressure on editors, or as a threat against another editor in a content dispute. There must be a valid reason to investigate a temporary user. Note that using multiple temporary accounts is not forbidden, so long as they are not used in violation of policies (for example, block or ban evasion).
It is also important to note that the following actions are logged for others to see:
When a user accepts the preference that enables or disables IP reveal for their account.
Revealing an IP address of a temporary account.
Listing the temporary accounts that are associated with one or more IP addresses (using the CIDR notation format).
Remember, even if a user is violating policy, avoid revealing personal information if possible. Use temporary account usernames rather than disclosing IP addresses directly, or give information such as same network/not same network or similar. If you do not want the user right anymore then please ask me or another administrator and it will be removed for you. You may also voluntarily give up access at any time by visiting Special:Preferences. Happy editing! asilvering (talk) 03:31, 5 November 2025 (UTC)