Wikipedia:Village pump (technical)

From Wikipedia, the free encyclopedia
Page for discussing Wikipedia technical issues

The technical section of the village pump is used to discuss technical issues about Wikipedia. Bug reports and feature requests should be made in Phabricator (see how to report a bug). Bugs with security implications should be reported differently (see how to report security bugs).

If you want to report a JavaScript error, please follow this guideline. Questions about MediaWiki in general should be posted at the MediaWiki support desk. Discussions are automatically archived after remaining inactive for 5 days.

Frequently asked questions (see also: Wikipedia:FAQ/Technical)

If something looks wrong, purge the server's cache, then bypass your browser's cache.

This tends to solve most issues, including improper display of images, user-preferences not loading, and old versions of pages being shown.
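Mechanically, a purge is just the page URL with an extra request parameter. A minimal sketch of how that URL is formed (Python illustration; the page title shown is only an example):

```python
# Illustrative only: how a MediaWiki purge URL is formed.
from urllib.parse import urlencode

def purge_url(title: str, base: str = "https://en.wikipedia.org/w/index.php") -> str:
    """Build the URL that asks the server to rebuild its cached copy of `title`.

    Visiting it in a browser typically shows a confirmation button first;
    the API equivalent (action=purge) requires a POST request.
    """
    return base + "?" + urlencode({"title": title, "action": "purge"})

print(purge_url("Sense_and_Sensibility"))
```

After the server-side purge, a browser-cache bypass (e.g. a hard reload) is still needed to fetch the rebuilt page.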

No, we will not use JavaScript to set focus on the search box.

This would interfere with usability, accessibility, keyboard navigation and standard forms. See task 3864. There is an accesskey property on it (defaulting to accesskey="f" in English). Logged-in users can enable the "Focus the cursor in the search bar on loading the Main Page" gadget in their preferences.

No, we will not add a spell-checker, or spell-checking bot.

You can use a web browser such as Firefox, which has a spell checker. An offline spellcheck of all articles is run by Wikipedia:Typo Team/moss; human volunteers are needed to resolve potential typos.

If you have problems making your fancy signature work, check Help:How to fix your signature.

If you changed to another skin and cannot change back, use this link.

Alternatively, you can press Tab until the "Save" button is highlighted, and press Enter. Using Mozilla Firefox also seems to solve the problem.

If an image thumbnail is not showing, try purging its image description page.

If the image is from Wikimedia Commons, you might have to purge there too. If it doesn't work, try again before doing anything else. Some ad blockers, proxies, or firewalls block URLs containing /ad/ or ending in common executable suffixes. This can cause some images or articles to not appear.

For server or network status, please see Wikimedia Status. If you cannot reach Wikipedia services, see Reporting a connectivity issue.

Centralized discussion
For a listing of ongoing discussions, see the dashboard.

Searching for "Sense and Sensibility"


(@David10244 I split this question into a separate thread from #Hybrid Search: Phase 1 - Early A/B Experiment on the Android App above; I think it is not related to your issue. Matma Rex talk 02:58, 12 February 2026 (UTC))[reply]

@EBlackorby-WMF Will this fix the issue I see, where searching for "Sense and Sensibility", or "Pride and Prejudice" (without the quotes), does not bring up what's expected?
When I search for Sense and Sensibility, the suggested matches all start with "Sensibility"! David10244 (talk) 05:23, 7 February 2026 (UTC)[reply]
I don't get that result. Do you have a link? WhatamIdoing (talk) 05:26, 7 February 2026 (UTC)[reply]
@WhatamIdoing When I click the magnifying glass next to my name, then type sense and sensibility (actually, all lower case), the first suggested result is the article Sensibility. The next suggestion starts with Sensibility. I'm not at my desktop, so I can't give much more info. I can't make and attach a screenshot easily from my tablet (I don't know how). 🙂 David10244 (talk) 05:35, 7 February 2026 (UTC)[reply]
Screenshot of search results
Here's a WP:WPSHOT showing what happens when I type sense and sensibility (all lower case, as you can see) into the search bar at the top of the page.
Is that what you're talking about? Do you get different results when you do the same thing? WhatamIdoing (talk) 07:18, 7 February 2026 (UTC)[reply]
@WhatamIdoing Yes, I get different results. My list starts with "Sensibility" (pointing to the article by that name) and the second item is for the book named "Sensibility Objectified". It's very strange that we get different results. I don't have any local scripts in common.css or anything like that. David10244 (talk) 04:00, 11 February 2026 (UTC)[reply]
@David10244, please try this link: https://en.wikipedia.org/wiki/special:Search?safemode=1 and see if you get the same results. WhatamIdoing (talk) 21:59, 11 February 2026 (UTC)[reply]
@WhatamIdoing That link gives me the expected results, showing the movies and the book. David10244 (talk) 05:55, 13 February 2026 (UTC)[reply]
mw:Safemode turns off user scripts and gadgets, so that indicates that the problem is almost certainly somewhere in User:David10244/common.js (seems unlikely to me, but others will know more than I do) or in Special:Preferences#mw-prefsection-gadgets (I have no clue which). WhatamIdoing (talk) 06:02, 13 February 2026 (UTC)[reply]
Do you get different results if you try the different "Search completion" modes at Special:Preferences#mw-prefsection-searchoptions, or a different skin (e.g. Timeless)? Matma Rex talk 02:55, 12 February 2026 (UTC)[reply]
@Matma Rex I don't know; I don't want to change my search preferences or skin at the moment (I'm about to go to bed). I have never changed my search preferences OR my skin. I'll try soon. Thanks. David10244 (talk) 05:57, 13 February 2026 (UTC)[reply]
@Matma Rex Thanks for splitting this. David10244 (talk) 05:57, 13 February 2026 (UTC)[reply]

Page not being automatically archived, part 2


User:LaundryPizza03/CSD log is still not archiving. This is a followup to Wikipedia:Village_pump_(technical)/Archive_226#Page_not_being_automatically_archived. – LaundryPizza03 (d) 22:50, 2 February 2026 (UTC)[reply]

Pinging participants from the last discussion due to lack of participation: @Redrose64, PrimeHunter, Aidan9382, Graham87, and Jonesey95. – LaundryPizza03 (d) 20:40, 8 February 2026 (UTC)[reply]
For what it's worth, I checked and couldn't find anything wrong with the archiving setup this time. Graham87 (talk) 02:51, 9 February 2026 (UTC)[reply]
@LaundryPizza03: I don't know what stops the bot, but try archiving the rest of 2024 manually and see if it picks up 2025. PrimeHunter (talk) 14:00, 13 February 2026 (UTC)[reply]
There are 12 sections left after doing that, so I will have to wait until March to find out. Ask the owners of ClueBot III (talk · contribs) to test the archive function on this page. – LaundryPizza03 (d) 18:04, 13 February 2026 (UTC)[reply]

Deprecating and blacklisting archive.today


As some may be aware by now, the maintainers of archive.today (and archive.is, etc.) recently injected malicious code into all archived pages in order to perform a denial of service attack against a person they disliked (this can be confirmed via the instructions described here). While the malware has now been removed, it is clear that archive.today can't be trusted not to do this in the future, and for the safety of our readers, these archiving services should be swiftly removed and the websites blacklisted to prevent further use.

The rub is that https://archive.today alone is used in nearly 400,000 pages, and its sister site archive.is is used in around 50,000. We're clearly very dependent on this service, but we must find a way to break away from it. And fast. ChildrenWillListen (🐄 talk, 🫘 contribs) 00:26, 5 February 2026 (UTC)[reply]

I've been wondering the same thing. I don't have an answer, but I have notified Wikipedia talk:Link rot to get more attention to this issue. ClaudineChionh (she/her · talk · email · global) 01:05, 5 February 2026 (UTC)[reply]
Absolutely not. We are dependent on external sites for archiving and verification. I have in the past lobbied the WMF to acquire archive.org so it can meet our needs (or to set up our own version), but unless and until that happens we have to link to external sites for verification. Hawkeye7 (discuss) 01:24, 5 February 2026 (UTC)[reply]
We're not deprecating archive.org (run by the Internet Archive), only the archive services provided by whoever runs archive.today. The maintainers of that service cannot be trusted, for the reasons I described. ChildrenWillListen (🐄 talk, 🫘 contribs) 01:27, 5 February 2026 (UTC)[reply]
I don't trust the maintainers of archive.org. Hawkeye7 (discuss) 02:28, 5 February 2026 (UTC)[reply]
I trust them a far sight more than someone who has now verifiably used their ownership of a domain we link some 400k times to DDoS another website on the Internet. Izno (talk) 03:39, 5 February 2026 (UTC)[reply]
Maybe, but we need verifiability, and archival sites are required for that. We shouldn't sacrifice our primary mission. Hawkeye7 (discuss) 03:57, 5 February 2026 (UTC)[reply]
Archive sites that are willing to inject malicious JavaScript aren't particularly good for verifiability. AntiCompositeNumber (they/them) (talk) 04:31, 5 February 2026 (UTC)[reply]
They are better than nothing. The only solution is to run our own archive. Hawkeye7 (discuss) 04:46, 5 February 2026 (UTC)[reply]
Why insist on framing this as an all-or-nothing choice? Other archive sites exist, and they don't have a demonstrable history of weaponising their service. Treating it as an exceptional case isn't unreasonable.
Also, even if this were an all-or-nothing choice (which it isn't), Wikipedia's need for citations isn't more important than the security of users. Archive.today has demonstrably abused its users' trust (including that of Wikipedia's editors and readers) and cannot be considered safe. – Scyrme (talk) 20:46, 5 February 2026 (UTC)[reply]
  • Trash it completely. Archive.today has proven that it's not trustworthy as an archive source (unlike the Internet Archive), and links to it should be considered potentially malicious in nature. SilverserenC 04:58, 5 February 2026 (UTC)[reply]
    "Archive.today has proven that it's not trustworthy" – there are no (known) examples of its owner tampering with archived pages. "unlike the Internet Archive" – the Internet Archive removes archived copies regularly. sapphaline (talk) 14:56, 5 February 2026 (UTC)[reply]
    there are no (known) examples of its owner tampering with archived pages: Yes there are, see above. Injecting malicious JavaScript is tampering, visible or otherwise. If they are willing to do that, who knows when they'll decide to exploit zero-days or engage in blatant manipulation. ChildrenWillListen (🐄 talk, 🫘 contribs) 15:03, 5 February 2026 (UTC)[reply]
    They only injected it on the CAPTCHA page. sapphaline (talk) 15:04, 5 February 2026 (UTC)[reply]
    Which is only shown when you try to archive something, by the way. sapphaline (talk) 15:05, 5 February 2026 (UTC)[reply]
    I always see the CAPTCHA screen when I view an archived page for the first time. ChildrenWillListen (🐄 talk, 🫘 contribs) 15:06, 5 February 2026 (UTC)[reply]
    My point about them being trustworthy when it comes to archived copies stands. The Internet Archive is way less reliable in this regard, because archived copies can always be deleted there. sapphaline (talk) 15:20, 5 February 2026 (UTC)[reply]
    I'd rather have information lost than readers having to encounter malicious code whenever an archived copy is visited. Also, we know nothing about the maintainer(s) of Archive.today, how they make money, or even whether they're ready to pack up their bags tomorrow and leave. They're in a jurisdiction that's politically unstable and prone to censorship. None of these problems exist with the Internet Archive. ChildrenWillListen (🐄 talk, 🫘 contribs) 15:30, 5 February 2026 (UTC)[reply]
    A jurisdiction that's politically unstable and prone to censorship – you mean, like the United States? (I wish I were joking about my country in 2026.) Setting that aside, we shouldn't want any information lost just like that. We need a remedying/replacement process coming before a removal process. See my main comment below. Stefen 𝕋ower HuddleHandiwerk 15:34, 5 February 2026 (UTC)[reply]
    If this RFC is going to pass (which would be a very unfortunate result!), megalodon.jp archives archive.today snapshots almost perfectly (the only issue is that they're zoomed out and for some reason have a 4000px width, but this is trivially fixed by unchecking some checkboxes in devtools). Maybe the WMF could arrange some deal with their operators to archive all the archive.today links we have? sapphaline (talk) 15:43, 5 February 2026 (UTC)[reply]
    "how they make money" – why should we care about this? "if they're ready to pack up their bags tomorrow and leave" – archive.today has existed for nearly 14 years. There's a snowball's chance in hell they're going to shut the site down tomorrow or in any foreseeable future. "They're in a jurisdiction that's politically unstable and prone to censorship" – how do you know? "None of these problems exist with the Internet Archive" – the US is extremely prone to censorship and political instability, plus the Internet Archive removes archived copies on any request, not just governmental ones. sapphaline (talk) 15:35, 5 February 2026 (UTC)[reply]
    It is economically infeasible to hold trillions of archived pages and provide them indefinitely for free. We don't know how they're funding their project, which means we wouldn't know when this funding would dry up.
    Their willingness to inject malware over a petty dispute puts their stability in disrepute. If we get into the bad graces of these maintainers, who knows what they'll be willing to do to us?
    It's fairly well known that the maintainer(s) of Archive.today live in Russia, and that the main archive storage is also hosted in Russia. Sometimes they redirect certain IP addresses to yandex.ru, and of course, their official Wikimedia account Rotlink was created on ruwiki.
    However, you are right about the United States, sadly. ChildrenWillListen (🐄 talk, 🫘 contribs) 15:44, 5 February 2026 (UTC)[reply]
Actually the theory is Ukraine, not Russia, and the evidence is that they provision on global edge cloud providers (such as CloudFlare - but not CloudFlare). --GreenC 15:46, 7 February 2026 (UTC)[reply]
@GreenC What does "such as CloudFlare - but not CloudFlare" mean? David10244 (talk) 05:59, 13 February 2026 (UTC)[reply]
@ChildrenWillListen makes an important point. People are bringing up that WP already has 500K links to them. What if they introduce malicious code on just some of the archived pages (for instance, because targets of their malice would be more likely to access those links)? Aurodea108 (talk) 01:49, 9 February 2026 (UTC)[reply]
I can't think of an explanation for this that isn't malicious. You'd think the maintainer(s) of archive services wouldn't be stupid enough to try to get a blog removed from the internet as petty retaliation over some alleged doxxing. — DVRTed (Talk) 05:33, 5 February 2026 (UTC)[reply]
I agree, they should be blacklisted. It should have happened a long time ago, really, because of massive copyright violation: they distribute lots of content that the copyright owners only made available behind paywalls. See WP:COPYLINK: "if you know or reasonably suspect that an external Web site is carrying a work in violation of copyright, do not link to that copy of the work". — Chrisahn (talk) 11:11, 5 February 2026 (UTC)[reply]
I fully appreciate why this needs dealing with, but I am concerned about "the rub". We could end up harming verifiability on a *lot* of our content. Of course, we can leave citations in place without the archive.today links, but without the ready verification of having an article to load, I fear some useful article text could end up being removed by editors who decide they can't trust the listed source due to inaccessibility (typically those with little wiki experience). In cases where the paywalled content still exists, removal would be less likely, but in cases where the original link is permanently dead, it's not available on Archive.org, and we only have archive.today... yikes.
Deprecation makes sense as long as it doesn't include immediate removal before any replacement remedy is pursued. Any process that intervenes in using archive.today should encourage editors to directly replace these sources with archive.org links or newspaper.com clip links, or to locate alternate sources. I realize this is generally what deprecation means, but if the intervention can be clear and help the editor find an alternative, I would be more relieved about the ramifications of ditching this source. Stefen 𝕋ower HuddleHandiwerk 14:51, 5 February 2026 (UTC)[reply]
well-said. i support blacklisting as long as it is accompanied by an effort to find alternative solutions instead of just plain removal... sawyer *any/all* talk 15:33, 5 February 2026 (UTC)[reply]
Oppose – this will greatly harm verifiability. sapphaline (talk) 14:53, 5 February 2026 (UTC)[reply]
Agree with those above arguing that archive.today is simply not trustworthy enough to be sending our readers to. Adding malicious code to cause a DDoS on another website is an absurd thing for a website maintainer to do, and we shouldn't be facilitating their behaviour by sending more users to their site, nor simply hoping that they won't do something worse that targets our readers. SamWalton (talk) 15:03, 5 February 2026 (UTC)[reply]
You may be interested to see Wikipedia:Requests for comment/Archive.is RFC (2013, with consensus to remove the then 10k links and blacklist for the future, overturned by Wikipedia:Requests for comment/Archive.is RFC 4 in 2016). Andrew Gray (talk) 18:56, 5 February 2026 (UTC)[reply]
I saw that, yes. Perhaps it's time for Wikipedia:Requests for comment/Archive.is RFC 5? ChildrenWillListen (🐄 talk, 🫘 contribs) 19:01, 5 February 2026 (UTC)[reply]
Yes, I think so. As you've said above, they can't be trusted not to do that again in the future, so I would support blacklisting their links. Some1 (talk) 01:17, 6 February 2026 (UTC)[reply]
I think there needs to be an official RfC on this to get more opinions. Personally I think this shows that archive.today can't be trusted (if they do this over something rather petty, what's stopping them from putting more malicious code into archived pages, not just the captcha?), and it should be at least deprecated – but only if the links can be replaced with a different archive without loss of information. Suntooooth, it/he (talk | contribs) 20:15, 5 February 2026 (UTC)[reply]
They blatantly violated Wikipedia:External links#EL3, but you think we need to have a long discussion about whether malware-serving websites are sometimes okay?
If we're going to have an RFC, let's blacklist now and focus the RFC discussion on how to cope, rather than on whether we should provide links to malware-serving websites. WhatamIdoing (talk) 22:16, 5 February 2026 (UTC)[reply]
And if we can't quite bring ourselves to list the sites on MediaWiki:Spam-blacklist, then let's at least put up a warning via Special:AbuseFilter. WhatamIdoing (talk) 22:17, 5 February 2026 (UTC)[reply]
I think that formally gaining consensus is important when it affects as many links as this does, especially since even in this thread it hasn't been unanimous. If this affected a much lower number of links (think a couple of orders of magnitude lower), and links that would be easily replaced or removed, then I wouldn't be suggesting a full RfC. Suntooooth, it/he (talk | contribs) 00:40, 6 February 2026 (UTC)[reply]
I wonder if there's a way to add a warning in the articles. Something like [replace archive link] (and a category showing affected articles) might encourage people to start the process of finding other sources. It might be possible to do this automagically through the CS1|2 templates. I'm assuming that would catch most of them. WhatamIdoing (talk) 02:44, 6 February 2026 (UTC)[reply]
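The kind of template-level detection being discussed could be sketched as a simple scan of citation parameters. A hypothetical illustration only (the domain list, regex, and function name are assumptions, not an existing tool; the real CS1|2 modules are written in Lua):

```python
import re

# Domains commonly associated with the archive.today service (illustrative list).
ARCHIVE_TODAY_DOMAINS = ("archive.today", "archive.is", "archive.ph", "archive.li", "archive.md")

# Match |archive-url= (or |archiveurl=) parameters inside CS1|2 citation templates.
ARCHIVE_PARAM = re.compile(r"\|\s*archive-?url\s*=\s*(\S+)")

def find_archive_today_links(wikitext):
    """Return archive-url values in `wikitext` that point at an archive.today domain."""
    hits = []
    for url in ARCHIVE_PARAM.findall(wikitext):
        if any(d in url for d in ARCHIVE_TODAY_DOMAINS):
            hits.append(url)
    return hits

sample = ("{{cite news |url=https://example.com/story "
          "|archive-url=https://archive.today/2020/https://example.com/story "
          "|archive-date=2020-01-01}}")
print(find_archive_today_links(sample))
```

A scan like this over a database dump would be enough to populate a maintenance category or to estimate how many citations actually carry such links.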
It is in the realm of feasible just to turn off the display of archive links that are via archive.today/is in CS1/2. The real question is whether we can get everything, or whether we just have to start off with vanishing the big quantity of links. Izno (talk) 03:59, 6 February 2026 (UTC)[reply]
Removing archive links (even by just turning them off, rather than fully removing them) from this number of articles would be a huge hit to verifiability. If consensus is gained to remove archive.today links, there needs to be a mechanism for replacing them with other archives. Suntooooth, it/he (talk | contribs) 16:05, 6 February 2026 (UTC)[reply]
would be a huge hit to verifiability – I think turning their display off is a fair compromise on the road to removal and replacement. I do agree that half a million pages or links is a big number. A maintenance category would naturally be set up so we can actually find these quicker.
We probably also should notify WP:URLREQ and/or specifically @GreenC about this discussion. Izno (talk) 20:21, 6 February 2026 (UTC)[reply]
Good idea. Also, considering the RfC above, isn't it possible that many of the archive.today links on the encyclopedia aren't actually necessary? As in, they were added superfluously by the website operators themselves? Perhaps the true scale of the problem is much smaller, and we could vibe code a quick tool to check some of the links. audiodude (talk) 04:50, 8 February 2026 (UTC)[reply]
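A quick checker could exploit the fact that many archive.today snapshot URLs embed the original URL after the timestamp. A sketch under that assumption (illustrative only; short-code snapshots embed nothing and would need a different lookup):

```python
import re

def embedded_original(archive_url):
    """Extract the original URL embedded in an archive.today-style snapshot
    URL such as https://archive.today/2020/https://example.com/x.
    Returns None for short-code snapshots that don't embed the original."""
    m = re.match(r"https?://[^/]+/(?:\d{4,14}/)?(https?://.+)", archive_url)
    return m.group(1) if m else None

# A real checker would then send a HEAD request to the embedded URL to see
# whether the original page is still live; that network step is omitted here.
print(embedded_original("https://archive.today/2020/https://example.com/x"))
```

If the original (or a Wayback Machine copy of it) still resolves, the archive.today link is a candidate for replacement rather than a hard dependency.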
It's not feasible to blacklist it right now. sapphaline (talk) 07:20, 6 February 2026 (UTC)[reply]
Why do you think it's not feasible? What do you mean by "feasible"? — Chrisahn (talk) 15:18, 6 February 2026 (UTC)[reply]
Because we have ~500k+ links to it (including all of the different domain names). sapphaline (talk) 18:25, 6 February 2026 (UTC)[reply]
That's not a problem. We won't remove them (at least not for a while). Blacklisting means that no new links to these domains can be added. It doesn't mean existing links have to be removed. — Chrisahn (talk) 22:29, 6 February 2026 (UTC)[reply]
Every day, links are added because there is no other option. They literally are the only source for a large set of web pages on the Internet. This is why there are so many links. It's the only option. It is pragmatic. You and some others appear concerned about what is best for Wikipedia, but you don't seem concerned about the consequences, which are very real, immediate and large scale – it would cause significant damage to Wikipedia. Unlike the good feelings about punishing Archive.today for some transgression. What is more important? --GreenC 16:04, 7 February 2026 (UTC)[reply]
I've added archive.org URLs to lots of articles. In case a page hasn't been archived by them yet, I click "Save Page Now". I don't recall any significant problems, and I don't recall a URL that couldn't be archived. I'd say such URLs are pretty rare. — Chrisahn (talk) 22:50, 7 February 2026 (UTC)[reply]
Actually it's common. I speak from the data, not an opinion. --GreenC 23:27, 7 February 2026 (UTC)[reply]
The problems likely depend on the type of source, so some editors may encounter them fairly often, and others will not ever encounter them. WhatamIdoing (talk) 23:35, 7 February 2026 (UTC)[reply]
What data? — Chrisahn (talk) 23:36, 7 February 2026 (UTC)[reply]
@Chrisahn, you might want to read the bottom of his User: page... WhatamIdoing (talk) 01:00, 8 February 2026 (UTC)[reply]
Agree that there should be an RfC. The implications of the discussion and the potential actions taken by consensus will have far-reaching effects across the encyclopedia. Additional comment as a technical editor, not one who edits a lot of articles: if archive.today provides a copy of a paywalled or linkrotted news article, but the article was actually published by the news organization in question at some point, what does it matter if the archived copy isn't available? The citations are still technically valid, right? Does Wikipedia remove citations to books that are out of print? Does information exist if it's not on the internet, lol? audiodude (talk) 04:47, 8 February 2026 (UTC)[reply]
Yes, you're right that a Wikipedia:Convenience link (to the original and/or an archive) is not required if the news article is archived in some place that is accessible to the general public. For example, it's traditional for ordinary print newspapers to keep a copy of all their old newspapers, and many will either let the general public take a look or send the older ones to a local library or historical society. However, not all publications have a print edition, and some news outlets put more information/additional articles on their website. I have, for example, been disappointed that the paper copy of The Atlantic has fewer articles than their website. A web-only source needs a working URL, because sources must be WP:Published#Accessible. WhatamIdoing (talk) 06:04, 8 February 2026 (UTC)[reply]
Has anyone linked to the circus that occurred when archive.today first appeared? As I recall, they used extremely advanced (for the time) techniques to attack Wikipedia by edit warring their links into pages. The views that we have to keep using them miss the big picture: these guys are obviously up to something bad. The infrastructure and operational maintenance to support their system would cost a vast amount, and someone is planning to get a return on that investment eventually. It's much more effort than some libertarian philanthropist would support. Johnuniq (talk) 02:47, 6 February 2026 (UTC)[reply]
Yes, linked above: Wikipedia:Requests for comment/Archive.is RFC (2013, with consensus to remove the then 10k links and blacklist for the future, overturned by Wikipedia:Requests for comment/Archive.is RFC 4). audiodude (talk) 04:54, 8 February 2026 (UTC)[reply]
Technical question: How would blacklisting work? If I understand correctly, the idea is that blacklisting prohibits adding new archive.today (and archive.is, etc.) links, but we'll keep the existing ones for now. Specifically: if I edit an article and try to add a new archive.today link, I get an error message and can't save my changes. But if I edit an article (or section) that already contains one or more archive.today links and I make unrelated changes, there's no such error message. Is that correct? Can we make that work? A "dumb" edit filter (one that simply checks whether such links occur anywhere in the text I'm trying to save) won't work – it won't let me save unrelated changes. I can think of a few ways to implement a smarter filter, but I don't know if edit filters have access to the required information, or how efficient smarter checks would be. — Chrisahn (talk) 09:18, 6 February 2026 (UTC)[reply]
@Chrisahn Yes, this is how the built-in tools like MediaWiki:Spam-blacklist already work, and edit filters can also be made to work that way. They forbid adding new links to blacklisted domains, but if a link is already present in the article, it can be edited without tripping the blacklist. There are still some scenarios that cause problems (e.g. if vandalism deletes a citation that links to archive.today, you won't be able to revert it without removing those links first), but that hasn't stopped additions to the blacklist before. Matma Rex talk 14:41, 6 February 2026 (UTC)[reply]
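The "only newly added links trip the filter" behaviour described here can be approximated by diffing the link sets of the old and new revisions. A simplified sketch (Python illustration; MediaWiki's actual SpamBlacklist extension and AbuseFilter variables are more involved, and the regexes here are examples):

```python
import re

# Example blacklist pattern covering a few archive.today domain aliases.
BLACKLIST = re.compile(r"https?://(?:www\.)?archive\.(?:today|is|ph)\b")

def extract_links(wikitext):
    """Crude external-link extraction: any run of non-space after http(s)://."""
    return set(re.findall(r"https?://\S+", wikitext))

def blocked_additions(old_text, new_text):
    """Return blacklisted links present in the new revision but not the old.
    Pre-existing blacklisted links do not trip the check, so unrelated edits
    to a page that already contains them can still be saved."""
    added = extract_links(new_text) - extract_links(old_text)
    return sorted(url for url in added if BLACKLIST.search(url))

old = "See [https://archive.today/abc old ref]."
new = old + " And [https://archive.is/xyz new ref]."
print(blocked_additions(old, new))
```

This also shows Matma Rex's caveat: reverting vandalism that removed a blacklisted link makes that link "added" again relative to the vandalized revision, so the revert trips the check.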
Thanks for the explanation! Sounds good. — Chrisahn (talk) 15:17, 6 February 2026 (UTC)[reply]
There have been concerns about archive.is/archive.today going back more than twelve years; see for example Wikipedia:Requests for comment/Archive.is RFC. --Redrose64 🌹 (talk) 11:13, 6 February 2026 (UTC)[reply]
Hi!
archive.today is just a very useful website that can be used if archive.org is not helping.
The hosters of archive.today are not as reliable as the people who host archive.org. However, we don't know of any case where a snapshot was falsified, do we?
In this case, they "just" abused their visitors for a DDoS attack. Of course we should not support this. But this does not mean we have to definitively block the website.
By the way, blacklisting (via WP:SBL) without a previous removal of all links is not a good option, because this leads to several problems:
  • Moving parts of pages to other pages (e.g. archiving) is no longer possible if the moved text contains a link.
  • Modifying an existing blacklisted URL (e.g. link fixing) might trigger the SBL.
  • It's not possible to add blacklisted links to a discussion, which is challenging for some not-so-technical users.
In my opinion, a technical solution could be:
  • replace all links with an (unsubstituted) template (yes, this is a lot of work, but it could be partially automated);
  • if any problem with the domain occurs again, modify the template so that there is no longer a link to archive.today (and .is and all the other domains);
  • when the problem is solved, revert the template change;
  • if anybody adds a link to archive.today without the template, a bot could fix that afterwards, and the bot could write a message on the linker's talk page asking them to check whether they could find something better.
With a solution like this we would still have the benefits of the archived versions, but we could remove all links fast and at once if needed.
--seth (talk) 17:04, 6 February 2026 (UTC)[reply]
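The wrapper-template idea above amounts to a single point of control: every article calls the template, so changing the template body once changes all half-million renderings. A hypothetical wikitext sketch (the template name {{archive-today link}} and its parameters are invented for illustration, not an existing template):

```wikitext
<!-- Hypothetical wrapper call placed in an article citation: -->
{{archive-today link |url=https://archive.today/2020/https://example.com/story |title=Example story}}

<!-- Hypothetical body of Template:Archive-today link while the service
     is considered safe: render a plain external link. -->
[{{{url}}} {{{title}}}]

<!-- If the domain must be cut off site-wide, the body is edited once, e.g. to: -->
{{{title}}} (archive link temporarily disabled)
```

The trade-off is the migration cost: hundreds of thousands of bot edits to wrap the existing links, which is the objection raised in the reply below seth's post.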
A couple hundred thousand bot edits is not a good solution, either.
Trappist the monk might have some ideas about whether the citation templates could special-case these domain names for a while, while the work is done. A maintenance category, as Izno mentioned above, would also be a good idea. And even if we don't want to use the MediaWiki:Spam-blacklist quite yet, for fear that it will interfere with rearranging pages, we could implement a Special:AbuseFilter that would prevent people from adding any new ones. WhatamIdoing (talk) 21:33, 6 February 2026 (UTC)[reply]
I still advocate going back to the WMF with a proposal to create our own archive. This would get us off our dependence on external archive sites that we cannot control. Hawkeye7 (discuss) 22:25, 6 February 2026 (UTC)[reply]
@Hawkeye7: It may open us up to legal difficulties we can't easily handle, especially if we refuse to delete content as instructed by copyright owners. ChildrenWillListen (🐄 talk, 🫘 contribs) 22:27, 6 February 2026 (UTC)[reply]
Setting up an internet archive requires years of planning and work. I'd like us to start making tangible progress on resolving this problem today, or at least in the next week. Even if we thought that was legal and a good idea, creating our own archive isn't going to address the problem right now. WhatamIdoing (talk) 23:24, 6 February 2026 (UTC)[reply]
cs1|2 can special-case archive.today (and companion domains) if/when there is a consensus to deprecate/blacklist.
Trappist the monk (talk) 22:58, 6 February 2026 (UTC)[reply]
One major problem with the edit filter (and SBL/BED) is that many inexperienced people who trigger a rule just don't know what that means or what they should do. We often see that people who wrote large paragraphs and failed on the first attempt to save just run away, even though the warning said that if they are sure about what they are doing, they should just try to save again.
The filter (and SBL/BED) should be used when people intentionally (try to) spam. If they actually just want to help, then there's a risk of annoying/frustrating them. That's why – over time – I tend more and more to use notification bots and maintenance lists instead of the blacklist-like tools in cases where links are mostly added by non-spammers.
--seth (talk) 23:27, 6 February 2026 (UTC)[reply]
XLinkBot can deal with link additions, and we can modify the citation modules to not accept/show archive.today links anymore. After the number of links becomes manageable, an edit-filter-based solution would be a good idea. ChildrenWillListen (🐄 talk, 🫘 contribs) 22:36, 6 February 2026 (UTC)[reply]
Also, am I the only person concerned by how the OP downplays doxxing someone as a "petty dispute"? sapphaline (talk) 18:25, 6 February 2026 (UTC)[reply]
It might be concerning, but that's "two people have been bad people", and each should be judged on their own merits accordingly. You don't treat someone DDoSing another person off the Internet as a stable individual meriting half a million links from the most popular source of collated information on the Internet (and that's ignoring the prior dramas, as linked above). Izno (talk) 21:37, 6 February 2026 (UTC)[reply]
@sapphaline: I wasn't fully aware of what the DDoS was retaliation for when I started this thread, but either way, as @Izno said, they can't be trusted anymore, regardless of intention. ChildrenWillListen (🐄 talk, 🫘 contribs) 22:28, 6 February 2026 (UTC)[reply]
Doxing? Hardly. Quote: "While we may not have a face and a name, at this point we have a pretty good idea of how the site is run: it's a one-person labor of love, operated by a Russian of considerable talent and access to Europe."[1] — Chrisahn (talk) 22:40, 6 February 2026 (UTC)[reply]
Probably not important, but interesting: less than a day before this discussion was started, User:Masharabinovich, an account probably connected to archive.today, was renamed by request. — Chrisahn (talk) 22:47, 6 February 2026 (UTC)[reply]
Well, the article they tried to remove did say that "Masha Rabinovich" is a pseudonym they use when creating online accounts. ChildrenWillListen (🐄 talk, 🫘 contribs) 23:13, 6 February 2026 (UTC)[reply]
Another aspect: depending on how the FBI case against archive.today goes, there's a chance that these ca. 500,000 archive links in our articles will become useless in the not too distant future. — Chrisahn (talk) 01:05, 7 February 2026 (UTC)[reply]
This I agree with. Stay the course and wait and see. --GreenC 17:11, 7 February 2026 (UTC)[reply]
  • Prior to about 2015, the Wayback Machine did not systematically archive all links on Wikipedia. There are huge gaps prior to that date. Between 2012(?) and 2015, Archive.today systematically archived Wikipedia. Thus many dead links are only archived on Archive.today. The one time Archive.today got blacklisted, a long time ago, it didn't last long. People reversed it. Why? Because Archive.today is incredibly useful. It's that simple. It's pragmatic. They have the goods nobody else does. This incident with the CAPTCHA will soon be forgotten as inconsequential to Wikipedia. But blocking Archive.today will cause daily conflict with editors who need to use it because there is no other option. --GreenC 17:11, 7 February 2026 (UTC)[reply]
    The decision was reversed because the maliciousness of the maintainers used to be pure speculation. This is no longer the case.ChildrenWillListen (🐄 talk,🫘 contribs)17:14, 7 February 2026 (UTC)[reply]
Your wish to punish Archive.today over this silly incident (which they undid) would cause widespread and deep collateral damage to Wikipedia. --GreenC18:32, 7 February 2026 (UTC)[reply]
I think that would depend on how it's implemented. First, just to remind everyone,WP:Glossary#verifiable means someone canfind a reliable source. It does not mean that the Wikipedia article already has a little blue clicky number (that'sWP:Glossary#cited) or that the ref contains a functional URL. This means that if the Wikipedia article says "The Sun is really big", and there's no cited source, or the cited source is a dead URL, then that sentence is still verifiable, because an editor (or reader) could look up Alice Expert's book,The Sun is Really Big, and learn that the material in the Wikipedia article matches the material published in at least one reliable source. Removing archive links therefore doesn't (usually) destroy verifiability (unless that was the only source in the world that ever said that, and the original is a dead URL – in which case, are we really sure we should be saying that now?); it just makes verifying the information take more work.
Having looked at a too-small sample size (=4 articles) with these links, I think that some of these links are unnecessary and others deserve a{{better source needed}} tag no matter what the archive status is. I therefore think that checking and replacing sources might be a good thing, overall.WhatamIdoing (talk)19:20, 7 February 2026 (UTC)[reply]
A citation to a book is always verifiable. So is the NYT and other news outlets. But almost everything else is online-only, which is most of it. Without an archive, a dead website is unverifiable. Maybe wait 10 years for an archive to surface, but eventually it's gone. You might find other sources, but who is going to do that for half a million links? Certainly not the few people engaged in these conversations. Most people don't even verify sources, much less try to replace them with other sources. People are busy creating new citations with future dead links that nobody fixes. The debt continues to grow, and one of our best tools for dealing with it is now being threatened with removal. --GreenC19:44, 7 February 2026 (UTC)[reply]
Please look at the definitions I linked. We don't care whether "a dead website is unverifiable". (It's really none of our business whether people can double-check that some other website's content was taken from a reliable source vs is an original work.)
We care whether the content in the Wikipedia article is verifiable – and we care whether it's verifiable inany reliable source, not just the cited one.
Yes, you're right: half a million sources is a problem, and the debt continues to grow. To stop the bleeding, I think we should deprecate/discourage future additions of this source. To get the existing ones checked, I think we should have a tracking category, and maybe even a way to make this a more mobile-friendly and/or newcomer-friendly task. Based on my experience the other day, we're looking at about five minutes per source. Also based on my experience the other day, half the sources are unreliable ones anyway (at least for medical content).WhatamIdoing (talk)19:55, 7 February 2026 (UTC)[reply]
If Archive.today actually goes offline, then we have another problem. But treating it like it's already offline by adding{{dead link}} templates is backwards, since we don't know the future. The assumption that there are alternatives to Archive.today is a mistake. Most Archive.today links are added because Wayback can't do it. There are really only two games in town, and we are eliminating one. And you can't go back and fix it, either: you save the web page before it dies, or it's gone forever. Archive.today has a monopoly on many archived pages, and for many citations there are no better sources. Most people don't read these forums, but if you start blocking or hiding links, there will be many editors complaining. It's a major resource for our community with a large following. Nobody has really been notified about the RfC. --GreenC21:00, 7 February 2026 (UTC)[reply]
Roughly 2 years and 8 months seems like a decent chunk of time to me.Nil Einne (talk)05:09, 13 February 2026 (UTC)[reply]
Wikipedia:Requests for comment/Archive.is RFC 5 is live. This should probably be added toWP:CENT to get a more global consensus.ChildrenWillListen (🐄 talk,🫘 contribs)17:12, 7 February 2026 (UTC)[reply]
Hoisting a comment by @Sapphaline to the top level:
"megalodon.jp archives archive.today snapshots almost perfectly (the only issue is that they're zoomed out and for some reason have a 4000px width, but this is trivially fixed by unchecking some checkboxes in devtools). Maybe WMF could arrange some deal with their operators to archive all archive.today links we have?"Aurodea108 (talk)20:51, 15 February 2026 (UTC)[reply]

How to see Wikipedia articles in a category or a WikiProject list missing images?

[edit]

I think it would be useful to see lists of articles that do not include any image, maybe with a column for the linked Commons category if it exists and a column for the image(s) set on the Wikidata item if there are any. The articles could be those in a category, or especially some WikiProject list likeWikipedia:WikiProject Climate change/Popular articles orCategory:High-importance science articles (the corresponding articles, not the talk pages). I think it's not unlikely that some way to do this already exists.

Asking this in the context ofc:Commons:List of science-related free media gaps – this could be useful not just for adding images if a useful relevant high-quality one exists for the article, but also for identifying media gaps.

By the way, I was wondering whether to post this here or atc:Commons:Village pump/Technical.
Prototyperspective (talk)14:49, 5 February 2026 (UTC)[reply]

You could find articles that have been tagged withTemplate:Image requested, but I'm not aware of any way to look for untagged articles.https://pagepile.toolforge.org/ will let you define a list of target pages, and that list can be used by other tools for various purposes, but, again, I'm not aware of any tool that would import such a list and identify missing images.
Images are one of the key things that readers want to find in a Wikipedia article. It would be nice to have more emphasis on finding and adding appropriate images.WhatamIdoing (talk)23:59, 5 February 2026 (UTC)[reply]
Good idea – that method shows 191 pages inthis query which is something one can start with.
A way to list articles without images would probably show far more results, would be more dynamic, and could be useful in more ways. It would not rely on users adding that template, which is done relatively rarely. Additionally, having that template doesn't tell you whether the article lacks even an image illustrating the main subject or contains no images at all (which also implies there is no image for the article in the page-preview hovercard or in the Wikipedia app).
Agree with what you said there. Also of note: only very few users know of, see, and click the Commons category linked from an article – there are often high-quality files there, but pageview stats show that few people visit those pages. After creating many Commons categories, I found that over a year later most of them weren't even linked via the small, often overlooked{{Commons category}} template somewhere in the article. One can often find images that have sat in categories for years without anyone ever adding them to the article, including articles that don't contain a single image.Prototyperspective (talk)00:26, 6 February 2026 (UTC)[reply]
Well, as I like to tell the people fighting over infoboxes: It'd be better to start withCategory:Wikipedia articles with an infobox request than any random article. Searching by templates might be better if you get into larger groups of pages; deepcat searches sometimes time out for me.WhatamIdoing (talk)03:38, 6 February 2026 (UTC)[reply]
Regarding the search link that checks for templates instead of categories: I don't know why it only shows 70 results instead of all 191.
Regarding the search link that checks via the two categories: I've looked into it further and excluded all articles that are biographies or films. It now contains just 58 items instead of 191, and most of these are niche, low-importance articles where I can't see how an image would be very useful, or they already have an image for the article's topic (as in the case ofGypsum concrete). I nevertheless added the search query to the media gaps page.
"deepcat searches sometimes time out for me" – this happens for deeply nested categories, which is why it won't really work forCategory:Science currently. This may also be an issue here, because not all relevant articles in that category branch have been tagged with the WikiProject template yet. Additionally, it doesn't look like one can scan for articles that are in one category while their associated talk pages are in another. This would be useful because the WikiProject category is only set on the talk page. There are also ways to scan for articles in a category branch that don't yet have the WikiProject template, but it's complicated and I guess barely anybody uses that (a tool for that would be great, btw).Prototyperspective (talk)15:55, 6 February 2026 (UTC)[reply]
Petscan gets about 95% of the way there - you can ask for pages in a category that don't have "a lead image", which I think is the single image returned in the API. Pages with no images will presumably also have no lead image.Andrew Gray (talk)09:56, 6 February 2026 (UTC)[reply]
Following up on this - it seems "lead image" is defined bymw:Extension:PageImages and is a) one of the first four images in the lead section, which b) has a certain range of aspect ratios and c) is not explicitly excluded. So it is possible for an article with images to nonetheless show up as no-image here. But having said that...
It doesn't seem to be possible to do this in one step starting with a talkpage category (like importance tags), but it is possible in two steps via PagePile.
There's also an interesting query which I've found on Quarry and tweaked - thanks to @Cryptic for coming up with it originally - which identifiessix "top/high" importance Science articles with no image links on the page.Andrew Gray (talk)18:20, 6 February 2026 (UTC)[reply]
Interesting, thanks, I didn't know about the petscan feature to only show articles without lead image.
I tried to run it onCategory:Science, but that's not possible because the category has too many subcategories, and when limiting it to e.g. just 3 layers, thequery shows too many results (>60,000).
I first thought the approach of that petscan filter might not be adequate, since it also shows articles with images, even lots of images – but looking more closely, I'm not so sure anymore: e.g.Agricultural science is listed, but its infobox image does not illustrate agricultural science;Artificial intelligence is listed despite having many images, but it lacks an image at the top, such as a diagram explaining AI types and/or how AI works. Articles likeAnthropology also lack an image that illustrates the subject well. So maybe the issue is not with the methods but simply that there are so many articles missing images (I think the community hasn't really begun to systematically address this).
What would be the best ways to address this, taking into account these issues: prioritizing articles that are missing images, using only other methods that check whether there is any image at all in the article², somehow further filtering the petscan, or somehow extracting fields or large-order topics lacking images?
² Here's one additional way to check whether there's any image whatsoever (or animation, video, or audio) in an article:deepcategory:Science -insource:"[[File:" (82,511 articles, with incomplete results). Note: this query also shows articles with an image in the infobox, so these would also need to be excluded somehow (maybe by filtering out things like .png?). One couldcombine this with incategory:"Commons category link is on Wikidata" to see just articles with no image but a Commons category (2,086, so this one seems quite actionable).
"Pages with no lead image linked from Wikipedia:WikiProject Climate change/Popular articles (137/1000)" – nice query; this one seems quite actionable as well. I'll probably link it on the science-related media gaps page too, look for other similar WikiProject pages to create such queries for, and maybe extract some topics in need of illustration/images (note that an article with lots of images illustrating the various subtopics may not be missing an image much, even when there is no lead image and ideally we'd like one).
"interesting query which I've found on Quarry and tweaked … identifies six "top/high" importance Science articles with no image links" – weird that it only shows 6 items. So it seems this query is not currently useful, but maybe it can be tweaked further until it is. The description says "that have no images of any sort (not even those from templates like {{unreferenced}})", so that seems to be the cause here; maybe one could exclude images in such templates, but I also wonder whyResearch statement shows up despite there being several PDF document icon images on the page(?)
Thanks for your investigations and very helpful contributions on this issue.Prototyperspective (talk)14:12, 8 February 2026 (UTC)[reply]
The pdf icons aren't added by image links - in a template or otherwise - but by a css class. They're not detectable with queries against the database even if we wanted to (other than by searching for external links ending in ".pdf", which isn't practical).
Excluding images included by templates isn't possible either. We've been asking the developers for an equivalent for simple links in WhatLinksHere for over two decades. And it wouldn't help anyway, since it would also exclude images in infoboxes.
What would help is a list of specific files to ignore, like{{unreferenced}}'sFile:Question book-new.svg. Or I can write queries for non-free/non-existent lead images by talkpage categories/wikiproject ratings/etc. Asking atWP:RAQ is the best way for such requests not to get lost; my free time and attention are very limited this time of year. —Cryptic18:59, 8 February 2026 (UTC)[reply]
"For mid to large Wikipedias, shorter articles are less likely to have an image"
Is there any task, wish, or project page about enabling a functional Quarry query for finding articles without any images, via some list that specifies common icons used in templates (like the CC BY icon etc.)?
So images in infoboxes are taken into account (in imagelinks) in that query? (If they aren't, maybe one could take the results from the query and feed them into a second tool that checks for images in templates.) "queries for non-free/non-existent lead images" … that's a bit confusing to me – weren't you talking about the Quarry query earlier, which doesn't check only for lead images but for any images in the article? I would again find that more useful than scanning just for articles without lead images.
By the way, I randomly stumbled uponm:Research:Map of Visual Knowledge Gaps which seems to have some research on the thread topic. Haven't yet checked which method was used there to identify the articles. I've added the images included therein (on the right is one of them) to the category about the subject I created on Commons,c:Category:Images on Wikipedia.Prototyperspective (talk)19:45, 13 February 2026 (UTC)[reply]

Suggestion Mode – new Beta Feature on Tuesday

[edit]

Suggestion Mode is a new Beta Feature for the VisualEditor that proactively suggests actions that people can consider taking to improve Wikipedia articles, such as "add citation", "improve tone", or "fix an ambiguous link". The feature islocally configurable, and can be locally expanded. It will be available here as anew Beta Feature on Tuesday (and thus, in practical terms, only visible to experienced editors to begin with).

The goal of this limited early release is for us to work together to:

  1. Identify what issues and improvements need to be addressed before evaluating the impact of the feature on newcomers througha controlled experiment.
  2. Generate ideas for new suggestions you think would be worthwhile to implement.More on this below.

The feature is closely related to the existingEdit Check feature which shows actionable feedback to newcomers as they edit, and shares many configuration details with it.

Why Suggestion Mode?

Suggestion Mode is meant to benefit two audiences:

  • [Primary] Newcomers who are eager to edit but struggle with how to start doing so constructively; the feature also encourages them to explore the policies and guidelines.
  • [Secondary] Experienced editors seeking easier ways to find out what might need fixing, and to assemble the context needed to decide whether and how to act.
    • Note: volunteers have helpfully created many tools/gadgets/user scripts to help with the above.[2][3] Suggestion Mode seeks to make the functionality these tools offer easier for more people and in more languages to access.

How it works

When an editor who has the Beta Feature enabled opens an article with VisualEditor, if there are any of the available types of suggestion within the article content, then one or more suggestion cards will be shown alongside. Each card contains a description of the potential problem, a link to the policy or guideline the suggestion is based on, a button to start resolving the problem, and a way to provide feedback about the suggestion itself. You can see some examples and thefeedback flow below. Seemw:VisualEditor/Suggestion Mode#Design for more examples.

  • "Add a citation" example on the article w:en:Mango
    "Add a citation" example on the articlew:en:Mango
  • "Convert citation" example on the article w:en:Gooseberry
    "Convert citation" example on the articlew:en:Gooseberry
  • "Revise tone" example on the article w:en:Melon
    "Revise tone" example on the articlew:en:Melon
  • Desktop feedback workflow
    Desktop feedback workflow
  • Mobile feedback workflow
    Mobile feedback workflow

The team has started with an initial set of suggestions to demonstrate the concept. They are derived from existing tools, policies, and content guidelines. We're very interested in your recommendations for additional types of suggestions, to add to the growing list inT360489. The complete list of initial suggestions can be seen atSpecial:EditChecks. You can test the feature immediately, viathis user script.

Local configuration

The aspects of a suggestion that are community configurable will vary on a case-by-case basis. They can be configured by admins atMediaWiki:Editcheck-config.json, to enable/disable individual suggestion types and control parameters for each type (e.g. the categories and sections it should/should not be shown within, the cumulative edits someone must have made to see a suggestion, etc.). The listing of available parameters is atmw:Edit check/Configuration. In particular, thetextMatch suggestion type is a relatively simple system that finds words or phrases within the text, and suggests either replacing, deleting, or thinking about the text (along with a contextual guidance link). That sub-feature is easily expanded/adapted in any way you wish. In the future, we hope tosupport regex for these suggestions.

Known issues

The team is currently working on: checkY Adding the ability to include links within the text-match types of Suggestions (e.g. the "English variant specified" type will link to MOS:RETAIN next week) (T416511); checkY Adding theeditsuggestion-visible tag to monitor edits that are made when any Suggestions have been seen (T413419); Adding the ability to see the specific suggestions someone acted on within a given edit session (T416535); Improving the feedback flow to be more streamlined (T401739); Adding the ability to toggle the visibility of the Suggestions cards entirely (T415589).

Get involved

For now, Suggestion Mode will be available as a Beta Feature, in order to collect your recommendations for: changes to the default "descriptions" (both the wording and the links), feedback on the individual suggestions and their results, and requests/ideas for further types of suggestions. The team and some volunteers have been experimenting with the checks for the last few weeks, plusdiscussing the tool inDiscord and Phabricator, and the team has fixed a number of issues, but we need your help finding more ways to improve this feature. We also hope you will have additional ideas for new types of suggestions, which can either be implemented entirely locally as text-match suggestions or requested for developer assistance in making more complex suggestions; there is a listing of existing suggestions inT360489. Please share your thoughts on the feature either here or atmw:Talk:VisualEditor/Suggestion Mode, and use the built-in feedback system to share any details about problems with specific suggestions. Much thanks,Quiddity (WMF) (talk)00:29, 6 February 2026 (UTC)[reply]

Looks pretty cool. Can't wait to try it out. —TheDJ (talkcontribs)09:08, 6 February 2026 (UTC)[reply]
Agree, I think so too. This could be quite useful for the community. What I don't like is that it's just for the VisualEditor and not the wikitext editor (although this feature is probably most useful for newish editors, not so much for active editors who are already overburdened with tasks and don't benefit much from further ones – those editors are probably mostly using the wikitext editor, but I could be wrong about that).Prototyperspective (talk)15:28, 6 February 2026 (UTC)[reply]
Personally I think the difficulty involved in making it work in both modes would not be worth the extra dev effort. --asilvering (talk)18:36, 6 February 2026 (UTC)[reply]
Yeah, the problem is that you need twocompletely different systems when looking for suggestions to make and when applying them to the document. VisualEditor is easier to do, because it offloads the whole "you must parse and modify wikitext without breaking it" part to Parsoid, and lets us work with something that's already got some level of semantic meaning applied to it.
We actually could reuse this, kind of, by (essentially) running VisualEditor in the background, and having your wikitext sent into the API and parsed, working out what suggestions there are, then asking the API again to tell us what ranges in the wikitext source they correspond to. Then doing similar things when you want to take an action in response to a suggestion, etc. It'd be painful and slower, as you might imagine.DLynch (WMF) (talk)17:55, 9 February 2026 (UTC)[reply]
This feature is now available. You can enable it atSpecial:Preferences#mw-prefsection-betafeatures. (If you have previously selected the preference for "Automatically opt-in to new Beta Features", then you still need to open your Preferences page once, in order to enable any new type of Beta Feature.)
Please do share your thoughts and feedback (and especially your ideas for other types of Suggestion that could be implemented, either by the team or by yourselves locally via textMatch, that might be helpful for newcomers to act on and learn from) so that we can continue to improve it for you. Thanks.Quiddity (WMF) (talk)18:52, 11 February 2026 (UTC)[reply]

Geohack template broken

[edit]
Resolved

Can someone with the relevant permissions and technical knowledge please revertTemplate:GeoTemplate back to a state where English-language geohack works? The error seems to have been introduced yesterday.Tæppa (talk)13:28, 6 February 2026 (UTC)[reply]

It's not protected so I suggest reverting to an earlier working version. Pinging @Tæppa — Martin(MSGJ · talk)14:03, 6 February 2026 (UTC)[reply]
It is extended confirmed protected.Tæppa (talk)14:10, 6 February 2026 (UTC)[reply]
Wrong user! @Mapeh — Martin(MSGJ · talk)14:04, 6 February 2026 (UTC)[reply]
This is not a problem with Template:GeoTemplate (seeTemplate talk:GeoTemplate#What happened?). It's at toolforge, but I cannot work out the best way of filing aphab: ticket for a toolforge problem. --Redrose64 🌹 (talk)22:49, 6 February 2026 (UTC)[reply]
(Discussion also atw:Template talk:GeoTemplate#What happened?)
Pinging@Trappist the monk: who editedw:Module:Lang the day before some of the geohack language pages broke. My original thought was that the use of <br />{{lang|ar|خَرائط فلسطين المَفتوحة|rtl=yes}} broke the page, but <br />{{lang|he|עמוד ענן|rtl=yes}} has been there a while.
Just for interest:
Arabic (rtl) works[4], Hebrew (rtl) works[5], Pashto (rtl) doesn't work[6], Yiddish (rtl) doesn't work[7], Chinese (ltr) doesn't work[8], Russian (ltr) works[9].Tæppa (talk)00:08, 7 February 2026 (UTC)[reply]
Thechange I made atModule:Lang was for{{transliteration}}. At this writing, there are ten{{lang}} templates in{{GeoTemplate}}. All ten are used for presentation and have nothing to do with the&language= query portion of the geohack url.
Trappist the monk (talk)01:12, 7 February 2026 (UTC)[reply]
Thanks for the clarification.
For the record, all of the above links (Arabic through Russian) are now "broken" so it's probably nothing to do with the templates that the wikipedias have for each language.Tæppa (talk)22:02, 8 February 2026 (UTC)[reply]

As someone who reads a lot of geographic articles, this GeoHack glitch has been annoying for the last couple of days. I think the correct place to file a bug report ishere. In case it helps to debug: I notice that, upon opening GeoHack, the table of links to Google Maps, OSM, and other servicesvery briefly displays but disappears after a split second. If I use Firefox's "reader view", all the links are visible. HTH.~2026-87494-1 (talk)01:49, 9 February 2026 (UTC)[reply]

The reason Geohack is broken seems to be related to this change:1234316. Geohack depended on HTML comments to find the main content, and these HTML comments are removed in the latest MediaWiki update. --wimmel (talk)08:57, 10 February 2026 (UTC)[reply]
 Working again --Redrose64 🌹 (talk)18:44, 12 February 2026 (UTC)[reply]

£, s, d?

[edit]

Crosspost (sort of) fromTemplate talk:Pounds, shillings, and pence#Alternative coding, I've been looking for something that lets me output non-decimalised numbers. So far I've been using{{GBP|10 8s 9d}} to produce£10 8s 9d for example, but leaving what is effectively text inside the curly brackets doesn't feel right and might break downstream applications e.g. inflation calculators.

I suppose the ideal outcome might be{{GBP|x|y|z|nd}}, with numbers in place of x, y, z, and "nd" for "non-decimalised". Empty y or z values would return a negative sign, e.g. £x/-/z or £x/y/-, and an empty x value would skip the "£" symbol as well e.g. y/z, y/- or -/z. Of course, values y and z would also need to be capped, with any excess being moved to the prior column e.g. 30 shillings would be displayed as £1/10/-, unless some extra term like|abbr=on was included?
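For what it's worth, the capping behaviour described above could be implemented with ParserFunctions alone: convert the three positional parameters to total pence, then derive each column with floor and mod in {{#expr:}}. This is only an illustrative sketch, not an existing template; the unnamed parameters here are hypothetical:

{{#expr: floor( ({{{1|0}}} * 240 + {{{2|0}}} * 12 + {{{3|0}}}) / 240 ) }} (pounds)
{{#expr: floor( ({{{1|0}}} * 240 + {{{2|0}}} * 12 + {{{3|0}}}) / 12 ) mod 20 }} (shillings)
{{#expr: ({{{1|0}}} * 240 + {{{2|0}}} * 12 + {{{3|0}}}) mod 12 }} (pence)

With inputs 0, 30, 0 (i.e. 30 shillings = 360 pence), these evaluate to 1, 10, and 0 respectively, so the excess moves to the prior column and the output would be £1/10/-.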

Critically, the template system cannot require typing the £ symbol, because not everyone has that on their keyboard. Also, whatever system ends up working for this should be copy-able to other areas e.g. Australia where the three-piece currency system used to apply. ({{AUD|}} doesn't currently support £sd but it might one day.)

Anothersignalman (talk)16:03, 7 February 2026 (UTC)[reply]

@Anothersignalman: While this could be implemented, I'm unsure why you want this. What's the problem with just{{GBP|10}} 8s 9d to get£10 8s 9d without including any letters? –Scyrme (talk)23:08, 9 February 2026 (UTC)[reply]
Also, after looking at the code for{{£sd}} and{{GBP}}, it would probably be much more straightforward and cleaner to implement this as a new template rather than modify the existing ones. (And if you want the inflation calculator to work, that would probably require converting the input to decimal then converting it back. It doesn't look like the inflation calculator is set up to take nondecimal inputs.) –Scyrme (talk)23:21, 9 February 2026 (UTC)[reply]
I've since swapped over from{{GBP}} to{{Australian pound}} for my articles, which I didn't know about before and which seems to have solved my problem. My original concern was that I wanted to keep all numbers related to specific figures within a set of curly brackets as a matter of principle; the inflation element was a secondary concern. If I still needed this, I'd have accepted a modification to{{GBP}} so that it could take a decimal input and output the £sd arrangement, because I could then request or find a separate template that took £sd inputs and generated the decimal output, and put that inside the GBP set. This would also have solved issues where a source reported a cost as, say, 30s instead of £1 10s.Anothersignalman (talk)05:46, 10 February 2026 (UTC)[reply]
@Anothersignalman: Are you only using{{Australian pound}} with Australian predecimalised currency? The template links toAustralian pound when it uses £. If you're also intending to use this with UK currency, maybe it would be helpful to modify the template to allow that link to vary (or to display no link by default), and move the template to a broader title? ({{Australian pound}} would exist as a redirect after the move, so the existing uses would still work.) –Scyrme (talk)17:02, 10 February 2026 (UTC)[reply]
No, I'm writing an Australian article, I just used GBP because it had the right symbols and the two currencies were tied to each other in the relevant time frame.Anothersignalman (talk)08:10, 11 February 2026 (UTC)[reply]

Help with conditional expressions

[edit]

I am drafting a template atTemplate:Career FNCS results middle/sandbox. It works fine when I use#switch as follows:

{{#switch:{{{made_grands|}}}|true=Yes|false=''Eliminated in {{{elimination_stage}}}''}}

However, it seems to me that the code above could be re-written using#if as follows:

{{#if:{{{made_grands|}}}|Yes|''Eliminated in {{{elimination_stage}}}''}}

If I do this, and entermade_grands=false, the parameter is still treated as true, and thus produces "Yes". I am very new to conditional expressions; am I doing something obviously wrong?Rockfighterz M (talk)21:37, 9 February 2026 (UTC)[reply]

@Rockfighterz M:#if tests for non-empty. Seemw:Help:Extension:ParserFunctions##if.PrimeHunter (talk)21:52, 9 February 2026 (UTC)[reply]
@Rockfighterz M I think you're looking for#ifeq:, not#if:, but even then you'd need two nested if statements to cover the case where the value is neither "true" nor "false".--Ahecht (TALK PAGE)18:47, 10 February 2026 (UTC)[reply]
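For anyone finding this thread later, a sketch of the nested #ifeq: approach described above, using the parameter names from the sandbox template. Note this is an illustration, not the template's actual code; leaving the final fallback empty (for values that are neither "true" nor "false") is an assumption here:

```wikitext
{{#ifeq:{{{made_grands|}}}|true|Yes|{{#ifeq:{{{made_grands|}}}|false|''Eliminated in {{{elimination_stage}}}''|}}}}
```

Unlike #if:, which only tests whether the value is non-empty (so the string "false" still counts as true), #ifeq: compares against an exact string, so made_grands=false no longer falls through to "Yes".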

Mobile table of contents experiment: Phase 1

[edit]

Hi everyone,

I’m posting on behalf of WMF'sReader Growth team. The week of February 16 we will launch anA/B/C test that adds a Table of Contents on mobile web and auto-expands all article sections as a way to get readers information faster by addressing navigation difficulties. Our hypothesis is that by giving readers a table of contents to see sections at a glance, they will be able to more easily find what they’re looking for.

Why are we doing this?

We see this work as a part of addressingthe decline in pageviews on Wikipedia. We want it to be easier to access content on the site, especially on mobile, where newer readers tend to come in.WMF’s Reader Foundational Research found that difficulty with in-article navigation, particularly on mobile, is a top complaint among readers. We’re trying out a table of contents on mobile web to see if it supports ease of browsing, based on data that it can be helpful for navigating. The Wikipedia Android app, for example, has a table of contents, which on average gets opened almost 4 times per user, much more often than users start a search, which averages only 1.5 times a session. The app also sees a 71.1% clickthrough rate, indicating strong usage on small screens.

These screenshots show the two different table of contents buttons that will be shown to experiment participants in the two treatment options.

What idea are we testing?

Article sections are currently collapsed by default on mobile, which was intended to save users time in navigating as they scroll through long paragraphs of text. However, we suspect that this default may contribute to navigation difficulties since users must first open individual sections before reading. In December 2025, weconducted an experiment on Arabic, Vietnamese, French, Chinese, and Indonesian wikis to 1) auto-expand all sections in an article by default and 2) pin the header of the section in the viewport to the top of the page.

We found that this change actually lowered the retention rate for readers by about 1.5% and shortened the amount of time they spent onwiki. We suspect that auto-opening all the sections on mobile ended up causing navigation difficulties by creating a wall of text, resulting in readers feeling overwhelmed or frustrated and leaving. So we decided to try out something different.

Now we want to see if offering a Table of Contents will address those navigation needs. The new test will add a Table of Contents button on mobile. When users tap it, a panel slides up from the bottom showing the article’s section headings, which they can then click to jump to different parts of the page.

These screenshots show the two different table of contents interfaces that the two treatment groups will see.

What stage is this project in?

This project is inphase 1: launching a small test with an early version of these ideas.  It’s not yet clear whether this feature will be an improvement for readers, so we want to test it to determine whether to proceed intophase 2: building a feature.  

What is the timeline?

The experiment will go live the week of February 16 and will run for four weeks. It will affect up to 10% of mobile users on Arabic, Vietnamese, French, Chinese, and Indonesian Wikipedias and up to 1% of mobile users on English Wikipedia. Once we have the results, we will come back here to discuss them and decide whether we want to proceed with this idea.

Thank you!EBlackorby-WMF (talk)15:59, 10 February 2026 (UTC)[reply]

Something to entertain since you're poking around would be to display some lesser set of the table of contents, perhaps all items pointing to an (h1), h2, or h3, and excluding any pointing to h4s.Izno (talk)19:30, 10 February 2026 (UTC)[reply]
The TOC in the first image on the right is the more familiar & self-explanatory kind of TOC, I think. What is missing is a button to expand all sections easily with a click on desktop. Is there an issue about this? Glancing over the sections is a good way to find what you're looking for, or in discovery mode to see if there's something you may find interesting in an article, but having to uncollapse each section individually is too much, and one also starts to think about which sections may or may not be relevant instead of just doing that quick click.
Also, I kind of liked how the TOC used to be, in a way, because when opening an article one could see the TOC, and thereby a form of summary of the article's contents, at the top right away without having to click anywhere. I have the TOC collapsed to the top instead of the sidebar. On the other hand, the always-displayable TOC also has big advantages. Why not combine the best of both, or give logged-in users a setting to configure that: display the left sidebar with the TOC when the mouse goes to the minimized panel on the left, but when just reading the article make it a small panel that doesn't take up space. I could make an illustration, butthis video also shows what I mean. One could also have an option for whether the sidebar should display when opening the article, or only when hovering left. The sidebar is usually mostly whitespace, so I have it hidden even though I often like seeing the TOC and miss the quickly glanceable TOC at article opening. This would also make it faster and easier to find some info.Prototyperspective (talk)00:57, 12 February 2026 (UTC)[reply]

Indexed article omitted from Google

[edit]

Hi folks, not sure if this is the best place to post this but it seemed like a high-visibility spot where someone might have an answer. Feel free to move or copy my message elsewhere if you'd like.

I created the articleNova Scotia Guard on 1 July 2025. The article appears to be indexed, and is the third result on DuckDuckGo. However, it will not show up on Google at all. Even when searching "Nova Scotia Guard Wikipedia", you'll get articles it's linked to and even a category it's in, but not the article itself. I was particularly perturbed by the fact that the Grokipedia clone of the article shows up in Google, but not the one I created. I mentioned this in the Wikipedia Discord server some time ago and my results were replicated by several other users. Since then I created a redirect, edited the Wikidata item, and added more links to the article, but it hasn't changed anything.

My biggest concern here is that there may be other articles which Google is not showing in search results for one reason or another. If someone might be able to look into this I'd appreciate it. Thanks,MediaKyle (talk)16:04, 10 February 2026 (UTC)[reply]

It was marked as reviewedin September which should allow it to be indexed, but I checked Google Search Console and for some reason Google hasn't crawled it since July when it was noindexed. I requested a re-crawl, so hopefully it will start showing up soon.the wub"?!"16:44, 10 February 2026 (UTC)[reply]
@MediaKyle: It hasn't been edited since 20 August 2025, when it was still noindexed. I get the impression Google is watching our edit logs and often revisits a page shortly after it has been edited, so any edit (except an unloggednull edit) may influence them.PrimeHunter (talk)17:31, 10 February 2026 (UTC)[reply]
Thanks for the replies, I appreciate you both looking into this. I thought I edited the article the other day but I guess I did everything except that... Just made an edit. Hopefully it will show up soon and this is just an isolated incident. Cheers,MediaKyle (talk)17:49, 10 February 2026 (UTC)[reply]
@MediaKyle, fyi I just searched and we were the 3rd result.Dw31415 (talk)18:55, 10 February 2026 (UTC)[reply]
Thanks for letting me know, just checked and it shows up for me now as well. Seems this is resolved now... still have to wonder what other articles might be caught by this oddity but I can't imagine it's too widespread.MediaKyle (talk)19:01, 10 February 2026 (UTC)[reply]
Hm. I think maybe quite a lot, actually, if the reason something wouldn't be indexed is "no edits since an NPR hit the reviewed button". --asilvering (talk)05:22, 11 February 2026 (UTC)[reply]
This is pretty common (and pretty damaging for our Google scores probably) —TheDJ (talkcontribs)13:27, 11 February 2026 (UTC)[reply]
If this is common, then it would probably be good to have a query listing all of these pages, so that a bot could make an edit to each to get them indexed, I think. I kind of doubt this is common, though, when not counting articles with a delay of only 10 days or so; but even if it's not common, many pages could be affected.Prototyperspective (talk)11:37, 12 February 2026 (UTC)[reply]
@Prototyperspective Here you go -Quarry 102028 (I think, anyway). All pages on enwiki which are a) in the main namespace b) not a redirect c) marked as reviewed and d) have a last editolder than the review timestamp.
The interesting thing is that there are two different sets of answers here depending on how we look for the "review timestamp". In total, there are about 6000. But filtering only onptrp_tags_updated we have about 5600 entries, oldest review date 2026-01-01. Filtering only onptrp_reviewed_updated gets about 1600, oldest review date 2026-01-15.
Those are two suspiciously round numbers (one is this calendar year only, one is the last month only) so I am wondering if they are perhaps incomplete. Either way it might be an interesting list to investigate.Andrew Gray (talk)16:34, 15 February 2026 (UTC)[reply]
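For anyone wanting to reproduce this, a query along the lines Andrew Gray describes could be sketched as below. This is a hedged sketch, assuming the PageTriage replica schema (pagetriage_page, with ptrp_reviewed and ptrp_reviewed_updated) joined to the standard page/revision tables; the actual Quarry 102028 query may differ, e.g. by filtering on ptrp_tags_updated instead:

```sql
-- Sketch: mainspace, non-redirect pages marked as reviewed whose
-- latest edit predates the recorded review timestamp.
SELECT page_title,
       ptrp_reviewed_updated AS reviewed_at,
       rev_timestamp         AS last_edit
FROM pagetriage_page
JOIN page     ON page_id = ptrp_page_id       -- PageTriage row -> page
JOIN revision ON rev_id  = page_latest        -- page -> latest revision
WHERE page_namespace = 0                      -- a) main namespace
  AND page_is_redirect = 0                    -- b) not a redirect
  AND ptrp_reviewed > 0                       -- c) marked as reviewed
  AND rev_timestamp < ptrp_reviewed_updated;  -- d) last edit older than review
```

Swapping the last condition to compare against ptrp_tags_updated would reproduce the second, larger result set mentioned above.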

Temporary Wikipedian userpages

[edit]

In the past couple of weeksSpecial:WantedCategories has seen more than one recurrence of a redlinkedCategory:Temporary Wikipedian userpages that was deleted in 2016. Both times, it was populatedentirely by the user talk pages of editors who were blocked for vandalism in 2008, the first timeexclusively editors whose usernames began with W, and todayexclusively editors whose usernames began with V — and the culprit appears to be that said talk pages have recently beenundeleted onWP:DELTALK grounds, after having been previously deleted, and were thus put back into a category that existed at the time of the original deletion but has not existed for a decade.

So is there another way that this can be resolved without forcing me to gnome it out in anWP:AWB run every time it comes back again?Bearcat (talk)17:28, 10 February 2026 (UTC)[reply]

The category is currently empty. Please always include an example. I found one in your contributions:[10]. There is no way to prevent the categorization before you removed it. The page was undeleted byHex. Her logs show she undeleted many such pages beginning with V on 7 February and with W on 29 January. If she is planning to undelete more pages then you could ask if she will remove the category afterwards but now I have also pinged her.PrimeHunter (talk)18:03, 10 February 2026 (UTC)[reply]
Hiya - yes, I'm repairing amass deletion of user talk pages by a former admin back in 2008 before we decided not to do that. It's slow and tiresome because I check each of them to ensure there's nothing requiring RevDel before hitting the button, so I was planning to ask someone to get a bot to clear off the category afterwards. I guess you'd like me to arrange that now? Given that I've done about 600 out of 11,000, this is going to take a while. By the way Bearcat, since you evidently checked the logs, you could have written to me first before posting here. It's also kind of odd that you didn't mention me and so PrimeHunter had to send a ping.  —Hextalk19:58, 10 February 2026 (UTC)[reply]
@Hex: Now that the topic is raised, I do have to wonder what purpose is served by undeleting these talk pages. Was there a discussion somewhere that concluded these 11000 pages would be useful to mass-restore 18 years later?Anomie23:59, 10 February 2026 (UTC)[reply]
I have the same question. I looked atjust one example, which had five Linter errors and one nonexistent category. I expect to see deleted templates as well. Restoring these pages will make work for a lot of gnomes; what is the benefit, and where was the discussion about this restoration? –Jonesey95 (talk)00:32, 11 February 2026 (UTC)[reply]
No discussion was required. We established consensus a long, long time ago that user talk pages shouldn't be deleted except in rare circumstances because they form an important part of the historical record. When that happened, someone should have done this job, but nobody did. I'm rectifying that error. The effort of dealing with a small number of linter issues will be outweighed thousands of times over by the benefits of not having a massive chunk of user interactions and block log context missing for no good reason.  —Hextalk01:43, 11 February 2026 (UTC)[reply]
And what advantages are those, exactly? Since no one has cared in 18 years, it seems unlikely they're that significant.Anomie12:57, 11 February 2026 (UTC)[reply]
thousands of times over? Actual human and bot editors are going to have to make thousands of edits to remove errors from these restored pages. That is a guaranteed downside if thiscrusade project goes forward.Hex: Please enumerate concrete instances that balance the downside of those thousands of edits with benefits. I won't ask you to justify the obviously unjustifiable orders of magnitude that you claim. Just a simple positive or break-even counterbalance would be fine. –Jonesey95 (talk)15:09, 11 February 2026 (UTC)[reply]
A quick and dirty SQL query says that among the undeleted pages with linter issues, most have issues with obsolete tags, no background inline (which a number of Wikipedians regard as having a lot of false positives) and no end tags. Most of this can be automatically fixed. Everything else is less than 20 pages with issues.
Do users need to have a set number of edits to have userpages, and if so, why?Snævar (talk)17:01, 11 February 2026 (UTC)[reply]
When you ran your SQL query, did you run it on the originally restored pages to account for theLinter errors that have already been fixed and thecategories that needed to be removed? –Jonesey95 (talk)17:37, 11 February 2026 (UTC)[reply]
@Snævar: There is no restriction. Some editors create their user page as the very first edit; indeed, for some, it is theonly edit that they ever make. It's often harmless, provided it's not againstWP:UPNOT and isn'tspeedyable under the G and U criteria. But this thread appears to be about usertalk pages, these being the ones that Hex has been undeleting. Very few users create their own user talk pages, although some do. It's also not usually a Wikicrime. --Redrose64 🌹 (talk)23:29, 11 February 2026 (UTC)[reply]
@Anomie - who can say, really? People are interested in anything and everything. Even the most seemingly mundane detail in an archive may be exactly what some future historian is looking for as part of a research project. You could really say that about almost all of our archives, which we generate at a ferocious rate - page revision histories, system logs, talk archives. 18 years is also not very long at all. The discussion we're having right now will get archived, and then nobody might care about it at all for 25, 50, 100 years. But there may be a single historian in 2126 who it's useful to and is reading it right now. (Hello! Do you live in space? I'm sorry for what we did to the planet.) It's thatpossibility that we keep archives for.  —Hextalk20:30, 11 February 2026 (UTC)[reply]
That's a whole lot of maybes and hypotheticals.Anomie00:17, 12 February 2026 (UTC)[reply]
That's life.  —Hextalk00:22, 12 February 2026 (UTC)[reply]
So a moment ago it was the unsupportableThe effort of dealing with a small number of linter issues will be outweighed thousands of times over by the benefits of not having a massive chunk of user interactions and block log context missing for no good reason, and now it'sthat's life? I hope thatHex will considercleaning up the pages that they undelete (link to an example of a Linter error that is of a type that was completely eliminated years ago). Editors are responsible for their edits. This is like watching someone walk through my neighborhood throwing trash on the ground. –Jonesey95 (talk)14:25, 13 February 2026 (UTC)[reply]
If anything, Jonesey95's crusade against linter errors is way more harmful than Hex's undeletions, because they seem to have no issues with loudly complaining and upsetting people over it. The message above ("This is like watching someone walk through my neighborhood throwing trash on the ground") is a perfect example of this.sapphaline (talk)14:59, 13 February 2026 (UTC)[reply]
Re redlinked categories, I've told JJMC89 bot III (the bot that normally processesWP:Categories for discussion outcomes) to emptyCategory:Temporary Wikipedian userpages, so that problem shouldn't happen anymore. And I'll add (to counter Jonesey95's comment thatno one has cared in 18 years), that I approve of what Hex is doing, and have expressed dissatisfaction with this trend since at least 2019 (for exampleWikipedia:Requests_for_undeletion/Archive_344#c-Pppery-2019-11-07T21:55:00.000Z-User_talk_pages:_A). Admittedly that scattergun REFUND request wasn't my finest hour and I'm inclined to agree now with some of the comments criticizing that request, but the claim that nobody has cared isn't true, and if Hex wants to spend their time doing this then I would say more power to them.* Pppery *it has begun...18:26, 11 February 2026 (UTC)[reply]
Oh, thanks for the bot-herding, and your comments!  —Hextalk21:02, 11 February 2026 (UTC)[reply]
@Pppery: That was me, not Jonesey95. I guess good for you that you cared at one point in 2019? But not enough to have started a discussion beyond the REFUND you now admit wasn't a good one.Anomie00:23, 12 February 2026 (UTC)[reply]
  • @Jonesey95: Mentioning me in every single edit summary you make so that I come back to find I have 75 notifications is unbelievably petty and childish. Grow up.  —Hextalk15:15, 13 February 2026 (UTC)[reply]
    It was a boilerplate edit summary. What is unbelievable is how much work you are making for your fellow editors. Please clean up the pages that you are restoring. Editors are responsible for their edits. See below for something more constructive. –Jonesey95 (talk)15:23, 13 February 2026 (UTC)[reply]
    Two different boilerplate edit summaries which you wrote yourself to specifically mention and talk to me, which you've now stopped doing after getting called out on it. Sure dude.  —Hextalk16:01, 13 February 2026 (UTC)[reply]
You asked me to stop, so I stopped. That's the polite thing to do. See below for an example of an editor who did not stop causing problems when asked to do so. –Jonesey95 (talk)16:43, 13 February 2026 (UTC)[reply]

Exploring a better process

[edit]

I just did apartial cleanup on 88 User talk pages restored byHex, fixing types of Linter errors that we eliminated from the English Wikipedia many years ago, and deleting nonexistent templates. A bot also removed nonexistent categories from many of the restored pages. This work took me about an hour that I otherwise would have spent fixing other problems or making actual improvements to Wikipedia. Bots and human editors will be needed to clean up "obsolete tag" Linter errors on a couple hundred additional pages that Hex recently restored.

I suspect that there is a better way forHex to achieve their goals while avoiding this unnecessary work. I can think of a few options:

  1. Stop restoring these pages.
  2. Restore the pages, then fix all errors on the pages (both actions would be performed byHex).
  3. Restore the pages and then blank them. The supposedly valuable information would still be available in the pages' histories.
  4. Now that there is actual time-based evidence of the cost of restoring these pages, explain in detail the thousands of hours of benefits that will accrue to future editors, readers, and researchers from restoration of these 88 pages. If it is really worth it, I can live with the extra work.

There are probably additional options. I ask that Hex stop restoring pages until a better workflow can be developed. –Jonesey95 (talk)15:23, 13 February 2026 (UTC)[reply]

If you don't like the job that you chose to do, you should probably stop doing it.  —Hextalk15:26, 13 February 2026 (UTC)[reply]
Please stop violating the guideline atWP:REDNOT, specifically "Do not create red links to: Transclusions of templates that do not exist." –Jonesey95 (talk)15:37, 13 February 2026 (UTC)[reply]
Hex is actively restoringpages with errors (this page did not exist half an hour ago), despite the above request to pause. –Jonesey95 (talk)15:50, 13 February 2026 (UTC)[reply]
@Hex I don't think that is a reasonable response when your project is actively causing problems.Qwerfjkltalk17:07, 13 February 2026 (UTC)[reply]
You're entitled to your opinion.  —Hextalk17:19, 13 February 2026 (UTC)[reply]
I'll also add my voice here that I think you should stop what you are doing and seek consensus for it. If anyone edited 11k pages without even a single discussion they'd get blocked immediately. Being an admin does not give you any special rights to bypass this process.Gonnym (talk)20:49, 13 February 2026 (UTC)[reply]
We had the discussions about user talk pagesfrom 2006–2010. In fact, the day after tomorrow is the 20th anniversary ofWP:DELTALK. If you want an MfD for 11,000 user talk pages trying to retrospectively overrule that consensus, well, good luck.  —Hextalk22:05, 13 February 2026 (UTC)[reply]
Find me a consensus that isn't 16-20 years old please. en.wiki has changed dramatically since then and I'd like to see recent consensus that agrees that mass restoring 11k pointless talk pages is wanted.Gonnym (talk)08:34, 14 February 2026 (UTC)[reply]
Starting from 12:27, 7 February 2026, out of the 605 user talk pages Hex has restored, 215 pages currently have at least one lint error (382 have no errors, and 8 were re-deleted). Here's a list in case anyone is interested in fixing those lint errors specifically:User:DVRTed/sandbox4. —DVRTed (Talk)16:22, 14 February 2026 (UTC)[reply]
FWIW, I have already fixed all, or nearly all, of the Linter errors in these pages other than "obsolete tag" errors(the dark mode issues are not worth bothering with at this time, which is another discussion). I think I also fixed all of the nonexistent templates. We have a bot that can fix many pages that have only obsolete tags on them, so for human editors interested in fixing Linter errors, there are plenty of non-bot-fixable pages to focus on. The bot will make its way around to these pages eventually (it is currently fixing a batch of many tens of thousands of pages, possibly as many as 300,000, containing an error caused by a substed template; aren't you glad you're not a bot?). –Jonesey95 (talk)05:07, 15 February 2026 (UTC)[reply]

I think the real issue here isWP:MEATBOT. I don't have an opinion on whether these pages should be restored, but I can understand that folks find undeleting 500 pages in a day to be disruptive when the whole project averages closer to 30-35 per day. Obviously pages are going to be restored from time to time, even ancient ones, IMO it's the scale at which it's happening that's upsetting people.

MariaDB [enwiki_p]> SELECT
    ->     LEFT(log_timestamp, 8) as restore_date,
    ->     SUM(CASE WHEN actor_name = "Hex" THEN 1 ELSE 0 END) as restorations_by_hex,
    ->     COUNT(*) as total_restorations,
    ->     ROUND(100.0 * SUM(CASE WHEN actor_name = "Hex" THEN 1 ELSE 0 END) / COUNT(*), 2) as pct_by_hex
    -> FROM logging
    -> JOIN actor ON log_actor = actor_id
    -> WHERE log_type = "delete"
    ->     AND log_action = "restore"
    ->     AND log_timestamp >= "20260125000000"
    ->     AND log_timestamp < "20260301000000"
    -> GROUP BY LEFT(log_timestamp, 8)
    -> ORDER BY restore_date DESC;
+--------------+---------------------+--------------------+------------+
| restore_date | restorations_by_hex | total_restorations | pct_by_hex |
+--------------+---------------------+--------------------+------------+
| 20260213     |                 500 |                576 |      86.81 |
| 20260212     |                   0 |                 47 |       0.00 |
| 20260211     |                   0 |                 50 |       0.00 |
| 20260210     |                   0 |                 27 |       0.00 |
| 20260209     |                   0 |                 31 |       0.00 |
| 20260208     |                   0 |                 29 |       0.00 |
| 20260207     |                 105 |                154 |      68.18 |
| 20260206     |                   1 |                 34 |       2.94 |
| 20260205     |                   0 |                 23 |       0.00 |
| 20260204     |                   0 |                 36 |       0.00 |
| 20260203     |                   0 |                125 |       0.00 |
| 20260202     |                   0 |                 48 |       0.00 |
| 20260201     |                   0 |                 24 |       0.00 |
| 20260131     |                   0 |                 64 |       0.00 |
| 20260130     |                   0 |                 44 |       0.00 |
| 20260129     |                 259 |                287 |      90.24 |
| 20260128     |                   0 |                 41 |       0.00 |
| 20260127     |                   2 |                 63 |       3.17 |
| 20260126     |                   0 |                 66 |       0.00 |
| 20260125     |                   0 |                 40 |       0.00 |
+--------------+---------------------+--------------------+------------+
20 rows in set (0.134 sec)

Restoring11 pages in a single minute is clearly bot-like behavior, and should go through some sort of approval, at which point we can figure out details on how to coordinate with other cleanup bots and humans. Restoring 11k user talk pages doesn't fall underWP:MASSCREATE because they aren't articles, but I think following that guidance would ease bad feelings on both sides.Legoktm (talk)00:23, 14 February 2026 (UTC)[reply]

Call it bot-like if you wish but this is very, very simple work that requires only a glance at the edit history of pages that are 90% just a single block message or maybe a couple of warnings before that. It is even so still work requiring human attention and not a bot. It's also incredibly boring, and because unlike some people I understand thatthere is no deadline I'm not in some all-consuming rush to get this done. It's also how I approach my backlog of grindy projects, of which I have many. Do some to scratch an itch, then forget about it for a while. I started this project a year and a half ago - that's how long it took me to get over the boredom of the last bunch of undeletions. After doing 500 yesterday that itch has been scratched for now until I regain the energy to think about it more, but I'm probably going to be seeing this site in my sleep for a week. It would probably have come up for a bit more scratching relatively soon now that I've gotten a feel for it again, but after the toxic behavior on display in this discussion it's retreated a long way and is unlikely to see the light of day for quite some time. I'll note again here that we could easily have had a good-natured chat about all of this on my user talk page, but someone chose to passive-aggressively post here in a way that they knew would cause drama. For shame.
If people want to artificially limit progress on rectifying this big, stupid mistake from the past, they could at least volunteer to help out with it. I'm the only one doing it, and hamstringing me on the occasions that I feel sufficiently motivated will achieve nothing. If there are more people working on it then that will make a difference even with a go-slow sign on the side of the road.
Have a good weekend, I intend to.  —Hextalk11:18, 14 February 2026 (UTC)[reply]
this is very, very simple work that requires only a glance at the edit history of pages illustrates the problem. The technical work of looking at history and clicking a button is simple, but the job is not done at that point. Instead of moving on to the next boring page restoration, the restoring editor, who has now created one or more problems on a Wikipedia page, bears some responsibility for resolving those problems. The editor should remove nonexistent categories and templates and do their best to fix wikitext syntax errors. The red categories are easy to see in preview. The nonexistent templates are easy to see in "Pages included in this section:". And many of the syntax errors are easy to see using the syntax highlighter gadget. Please fix the errors that you are creating, now that you have been notified that you are creating them. –Jonesey95 (talk)13:09, 14 February 2026 (UTC)[reply]
@Hex, I hope you don't feel that push back to your project is toxic.Jonesey95, in particular, has tried to offer alternative solutions that wouldn't cause issues for other editors, but you seem to have dismissed them out of hand.
I'll note again here that we could easily have had a good-natured chat about all of this on my user talk page, but someone chose to passive-aggressively post here in a way that they knew would cause drama. For shame.
Isn't this really the crux of the problem?Bearcat often asks here for help with his maintenance work keepingSpecial:WantedCategories clean - I don't know where you're getting that this is intended to cause drama (see alsoWikipedia:Aspersions). But clearly this is causing issues for other editors, because that is what precipitated this thread.Qwerfjkltalk15:04, 14 February 2026 (UTC)[reply]
FWIW I also feel it's useful to restore these talk pages. However Hex, I have a question: you've mentioned you're checking to see if there's anything that needs to be revdeleted, that's great. But are you also checking why the page was deleted? Because IMO in any case where it was deleted by request of the editor whose talk page it is, you should be blanking the talk page by default. Blanking a talk page is perfectly in line with policy and practice. And if the editor asked for it to be deleted 18 years ago or whatever and this was granted given the norms of the time, and it's now being undeleted because of policy changes, we should still grant this editor the courtesy of blanking it for them, as the closest thing we can do in line with our current policies which fits their request. Even in cases where it wasn't at the request of the editor the talk page is for, I don't see any harm in blanking it. Especially since we will probably never know if the editor it's for might have blanked it if it were possible, and they didn't reasonably expect it to come back 18 years later. So IMO the solution which will also allay the other concerns is for you to blank it after restoration.(To be clear, this means you probably don't have to check so well why the talk page was deleted.) If you want to get technical, you could ensure you keep any declined unblock requests if the editor is still blocked, but frankly after 18 years of a long-deleted talk page, that's not particularly important IMO. BTW, I could have approached you with this directly, but since we're already discussing it here, I felt it best just to mention it here.Nil Einne (talk)16:09, 15 February 2026 (UTC)16:19, 15 February 2026 (UTC)[reply]
To be clear, IMO the primary reason for blanking the talk page in a case where the editor it's for requested deletion is that we should assume it's the closest thing we can do that follows their wishes. And it's something there's a fair chance they would have done if told 18 years ago, "sorry, we can't delete the talk page, but you're free to blank it". (I think, but am not sure, that nowadays some admins may blank a talk page if an editor requests deletion.) The fact that it deals with the other problems that come from a very old page being restored is only an added bonus.Nil Einne (talk)16:13, 15 February 2026 (UTC)[reply]
I agree that restoring and then blanking would be an acceptable path forward. I proposed it as option 3 above. It would alleviate the Linter errors, the category errors, and the nonexistent template errors, which (if I am reading correctly) would address all of the editors' complaints in this thread. –Jonesey95 (talk)16:21, 15 February 2026 (UTC)[reply]
Looking more, it seems most or all of these were deleted as temporary user pages rather than on request. Even so, I still feel simply blanking them is the best option, given, as I mentioned, that it might have been carried out (or requested, if the editor lost talk access) in the 18 years since if they hadn't been deleted. And the user has no reason to expect the page would suddenly come back. If the editor objects, they're free to unblank it, but blanking when the editor doesn't want it seems the lesser of two possible evils. For IP talk pages the situation is different; however, it was normal to clear out old messages to reduce confusion. And while there's no need for it now that we have TAs, there's also no harm in it. (Frankly I wonder if we should just blank all IP talk pages, but that's a discussion for another day.)Nil Einne (talk)16:48, 15 February 2026 (UTC)[reply]
I'm in support of blanking as well, it takes care of all the different issues nicely. Would be good to have a flagged bot do it so it doesn't trigger extra notifications for these users though. (@Nil Einne:VulpesBot is supposed to take care of blanking IP talk pages)Legoktm (talk)18:24, 15 February 2026 (UTC)[reply]
@Hex: Sorry, I intended the end of my message to be proposing a (hopefully) positive path forward, specifically "...we can figure out details on how to coordinate" was me explicitly volunteering to help!! I think treating this as a bot task will actually speed up what you want to accomplish rather than "artificially limit progress".
I do disagree with your assertion that this isn't a bot task. Doing something across 11k pages, even with minimal human input, is just a semi-automated bot instead of a fully automated one. To quoteMEATBOT:Editors who choose to use semi-automated tools to assist their editing should be aware that processes which operate at higher speeds, with a higher volume of edits, or with less human involvement are more likely to be treated as bots. If there is any doubt, you should make abot approval request.Legoktm (talk)18:21, 15 February 2026 (UTC)[reply]

Section of text shows up orange but only in some cases

[edit]

It is the section starting with "include obviously. It is absurd to say that we should say "he had never been arrested before" in[11]. See the discussion there about this. Thanks.Doug Wellertalk18:40, 10 February 2026 (UTC)[reply]

You are using one of the scripts that check links for reliability (Headbomb's I believe). It highlights theentire list item in which the unreliable link appears. I skimmed it so I can't say which specific link.Izno (talk)19:28, 10 February 2026 (UTC)[reply]
Thanks. .Doug Wellertalk19:54, 10 February 2026 (UTC)[reply]
It would be great if somebody could change that script so it doesn't highlight replies on discussion pages or anything on discussion pages that aren't article talk pages. I'm having the same problem of random replies being marked in red and lots of users have that script installed.Prototyperspective (talk)11:41, 12 February 2026 (UTC)[reply]
User:Headbomb can you help? Thanks.Doug Wellertalk19:15, 12 February 2026 (UTC)[reply]
Should be fixed.Headbomb {t ·c ·p ·b}19:23, 12 February 2026 (UTC)[reply]
Thanks for looking into it.This whole thread is still red and I think it's due to your script; could you check?Prototyperspective (talk)19:42, 12 February 2026 (UTC)[reply]

Side box + floatright combination squishes text on mobile

[edit]

Hello there :) Apologies if this has been discussed elsewhere or is otherwise known, but I noticed an ugly visual bug resulting from the combination of a{{side box}} and (a table with) thefloatright class. You can see the effect atEjective consonant, opening the mobile version from a narrow enough screen (or emulator - the "iPhone SE" preset in Chrome devtools is perfect). I tried fiddling with it for a bit but didn't find a convincing solution, or one in which I'm sufficiently confident (e.g., would it make sense to add a content-based width to{{side box}}?). I'm also not familiar with the available layout templates and classes here on enwiki, so y'all may already have a simple solution that I'm not aware of.Daimona Eaytoy(Talk)22:04, 10 February 2026 (UTC)[reply]

This should be changed in thefloatright class definition. Memory says this class (and its friends) used to be wrapped in a media query on mobile such that it only took effect above a certain width. I have been meaning to make that how it works globally and just have been ~lazy~. cc @JdlrobsonIzno (talk)00:51, 11 February 2026 (UTC)[reply]
Those rules exist but they only work on responsive skins (not resized skins). They work fine on mobile devices and people using desktop site on mobile.
Generally people shout at you when you give any kind of impression you are making their favorite pre-2011 skin "responsive" or mobile-like, which is why we unfortunately intentionally dont have a response version of the Vector 2022 (or Vector classic) skin, which makes me sad.
https://gerrit.wikimedia.org/g/mediawiki/core/+/257e5d8c0a298d31870727cba144dc6c46ecec5f/resources/src/mediawiki.skinning/content.thumbnails-common.less🐸 Jdlrobson (talk)02:21, 11 February 2026 (UTC)[reply]
@Jdlrobson Ok, 1) I'm not crazy, and 2) the styles there aren't applying in Minerva right now?https://en.wikipedia.org/wiki/Ejective_consonant?useformat=mobile only has the clear-right-float-right rule, not the clear-both-float-none rule, at the appropriate width.Izno (talk)02:42, 11 February 2026 (UTC)[reply]
Because Minerva isn't using that version, it has its own.[12]Izno (talk)02:45, 11 February 2026 (UTC)[reply]
Phab:T368469🐸 Jdlrobson (talk)02:56, 11 February 2026 (UTC)[reply]
Thanks for the context :) I agree that the floatright class is ultimately responsible, although I guess I was also wondering if there's a simpler fix to apply to either of the involved templates while we wait for the proper fix. --Daimona Eaytoy(Talk)12:01, 11 February 2026 (UTC)[reply]
"we unfortunately intentionally dont have a response version of the Vector 2022" -are you sure about that?sapphaline (talk)12:05, 11 February 2026 (UTC)[reply]
intentionally is doing the heavy lifting there. That a parameter works with the skin doesn't imply that was intended (i.e. designed-for).Izno (talk)17:39, 14 February 2026 (UTC)[reply]

Nesting templates

[edit]
  1. Is there a way for a template to read the parameters of a template nested inside it? For example, if I had{{template one| {{template two |para1=some |para2=thing}} }}, is there some way to code "template one" to read the parameters from "template two"? Or can "template one" only ever read the output of "template two", not the inputs?
  2. I've seen source code for some templates use#invoke where they could use an existing template. Is there a benefit to doing this, particularly for sidebars? What's the rationale for invoking a Lua module rather than just using{{sidebar}}?

I don't have a specific goal in mind with these questions, I'm just trying to understand how this works a bit better. –Scyrme (talk)05:06, 11 February 2026 (UTC)[reply]

@Scyrme:
1. A template cannot do this. Via a module it can read the source text of the whole page and search this source text for strings like a specific template name, but we only do that in special cases likeModule:Auto date formatter. It doesn't sound suitable for your purpose. It also relies on the parameter being present in the source text.
2.It reduces thepost-expand include size to invoke a module directly instead of via a template. SeeTemplate:Navbox#Technical details.
PrimeHunter (talk)05:29, 11 February 2026 (UTC)[reply]
@PrimeHunter: Thanks!
While I was searching, the closest thing I found was{{get parameter}} and{{template parameter value}}. Like you said, it looks like they invoke a Lua module to read the source of a specific article and extract the value of a parameter of a particular template on that page, as opposed to reading a value from a parameter of a child template. So it seems you're right.
If anyone knows differently, let us know. –Scyrme (talk)03:45, 14 February 2026 (UTC)[reply]
The advice given is correct. The innermost template is expanded first, starting with expansion (if necessary) of its parameters. The expanded innermost template is then passed to the next outer template, which is expanded. That means a template gets expanded parameters and cannot determine where they came from. Except that extremely dubious and fragile methods exist to parse the wikitext for the whole page and guess which parameter is wanted.Johnuniq (talk)01:17, 15 February 2026 (UTC)[reply]
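A minimal sketch of that expansion order, using made-up templates (Template:Inner and Template:Outer are hypothetical, purely for illustration):

```wikitext
<!-- Suppose Template:Inner contains:  {{{para1}}}-{{{para2}}}     -->
<!-- and Template:Outer contains:      Outer received "{{{1}}}"    -->

{{Outer| {{Inner |para1=some |para2=thing}} }}

<!-- Inner is expanded first, producing the plain string "some-thing".
     Outer then sees only that string as its parameter {{{1}}};
     para1 and para2 are no longer visible to it. -->
```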

It should not be possible to make generic section headers in Village Pump discussions

[edit]

Occasionally editors will start a discussion on one of the village pump subpages and create a subheader with a title like "Discussion" or "Survey", which inevitably creates navigation problems when there end up being multiple identical subheaders on the page under different discussions. It should just not be technically possible to create a generic subheader on these pages. If someone tries to create one, they should be prevented from saving until they change it to a unique, subject-specific subheader (like "Discussion (section headers)").BD2412T19:00, 11 February 2026 (UTC)[reply]

Yes please. Also onWP:ANI (and I supposeWP:AN). —Cryptic01:17, 12 February 2026 (UTC)[reply]
I completely agree with those. I can imagine there are other messageboards where this would make sense.BD2412T01:19, 12 February 2026 (UTC)[reply]
Also agree. The section headers should be descriptive. This also relates to wishW311: Do not fully archive unsolved issues on Talk pages, albeit another idea than what's suggested in the image there would be needed for meta pages like VP that get lots of threads; the bigger problem with that is that here threads aren't marked as 'solved' or at least as 'issues' or 'nonissues' (e.g. Tech News posts aren't issues). I think the solution would be to add a sentence about this to the header, asking for descriptive headers and having users edit headers when they're not descriptive. I edited a few section headers atd:Wikidata:Bot requests that weren't descriptive.Prototyperspective (talk)11:33, 12 February 2026 (UTC)[reply]
Personally, I would prefer having a permalink icon next to the heading that provides easy access to a unique link to the heading, so users won't have to generate unique headings on their own. The unique(-ish) ID is already generated by the infrastructure underlying the reply tools feature; there just needs to be a user interface to expose it. (I have my ownscript to copy comment and heading links to the clipboard; other users have written similar scripts.)isaacl (talk)02:00, 12 February 2026 (UTC)[reply]
This prompted me to create auser script...sapphaline (talk)22:09, 15 February 2026 (UTC)[reply]
Ideally section-based editing would be revised to support these IDs as well. However I don't know the practical feasibility of implementing this change.isaacl (talk)02:02, 12 February 2026 (UTC)[reply]
@BD2412, please post this request toWikipedia:Edit filter/Requested.--Ahecht (TALK
PAGE
)
14:52, 12 February 2026 (UTC)[reply]

Sandbox link no longer red

[edit]
Tracked inPhabricator
Task T417372

The sandbox link in the personal toolbar is no longer red even if it doesn't exist, in Vector (both kinds) and Monobook. It's still red in Timeless. The classnew is added to<li> not<a> soa.new is not being applied.Nardog (talk)02:35, 12 February 2026 (UTC)[reply]
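In CSS terms, the mismatch looks roughly like this (selectors follow Nardog's description; the color value is just a placeholder, not the actual skin value):

```css
/* The redlink rule targets the anchor element: */
a.new { color: #d73333; }

/* But the class is currently emitted on the list item, e.g.
   <li class="new"><a href="...">Sandbox</a></li>,
   which a.new does not match. A descendant selector would: */
li.new a { color: #d73333; }
```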

This is filed asT408968.Matma Rextalk02:54, 12 February 2026 (UTC)[reply]
No, it just became blue in Vector legacy (and I assume Monobook).Nardog (talk)05:59, 12 February 2026 (UTC)[reply]
It's presently blue in MonoBook. I don't recall noticing it when it was last shown as red, except when it was first added some years ago. --Redrose64 🌹 (talk)22:40, 12 February 2026 (UTC)[reply]
In Vector-2022 this is intentional. In other skins, it's a regression:phab:T417372. –Ammarpad (talk)11:44, 13 February 2026 (UTC)[reply]

Broken edit filter view/edit interface

[edit]

Perphab:T413542, a task was created to use OOUI in the edit filter interface, but there's a major side effect: the view is messed up on desktop (laptop/computer), and the Ace editor is completely broken in mobile/desktop view (on iOS/Android). I am notifying the community about this error (which has probably affected all wikis), which should be fixed immediately.Codename Noreste (talkcontribs)20:38, 12 February 2026 (UTC)[reply]

ORES score

[edit]

Movingthis here as I may get a better answer. The question is about AntiVandal. In the settings page, there's a setting for ORES score.I'm not understanding how that setting is supposed to work. I want it to mimic "likely bad faith" in Recent Changes, but it asks for a decimal? So what do I do if I want that behaviour?TheTechie[she/they] |talk?06:36, 13 February 2026 (UTC)[reply]

@TheTechie, I think you just set it to a low value, like 0.1. ORES has two values for edits: "damaging" and "goodfaith". I assume this is the goodfaith one. — Qwerfjkltalk12:15, 13 February 2026 (UTC)[reply]
I would guess it uses the same numbers as in the configuration here:[13] but I'm still not sure how a single number can match that kind of configuration.Matma Rextalk17:32, 13 February 2026 (UTC)[reply]

What's wrong with DiscussionTools' markup?

[edit]

Why does it use<p> for spacing and<em> for italics instead of standard{{pb}} and''?sapphaline (talk)07:56, 13 February 2026 (UTC)[reply]

{{pb}} doesn't work across all wikis. (that's true, though irrelevant as DT didn't insert it) –SD0001 (talk)08:39, 13 February 2026 (UTC)[reply]
What is this "p tag for spacing"? That is html for paragraph...Gryllida09:26, 13 February 2026 (UTC)[reply]
There are no paragraphs on Wikipedia's talk pages. Well, thereshould be no paragraphs; DiscussionTools breaks this convention.sapphaline (talk)09:40, 13 February 2026 (UTC)[reply]
It's fine to use paragraph elements appropriately on any page. It's unnecessary when writing paragraphs that aren't embedded within other elements, as the MediaWiki parser will parse newline-separated wikitext as separate paragraphs, but can be used when embedding paragraphs within other elements such as a list (seeWikipedia:Manual of Style/Accessibility § Multiple paragraphs within list items). The{{pb}} template is easier for most people to use, since it doesn't require a closing tag, but it also is less semantic, as it adds a visual vertical break but not a logical paragraph.isaacl (talk)17:15, 13 February 2026 (UTC)[reply]
The<p> tag doesn't require a closing tag, either. It's implicitly closed by the next<p> tag, and by the closing tag of any block-level element that encloses it. It's also implicitly closed by the opening tag of any block-level element that you're trying to nest inside the<p>...</p> - in that respect it's unique among HTML elements. --Redrose64 🌹 (talk)11:32, 14 February 2026 (UTC)[reply]
Yes, I am aware of this behaviour within HTML 5. I was echoing the guidance atHelp:HTML in wikitext § p, since technically the wikitext parser could impose additional constraints. However, I missed the paragraph at the end of that section where it said it's not necessary on Wikipedia.isaacl (talk)03:45, 15 February 2026 (UTC)[reply]
Various HTML elements are supported within wikitext. SeeHelp:HTML in wikitext § Elements for more details.isaacl (talk)17:20, 13 February 2026 (UTC)[reply]
HouseBlaster used<em> to denoteemphasized content, which is exactlyhow it should be used.NguoiDungKhongDinhDanh09:56, 13 February 2026 (UTC)[reply]
Are you sure the author of that comment didn't enter the<p> and<em> themselves in the wikitext? In my experience, the tool itself does multiple paragraphs by using multiple colon-indented lines, not<p> tags. As in this comment, for example.
It even does that when it results inWP:LISTGAPs when replying to a comment that already has bullet-indented comments.Anomie13:10, 13 February 2026 (UTC)[reply]
That edit by HouseBlaster looks fine to me. I see no evidence that the reply tool caused any problems. Now if we could get the overzealous colon insertion described atT251633 fixed, that would fixan actual problem (second example,third example). –Jonesey95 (talk)14:18, 13 February 2026 (UTC)[reply]
HouseBlaster's edit was made using the source mode of DiscussionTools (you can, kind of awkwardly, tell by usinguselang=qqx toexpose all the hidden tags on that revision, and seeing that it has thediscussiontools-source tag), so this is indeed a case of them deliberately using HTML tags rather than wikitext.DLynch (WMF) (talk)21:52, 13 February 2026 (UTC)[reply]
Do not ever modify someone else's comment as you didhere. Those were inserted by the editors-plural, not by DT, and were used deliberately.Izno (talk)15:55, 13 February 2026 (UTC)[reply]
@Sapphaline: About sixty HTML5 elements may be used within Wikitext. As I write this, the list ishere. The element names are delimited by apostrophes; note that some are listed more than once. Sometimes, using these can produce a "cleaner" rendered page than Wikimarkup. For instance, if you have a list, it is possible for one of the items of that list to contain a sublist; but in Wikitext, such a sublist must be the last content in that list item. If you want text to appear at the level of the outer list, but after the inner list, you need to use HTML thus:
  • Nested list
Original list item is still open, so my sig that follows is part of the post beginningAbout sixty HTML5 elements, and not divorced into a separate item. --Redrose64 🌹 (talk)23:58, 13 February 2026 (UTC)[reply]
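Redrose64's sublist point can be sketched like this (hypothetical markup, not from any article):

```wikitext
<!-- Pure wikimarkup: text after the sublist cannot rejoin the outer item -->
* Outer item
** Sublist entry
* This becomes a NEW outer item, not a continuation of the first

<!-- With HTML, the outer <li> stays open across the sublist -->
<ul>
<li>Outer item
<ul><li>Sublist entry</li></ul>
This text is still inside the first outer item, after the sublist.</li>
</ul>
```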

IPA consonant audio chart bugging

[edit]

the templates for the co-articulated and the non-pulmonic consonants overhere don't format correctly, but they were fine the other day. Here's an example of the page on Jan 24 on thewayback machineDomTheNightShiftEditor (talk)11:24, 13 February 2026 (UTC)[reply]

Sorry, fixed.Nardog (talk)12:00, 13 February 2026 (UTC)[reply]
damn, already? Thanks, really appreciate itDomTheNightShiftEditor (talk)12:03, 13 February 2026 (UTC)[reply]

Preview warning doesn't appear

[edit]

Hi! I don't know whether I should put this here or in the Teahouse, but I was about to remove the deprecated parameter "nationality" from the infobox ofFernán Mirás and for some reason the preview warning doesn't appear, even though the parameter hasn't been removed. It's not an issue with my browser, since on the other pages I removed the parameter from, the warning appeared. I removed the space between the infobox and the template above, thinking it would somehow help solve the problem, but nothing changed. Thanks,Bloomingbyungchan (talk)15:36, 13 February 2026 (UTC)[reply]

The|nationality= parameter in that infobox is blank, so it should not show a warning. It was erroneously showing a warning before, butthat error was fixed yesterday. –Jonesey95 (talk)15:41, 13 February 2026 (UTC)[reply]
Thanks, I wasn't aware that it was an error; I thought that the warning was supposed to appear regardless of whether the parameter is blank or not.Bloomingbyungchan (talk)15:54, 13 February 2026 (UTC)[reply]

Odd image layout problem

[edit]

A reader asked me about a layout problem they're seeing with an article I currently have atWP:FAC. I suspect the issue is just that they're using an unusually wide window, but I would appreciate suggestions atTalk:Carlisle & Finch#Whitespace if there's some way I can improve on what I'm doing now.RoySmith(talk)18:03, 13 February 2026 (UTC)[reply]

There is a{{clear}} template at the bottom of the Modern Lights section. If the images go further than the text the next section won't start until the images have been displayed. This results in the white space the other editor is seeing. You can remove the template, but then the images will flow down into the Navigation Beacons section unless you move one of the images elsewhere. --LCUActivelyDisinterested«@» °∆t°19:37, 13 February 2026 (UTC)[reply]

No infobox for "Corpus Inscriptionum X" exists

[edit]

No infobox for "Corpus Inscriptionum..." exists, and one is very much needed. We have several such articles. I'll place similar notes there and direct people to discuss it here.

Anyone knowledgeable who can code?Arminden (talk)16:46, 22 June 2025 (UTC)[reply]

"Infobox language" comes close to what's needed, but using it is an improvisation and not really suitable (will try to use it for now atCorpus Inscriptionum Iudaeae/Palaestinae).Arminden (talk)17:05, 22 June 2025 (UTC)[reply]
Conclusion from improvising atCorpus Inscriptionum Iudaeae/Palaestinae: there are at least 3 serious problems:
  • the title appears as the lowest-rung language,
  • only one lang. family can be displayed, and
  • the unneeded ISO line can't be removed.
Arminden (talk)17:47, 22 June 2025 (UTC)[reply]
@Arminden Can you explain why it's needed? —Opecuted (talk)14:33, 11 February 2026 (UTC)[reply]
Hi,Opecuted. Sorry, but did you read what I wrote? There are some 8 different articles called "Corpus Inscriptionum XY"; they all need an infobox, but none exists. We areforced to use "Infobox language", but it does NOT serve the purpose well - see above for only SOME of the inadequacies deriving from this improvisation. For our purposes on Wiki, acorpus isa collection of inscriptions, either in one language, or from one geopolitical region, including inscriptions in several languages, so very far froma language as such.Arminden (talk)16:11, 11 February 2026 (UTC)[reply]
There are articles on (or redirects to, plus 1 red link to):
Arminden (talk)16:31, 11 February 2026 (UTC)[reply]
This sounds like you WANT an infobox. Articles never NEED an infobox. There is even a considerable “anti-infobox” section of our community that thinks we use infoboxes way too often and that infoboxes should be removed from lots of articles. —TheDJ (talkcontribs)08:54, 14 February 2026 (UTC)[reply]
Moved fromTalk:Corpus Inscriptionum Latinarum § No infobox for "Corpus Inscriptionum X" exists
 –moved here for better visibility —Opecuted (talk) 05:42, 14 February 2026 (UTC)
Being a series of books,{{Infobox book series}} comes to mind, though there's no parameter for the region and era of the original inscriptions (the existing "country" and "publication date" parameters don't seem appropriate). If you make a list of parameters that the infobox should support, then it wouldn't be too difficult to make one. –Scyrme (talk)06:20, 14 February 2026 (UTC)[reply]
The question remains unanswered by the OP: why do they all need an infobox?Cinderella157 (talk)09:01, 14 February 2026 (UTC)[reply]
@Arminden: forgot to ping —Opecuted (talk)12:02, 14 February 2026 (UTC)[reply]
If an infobox is needed for these articles, Scyrme is on the right track:{{Infobox book series}} should fit the need. Each article is about a series (corpus) of books. –Jonesey95 (talk)13:11, 14 February 2026 (UTC)[reply]
It's more accurate to say each article is about a series of books about a corpus of inscriptions; my understanding is they also include facsimiles of the original inscriptions. I assume they want an infobox that can handle including information about both the books themselves (title, editors, number of volumes, etc.) and the corpus of inscriptions which the books reproduce (era, region, languages). –Scyrme (talk)16:22, 14 February 2026 (UTC)[reply]
Probably more the corpus than the book, but yes.
I find it hugely useful to cross-reference using Wikilinks etc. The inscription collections are in part available online and offer very helpful context to the historical phenomena and sites discussed in individual articles. When this is not the user's main interest that day, an overview in the shape of an infobox, sometimes with links to Google Books or the dedicated website, is just perfect. Without, it takes much longer and I myself sometimes give up and lose much of the context info.Arminden (talk)16:37, 14 February 2026 (UTC)[reply]
@Arminden: It would be easier to fulfil your request if you provided a full list of parameters which it should have. What information should the infobox be capable of displaying?
Without a list it's difficult to make a new template or determine whether an existing template already has all the needed parameters as well as whether a new template would be warranted if one doesn't already exist (there may be other solutions besides an infobox, depending on what's needed). –Scyrme (talk)17:22, 14 February 2026 (UTC)[reply]
HiScyrme, and thank you!
Now that I had to compare all the "Corpus..." pages, it slowly sunk in that yes, for the one I'm interested in,Corpus Inscriptionum Iudaeae/Palaestinae (CIIP), it would be great to have an infobox, but for the other pages this hardly applies. CIIP is a bit odd in that it covers an unusually large number of languages from more than one language family, and it's limited in time. So I guess it doesn't qualify?
If nevertheless possible, for CIIP itself I would propose (pls compare with what's there already):
  • Region
  • Period
  • Language family 1
    • Language 1
      • Dialect 1
      • Dialect 2
      • Dialect 3
      • ............
    • Language 2
    • ............
AND SO FORTH. Then:
  • Volumes (seeIndex there)
    • Vol. I: ...
    • Vol. II: ...
    • ..........
With details added freely, in Italics or straight, as you see them in the "Index" section.
At the very least, can you please make the title disappear from the bottom of the "Language families" list (under Safaitic)? Thank you!Arminden (talk)20:39, 14 February 2026 (UTC)[reply]

Strange section header appearance

[edit]

I was reading theSwingin' (John Anderson song) article on my iPhone (Vector 2022 skin) and the “Other versions” section header is shown one character per line. Any ideas for troubleshooting this? It does not appear this way when I look at the article using my laptop. Thanks,28bytes (talk)13:29, 14 February 2026 (UTC)[reply]

I have seen this issue before but honestly I can't reproduce it now. It's flex gone wrong but there's nothing that should be causing it particular trouble in this context.Izno (talk)17:35, 14 February 2026 (UTC)[reply]
It affectedPppery (talk ·contribs) in August 2024, seeUser talk:Pppery/Archive 25#Something amiss with User:Pppery/topicons. --Redrose64 🌹 (talk)18:15, 14 February 2026 (UTC)[reply]
It's been showing up like thison cellphones since a rather recent Wiki outlook change. What the new outlook screwed up even worse isthe way edits are shown on "edit history": I don't understand anything anymore, "edit history" has become totally USELESS to me.
Back to this issue: I figured out that flipping the phone from "portrait" to "landscape" (sorry, I'm a photographer) fixes the problem A BIT.
Why don't coders stick to the "if it ain't broken, don't fix it" principle? Pleeeease do! Or test phone mode before releasing, at the VERY least! And remember: heaps of contributors are way past their spectacles-less years, along with all that implies.Arminden (talk)20:11, 14 February 2026 (UTC)[reply]
This kind of issue is about as likely to be related to choices made by software engineers at the WMF as it is to be a failure of the Apple engineers. These days, the latter is more likely worth suspicion.Izno (talk)21:35, 14 February 2026 (UTC)[reply]
I’m confused.. how do you have desktop on a mobile with width being constrained to device width ? —TheDJ (talkcontribs)11:20, 15 February 2026 (UTC)[reply]
I’m using Vector 2022 and I have “Enable responsive mode” and “Enable limited width mode” both checked in the Preferences > Appearances > Skin preferences section, if that helps.28bytes (talk)12:11, 15 February 2026 (UTC)[reply]
The enable responsive mode doesn't apply to Vector 2022.. so I am also a little confused about how you are getting this view on a mobile phone!! Is there a gadget that does this?🐸 Jdlrobson (talk)02:53, 16 February 2026 (UTC)[reply]
I don’tthink I’ve got any unusual gadgets enabled, but I rechecked that page logged out and the glitch does not appear, so it’s certainly possible it’s something in my configuration/preferences. It still looks broken when I’m logged in. (And as best as I can recall that’s the only page I’ve seen this issue occur on.)28bytes (talk)03:08, 16 February 2026 (UTC)[reply]

{{!}} Appears to mess with table cell size

[edit]

It seems like the magic word{{!}} is messing with the height of table cells in two templates I'm making (those beingFNCS result/sandbox andFNCS LAN result/sandbox). See this example:

{{{player}}} career FNCS results
TournamentResult
Official nameStart dateEnd dateSeasonTeam sizeTeam mate(s)RegionMade Grand FinalsPlacementEarningsSource(s)
FNCS: Chapter 2 Season 3July 31, 2020August 16, 2020C2S3SolosN/AEUYes{{{placement}}}th${{{earnings}}}{{{source}}}
FNCS Invitational 2022November 12, 2022November 13, 2022C3S4DuosWasn't invited

As you can see, everything works fine except that the rows are taller than necessary – they could easily be one row tall (the first result appears fine in the visual editor but has the issue in the source editor's preview). I have no idea what caused this; has anyone seen this before?Rockfighterz M (talk)23:57, 14 February 2026 (UTC)[reply]

@Rockfighterz M: There are two things to try. First, make sure that each<noinclude> tag follows on directly from the "real" template code, without any intervening spaces or newlines. Second, remove the blank lines. --Redrose64 🌹 (talk)00:08, 15 February 2026 (UTC)[reply]
I did both. That solved the problem for the former template, but only mitigated it for the latter. Thank you @Redrose64 for that!
Have I missed something in the FNCS LAN result template which leads to the problem persisting?Rockfighterz M (talk)00:49, 15 February 2026 (UTC)[reply]
The output of{{FNCS LAN result/sandbox}} can still have many blank lines when a switch doesn't produce anything but there is a newline after it. It doesn't work to simply remove the newline in the source text because cell-starting pipes must be at the start of a line. And it doesn't work to simply move the newline inside the switch because whitespace at the ends is stripped. I know ugly workarounds but not a pretty solution.PrimeHunter (talk)01:40, 15 February 2026 (UTC)[reply]
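To illustrate PrimeHunter's point, a stripped-down hypothetical fragment (not the actual sandbox code; the parameter name is made up):

```wikitext
{| class="wikitable"
|-
| Team
{{#if: {{{mate|}}} |
{{!}} {{{mate}}}
}}
|-
| Region
|}
<!-- When {{{mate|}}} is empty, the #if emits nothing, but the newlines
     around it survive, leaving a blank line inside the table that
     stretches the row. The newlines can't simply be removed, because
     the row and cell pipes must start a line, and whitespace at the
     ends of parser-function branches is stripped. -->
```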
I think I see what the issue was now. I rearranged some newlines to not have one after a switch and to have one before cell-starting pipes. It worked! I appreciate the help @Redrose64 and @PrimeHunter!Rockfighterz M (talk)09:54, 15 February 2026 (UTC)[reply]

Tool for correcting typos causes incorrect changes UK -> US English?

[edit]

I just came across a lot of edits by a user who had changed "humourous" to "humorous" in over 50 articles that were almost all about UK/Australian/Irish/etc. topics. Some (e.g.Rickrolling), but not all, had the "Use British English"/"Use Australian English" templates. The edits were done very quickly, one or two every minute, all with the same edit summary, "Correcting typos". Examples:Before_(song),The Ballad of the Drover (in this case the "correction" was actually in the title of a cited source - automated editing of citations is really problematic),Dagoretti etc.Lijil (talk)12:06, 15 February 2026 (UTC)[reply]

Is there a tool for correcting typos that this user would have used? And if so, could that tool be adjusted so it's not this easy for someone who is unaware that different varieties of English have different valid spellings? Perhaps the tool only uses a US English dictionary, and the dictionary could be expanded so that it corrects neither US nor UK spellings? Or it could apply a different dictionary depending on which language template an article has - but a lot of pages don't have any language template.Lijil (talk)12:06, 15 February 2026 (UTC)[reply]

(edit conflict)Lijil,User:Ohconfucius/EngvarB? — Qwerfjkltalk12:12, 15 February 2026 (UTC)[reply]
Lijil Um, "humorous" is the correct spelling in both US and UK English. Even though the noun is spelled "humour" in UK/Commonwealth, the adjective never has the extra "u". That's the randomness of English spelling for you. You might want to revert yourself where you've changed it.Black Kite (talk)12:17, 15 February 2026 (UTC)[reply]
Indeed. The Oxford Dictionary says: "Note that although humor is the American spelling of humour, humorous is not an American form. This word is spelled the same way in both British and American English, and the spelling humourous is regarded as an error."[14] Your complaint about the title of a cited source is[15]. The article gives the reference[16] which doesn't show the part with humourous/humorous. I found both spellings in other sources about the work so I looked for an image of the original and found[17] which says humorous. That means the editor corrected the spelling (although they may not have checked the source) and you incorrectly reverted it in[18] with a false edit summary. Please revert all your edits unless they actually quote a source which says humourous, and consult a dictionary before making mass changes of spellings in the future. And no matter how much you think your own spelling is correct, never make up a claim about what a source says.PrimeHunter (talk)13:08, 15 February 2026 (UTC)[reply]
Yes, "humourous" is an oddity (compare "humourless", which does have the "u" in UKENG) so it actually wouldn't surprise me that the incorrect spelling appears in sources, even though that's not the case here.Black Kite (talk)13:22, 15 February 2026 (UTC)[reply]
Oh no. What a humorless and deeply embarrassing thing of me to gripe about - I am so sorry! And hopefully I'll never make this particular mistake again. Luckily @MtBotany fixed my incorrect reversions, and I've apologised to the user who was actually very helpfully fixing spelling errors. Sorry everyone.Lijil (talk)21:20, 15 February 2026 (UTC)[reply]

Difficulty calling a test module

I'm trying to get started using modules, but I'm having difficulty getting a "Hello World"-type setup working. I asked ChatGPT to suggest a basic setup to help me learn, but even with its help I can't get it working. I have the following:
Module:Sandbox/Greenbreen/Toy
User:Greenbreen/Sandbox/Template:Toy
User:Greenbreen/Sandbox/Toy test
I expect that last page to show "10" under "Normal", but I get a red error:
'Script error: No such module "Sandbox/Greenbreen/Toy".'
What am I doing wrong? —Greenbreen (talk)22:07, 15 February 2026 (UTC)[reply]

Your module page has the wrong content type: it's "wikitext" rather than "Scribunto module". I've fixed that for you.Anomie22:23, 15 February 2026 (UTC)[reply]
@Greenbreen: It got the wrong type because you created it in another namespace and moved it.PrimeHunter (talk)23:02, 15 February 2026 (UTC)[reply]
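For reference, a minimal working module/template pair looks something like this. The module below is a sketch of the usual Scribunto pattern, not necessarily Greenbreen's actual code; the page must have the "Scribunto module" content model (which it gets automatically when created directly in the Module: namespace), and it returns a table of exported functions:

```lua
-- Module:Sandbox/Greenbreen/Toy (sketch; actual contents assumed)
local p = {}

-- Called from wikitext as {{#invoke:Sandbox/Greenbreen/Toy|ten}}
function p.ten(frame)
    return 10
end

return p
```

The wrapper template then just contains the `{{#invoke:Sandbox/Greenbreen/Toy|ten}}` call, so articles transclude the template and never see the module directly.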

Suggested Edits Quick Start Tips

Hello, I was directed here from the Teahouse. When opening a page from the Suggested Edits on my userpage, the Quick Start Tips scroll very quickly through their numbered suggestions. I generally consider myself to be a quick reader but adding 1 second more to the timer would be beneficial. Especially for other newcomers who might not know they can manually go back through the tips individually. They do, of course, come back around over time but just a small adjustment like that would be nice. Thank you for considering this!Itsaclarinet (talk)02:51, 16 February 2026 (UTC)[reply]

#chart vs page size

Savannah River Plant has a chart in it: [chart: "Hanford and Savannah River Site Plutonium Pr...", plutonium (kg) by fiscal year, 1947–1989; series: Weapon grade (SRS), Weapon grade (Hanford), Fuel grade (Hanford)]

{{#chart:Production Plutonium Hanford SRS (1947-1989).chart|data=Production Plutonium Hanford-SRS-1947-1989 (Corrected).tab}}

For some reason, this causes the Page Size tool to hang. Any ideas?Hawkeye7(discuss)03:45, 16 February 2026 (UTC)[reply]

It loads fine for me when I tested with page sizes standard and wide, alongside text sizes too (on Vector 2022 theme). What 'tool' are we talking about here? ---n✓h✓8(he/him)04:50, 16 February 2026 (UTC)[reply]
The Page Size tool on the Tools menu. Enabled by theWikipedia:Prosesize gadget.Hawkeye7(discuss)05:02, 16 February 2026 (UTC)[reply]
The page size tool works fine for me on Vector 2022, with no errors in console that (I believe) come from prose size. staglol ctbs (talk)05:30, 16 February 2026 (UTC)[reply]

Second infobox settlement in articles about provinces in Thailand

User:Preime TH has placed a second "Infobox settlement" directly below the original "Infobox settlement" in the articles on provinces in Thailand, carrying information about the provincial administrative organization (PAO), without realizing that an image which used to sit in the text to the left of the first infobox has been pushed down to the top of the second infobox.
This image now appears in a completely different article section.
To solve this problem with as few adjustments as possible:
Create a sub-template called "Infobox settlement/PAO" whose code is identical to "Infobox settlement", but without the part that prevents an image from being placed above it.SietsL (talk)06:54, 16 February 2026 (UTC)[reply]
