If you want to report a JavaScript error, please follow this guideline. Questions about MediaWiki in general should be posted at the MediaWiki support desk. Discussions are automatically archived after remaining inactive for 5 days.
This tends to solve most issues, including improper display of images, user preferences not loading, and old versions of pages being shown.
No, we will not use JavaScript to set focus on the search box.
This would interfere with usability, accessibility, keyboard navigation and standard forms. See task 3864. There is an accesskey property on it (defaulting to accesskey="f" in English). Logged-in users can enable the "Focus the cursor in the search bar on loading the Main Page" gadget in their preferences.
No, we will not add a spell-checker or spell-checking bot.
You can use a web browser such as Firefox, which has a spell checker. An offline spellcheck of all articles is run by Wikipedia:Typo Team/moss; human volunteers are needed to resolve potential typos.
If you changed to another skin and cannot change back, use this link.
Alternatively, you can press Tab until the "Save" button is highlighted, and press Enter. Using Mozilla Firefox also seems to solve the problem.
If an image thumbnail is not showing, try purging its image description page.
If the image is from Wikimedia Commons, you might have to purge there too. If that doesn't work, try again before doing anything else. Some ad blockers, proxies, or firewalls block URLs containing /ad/ or ending in common executable suffixes. This can cause some images or articles to not appear.
@EBlackorby-WMF Will this fix the issue I see, where searching for "Sense and Sensibility" or "Pride and Prejudice" (without the quotes) does not bring up what's expected?
@WhatamIdoing When I click the magnifying glass next to my name, then type sense and sensibility (actually, all lower case), the first suggested result is the article Sensibility. The next suggestion starts with Sensibility. I'm not at my desktop, so I can't give much more info. I can't easily make and attach a screenshot from my tablet (I don't know how). 🙂 David10244 (talk) 05:35, 7 February 2026 (UTC)
Screenshot of search results
Here's a WP:WPSHOT showing what happens when I type sense and sensibility (all lower case, as you can see) into the search bar at the top of the page.
@WhatamIdoing Yes, I get different results. My list starts with "Sensibility" (pointing to the article by that name) and the second item is for the book named "Sensibility Objectified". It's very strange that we get different results. I don't have any local scripts in common.css or anything like that. David10244 (talk) 04:00, 11 February 2026 (UTC)
@Matma Rex I don't know; I don't want to change my search preferences or skin at the moment (I'm about to go to bed). I have never changed my search preferences OR my skin. I'll try soon. Thanks. David10244 (talk) 05:57, 13 February 2026 (UTC)
As some may be aware by now, the maintainers of archive.today (and archive.is, etc.) recently injected malicious code into all archived pages in order to perform a denial-of-service attack against a person they disliked (this can be confirmed by the instructions described here). While the malware has now been removed, it is clear that archive.today can't be trusted not to do this in the future, and for the safety of our readers, these archiving services should be swiftly removed and the websites blacklisted to prevent further use.
Absolutely not. We are dependent on external sites for archiving and verification. I have in the past lobbied WMF to acquire archive.org so it can meet our needs (or set up our own version) but unless and until that happens we have to link to external sites for verification. Hawkeye7 (discuss) 01:24, 5 February 2026 (UTC)
I trust them a far sight more than someone who has now verifiably used their ownership of a domain we link some 400k times to DDOS another website on the Internet. Izno (talk) 03:39, 5 February 2026 (UTC)
Why insist on framing this as an all-or-nothing choice? Other archive sites exist, and they don't have a demonstrable history of weaponising their service. Treating this as an exceptional case isn't unreasonable.
Also, even if this were an all-or-nothing choice (which it isn't), Wikipedia's need for citations isn't more important than the security of users. Archive.today has demonstrably abused its users' trust (including Wikipedia's editors and readers) and cannot be considered safe. – Scyrme (talk) 20:46, 5 February 2026 (UTC)
Trash it completely. Archive.today has proven that it's not trustworthy as an archive source (unlike the Internet Archive) and links to it should be considered potentially malicious in nature. SilverserenC 04:58, 5 February 2026 (UTC)
"Archive.today has proven that it's not trustworthy" – there are no (known) examples of its owner tampering with archived pages. "Unlike the Internet Archive" – the Internet Archive removes archived copies regularly. sapphaline (talk) 14:56, 5 February 2026 (UTC)
"There are no (known) examples of its owner tampering with archived pages": yes, there are – see above. Injecting malicious JavaScript is tampering, visible or otherwise. If they are willing to do that, who knows when they'll decide to exploit zero-days or engage in blatant manipulation. ChildrenWillListen (🐄 talk, 🫘 contribs) 15:03, 5 February 2026 (UTC)
My point about them being trustworthy when it comes to archived copies stands. Internet Archive is way less reliable in this regard, because archived copies can always be deleted there. sapphaline (talk) 15:20, 5 February 2026 (UTC)
I'd rather have information lost than have readers encounter malicious code whenever an archived copy is visited. Also, we know nothing about the maintainer(s) of Archive.today, how they make money, or even whether they're ready to pack up their bags tomorrow and leave. They're in a jurisdiction that's politically unstable and prone to censorship. None of these problems exist with the Internet Archive. ChildrenWillListen (🐄 talk, 🫘 contribs) 15:30, 5 February 2026 (UTC)
"A jurisdiction that's politically unstable and prone to censorship" – you mean, like the United States? (I wish I were joking about my country in 2026.) Setting that aside, we shouldn't want any information lost just like that. We need a remedying/replacement process before a removal process. See my main comment below. Stefen 𝕋ower Huddle • Handiwerk 15:34, 5 February 2026 (UTC)
If this RfC passes (which would be a very unfortunate result!), megalodon.jp archives archive.today snapshots almost perfectly (the only issue is that they're zoomed out and for some reason have a 4000px width, but this is trivially fixed by unchecking some checkboxes in devtools). Maybe the WMF could arrange some deal with their operators to archive all the archive.today links we have? sapphaline (talk) 15:43, 5 February 2026 (UTC)
"How they make money" – why should we care about this? "If they're ready to pack up their bags tomorrow and leave" – archive.today has existed for nearly 14 years. There's a snowball's chance in hell that they're going to shut the site down tomorrow or in any foreseeable future. "They're in a jurisdiction that's politically unstable and prone to censorship" – how do you know? "None of these problems exist with the Internet Archive" – the US is extremely prone to censorship and political instability, plus the Internet Archive removes archived copies on any request, not just governmental ones. sapphaline (talk) 15:35, 5 February 2026 (UTC)
It is economically infeasible to hold trillions of archived pages and provide them indefinitely for free. We don't know how they're funding their project, which means we wouldn't know when that funding would dry up.
Their willingness to inject malware over a petty dispute puts their stability in disrepute. If we get in the bad graces of these maintainers, who knows what they'll be willing to do to us?
It's fairly well-known that the maintainer(s) of Archive.today live in Russia, and that the main archive storage is also hosted in Russia. Sometimes, they redirect certain IP addresses to yandex.ru, and of course, their official Wikimedia account Rotlink was created on ruwiki.
Actually, the theory is Ukraine, not Russia, and the evidence is that they provision on global edge cloud providers (such as CloudFlare - but not CloudFlare). --GreenC 15:46, 7 February 2026 (UTC)
@ChildrenWillListen makes an important point. People are bringing up that WP already has 500K links to them. What if they introduce malicious code on just some of the archived pages (for instance, because targets of their malice are more likely to access those links)? Aurodea108 (talk) 01:49, 9 February 2026 (UTC)
I can't think of an explanation for this that isn't malicious. You'd think the maintainer(s) of archive services wouldn't be stupid enough to try to get a blog removed from the internet as a petty retaliation over some alleged doxxing. — DVRTed (Talk) 05:33, 5 February 2026 (UTC)
I agree, they should be blacklisted. Should have happened a long time ago, really, because of massive copyright violation: they distribute lots of content that the copyright owners only made available behind paywalls. See WP:COPYLINK: "if you know or reasonably suspect that an external Web site is carrying a work in violation of copyright, do not link to that copy of the work". — Chrisahn (talk) 11:11, 5 February 2026 (UTC)
I fully appreciate why this needs dealing with, but I am concerned about "the rub". We could end up harming verifiability on a *lot* of our content. Of course, we can leave citations in place without the archive.today links, but without the ready verification of having an article to load, I fear some useful article text could end up being removed by editors who decide they can't trust the listed source due to inaccessibility (typically those with little wiki experience). In cases where the paywalled content still exists, removal would be less likely, but in cases where the original link is permanently dead, it's not available on Archive.org, and we only have archive.today... yikes.
Deprecation makes sense as long as that doesn't include immediate removal before any replacement remedy is pursued. Any process that intervenes with using archive.today should encourage editors to directly replace these sources with archive.org links or newspaper.com clip links, or locate alternate sources. I realize this is generally what deprecation means, but if the intervention can be clear and help the editor find an alternative, I would be more relieved of the ramifications of ditching this source. Stefen 𝕋ower Huddle • Handiwerk 14:51, 5 February 2026 (UTC)
Well said. I support blacklisting as long as it is accompanied by an effort to find alternative solutions instead of just plain removal. sawyer * any/all * talk 15:33, 5 February 2026 (UTC)
Agree with those above arguing that archive.today is simply not trustworthy enough to be sending our readers to. Adding malicious code to cause a DDoS on another website is an absurd thing for a website maintainer to do, and we shouldn't be facilitating their behaviour by sending more users to their site, nor simply hoping that they won't do something worse that targets our readers. SamWalton (talk) 15:03, 5 February 2026 (UTC)
Yes, I think so. As you've said above, they can't be trusted to not do that again in the future, so I would support blacklisting their links. Some1 (talk) 01:17, 6 February 2026 (UTC)
I think there needs to be an official RfC on this to get more opinions. Personally I think this shows that archive.today can't be trusted (if they do this over something rather petty, what's stopping them from putting more malicious code into archived pages, not just the captcha?), and it should be at least deprecated - but only if the links can be replaced with a different archive without loss of information. Suntooooth, it/he (talk | contribs) 20:15, 5 February 2026 (UTC)
They blatantly violated Wikipedia:External links#EL3, but you think we need to have a long discussion about whether malware-serving websites are sometimes okay?
If we're going to have an RFC, let's blacklist now and focus the RFC discussion on how to cope rather than on whether we should provide links to malware-serving websites. WhatamIdoing (talk) 22:16, 5 February 2026 (UTC)
I think that formally gaining consensus is important when it affects as many links as this does, especially since even in this thread it hasn't been unanimous. If this affected a much lower number of links (think a couple of orders of magnitude lower) and links that would be easily replaced or removed, then I wouldn't be suggesting a full RfC. Suntooooth, it/he (talk | contribs) 00:40, 6 February 2026 (UTC)
I wonder if there's a way to add a warning in the articles. Something like [replace archive link] (and a category showing affected articles) might encourage people to start the process of finding other sources. It might be possible to do this automagically through the CS1|2 templates. I'm assuming that would catch most of them. WhatamIdoing (talk) 02:44, 6 February 2026 (UTC)
It is in the realm of the feasible just to turn off the display of archive links that go via archive.today/is in CS1/2. The real question is whether we can get everything or whether we just have to start by vanishing the big quantity of links. Izno (talk) 03:59, 6 February 2026 (UTC)
Removing archive links (even by just turning them off, rather than fully removing them) from this number of articles would be a huge hit to verifiability. If consensus is gained to remove archive.today links, there needs to be a mechanism for replacing them with other archives. Suntooooth, it/he (talk | contribs) 16:05, 6 February 2026 (UTC)
"Would be a huge hit to verifiability" – I think turning their display off is a fair compromise on the road to removal and replacement. I do agree that half a million pages or links is a big number. A maintenance category would naturally be set up so we can actually find these quicker.
Good idea. Also, considering the RfC above, isn't it possible that many of the archive.today links on the encyclopedia aren't actually necessary? As in, they were added superfluously by the website operators themselves? Perhaps the true scale of the problem is much smaller, and we could vibe code a quick tool to check some of the links. audiodude (talk) 04:50, 8 February 2026 (UTC)
That's not a problem. We won't remove them (at least not for a while). Blacklisting means that no new links to these domains can be added. It doesn't mean existing links have to be removed. — Chrisahn (talk) 22:29, 6 February 2026 (UTC)
Every day, links are added because there is no other option. They literally are the only source for a large set of web pages on the Internet. This is why there are so many links. It's the only option. It is pragmatic. You and some others appear concerned about what is best for Wikipedia, but you don't seem concerned about the consequences, which are very real, immediate and large-scale - it would cause significant damage to Wikipedia. Unlike the good feelings about punishing Archive.today for some transgression. What is more important? --GreenC 16:04, 7 February 2026 (UTC)
I've added archive.org URLs to lots of articles. In case a page hasn't been archived by them yet, I click "Save Page Now". I don't recall any significant problems, and I don't recall a URL that couldn't be archived. I'd say such URLs are pretty rare. — Chrisahn (talk) 22:50, 7 February 2026 (UTC)
Agree that there should be an RfC. The implications of the discussion and potential actions taken by consensus will have far-reaching effects across the encyclopedia. An additional comment as a technical editor, not one who edits a lot of articles: if archive.today provides a copy of a paywalled or linkrotted news article, but the article was actually published by the news organization in question at some point, what does it matter if the archived copy isn't available? The citations are still technically valid, right? Does Wikipedia remove citations to books that are out of print? Does information exist if it's not on the internet lol? audiodude (talk) 04:47, 8 February 2026 (UTC)
Yes, you're right that a Wikipedia:Convenience link (to the original and/or an archive) is not required, if the news article is archived in some place that is accessible to the general public. For example, it's traditional for ordinary print newspapers to keep a copy of all their old issues, and many will either let the general public take a look or send the older ones to a local library or historical society. However, not all publications have a print edition, and some news outlets put more information/additional articles on their website. I have, for example, been disappointed that the paper copy of The Atlantic has fewer articles than their website. A web-only source needs a working URL, because sources must be WP:Published#Accessible. WhatamIdoing (talk) 06:04, 8 February 2026 (UTC)
Has anyone linked to the circus that occurred when archive.today first appeared? As I recall, they used extremely advanced (for the time) techniques to attack Wikipedia by edit warring their links into pages. The views that we have to keep using them miss the big picture: these guys are obviously up to something bad. The infrastructure and operational maintenance to support their system would cost a vast amount, and someone is planning to get a return on that investment eventually. It's much more effort than some libertarian philanthropist would support. Johnuniq (talk) 02:47, 6 February 2026 (UTC)
Technical question: How would blacklisting work? If I understand correctly, the idea is that blacklisting prohibits adding new archive.today (and archive.is etc.) links, but we'll keep the existing ones for now. Specifically: If I edit an article and try to add a new archive.today link, I get an error message and can't save my changes. But if I edit an article (or section) that already contains one or more archive.today links and I make unrelated changes, there's no such error message. Is that correct? Can we make that work? A "dumb" edit filter (that simply checks whether such links occur anywhere in the text I'm trying to save) won't work – it won't let me save unrelated changes. I can think of a few ways to implement a smarter filter, but I don't know if edit filters have access to the required information, or how efficient smarter checks would be. — Chrisahn (talk) 09:18, 6 February 2026 (UTC)
@Chrisahn Yes, this is how the built-in tools like MediaWiki:Spam-blacklist already work, and edit filters can also be made to work that way. They forbid adding new links to blacklisted domains, but if a link is already present in the article, it can be edited without tripping the blacklist. There are still some scenarios that cause problems (e.g. if vandalism deletes a citation that links to archive.today, you won't be able to revert it without removing those links first), but that hasn't stopped additions to the blacklist before. Matma Rex talk 14:41, 6 February 2026 (UTC)
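The behavior Matma Rex describes – an edit only trips the blacklist when it increases the number of links to a listed domain – can be sketched roughly as below. This is a simplification for illustration, not the actual SpamBlacklist extension logic (which compares the parsed external-link sets of the two revisions); the regexes and domain list are illustrative only.

```python
import re

# Illustrative pattern set; a real blacklist entry would be maintained on-wiki.
BLACKLIST = [r"archive\.(today|is|ph)"]

# Rough external-link matcher for wikitext (illustrative, not exhaustive).
LINK_RE = re.compile(r'https?://[^\s\]|}<>"]+')

def extract_links(wikitext):
    """Return all external links found in a chunk of wikitext."""
    return LINK_RE.findall(wikitext)

def edit_trips_blacklist(old_text, new_text, patterns=BLACKLIST):
    """True only if the edit *adds* a link matching a blacklisted domain.
    Links already present in the old revision never block the save."""
    old_links = extract_links(old_text)
    new_links = extract_links(new_text)
    for pat in patterns:
        rx = re.compile(pat)
        old_count = sum(1 for link in old_links if rx.search(link))
        new_count = sum(1 for link in new_links if rx.search(link))
        if new_count > old_count:
            return True
    return False
```

Under this model, an unrelated edit to a section that already contains an archive.today link saves fine, while adding a second such link is rejected – which also explains the revert problem mentioned above: restoring a vandalized citation re-adds the link, so the count goes up and the save is blocked.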
archive.today is just a very useful website that can be used if archive.org is not helping.
The hosters of archive.today are not as reliable as the people who host archive.org. However, we don't know of any case where a snapshot was falsified, do we?
In this case, they "just" abused their visitors for a DDoS attack. Of course we should not support this. But this does not mean we definitely have to block the website.
By the way, blacklisting (via WP:SBL) without a previous removal of all links is not a good option, because this leads to several problems:
Moving parts of pages to other pages (e.g. archiving) is no longer possible if the moved text contains a link.
Modifying an existing blacklisted URL (e.g. link fixing) might trigger the SBL.
It's not possible to add blacklisted links to a discussion, which is challenging for some less technical users.
In my opinion, a technical solution could be:
replace all links with an (unsubstituted) template (yes, this is a lot of work, but it could be partially automated);
if any problem with the domain occurs again, modify the template so that it no longer links to archive.today (and .is and all the other domains);
when the problem is solved, revert the template change;
if anybody adds a link to archive.today without the template, a bot could try to fix that afterwards, and the bot could write a message on the linker's talk page asking them to check whether they could find something better.
With a solution like this we would still have the benefits of the archived versions. But we could remove all links quickly and at once, if needed.
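The bot step of this proposal – rewriting bare external links into template transclusions so that one later template edit can delink everything at once – could look roughly like this. The template name {{Archive today link}} is hypothetical, and the regex is a simplified illustration that only handles plain bracketed external links, not links already inside citation templates.

```python
import re

# Hypothetical template name, used only for illustration.
TEMPLATE = "Archive today link"

# Matches bare [https://archive.today/... optional label] external links.
# Simplified: real archive.today URLs span several mirror domains.
BARE_LINK = re.compile(
    r"\[(https?://archive\.(?:today|is|ph)/\S+)(?:\s+([^\]]+))?\]"
)

def wrap_bare_links(wikitext):
    """Replace bare archive.today external links with a template call,
    so the rendering of all of them can later be toggled by editing
    the template once."""
    def repl(match):
        url = match.group(1)
        label = match.group(2) or "Archived copy"
        return f"{{{{{TEMPLATE}|url={url}|label={label}}}}}"
    return BARE_LINK.sub(repl, wikitext)
```

A bot running this over new edits, combined with a talk-page notification, would keep the link inventory centralized without removing any archived content.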
A couple hundred thousand bot edits is not a good solution, either.
Trappist the monk might have some ideas about whether the citation templates could special-case these domain names for a while, while the work is done. A maintenance category, as Izno mentioned above, would also be a good idea. And even if we don't want to use the MediaWiki:Spam-blacklist quite yet, for fear that it will interfere with rearranging pages, we could implement a Special:AbuseFilter that would prevent people from adding any new ones. WhatamIdoing (talk) 21:33, 6 February 2026 (UTC)
I still advocate going back to WMF with a proposal to create our own archive. This will get us off dependence on external archive sites that we cannot control. Hawkeye7 (discuss) 22:25, 6 February 2026 (UTC)
Setting up an internet archive requires years of planning and work. I'd like us to start making tangible progress on resolving this problem today, or at least in the next week. Even if we thought that was legal and a good idea, creating our own archive isn't going to address the problem right now. WhatamIdoing (talk) 23:24, 6 February 2026 (UTC)
cs1|2 can special case archive.today (and companion domains) if/when there is a consensus to deprecate/blacklist.
One major problem with the edit filter (and SBL/BED) is that many inexperienced people who trigger a rule just don't know what that means or what they should do. We often see that people who wrote large paragraphs and failed to save on the first try just run away, although the warning said that if they are sure about what they are doing, they should just try to save again.
The filter (and SBL/BED) should be used if people intentionally (try to) spam. If they actually just want to help, then there's a risk of annoying/frustrating them. That's why -- over time -- I more and more tend to use notification bots and maintenance lists instead of the blacklist-like tools in cases where links are mostly added by non-spammers.
Might be concerning, but that's "two people have been bad people", and each should be judged on their own merits accordingly. You don't treat someone DDoSing another person off the Internet as a stable individual meriting half a million links from the most popular source of collated information on the Internet (and that's ignoring the prior dramas, as linked above). Izno (talk) 21:37, 6 February 2026 (UTC)
Doxing? Hardly. Quote: "While we may not have a face and a name, at this point we have a pretty good idea of how the site is run: it's a one-person labor of love, operated by a Russian of considerable talent and access to Europe."[1] — Chrisahn (talk) 22:40, 6 February 2026 (UTC)
Another aspect: Depending on how the FBI case against archive.today goes, there's a chance that these ca. 500,000 archive links in our articles will become useless in the not too distant future. — Chrisahn (talk) 01:05, 7 February 2026 (UTC)
Prior to about 2015, the Wayback Machine did not systematically archive all links on Wikipedia. There are huge gaps prior to that date. Between 2012(?) and 2015, Archive.today systematically archived Wikipedia. Thus many dead links are only archived on Archive.today. The one time Archive.today got blacklisted, a long time ago, it didn't last long. People reversed it. Why? Because Archive.today is incredibly useful. It's that simple. It's pragmatic. They have the goods nobody else does. This incident with the CAPTCHA will soon be forgotten as inconsequential to Wikipedia. But blocking Archive.today will cause daily conflict with editors who need to use it because there is no other option. --GreenC 17:11, 7 February 2026 (UTC)
Your wish to punish Archive.today over this silly incident (which they undid) would cause widespread and deep collateral damage to Wikipedia. --GreenC 18:32, 7 February 2026 (UTC)
I think that would depend on how it's implemented. First, just to remind everyone, WP:Glossary#verifiable means someone can find a reliable source. It does not mean that the Wikipedia article already has a little blue clicky number (that's WP:Glossary#cited) or that the ref contains a functional URL. This means that if the Wikipedia article says "The Sun is really big", and there's no cited source, or the cited source is a dead URL, then that sentence is still verifiable, because an editor (or reader) could look up Alice Expert's book, The Sun is Really Big, and learn that the material in the Wikipedia article matches the material published in at least one reliable source. Removing archive links therefore doesn't (usually) destroy verifiability (unless that was the only source in the world that ever said that, and the original is a dead URL – in which case, are we really sure we should be saying that now?); it just makes verifying the information take more work.
Having looked at a too-small sample size (= 4 articles) with these links, I think that some of these links are unnecessary and others deserve a {{better source needed}} tag no matter what the archive status is. I therefore think that checking and replacing sources might be a good thing, overall. WhatamIdoing (talk) 19:20, 7 February 2026 (UTC)
A citation to a book is always verifiable. So are the NYT and other news outlets. Everything else is online-only, which is most of it. Without an archive, a dead website is unverifiable. Maybe wait 10 years for an archive to surface, but eventually it's gone. You might find other sources, but who is going to do that for half a million links? Certainly not the few people engaged in these conversations. Most people don't even verify sources, much less try to replace them with other sources. People are busy creating new citations with future dead links that nobody fixes. The debt continues to grow, and one of our best tools for dealing with it is now being threatened with removal. --GreenC 19:44, 7 February 2026 (UTC)
Please look at the definitions I linked. We don't care whether "a dead website is unverifiable". (It's really none of our business whether people can double-check that some other website's content was taken from a reliable source vs is an original work.)
We care whether the content in the Wikipedia article is verifiable – and we care whether it's verifiable in any reliable source, not just the cited one.
Yes, you're right: half a million sources is a problem, and the debt continues to grow. To stop the bleeding, I think we should deprecate/discourage future additions of this source. To get the existing ones checked, I think we should have a tracking category, and maybe even a way to make this a more mobile-friendly and/or newcomer-friendly task. Based on my experience the other day, we're looking at about five minutes per source. Also based on my experience the other day, half the sources are unreliable ones anyway (at least for medical content). WhatamIdoing (talk) 19:55, 7 February 2026 (UTC)
If Archive.today actually goes offline, then we have another problem. But treating it like it's already offline by adding {{dead link}} templates is backwards, since we don't know the future. The assumption that there are alternatives to Archive.today is a mistake. Most Archive.today links are added because Wayback can't do it. There are really only two games in town, and we are eliminating one. And you can't go back and fix it: either you save the web page before it dies or it's gone forever. Archive.today has a monopoly on many archived pages, and for many citations it's the only game in town – there are no better sources. Most people don't read these forums, but if you start blocking or hiding links, there will be many editors complaining. It's a major resource for our community that has a large following. Nobody has really been notified about the RfC. --GreenC 21:00, 7 February 2026 (UTC)
Hoisting a comment by @Sapphaline to the top level:
"megalodon.jp archives archive.today snapshots almost perfectly (the only issue is that they're zoomed out and for some reason have a 4000px width, but this is trivially fixed by unchecking some checkboxes in devtools). Maybe WMF could arrange some deal with their operators to archive all archive.today links we have?" Aurodea108 (talk) 20:51, 15 February 2026 (UTC)
I experimented with taking this one step further by rearchiving to Wayback a megalodon archive of an archive.today archive (what a sentence...)
I think it would be useful to see lists of articles that do not include any image, maybe with a column for the linked Commons category if it exists and a column for the image(s) set on the Wikidata item if there are any. The articles could be those in a category, or especially some WikiProject list like Wikipedia:WikiProject Climate change/Popular articles or Category:High-importance science articles (the corresponding articles, not the talk pages, though). I think it's not unlikely that there is some way to do this.
Asking this in the context of c:Commons:List of science-related free media gaps – this could be useful not just for adding images where a useful, relevant, high-quality one exists for the article, but also for identifying media gaps.
You could find articles that have been tagged with Template:Image requested, but I'm not aware of any way to look for untagged articles. https://pagepile.toolforge.org/ will let you define a list of target pages, and that list can be used by other tools for various purposes, but, again, I'm not aware of any tool that would import such a list and identify missing images.
Images are one of the key things that readers want to find in a Wikipedia article. It would be nice to have more emphasis on finding and adding appropriate images. WhatamIdoing (talk) 23:59, 5 February 2026 (UTC)
Good idea – that method shows 191 pages in this query, which is something one can start with.
A way to list articles without images would probably show far more results, would be more dynamic, and could be useful in more ways. It would not rely on users adding that template, which is done relatively rarely. Additionally, having that template doesn't mean the article lacks even an image illustrating the main subject and is entirely without images (which also implies there is no image for the article in the page-preview hovercard or in the Wikipedia app).
Agree with what you said there. Also of note: only very few users know of, see, and click the Commons category linked to an article – there are often high-quality files there, but pageview stats show that few go to these pages. After creating many Commons categories, I found that most of them, over a year later, weren't even linked via the small, often overlooked {{Commons category}} somewhere in the article. One can often find images in categories that have been there for years but that nobody ever added to the article, including articles with not even one image. Prototyperspective (talk) 00:26, 6 February 2026 (UTC)
Regarding the search link that checks for templates instead of the categories: I don't know why it only shows 70 results instead of all 191.
Regarding the search link that checks via the two categories: I've looked into it further and excluded all articles that are biographies or films. Now it contains just 58 items instead of 191, and most of these are niche low-importance articles where I can't see how an image would be very useful, or they already have an image for the article's topic (as in the case of Gypsum concrete). I nevertheless added the search query to the media gaps page.
Deepcat searches sometimes time out for me; this happens with deeply nested categories, which is why it won't really work for Category:Science currently. That may also be an issue here, because not all relevant articles in that category branch have been tagged with the WikiProject template yet. Additionally, it doesn't look like one can check whether an article is in one category while its associated talk page is in another. That would be useful because the WikiProject category is only set on the talk page. There are also ways to scan for articles in a category branch that don't yet have the WikiProject template, but it's complicated and I guess barely anybody uses it (a tool for that would be great, btw). Prototyperspective (talk) 15:55, 6 February 2026 (UTC) [reply]
Petscan gets about 95% of the way there - you can ask for pages in a category that don't have "a lead image", which I think is the single image returned in the API. Pages with no images will presumably also have no lead image.Andrew Gray (talk)09:56, 6 February 2026 (UTC)[reply]
Following up on this - it seems "lead image" is defined by mw:Extension:PageImages and is a) one of the first four images in the lead section, which b) has a certain range of aspect ratios and c) is not explicitly excluded. So it is possible for an article with images to nonetheless show up as no-image here. But having said that...
It doesn't seem to be possible to do this in one step starting with a talkpage category (like importance tags), but it is possible in two steps via PagePile.
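The PageImages signal that PetScan uses can also be checked directly via the MediaWiki Action API. A minimal Python sketch, assuming the public en.wikipedia.org endpoint; the helper names are mine, but `prop=pageimages` and `piprop=name` are the real API parameters:

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def build_query_url(titles):
    """URL asking the PageImages extension for each page's lead image name."""
    params = {
        "action": "query",
        "prop": "pageimages",
        "piprop": "name",          # just the file name, no thumbnail URL
        "titles": "|".join(titles),
        "format": "json",
        "formatversion": "2",
    }
    return API + "?" + urllib.parse.urlencode(params)

def pages_without_lead_image(api_response):
    """Given a decoded API response, list the titles with no lead image."""
    pages = api_response["query"]["pages"]
    return sorted(p["title"] for p in pages if "pageimage" not in p)

# Usage (requires network; up to 50 titles per request):
#   with urllib.request.urlopen(build_query_url(["Anthropology", "Gypsum concrete"])) as r:
#       print(pages_without_lead_image(json.load(r)))
```

Note that, per the definition above, "no lead image" is not the same as "no images at all" – an article whose only images fail the ratio test still comes back empty here.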
Interesting, thanks, I didn't know about the PetScan feature that shows only articles without a lead image.
I tried to run it on Category:Science, but that's not possible because that category has too many subcategories, and even when limiting it to e.g. just 3 layers, the query shows too many results (>60,000).
I first thought the approach of that PetScan filter might not really be adequate, since it also lists articles with images – even lots of images. But looking more closely, I'm not so sure anymore: e.g. Agricultural science is listed, but its infobox image does not illustrate agricultural science; Artificial intelligence is listed despite having many images, but it has no image at the top, such as a diagram explaining AI types and/or how AI works. Articles like Anthropology also lack an image that illustrates the subject well. So maybe the issue is not with the methods but simply that there are so many articles missing images (I think the community hasn't really begun to address this systematically).
What would be the best ways to address this, taking into account these issues: prioritizing articles that lack images, using only methods that check whether there is any image at all in the article², somehow further filtering the PetScan results, or somehow extracting fields or large topic areas lacking images?
² Here's one additional way to check whether there is any image whatsoever (or animation, video, or audio) in an article: deepcategory:Science -insource:"[[File:" (82,511 articles, with incomplete results). Note: this query also shows articles with an image in the infobox, so those would need to be excluded somehow (maybe by filtering out things like .png?). One could combine this with incategory:"Commons category link is on Wikidata" to see just articles with no image but with a Commons category (2,086, so this one seems quite actionable).
"Pages with no lead image linked from Wikipedia:WikiProject Climate change/Popular articles (137/1000)" – nice query; this one seems quite actionable as well. I'll probably link it on the science-related media gaps page too, look for other similar WikiProject pages to create such a query for, and maybe extract some topics in need of illustration (note that an article with lots of images illustrating its various subtopics may not be missing much even when there is no lead image, though ideally we'd like to have one).
"Interesting query which I've found on Quarry and tweaked … identifies six "top/high" importance Science articles with no image links" – weird that it only shows 6 items. So it seems this query is not currently useful, but maybe it can be tweaked further until it is. The description says "that have no images of any sort (not even those from templates like {{unreferenced}})", so that seems to be the cause here; maybe one could exclude images in such templates. I also wonder why Research statement shows up despite there being several PDF document icons on the page(?)
The PDF icons aren't added by image links - in a template or otherwise - but by a CSS class. They're not detectable with queries against the database even if we wanted to (other than by searching for external links ending in ".pdf", which isn't practical). Excluding images included by templates isn't possible either. We've been asking the developers for an equivalent for simple links in WhatLinksHere for over two decades. And it wouldn't help anyway, since it would also exclude images in infoboxes. What would help is a list of specific files to ignore, like {{unreferenced}}'s File:Question book-new.svg. Or I can write queries for non-free/non-existent lead images by talkpage categories/wikiproject ratings/etc. Asking at WP:RAQ is the best way for such requests not to get lost; my free time and attention is very limited this time of year. — Cryptic 18:59, 8 February 2026 (UTC) [reply]
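The "list of specific files to ignore" idea could be prototyped client-side: fetch a page's image links (e.g. via the Action API's `prop=images`) and discard known template decorations before deciding the page "has" an image. A minimal sketch; the ignore set below is a small hand-picked example, not a complete or authoritative list:

```python
# Decide whether a page has any substantive image after discarding known
# decoration files added by maintenance templates. The entries here are
# illustrative examples only.
IGNORE = {
    "Question book-new.svg",   # added by {{unreferenced}}
    "Commons-logo.svg",        # added by {{Commons category}}
    "Ambox important.svg",     # various maintenance banners
}

def substantive_images(image_titles):
    """Drop ignored decoration files; keep everything else."""
    keep = []
    for title in image_titles:
        name = title.split(":", 1)[-1]   # strip a "File:" prefix if present
        if name not in IGNORE:
            keep.append(title)
    return keep

def lacks_images(image_titles):
    """True if nothing but decorations links into the page."""
    return not substantive_images(image_titles)
```

The hard part, as noted above, is maintaining that set: every stub, cleanup, and infobox template with a built-in icon would need an entry.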
"For mid to large Wikipedias, shorter articles are less likely to have an image"Is there any code issue or wish or project page about enabling a functional Quarry query for seeing articles without any images via some list that specifies common icons used in templates (like the CCBY icon etc)?
So images in infoboxes are taken into account (via imagelinks) in that query? (If they aren't, maybe one could take the results from the query and feed them into a second tool that checks for images in templates.) "Queries for non-free/non-existent lead images" … that's a bit confusing to me – weren't you talking earlier about the Quarry query, which checks not only for lead images but for any images in the article? I would again find that more useful than scanning just for articles without lead images.
Can someone with the relevant permissions and technical knowledge please revert Template:GeoTemplate back to a state where English-language GeoHack works? The error seems to have been introduced yesterday. Tæppa (talk) 13:28, 6 February 2026 (UTC) [reply]
Pinging @Trappist the monk: who edited w:Module:Lang the day before (some of) the GeoHack language pages broke. My original thought was that the use of <br />{{lang|ar|خَرائط فلسطين المَفتوحة|rtl=yes}} broke the page, but <br />{{lang|he|עמוד ענן|rtl=yes}} has been there a while.
The change I made at Module:Lang was for {{transliteration}}. At this writing, there are ten {{lang}} templates in {{GeoTemplate}}. All ten are used for presentation and have nothing to do with the &language= query portion of the GeoHack URL.
For the record, all of the above links (Arabic through Russian) are now "broken", so it probably has nothing to do with the templates that the Wikipedias have for each language. Tæppa (talk) 22:02, 8 February 2026 (UTC) [reply]
As someone who reads a lot of geographic articles, I've found this GeoHack glitch annoying for the last couple of days. I think the correct place to file a bug report is here. In case it helps with debugging: I notice that, upon opening GeoHack, the table of links to Google Maps, OSM, and other services very briefly displays but disappears after a split second. If I use Firefox's "reader view", all the links are visible. HTH. ~2026-87494-1 (talk) 01:49, 9 February 2026 (UTC) [reply]
The reason GeoHack is broken seems to be related to this change: 1234316. GeoHack was depending on HTML comments to find the main content, and those comments are removed in the latest MediaWiki update. --wimmel (talk) 08:57, 10 February 2026 (UTC) [reply]
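For illustration, this is roughly how a comment-anchored scraper fails once the parser stops emitting comments. The marker names below are invented for the sketch, not necessarily GeoHack's actual anchors:

```python
import re

# Fragile approach: locate the main content between HTML comments that the
# old parser output happened to contain.
def extract_between_comments(html):
    m = re.search(r"<!--\s*start content\s*-->(.*?)<!--\s*end content\s*-->",
                  html, re.DOTALL)
    return m.group(1) if m else None

old_html = "<body><!-- start content --><table>map links</table><!-- end content --></body>"
new_html = "<body><table>map links</table></body>"  # comments stripped by the update

extract_between_comments(old_html)  # -> "<table>map links</table>"
extract_between_comments(new_html)  # -> None: the content silently disappears
```

Anchoring on a structural element (e.g. an element `id` that the skin guarantees) instead of parser comments would survive this kind of output change.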
Hi folks, not sure if this is the best place to post this but it seemed like a high-visibility spot where someone might have an answer. Feel free to move or copy my message elsewhere if you'd like.
I created the article Nova Scotia Guard on 1 July 2025. The article appears to be indexed, and is the third result on DuckDuckGo. However, it will not show up on Google at all. Even when searching "Nova Scotia Guard Wikipedia", you'll get articles it's linked to and even a category it's in, but not the article itself. I was particularly perturbed by the fact that the Grokipedia clone of the article shows up in Google, but not the one I created. I mentioned this in the Wikipedia Discord server some time ago and my results were replicated by several other users. Since then I created a redirect, edited the Wikidata item, and added more links to the article, but it hasn't changed anything.
My biggest concern here is that there may be other articles which Google is not showing in search results for one reason or another. If someone might be able to look into this I'd appreciate it. Thanks,MediaKyle (talk)16:04, 10 February 2026 (UTC)[reply]
It was marked as reviewed in September, which should allow it to be indexed, but I checked Google Search Console and for some reason Google hasn't crawled it since July, when it was noindexed. I requested a re-crawl, so hopefully it will start showing up soon. the wub "?!" 16:44, 10 February 2026 (UTC) [reply]
@MediaKyle: It hasn't been edited since 20 August 2025, when it was still noindexed. I get the impression Google is watching our edit logs and often revisits a page shortly after it has been edited, so any edit (except an unlogged null edit) may influence them. PrimeHunter (talk) 17:31, 10 February 2026 (UTC) [reply]
Thanks for the replies, I appreciate you both looking into this. I thought I edited the article the other day but I guess I did everything except that... Just made an edit. Hopefully it will show up soon and this is just an isolated incident. Cheers,MediaKyle (talk)17:49, 10 February 2026 (UTC)[reply]
Thanks for letting me know, just checked and it shows up for me now as well. Seems this is resolved now... still have to wonder what other articles might be caught by this oddity but I can't imagine it's too widespread.MediaKyle (talk)19:01, 10 February 2026 (UTC)[reply]
Hm. I think maybe quite a lot, actually, if the reason something wouldn't be indexed is "no edits since an NPR hit the reviewed button". --asilvering (talk)05:22, 11 February 2026 (UTC)[reply]
If this is common, then it would probably be good to have a query listing all of these pages, so that a bot could make an edit to each of them to get them indexed. I somewhat doubt it's common once you discount articles with a delay of only 10 days or so – but even if it's not common, many pages could be affected. Prototyperspective (talk) 11:37, 12 February 2026 (UTC) [reply]
@Prototyperspective Here you go - Quarry 102028 (I think, anyway). All pages on enwiki which are a) in the main namespace, b) not a redirect, c) marked as reviewed, and d) have a last edit older than the review timestamp.
The interesting thing is that there are two different sets of answers here depending on how we look for the "review timestamp". In total, there are about 6,000. But filtering only on ptrp_tags_updated we have about 5,600 entries, oldest review date 2026-01-01. Filtering only on ptrp_reviewed_updated gets about 1,600, oldest review date 2026-01-15.
Those are two suspiciously round numbers (one is this calendar year only, one is the last month only) so I am wondering if they are perhaps incomplete. Either way it might be an interesting list to investigate.Andrew Gray (talk)16:34, 15 February 2026 (UTC)[reply]
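The gist of that "reviewed but never edited since review" check can be mocked locally. A sketch under stated assumptions: the tables below are simplified stand-ins (the real pagetriage_page table does have ptrp_reviewed and ptrp_reviewed_updated columns, but the real query also joins the page table properly and filters namespace and redirects); fixed-width MediaWiki timestamps compare correctly as strings:

```python
import sqlite3

# In-memory mock of the Quarry logic with simplified stand-in tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE page (page_id INTEGER, page_title TEXT, last_edit TEXT);
CREATE TABLE pagetriage_page (ptrp_page_id INTEGER, ptrp_reviewed INTEGER,
                              ptrp_reviewed_updated TEXT);
INSERT INTO page VALUES (1, 'Nova Scotia Guard', '20250820000000'),
                        (2, 'Other article',     '20260210000000');
INSERT INTO pagetriage_page VALUES (1, 1, '20250901000000'),
                                   (2, 1, '20260101000000');
""")

# Pages whose last edit predates the review timestamp: these may still
# carry Google's impression from when they were noindexed.
stale = con.execute("""
    SELECT page_title
    FROM page JOIN pagetriage_page ON page_id = ptrp_page_id
    WHERE ptrp_reviewed = 1 AND last_edit < ptrp_reviewed_updated
""").fetchall()
# stale -> [('Nova Scotia Guard',)]
```

A bot working through such a list would only need to make one logged edit (even a trivial one) per page to nudge a re-crawl, per PrimeHunter's observation above.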
In the past couple of weeks Special:WantedCategories has seen more than one recurrence of a redlinked Category:Temporary Wikipedian userpages, which was deleted in 2016. Both times it was populated entirely by the user talk pages of editors who were blocked for vandalism in 2008 – the first time exclusively editors whose usernames began with W, and today exclusively editors whose usernames began with V. The culprit appears to be that said talk pages have recently been undeleted on WP:DELTALK grounds, after having been previously deleted, and were thus put back into a category that existed at the time of the original deletion but has not existed for a decade.
The category is currently empty. Please always include an example. I found one in your contributions: [8]. There is no way to prevent the categorization before you removed it. The page was undeleted by Hex. Her logs show she undeleted many such pages beginning with V on 7 February and with W on 29 January. If she is planning to undelete more pages, you could ask whether she will remove the category afterwards, but now I have also pinged her. PrimeHunter (talk) 18:03, 10 February 2026 (UTC) [reply]
Hiya - yes, I'm repairing a mass deletion of user talk pages by a former admin back in 2008, before we decided not to do that. It's slow and tiresome because I check each of them to ensure there's nothing requiring RevDel before hitting the button, so I was planning to ask someone to get a bot to clear off the category afterwards. I guess you'd like me to arrange that now? Given that I've done about 600 out of 11,000, this is going to take a while. By the way, Bearcat, since you evidently checked the logs, you could have written to me first before posting here. It's also kind of odd that you didn't mention me, so PrimeHunter had to send a ping. — Hex • talk 19:58, 10 February 2026 (UTC) [reply]
@Hex: Now that the topic is raised, I do have to wonder what purpose is served by undeleting these talk pages. Was there a discussion somewhere that concluded these 11000 pages would be useful to mass-restore 18 years later?Anomie⚔23:59, 10 February 2026 (UTC)[reply]
I have the same question. I looked at just one example, which had five Linter errors and one nonexistent category. I expect to see deleted templates as well. Restoring these pages will make work for a lot of gnomes; what is the benefit, and where was the discussion about this restoration? – Jonesey95 (talk) 00:32, 11 February 2026 (UTC) [reply]
No discussion was required. We established consensus a long, long time ago that user talk pages shouldn't be deleted except in rare circumstances because they form an important part of the historical record. When that happened, someone should have done this job, but nobody did. I'm rectifying that error. The effort of dealing with a small number of linter issues will be outweighed thousands of times over by the benefits of not having a massive chunk of user interactions and block log context missing for no good reason. —Hex•talk01:43, 11 February 2026 (UTC)[reply]
"Thousands of times over"? Actual human and bot editors are going to have to make thousands of edits to remove errors from these restored pages. That is a guaranteed downside if this project goes forward. Hex: Please enumerate concrete benefits that balance the downside of those thousands of edits. I won't ask you to justify the obviously unjustifiable orders of magnitude that you claim; just a simple positive or break-even counterbalance would be fine. – Jonesey95 (talk) 15:09, 11 February 2026 (UTC) [reply]
A quick-and-dirty SQL query says that among the undeleted pages with lint issues, most have obsolete tags, "no background inline" issues (which a number of Wikipedians regard as having a lot of false positives), and missing end tags. Most of this can be fixed automatically. Everything else affects fewer than 20 pages.
@Snævar: There is no restriction. Some editors create their user page as the very first edit; indeed, for some, it is the only edit that they ever make. It's often harmless, provided it's not against WP:UPNOT and isn't speedyable under the G and U criteria. But this thread appears to be about user talk pages, these being the ones that Hex has been undeleting. Very few users create their own user talk pages, although some do. It's also not usually a Wikicrime. --Redrose64 🌹 (talk) 23:29, 11 February 2026 (UTC) [reply]
@Anomie - who can say, really? People are interested in anything and everything. Even the most seemingly mundane detail in an archive may be exactly what some future historian is looking for as part of a research project. You could really say that about almost all of our archives, which we generate at a ferocious rate - page revision histories, system logs, talk archives. 18 years is also not very long at all. The discussion we're having right now will get archived, and then nobody might care about it at all for 25, 50, 100 years. But there may be a single historian in 2126 who it's useful to and is reading it right now. (Hello! Do you live in space? I'm sorry for what we did to the planet.) It's thatpossibility that we keep archives for. —Hex•talk20:30, 11 February 2026 (UTC)[reply]
So a moment ago it was the unsupportable "The effort of dealing with a small number of linter issues will be outweighed thousands of times over by the benefits of not having a massive chunk of user interactions and block log context missing for no good reason", and now it's "that's life"? I hope that Hex will consider cleaning up the pages that they undelete (link to an example of a Linter error of a type that was completely eliminated years ago). Editors are responsible for their edits. This is like watching someone walk through my neighborhood throwing trash on the ground. – Jonesey95 (talk) 14:25, 13 February 2026 (UTC) [reply]
If anything, Jonesey95's crusade against linter errors is way more harmful than Hex's undeletions, because they seem to have no issues with loudly complaining and upsetting people over it. The message above ("This is like watching someone walk through my neighborhood throwing trash on the ground") is a perfect example of this.sapphaline (talk)14:59, 13 February 2026 (UTC)[reply]
@Pppery: That was me, not Jonesey95. I guess good for you that you cared at one point in 2019? But not enough to have started a discussion beyond the REFUND you now admit wasn't a good one.Anomie⚔00:23, 12 February 2026 (UTC)[reply]
@Jonesey95: Mentioning me in every single edit summary you make so that I come back to find I have 75 notifications is unbelievably petty and childish. Grow up. —Hex•talk15:15, 13 February 2026 (UTC)[reply]
It was a boilerplate edit summary. What is unbelievable is how much work you are making for your fellow editors. Please clean up the pages that you are restoring. Editors are responsible for their edits. See below for something more constructive. –Jonesey95 (talk)15:23, 13 February 2026 (UTC)[reply]
Two different boilerplate edit summaries which you wrote yourself to specifically mention and talk to me, which you've now stopped doing after getting called out on it. Sure dude. —Hex•talk16:01, 13 February 2026 (UTC)[reply]
You asked me to stop, so I stopped. That's the polite thing to do. See below for an example of an editor who did not stop causing problems when asked to do so. –Jonesey95 (talk)16:43, 13 February 2026 (UTC)[reply]
I just did a partial cleanup on 88 user talk pages restored by Hex, fixing types of Linter errors that we eliminated from the English Wikipedia many years ago, and deleting nonexistent templates. A bot also removed nonexistent categories from many of the restored pages. This work took me about an hour that I otherwise would have spent fixing other problems or making actual improvements to Wikipedia. Bots and human editors will be needed to clean up "obsolete tag" Linter errors on a couple hundred additional pages that Hex recently restored.
I suspect that there is a better way for Hex to achieve their goals while avoiding this unnecessary work. I can think of a few options:
Stop restoring these pages.
Restore the pages, then fix all errors on the pages (both actions would be performed by Hex).
Restore the pages and then blank them. The supposedly valuable information would still be available in the pages' histories.
Now that there is actual time-based evidence of the cost of restoring these pages, explain in detail the thousands of hours of benefits that will accrue to future editors, readers, and researchers from restoration of these 88 pages. If it is really worth it, I can live with the extra work.
I'll also add my voice here that I think you should stop what you are doing and seek consensus for it. If anyone would have edited 11k pages without even a single discussion they'd get blocked immediately. Being an admin does not give you any special rights to bypass this process.Gonnym (talk)20:49, 13 February 2026 (UTC)[reply]
We had the discussions about user talk pages from 2006–2010. In fact, the day after tomorrow is the 20th anniversary of WP:DELTALK. If you want an MfD for 11,000 user talk pages trying to retrospectively overrule that consensus, well, good luck. — Hex • talk 22:05, 13 February 2026 (UTC) [reply]
Find me a consensus that isn't 16-20 years old please. en.wiki has changed dramatically since then and I'd like to see recent consensus that agrees that mass restoring 11k pointless talk pages is wanted.Gonnym (talk)08:34, 14 February 2026 (UTC)[reply]
Starting from 12:27, 7 February 2026, out of the 605 user talk pages Hex has restored, 215 pages currently have at least one lint error (382 have no errors, and 8 were re-deleted). Here's a list in case anyone is interested in fixing those lint errors specifically: User:DVRTed/sandbox4. — DVRTed (Talk) 16:22, 14 February 2026 (UTC) [reply]
FWIW, I have already fixed all, or nearly all, of the Linter errors in these pages other than "obsolete tag" errors (the dark mode issues are not worth bothering with at this time, which is another discussion). I think I also fixed all of the nonexistent templates. We have a bot that can fix many pages that have only obsolete tags on them, so for human editors interested in fixing Linter errors, there are plenty of non-bot-fixable pages to focus on. The bot will make its way around to these pages eventually (it is currently fixing a batch of many tens of thousands of pages, possibly as many as 300,000, containing an error caused by a substed template; aren't you glad you're not a bot?). – Jonesey95 (talk) 05:07, 15 February 2026 (UTC) [reply]
I think the real issue here is WP:MEATBOT. I don't have an opinion on whether these pages should be restored, but I can understand that folks find undeleting 500 pages in a day to be disruptive when the whole project averages closer to 30-35 per day. Obviously pages are going to be restored from time to time, even ancient ones; IMO it's the scale at which it's happening that's upsetting people.
Restoring 11 pages in a single minute is clearly bot-like behavior, and should go through some sort of approval, at which point we can figure out details on how to coordinate with other cleanup bots and humans. Restoring 11k user talk pages doesn't fall under WP:MASSCREATE because they aren't articles, but I think following that guidance would ease bad feelings on both sides. Legoktm (talk) 00:23, 14 February 2026 (UTC) [reply]
Call it bot-like if you wish, but this is very, very simple work that requires only a glance at the edit history of pages that are 90% just a single block message or maybe a couple of warnings before that. It is even so still work requiring human attention, and not a bot. It's also incredibly boring, and because unlike some people I understand that there is no deadline, I'm not in some all-consuming rush to get this done. It's also how I approach my backlog of grindy projects, of which I have many: do some to scratch an itch, then forget about it for a while. I started this project a year and a half ago - that's how long it took me to get over the boredom of the last bunch of undeletions. After doing 500 yesterday that itch has been scratched for now until I regain the energy to think about it more, but I'm probably going to be seeing this site in my sleep for a week. It would probably have come up for a bit more scratching relatively soon now that I've gotten a feel for it again, but after the toxic behavior on display in this discussion it's retreated a long way and is unlikely to see the light of day for quite some time. I'll note again here that we could easily have had a good-natured chat about all of this on my user talk page, but someone chose to passive-aggressively post here in a way that they knew would cause drama. For shame.
If people want to artificially limit progress on rectifying this big, stupid mistake from the past, they could at least volunteer to help out with it. I'm the only one doing it, and hamstringing me on the occasions that I feel sufficiently motivated will achieve nothing. If there are more people working on it then that will make a difference even with a go-slow sign on the side of the road.
"This is very, very simple work that requires only a glance at the edit history of pages" illustrates the problem. The technical work of looking at a history and clicking a button is simple, but the job is not done at that point. Instead of moving on to the next boring page restoration, the restoring editor, who has now created one or more problems on a Wikipedia page, bears some responsibility for resolving those problems. The editor should remove nonexistent categories and templates and do their best to fix wikitext syntax errors. The red categories are easy to see in preview. The nonexistent templates are easy to see in "Pages included in this section:". And many of the syntax errors are easy to see using the syntax highlighter gadget. Please fix the errors that you are creating, now that you have been notified that you are creating them. – Jonesey95 (talk) 13:09, 14 February 2026 (UTC) [reply]
@Hex, I hope you don't feel that pushback against your project is toxic. Jonesey95, in particular, has tried to offer alternative solutions that wouldn't cause issues for other editors, but you seem to have dismissed them out of hand.
I'll note again here that we could easily have had a good-natured chat about all of this on my user talk page, but someone chose to passive-aggressively post here in a way that they knew would cause drama. For shame.
Isn't this really the crux of the problem? Bearcat often asks here for help with his maintenance work keeping Special:WantedCategories clean - I don't know where you're getting that this is intended to cause drama (see also Wikipedia:Aspersions). But clearly this is causing issues for other editors, because that is what precipitated this thread. Qwerfjkl talk 15:04, 14 February 2026 (UTC) [reply]
FWIW, I also feel it's useful to restore these talk pages. However, Hex, I have a question: you've mentioned you're checking to see whether there's anything needing to be revdeleted, which is great. But are you also checking why each page was deleted? Because IMO in any case where it was deleted at the request of the editor whose talk page it is, you should be blanking the talk page by default. Blanking a talk page is perfectly in line with policy and practice. And if the editor asked for it to be deleted 18 years ago or whenever, and this was granted given the norms of the time, and it's now being undeleted because of policy changes, we should still grant that editor the courtesy of blanking it for them, as the closest thing we can do which is in line with our current policies and fits with their request. Even in cases where deletion wasn't at the editor's request, I don't see any harm in blanking. Especially since we will probably never know whether the editor might have blanked it if that had been possible, and they didn't reasonably expect it to come back 18 years later. So IMO the solution which will also allay the other concerns is for you to blank each page after restoration. (To be clear, this means you probably don't have to check so carefully why the talk page was deleted.) If you want to get technical, you could keep any declined unblock requests if the editor is still blocked, but frankly, after 18 years of a long-deleted talk page, that's not particularly important IMO. BTW, I could have approached you directly, but since we're already discussing it here, I felt it best just to mention it here. Nil Einne (talk) 16:09, 15 February 2026 (UTC); edited 16:19, 15 February 2026 (UTC) [reply]
To be clear, IMO the primary reason for blanking the talk page in a case where the editor requested deletion is that we should assume it's the closest thing we can do that follows their wishes – and something there's a fair chance they would have done if told 18 years ago, "sorry, we can't delete the talk page, but you're free to blank it". (I think, but am not sure, that nowadays some admins may blank a talk page if an editor requests deletion.) The fact that it also deals with the other problems that come from a very old page being restored is only an added bonus. Nil Einne (talk) 16:13, 15 February 2026 (UTC) [reply]
I agree that restoring and then blanking would be an acceptable path forward. I proposed it as option 3 above. It would alleviate the Linter errors, the category errors, and the nonexistent template errors, which (if I am reading correctly) would address all of the editors' complaints in this thread. –Jonesey95 (talk)16:21, 15 February 2026 (UTC)[reply]
Looking more closely, it seems most or all of these were deleted as temporary user pages rather than on request. Even so, I still feel simply blanking them is the best option, given that, as I mentioned, blanking might have been carried out (or requested, if the editor lost talk access) at some point in those 18 years if the pages hadn't been deleted. And the user has no reason to expect the page would suddenly come back. If the editor objects, they're free to unblank it; blanking when the editor doesn't want it seems the lesser of two evils. For IP talk pages the situation is different, but it was normal to clear out old messages to reduce confusion, and while there's no need for it now that we have TAs, there's also no harm in it. (Frankly, I wonder if we should just blank all IP talk pages, but that's a discussion for another day.) Nil Einne (talk) 16:48, 15 February 2026 (UTC) [reply]
I'm in support of blanking as well; it takes care of all the different issues nicely. Would be good to have a flagged bot do it so it doesn't trigger extra notifications for these users, though. (@Nil Einne: VulpesBot is supposed to take care of blanking IP talk pages.) Legoktm (talk) 18:24, 15 February 2026 (UTC) [reply]
@Hex: Sorry, I intended the end of my message to be proposing a (hopefully) positive path forward, specifically "...we can figure out details on how to coordinate" was me explicitly volunteering to help!! I think treating this as a bot task will actually speed up what you want to accomplish rather than "artificially limit progress".
I do disagree with your assertion that this isn't a bot task. Doing something across 11k pages, even with minimal human input, is just a semi-automated bot instead of a fully automated one. To quote MEATBOT: "Editors who choose to use semi-automated tools to assist their editing should be aware that processes which operate at higher speeds, with a higher volume of edits, or with less human involvement are more likely to be treated as bots. If there is any doubt, you should make a bot approval request." Legoktm (talk) 18:21, 15 February 2026 (UTC) [reply]
I'm very happy to hear that! Regarding it not being a bot task, I said that because I don't think that any page should be undeleted without an admin at least briefly checking first, especially ones this old. It's a bit of a pain, but FWIW, out of the 500 the other day I did choose not to restore one (it was pure spam). About MEATBOT, well, this is splitting a hair, but it does say "use semi-automated tools" and I didn't use any of those; it was completely manual.
I disagree strongly that we should blank the restored pages, for a number of reasons:
They have no particular content that sets them apart from their contemporaries; the talk pages of registered users blocked in 2007, 2009, etc haven't been blanked.
Blanking 11k user talk pages is an extraordinary act, and would require extraordinary circumstances (and extraordinary consensus). By contrast, what's being discussed here is restoring them to completely ordinary circumstances.
It's said upthread that there's a bot fixing obsolete tag "errors"; then let it get around to them on these pages whenever it does. We absolutely shouldn't be making it more inconvenient for project participants to read historic discussions simply to make numbers go down on some linter reports, and especially not when it's something as trivial as replacing <font>...</font> tags, which has no bearing on reading or participating. There definitely are discussions: I saw dozens and dozens of talk pages with much more than just a block message. Let's put humans first in our considerations of handling historic material in the project.
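(For context, the obsolete-tag replacement at issue is, in form, a one-for-one substitution. The example below is a typical hypothetical instance of the kind of edit a linter-fixing bot makes; the exact replacements any given bot performs may differ.)

```html
<!-- Obsolete since HTML5; reported by Special:LintErrors -->
<font color="red">some old signature text</font>

<!-- Common replacement that preserves the rendering -->
<span style="color: red;">some old signature text</span>
```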
They have no particular content that sets them apart from their contemporaries; the talk pages of registered users blocked in 2007, 2009, etc haven't been blanked.
They have errors that would take considerable effort to clean up.
Blanking 11k user talk pages is an extraordinary act, and would require extraordinary circumstances (and extraordinary consensus). By contrast, what's being discussed here is restoring them to completely ordinary circumstances.
I think after 18 years, restoring 11,000 pages is still an extraordinary act.
It's said upthread that there's a bot fixing obsolete tag "errors"; then let it get around to them on these pages whenever it does.
The bot will certainly not fix all the errors (it's unclear if it will even fix a majority). User talk pages can be a massive pain to clean up.
We absolutely shouldn't be making it more inconvenient for project participants to read historic discussions simply to make numbers go down on some linter reports, and especially not when it's something as trivial as replacing <font>...</font> tags which has no bearing on reading or participating.
I think if project participants urgently needed to read historic discussions, they would have asked sometime in the last 18 years. Checking the history does not feel like an excessive barrier.
Let's put humans first in our considerations of handling historic material in the project.
It's humans who have to do cleanup work to fix these pages. If we had an automated way to fix all linter errors on user talk pages, we would not have over 1 million of them.
It is a bit frustrating to see Hex simply opposing constructive feedback in this thread, rather than offering any sort of constructive solutions to the problems that they have been creating. Hex, since you are opposed to option 3 above (restoring and then blanking), which has support in this thread, which of the proposed options would you support? If the answer is "none", please propose a different option that addresses the problems caused by restoring these pages. –Jonesey95 (talk)20:32, 16 February 2026 (UTC)[reply]
"They have errors that would take considerable effort to clean up." The main issue here is considering non-problems such as the presence of font tags to be "problems". They are not. It's meaningless busywork which offers zero benefit to anyone. Fixing actually broken markup such as unclosed tags has some minor value. "I think if project participants urgently needed to read historic discussions, they would have asked sometime in the last 18 years." Nobody has said anything about anything being "urgent". This is a strawman argument. "Checking the history does not feel like an excessive barrier." It wouldn't be an "excessive barrier"; it would be superfluous and annoying. "It's humans who have to do cleanup work to fix these pages." If humans stopped fretting about non-problems, they could spend time on "making actual improvements to Wikipedia", to borrow a phrase. —Hex•talk21:48, 16 February 2026 (UTC)[reply]
"If humans stopped fretting about non-problems" Like talk pages that were deleted 18 years ago, which almost no one has cared about in all that time, and almost no one is likely to care about in the future? How many of these even have anything in the history besides some warning templates and a block notice?Anomie⚔00:56, 17 February 2026 (UTC)[reply]
Have to agree. While I initially supported the restoration of these talk pages, I can no longer do so. Like it or not, they were deleted 18 years ago, and therefore anything we do now is making a significant, fairly extraordinary change. IMO, given our policies and guidelines, restoring them and blanking them is the best step forward. It complies with the view that talk pages should not generally be deleted, while also keeping things at the 18-year status quo ante in a manner fully supported by our policies and guidelines.
To be clear, these talk pages have effectively been blank for 18 years, and we have no idea how many of them would have been properly blanked in those 18 years were it not for the fact that not only was there no reason to, it was impossible. It seems we don't even know for sure whether, in some of them, the editor behind the talk page might have wanted the talk page blank and received this via the talk page being deleted after they made a request (even if the deletion wasn't from the request).
If Hex isn't willing to do this, then they need to stop. Admins need to be responsive to the community, and the community has indicated that it does not believe Hex's reading of policy is correct; our policies and guidelines do not allow restoration under the conditions Hex is imposing. If Hex isn't willing to stop by themselves, we can discuss a topic ban or worse to stop Hex, but I really hope we don't have to do that.
Hex is of course welcome to seek a consensus somewhere to establish that there is in fact consensus that restoring under the conditions they're imposing is supported by our policies and guidelines. I'll ping User:Pppery, as the only other editor who seems to support Hex's actions, to see whether they support Hex's restoration under the conditions Hex is imposing.
Note that the fact the vast majority of the 2007 and 2009 pages have not been blanked is irrelevant. I find it hard to believe zero of them have been blanked, and Hex has provided no evidence for this. It's quite likely a small number have been blanked for various reasons, including that the editor it belongs to did it or asked for it, and that this happened more than 6 months after the block. The trouble is we have no way of knowing for which of these 11k talk pages from 2008 this would have happened, since it wasn't possible. We are imposing a situation on the editors whom the talk pages are for, after 18 years, that doesn't apply to the 2007 and 2009 pages, because we are restoring them after 18 years. These editors might have edited under their real names, perhaps even when they were minors, and might be shocked to find content they thought was long gone suddenly visible again, and perhaps even indexed by some search engines that chose to ignore our noindex (I think user talk pages are noindexed somehow?)Nil Einne (talk)06:42, 17 February 2026 (UTC)[reply]
I find it hard to believe zero of them have been blanked and Hex has provided no evidence for this. Because I didn't say that. I said that they haven't all been blanked. —Hex•talk23:36, 17 February 2026 (UTC)[reply]
Well, I support restoring, and not restoring-plus-universal-blanking, for reasons I laid out in a recent VPP discussion on the matter.[9] (But, tl;dr: you can't use the search function on blanked talk pages.)
Furthermore, there have been a few times where I've stumbled across old, yet still extant, content issues in articles made by accounts with talk pages blanked/deleted in the older days... wish I'd written a few of those down now, given that Hex is restoring them, as I'd love to be able to see to what extent cleanup was done by other editors, and whether there was evidence of vandalism/spamming/block evasion. (Just saying, I apply a lot less AGF when it comes to fact-checking/removing unsourced content for confirmed spammers and socks versus clueless newbies.) And, again, I oppose wholesale blanking iff the editor doing the blanking has not checked the account's contributions and confirmed there are no issues with respect to copyright, BLP, spam, vandalism, NPOV, etc., in mainspace. (I'm also okay with selective blanking of a messed-up template or PII disclosure; both things I've done myself, many a time.) Like I said before, once those have all been dealt with, I'm fine with anybody blanking the talk pages. But unless the editor is going to do that, please don't make it harder on those of us who like cleaning up mainspace content!
And as for the linter errors, to put a different perspective on the issue: there are probably 7 million or so articles in need of some form of copyediting in main space, right? But that's hard to do, it takes time, etc... so should we just blank and delete those? It would be easier on our copyeditors, of course. And it would remove a lot of unambiguous errors.GreenLipstickLesbian💌🧸06:46, 17 February 2026 (UTC)[reply]
"iff the editor doing the blanking has not checked the account's contributions and confirmed there are no issues with respect to copyright, BLP, spam, vandalism, NPOV, etc., in mainspace" This is an excellent point and illustration of why registered users' talk pages shouldn't be unconditionally blanked, even if they're 18 years old. "It would be easier on our copyeditors, of course. And it would remove a lot of unambiguous errors." True. All those queues would shrink to nothing, and as we know, making number go down is the most important job on the project. —Hex•talk23:41, 17 February 2026 (UTC)[reply]
(edit conflict) I disagree that the talk pages should be blanked. In my view the original deletions were invalid and we should end up in the state we would have ended up in had they never happened. That state is them being unblanked; per WP:TPO neither you nor anyone (other than the blocked user) has/would have had the authority to blank them (and there had never been any convention of blanking registered users' talk pages). And, I agree with most of Hex's comment at 21:48, 16 February 2026; in my view a lot of lint error fixing amounts to pointless busywork that doesn't need to be done (a position I've expressed many times before, such as this comment in 2023). That said, as an admin Hex does need to respect the consensus that the community comes to, which right now appears to be that these actions aren't appropriate, even if she disagrees with it.* Pppery *it has begun...06:49, 17 February 2026 (UTC)[reply]
The problem IMO is that it is not 2008, as much as we might like to pretend it is. It's 18 years later. Heck, even if it was 2010 and someone had mass restored these then, IMO you'd have far more of a case that it's simply enforcing the policy change and nothing more is needed. But it's not that, and like it or not, these were deleted and therefore effectively blanked for 18 years. We cannot just pretend this didn't happen, since it did.
Any of these editors could have complained that they didn't want their talk page deleted, and therefore blanked, sometime in these 18 years, and it should then have been restored. I assume this didn't happen for any of them, or maybe in some cases it fell through the cracks, but either way there's no reason to assume any of them wanted the pages restored. Further, once they are restored, any editor who still has talk page access is free to restore the content if they wish.
However, there is good reason IMO to assume there might be a small number who wanted their talk pages blanked and didn't do anything because their talk pages were already effectively blank. Yes, they can do it after restoration, but in the 0-18 years since they had this thought, they very likely never expected their talk page to be suddenly restored against their wishes, and might be shocked to find it so. Even a policy wonk realistically likely wouldn't have expected it.
While I'm not an extreme policy wonk, I do feel I have a decent grasp. And frankly, even if the thought had entered my mind, I'm not sure what I would have done. I guess probably re-create the talk page with the message "If this is deleted, please keep this version or otherwise keep the content blanked." Alternatively, request undeletion and blank it after.
But this is a fairly complicated line of thought based on a good understanding of our policies and guidelines, not something many would have. And as said, even for me, I'm really unconvinced it would ever have occurred to me that my talk page would suddenly come back in all its glory, with whatever possibly embarrassing info fully visible for all to see, and indexed by whoever chose to do so. (And just to cut off questions, no, I do not have a talk page which I want blanked. Nor one which has been or is going to be restored.)
Or let me put it a different way. If, instead of being deleted, these had simply been mass blanked, does anyone realistically think there would be any chance of getting consensus to mass restore them now, 18 years later? I quite doubt it. Part of this is the disruption of so many unnecessary edits. But part of it, IMO, is likely concern over what we are restoring 18 years later, when people might not have expected it, and where we're not, and really can't be, sure there isn't something the person whose account it belongs to wanted blank.
I'd add that, if you want to get technical, I'm unconvinced these deletions were clearly invalid at the time. While there was no written policy supporting it, AFAICT there was no clear policy opposing it either. And since policy is intended to document practice, the fact that one admin did it for 11k pages and no one seems to have complained about it until years later means IMO it cannot clearly be said to be out of process, especially since at the time policy was often less written down than it is now.
Even now, if there was no clear policy for or against something, and someone did it in such a manner with so many pages that a bunch of people must have noticed, it's hard to make a case even a year later that it was clearly invalid. Reverting after even 1 year, let alone 18 years, generally requires discussion. I cannot realistically see how so many pages were deleted without quite a few people noticing, and so far no one has demonstrated there was any pushback at the time, and if there was, nothing seems to have happened.
I accept that policy and practice now, and for a long time, may be clearly against it. And since there was no talk about preserving legacy deletions, a case could be made for undeleting stuff which was deleted before this policy. But the fact that they weren't that clearly invalid at the time means even more that extra care needs to be taken in how we go about doing so, especially if we are going to do so after 18 years. As said, it isn't 2008 or 2010 and we can't pretend it is.
Preserving the status quo ante in the less disruptive manner which is supported by our policies and guidelines should be considered, and since someone already effectively blanked them 18 years ago, preserving this makes the most sense. Even more so since what they did wasn't clearly invalid at the time.
BTW about IP talk pages, I see probable consensus there to stop the bot and cease blanking pages now. But these talk pages are 18 years old and so would have already been blanked. If consensus is reached to unblank all the other IP talk pages we can restore these at the same time but until there is such a consensus IMO it still makes sense to blank them.
IIRC our guidelines still allowed IP editors to blank their talk pages except for information targeted generally such as who the IP belongs to and guidance for making an account or info on blocks. But there's no real reason to keep this and it doesn't seem to be what GreenLipstickLesbian wants from the talk pages.
That said I'm less fussed about this since it's a lot less likely for there to be an issue as it's a lot less likely there was someone still using the IP who could blank the talk page. (Although I'm fairly sure if e.g. someone put a real name in the talk page or whatever, we'd have allowed it to be removed even if it wasn't the IP requesting it. And again especially without careful checking to at least find possible real names we have no idea if there might be such cases where the person thought it was no longer an issue in the 0-18 years but then it suddenly is.)
Ultimately we've always been clear that if you want to do something en masse, you generally have to do it right. So if you're not willing to spend the time to try and rule this stuff out, IMO you shouldn't be doing it. IMO it's probably not even possible to do this fully, but I'd perhaps be willing to accept it if whoever does these restorations checks what they are restoring and removes anything which the editor might have wanted blanked, along with a check for any signs of a request for blanking.
I'd estimate 5-10 extra minutes would need to be spent per user talk page restored, since it's very difficult to imagine everything that might be sufficiently problematic that an editor would want it blanked, so it does require careful thought. Plus a look through the editor's contrib history for any signs they requested deletion or removal of their talk page. Note that since some people use pseudonyms all over, we shouldn't assume it's okay even in the absence of probable real names, especially for usernames.
I was looking through the history to find if there was any difference about who may remove talk page comments, and when. There isn't really.[10] Although I will note that even now, editors are allowed to request courtesy blanking, so there's no requirement that it must be the user the talk page belongs to who removes comments, just that it should generally be they who requested it. Perhaps more importantly, the guidelines at the time did support deletion of IP talk pages (not user ones) in certain cases.[11] More importantly, they seemed to support deletion in RTV cases.[12] Although frankly, even without this, I think a RTV request should be considered a request for courtesy blanking unless it explicitly asks otherwise. Anyway, so at an absolute minimum, any RTV, or frankly anything akin to it, requires the talk page be blanked IMO.Nil Einne (talk)10:34, 17 February 2026 (UTC)[reply]
Note I wrote that before I explored the history. I'll accept that the user page policy did discourage user talk page deletions back to 2007.[13] But it was still a lot more wishy-washy, with the mention of RTV. Even now it's IMO not that clear from written policy what these exceptions are; it's just that practice and discussion have established they are very exceptional cases.
The IP talk pages thing is interesting. While it didn't last that long[14][15] with some clearly opposed[16][17], IMO even from those discussions it's clear this wasn't such a clear no as it is now. So I still feel the deletions weren't so clearly out of line as they are now. The biggest concern seems to have been concerns over doing it as routine or in mass.
And I still haven't seen much discussion over restoring the deleted talk pages. IMO the most common sentiment seems to have been don't do it, but meh about those already deleted.
Also, I may not have been as clear as I planned. IMO the harm from undeleting without blanking is this. While cases where the user whose talk page it was would have blanked it or requested blanking in the interim, but didn't because there was no need to, may be tiny in number, the potential harm to them from suddenly finding their talk page back could be severe.
Therefore this harm greatly outweighs the limited harm that comes from everyone finding their talk page was courtesy blanked without their explicit request after 18 years of it being effectively so via deletion during which time they could have asked for restoration if it so bothered them. Noting also that deletion seems a much more serious issue than courtesy blanking.
BTW, I haven't much addressed the Linter errors etc issue because I consider this a much more minor one. Personally I feel both Linter issues and these talk pages being deleted are minor problems but still serious enough to warrant attention and this is supported by our policies and guidelines.
Different editors may feel they matter more or less. So editors on both sides are right to feel these do matter. And while everyone's entitled to their opinion, IMO any view that they don't matter, so we shouldn't do anything, doesn't mean much. No one should need to find something better to do, since people who feel either issue matters are entitled to feel so and to act and speak accordingly.
And generally speaking, no one should be forced to do anything, so no one should be forced to fix linter errors or to undelete pages after 18 years. But an exception is where you're contributing to some problem. Since this is effectively happening via the undeletions, it's fair to ask whether some requirement should be imposed on anyone doing so. The fact these are undeletions rather than actively doing stuff reduces concerns, but it doesn't eliminate them.
To give an example, since blanking now isn't well supported, if someone were fixing linter errors by blanking pages which haven't been effectively blanked for 18 years, we'd likewise have to ask whether they should be doing this. But it isn't what's happening.
Instead, the only request is to blank something which has been effectively blank for 18 years. This resolves the things both sides care about: concerns over the talk pages being deleted when modern practice says they shouldn't have been, while simultaneously resolving concerns over linter errors and other problems being re/introduced. (From what I understand, some of the markup would have been fine that far back, so technically it wasn't a problem then, which makes it complicated to call this a simple re-introduction even if the editor undeleting isn't the one who originally submitted it.)
But most importantly IMO it restores the talk pages without the possibility of imposing an extreme burden after 18 years on the editor the talk page is for.
(Do remember that even if somehow someone started editing as a baby, they'd now be an adult in most circumstances in many jurisdictions. 18 years is a very long time however you spin it, which is one reason I keep harping on it.)
Probably my last comment, but if editors feel blanking after restoration would require further discussion and consensus, so be it. There's no harm from discussion. But I feel there's been enough pushback, for different reasons, that whatever our policies and guidelines may say, and whatever consensus was achieved many years ago, it's also clear that restoring these without doing anything also requires further discussion to establish clear consensus for doing so. After all, perhaps that's part of how we got here in the first place, when some admin thought they were doing the right thing supported by our policies and guidelines.Nil Einne (talk)12:46, 17 February 2026 (UTC)[reply]
"IMO the harm from undeleting without blanking is ... the potential harm to them from suddenly finding their talk page back could be severe. Therefore this harm greatly outweighs the limited harm that comes from everyone finding their talk page was courtesy blanked..." I'm sorry, but this is far too speculative. —Hex•talk23:34, 17 February 2026 (UTC)[reply]
Anomie, if that's your position do you also support turning off every talk page archiving bot and automatically revdeleting every page revision older than a year or so? Because they all exist for the same reason: in case someone wants to look at them. Even though 99% of it by volume is extremely uninteresting to 99% of people. If you can't understand that, there's really very little left to add here. —Hex•talk23:02, 17 February 2026 (UTC)[reply]
What's left to add is your response to the original question, and to my restatement of it above: "Hex, since you are opposed to option 3 above (restoring and then blanking), which has support in this thread, which of the proposed options would you support? If the answer is "none", please propose a different option that addresses the problems caused by restoring these pages." Thanks in advance for proposing something constructive that addresses the many objections to the way that you are performing these page recreations. –Jonesey95 (talk)23:27, 17 February 2026 (UTC)[reply]
I respectfully suggest not trying to invoke the name of a fallacy until you're fully capable of discerning the appropriate moment to do so. A slippery slope is an assertion that one thing will inevitably lead to another. I didn't suggest anything of the sort. What I was trying to point out is that you hold contradictory beliefs about the value of items in our collection that are entirely equivalent to each other. Hope that helps. —Hex•talk23:50, 17 February 2026 (UTC)[reply]
This is certainly a very spirited attempt to dodge having been caught with your pants around your ankles but it won't fly. —Hex•talk09:13, 18 February 2026 (UTC)[reply]
Section of text shows up orange but only in some cases
It is the section starting with "include obviously. It is absurd to say that we should say "he had never been arrested before" in [18]. See the discussion there about this. Thanks.Doug Wellertalk18:40, 10 February 2026 (UTC)[reply]
You are using one of the scripts that check links for reliability (Headbomb's, I believe). It highlights the entire list item in which the unreliable link appears. I skimmed it, so I can't say which specific link.Izno (talk)19:28, 10 February 2026 (UTC)[reply]
It would be great if somebody could change that script so it doesn't highlight replies on discussion pages or anything on discussion pages that aren't article talk pages. I'm having the same problem of random replies being marked in red and lots of users have that script installed.Prototyperspective (talk)11:41, 12 February 2026 (UTC)[reply]
Hello there :) Apologies if this has been discussed elsewhere or is otherwise known, but I noticed an ugly visual bug resulting from the combination of a {{side box}} and (a table with) the floatright class. You can see the effect at Ejective consonant, opening the mobile version from a narrow enough screen (or emulator; the "iPhone SE" preset in Chrome devtools is perfect). I tried fiddling with it for a bit but didn't find a convincing solution, or one in which I'm sufficiently confident (e.g., would it make sense to add a content-based width to {{side box}}?). I'm also not familiar with the available layout templates and classes here on enwiki, so y'all may already have a simple solution that I'm not aware of.Daimona Eaytoy(Talk)22:04, 10 February 2026 (UTC)[reply]
This should be changed in the floatright class definition. Memory says this class (and its friends) used to be wrapped in a media query on mobile such that it only took effect above a certain width. I have been meaning to make that how it works globally and just have been ~lazy~. cc @Jdlrobson Izno (talk)00:51, 11 February 2026 (UTC)[reply]
Those rules exist but they only work on responsive skins (not resized skins). They work fine on mobile devices and people using desktop site on mobile.
Generally people shout at you when you give any kind of impression you are making their favorite pre-2011 skin "responsive" or mobile-like, which is why we unfortunately intentionally don't have a responsive version of the Vector 2022 (or Vector classic) skin, which makes me sad.
Thanks for the context :) I agree that the floatright class is ultimately responsible, although I guess I was also wondering if there's a simpler fix to apply to either of the involved templates while we wait for the proper fix. --Daimona Eaytoy(Talk)12:01, 11 February 2026 (UTC)[reply]
intentionally is doing the heavy lifting there. That a parameter works with the skin doesn't imply that was intended (i.e. designed-for).Izno (talk)17:39, 14 February 2026 (UTC)[reply]
If you view Vector 2022 on a mobile device it will always appear zoomed out due to the presence of the meta viewport tag. It will adapt to resizing the browser though. The only way to get mobile Vector 2022 on a mobile device is via [https://en.wikipedia.org/wiki/Main_Page?useformat=mobile&useskin=vector-2022 a specific url].
There has been an intentional decision here (which was pushed for by the community during rollout) for it to behave this way.
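The media-query approach Izno mentions above can be sketched roughly as follows. This is illustrative only: the breakpoint and property values here are assumptions for demonstration, not the actual MediaWiki rules.

```css
/* Hypothetical: only float right when the viewport is wide enough
   for body text to sit comfortably beside the floated element. */
@media screen and (min-width: 720px) {
  .floatright {
    float: right;
    clear: right;
    margin: 0.5em 0 0.5em 1em;
  }
}
/* Below the breakpoint, the element stays in normal flow, avoiding
   the squeezed-column effect seen on narrow mobile screens. */
```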
Is there a way for a template to read the parameters of a template nested inside it? For example, if I had {{template one| {{template two |para1=some |para2=thing}} }}, is there some way to code "template one" to read the parameters from "template two"? Or can "template one" only ever read the output of "template two", not the inputs?
I've seen source code for some templates use#invoke where they could use an existing template. Is there a benefit to doing this, particularly for sidebars? What's the rationale for invoking a Lua module rather than just using{{sidebar}}?
1. A template cannot do this. Via a module it can read the source text of the whole page and search this source text for strings like a specific template name, but we only do that in special cases like Module:Auto date formatter. It doesn't sound suitable for your purpose. It also relies on the parameter being present in the source text.
While I was searching, the closest things I found were {{get parameter}} and {{template parameter value}}. Like you said, it looks like they invoke a Lua module to read the source of a specific article and extract the value of a parameter of a particular template on that page, as opposed to reading a value from a parameter of a child template. So it seems you're right.
The advice given is correct. The innermost template is expanded first, starting with expansion (if necessary) of its parameters. The expanded innermost template is then passed to the next outer template, which is expanded. That means a template gets expanded parameters and cannot determine where they came from. Except that extremely dubious and fragile methods exist to parse the wikitext of the whole page and guess which parameter is wanted.Johnuniq (talk)01:17, 15 February 2026 (UTC)[reply]
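For completeness, the "dubious and fragile" page-scanning approach described above looks roughly like this as a Scribunto module. This is a sketch only: the template name, parameter name, and Lua pattern are illustrative, and the pattern will break on nested templates, reordered parameters, or unusual whitespace.

```lua
local p = {}

-- Fragile sketch: scan the current page's wikitext for a |para1=
-- value inside a {{template two|...}} call.
function p.getNestedParam(frame)
    local content = mw.title.getCurrentTitle():getContent() or ''
    -- Naive match; fails on nested braces, templates in the value, etc.
    local value = content:match('{{%s*[Tt]emplate two%s*|.-para1%s*=%s*([^|}]+)')
    return value and mw.text.trim(value) or ''
end

return p
```

This is why {{get parameter}} and {{template parameter value}} exist as special-purpose tools rather than a general mechanism: the parser simply never exposes a child template's inputs to its parent.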
It should not be possible to make generic section headers in Village Pump discussions
Occasionally editors will start a discussion on one of the village pump subpages and create a subheader with a title like "Discussion" or "Survey", which inevitably creates navigation problems when there end up being multiple identical subheaders on the page under different discussions. It should just not be technically possible to create a generic subheader on these pages. If someone tries to create one, they should be prevented from saving until they change it to a unique, subject-specific subheader (like "Discussion (section headers)").BD2412T19:00, 11 February 2026 (UTC)[reply]
Also agree. The section headers should be descriptive. This also relates to wish W311: Do not fully archive unsolved issues on Talk pages, albeit for meta pages like VP that get lots of threads, another idea would be needed than what's suggested in the image there, and the bigger problem with that is that here threads aren't marked as 'solved' or at least as 'issues' or 'non-issues' (e.g. Tech News posts aren't issues). I think the solution would be to add a sentence about this to the header, asking for descriptive headers and having users edit headers when they're not descriptive. I edited a few section headers at d:Wikidata:Bot requests that weren't descriptive.Prototyperspective (talk)11:33, 12 February 2026 (UTC)[reply]
Personally, I would prefer having a permalink icon next to the heading that provides easy access to a unique link to the heading, so users won't have to generate unique headings on their own. The unique(-ish) ID is already generated by the infrastructure underlying the reply tools feature; there just needs to be a user interface to expose it. (I have my ownscript to copy comment and heading links to the clipboard; other users have written similar scripts.)isaacl (talk)02:00, 12 February 2026 (UTC)[reply]
Ideally section-based editing would be revised to support these IDs as well. However I don't know the practical feasibility of implementing this change.isaacl (talk)02:02, 12 February 2026 (UTC)[reply]
The sandbox link in the personal toolbar is no longer red even if the page doesn't exist, in Vector (both kinds) and Monobook. It's still red in Timeless. The class new is added to <li>, not <a>, so a.new is not being applied. Nardog (talk) 02:35, 12 February 2026 (UTC)[reply]
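Until the skins are fixed, a personal-CSS workaround along these lines should restore the red colour. This is only a sketch based on the description above: the `#pt-sandbox` id and the exact red-link colour are assumptions, not confirmed values.

```css
/* Hypothetical user CSS (e.g. in Special:MyPage/common.css):
   colour the sandbox link red when the "new" class lands on
   the <li> wrapper instead of the <a> itself. */
li#pt-sandbox.new a {
    color: #d73333; /* approximation of the default red-link colour */
}
```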
Per phab:T413542, a task was created to use OOUI in the edit filter interface, but there's a major side effect: the view is messed up on desktop (laptop/computer), and the Ace editor is completely broken in mobile/desktop view on iOS/Android. I am notifying the community about this error (which has probably affected all wikis), which should be fixed immediately. Codename Noreste (talk • contribs) 20:38, 12 February 2026 (UTC)[reply]
Moving this here as I may get a better answer. The question is about AntiVandal. In the settings page, there's a setting for ORES score. I'm not understanding how that setting is supposed to work. I want it to mimic "likely bad faith" in Recent Changes, but it asks for a decimal? So what do I do if I want that behaviour? TheTechie [she/they] | talk? 06:36, 13 February 2026 (UTC)[reply]
It's fine to use paragraph elements appropriately on any page. It's unnecessary when writing paragraphs that aren't embedded within other elements, as the MediaWiki parser will parse newline-separated wikitext as separate paragraphs, but they can be used when embedding paragraphs within other elements such as a list (see Wikipedia:Manual of Style/Accessibility § Multiple paragraphs within list items). The {{pb}} template is easier for most people to use, since it doesn't require a closing tag, but it also is less semantic, as it adds a visual vertical break but not a logical paragraph. isaacl (talk) 17:15, 13 February 2026 (UTC)[reply]
The <p> tag doesn't require a closing tag, either. It's implicitly closed by the next <p> tag, and by the closing tag of any block-level element that encloses it. It's also implicitly closed by the opening tag of any block-level element that you're trying to nest inside the <p>...</p> - in that respect it's unique among HTML elements. --Redrose64 🌹 (talk) 11:32, 14 February 2026 (UTC)[reply]
Yes, I am aware of this behaviour within HTML 5. I was echoing the guidance at Help:HTML in wikitext § p, since technically the wikitext parser could impose additional constraints. However, I missed the paragraph at the end of that section where it said it's not necessary on Wikipedia. isaacl (talk) 03:45, 15 February 2026 (UTC)[reply]
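A minimal fragment illustrating the implicit-closing behaviour Redrose64 describes; this is an illustration, not markup from any of the pages discussed. All of the paragraphs below are valid HTML5 without a single explicit `</p>`:

```html
<p>First paragraph.
<p>Second paragraph; the previous paragraph was implicitly
   closed by this opening tag.
<div>
  <!-- The second paragraph was implicitly closed by the <div>
       start tag above: a <p> cannot contain a block-level element. -->
  <p>Paragraph inside the div, implicitly closed by the div's closing tag.
</div>
```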
Are you sure the author of that comment didn't enter the <p> and <em> themselves in the wikitext? In my experience, the tool itself does multiple paragraphs by using multiple colon-indented lines, not <p> tags. As in this comment, for example.
Do not ever modify someone else's comment as you did here. Those were inserted by the editors (plural), not by DT, and were used deliberately. Izno (talk) 15:55, 13 February 2026 (UTC)[reply]
@Sapphaline: About sixty HTML5 elements may be used within wikitext. As I write this, the list is here. The element names are delimited by apostrophes; note that some are listed more than once. Sometimes, using these can produce a "cleaner" rendered page than wiki markup. For instance, if you have a list, it is possible for one of the items of that list to contain a sublist; but in wikitext, such a sublist must be the last content in that list item. If you want text to appear at the level of the outer list, but after the inner list, you need to use HTML thus:
Nested list
Original list item is still open, so my sig that follows is part of the post beginning "About sixty HTML5 elements", and not divorced into a separate item. --Redrose64 🌹 (talk) 23:58, 13 February 2026 (UTC)[reply]
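The pattern used in the post above looks roughly like this in the wikitext source; this is a reconstruction for illustration, not the exact original markup:

```wikitext
<!-- A wikitext bullet whose item contains an HTML sublist, followed by
     more text that stays inside the same outer item. Everything must
     remain on one source line, or the wikitext list item closes. -->
* About sixty HTML5 elements may be used ...<ul><li>Nested list</li></ul>Original list item is still open, so my sig that follows is part of the same post. --~~~~
```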
Hi! I don't know whether I should put this here or in the Teahouse, but I was about to remove the deprecated parameter "nationality" from the infobox of Fernán Mirás, and for some reason the preview warning doesn't appear, even though the parameter hasn't been removed. It's not an issue with my browser, since on the other pages I removed the parameter from, the warning appeared. I removed the space between the infobox and the template above it, thinking it would somehow help solve the problem, but nothing changed. Thanks, Bloomingbyungchan (talk) 15:36, 13 February 2026 (UTC)[reply]
Thanks, I wasn't aware that it was an error; I thought that the warning was supposed to appear regardless of whether the parameter is blank or not. Bloomingbyungchan (talk) 15:54, 13 February 2026 (UTC)[reply]
A reader asked me about a layout problem they're seeing with an article I currently have at WP:FAC. I suspect the issue is just that they're using an unusually wide window, but I would appreciate suggestions at Talk:Carlisle & Finch#Whitespace if there's some way I can improve on what I'm doing now. RoySmith (talk) 18:03, 13 February 2026 (UTC)[reply]
There is a {{clear}} template at the bottom of the Modern Lights section. If the images extend further than the text, the next section won't start until the images have been displayed. This results in the white space the other editor is seeing. You can remove the template, but then the images will flow down into the Navigation Beacons section unless you move one of the images elsewhere. --LCU ActivelyDisinterested «@» °∆t° 19:37, 13 February 2026 (UTC)[reply]
No infobox for "Corpus Inscriptionum..." exists, and one is very much needed. We have several such articles. I'll place similar notes there and direct people to discuss it here.
Hi, Opecuted. Sorry, but did you read what I wrote? There are some 8 different articles called "Corpus Inscriptionum XY"; they all need an infobox, but none exists. We are forced to use "Infobox language", but it does NOT serve the purpose well - see above for only SOME of the inadequacies deriving from this improvisation. For our purposes on Wiki, a corpus is a collection of inscriptions, either in one language or from one geopolitical region, including inscriptions in several languages, so very far from a language as such. Arminden (talk) 16:11, 11 February 2026 (UTC)[reply]
There are articles on (or redirects to, plus 1 red link to):
This sounds like you WANT an infobox. Articles never NEED an infobox. There is even a considerable "anti-infobox" section of our community that thinks we use infoboxes way too often and that infoboxes should be removed from lots of articles. —TheDJ (talk • contribs) 08:54, 14 February 2026 (UTC)[reply]
–moved here for better visibility —Opecuted (talk) 05:42, 14 February 2026 (UTC)
Being a series of books, {{Infobox book series}} comes to mind, though there's no parameter for the region and era of the original inscriptions (the existing "country" and "publication date" parameters don't seem appropriate). If you make a list of parameters that the infobox should support, then it wouldn't be too difficult to make one. – Scyrme (talk) 06:20, 14 February 2026 (UTC)[reply]
It's more accurate to say each article is about a series of books about a corpus of inscriptions; my understanding is they also include facsimiles of the original inscriptions. I assume they want an infobox that can handle including information about both the books themselves (title, editors, number of volumes, etc.) and the corpus of inscriptions which the books reproduce (era, region, languages). –Scyrme (talk)16:22, 14 February 2026 (UTC)[reply]
Probably more the corpus than the book, but yes.
I find it hugely useful to cross-reference using Wikilinks etc. The inscription collections are in part available online and offer very helpful context to the historical phenomena and sites discussed in individual articles. When this is not the user's main interest that day, an overview in the shape of an infobox, sometimes with links to Google Books or the dedicated website, is just perfect. Without, it takes much longer and I myself sometimes give up and lose much of the context info.Arminden (talk)16:37, 14 February 2026 (UTC)[reply]
@Arminden: It would be easier to fulfil your request if you provided a full list of parameters which it should have. What information should the infobox be capable of displaying?
Without a list it's difficult to make a new template or determine whether an existing template already has all the needed parameters as well as whether a new template would be warranted if one doesn't already exist (there may be other solutions besides an infobox, depending on what's needed). –Scyrme (talk)17:22, 14 February 2026 (UTC)[reply]
Now that I had to compare all the "Corpus..." pages, it slowly sunk in that yes, for the one I'm interested in, Corpus Inscriptionum Iudaeae/Palaestinae (CIIP), it would be great to have an infobox, but for the other pages this hardly applies. CIIP is a bit odd in that it covers an unusually large number of languages from more than one language family, and it's limited in time. So I guess it doesn't qualify?
If nevertheless possible, for CIIP itself I would propose (please compare with what's there already):
With details added freely, in Italics or straight, as you see them in the "Index" section.
At the very least, can you please make the title disappear from the bottom of the "Language families" list (under Safaitic)? Thank you! Arminden (talk) 20:39, 14 February 2026 (UTC)[reply]
I was reading the Swingin' (John Anderson song) article on my iPhone (Vector 2022 skin) and the "Other versions" section header is shown one character per line. Any ideas for troubleshooting this? It does not appear this way when I look at the article using my laptop. Thanks, 28bytes (talk) 13:29, 14 February 2026 (UTC)[reply]
I have seen this issue before but honestly I can't reproduce it now. It's flex gone wrong but there's nothing that should be causing it particular trouble in this context.Izno (talk)17:35, 14 February 2026 (UTC)[reply]
It's been showing up like this on cellphones since a rather recent change to Wiki's look. What the new look screwed up even worse is the way edits are shown in "edit history": I don't understand anything anymore; "edit history" has become totally USELESS to me.
Back to this issue: I figured out that by flipping the phone from "portrait" to "landscape" (sorry, I'm a photographer), it fixes the problem A BIT.
Why don't coders stick to the "if it ain't broken, don't fix it" principle? Pleeeease do! Or test phone mode before releasing, at the VERY least! And remember: heaps of contributors are way past their spectacles-less years, along with all that implies. Arminden (talk) 20:11, 14 February 2026 (UTC)[reply]
This kind of issue is about as likely to be related to choices made by software engineers at the WMF as it is to be a failure of the Apple engineers. These days, the latter is more likely worth suspicion.Izno (talk)21:35, 14 February 2026 (UTC)[reply]
I’m using Vector 2022 and I have “Enable responsive mode” and “Enable limited width mode” both checked in the Preferences > Appearances > Skin preferences section, if that helps.28bytes (talk)12:11, 15 February 2026 (UTC)[reply]
The enable responsive mode doesn't apply to Vector 2022... so I am also a little confused about how you are getting this view on a mobile phone! Is there a gadget that does this? 🐸 Jdlrobson (talk) 02:53, 16 February 2026 (UTC)[reply]
I don’tthink I’ve got any unusual gadgets enabled, but I rechecked that page logged out and the glitch does not appear, so it’s certainly possible it’s something in my configuration/preferences. It still looks broken when I’m logged in. (And as best as I can recall that’s the only page I’ve seen this issue occur on.)28bytes (talk)03:08, 16 February 2026 (UTC)[reply]
I'm using a Samsung A55, so it's not an Apple thing. And I bumped into it on lots of pages, much more often on Romanian Wiki than on enWiki; I don't know why.
Related to it: my phone offers me 3 different browsers. The default browser for Google searches (can't figure out which one it is) presents Wiki pages in PC mode, so that fonts are minuscule on the phone screen, but it has the "Listen to this page" function, unlike another browser, so I'm still using it. Google Chrome has its own pros and cons, as does the third browser. Why I mention it here: the PC mode with its minuscule fonts, which I can't reset, showed up a few months ago, at about the same time as the problem flagged here, and it also introduced a tortuous editing mode (copy-paste hardly possible, etc.), all these almost simultaneous changes (not of my doing) making both using and editing Wiki A LOT harder. I guess coders tried to "fix" a system that wasn't broken. Even if it's me not being able to reset to the previous state: Wiki is for normal people, not all tech wizards but with good expertise in their field; forcing unhelpful changes on us is idiotic, as in: the opposite of user-friendly. This is not a sandbox for experiments. Arminden (talk) 11:35, 16 February 2026 (UTC)[reply]
with its minuscule fonts, which I can't reset, showed up a few months ago
this is a change in the Chrome browser. Apparently too few people were using the auto-sizing of fonts for the Chrome team (and thus most other browsers) to keep supporting that mode, and it had a few significant problems, so they chose to remove it. You thus get the font sizing that you would expect of the desktop site.
It also introduced a tortuous editing mode (copy paste hardly possible etc.)
This sounds like you enabled the syntax highlighting. You can simply disable syntax highlighting via the toolbar. For me it works most of the time, but I do also see a problem there every once in a while. None of that seems related to the problem mentioned here. —TheDJ (talk • contribs) 12:42, 16 February 2026 (UTC)[reply]
It seems like the magic word {{!}} is messing with the height of table cells in two templates I'm making (those being FNCS result/sandbox and FNCS LAN result/sandbox). See this example:
As you can see, everything works fine except that the rows are taller than necessary – they could easily be one row tall (the first result appears fine in the visual editor but has the issue in the source editor's preview). I have no idea what caused this; has anyone seen this before?Rockfighterz M (talk)23:57, 14 February 2026 (UTC)[reply]
@Rockfighterz M: There are two things to try. First, make sure that each <noinclude> tag follows on directly from the "real" template code, without any intervening spaces or newlines. Second, remove the blank lines. --Redrose64 🌹 (talk) 00:08, 15 February 2026 (UTC)[reply]
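For illustration, the layout Redrose64 recommends keeps the `<noinclude>` flush against the end of the template code, so no stray newline gets transcluded; a generic sketch, not the actual template source:

```wikitext
<!-- Good: <noinclude> abuts the table's closing "|}" directly,
     so no extra newline is transcluded with the template. -->
|}<noinclude>
{{documentation}}
</noinclude>
```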
I did both. That solved the problem for the former template, but only mitigated it for the latter. Thank you @Redrose64 for that!
The output of {{FNCS LAN result/sandbox}} can still have many blank lines when a switch doesn't produce anything but there is a newline after it. It doesn't work to simply remove the newline in the source text, because cell-starting pipes must be at the start of a line. And it doesn't work to simply move the newline inside the switch, because whitespace at the ends is stripped. I know ugly workarounds but not a pretty solution. PrimeHunter (talk) 01:40, 15 February 2026 (UTC)[reply]
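One of the ugly workarounds sometimes used is to keep the row's newline inside the switch branch but anchor it with an empty `<nowiki/>`, so the newline is no longer leading whitespace and survives the stripping. A sketch with a hypothetical parameter name and case values, not the actual template code:

```wikitext
{{#switch: {{{placement|}}}
| 1 = <nowiki/>
{{!}}-
{{!}} Victory
| #default = <!-- produce nothing at all, not even a newline -->
}}
```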
I just came across a lot of edits by a user who had changed "humourous" to "humorous" in over 50 articles that were almost all about UK/Australian/Irish/etc. topics. Some (e.g. Rickrolling) but not all had the "Use British English"/"Use Australian English" templates. The edits were done very quickly, one or two every minute, all with the same edit summary, "Correcting typos". Examples: Before (song), The Ballad of the Drover (in this case the "correction" was actually in the title of a cited source - automated editing of citations is really problematic), Dagoretti, etc.
Is there a tool for correcting titles that this user would have used? And if so, could that tool be adjusted so it's not this easy for someone who is not aware of there being different valid spellings in different Englishes? Perhaps the tool is only using a US English dictionary and the dictionary could be expanded to not correct US or UK spellings? Or it could apply a different dictionary to articles depending on which language template they have - but a lot of pages don't have any language template.Lijil (talk)12:06, 15 February 2026 (UTC)[reply]
Lijil Um, "humorous" is the correct spelling in both US and UK English. Even though the noun is spelled "humour" in UK/Commonwealth, the adjective never has the extra "u". That's the randomness of English spelling for you. You might want to revert yourself where you've changed it.Black Kite (talk)12:17, 15 February 2026 (UTC)[reply]
Indeed. The Oxford Dictionary says: "Note that although humor is the American spelling of humour, humorous is not an American form. This word is spelled the same way in both British and American English, and the spelling humourous is regarded as an error."[21] Your complaint about the title of a cited source is [22]. The article gives the reference [23], which doesn't show the part with humourous/humorous. I found both spellings in other sources about the work, so I looked for an image of the original and found [24], which says humorous. That means the editor corrected the spelling (although they may not have checked the source) and you incorrectly reverted it in [25] with a false edit summary. Please revert all your edits unless they actually quote a source which says humourous, and consult a dictionary before making mass changes of spellings in the future. And no matter how much you think your own spelling is correct, never make up a claim about what a source says. PrimeHunter (talk) 13:08, 15 February 2026 (UTC)[reply]
Yes, "humourous" is an oddity (compare "humourless", whichdoes have the "u" in UKENG) so it actually wouldn't surprise me that the incorrect spelling appears in sources, even though that's not the case here.Black Kite (talk)13:22, 15 February 2026 (UTC)[reply]
Oh no. What a humorless and deeply embarrassing thing of me to gripe about - I am so sorry! And hopefully I'll never make this particular mistake again. Luckily @MtBotany fixed my incorrect reversions, and I've apologised to the user who was actually very helpfully fixing spelling errors. Sorry everyone.Lijil (talk)21:20, 15 February 2026 (UTC)[reply]
@PrimeHunter@Anomie I noticed that the actual change from "wikitext" to "Scribunto module" showed 0 bytes of difference, so I tried undoing Anomie's edit to see what changed, but ran into a permission error. I guess it doesn't matter as long as I create the module within the Module namespace to start with. Thanks for both of your help!Greenbreen (talk)16:55, 16 February 2026 (UTC)[reply]
@Greenbreen: Yes, just create it as a module another time. The content model change is a log action seen here. Some log actions are also displayed in the page history for convenience. The developers didn't make a new type of edit for this, but just used the existing format. The edit summary is made by concatenating the English version of the automatic log description and the user-supplied log summary, and the diff is empty. The actual change is in metadata for the page. The current content model is shown in "Page information".[26] Administrators have a "(change)" link next to "Scribunto module". PrimeHunter (talk) 18:10, 16 February 2026 (UTC)[reply]
Hello, I was directed here from the Teahouse. When opening a page from the Suggested Edits on my userpage, the Quick Start Tips scroll very quickly through their numbered suggestions. I generally consider myself to be a quick reader but adding 1 second more to the timer would be beneficial. Especially for other newcomers who might not know they can manually go back through the tips individually. They do, of course, come back around over time but just a small adjustment like that would be nice. Thank you for considering this!Itsaclarinet (talk)02:51, 16 February 2026 (UTC)[reply]
Hello @Itsaclarinet, thank you for taking the time to share this feedback.
I agree that the Quick Start Tips advance too quickly. I am the Product Manager for the WMF Growth team, which develops and maintains the Suggested Edits feature, and I appreciate you calling this out. I have created a task for our team to review and discuss potential solutions (T417708). As you noted, simply increasing the display time may be a straightforward and effective improvement, but we will also consider whether other adjustments would better support newcomers.
We have also been discussing a related change (T408544): the idea is to not automatically show the Quick Start Tips for task types that a user has already completed or previously viewed. The tips would still be available from the Help panel, but they would not appear by default each time. I would be interested to hear whether that approach would feel like an improvement from your perspective.
It loads fine for me when I tested with page sizes standard and wide, alongside text sizes too (on Vector 2022 theme). What 'tool' are we talking about here? ---n✓h✓8(he/him)04:50, 16 February 2026 (UTC)[reply]
"User:Preime TH" has placed a second "Infobox settlement" under the original "Infobox settlement" in the "Provinces in Thailand" wiki articles, with information about the provincial administrative organization (PAO), without realizing that an image that was in the text to the left of the first "Infobox settlement" has been shifted down to the top of the second one. This image now appears in a completely different article section. To solve this problem with as few adjustments as possible: create a sub-template called "Infobox settlement/PAO" whose text is 100% identical to "Infobox settlement", but without the software component that prevents placement of an image above "Infobox settlement/PAO". SietsL (talk) 06:54, 16 February 2026 (UTC)[reply]
Using the default (CodeMirror) syntax highlighter, you can turn it on/off with the pencil icon in the editor toolbar (2010 wikitext editor) or the hamburger menu (2017 wikitext editor). The documentation about it being a beta feature at WP:HILITE was incorrect and I've changed it: that only applies to testing a more advanced version of CodeMirror. the wub "?!" 11:27, 16 February 2026 (UTC)[reply]
Font Size jumps while reading article - seriously annoying behaviour
I'm using a Samsung Tab 9A+, an Android tablet, with the Samsung browser. When I access any Wikipedia article, it displays in a small font, but as I read and scroll the article, the font size jumps to a larger font. I have researched this crazy-annoying problem and I have tried locking down font size in the browser, disabling browser zoom, using desktop-format pages as per the browser setting, and setting the Wikipedia "Appearance" font-size values to both small and standard.
This annoying browser behaviour is ONLY evident on Wikipedia pages - and queries to Google AI explain that Samsung/Android browser is probably "boosting" font size and Wikipedia is also dynamically adjusting font-size. I cannot disable the Samsung browser action (no option exists for this - went thru ALL options on browser, which is a variant of Chrome), and problem only evident on Wikipedia.
Google's AI says try "desktop view", which I now use exclusively. But Wikipedia still shows small-font initially, then, as I scroll to read page, font size jumps to large, and one loses reading position in the page.
Basically, I have tried almost all possible combos of browser settings, and setting the values for "Appearance" in Wikipedia - small font, standard or large - does not stop this annoying behaviour.
What I request is that Wikipedia provide a simple option to **DISABLE** dynamic adjustment of font size. A simple, easy fix, which would allow me to read articles without having the font size jump as I am reading. I looked through the tech info; no solution was located.
More info; Note, I can get this SAME annoying behaviour on one of my Linux desktop machines, running a Firefox browser. Setting the Wikipedia font to "small" after article loads, as I scroll, it will jump to "standard" and even "large", which is seriously annoying. It's like having some lunatic snatch your newspaper out of your hand while you're reading it! (And then handing it back to you with a magnifying glass...) And saying, "There you go!" Crazy annoying.
So, the problem is *not* with my browsers, or with my Android or Linux versions. Some unwise person has damaged the Wikipedia user-interface.
Fix this problem by just providing a display option to prevent Wikipedia from adjusting font-size under any circumstance, after page loads and displays.- Russel F.Rusfuture (talk)18:51, 16 February 2026 (UTC)[reply]
I was searching for software that I can use to verify newly added content in drafts against the linked sources, and thought about asking here. Do you have any such resources? Opening 4 new sources in my web browser is time-consuming, and I forget which part was verified and which was not on a given wiki page...
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Weekly highlight
The SRE Team will be performing a cleanup of Wikimedia's Etherpad instance, the web-based editor for real-time collaborative document editing. All pads will be permanently deleted after 30 April 2026 – if there are still migration projects in progress at that point, the team can revisit the date on a case-by-case basis. Please create local backups of any content you wish to keep, as deleted data cannot be recovered. This cleanup helps reduce database size and minimize infrastructure footprint. Etherpad will continue to support real-time collaboration, but long-term storage should not be expected. Additional cleanups may occur in the future without prior notice.[27]
Updates for editors
The Information Retrieval team will be launching an Android mobile app experiment that tests hybrid search capabilities which can handle both semantic and keyword queries. Improved on-platform search will enable readers to find what they're looking for directly on Wikipedia more easily. The experiment will first be launched on Greek Wikipedia in late February, followed by English, French, and Portuguese in March. Read more on the Diff blog.[28]
The Reader Growth team will run an experiment for mobile web users that adds a table of contents and automatically expands all article sections, to learn more about the navigation issues they face. The test will be available on the Arabic, Chinese, English, French, Indonesian, and Vietnamese Wikipedias.
Previously, site notices (MediaWiki:Sitenotice andMediaWiki:Anonnotice) would only render on the desktop site. Now, they will render on all platforms. Users on mobile web will now see these notices and be informed. Site administrators should be prepared to test and fix notices on mobile devices to avoid interference with articles. To opt out, interface admins can add#siteNotice { display: none; } toMediaWiki:Minerva.css.[29][30]
View all 19 community-submitted tasks that were resolved last week. For example, an issue on Special:RecentChanges has been fixed. Previously, clicking hide in the active filters caused the "view new changes since…" button to disappear, though it should have remained visible. The button now behaves as expected.[31]
Updates for technical contributors
New documentation is now available to help editors debug on-site search features. It supports troubleshooting when pages do not appear in results, when ranking seems unexpected, and when you need to inspect what content is being indexed, helping make search behavior easier to understand and analyze.Learn more.[32]