If you want to report a JavaScript error, please follow this guideline. Questions about MediaWiki in general should be posted at the MediaWiki support desk. Discussions are automatically archived after remaining inactive for 5 days.
This tends to solve most issues, including improper display of images, user preferences not loading, and old versions of pages being shown.
No, we will not use JavaScript to set focus on the search box.
This would interfere with usability, accessibility, keyboard navigation and standard forms. See task 3864. There is an accesskey property on it (defaults to accesskey="f" in English). Logged-in users can enable the "Focus the cursor in the search bar on loading the Main Page" gadget in their preferences.
No, we will not add a spell-checker, or spell-checking bot.
You can use a web browser such as Firefox, which has a spell checker. An offline spellcheck of all articles is run by Wikipedia:Typo Team/moss; human volunteers are needed to resolve potential typos.
If you changed to another skin and cannot change back, use this link.
Alternatively, you can press Tab until the "Save" button is highlighted, and press Enter. Using Mozilla Firefox also seems to solve the problem.
If an image thumbnail is not showing, try purging its image description page.
If the image is from Wikimedia Commons, you might have to purge there too. If it doesn't work, try again before doing anything else. Some ad blockers, proxies, or firewalls block URLs containing /ad/ or ending in common executable suffixes. This can cause some images or articles to not appear.
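For reference, a purge can also be triggered by appending the standard MediaWiki action parameter to the page URL; the file name below is only a placeholder:

```
https://commons.wikimedia.org/wiki/File:Example.jpg?action=purge
```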
We’re sharing an early look at an exploration into improving how people search on Wikipedia in order to gather your input. The goal is to help readers find the information they’re looking for more easily on Wikipedia itself, without needing to rely on external search engines.
One focus of this work is semantic search, a type of search that looks at the meaning of a query, not just the exact words typed, to help people find relevant information. Today, Wikipedia search relies almost entirely on keyword matching, which works well when readers know the exact article they want, but less well when they have a question or are exploring a topic and the answer sits inside an article whose title doesn’t match their keywords.
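To make the distinction concrete, here is a minimal sketch of meaning-based matching using the open-source sentence-transformers library; this is purely illustrative and is not the implementation being explored:

```python
# Illustrative only: cosine similarity between embedding vectors captures
# meaning, so a question can match a passage that shares none of its keywords.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model would do

query = "why do cats see well in the dark"
passages = [
    "Cats have excellent night vision due to the tapetum lucidum.",
    "The Toronto Marathon is an annual race held in October.",
]

q_emb = model.encode(query, convert_to_tensor=True)
p_embs = model.encode(passages, convert_to_tensor=True)
print(util.cos_sim(q_emb, p_embs))
# The first passage scores far higher, even though a keyword match on
# "see", "well", or "dark" would find nothing in it.
```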
This post outlines why we’re exploring this, what our early research shows, and what kind of small, early experiment we’re considering.
Why are we working on this?
Many readers do not start their searches on Wikipedia. Instead, they often use external search engines or AI-powered tools, which then direct them to Wikipedia – or sometimes provide answers based on Wikipedia content without sending readers to the site at all.
If Wikipedia search does not meet modern expectations, especially for question-based or exploratory searches, readers are less likely to begin or continue their journeys here and instead rely on platforms where information isn’t made by humans, and is less reliable, neutral, and complete.
In short, improving search is one way to help Wikipedia readers find and enjoy what they read on our platform.
What has our preliminary research shown?
Our early research checked whether this problem is real and whether improving search could meaningfully help readers. Our findings suggest that it could.
1. About 98% of Wikipedia reading sessions originate outside Wikipedia search.
The small group who use internal search are much more likely to be editors than casual readers. Most readers move between articles by returning to external search engines, even when links exist within Wikipedia itself.
2. Roughly 80–95% of on-wiki search sessions use autocomplete suggestions.
The preference for autocomplete suggestions – those that appear as someone types – shows that small improvements to speed can have a large impact on success.
3. Between 4% and 7% of Wikipedia search queries are phrased as questions, but these queries are less likely to succeed.
While this is a minority of searches, it shows that some readers attempt it and that many others likely avoid it because they’ve learned it doesn’t work.
What stage is this project in?
This work is currently in Phase 0, sharing early ideas, learning from research, and gathering community input.
What idea are we testing?
We’re exploring whether a hybrid search experience, one that combines keyword search with semantic search, could help readers find information more easily. The hybrid search would use machine learning, similar to how search engines rank and surface results today, to better match readers’ queries with relevant existing articles and sections.
Semantic search performs better for questions and exploratory searches, while keyword search works better for very specific or name-based queries. In early prototypes, combining the two approaches produced more useful results than either one alone.
Importantly, this exploration does not involve generating new answers or rewriting Wikipedia content. The goal is to better match readers’ queries to existing, editor-created articles and sections. Any experiment in this area would be small, limited, and designed to test whether this approach provides real value to readers.
What is the timeline?
Right now, we’re focused on discussing the problem space and sharing the findings of the report with you all. We especially want to understand if this problem space is worth learning more about. We are also trying to better understand if a simple Minimum Viable Product could be technically feasible.
Should there be alignment around further exploration, a possible next step would be a tightly constrained, time-bound A/B test with a limited group of readers, potentially beginning in February, to help answer open questions surfaced together.
What input are we looking for from you?
We’d especially appreciate your thoughts on the following:
What are your overall reactions to this exploration and the research behind it?
Are there risks or concerns you think we should be paying closer attention to?
What signals or outcomes would matter most in deciding whether a hybrid search approach is worth pursuing further?
For more details, including links to our research and early mockups, please see the project page.
Google is not a good role model here, given the amount of negative media coverage they have received in the last few years about how they are currently destroying their flagship product, in part by messing with people's search queries. Bringing this up as something Google "excels at" is actually baffling given how constant and how inescapable the negative commentary has been. (But I guess these are just "Internet pundits" and the feature isn't for them.)
Also, based on the project page, this seems to go well beyond "improving search." The mockups that claim to be "semantic search" appear to actually illustrate a "Because you liked..." recommendation feature, and a "suggested questions" widget, which surfaces AI-generated questions for no apparent reason. (For Q&A use cases, AI-generated questions can potentially introduce factual or representational bias. -- you don't say!)
@Gnomingstuff Google is good at semantic interpretation (and many other things as well). It's not as good at giving the right answer as it used to be, but that has more to do with the overall pool of garbage they have to sort through, and with them actively worsening their quality because they want to keep you coming back to their website, than with failing to keep up development of their own platform's core functionality.
On a scale of 1 through 10, they are however still at 7. If people want to focus on Google messing up the 7-10 part, then they are ignoring that WE are at 0 and not even close to their 7.
this seems to go well beyond "improving search."
I think that depends on if your definition of search is that of someone used to old-style search (a specific word-matching technology), or the actual meaning for most people in the world (finding what they are looking for). —TheDJ (talk • contribs) 14:50, 7 January 2026 (UTC)[reply]
The post here seems to only discuss "old style" search: We’re exploring whether a hybrid search experience, one that combines keyword search with semantic search, could help readers find information more easily. What the Phase 0 proposal page seems to actually be proposing are "since you liked..." and AI-generated questions widgets, with proof-of-concepts already built out.
I also guess I don't see the problem with people getting to Wikipedia pages via Google rather than internal search or blue links. They still get to the page either way, the only difference is squeezing out 2 extra pageviews. Gnomingstuff (talk) 20:21, 7 January 2026 (UTC)[reply]
I feel WMF developers should be informed somehow that any LLM-generated text placed within the Wikipedia UI is unlikely to receive a positive community reaction (whether due to anti-AI sentiment, or to cautious acceptance paired with a feeling that Wikipedia's human-written nature is what gives it a purpose distinct from ChatGPT etc.). Otherwise they will continue to spend effort on stuff that is unlikely to be accepted by the editing community.
On the core idea of semantic search, I somewhat disagree. Yesterday I was searching for some New Zealand-related topics. I typed NZ{phrase}, but it didn't come up with the correct article; although it works often enough for me to instinctively do it, stuff like this still fails for me a reasonable portion of the time. Having a search system that handles this sort of thing would be a better way of handling this rather than creating redirects for every conceivable topic. novov (talk • edits) 09:28, 8 January 2026 (UTC)[reply]
"with proof-of-concepts already built out" proof of concepts get built all the time for a variety of reasons. to spark discussion, to visualize ideas etc etc. Stop telling people what to do. —TheDJ (talk •contribs)12:00, 8 January 2026 (UTC)[reply]
...I didn't tell anyone what to do? Pointing out the contents of a page is not telling anyone what to do and I have no idea how you are twisting "proof of concepts are built out" into "I order you to do XYZ."
Gnomingstuff, "I'm pretty sure that the project page was at least edited, if not fully generated, with AI." - No, I'm pretty sure it isn't. WP:AISIGNS is not a good metric for finding AI-gen pages outside of encyclopedic articles. A lot of developers write with bolded phrases as signposts and bulleted lists since it makes it easier to skim documents. "with proof-of-concepts already built out": I can personally tell you that those prototypes could be built at speed, over an evening if required. (In fact, if I were to speculate, [1] (which is their other design) took more time to create than the AI question screenshot.)
That being said,
EBlackorby-WMF, wrt "What signals or outcomes would matter most in deciding whether a hybrid search approach is worth pursuing further?", I do want to amplify one specific criticism by @Gnomingstuff and @Mir Novov: I do think "The robot icon and AI label failed to clarify what was machine- vs human-generated and instead increased confusion." is an alarming finding that should have been a guard-rail; we cannot have "Although the questions were AI-generated, participants often assumed questions were crowdsourced or editor-curated" be a thing. The (enwiki-) community cares deeply about making sure folks understand that the encyclopedia is human-generated and organic, and having some mixing of AI-gen content/attribution of human-gen content to AI will likely be poorly received by the community.
Moving to my personal feedback, I really really like "Concept 1" (the "because you read" feature). It's something I've been missing a fair bit. I really dislike the Q/A interfaces (mostly due to the reasons that folks in the thread have outlined). I'm personally ambivalent to the ask.toolforge.org prototype, though I wonder if instead of prompting the question, the interface could silently engage the reader to page snippets? (For example, iff a user types in "Company that owns biggest browser", y'all could silently switch from using CirrusSearch to surfacing Google#Google Chrome or similar.) Sohom (talk) 14:19, 8 January 2026 (UTC)
I specifically had in mind stuff like "underscoring the need for transparent provenance labeling" and "align with Wikimedia’s infrastructure and privacy standards." As I said, it's neither here nor there; this kind of document at any workplace is more likely than not to be AI-assisted and there's no rule against it. Gnomingstuff (talk) 14:33, 8 January 2026 (UTC)
I wonder if the AIs learned to write that way from the kind of techno-corporate bureaucratese that certain types of people have been writing for a long time?Anomie⚔00:06, 9 January 2026 (UTC)[reply]
(Probably, which is why I suspect no AI usage; that language is a bit corporate-speaky, but I've seen it used in earnest where folks were not using GPT.) Sohom (talk) 10:51, 9 January 2026 (UTC)
@Sohom Datta and @Mir Novov, I appreciate your notes and acknowledge your concern around perceived mixing of AI- and human-generated content, as it's one we take very seriously as well. User testing showed that the bot labeling was confusing and would have to be completely rethought if Q&A explorations are found to be worth continuing. Right now it's clear that the current iteration of Q&A is not in a production-ready or even experiment-ready state.
However, there does seem to be interest in improvements to the search to support semantic-style queries. If this continues to be the case, we would return with a proposal for an experiment around search improvements and establish guardrails and goals together. Similarly, if there is another concept for improving wayfinding (that wouldn’t lead to confusing AI- with human-generated content) we’d return again to discuss before proceeding.
Interesting idea on silently engaging the reader to page snippets! Would this look like directly taking the user to the relevant heading instead of the top of the article? Do you have any other suggestions for areas for us to look into for information-finding?EBlackorby-WMF (talk)20:47, 9 January 2026 (UTC)[reply]
@Gnomingstuff, thanks for your feedback. It helped us realize we uploaded a still of the design concept instead of a GIF, fixed here, which may have led to confusion about this work focusing on “Because You Read”-style suggestions.
Those of us who (sadly) recall the early days of DWIM know that this can be a very ambitious project. Please accept that you will find it hard to reach the "super goal" of fully understanding what a user wants, but that you can achieve a good deal by even simple and clever means. So please have 3 or 4 goals of increasing complexity and do the simplest one first. A trivial example is keyword similarities such as fast/quick. Of course, before all else you need to improve the page Semantic search! It is in hopeless shape; the last talk comment was in 2019. Looking back, in the late 1980s I met a young investment banker who was somewhat obsessed with funding companies to do semantic search. Then around 2007 or so I happened to see him again; he had become a venture capitalist, and was trying to fund a company to do... guess what, semantic search. The LLMs do some such things, but that should be your 3rd project, I suggest. So please do proceed, but step by step. Thanks
It's an important R&D subject. One main problem users have is finding tutorials, help pages, policies and guidelines using the search. That is for example because these don't show in autocomplete and the default search results. Please also look into this wish proposal for something parallel to the conventional search as it could greatly help newcomers find the relevant tutorial and guideline pages (or other meta pages and articles/sections too).
Rather than just semantic search things, please also work on and consider things like better integration and awareness-raising about other things users could use to quickly find what they're looking for, such as the deepcategory search operator, along with awareness and skills in knowing & finding categories. Another concern is that too little attention is paid and priority given to what the real-world users of Wikipedia in real-world practice need or would like to have.
Maybe average reads per day and/or time spent on mainspace articles increasing, or reader surveys improving. Difficult question, but basically whether it's a real-world improvement to readers in practice.
Really interesting point about helping newcomer editors find what they need for help. I'll share that back with the team.
We're definitely thinking about ways to make good use of the deepcategory search operator, there's such a wealth of organized information in there that readers could benefit more from.
Average reads and time spent are good ideas to track usefulness.
Do you have any thoughts about potential design considerations to help readers/newcomer editors know the sorts of things they could ask in a search with semantic capability?EBlackorby-WMF (talk)21:35, 27 January 2026 (UTC)[reply]
Thanks! Btw, I'm thinking of maybe making a separate wish that is more specific in some ways (e.g. the application area such as only for helping newcomers instead of the myriad of listed ways this could be used) and broader in other ways (e.g. not specifying WikiChat in title).
Agree. Various things could be made possible/accessible based on it, such as making it easier to filter an existing search or having the semantic search make use of it (e.g. searching in deepcat:"Films" if the user is clearly searching for a film), but so far I've mainly just explored its practical use on Commons. One could also show a button like "Search help and meta pages" near the top that searches again for just these namespaces, because namespaces are unknown and confusing to readers – e.g. they don't know these pages are not by default included in the search results. Also of note is that help pages are often very long, with so much info that is helpful for various specific applications that readers landing there would have a hard time finding or discovering it – this is what the wish is about to a large part, but it's probably a bit too long or not clearly enough written for people to see the large potential there and the probable need for this (+ other approaches addressing this).
Design considerations: when entering into the search bar, show one stylized dropdown item like "Ask Wikimedia" which, if selected/clicked, makes the search use semantic search. At the top, show the info that as a short/quick way to use this, one can enter ? or q followed by the question in natural language. To raise awareness about this search mode, one could consider changing the default label from "Search Wikimedia" to "Search or ask Wikimedia". Then there are of course also various ways to tell newcomers that they can search for guidance/helpful info relating to their editing. In part related to that, one could make short videos that explain in bite-sized pieces how to edit Wikipedia, and start them off by entering a question into the search bar like "q how to make a video start at a certain timestamp", which would then show the relevant wiki help page section that explains how to do it. One could also show a search input box for just help, policy, meta pages etc. in the editing window (or a wikilink to a help hub page which contains this searchbox in the center). Prototyperspective (talk) 16:15, 28 January 2026 (UTC)
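A minimal sketch of the prefix routing suggested above; this is hypothetical, since neither "?" nor "q " is existing MediaWiki search syntax:

```python
# Hypothetical routing for the proposed prefix convention: a leading "?" or
# "q " switches the query to semantic mode, anything else stays keyword-based.
def route_query(raw: str) -> tuple[str, str]:
    q = raw.strip()
    if q.startswith("?"):
        return "semantic", q[1:].strip()
    if q.lower().startswith("q "):
        return "semantic", q[2:].strip()
    return "keyword", q

print(route_query("q how to make a video start at a certain timestamp"))
# -> ('semantic', 'how to make a video start at a certain timestamp')
```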
Working on a better search system is a good idea; question-based search has been my staple on Google for 20 years, and now I use LLMs. Take the question "how is gps accurate enough for surveying" (how can the same system that says I am sitting in my neighbour's living room be used with any accuracy?). Wikipedia has the answer in Surveying, in the History section, 20th century subsection. Or we have an article, Real-time kinematic positioning, that is too complicated to answer the question efficiently. Neither Surveying nor Real-time kinematic positioning appears in the top 20 results of a Wikipedia search (Toronto Marathon does though :-P). I tried answering this question more than 15 years ago on the Reference desk (I can't find it in my contribs now, kudos to anyone that can).
When someone has a question, they don't want a search to tell them "the answer may be in this 7,000-word article, have a look". Or they ask an LLM to "write me a 7,000-word passage that overviews the topic related to my question".
Wikipedia does topic overviews. Wikipedians can make it easier to find information by placing anchors in article introductions to relevant sections and making articles readable for the general population. And they can participate in a human-written Q&A system.
Side note: recently I used Google to search for perth contours (to make a map for a Wikipedia article). The entire first results page was dedicated to skin contouring clinics around Australia. After scrolling past the two results that wanted me to pay, I found the CC BY-4.0 contours by searching the gov website that was the 4th result. I am more than happy to write an answer to that question for the next person so they can avoid my terrible experience; naturally that is outside the scope of Wikipedia. Google search is vulnerable at the moment.
To answer your initial questions:
I am not surprised that 98% of visits originate outside Wikipedia search (search engines offer better results), and people using Wikipedia search have learnt that it is only good for auto-completing a topic title.
Risks: Wikipedia provides articles that overview a topic and the current search box is effective at getting you to an article. If people want to find a Wikipedia article this behaviour with auto-complete is great. Unfortunately it is not good for everything else.
I think a decent search should be pursued regardless (to be honest, I want a Google competitor that doesn't try to sell me stuff or generate erroneous answers); I am not sure what other options apart from hybrid there are. If hybrid can get me to the answer to my question above faster, that is great.
@Commander Keane, Thanks for this feedback! Yes, we think for now, hybrid search is the logical next step to explore. Keyword search definitely has its uses so we want to ensure we are building upon our existing capabilities. We’ll share more details on the initial experiment soon.
"What are your overall reactions to this exploration and the research behind it?" → Chat-style search is going to be a necessary part of Wikipedia as that becomes more and more how people expect to engage with the internet. Younger kids shout questions to Alexa or Siri without ever surfing the web.Early Wikipedia used a model that barely made use of search at all; this just feels like a logical step.
"Are there risks or concerns you think we should be paying closer attention to?" → A reader should always know upfront if any writing or content was generated by AI.
"What signals or outcomes would matter most in deciding whether a hybrid search approach is worth pursuing further?" → Do the changes result in people using Wikipedia's native search more or less? As search engines and AI software become increasingly eager to quote or even plagiarize Wikipedia, we need local solutions because without readers actually seeing the articles, none of them will ever become editors.
I liked the highlighting in the prototype. Cat is also a good example, as it's 7853 words long and much of the specific cat information is spread across other articles. For example, I was recently trying to remember what to make sure to avoid when getting floor cleaner to use around cats. The answer (phenols) is on Wikipedia but located at Disinfectant § Phenolics.
On the search results example, the results copied over from cat have their citations omitted:
Cats have excellent night vision and can see at one sixth the light level required for human vision. This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light. Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration. [...]
Have you all tested ways to show the citations in the search?
Cats have excellent night vision and can see at one sixth the light level required for human vision.[1] This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light.[2] Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration.[3] [...]
^ Ollivier, F. J.; Samuelson, D. A.; Brooks, D. E.; Lewis, P. A.; Kallberg, M. E.; Komaromy, A. M. (2004). "Comparative morphology of the Tapetum Lucidum (among selected species)". Veterinary Ophthalmology. 7 (1): 11–22. doi:10.1111/j.1463-5224.2004.00318.x. PMID 14738502. S2CID 15419778.
@Rjjiii, appreciate your thoughtful comment. To your third point, our hope is that the changes would result in more readers using Wikipedia’s native search, so that is a key area we will be testing here.
Your note about citations within surfaced results is really helpful. The linked example is a very early mockup and does not reflect any sort of final design choice, so citations will be carefully considered. In addition to highlights, is there anything else you would be interested in seeing in terms of surfacing in-article results?EBlackorby-WMF (talk)18:28, 13 January 2026 (UTC)[reply]
@EBlackorby-WMF: It makes sense to get feedback early on. One note regarding citations is that a study on how folks read Wikipedia showed that people were checking the citations in medical articles (via the popup) even when they were not going to the linked website.[2] I bring it up because I think it would affect how any testing is done. Few people follow citation links, but apparently, in context-dependent situations, some people are checking the citation itself to evaluate the type of source. Regarding "surfacing in-article results", I hesitate to suggest anything here because an important thing to make clear to the reader is what has been added by their search plus the AI tool. Highlighting has long been used by search engines, so that's intuitive. For other elements, I suspect you would need to do some kind of test where you ask readers afterward to see if they were confused or mistaken. Rjjiii (talk) 18:57, 13 January 2026 (UTC)
Yes, I believe hybrid search is worth exploring. As an editor I use site full-text search to find related and duplicate material, and I'd like to be able to run longer and more natural-language queries and still get meaningful results. I'd also like to see better snippets on search results pages. It's interesting to see this question because a work teammate and I recently implemented hybrid semantic search to improve information retrieval for users of an internal reference website. For that website's purpose, AI-generated text is not desirable/appropriate, but users needed better relevance rankings for search results. We were using PostgreSQL full-text keyword search, and we added pgvector and embeddings generated with a commercial general-purpose foundation model, using reciprocal rank fusion to blend the results. (Switching to Elasticsearch was out of budget/scope.) To figure out whether hybrid search was worthwhile for us, I made a sample list of recent real queries and realistic queries, and we did a lot of qualitative testing to figure out whether the new approach returned similar or better results for the majority of sample queries, and rarely worse results. We used a prototype where we could fiddle with the parameters, similar to https://semantic-search.wmcloud.org/. I'm no expert in any of this, but I learned a few things that may be interesting (a rough sketch of the blending and routing appears after this list):
For my site, semantic search works better for longer and more complex queries, and keyword search works better for single-word and quoted queries. So for one-word and quoted queries, we just run keyword search, and for queries longer than ~15 words, we just run semantic search.
Snippet quality and presentation have a big impact on making search results meaningful and usable; people want to see a quick preview of their keywords in context in the document. We kept a relatively traditional search results page, but started offering multiple snippets and longer snippets from each document.
It wasn't easy to figure out a good chunking strategy for our documents, but chunking made a big difference for quality of ranking and snippets, both for traditional keyword search and semantic search.
We've had difficulty getting really good semantic search ranking with our very DIY approach. Some keyword queries that did well in traditional search got worse with early versions of hybrid search because the semantic search results were not helping. So we limited the impact of the semantic element by limiting the distance threshold, and tested a lot of queries to figure out the best threshold for us. We found that the right threshold lets semantic search shine when keyword search is struggling to produce any relevant results, without muddying up keyword search results when they're quite relevant.
On the search results page, we changed the search input box a little bit - made it a multi-line entry box, with new labeling, to encourage longer queries and more natural-language queries.
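For the curious, here is a minimal sketch of the blending and routing described above, under stated assumptions: each backend returns document ids in relevance order, fusion is reciprocal rank fusion (score is the sum of 1/(k + rank) across rankings), and the cutoffs mirror the heuristics in the list. None of this is Wikimedia's, or the commenter's, actual code:

```python
# Reciprocal rank fusion: a document's score is the sum over rankings of
# 1/(k + rank). k=60 is a conventional default, not a tuned value.
def rrf(keyword_ids, semantic_ids, k=60):
    scores = {}
    for ranking in (keyword_ids, semantic_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def hybrid_search(query, keyword_search, semantic_search):
    words = query.split()
    if len(words) == 1 or query.startswith('"'):
        return keyword_search(query)    # single-word/quoted: keyword only
    if len(words) > 15:
        return semantic_search(query)   # long natural language: semantic only
    return rrf(keyword_search(query), semantic_search(query))
```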
Coming late to this but it seems worth commenting on. First, broadly, I'd agree a hybrid search has major upsides if implemented well. For me, Wikipedia is already faster and easier than finding (accurate) information on google, and I would like that to be true for as many people as possible. It feels especially important as search engines seem likely to generate less and less traffic for us, even as they use our content for AI-powered answers. To skip ahead, the most major risks are fairly self evident: don't make the search worse for current users. In gauging success, I think something like clickthrough rate is much more important than session length or "depth", because it's well known most sessions are very short and that should be taken as a sign of Wikipedia's usefulness to people in the everyday, not as an opportunity for more "stickiness" or "user engagement".
With that out of the way, I wanted to express some mild puzzlement at the motivation and logic behind this project. You ask "Why do readers find it easier to locate Wikipedia information elsewhere than on Wikipedia itself?" It's a worthy question, but while I haven't been able to comb through all the research in detail, I haven't seen anything yet that points directly toward semantic search. It's certainly true that major search engines have semantic search, which certain users probably find convenient, but there are many other differences those platforms have that could plausibly make readers prefer them. For example, if I happen to open wikipedia on my mobile browser instead of the app, the local search bar becomes inaccessible once I scroll down to read the article. I can access duckduckgo with a tiny swipe at any time, making it far faster. I don't know that this is a surmountable technical problem, but I was just a bit surprised that alternate explanations or solutions to these discrepancies didn't appear to have been considered. Similarly, the degree to which this functionality is in-demand seems rather uncertain: apparently roughly 5% of queries use natural language. To me, that indicates the vast majority of current users probably don't need semantic search, and you should be very careful not to ruin things for them with this experiment. To be sure, 5% is not nothing, but it makes me wonder if this is really mission-critical, or if someone just noticed that google has semantic search and we don't.
Again, I'm really not opposed to exploring this, the justification just seems a little weak, and when we have critical tasks starving for attention and resources, I think it's fair to ask these kinds of questions. —Rutebega (talk)08:08, 25 January 2026 (UTC)[reply]
@Rutebega "Similarly, the degree to which this functionality is in-demand seems rather uncertain": Wikimedia currently does not have a "good" way to figure out what readers want and whether building a system is going to have the readers use it. Given that we have a decline in readership specifically because AI engines keep eating our impressions, I (personally) feel like exploring possibilities to engage readers is more important, since if they go away, we might as well leave as well.
If you have alternate ideas of things that will help readers engage with Wikipedia further, feel free to share them, or if there are critical tasks starving for attention and resources that you want fixed, feel free to share them at the annual plan talk page. Sohom (talk) 12:29, 25 January 2026 (UTC)
@Sohom Datta, I wouldn't at all dispute that exploring possibilities to engage readers is more important now. I had hoped it was clear from my comment that I do support this exploration. Eliza asked for overall reactions, and I was responding to what I see as a potential opportunity for more explorations that could address the same root problems. I also wanted to give Eliza and the project team a chance to explain more about their work and rationale, since there can be a big knowledge gap and at times a sort of language barrier when it comes to communications about these kinds of initiatives, at least for someone like me with no real background in MediaWiki.
I did not want to veer off-topic by bringing up other ideas for improvements, as I don't think this discussion was soliciting those. Since you asked though, I was reminded of a very petty grievance I have that nonetheless could be investigated as related to this topic: I have always been totally mystified that Vector 2022 moved the search bar to the upper left corner of the UI, far from all the other controls. I continued using Vector 2010 for several years mostly because of this, but support for dark mode and other features eventually convinced me to put up with it. I have to imagine some UX research was done at the time, but I have a hard time imagining why anyone would prefer it this way, and it's not aligned with other sites I regularly use, where search is almost always top right or centered. Obviously this does not qualify as an issue of critical importance, but I think it's worth paying attention to how our existing UI design impacts some of the user behavior we are trying to modify, and I may try to raise that appropriately regarding the annual plan. Thanks for engaging, as always. —Rutebega (talk)19:22, 25 January 2026 (UTC)[reply]
@Rutebega Thanks for your detailed and thoughtful comment. Your question about how we arrived at semantic search as the answer to this problem is a good one! The simplest response is that based on the qualitative research we’ve done, we landed on semantic search as one possible solution to this problem among many, and one that seemed the most reasonable starting point. There are a variety of other improvements that could and should be made to the search experience on Wikipedia as well. In particular, improving the UI has a lot of potential (your sticky search bar idea is intriguing!).
User studies have shown that readers have specific expectations about Wikipedia’s search functionality, based on their experience with other platforms, that are not met by our current search. We believe at least part of this is due to the lack of semantic search capabilities on our sites.
A major pain point relayed by readers across all language wikis during the course of this research was the inability to properly find information on Wikipedia due to the lack of natural language search, as well as having to put in multiple search prompts before getting to desired information.
Research about discovery needs of in-depth readers highlights that readers prefer external search engines over WP search because the former offer advanced search capabilities (e.g. semantic search via natural language queries).
One relevant quote from our Readers Foundational Research: “Wikipedia search forces you to almost have the exact wording that is inside of the Wikipedia pages… You can't just ask a question like you do on Google and get the Wikipedia searches that are relevant to that question.”
Basically, we think the decline in direct traffic, paired with people using external sources for meaning-based queries to navigate to our content, is a strong signal that we are underperforming in this area. That said, the only way to determine if semantic capability can meaningfully move quantitative numbers is an experiment testing it with Wikipedia traffic, which is what we hope to learn in this exploration.
Google is a terrible model for a search engine; from the perspective of the user, they have been destroying the search engine for decades to please their customers. "Remember that you are not the customer, but the product." A good design goal for a user-oriented search engine is to avoid false positives. Google used to have syntax for "this must be in the candidate" and "this must not be in the candidate", but they removed the support in order to increase their hit counts. I urge that any new search engine provide an easy way to specify the following (a toy sketch of such filters appears after this list):
Must include this string as is: no changes in spacing, no changes in word order, no substitution of synonyms
Must not include this string
Nice to have: Must include a string matching this regex
Nice to have: Narrow the results of the previous string search
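A toy sketch of what such filters amount to over plain candidate text; the function and parameter names here are hypothetical, not an existing API:

```python
import re

# Hypothetical filter: 'must' strings must appear verbatim, 'must_not'
# strings must be absent, and every 'must_regex' pattern must match.
def matches(text, must=(), must_not=(), must_regex=()):
    if any(s not in text for s in must):
        return False
    if any(s in text for s in must_not):
        return False
    if any(not re.search(p, text) for p in must_regex):
        return False
    return True

docs = ["tapetum lucidum reflects light", "slit pupils focus bright light"]
print([d for d in docs if matches(d, must=["light"], must_not=["pupils"])])
# -> ['tapetum lucidum reflects light']
```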
Thanks @Chatul. The notes around specifications and scoring are really helpful, I'm passing along to our engineers for consideration. Are there any other guardrails you'd recommend keeping in mind through this work?EBlackorby-WMF (talk)21:43, 27 January 2026 (UTC)[reply]
Helping people find information on Wikipedia is a good aim. Using machine learning methods of natural language processing and semantic search is a reasonable way to approach that problem. But I would highlight, as a crucial design goal, that Wikipedia should allow people to search with natural-language querieswithout making them feel like they are using an LLM chatbot. That bar of "feel like" means avoiding even aesthetic details that might remind people of chatbots. Those little chat popup interfaces on every website are right out. I like bringing readers to a highlighted section of an article, rather than ever providing a response in text (even a response that is a quote from the article). There are probably many ways to achieve a useful interface that still reassures people that they are on Wikipedia, and not the rest of the internet, so they are getting human-written answers.~ le 🌸 valyn (talk)23:54, 26 January 2026 (UTC)[reply]
look at how archive.org uses metadata for search; e.g.: (Category:Foo)
how would I find pages with these properties: American +photography +Italian name +people +fashion
Like so: deepcategory:"American people" deepcategory:"Photography" deepcategory:"Italian-language names" deepcategory:"Fashion". I previously left a comment about the deepcategory search operator and potential/needs for improvements regarding it; categorization can also be improved (examples). Prototyperspective (talk) 13:06, 3 February 2026 (UTC)
Agree on categories. I think the research team has to specify lots of scenarios similar to what you have mentioned. Categories on Wikipedia serve little purpose because there are so many unions, all maintained by off-wiki batch processes. The major user of categories is Wikidata. Is there any evidence that readers navigate by category?
There are a few UX issues.
- As mentioned by others, it risks creating an AI layer over the Wikipedians' work (see simple summary)
- It makes Wikipedia risk feeling like Google AI (answers your spelling mistakes), or Grok (guesses what you meant), or Facebook (spooky guesses what you meant based on your searches)
- Wikipedia UX does not reward experience, exploration, or allow preference (except some skin options).
- Experience - More advanced search methods are not visible on mobile, and can only be chosen after a search on desktop. SQL/quarry, special search, key words, and regex are invisible.
- Exploration - Readers expect the left pane to have tools, separated under clear headings with expandable subheadings. There are two search options visible on the UI (search bar and left pane - portal). A search heading with the options underneath would be far more interesting, and would allow links to visualisation tools for navigation.
- Preference - If this is rolled out as the default, then there will be community and reader pushback. ASD needs preference and control of their space (see Vector 2022). I am suggesting an editor preference profile on village pump ideas. Wakelamp (talk) d[@-@]b 13:30, 3 February 2026 (UTC)
later at Help:Searching#Parameters: "The main parameters are namespace:, intitle:, insource:, incategory:, and prefix: (namespace as used here isn't literal – use the name of the actual namespace desired)."
is deepcategory a search operator/parameter like insource?
examples:
for example, find pages that link to filmreference.com (task: remove ref tags to filmreference.com)
Categories serve lots of purposes, such as finding articles and creating lists. They could be used more, and this would require some technical improvements – for example a cat-exploring module or integrating them better into the search filter options. Disagree that the major user of categories is Wikidata; that's just one way they are useful. Of course readers use categories; the question is how many, and whether more could be done to make more people aware of these and how they could use them.
What's proposed here would not be "creating an AI layer" and isn't like the simple summary. I wonder if you actually read the thread-starting comment. Moreover, even if it were, you would need to explain what you mean by that and, more importantly, where you see problems and why. Regarding Wikipedia feeling like this or that, maybe that's exactly what people want (/need). Again, more assumptions. But there, too, the search would show results just like before while also enabling this for additional results (or via a new separate search mode(?)). Also, all AIs I tested manage to deal with your typos and show you things based on a typo-corrected query.
"Wikipedia UX does not reward experience, exploration" sounds interesting, but what do you mean? Nevertheless, it seems offtopic here(?). "SQL/quarry, special search, key words, and regex are invisible": you describe a reason for why semantic search could/would be useful. Regex and SQL/Quarry are not something people can just easily enter; people can't just write some SQL and regex when searching something on the go on mobile, or even in general. Semantic search can make use of such things in simple terms (make them accessible to real people in the real world en masse). "If this is rolled out as the default": the plan isn't to replace the normal default search. I'd oppose that too.
@Piñanana: your question is a bit unclear to me but I'll try to answer it. Re "is deepcategory a search operator/parameter like insource?": as said earlier, it's a search operator (a powerful, too-unknown, underused one). It's not much like insource, if that's what you were asking. Re "I cant make: [[:Special:Se…": see Template:Search link. Re "examples": I don't know how these relate to the topic; those are use-cases for the insource search operator. Prototyperspective (talk) 15:31, 3 February 2026 (UTC)
The title is "Early Explorations Into Semantic Search: Phase 0", so I included all the search methods for comparison, as readers might prefer an advanced search similar to theInternet archive and journals. A hybrid search method with a single search input is best for looking at a new subject, while an advanced search is much better at detail.
"What's proposed here would not be "creating an AI layer" and isn't like the simple summary. I wonder if you actually read the thread starting comment. " .. : I did, but the researchers are suggesting a combination of semantic (which today means LLM AI) and lexcialexical. A hybrid search method with a single search input is best for looking at a new subject, while an advanced search is much better at detail. Editors and Dev forget, that the current advanced is specified as a default in preferences for logged in editors , while other users have to "access advanced search on Wikipedia, type any query into the search bar, press enter, and then click the "Advanced" option on the search results page to filter by namespace"
" SQL/quarry, special search, key words, and regex are invisible" see [Help:Searching]] as are the[lexical and advanced search options onpreferences
Exploration - rewarding readers through discoverability of features[3] goes hand in hand with search (greyed-out menu items were how I discovered hidden features in systems). Wikipedia stickiness is based on going down rabbit holes. This suggestion is aimed at making the experience easier, which will mean fewer clicks, while we should be aiming to make the various search options more visible to differentiate us from external search.
Glad we agree on preferences :-) But if it's on for all readers, then I am not certain what happens when they become editors
Lastly, the worst case for me is that this is brought in using Wikimedia software rather than using OpenSearch or similar. New features are liked by MediaWiki's customers, but having everything done in house increases maintenance and dev work and increases the risk (bugs, security) because it is not used elsewhere. Wakelamp (talk) d[@-@]b 07:33, 10 February 2026 (UTC)
@EBlackorby-WMF Will this fix the issue I see, where searching for "Sense and Sensibility" or "Pride and Prejudice" (without the quotes) does not bring up what's expected?
@WhatamIdoing When I click the magnifying glass next to my name, then type sense and sensibility (actually, all lower case), the first suggested result is the article Sensibility. The next suggestion starts with Sensibility. I'm not at my desktop, so I can't give much more info. I can't make and attach a screenshot easily from my tablet (I don't know how). 🙂David10244 (talk)05:35, 7 February 2026 (UTC)[reply]
Screenshot of search results
Here's a WP:WPSHOT showing what happens when I type sense and sensibility (all lower case, as you can see) into the search bar at the top of the page.
@WhatamIdoing Yes, I get different results. My list starts with "Sensibility" (pointing to the article by that name) and the second item is for the book named "Sensibility Objectified". It's very strange that we get different results. I don't have any local scripts in common.css or anything like that.David10244 (talk)04:00, 11 February 2026 (UTC)[reply]
@Matma Rex I don't know; I don't want to change my search preferences or skin at the moment (I'm about to go to bed). I have never changed my search preferences OR my skin. I'll try soon. Thanks.David10244 (talk)05:57, 13 February 2026 (UTC)[reply]
As some may be aware by now, the maintainers of archive.today (and archive.is, etc.) recently injected malicious code into all archived pages in order to perform a denial-of-service attack against a person they disliked (this can be confirmed by the instructions described here). While the malware has been removed now, it is clear that archive.today can't be trusted not to do this in the future, and for the safety of our readers, these archiving services should be swiftly removed and the websites blacklisted to prevent further use.
Absolutely not. We are dependent on external sites for archiving and verification. I have in the past lobbied WMF to acquire archive.org so it can meet our needs (or set up our own version) but unless and until that happens we have to link to external sites for verification.Hawkeye7(discuss)01:24, 5 February 2026 (UTC)[reply]
I trust them a far sight more than someone who has now verifiably used their ownership of a domain we link some 400k times to DDOS another website on the Internet.Izno (talk)03:39, 5 February 2026 (UTC)[reply]
Why insist on framing this as an all or nothing choice? Other archive sites exist and they don't have a demonstrable history of weaponising their service. Treating it as an exceptional case isn't unreasonable.
Also, even if this were an all or nothing choice (which it isn't), Wikipedia's need for citations isn't more important than the security of users. Archive.today has demonstrably abused its users' trust (including Wikipedia's editors and readers) and cannot be considered safe. –Scyrme (talk)20:46, 5 February 2026 (UTC)[reply]
Trash it completely. Archive.today has proven that it's not trustworthy as an archive source (unlike the Internet Archive) and links to it should be considered potentially malicious in nature.SilverserenC04:58, 5 February 2026 (UTC)[reply]
"Archive.today has proven that it's not trustworthy" - there are no (known) examples of its owner tampering with archived pages. "unlike the Internet Archive" - Internet Archive removes archived copies regularly.sapphaline (talk)14:56, 5 February 2026 (UTC)[reply]
there are no (known) examples of its owner tampering with archived pages: Yes there is, see above. Injecting malicious JavaScript is tampering, visible or otherwise. If they are willing to do that, who knows when they'll decide to exploit zero-days or engage in blatant manipulation.ChildrenWillListen (🐄 talk,🫘 contribs)15:03, 5 February 2026 (UTC)[reply]
My point about them being trustworthy when it comes to archived copies stands. Internet Archive is way less reliable in this regard, because archived copies can always be deleted there.sapphaline (talk)15:20, 5 February 2026 (UTC)[reply]
I'd rather have information lost than readers having to encounter malicious code whenever an archived copy is visited. Also, we know nothing about the maintainer(s) of Archive.today, how they make money, or even if they're ready to pack up their bags tomorrow and leave. They're in a jurisdiction that's politically unstable and prone to censorship. None of these problems exist with the Internet Archive. ChildrenWillListen (🐄 talk, 🫘 contribs) 15:30, 5 February 2026 (UTC)
A jurisdiction that's politically unstable and prone to censorship - you mean, like the United States? (I wish I was joking about my country in 2026) Setting that aside, we shouldn't want any information lost just like that. We need a remedying/replacement process coming before a removal process. See my main comment below.Stefen 𝕋owerHuddle •Handiwerk15:34, 5 February 2026 (UTC)[reply]
If this RFC is going to pass (which will be a very unfortunate result!), megalodon.jp archives archive.today snapshots almost perfectly (the only issue is that they're zoomed out and for some reason have a 4000px width, but this is trivially fixed by unchecking some checkboxes in devtools). Maybe WMF could arrange some deal with their operators to archive all archive.today links we have? sapphaline (talk) 15:43, 5 February 2026 (UTC)
"how they make money" - why should we care about this? "if they're ready to pack up their bags tomorrow and leave" - archive.today has existed for nearly14 years. It's a snowball chance in hell they're going to shut the site down tomorrow or in any foreseeable future. "They're in a jurisdiction that's politically unstable and prone to censorship" - how do you know? "None of these problems exist with the Internet Archive" - US isextremely prone to censorship and political unstability + Internet Archive removes archived copies on any requests, not just governmental ones.sapphaline (talk)15:35, 5 February 2026 (UTC)[reply]
It is economically infeasible to hold trillions of archived pages and provide them indefinitely for free. We don't know how they're funding their project, which means we wouldn't know when this funding would dry out.
Their willingness to inject malware over a petty dispute puts their stability in disrepute. If we get in the bad graces of these maintainers, who knows what they'll be willing to do to us?
It's fairly well-known that the maintainer(s) of Archive.today live in Russia, and that the main archive storage is also hosted in Russia. Sometimes, they redirect certain IP addresses to yandex.ru, and of course, their official Wikimedia account Rotlink was created in ruwiki.
Actually the theory is Ukraine, not Russia, and the evidence is they provision on global edge cloud providers (such as CloudFlare - but not CloudFlare). --GreenC 15:46, 7 February 2026 (UTC)
@ChildrenWillListen makes an important point. People are bringing up that WP already has 500K links to them. What if they introduce malicious code on just some of the archived pages (for instance, because targets of their malice are more likely to access those links)? Aurodea108 (talk) 01:49, 9 February 2026 (UTC)
I can't think of an explanation for this that isn't malicious. You'd think the maintainer(s) of archive services wouldn't be stupid enough to try to get a blog removed from the internet as a petty retaliation over some alleged doxxing. —DVRTed (Talk)05:33, 5 February 2026 (UTC)[reply]
I agree, they should be blacklisted. Should have happened a long time ago, really, because of massive copyright violation: they distribute lots of content that the copyright owners only made available behind paywalls. SeeWP:COPYLINK: "if you know or reasonably suspect that an external Web site is carrying a work in violation of copyright, do not link to that copy of the work". —Chrisahn (talk)11:11, 5 February 2026 (UTC)[reply]
I fully appreciate why this needs dealing with, but I am concerned about "the rub". We could end up harming verifiability on a *lot* of our content. Of course, we can leave citations in place without the archive.today links, but without the ready verification of having an article to load, I fear some useful article text could end up being removed by editors who decide they can't trust the listed source due to inaccessibility (typically those with little wiki experience). In cases where the paywalled content still exists, removal would be less likely, but in cases where the original link is permanently dead, it's not available on Archive.org, and we only have archive.today... yikes.
Deprecation makes sense as long as that doesn't include immediate removal before any replacement remedy is pursued. Any process that intervenes with using archive.today should encourage editors to directly replace these sources with archive.org links or newspaper.com clip links, or locate alternate sources. I realize this is generally what deprecation means but if the intervention can be clear and help the editor find an alternative, I would be more relieved of the ramifications of ditching this source.Stefen 𝕋owerHuddle •Handiwerk14:51, 5 February 2026 (UTC)[reply]
well-said. i support blacklisting as long as it is accompanied by an effort to find alternative solutions instead of just plain removal. ...sawyer * any/all * talk
Agree with those above arguing that archive.today is simply not trustworthy enough to be sending our readers to. Adding malicious code to cause a DDoS on another website is an absurd thing for a website maintainer to do and we shouldn't be facilitating their behaviour by sending more users to their site, nor simply hoping that they won't do something worse which targets our readers.SamWalton (talk)15:03, 5 February 2026 (UTC)[reply]
Yes, I think so. As you've said above, they can't be trusted tonot do that again in the future, so I would support blacklisting their links.Some1 (talk)01:17, 6 February 2026 (UTC)[reply]
I think there needs to be an official RfC on this to get more opinions. Personally I think this shows that archive.today can't be trusted (if they do this over something rather petty, what's stopping them from putting more malicious code into archived pages, not just the captcha?), and it should be at least deprecated - but only if the links can be replaced with a different archive without loss of information.Suntooooth, it/he (talk |contribs)20:15, 5 February 2026 (UTC)[reply]
They blatantly violatedWikipedia:External links#EL3, but you think we need to have a long discussion about whether malware-serving websites are sometimes okay?
If we're going to have an RFC, let's blacklist now and focus the RFC discussion on how to cope rather than whether we should provide links to malware-serving websites.WhatamIdoing (talk)22:16, 5 February 2026 (UTC)[reply]
I think that formally gaining consensus is important when it affects as many links as this does, especially since even in this thread it hasn't been unanimous. If this affected a much lower number of links (think a couple of orders of magnitude lower) and links that would be easily replaced or removed, then I wouldn't be suggesting a full RfC.Suntooooth, it/he (talk |contribs)00:40, 6 February 2026 (UTC)[reply]
I wonder if there's a way to add a warning in the articles. Something like[replace archive link] (and a category showing affected articles) might encourage people to start the process of finding other sources. It might be possible to do this automagically through the CS1|2 templates. I'm assuming that would catch most of them.WhatamIdoing (talk)02:44, 6 February 2026 (UTC)[reply]
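For concreteness, a rough sketch of what such CS1|2 output could look like; the link target and category name here are invented for illustration only:

<!-- hypothetical rendering appended to affected citations; names invented -->
<sup>[[Wikipedia:Link rot|replace archive link]]</sup>[[Category:CS1 maint: archive.today links]]

The superscripted note would appear next to the citation, and the hidden category would populate the tracking list mentioned above.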
It is in the realm of feasible just to turn off, in CS1/2, the display of archive links that go via archive.today/archive.is. The real question is whether we can get everything or whether we just have to start off by vanishing the big quantity of links. Izno (talk) 03:59, 6 February 2026 (UTC)[reply]
Removing archive links (even by just turning them off, rather than fully removing them) from this number of articles would be a huge hit to verifiability. If consensus is gained to remove archive.today links, there needs to be a mechanism for replacing them with other archives. Suntooooth, it/he (talk | contribs) 16:05, 6 February 2026 (UTC)[reply]
"would be a huge hit to verifiability" – I think turning their display off is a fair compromise on the road to removal and replacement. I do agree that half a million pages or links is a big number. A maintenance category would naturally be set up so we can actually find these quicker.
Good idea. Also, considering the RfC above, isn't it possible that many of the archive.today links on the encyclopedia aren't actually necessary? As in, they were added superfluously by the website operators themselves? Perhaps the true scale of the problem is much smaller, and we could vibe code a quick tool to check some of the links. audiodude (talk) 04:50, 8 February 2026 (UTC)[reply]
That's not a problem. We won't remove them (at least not for a while). Blacklisting means that no new links to these domains can be added. It doesn't mean existing links have to be removed. —Chrisahn (talk) 22:29, 6 February 2026 (UTC)[reply]
Every day, links are added because there is no other option. They literally are the only source for a large set of web pages on the Internet. This is why there are so many links. It's the only option. It is pragmatic. You and some others appear concerned about what is best for Wikipedia, but you don't seem concerned about the consequences, which are very real, immediate and large-scale – it would cause significant damage to Wikipedia. Unlike the good feelings about punishing Archive.today for some transgression. What is more important? --GreenC 16:04, 7 February 2026 (UTC)[reply]
I've added archive.org URLs to lots of articles. In case a page hasn't been archived by them yet, I click "Save Page Now". I don't recall any significant problems, and I don't recall a URL that couldn't be archived. I'd say such URLs are pretty rare. —Chrisahn (talk)22:50, 7 February 2026 (UTC)[reply]
Agree that there should be an RfC. The implications of the discussion and potential actions taken by consensus will have far-reaching effects across the encyclopedia. Additional comment as a technical editor, not one who edits a lot of articles: if archive.today provides a copy of a paywalled or linkrotted news article, but the article was actually published by the news organization in question at some point, what does it matter if the archived copy isn't available? The citations are still technically valid, right? Does Wikipedia remove citations to books that are out of print? Does information exist if it's not on the internet lol? audiodude (talk) 04:47, 8 February 2026 (UTC)[reply]
Yes, you're right that a Wikipedia:Convenience link (to the original and/or an archive) is not required, if the news article is archived in some place that is accessible to the general public. For example, it's traditional for ordinary print newspapers to keep a copy of all their old newspapers, and many will either let the general public take a look or send the older ones to a local library or historical society. However, not all publications have a print edition, and some news outlets put more information/additional articles on their website. I have, for example, been disappointed that the paper copy of The Atlantic has fewer articles than their website. A web-only source needs a working URL, because sources must be WP:Published#Accessible. WhatamIdoing (talk) 06:04, 8 February 2026 (UTC)[reply]
Has anyone linked to the circus that occurred when archive.today first appeared? As I recall, they used extremely advanced (for the time) techniques to attack Wikipedia by edit-warring their links into pages. The views that we have to keep using them miss the big picture: these guys are obviously up to something bad. The infrastructure and operational maintenance to support their system would cost a vast amount, and someone is planning to get a return on that investment eventually. It's much more effort than some libertarian philanthropist would support. Johnuniq (talk) 02:47, 6 February 2026 (UTC)[reply]
Technical question: How would blacklisting work? If I understand correctly, the idea is that blacklisting prohibits adding new archive.today (and archive.is etc.) links, but we'll keep the existing ones for now. Specifically: If I edit an article and try to add a new archive.today link, I get an error message and can't save my changes. But if I edit an article (or section) that already contains one or more archive.today links and I make unrelated changes, there's no such error message. Is that correct? Can we make that work? A "dumb" edit filter (that simply checks whether such links occur anywhere in the text I'm trying to save) won't work – it won't let me save unrelated changes. I can think of a few ways to implement a smarter filter, but I don't know if edit filters have access to the required information, or how efficient smarter checks would be. —Chrisahn (talk)09:18, 6 February 2026 (UTC)[reply]
@Chrisahn Yes, this is how the built-in tools like MediaWiki:Spam-blacklist already work, and edit filters can also be made to work that way. They forbid adding new links to blacklisted domains, but if a link is already present in the article, it can be edited without tripping the blacklist. There are still some scenarios that cause problems (e.g. if vandalism deletes a citation that links to archive.today, you won't be able to revert it without removing those links first), but that hasn't stopped additions to the blacklist before. Matma Rex talk 14:41, 6 February 2026 (UTC)[reply]
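To illustrate, an edit-filter condition along these lines (a sketch only, using the standard added_links and removed_links filter variables; the exact domain list would need checking) would fire on newly introduced archive.today links while ignoring ones already in the page:

/* Hypothetical filter: trip only when archive.today links are added */
added_links irlike "archive\.(today|is|ph|md|li)" &
!(removed_links irlike "archive\.(today|is|ph|md|li)")

The second clause also exempts edits that merely modify an existing archive.today URL, since the old URL then shows up in removed_links; as noted above, a plain revert that re-adds a deleted citation would still trip it.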
archive.today is just a very useful website that can be used if archive.org is not helping.
The operators of archive.today are not as reliable as those who run archive.org. However, we don't know of any case where a snapshot was falsified, do we?
In this case, they "just" abused their visitors for a DDoS attack. Of course we should not support this. But it does not mean we definitely have to block the website.
By the way, blacklisting (via WP:SBL) without first removing all links is not a good option, because it leads to several problems:
Moving parts of pages to other pages (e.g. archiving) is no longer possible if the moved text contains a blacklisted link.
Modifying an existing blacklisted URL (e.g. link fixing) might trigger the SBL.
It's not possible to add blacklisted links to a discussion, which is challenging for some less technical users.
In my opinion a technical solution could be:
replace all links with an (unsubstituted) template (yes, this is a lot of work, but it could be partially automated);
if any problem with the domain occurs again, modify the template so that it no longer links to archive.today (and .is and all the other domains);
when the problem is solved, revert the template change;
if anybody adds a link to archive.today without the template, a bot could try to fix that afterwards, and the bot could write a message on the linker's talk page asking them to check whether they can find something better.
With a solution like this we would still have the benefits of the archived versions, but we could remove all links fast and at once, if needed.
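A minimal sketch of such a wrapper, with all names hypothetical: the template builds the link from a snapshot ID and consults a one-word switch subpage that can be flipped to disable every transclusion at once.

<!-- Template:Archive today link (hypothetical); {{Archive today link/switch}} contains just the word "on" -->
{{#ifeq: {{Archive today link/switch}} | on
| [https://archive.today/{{{id|}}} {{{title|archived copy}}}]
| ''(archive.today link disabled)''
}}

Editing the single /switch subpage would then turn every link on or off at once, without touching the articles.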
A couple hundred thousand bot edits is not a good solution, either.
Trappist the monk might have some ideas about whether the citation templates could special-case these domain names for a while, while the work is done. A maintenance category, as Izno mentioned above, would also be a good idea. And even if we don't want to use the MediaWiki:Spam-blacklist quite yet, for fear that it will interfere with rearranging pages, we could implement a Special:AbuseFilter that would prevent people from adding any new ones. WhatamIdoing (talk) 21:33, 6 February 2026 (UTC)[reply]
I still advocate going back to WMF with a proposal to create our own archive. This will get us off dependence on external archive sites that we cannot control. Hawkeye7 (discuss) 22:25, 6 February 2026 (UTC)[reply]
Setting up an internet archive requires years of planning and work. I'd like us to start making tangible progress on resolving this problem today, or at least in the next week. Even if we thought that was legal and a good idea, creating our own archive isn't going to address the problem right now. WhatamIdoing (talk) 23:24, 6 February 2026 (UTC)[reply]
cs1|2 can special case archive.today (and companion domains) if/when there is a consensus to deprecate/blacklist.
One major problem with the edit filter (and SBL/BED) is that many inexperienced people who trigger a rule just don't know what that means or what they should do. We often see that people who wrote large paragraphs and failed on the first try to save just run away, although the warning said that if they are sure about what they are doing, they should just try to save again.
The filter (and SBL/BED) should be used when people intentionally (try to) spam. If they actually just want to help, then there's a risk of annoying/frustrating them. That's why, over time, I tend more and more to use notification bots and maintenance lists instead of the blacklist-like tools in cases where links are mostly added by non-spammers.
Might be concerning, but that's "two people have been bad people" and each should be judged on their own merit accordingly. You don't treat someone DDoSing another person off the Internet as a stable individual meriting half a million links from the most popular source of collated information on the Internet (and that's ignoring the prior dramas, as linked above). Izno (talk) 21:37, 6 February 2026 (UTC)[reply]
Doxing? Hardly. Quote: "While we may not have a face and a name, at this point we have a pretty good idea of how the site is run: it's a one-person labor of love, operated by a Russian of considerable talent and access to Europe."[4] —Chrisahn (talk) 22:40, 6 February 2026 (UTC)[reply]
Another aspect: depending on how the FBI case against archive.today goes, there's a chance that these ca. 500,000 archive links in our articles will become useless in the not too distant future. —Chrisahn (talk) 01:05, 7 February 2026 (UTC)[reply]
Prior to about 2015, the Wayback Machine did not systematically archive all links on Wikipedia. There are huge gaps prior to that date. Between 2012(?) and 2015, Archive.today systematically archived Wikipedia. Thus many dead links are only archived on Archive.today. The one time Archive.today got blacklisted a long time ago, it didn't last long. People reversed it. Why? Because Archive.today is incredibly useful. It's that simple. It's pragmatic. They have the goods nobody else does. This incident with the CAPTCHA will soon be forgotten as inconsequential to Wikipedia. But blocking Archive.today will cause daily conflict with editors who need to use it because there is no other option. --GreenC 17:11, 7 February 2026 (UTC)[reply]
Your wish to punish Archive.today over this silly incident (which they undid) would cause widespread and deep collateral damage to Wikipedia. --GreenC 18:32, 7 February 2026 (UTC)[reply]
I think that would depend on how it's implemented. First, just to remind everyone, WP:Glossary#verifiable means someone can find a reliable source. It does not mean that the Wikipedia article already has a little blue clicky number (that's WP:Glossary#cited) or that the ref contains a functional URL. This means that if the Wikipedia article says "The Sun is really big", and there's no cited source, or the cited source is a dead URL, then that sentence is still verifiable, because an editor (or reader) could look up Alice Expert's book, The Sun is Really Big, and learn that the material in the Wikipedia article matches the material published in at least one reliable source. Removing archive links therefore doesn't (usually) destroy verifiability (unless that was the only source in the world that ever said that, and the original is a dead URL – in which case, are we really sure we should be saying that now?); it just makes verifying the information take more work.
Having looked at a too-small sample size (= 4 articles) with these links, I think that some of these links are unnecessary and others deserve a {{better source needed}} tag no matter what the archive status is. I therefore think that checking and replacing sources might be a good thing, overall. WhatamIdoing (talk) 19:20, 7 February 2026 (UTC)[reply]
A citation to a book is always verifiable. So are the NYT and other news outlets. That leaves everything else, which is online-only – and that's most of it. Without an archive, a dead website is unverifiable. Maybe wait 10 years for an archive to surface, but eventually it's gone. You might find other sources, but who is going to do that for half a million links? Certainly not the few people engaged in these conversations. Most people don't even verify sources, much less try to replace them with other sources. People are busy creating new citations with future dead links that nobody fixes. The debt continues to grow, and one of our best tools for dealing with it is now being threatened with removal. --GreenC 19:44, 7 February 2026 (UTC)[reply]
Please look at the definitions I linked. We don't care whether "a dead website is unverifiable". (It's really none of our business whether people can double-check that some other website's content was taken from a reliable source vs. being an original work.)
We care whether the content in the Wikipedia article is verifiable – and we care whether it's verifiable in any reliable source, not just the cited one.
Yes, you're right: half a million sources is a problem, and the debt continues to grow. To stop the bleeding, I think we should deprecate/discourage future additions of this source. To get the existing ones checked, I think we should have a tracking category, and maybe even a way to make this a more mobile-friendly and/or newcomer-friendly task. Based on my experience the other day, we're looking at about five minutes per source. Also based on my experience the other day, half the sources are unreliable ones anyway (at least for medical content). WhatamIdoing (talk) 19:55, 7 February 2026 (UTC)[reply]
If Archive.today actually goes offline, then we have another problem. But treating it like it's already offline by adding {{dead link}} templates is backwards, since we don't know the future. The assumption that there are alternatives to Archive.today is a mistake. Most Archive.today links are added because Wayback can't do it. There are really only two games in town, and we are eliminating one. And you can't go back and fix it either: you save the web page before it dies, or it's gone forever. Archive.today has a monopoly on many archived pages, and many citations are the only game in town; there are no better sources. Most people don't read these forums, but if you start blocking or hiding links, there will be many editors complaining. It's a major resource for our community that has a large following. Nobody has really been notified about the RfC. --GreenC 21:00, 7 February 2026 (UTC)[reply]
I think it would be useful to see lists of articles that do not include any image, maybe with a column for the linked Commons category if it exists and a column for the image(s) set on the Wikidata item if there are any. The articles could be those in a category, or especially some WikiProject list like Wikipedia:WikiProject Climate change/Popular articles or Category:High-importance science articles (the corresponding articles, not the talk pages). I think it's not unlikely that there is some way to do this.
Asking this in the context of c:Commons:List of science-related free media gaps – this could be useful not just for adding images where a useful, relevant, high-quality one exists for the article, but also for identifying media gaps.
You could find articles that have been tagged with Template:Image requested, but I'm not aware of any way to look for untagged articles. https://pagepile.toolforge.org/ will let you define a list of target pages, and that list can be used by other tools for various purposes, but, again, I'm not aware of any tool that would import such a list and identify missing images.
Images are one of the key things that readers want to find in a Wikipedia article. It would be nice to have more emphasis on finding and adding appropriate images. WhatamIdoing (talk) 23:59, 5 February 2026 (UTC)[reply]
Good idea – that method shows 191 pages in this query, which is something one can start with.
A way to list articles without images would probably show far more results, would be more dynamic, and could be useful in more ways. It would not rely on users adding that template, which is done relatively rarely. Additionally, having that template doesn't mean the article lacks even an image illustrating the main subject, let alone that it is entirely lacking images (which also implies there is no image for the article in the page-preview hovercard and in the Wikipedia app).
Agree with what you said there. Also of note: only very few users know of, see, and click the Commons category linked from an article – there are often high-quality files there, but pageview stats show that few go to these pages. After creating many Commons categories, I found that most of them, over a year later, weren't even linked via the small, often overlooked {{Commons category}} somewhere in the article. One can often find images in categories that have been there for years, but nobody ever added them to the article, including articles without even one image. Prototyperspective (talk) 00:26, 6 February 2026 (UTC)[reply]
Regarding the search link that checks for templates instead of the categories: don't know why it only shows 70 results instead of all 191.
Regarding the search link that checks via the two categories: I've looked into it further and excluded all articles that are biographies or films. Now it contains just 58 items instead of 191, and most of these are niche low-importance articles where I can't see how an image would be very useful, or they already have an image for the article's topic (as in the case of Gypsum concrete). I nevertheless added the search query to the media gaps page.
"deepcat searches sometimes time out for me" – this happens for deeply nested categories, which is why it won't really work for Category:Science currently. This may also be an issue here because not all relevant articles in that category branch have been tagged with the WikiProject template yet. Additionally, it doesn't look like one can scan for an article being in one category while its associated talk page is in another. This would be useful because the WikiProject category is only set on the talk page. There are also ways to scan for articles in a category branch that don't yet have the WikiProject template, but it's complicated and I guess barely anybody uses that (a tool for that would be great, btw). Prototyperspective (talk) 15:55, 6 February 2026 (UTC)[reply]
Petscan gets about 95% of the way there – you can ask for pages in a category that don't have "a lead image", which I think is the single image returned in the API. Pages with no images will presumably also have no lead image. Andrew Gray (talk) 09:56, 6 February 2026 (UTC)[reply]
Following up on this – it seems "lead image" is defined from mw:Extension:PageImages and is a) one of the first four images in the lead section, which b) has a certain range of ratios and c) is not explicitly excluded. So it is possible to have an article with images that nonetheless shows up as no-image here. But having said that...
It doesn't seem to be possible to do this in one step starting with a talkpage category (like importance tags), but it is possible in two steps via PagePile.
Interesting, thanks, I didn't know about the petscan feature to only show articles without lead image.
I tried to run it on Category:Science, but that's not possible because that cat has too many subcategories; also, when limiting it to e.g. just 3 layers, it (query) shows too many results (>60,000).
I first thought maybe the approach of that petscan filter isn't really adequate, as it also shows articles with images, even lots of images – but looking more closely, I'm not so sure anymore: e.g. Agricultural science is listed, but its infobox image does not illustrate agricultural science; Artificial intelligence is listed despite having many images, but it does not have an image at the top such as a diagram explaining AI types and/or how AI works. Articles like Anthropology also miss an image that illustrates the subject well. So maybe the issue is not with the methods but simply that there are soooo many articles missing images (I think the community hasn't really begun to systematically address this).
What would be the best ways to address this, taking into account these issues: prioritizing articles that lack images, using only other methods that check whether there is any image at all in the article², somehow further filtering the petscan, or somehow extracting fields or large-order topics lacking images?
² Here's one additional way to check if there's any image whatsoever (or animation or video or audio) in an article: deepcategory:Science -insource:"[[File:" (82,511 articles, with incomplete results). Note: this query also shows articles with an image in the infobox, so these would also need to be excluded somehow (maybe via filtering out things like .png?). One could combine this with incategory:"Commons category link is on Wikidata" to see just articles with no image but a Commons category (2,086, so this one seems quite actionable).
"Pages with no lead image linked from Wikipedia:WikiProject Climate change/Popular articles (137/1000)" – Nice query; this one seems quite actionable as well. I'll probably link that on the science-related media gaps page too and will look for other similar WikiProject pages for which to create such a query, and maybe extract some topics in need of illustration/images (note that an article with lots of images illustrating the various subtopics may not be missing an image much, even when there is no lead image and ideally we'd like to have one).
"interesting query which I've found on Quarry and tweaked … identifies six "top/high" importance Science articles with no image links" – weird that it only shows 6 items. So it seems like currently this query is not useful, but maybe it can be tweaked further until it is. The description says "that have no images of any sort (not even those from templates like {{unreferenced}})", so that seems to be the cause here; maybe one could exclude images in such templates, but I also wonder why Research statement shows up despite there being several PDF document icon images on the page(?)
The pdf icons aren't added by image links – in a template or otherwise – but by a CSS class. They're not detectable with queries against the database even if we wanted to (other than by searching for external links ending in ".pdf", which isn't practical). Excluding images included by templates isn't possible either. We've been asking the developers for an equivalent for simple links in WhatLinksHere for over two decades. And it wouldn't help anyway, since it would also exclude images in infoboxes. What would help is a list of specific files to ignore, like {{unreferenced}}'s File:Question book-new.svg. Or I can write queries for non-free/non-existent lead images by talkpage categories/wikiproject ratings/etc. Asking at WP:RAQ is the best way for such requests not to get lost; my free time and attention are very limited this time of year. —Cryptic 18:59, 8 February 2026 (UTC)[reply]
"For mid to large Wikipedias, shorter articles are less likely to have an image" – Is there any code issue, wish, or project page about enabling a functional Quarry query for seeing articles without any images, via some list that specifies common icons used in templates (like the CC BY icon etc.)?
So images in infoboxes are taken into account (in imagelinks) in that query? (If they aren't, maybe one could take the results from the query and enter them into a second tool that checks for images in templates.) "queries for non-free/non-existent lead images" … that's a bit confusing to me – weren't you talking about the Quarry query earlier, which doesn't check only for lead images but for any images in the article? I would find that again more useful than scanning just for articles without lead images.
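For anyone curious, here is a sketch of the kind of Quarry query Cryptic describes for the lead-image variant (the category name is only an example; PageImages records a free lead image in page_props under 'page_image_free'):

-- Articles whose talk page is in a rating category and which have no free lead image
SELECT art.page_title
FROM page AS talk
JOIN categorylinks ON cl_from = talk.page_id
  AND cl_to = 'High-importance_science_articles'
JOIN page AS art ON art.page_namespace = 0
  AND art.page_title = talk.page_title
LEFT JOIN page_props ON pp_page = art.page_id
  AND pp_propname = 'page_image_free'
WHERE talk.page_namespace = 1
  AND pp_page IS NULL;

An any-image variant could left-join imagelinks instead, at the cost of counting icons added by templates – which is exactly the problem discussed above.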
Suggestion Mode is a new Beta Feature for the VisualEditor that proactively suggests actions that people can consider taking to improve Wikipedia articles, such as "add citation", "improve tone", or "fix an ambiguous link". The feature is locally configurable, and can be locally expanded. It will be available here as a new Beta Feature on Tuesday (and thus, in practical terms, only visible to experienced editors to begin with).
The goal of this limited early release is for us to work together to:
Identify what issues and improvements need to be addressed before evaluating the impact of the feature on newcomers through a controlled experiment.
Generate ideas for new suggestions you think would be worthwhile to implement. More on this below.
The feature is closely related to the existing Edit Check feature, which shows actionable feedback to newcomers as they edit, and shares many configuration details with it.
Why Suggestion Mode?
Suggestion Mode is meant to benefit two audiences:
[Primary] Newcomers who are eager to edit and struggle with how to start doing so constructively; the feature also encourages them to explore the policies and guidelines.
[Secondary] Experienced editors seeking easier ways to find out what might need fixing, and to assemble the context needed to decide whether and how to act.
Note: volunteers have helpfully created many tools/gadgets/user scripts to help with the above.[5][6] Suggestion Mode seeks to make the functionality these tools offer easier to access for more people and in more languages.
How it works
When an editor who has the Beta Feature enabled opens an article with VisualEditor, if there are any of the available types of suggestion within the article content, then one or more suggestion cards will be shown alongside. Each card contains a description of the potential problem, a link to the policy or guideline the suggestion is based on, a button to start resolving the problem, and a way to provide feedback about the suggestion itself. You can see some examples and the feedback flow below. See mw:VisualEditor/Suggestion Mode#Design for more examples.
The team has started with an initial set of suggestions to demonstrate the concept. They are derived from existing tools, policies, and content guidelines. We're very interested in your recommendations for additional types of suggestions, to add to the growing list in T360489. The complete list of initial suggestions can be seen at Special:EditChecks. You can test the feature immediately via this user script.
Local configuration
The aspects of a suggestion that are community-configurable will vary on a case-by-case basis. They can be configured by admins at MediaWiki:Editcheck-config.json, to enable/disable individual suggestion types and control parameters for each type (e.g. the categories and sections it should/should not be shown within, the cumulative edits someone must have made to see a suggestion, etc.). The listing of available parameters is at mw:Edit check/Configuration. In particular, the textMatch suggestion type is a relatively simple system that finds words or phrases within the text, and suggests either replacing, deleting, or thinking about the text (along with a contextual guidance link). That sub-feature is easily expanded/adapted in any way you wish. In the future, we hope to support regex for these suggestions.
Known issues
The team is currently working on: ✓ adding the ability to include links within the text-match types of Suggestions (e.g. the "English variant specified" type will link to MOS:RETAIN next week) (T416511); ✓ adding the editsuggestion-visible tag to monitor edits that are made when any Suggestions have been seen (T413419); adding the ability to see the specific suggestions someone acted on within a given edit session (T416535); improving the feedback flow to be more streamlined (T401739); and adding the ability to toggle the visibility of the Suggestions cards entirely (T415589).
Get involved
For now, Suggestion Mode will be available as a Beta Feature, in order to collect your recommendations for: changes to the default "descriptions" (both the wording and the links), feedback on the individual suggestions and their results, and requests/ideas for further types of suggestions. The team and some volunteers have been experimenting with the checks for the last few weeks, plus discussing the tool in Discord and Phabricator, and the team has fixed a number of issues, but we need your help finding more ways to improve this feature. We also hope you will have additional ideas for new types of suggestions, which can either be implemented entirely locally as text-match suggestions, or requested for developer assistance in making more complex suggestions; there is a listing of existing suggestions in T360489. Please share your thoughts on the feature either here or at mw:Talk:VisualEditor/Suggestion Mode, and use the built-in feedback system to share any details about problems with specific suggestions. Much thanks, Quiddity (WMF) (talk) 00:29, 6 February 2026 (UTC)[reply]
Agree, I think so too. This could be quite useful for the community. What I don't like here is that it's just for the VisualEditor and not the wikitext editor (albeit this feature is probably most useful for newish editors, and not so much for active editors who are already overburdened with tasks and don't benefit much from further ones – those are probably mostly using the wikitext editor, but I could be wrong about that). Prototyperspective (talk) 15:28, 6 February 2026 (UTC)[reply]
Yeah, the problem is that you need two completely different systems: one for finding suggestions to make, and one for applying them to the document. VisualEditor is easier to do, because it offloads the whole "you must parse and modify wikitext without breaking it" part to Parsoid, and lets us work with something that's already got some level of semantic meaning applied to it.
We actually could reuse this, kind of, by (essentially) running VisualEditor in the background, and having your wikitext sent into the API and parsed, working out what suggestions there are, then asking the API again to tell us what ranges in the wikitext source they correspond to. Then doing similar things when you want to take an action in response to a suggestion, etc. It'd be painful and slower, as you might imagine. DLynch (WMF) (talk) 17:55, 9 February 2026 (UTC)[reply]
This feature is now available. You can enable it at Special:Preferences#mw-prefsection-betafeatures. (If you have previously selected the preference for "Automatically opt-in to new Beta Features", then you still need to open your Preferences page once, in order to enable any new type of Beta Feature.)
Please do share your thoughts and feedback (and especially your ideas for other types of Suggestion that could be implemented, either by the team or by yourselves locally via textMatch, that might be helpful for newcomers to act on and learn from) so that we can continue to improve it for you. Thanks. Quiddity (WMF) (talk) 18:52, 11 February 2026 (UTC)[reply]
Can someone with the relevant permissions and technical knowledge please revert Template:GeoTemplate back to a state where English-language GeoHack works? The error seems to have been introduced yesterday. Tæppa (talk) 13:28, 6 February 2026 (UTC)[reply]
Pinging @Trappist the monk:, who edited w:Module:Lang the day before (some of) the GeoHack language pages broke. My original thought was that the use of <br />{{lang|ar|خَرائط فلسطين المَفتوحة|rtl=yes}} broke the page, but <br />{{lang|he|עמוד ענן|rtl=yes}} has been there a while.
The change I made at Module:Lang was for {{transliteration}}. At this writing, there are ten {{lang}} templates in {{GeoTemplate}}. All ten are used for presentation and have nothing to do with the &language= query portion of the GeoHack url.
For the record, all of the above links (Arabic through Russian) are now "broken", so it's probably nothing to do with the templates that the Wikipedias have for each language. Tæppa (talk) 22:02, 8 February 2026 (UTC)[reply]
As someone who reads a lot of geographic articles, this GeoHack glitch has been annoying for the last couple of days. I think the correct place to file a bug report is here. In case it helps to debug: I notice that, upon opening GeoHack, the table of links to Google Maps, OSM, and other services very briefly displays but disappears after a split second. If I use Firefox's "reader view", all the links are visible. HTH. ~2026-87494-1 (talk) 01:49, 9 February 2026 (UTC)[reply]
The reason GeoHack is broken seems to be related to this change: 1234316. GeoHack was depending on HTML comments to find the main content. These HTML comments are removed in the latest MediaWiki update. --wimmel (talk) 08:57, 10 February 2026 (UTC)[reply]
Crosspost (sort of) from Template talk:Pounds, shillings, and pence#Alternative coding: I've been looking for something that lets me output non-decimalised numbers. So far I've been using {{GBP|10 8s 9d}} to produce £10 8s 9d, for example, but leaving what is effectively text inside the curly brackets doesn't feel right and might break downstream applications, e.g. inflation calculators.
I suppose the ideal outcome might be {{GBP|x|y|z|nd}}, with numbers in place of x, y, z, and "nd" for "non-decimalised". Empty y or z values would return a dash, e.g. £x/-/z or £x/y/-, and an empty x value would skip the "£" symbol as well, e.g. y/z, y/- or -/z. Of course, values y and z would also need to be capped, with any excess being carried to the prior column, e.g. 30 shillings would be displayed as £1/10/-, unless some extra term like |abbr=on was included?
Critically, the template system cannot require typing the £ symbol, because not everyone has that on their keyboard. Also, whatever system ends up working for this should be copyable to other areas, e.g. Australia, where the three-piece currency system used to apply. ({{AUD|}} doesn't currently support £sd but it might one day.)
Also, after looking at the code for {{£sd}} and {{GBP}}, it would probably be much more straightforward and cleaner to implement this as a new template rather than modify the existing ones. (And if you want the inflation calculator to work, that would probably require converting the input to decimal then converting it back. It doesn't look like the inflation calculator is set up to take nondecimal inputs.) – Scyrme (talk) 23:21, 9 February 2026 (UTC)[reply]
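For what it's worth, the carrying logic itself is simple with ParserFunctions: convert everything to pence, then divide back out. A sketch with invented positional parameters ({{{1}}} pounds, {{{2}}} shillings, {{{3}}} pence), leaving aside the dash-for-zero and |abbr= refinements:

<!-- hypothetical Template:£sd format: {{£sd format|pounds|shillings|pence}}; 240 pence per pound, 12 per shilling -->
£{{#expr: floor( ({{{1|0}}} * 240 + {{{2|0}}} * 12 + {{{3|0}}}) / 240 ) }}/{{#expr: floor( (({{{1|0}}} * 240 + {{{2|0}}} * 12 + {{{3|0}}}) mod 240) / 12 ) }}/{{#expr: ({{{1|0}}} * 240 + {{{2|0}}} * 12 + {{{3|0}}}) mod 12 }}

With this, an input of 30 shillings would come out as £1/10/0, matching the capping behaviour described above.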
I've since swapped over from {{GBP}} to {{Australian pound}} for my articles, which I didn't know about before and which seems to have solved my problem. My original concern was that I wanted to keep all numbers related to specific figures within a set of curly brackets as a matter of principle; the inflation element was a secondary concern. If I still needed this, I'd have accepted a modification to {{GBP}} so that it could take a decimal input and output the £sd arrangement, because I could then request or find a separate template that took £sd inputs to generate the decimal output and put that inside the GBP set. This would also have solved issues where a source reported a cost as, say, 30s instead of £1 10s. Anothersignalman (talk) 05:46, 10 February 2026 (UTC)[reply]
@Anothersignalman: Are you only using {{Australian pound}} with Australian predecimalised currency? The template links to Australian pound when it uses £. If you're also intending to use this with UK currency, maybe it would be helpful to modify the template to allow that link to vary (or to display no link by default), and move the template to a broader title? ({{Australian pound}} would exist as a redirect after the move, so the existing uses would still work.) – Scyrme (talk) 17:02, 10 February 2026 (UTC)[reply]
No, I'm writing an Australian article; I just used GBP because it had the right symbols and the two currencies were tied to each other in the relevant time frame. Anothersignalman (talk) 08:10, 11 February 2026 (UTC)[reply]
@Maiō T.: I ignored "50%", which is far too wide on desktop screens. Here I use style="width:10em;" in both columns to give the same width if there isn't wide content forcing a larger width:
{| class="wikitable"
! rowspan="2" | Rank !! rowspan="2" | Team !! colspan="2" | Qualified for the following tournaments
|-
! style="width:10em;" | WORLDS !! style="width:10em;" | EURO
|-
| 1 || A || yes || yes
|-
| 2 || B || no || yes
|-
| 3 || C || no || no
|}
Don't use it in wider tables, which will require horizontal scrolling on many smartphones. Then it's better to let the browser pick the smallest possible width. 10em would normally be too large for a brief word, but I chose it to avoid line wrapping in the header. If you want "half of however wide the colspanned header becomes to fit its content", then I don't know whether it's possible. PrimeHunter (talk) 23:48, 8 February 2026 (UTC)[reply]
Exactly, PrimeHunter, I meant what you wrote at the end. But sometimes it works; see the following code and result:
{|
! colspan="2" | Qualified for the following tournaments
|-
! | WORLDS
! | EURO
|}
(rendered result: the "Qualified for the following tournaments" header spans the WORLDS and EURO columns)
But when I add a column to the left side, it all goes wrong. Bad luck... Maybe it's an HTML bug, or something. It would be really cool if someone fixed it... Maiō T. (talk) 11:50, 9 February 2026 (UTC)[reply]
No, it's because 50% + 50% + something is more than a hundred percent. HTML defines an order of priority. So it will go: something + 50% of total space + 50% of total space... oh, that won't fit, you will only get the remaining space instead (so 50% of total space - something). —TheDJ (talk •contribs)16:58, 9 February 2026 (UTC)[reply]
{{#switch:{{{made_grands|}}}|true=Yes|false=''Eliminated in {{{elimination_stage}}}''}}
However, it seems to me that the code above could be re-written using #if as follows:
{{#if:{{{made_grands|}}}|Yes|''Eliminated in {{{elimination_stage}}}''}}
If I do this, and enter made_grands=false, the parameter is still treated as true, and thus produces "Yes". I am very new to conditional expressions; am I doing something obviously wrong? Rockfighterz M (talk) 21:37, 9 February 2026 (UTC)[reply]
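Not an obvious mistake so much as a ParserFunctions quirk: {{#if:}} tests only whether the string is empty, so the non-empty string "false" takes the "true" branch just like "true" does. Comparing against the actual value with {{#ifeq:}} behaves as intended:

<!-- #if tests emptiness; #ifeq compares the value -->
{{#ifeq: {{{made_grands|}}} | true
| Yes
| ''Eliminated in {{{elimination_stage}}}''
}}

One difference from the original #switch: this prints the elimination text for any value other than "true" (including an empty parameter), whereas the #switch printed nothing unless the parameter was exactly "true" or "false".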
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Updates for editors
Logged-in contributors who manage large or complex watchlists can now organise and filter watched pages in ways that improve their workflows with the new Watchlist labels feature. By adding custom labels (for example: pages you created, pages being monitored for vandalism, or discussion pages), users can more quickly identify what needs attention, reduce cognitive load, and respond more efficiently. This improves watchlist usability, especially for highly active editors.
A new feature available on Special:Contributions shows temporary accounts that are likely operated by the same person, and so makes patrolling less time-consuming. Upon checking the contributions of a temporary account, users with access to temporary account IP addresses can now see a view of contributions from the related temporary accounts. The feature looks up all the IPs associated with a given temporary account within the data retention period and shows all the contributions of all temporary accounts that have used these IPs. Learn more.[13]
When editors preview a wikitext edit, the reminder box shown at the top, which tells them they are only seeing a preview, now has a grey/neutral background instead of a yellow/warning background. This makes it easier to distinguish preview notes from actual warnings (for example, edit conflicts or problematic redirect targets), which will now be shown in separate warning or error boxes.[14]
The Global Watchlist lets you view your watchlists from multiple wikis on one page. The extension continues to improve – it now properly supports more than one Wikibase site, for example both Wikidata and testwikidata. In addition, issues regarding text direction have been fixed for users who prefer Wikidata or other Wikibase sites in right-to-left (RTL) languages.[15][16]
The automatic "magic links" for ISBN, RFC, and PMID numbers have been deprecated in wikitext since 2021 due to inflexibility and difficulties with localization. Several wikis have successfully replaced RFC and PMID magic links with equivalent external links, but a template was often required to replace the functionality of the ISBN magic link. There is now a new built-in parser function {{#isbn}} available to replace the basic functionality of the ISBN magic link. This makes it easier for wikis that wish to migrate off the deprecated magic-link functionality to do so (a minimal sketch follows this list).[17]
A new global user group has been created: Local bots. It will be used internally by the software to allow community bots to bypass rate limits that are applied to abusive web scrapers. Accounts that are approved as bots on at least one Wikimedia wiki will be automatically added to this group. It will not change what user permissions the bot has.[20]
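Regarding the ISBN item above, a minimal before-and-after sketch of that migration, assuming the basic one-argument form described in the announcement (the ISBN is the standard example number):

<!-- old magic link, recognised as bare wikitext: -->
ISBN 978-3-16-148410-0
<!-- new parser-function equivalent: -->
{{#isbn:978-3-16-148410-0}}

Both should render as a link to Special:BookSources for that number.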
I am specifying special fonts for Hebrew text and different ones for transliterated Hebrew text (he-Latn); however, lately the former always overrides the latter.
I have the following line to specify Hebrew text style: span[lang|=he]:not(.he-Latn-fonipa, .he-Latn) { font-family:...}
And the following line to specify transliterated Hebrew text: span[lang|=he-Latn]:not(.IPA) { font-family:...}
I was told about the phrase :not(.IPA) here before, to prevent it from overriding my IPA style specification: .IPA { font-style:...} and span[lang|=he-Latn-fonipa]
Oh, I thought you were trying to specify a font for transliterated Hebrew as opposed to Hebrew IPA. If you want to specify a font for Hebrew but not romanization or IPA,:lang(he):not(:lang("*-Latn")) should work.Nardog (talk)06:18, 10 February 2026 (UTC)[reply]
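For reference, putting Nardog's suggestion together with the earlier IPA carve-out might look like the following; the font names are placeholders, and the quoted "*-Latn" wildcard is a Selectors Level 4 feature that not all browsers support yet:

/* Hebrew script: lang="he" and its subtags, but not romanizations (placeholder fonts) */
:lang(he):not(:lang("*-Latn")) { font-family: "Hebrew Font Here"; }

/* Romanized Hebrew, excluding IPA transcriptions */
:lang("he-Latn"):not(:lang("*-fonipa")):not(.IPA) { font-family: "Latin Font Here"; }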
We see this work as a part of addressing the decline in pageviews on Wikipedia. We want it to be easier to access content on the site, especially on mobile, where newer readers tend to come in. WMF's Reader Foundational Research found that difficulty with in-article navigation, particularly on mobile, is a top complaint among readers. We're trying out a table of contents on mobile web to see if it supports ease of browsing, based on data suggesting it can be helpful for navigating. The Wikipedia Android app, for example, has a table of contents, which on average gets opened almost 4 times per user, much more often than users start a search, which happens on average only 1.5 times a session. The app also sees a 71.1% clickthrough rate, indicating strong usage on small screens.
These screenshots show the two different table of contents buttons that will be shown to experiment participants in the two treatment options.
What idea are we testing?
Article sections are currently collapsed by default on mobile, which was intended to save users time as they scroll through long paragraphs of text. However, we suspect that this default may contribute to navigation difficulties, since users must first open individual sections before reading. In December 2025, we conducted an experiment on the Arabic, Vietnamese, French, Chinese, and Indonesian wikis to 1) auto-expand all sections in an article by default and 2) pin the header of the section currently in the viewport to the top of the page.
We found that this change actually lowered the retention rate for readers by about 1.5% and shortened the amount of time they spent onwiki. We suspect that auto-opening all the sections on mobile ended up causing navigation difficulties by creating a wall of text, resulting in readers feeling overwhelmed or frustrated and leaving. So we decided to try out something different.
Now we want to see if offering a Table of Contents will improve those navigation needs. The new test will add a Table of Contents button on mobile. When users tap it, a panel slides up from the bottom showing the article’s section headings, which they can then click to jump to different parts of the page.
These screenshots show the two different table of contents interfaces that the two treatment groups will see.
What stage is this project in?
This project is in phase 1: launching a small test with an early version of these ideas. It's not yet clear whether this feature will be an improvement for readers, so we want to test it to determine whether to proceed into phase 2: building a feature.
What is the timeline?
The experiment will go live the week of February 16 and will run for four weeks. It will affect up to 10% of mobile users on Arabic, Vietnamese, French, Chinese, and Indonesian Wikipedias and up to 1% of mobile users on English Wikipedia. Once we have the results, we will come back here to discuss results and decide whether we want to proceed with this idea.
Something to entertain, since you're poking around, would be displaying some lesser set of the table of contents – perhaps all items pointing to an (h1), h2, or h3, and excluding any pointing to h4s. Izno (talk) 19:30, 10 February 2026 (UTC)[reply]
The TOC in the first image on the right is the more familiar and self-explanatory kind of TOC, I think. What is missing is a button to easily expand all sections with one click on desktop. Is there an issue about this? Glancing over the sections is a good way to find what you're looking for, or, in discovery mode, to see if there's something you may find interesting in an article; but having to uncollapse each section individually is too much, and one also starts to think about which sections may or may not be relevant instead of just doing that quick click.
Also, I kind of liked how the TOC used to be, in a way, because when opening an article one could see the TOC, and thereby somewhat of a summary of the article's contents, at the top right away, without having to click anywhere. I have the TOC collapsed to the top instead of the sidebar. On the other hand, the always-displayable TOC also has big advantages. Why not combine the best of both, or give logged-in users a setting to configure this: display the left sidebar with the TOC when the mouse goes to the minimized panel on the left, but when just reading the article make it a small panel that doesn't take up space. I could make an illustration, but this video also shows what I mean. One could also have an option for whether the sidebar should display when opening the article, or only when hovering on the left. The sidebar is usually just mostly whitespace, so I have it hidden, even though I often like seeing the TOC and miss the quickly glanceable TOC at article opening. This would also make it faster and easier to find some info. Prototyperspective (talk) 00:57, 12 February 2026 (UTC)[reply]
Hi folks, not sure if this is the best place to post this but it seemed like a high-visibility spot where someone might have an answer. Feel free to move or copy my message elsewhere if you'd like.
I created the article Nova Scotia Guard on 1 July 2025. The article appears to be indexed, and is the third result on DuckDuckGo. However, it will not show up on Google at all. Even when searching "Nova Scotia Guard Wikipedia", you'll get articles it's linked to and even a category it's in, but not the article itself. I was particularly perturbed by the fact that the Grokipedia clone of the article shows up in Google, but not the one I created. I mentioned this in the Wikipedia Discord server some time ago and my results were replicated by several other users. Since then I created a redirect, edited the Wikidata item, and added more links to the article, but it hasn't changed anything.
My biggest concern here is that there may be other articles which Google is not showing in search results for one reason or another. If someone might be able to look into this, I'd appreciate it. Thanks, MediaKyle (talk) 16:04, 10 February 2026 (UTC)[reply]
It was marked as reviewed in September, which should allow it to be indexed, but I checked Google Search Console and for some reason Google hasn't crawled it since July, when it was noindexed. I requested a re-crawl, so hopefully it will start showing up soon. the wub "?!" 16:44, 10 February 2026 (UTC)[reply]
@MediaKyle: It hasn't been edited since 20 August 2025, when it was still noindexed. I get the impression Google is watching our edit logs and often revisits a page shortly after it has been edited, so any edit (except an unlogged null edit) may influence them. PrimeHunter (talk) 17:31, 10 February 2026 (UTC)[reply]
Thanks for the replies, I appreciate you both looking into this. I thought I edited the article the other day, but I guess I did everything except that... Just made an edit. Hopefully it will show up soon and this is just an isolated incident. Cheers, MediaKyle (talk) 17:49, 10 February 2026 (UTC)[reply]
Thanks for letting me know, just checked and it shows up for me now as well. Seems this is resolved now... still have to wonder what other articles might be caught by this oddity, but I can't imagine it's too widespread. MediaKyle (talk) 19:01, 10 February 2026 (UTC)[reply]
Hm. I think maybe quite a lot, actually, if the reason something wouldn't be indexed is "no edits since an NPR hit the reviewed button". --asilvering (talk)05:22, 11 February 2026 (UTC)[reply]
If this is common, then it would probably be good to have some query that lists all of these pages, so that a bot could make an edit to them to get them indexed, I think. I kind of doubt this is common, though, if we exclude articles that were only noindexed for a delay of 10 days or so; but even if it's not common, many pages could be affected. Prototyperspective (talk) 11:37, 12 February 2026 (UTC)[reply]
In the past couple of weeks Special:WantedCategories has seen more than one recurrence of a redlinked Category:Temporary Wikipedian userpages that was deleted in 2016. Both times, it was populated entirely by the user talk pages of editors who were blocked for vandalism in 2008 – the first time exclusively editors whose usernames began with W, and today exclusively editors whose usernames began with V – and the culprit appears to be that said talk pages have recently been undeleted on WP:DELTALK grounds, after having previously been deleted, and were thus put back into a category that existed at the time of the original deletion but has not existed for a decade.
The category is currently empty. Please always include an example. I found one in your contributions: [22]. There is no way to prevent the categorization before you removed it. The page was undeleted by Hex. Her logs show she undeleted many such pages beginning with V on 7 February and with W on 29 January. If she is planning to undelete more pages, then you could ask if she will remove the category afterwards, but now I have also pinged her. PrimeHunter (talk) 18:03, 10 February 2026 (UTC)[reply]
Hiya – yes, I'm repairing a mass deletion of user talk pages by a former admin back in 2008, before we decided not to do that. It's slow and tiresome because I check each of them to ensure there's nothing requiring RevDel before hitting the button, so I was planning to ask someone to get a bot to clear off the category afterwards. I guess you'd like me to arrange that now? Given that I've done about 600 out of 11,000, this is going to take a while. By the way, Bearcat, since you evidently checked the logs, you could have written to me first before posting here. It's also kind of odd that you didn't mention me, so PrimeHunter had to send a ping. —Hex • talk 19:58, 10 February 2026 (UTC)[reply]
@Hex: Now that the topic is raised, I do have to wonder what purpose is served by undeleting these talk pages. Was there a discussion somewhere that concluded these 11,000 pages would be useful to mass-restore 18 years later? Anomie⚔ 23:59, 10 February 2026 (UTC)[reply]
I have the same question. I looked at just one example, which had five Linter errors and one nonexistent category. I expect to see deleted templates as well. Restoring these pages will make work for a lot of gnomes; what is the benefit, and where was the discussion about this restoration? – Jonesey95 (talk) 00:32, 11 February 2026 (UTC)[reply]
No discussion was required. We established consensus a long, long time ago that user talk pages shouldn't be deleted except in rare circumstances because they form an important part of the historical record. When that happened, someone should have done this job, but nobody did. I'm rectifying that error. The effort of dealing with a small number of linter issues will be outweighed thousands of times over by the benefits of not having a massive chunk of user interactions and block log context missing for no good reason. —Hex•talk01:43, 11 February 2026 (UTC)[reply]
"thousands of times over"? Actual human and bot editors are going to have to make thousands of edits to remove errors from these restored pages. That is a guaranteed downside if this crusade project goes forward. Hex: Please enumerate concrete instances that balance the downside of those thousands of edits with benefits. I won't ask you to justify the obviously unjustifiable orders of magnitude that you claim. Just a simple positive or break-even counterbalance would be fine. – Jonesey95 (talk) 15:09, 11 February 2026 (UTC)[reply]
A quick and dirty SQL query says that among the undeleted pages with lint issues, most have issues with obsolete tags, "no background inline" (which a number of Wikipedians regard as having a lot of false positives) and missing end tags. Most of this can be fixed automatically. Everything else amounts to fewer than 20 pages with issues.
@Snævar: There is no restriction. Some editors create their user page as the very first edit; indeed, for some, it is theonly edit that they ever make. It's often harmless, provided it's not againstWP:UPNOT and isn'tspeedyable under the G and U criteria. But this thread appears to be about usertalk pages, these being the ones that Hex has been undeleting. Very few users create their own user talk pages, although some do. It's also not usually a Wikicrime. --Redrose64 🌹 (talk)23:29, 11 February 2026 (UTC)[reply]
@Anomie - who can say, really? People are interested in anything and everything. Even the most seemingly mundane detail in an archive may be exactly what some future historian is looking for as part of a research project. You could really say that about almost all of our archives, which we generate at a ferocious rate - page revision histories, system logs, talk archives. 18 years is also not very long at all. The discussion we're having right now will get archived, and then nobody might care about it at all for 25, 50, 100 years. But there may be a single historian in 2126 who it's useful to and is reading it right now. (Hello! Do you live in space? I'm sorry for what we did to the planet.) It's thatpossibility that we keep archives for. —Hex•talk20:30, 11 February 2026 (UTC)[reply]
So a moment ago it was the unsupportable "The effort of dealing with a small number of linter issues will be outweighed thousands of times over by the benefits of not having a massive chunk of user interactions and block log context missing for no good reason", and now it's "that's life"? I hope that Hex will consider cleaning up the pages that they undelete (link to an example of a Linter error of a type that was completely eliminated years ago). Editors are responsible for their edits. This is like watching someone walk through my neighborhood throwing trash on the ground. – Jonesey95 (talk) 14:25, 13 February 2026 (UTC)[reply]
If anything, Jonesey95's crusade against linter errors is way more harmful than Hex's undeletions, because they seem to have no issues with loudly complaining and upsetting people over it. The message above ("This is like watching someone walk through my neighborhood throwing trash on the ground") is a perfect example of this. sapphaline (talk) 14:59, 13 February 2026 (UTC)[reply]
@Pppery: That was me, not Jonesey95. I guess good for you that you cared at one point in 2019? But not enough to have started a discussion beyond the REFUND you now admit wasn't a good one. Anomie⚔ 00:23, 12 February 2026 (UTC)[reply]
@Jonesey95: Mentioning me in every single edit summary you make, so that I come back to find I have 75 notifications, is unbelievably petty and childish. Grow up. —Hex•talk 15:15, 13 February 2026 (UTC)
It was a boilerplate edit summary. What is unbelievable is how much work you are making for your fellow editors. Please clean up the pages that you are restoring. Editors are responsible for their edits. See below for something more constructive. –Jonesey95 (talk) 15:23, 13 February 2026 (UTC)
Two different boilerplate edit summaries, which you wrote yourself to specifically mention and talk to me, and which you've now stopped using after getting called out on it. Sure, dude. —Hex•talk 16:01, 13 February 2026 (UTC)
You asked me to stop, so I stopped. That's the polite thing to do. See below for an example of an editor who did not stop causing problems when asked to do so. –Jonesey95 (talk) 16:43, 13 February 2026 (UTC)
I just did a partial cleanup on 88 User talk pages restored by Hex, fixing types of Linter errors that we eliminated from the English Wikipedia many years ago, and deleting nonexistent templates. A bot also removed nonexistent categories from many of the restored pages. This work took me about an hour that I otherwise would have spent fixing other problems or making actual improvements to Wikipedia. Bots and human editors will be needed to clean up "obsolete tag" Linter errors on a couple hundred additional pages that Hex recently restored.
I suspect that there is a better way for Hex to achieve their goals while avoiding this unnecessary work. I can think of a few options:
Stop restoring these pages.
Restore the pages, then fix all errors on the pages (both actions would be performed by Hex).
Restore the pages and then blank them. The supposedly valuable information would still be available in the pages' histories.
Now that there is actual time-based evidence of the cost of restoring these pages, explain in detail the thousands of hours of benefits that will accrue to future editors, readers, and researchers from restoration of these 88 pages. If it is really worth it, I can live with the extra work.
I'll also add my voice here that I think you should stop what you are doing and seek consensus for it. If anyone else had edited 11k pages without even a single discussion, they'd be blocked immediately. Being an admin does not give you any special right to bypass this process. Gonnym (talk) 20:49, 13 February 2026 (UTC)
We had the discussions about user talk pages from 2006–2010. In fact, the day after tomorrow is the 20th anniversary of WP:DELTALK. If you want an MfD for 11,000 user talk pages trying to retrospectively overrule that consensus, well, good luck. —Hex•talk 22:05, 13 February 2026 (UTC)
Find me a consensus that isn't 16-20 years old, please. en.wiki has changed dramatically since then, and I'd like to see recent consensus that agrees that mass restoring 11k pointless talk pages is wanted. Gonnym (talk) 08:34, 14 February 2026 (UTC)
Starting from 12:27, 7 February 2026, out of the 605 user talk pages Hex has restored, 215 pages currently have at least one lint error (382 have no errors, and 8 were re-deleted). Here's a list in case anyone is interested in fixing those lint errors specifically: User:DVRTed/sandbox4. —DVRTed (Talk) 16:22, 14 February 2026 (UTC)
I think the real issue here is WP:MEATBOT. I don't have an opinion on whether these pages should be restored, but I can understand that folks find undeleting 500 pages in a day to be disruptive when the whole project averages closer to 30-35 per day. Obviously pages are going to be restored from time to time, even ancient ones; IMO it's the scale at which it's happening that's upsetting people.
Restoring 11 pages in a single minute is clearly bot-like behavior, and should go through some sort of approval, at which point we can figure out details on how to coordinate with other cleanup bots and humans. Restoring 11k user talk pages doesn't fall under WP:MASSCREATE because they aren't articles, but I think following that guidance would ease bad feelings on both sides. Legoktm (talk) 00:23, 14 February 2026 (UTC)
Call it bot-like if you wish, but this is very, very simple work that requires only a glance at the edit history of pages that are 90% just a single block message, or maybe a couple of warnings before that. Even so, it is still work requiring human attention, and not a bot. It's also incredibly boring, and because unlike some people I understand that there is no deadline, I'm not in some all-consuming rush to get this done. It's also how I approach my backlog of grindy projects, of which I have many: do some to scratch an itch, then forget about it for a while. I started this project a year and a half ago - that's how long it took me to get over the boredom of the last bunch of undeletions. After doing 500 yesterday, that itch has been scratched for now, until I regain the energy to think about it more, but I'm probably going to be seeing this site in my sleep for a week. It would probably have come up for a bit more scratching relatively soon, now that I've gotten a feel for it again, but after the toxic behavior on display in this discussion it's retreated a long way and is unlikely to see the light of day for quite some time. I'll note again here that we could easily have had a good-natured chat about all of this on my user talk page, but someone chose to passive-aggressively post here in a way that they knew would cause drama. For shame.
If people want to artificially limit progress on rectifying this big, stupid mistake from the past, they could at least volunteer to help out with it. I'm the only one doing it, and hamstringing me on the occasions that I feel sufficiently motivated will achieve nothing. If there are more people working on it then that will make a difference even with a go-slow sign on the side of the road.
"this is very, very simple work that requires only a glance at the edit history of pages" illustrates the problem. The technical work of looking at history and clicking a button is simple, but the job is not done at that point. Instead of moving on to the next boring page restoration, the restoring editor, who has now created one or more problems on a Wikipedia page, bears some responsibility for resolving those problems. The editor should remove nonexistent categories and templates and do their best to fix wikitext syntax errors. The red categories are easy to see in preview. The nonexistent templates are easy to see in "Pages included in this section:". And many of the syntax errors are easy to see using the syntax highlighter gadget. Please fix the errors that you are creating, now that you have been notified that you are creating them. –Jonesey95 (talk) 13:09, 14 February 2026 (UTC)
@Hex, I hope you don't feel that pushback to your project is toxic. Jonesey95, in particular, has tried to offer alternative solutions that wouldn't cause issues for other editors, but you seem to have dismissed them out of hand.
I'll note again here that we could easily have had a good-natured chat about all of this on my user talk page, but someone chose to passive-aggressively post here in a way that they knew would cause drama. For shame.
Isn't this really the crux of the problem? Bearcat often asks here for help with his maintenance work keeping Special:WantedCategories clean - I don't know where you're getting the idea that this is intended to cause drama (see also Wikipedia:Aspersions). But clearly this is causing issues for other editors, because that is what precipitated this thread. Qwerfjkl talk 15:04, 14 February 2026 (UTC)
Section of text shows up orange but only in some cases
It is the section starting with "include obviously. It is absurd to say that we should say "he had never been arrested before"" in [23]. See the discussion there about this. Thanks. Doug Weller talk 18:40, 10 February 2026 (UTC)
You are using one of the scripts that check links for reliability (Headbomb's, I believe). It highlights the entire list item in which the unreliable link appears. I skimmed it, so I can't say which specific link. Izno (talk) 19:28, 10 February 2026 (UTC)
It would be great if somebody could change that script so it doesn't highlight replies on discussion pages, or indeed anything on discussion pages other than article talk pages. I'm having the same problem of random replies being marked in red, and lots of users have that script installed. Prototyperspective (talk) 11:41, 12 February 2026 (UTC)
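For illustration, a minimal sketch of how such a script could skip discussion pages (this is not Headbomb's actual code; the namespace check is the whole idea):

    (function () {
        // All talk namespaces have odd numbers; article talk ("Talk:") is 1.
        var ns = mw.config.get('wgNamespaceNumber');
        if (ns % 2 === 1 && ns !== 1) {
            return; // non-article talk page: skip highlighting entirely
        }
        // ... the script's reliability-highlighting logic would go here ...
    })();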
Hello there :) Apologies if this has been discussed elsewhere or is otherwise known, but I noticed an ugly visual bug resulting from the combination of a {{side box}} and (a table with) the floatright class. You can see the effect at Ejective consonant, opening the mobile version from a narrow enough screen (or emulator - the "iPhone SE" preset in Chrome devtools is perfect). I tried fiddling with it for a bit but didn't find a convincing solution, or one in which I'm sufficiently confident (e.g., would it make sense to add a content-based width to {{side box}}?). I'm also not familiar with the available layout templates and classes here on enwiki, so y'all may already have a simple solution that I'm not aware of. Daimona Eaytoy (Talk) 22:04, 10 February 2026 (UTC)
This should be changed in the floatright class definition. Memory says this class (and its friends) used to be wrapped in a media query on mobile, such that it only took effect above a certain width. I have been meaning to make that how it works globally and just have been ~lazy~. cc @Jdlrobson Izno (talk) 00:51, 11 February 2026 (UTC)
Those rules exist, but they only work on responsive skins (not resized skins). They work fine on mobile devices and for people using the desktop site on mobile.
Generally, people shout at you when you give any kind of impression that you are making their favorite pre-2011 skin "responsive" or mobile-like, which is why we unfortunately, and intentionally, don't have a responsive version of the Vector 2022 (or Vector classic) skin, which makes me sad.
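A rough sketch of the kind of media-query wrapping being described (the breakpoint and declarations here are illustrative, not the actual MediaWiki styles):

    /* Only float above a minimum width; below it, the element falls back
       to normal flow, so narrow screens get a single column. */
    @media screen and (min-width: 640px) {
        .floatright {
            float: right;
            clear: right;
            margin: 0 0 0.5em 0.5em;
        }
    }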
Thanks for the context :) I agree that the floatright class is ultimately responsible, although I guess I was also wondering if there's a simpler fix to apply to either of the involved templates while we wait for the proper fix. --Daimona Eaytoy (Talk) 12:01, 11 February 2026 (UTC)
"Intentionally" is doing the heavy lifting there. That a parameter works with the skin doesn't imply that it was intended (i.e. designed for). Izno (talk) 17:39, 14 February 2026 (UTC)
Is there a way for a template to read the parameters of a template nested inside it? For example, if I had {{template one| {{template two |para1=some |para2=thing}} }}, is there some way to code "template one" to read the parameters from "template two"? Or can "template one" only ever read the output of "template two", not the inputs?
I've seen source code for some templates use #invoke where they could use an existing template. Is there a benefit to doing this, particularly for sidebars? What's the rationale for invoking a Lua module rather than just using {{sidebar}}?
1. A template cannot do this. Via a module, it can read the source text of the whole page and search that source text for strings like a specific template name, but we only do that in special cases like Module:Auto date formatter. It doesn't sound suitable for your purpose. It also relies on the parameter being present in the source text.
While I was searching, the closest things I found were {{get parameter}} and {{template parameter value}}. Like you said, it looks like they invoke a Lua module to read the source of a specific article and extract the value of a parameter of a particular template on that page, as opposed to reading a value from a parameter of a child template. So it seems you're right.
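To make the module approach concrete, here is a minimal Scribunto sketch of the source-scraping technique described above (the module, page, template, and parameter names are placeholders; the naive pattern breaks on nested templates and multi-line values):

    -- Module:GetNestedParam (hypothetical name)
    local p = {}

    function p.main(frame)
        local title = mw.title.new(frame.args.page or '')
        local content = title and title:getContent() -- raw wikitext, or nil
        if not content then
            return ''
        end
        -- Naive match: first {{template two ... |para1 = value ...}}
        local value = content:match('{{%s*[Tt]emplate two[^}]*|%s*para1%s*=%s*([^|}]*)')
        return value and mw.text.trim(value) or ''
    end

    return p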
Occasionally editors will start a discussion on one of the village pump subpages and create a subheader with a title like "Discussion" or "Survey", which inevitably creates navigation problems when there end up being multiple identical subheaders on the page under different discussions. It should just not be technically possible to create a generic subheader on these pages. If someone tries to create one, they should be prevented from saving until they change it to a unique, subject-specific subheader (like "Discussion (section headers)"). BD2412 T 19:00, 11 February 2026 (UTC)
Also agree. The section headers should be descriptive. This also relates to wish W311: Do not fully archive unsolved issues on Talk pages, albeit another idea would be needed for what to do at meta pages like VP, which get far more threads than what's suggested in the image there; the bigger problem with that is that here, threads aren't marked as 'solved', or at least as 'issues' or 'nonissues' (e.g. Tech News posts aren't issues). I think the solution would be to add a sentence about this to the header, asking for descriptive headers, and to have users edit headers when they're not descriptive. I edited a few section headers at d:Wikidata:Bot requests that weren't descriptive. Prototyperspective (talk) 11:33, 12 February 2026 (UTC)
Personally, I would prefer having a permalink icon next to the heading that provides easy access to a unique link to the heading, so users won't have to generate unique headings on their own. The unique(-ish) ID is already generated by the infrastructure underlying the reply tools feature; there just needs to be a user interface to expose it. (I have my own script to copy comment and heading links to the clipboard; other users have written similar scripts.) isaacl (talk) 02:00, 12 February 2026 (UTC)
Ideally, section-based editing would be revised to support these IDs as well. However, I don't know the practical feasibility of implementing this change. isaacl (talk) 02:02, 12 February 2026 (UTC)
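As a sketch of the permalink-icon idea above (not isaacl's actual script; the heading selector is an assumption and varies by skin and parser version):

    // Append a small "§" link to each section heading that copies a
    // permalink built from the heading's id to the clipboard.
    $('.mw-heading h2[id]').each(function () {
        var id = this.id;
        $('<a>')
            .text(' §')
            .attr('href', '#' + id)
            .on('click', function () {
                navigator.clipboard.writeText(
                    location.origin + location.pathname + '#' + id);
            })
            .appendTo(this);
    });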
The sandbox link in the personal toolbar is no longer red even if the page doesn't exist, in Vector (both kinds) and Monobook. It's still red in Timeless. The class "new" is added to the <li>, not the <a>, so the a.new rule is not being applied. Nardog (talk) 02:35, 12 February 2026 (UTC)
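Until that's fixed, an interim workaround in one's user CSS might look like this (a sketch based on the description above; the color value only approximates the skin's redlink color):

    /* Color the link red when the "new" class landed on the parent <li>
       instead of on the <a> itself. */
    li.new > a {
        color: #d73333;
    }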
Per phab:T413542, a task was created to use OOUI in the edit filter interface, but there's a major side effect: the view is messed up on desktop (laptop/computer), and the Ace editor is completely broken in mobile/desktop view (on iOS/Android). I am notifying the community about this error (which has probably affected all wikis), which should be fixed immediately. Codename Noreste (talk • contribs) 20:38, 12 February 2026 (UTC)
Moving this here as I may get a better answer. The question is about AntiVandal. In the settings page, there's a setting for ORES score. I'm not understanding how that setting is supposed to work. I want it to mimic "likely bad faith" in Recent Changes, but it asks for a decimal? So what do I do if I want that behaviour? TheTechie [she/they] | talk? 06:36, 13 February 2026 (UTC)
It's fine to use paragraph elements appropriately on any page. It's unnecessary when writing paragraphs that aren't embedded within other elements, as the MediaWiki parser will parse newline-separated wikitext as separate paragraphs, but they can be used when embedding paragraphs within other elements such as a list (see Wikipedia:Manual of Style/Accessibility § Multiple paragraphs within list items). The {{pb}} template is easier for most people to use, since it doesn't require a closing tag, but it is also less semantic, as it adds a visual vertical break but not a logical paragraph. isaacl (talk) 17:15, 13 February 2026 (UTC)
The <p> tag doesn't require a closing tag, either. It's implicitly closed by the next <p> tag, and by the closing tag of any block-level element that encloses it. It's also implicitly closed by the opening tag of any block-level element that you're trying to nest inside the <p>...</p> - in that respect it's unique among HTML elements. --Redrose64 🌹 (talk) 11:32, 14 February 2026 (UTC)
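For instance, all of the following is valid HTML, with every <p> closed implicitly (a plain illustration, not taken from any particular page):

    <ul>
      <li>
        <p>First paragraph inside a list item.
        <p>Opening this tag implicitly closed the previous paragraph.
      </li><!-- closing the list item implicitly closes the last <p> -->
    </ul>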
Are you sure the author of that comment didn't enter the <p> and <em> themselves in the wikitext? In my experience, the tool itself does multiple paragraphs by using multiple colon-indented lines, not <p> tags. As in this comment, for example.
Do not ever modify someone else's comment as you did here. Those were inserted by the editors (plural), not by DT, and were used deliberately. Izno (talk) 15:55, 13 February 2026 (UTC)
@Sapphaline: About sixty HTML5 elements may be used within Wikitext. As I write this, the list is here. The element names are delimited by apostrophes; note that some are listed more than once. Sometimes, using these can produce a "cleaner" rendered page than Wikimarkup. For instance, if you have a list, it is possible for one of the items of that list to contain a sublist; but in Wikitext, such a sublist must be the last content in that list item. If you want text to appear at the level of the outer list, but after the inner list, you need to use HTML thus:
Nested list
Original list item is still open, so my sig that follows is part of the post beginning "About sixty HTML5 elements", and not divorced into a separate item. --Redrose64 🌹 (talk) 23:58, 13 February 2026 (UTC)
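The markup being demonstrated presumably looks something like this (a reconstruction for illustration only, not Redrose64's exact wikitext; "Nested list" above is the rendered inner list):

    <ul>
      <li>About sixty HTML5 elements may be used within Wikitext...
        <ul>
          <li>Nested list</li>
        </ul>
        Original list item is still open, so text placed here stays at
        the level of the outer list rather than starting a new item.
      </li>
    </ul>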
Hi! I don't know whether I should put this here or in the Teahouse, but I was about to remove the deprecated parameter "nationality" from the infobox of Fernán Mirás, and for some reason the preview warning doesn't appear, even though the parameter hasn't been removed. It's not an issue with my browser, since in the other pages I removed the parameter from, the warning appeared. I removed the space between the infobox and the template above, thinking it would somehow help solve the problem, but nothing changed. Thanks, Bloomingbyungchan (talk) 15:36, 13 February 2026 (UTC)
Thanks, I wasn't aware that it was an error; I thought that the warning was supposed to appear regardless of whether the parameter is blank or not. Bloomingbyungchan (talk) 15:54, 13 February 2026 (UTC)
A reader asked me about a layout problem they're seeing with an article I currently have at WP:FAC. I suspect the issue is just that they're using an unusually wide window, but I would appreciate suggestions at Talk:Carlisle & Finch#Whitespace if there's some way I can improve on what I'm doing now. RoySmith (talk) 18:03, 13 February 2026 (UTC)
There is a {{clear}} template at the bottom of the Modern Lights section. If the images extend further down than the text, the next section won't start until the images have been fully displayed. This results in the white space the other editor is seeing. You can remove the template, but then the images will flow down into the Navigation Beacons section, unless you move one of the images elsewhere. --LCU ActivelyDisinterested «@» °∆t° 19:37, 13 February 2026 (UTC)
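In wikitext terms, the situation looks roughly like this (section names from the article; the file name and text are placeholders):

    == Modern Lights ==
    [[File:Example.jpg|thumb|right|A tall floated image]]
    A short paragraph of text.
    {{clear}} <!-- forces everything after it below the floated image,
                   which is what produces the whitespace on wide windows -->
    == Navigation Beacons ==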
No infobox for "Corpus Inscriptionum..." exists, and one is very much needed. We have several such articles. I'll place similar notes there and direct people to discuss it here.
Hi, Opecuted. Sorry, but did you read what I wrote? There are some 8 different articles called "Corpus Inscriptionum XY"; they all need an infobox, but none exists. We are forced to use "Infobox language", but it does NOT serve the purpose well - see above for only SOME of the inadequacies deriving from this improvisation. For our purposes on Wiki, a corpus is a collection of inscriptions, either in one language, or from one geopolitical region, including inscriptions in several languages - so, very far from a language as such. Arminden (talk) 16:11, 11 February 2026 (UTC)
There are articles on (or redirects to, plus 1 red link to):
This sounds like you WANT an infobox. Articles never NEED an infobox. There is even a considerable "anti-infobox" section of our community that thinks we use infoboxes way too often and that infoboxes should be removed from lots of articles. —TheDJ (talk • contribs) 08:54, 14 February 2026 (UTC)
–moved here for better visibility —Opecuted (talk) 05:42, 14 February 2026 (UTC)
Being a series of books, {{Infobox book series}} comes to mind, though there's no parameter for the region and era of the original inscriptions (the existing "country" and "publication date" parameters don't seem appropriate). If you make a list of parameters that the infobox should support, then it wouldn't be too difficult to make one. –Scyrme (talk) 06:20, 14 February 2026 (UTC)
It's more accurate to say each article is about a series of books about a corpus of inscriptions; my understanding is they also include facsimiles of the original inscriptions. I assume they want an infobox that can handle including information about both the books themselves (title, editors, number of volumes, etc.) and the corpus of inscriptions which the books reproduce (era, region, languages). –Scyrme (talk) 16:22, 14 February 2026 (UTC)
Probably more the corpus than the book, but yes.
I find it hugely useful to cross-reference using wikilinks etc. The inscription collections are in part available online and offer very helpful context for the historical phenomena and sites discussed in individual articles. When this is not the user's main interest that day, an overview in the shape of an infobox, sometimes with links to Google Books or the dedicated website, is just perfect. Without it, it takes much longer, and I myself sometimes give up and lose much of the context info. Arminden (talk) 16:37, 14 February 2026 (UTC)
@Arminden: It would be easier to fulfil your request if you provided a full list of parameters which it should have. What information should the infobox be capable of displaying?
Without a list, it's difficult to make a new template, or to determine whether an existing template already has all the needed parameters, or whether a new template would even be warranted if one doesn't already exist (there may be other solutions besides an infobox, depending on what's needed). –Scyrme (talk) 17:22, 14 February 2026 (UTC)
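For concreteness, a hypothetical invocation of the kind of infobox being discussed, using the fields mentioned above (the template name and every parameter here are invented for illustration; nothing like this exists yet):

    {{Infobox inscription corpus
    | name      = Corpus Inscriptionum Latinarum
    | editors   = <!-- editors of the book series -->
    | volumes   = <!-- number of volumes -->
    | languages = Latin
    | region    = <!-- region of the original inscriptions -->
    | era       = <!-- era of the original inscriptions -->
    | website   = <!-- link to the online corpus -->
    }}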
I was reading the Swingin' (John Anderson song) article on my iPhone (Vector 2022 skin) and the "Other versions" section header is shown one character per line. Any ideas for troubleshooting this? It does not appear this way when I look at the article using my laptop. Thanks, 28bytes (talk) 13:29, 14 February 2026 (UTC)
I have seen this issue before, but honestly I can't reproduce it now. It's flex gone wrong, but there's nothing that should be causing it particular trouble in this context. Izno (talk) 17:35, 14 February 2026 (UTC)
It's been showing up like this on cellphones since a rather recent Wiki layout change. What the new look screwed up even worse is the way edits are shown in the "edit history": I don't understand anything anymore; "edit history" has become totally USELESS to me.
Back to this issue: I figured out that flipping the phone from "portrait" to "landscape" (sorry, I'm a photographer) fixes the problem A BIT.
Why don't coders stick to the "if it ain't broken, don't fix it" principle? Pleeeease do! Or test phone mode before releasing, at the VERY least! And remember: heaps of contributors are way past their spectacles-less years, with all that implies. Arminden (talk) 20:11, 14 February 2026 (UTC)