
Wikitech-l April 2003

wikitech-l@lists.wikimedia.org
  • 47 participants
  • 72 discussions
New SpezialPage: qualifier?
by Thomas Corell 17 May '03

17 May '03
In the German Wikipedia, a list of the qualifiers used in titles was discussed, and most of the participants think it would be an interesting feature.

I know I have expressed it wrongly, therefore an example: Cell (biology) is a homonym (cell) with a qualifier (biology). To get a proper list of those qualifiers and modify or eliminate wrong ones, a list would be very helpful. The result of the discussion was that such a unique list of those qualifiers from titles (table cur) and bl_to (brokenlinks) would make sense.

Unfortunately I can give you only a proper PostgreSQL select statement (only table cur), but possibly someone can transfer this easily to MySQL:

======== WARNING: THIS IS NOT A VALID STATEMENT FOR WIKIPEDIA ==========
SELECT DISTINCT substring(cur_title FROM '.+\\((.+)\\)') AS p FROM cur;
========================= I WARNED YOU =================================

-- "\\(" ==> ( (needed for quoting)
-- (.+) ==> the () construct is used to select the part substring will return.

For the titles "foo", "foo (bar)", "foo2 (bar)" and "bar (foo)" the result will be "bar" and "foo".

This should only show the as-is state of these qualifiers! There is no intention for any automated process to enforce them, because no Wikipedian should get an error like "Qualifier not allowed" or so. This page is only for administrative and informational purposes!

Of course additional features, like showing the matched pages, would be nice, but there the discussion must go on further, IMHO.

If there are more questions, ask; I will try to answer them.

Smurf
-- ------------------------- Anthill inside! ---------------------------
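For what it's worth, the extraction logic (independent of the SQL dialect question) can be sketched outside the database. This is a hypothetical standalone illustration mirroring the regex in the statement above, not MediaWiki code:

```python
import re

# Mirrors the PostgreSQL pattern '.+\((.+)\)': capture the text inside the
# final parenthesised qualifier, skipping titles that have none.
QUALIFIER = re.compile(r'.+\((.+)\)')

def distinct_qualifiers(titles):
    """Return the sorted set of distinct qualifiers found in `titles`."""
    found = set()
    for title in titles:
        m = QUALIFIER.match(title)
        if m:
            found.add(m.group(1))
    return sorted(found)
```

For the four example titles above this yields `['bar', 'foo']`, matching the expected result in the message.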
1 1
0 0
Patch for LanguageDe.php
by Thomas Corell 08 May '03

08 May '03
Some typos, as usual.

Smurf
-- ------------------------- Anthill inside! ---------------------------

69c69
< "et" => "Esti",
---
> "et" => "Eesti",
402c402
< "prevn" => "letzte $1",
---
> "prevn" => "vorherige $1",
5 14
0 0
Developer limits
by Anthere 01 May '03

01 May '03
Hum, I am a bit embarrassed here. But well... We have a bottleneck problem, I fear.

Utilisateur:Alvaro asked to be a sysop on the French wiki quite a while ago, and nobody spoke against it (indeed, several spoke for him). Then, since this was asked on our pump, he was first forgotten for a while. He had to kindly ask again. So I put a message asking for him to be made sysop in the database on your page, Brion, on the English Wikipedia, as well as on the metapedia if I remember well. He still is not a sysop. It is clear it is a pain for you developers to do this type of chore; Ed mentioned it several times. So we wait, and ask again, and again.

But then, is there not a way for us to make people sysops on the international wikipedias ourselves? Could there be something like a query which would allow a French sysop to make a French user (just an example, of course) a sysop automatically? Maybe a list of users could be displayed, and one sysop could click on the name of one user to make him a sysop? Of course, it would then be absolutely required that a log is clearly visible to everyone, to avoid abuse. Is something like this possible or not? Meanwhile, could someone please make Alvaro a sysop?

Also, as Aoineko mentioned previously, the misspelling page is quite broken, and has never been working well since we moved to phase III (31st of October). It was already reported. I know you are overbooked. Is there something we can do (OK, not me, but there are some developers among us) to make that work? Do the other international wikipedias using accented letters have the problem as well?

Anthere

__________________________________
Do you Yahoo!?
The New Yahoo! Search - Faster. Easier. Bingo.
http://search.yahoo.com
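The "query plus visible log" idea could be sketched roughly as below. The table and column names here are invented for illustration and do not match the real MediaWiki schema; the point is only that the grant and the audit row live in one transaction:

```python
import sqlite3

def make_sysop(conn, actor, target):
    """Grant sysop to `target` and record who did it, in one transaction,
    so the action is always publicly auditable (hypothetical schema)."""
    cur = conn.cursor()
    cur.execute("UPDATE user SET user_rights = 'sysop' WHERE user_name = ?",
                (target,))
    if cur.rowcount != 1:
        conn.rollback()
        raise ValueError("unknown user: %s" % target)
    # The log row is what makes abuse visible to everyone.
    cur.execute("INSERT INTO rights_log (actor, target, action) "
                "VALUES (?, ?, 'make-sysop')", (actor, target))
    conn.commit()
```

A per-wiki page could then simply render `rights_log` for everyone to see.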
6 13
0 0
Caching
by Geoffrey Thomas 01 May '03

01 May '03
A suggestion for caching: the home page, [[Main Page]], should never include links to nonexistent pages. Could it be cached (in the standard stylesheet), e.g., as /index.html? I would think that page is accessed very frequently, and caching it would save some database work for other pages. When the page is edited (only by a sysop), he/she would render the page to HTML, possibly by explicitly accessing /wiki/Main_Page, and save it as /index.html. Would this help some server strain?

Could we also cache other pages known to link only to existing pages, such as this week in dates ([[April 30]], etc.), [[Current events]], and [[Recent deaths]]? Or are these pages updated too often for this to be useful? Could we also cache the protected pages?

-[[User:Geoffrey]]

__________________________________
Do you Yahoo!?
The New Yahoo! Search - Faster. Easier. Bingo.
http://search.yahoo.com
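The render-on-edit idea amounts to a write-through static cache. A minimal sketch, where `render` stands in for the wiki parser and all names are hypothetical:

```python
import os

def publish_static(render, title, docroot):
    """Render `title` once at save time and write it where the web server
    can serve it directly (e.g. as /index.html), bypassing PHP and the
    database on every anonymous view."""
    html = render(title)
    path = os.path.join(docroot, "index.html")
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(html)
    os.replace(tmp, path)  # atomic rename: readers never see a partial file
    return path
```

Writing to a temp file and renaming means a reader hitting /index.html mid-save still gets a complete page.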
6 5
0 0
Possible performance issue?
by Nick Reinking 01 May '03

01 May '03
I notice that /usr (/dev/sda2) is at 96%. ext2 has some pretty bad problems with fragmentation once it gets above a certain percentage. This can cause some pretty bad performance problems. Once it has fragmented, it is difficult to get it back to a contiguous state. There are defrag programs, but they are fairly scary. The only other way to get it back to normal is to back everything up, mkfs, and restore it.

Perhaps somebody can remove a bunch of the installed packages that we don't use?

-- Nick Reinking -- eschewing obfuscation since 1981 -- Minneapolis, MN
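As a quick monitoring aid, `df -P` output can be scanned for filesystems near the danger zone. A small hypothetical sketch:

```python
def nearly_full(df_output, threshold=90):
    """Return (mountpoint, use%) pairs for rows of `df -P` style output
    at or above `threshold` percent. The header row is skipped."""
    flagged = []
    for line in df_output.strip().splitlines()[1:]:
        fields = line.split()
        pct = int(fields[4].rstrip('%'))  # fifth column is Capacity, e.g. "96%"
        if pct >= threshold:
            flagged.append((fields[5], pct))
    return flagged
```

Run periodically, this would have flagged /usr well before it reached 96%.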
4 6
0 0
Motherboard for the server we're going to use as Wikipedia front end. I haven't heard from Jason yet as to when he can make the trip to install it, but possibly Friday or Monday.

----- Forwarded message from Sales(a)Computers4SURE.com -----

From: <Sales(a)Computers4SURE.com>
Date: Tue, 29 Apr 2003 11:29:37 -0400
To: <jwales(a)bomis.com>
Subject: We've received your order #C030450733, Jimmy. Thank you.

Thank you for shopping with Computers4SURE.com. We would like to confirm that your order has been received.

On 29-Apr-2003, you ordered:

Item                          Quantity   Price
----------------------------------------------------------
SERVERWORKS LE-T DUAL PGA370  1          $485.95

-----------------SNIP-----------------------
5 9
0 0
Re: server names
by Daniel Mayer 30 Apr '03

30 Apr '03
How about Yin and Yang? Or chaos and opportunity?

-- mav
3 2
0 0
RE: [Wikitech-l] Ad-hoc changes
by Mark Christensen 30 Apr '03

30 Apr '03
The only issue I see would be if database threads crowd out Apache on the old server, and all the foreign wikis go down while the English wiki stays up. This could create new tensions with the foreign wikis.

-----Original Message-----
From: Lee Daniel Crocker [mailto:lee@piclab.com]
Sent: Tuesday, April 29, 2003 5:13 PM
To: wikitech-l(a)wikipedia.org
Subject: Re: [Wikitech-l] Ad-hoc changes

> (Brion Vibber <vibber(a)aludra.usc.edu>):
> [Configuration info]

Here's a thought: when we get the new server up, let's install the English wiki cleanly from the distribution, make sure it's all happy, then do the switch, _leaving the foreign wikis on the old server_ for a while. Then do the same thing for each of the foreign ones in turn, updating them to the newest software and getting everything in sync, before we shut down Apache on the old server. Any problems with that?

-- Lee Daniel Crocker <lee(a)piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past, are placed irrevocably in the public domain, and may be used or modified for any purpose, without permission, attribution, or notification." --LDC

_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)wikipedia.org
http://www.wikipedia.org/mailman/listinfo/wikitech-l
3 3
0 0
Chat about Wikipedia performance?
by David A. Wheeler 30 Apr '03

30 Apr '03
Hi - clearly, it'd be great if Wikipedia had better performance. I looked at some of the "Database benchmarks" postings, but I don't see any analysis of what's causing the ACTUAL bottlenecks on the real system (with many users & a full database). Has someone done that analysis?

I suspect you guys have considered far more options, but as a newcomer who's just read the source code documentation, maybe some of these ideas will be helpful:

1. Perhaps for simple reads of the current article (cur), you could completely skip MySQL and use the filesystem instead. Simple encyclopedia articles could be stored in the filesystem, one article per file. To avoid the huge-directory problem (which many filesystems don't handle well, though Reiser does), you could use the terminfo trick: create subdirectories for the first, second, and maybe even the third characters. E.g., "Europe" is in "wiki/E/u/r/Europe.text". The existence of a file can be used as the link test. This may or may not be faster than MySQL, but it's probably faster: the OS developers have been optimizing file access for a very long time, and instead of a userspace<->kernel<->userspace interaction, it's a userspace<->kernel interaction. You also completely avoid locking and other joyless issues.

2. The generation of HTML from the Wiki format could be cached, as has been discussed. It could also be sped up, e.g., by rewriting it in flex. I suspect it'd be easy to rewrite the translation of Wiki to HTML in flex and produce something quite fast. My "html2wikipedia" is written in flex - it's really fast and didn't take long to write. The real problem is, I suspect that isn't the bottleneck.

3. You could start sending out text ASAP, instead of batching it. Many browsers start displaying text as it's available, so to users it might _feel_ faster. Also, holding text in memory may create memory pressure that forces more useful stuff out of memory.

Anyway, I don't know if these ideas are all that helpful, but I hope they are.
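The directory layout in point 1 is easy to pin down concretely. A minimal sketch (hypothetical function names, not MediaWiki code):

```python
import os

def article_path(title, root="wiki", depth=3):
    """Shard articles by their first characters so no single directory
    grows huge: 'Europe' -> wiki/E/u/r/Europe.text. Titles shorter than
    `depth` just use fewer levels."""
    shard = list(title[:min(depth, len(title))])
    return os.path.join(root, *shard, title + ".text")
```

The link test then becomes a plain `os.path.exists(article_path(title))`, with no database round trip.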
5 7
0 0

30 Apr '03
>From: Lee Daniel Crocker <lee(a)piclab.com>
>One thing that would be nice is if the HTTP connection could be
>dropped immediately after sending and before those database updates.
>That's easy to do with threads in Java Servlets, but I haven't
>found any way to do it with Apache/PHP.

:P No, I looked into exactly this problem in connection with my own little project (improved Special:Movepage). PHP and threads don't mix. As far as I could see, the PHP subprocess has to exit (taking all threads with it) before Apache will drop the connection. Like Brion said, you'd have to set up another process and use PHP's poorly documented IPC functions. As for what improvement it would achieve: it wouldn't reduce database load per view, it would just allow users to hit more pages sooner.

I think caching HTML is the way to go, in the short term. If people don't want to code something complicated, you could ignore user preferences for now and only cache pages for "anonymous" users. The cached version could leave little notes in the HTML like

<strong>Isaac Newton</strong> was a <<WIKILINK[[physics|physicist]]>> born in...

and maybe

<<USERIP>> (<a href="http://www.wikipedia.org/wiki/User_talk:<<USERIP>>" class='internal' title="User talk:<<USERIP>>">Talk</a>)

Then a cache-processing script would look up the link table and replace the links with real HTML. I imagine looking up the link table is much, much faster than looking up cur_text. Plus the cached text would be stored on the web server, thereby distributing disk load more evenly.

As for invalidation, the easiest, and possibly ugliest, way I can think of is implementing it in wfQuery() *cringe*. That's a very simple function with very diverse uses, but every single update query passes through that point. Just use a hash table (always in RAM) to store the article name of every cache entry, and remove the rows when they're invalidated. There'd also have to be a check for an altered user talk page. This could be handled with another of my <<TAGS>>.

This idea is likely to be met with apathy. I'd like to code it myself, but I don't have Linux on my PC, or a broadband connection, or much free hard drive space, or... time. So there you have it: my two cents, backed up by hot air.

-- Tim Starling.

_________________________________________________________________
Hotmail now available on Australian mobile phones. Go to
http://ninemsn.com.au/mobilecentral/hotmail_mobile.asp
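The placeholder pass described above could look roughly like this. A hypothetical sketch handling only the <<WIKILINK...>> token from the message; the HTML shape and class names are illustrative, not the real renderer's output:

```python
import re

WIKILINK = re.compile(r'<<WIKILINK\[\[([^|\]]+)\|([^\]]+)\]\]>>')

def expand_cached(html, existing_pages):
    """Rewrite cached link placeholders using the link table (modelled
    here as a set of existing titles) instead of re-parsing cur_text."""
    def repl(m):
        target, text = m.group(1), m.group(2)
        # Links to missing pages get the 'new' style, as live rendering would.
        cls = 'internal' if target in existing_pages else 'new'
        return '<a href="/wiki/%s" class="%s">%s</a>' % (target, cls, text)
    return WIKILINK.sub(repl, html)
```

Since the pass is a single regex substitution plus set lookups, it should indeed be far cheaper than a full parse of the article source.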
2 1
0 0