
Mediawiki-api, December 2011

mediawiki-api@lists.wikimedia.org
  • 13 participants
  • 9 discussions

20 Jan '26
Hi there,

I'm using the API to extract the raw wiki text from my pages, using the "?action=query&titles=Main_Page&export&exportnowrap" syntax. That works perfectly.

Now I would like to get the templates expanded out in the result, so I use: "?action=query&titles=Main_Page&prop=revisions&rvlimit=1&rvprop=content&rvexpandtemplates", which does the job, as expected, but it also strips out the comments. My problem is that the comments are meaningful to me (I use them to help process the wiki text in subsequent steps).

Is there a way to expand templates with the API, but leave the comments intact?

Thanks,
Kevin
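A sketch of an alternative worth trying: action=expandtemplates expands templates on supplied wikitext, and newer MediaWiki versions expose an includecomments flag on it. Treat that parameter name as an assumption to verify against your wiki's api.php; the snippet only builds the request URL.

```python
from urllib.parse import urlencode

# Hypothetical endpoint from the thread; swap in your own wiki's api.php.
API = "http://en.wikipedia.org/w/api.php"

def expandtemplates_url(title):
    params = {
        "action": "expandtemplates",
        "title": title,
        # Transclude the page itself so its templates get expanded.
        "text": "{{:%s}}" % title,
        # Assumed flag (see lead-in): keep HTML comments in the expansion.
        "includecomments": "1",
        "format": "xml",
    }
    return API + "?" + urlencode(params)

url = expandtemplates_url("Main_Page")
```

If includecomments is not available on your MediaWiki version, the fallback is to fetch the raw wikitext (which keeps comments) and expand templates in a second pass.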
Need to extract abstract of a wikipedia page
by aditya srinivas 23 Nov '23

23 Nov '23
Hello,

I am writing a Java program to extract the abstract of a Wikipedia page, given the title of the page. I have done some research and found out that the abstract will be in rvsection=0. So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I query the API in the following way:

http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…

and parse the XML data we get back, taking the wikitext in the <rev xml:space="preserve"> tag, which represents the abstract of the page. But this wikitext also contains the infobox data, which I do not need. I would like to know if there is any way I can remove the infobox data and get only the wikitext related to the page's abstract, or if there is an alternative method by which I can get the abstract of the page directly.

Looking forward to your help. Thanks in advance,
Aditya Uppu
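One client-side approach is to strip the leading templates (the infobox is usually the first one) from the section-0 wikitext by counting {{ }} pairs. This is a crude sketch: nested templates with unbalanced braces in parameters can defeat it, and depending on the wiki, the TextExtracts extension (prop=extracts&exintro) may do the whole job server-side.

```python
def strip_leading_templates(wikitext):
    """Remove templates (e.g. an infobox) from the start of section-0
    wikitext by counting {{ / }} pairs, leaving the lead prose."""
    text = wikitext.lstrip()
    while text.startswith("{{"):
        depth, i = 0, 0
        while i < len(text):
            if text.startswith("{{", i):
                depth += 1
                i += 2
            elif text.startswith("}}", i):
                depth -= 1
                i += 2
                if depth == 0:
                    break
            else:
                i += 1
        text = text[i:].lstrip()
        if depth != 0:  # unbalanced braces: bail out rather than loop
            break
    return text

# Illustrative input, not real Eiffel Tower wikitext.
sample = ("{{Infobox tower|name={{lang|fr|Tour Eiffel}}}}\n"
          "The Eiffel Tower is an iron lattice tower in Paris.")
lead = strip_leading_templates(sample)
```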
I want to add a widget to a web page that will allow a user to enter search terms and search Wikimedia for images that match the terms. I have implemented a similar widget for Flickr, using their API, but am having trouble doing the same with Wikimedia.

Basically, I would like to replicate the functionality of the commons.wikimedia.org search page. Ideally I would like to be able to get a category listing (ex. http://commons.wikimedia.org/wiki/Chartres_Cathedral) or true search results (ex. http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=%22c…), but at this point I would be happy with either. I've tried using the allimages list, but that is not adequate. Is there any other way to search images using the API?

I have also been looking at Freebase and DBPedia. These seem like they might do what I want, but RDF is completely new to me and I'm still trying to figure out the basics of it. If anyone can point me in the right direction for either of those resources, I would appreciate it.

Regards,
Tim Helck
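A sketch of one way to approximate the Special:Search results for images: use list=search restricted to the File namespace (namespace 6). The parameter choices here are a starting point to verify against the API; the snippet only builds the request URL. A category listing, by contrast, maps onto list=categorymembers.

```python
from urllib.parse import urlencode

COMMONS_API = "http://commons.wikimedia.org/w/api.php"

def image_search_url(terms, limit=20):
    # Full-text search limited to the File namespace (ns 6), which is
    # where Commons images live.
    params = {
        "action": "query",
        "list": "search",
        "srsearch": terms,
        "srnamespace": "6",
        "srlimit": str(limit),
        "format": "json",
    }
    return COMMONS_API + "?" + urlencode(params)

url = image_search_url("Chartres Cathedral")
```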
How to parse the contents
by kracekumar ramaraju 01 Jan '12

01 Jan '12
Hello,

http://en.wiktionary.org/w/api.php?format=json&action=query&titles=murky&rv…

yields JSON content for the word 'murky':

content = json_returned_content.content

{"query":{"pages":{"54377":{"pageid":54377,"ns":0,"title":"murky","revisions":[{"*":"==English==\n\n===Etymology===\nCognate to or directly from {{etyl|non}} {{term|myrkr}}. Compare Russian, Serbian [[\u043c\u0440\u0430\u043a]].\n\n===Pronunciation===\n* {{audio|en-us-murky.ogg|Audio (US)}}\n\n* {{rhymes|\u025c\u02d0(r)ki}}\n\n===Adjective===\n{{en-adj|murkier|murkiest}}\n\n# Hard to see through, as a fog or mist.\n# [[gloomy|Gloomy]], [[dark]], [[dim]].\n# [[obscure|Obscure]], [[indistinct]], [[cloudy]].\n# Dishonest, [[shady]].\n\n====Synonyms====\n* [[dark]]\n\n====Related terms====\n* [[murk]]\n* [[murkily]]\n* [[murkiness]]\n\n====Translations====\n{{trans-top|hard to see through}}\n* Dutch: [[troebel]], [[troebele]]\n* Finnish: {{t+|fi|samea}}\n* French: {{t+|fr|sombre}}, {{t+|fr|trouble}}\n* German: {{t+|de|d\u00fcster}}, {{t+|de|tr\u00fcb}}\n{{trans-mid}}\n* Romanian: {{t-|ro|tulbure}}\n* Russian: {{t|ru|\u043c\u0443\u0442\u043d\u044b\u0439|tr=m\u00fatnyj}}\n* [[Scots]]: {{t\u00f8|sco|mirk|xs=Scots}}\n{{trans-bottom}}\n{{trans-see|gloomy}}\n{{trans-see|obscure}}\n{{trans-top|dishonest, shady}}\n* Russian: {{t+|ru|\u0442\u0451\u043c\u043d\u044b\u0439|tr=t'\u00f3mnyj}}, {{t+|ru|\u0433\u0440\u044f\u0437\u043d\u044b\u0439|tr=gr'\u00e1znyj}}\n{{trans-mid}}\n{{trans-bottom}}\n{{checktrans-top}}\n* {{ttbc|da}}: {{t-|da|m\u00f8rk}}, {{t-|da|dunkel}}, {{t-|da|dyster}}\n* {{ttbc|he}}: [[\u05e2\u05db\u05d5\u05e8, \u05de\u05d8\u05d5\u05e9\u05d8\u05e9]]\n{{trans-mid}}\n* {{ttbc|is}}: [[myrkr]]\n{{trans-bottom}}\n\n====External links====\n* {{R:Webster 1913}}\n* {{R:Century 1911}}\n\n[[et:murky]]\n[[io:murky]]\n[[kn:murky]]\n[[lt:murky]]\n[[hu:murky]]\n[[mg:murky]]\n[[ml:murky]]\n[[my:murky]]\n[[pl:murky]]\n[[ru:murky]]\n[[fi:murky]]\n[[sv:murky]]\n[[ta:murky]]\n[[te:murky]]\n[[vi:murky]]\n[[zh:murky]]"}]}}}}

content['query']['pages']['54377']['revisions'][0]['*'] yields the meaning and other related content.

I am interested in retrieving the meaning of the word. How can I do it? In this scenario I find the API to be unusable, since the pageid, which is required to access the contents, is dynamically generated. Yes, I can use XML and find the content inside the <rev></rev> tag, but how will someone fetch the synonyms alone, or the part of speech alone?

--
Thanks & Regards
"Talk is cheap, show me the code" -- Linus Torvalds
kracekumar
www.kracekumar.com
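The dynamic pageid does not have to be hardcoded: the 'pages' object can be iterated, so its key never needs to be known in advance. Pulling out a single subsection (synonyms, part of speech) then becomes a wikitext-parsing problem; the regex below is a crude sketch over a trimmed-down sample of the JSON above, not a robust Wiktionary parser.

```python
import re

def first_revision_text(api_json):
    """Get the wikitext without hardcoding the dynamic pageid:
    iterate over whatever keys 'pages' happens to contain."""
    pages = api_json["query"]["pages"]
    page = next(iter(pages.values()))
    return page["revisions"][0]["*"]

def section(wikitext, heading):
    """Crude sketch: pull the body of one ====Heading==== section,
    stopping at the next heading."""
    pattern = r"====\s*%s\s*====\n(.*?)(?=\n=|\Z)" % re.escape(heading)
    m = re.search(pattern, wikitext, re.DOTALL)
    return m.group(1).strip() if m else None

# Trimmed-down version of the response quoted in the thread.
sample = {"query": {"pages": {"54377": {"pageid": 54377, "title": "murky",
    "revisions": [{"*": "===Adjective===\n{{en-adj}}\n\n"
                        "# Hard to see through.\n\n"
                        "====Synonyms====\n* [[dark]]\n\n"
                        "====Related terms====\n* [[murk]]"}]}}}}

text = first_revision_text(sample)
synonyms = section(text, "Synonyms")
```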

16 Dec '11
Hi,

Would you know how I could use the API to get articles about musicians among disambiguation results? Examples are "Queen" and "Calvin Russell", which can refer to different things. I guess I have to find a way to specify "Music", which is the name of a disambiguation section that often appears on those pages.

I've tried several approaches, like:

http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=conten…

and

http://en.wikipedia.org/w/api.php?action=query&list=search&srlimit=5&format…

But I don't see a way to do this without excessive parsing and querying.

Thanks in advance, regards,
Adrien
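There is no API parameter that filters disambiguation entries by topic, so one heuristic two-step approach (an assumption, not an API feature) is: list the pages the disambiguation page links to, then fetch each candidate's categories and keep those whose categories mention musicians or bands. The snippet only builds the two request URLs.

```python
from urllib.parse import urlencode

API = "http://en.wikipedia.org/w/api.php"

def disambig_links_url(title):
    # Step 1: list the pages a disambiguation page links to.
    return API + "?" + urlencode({
        "action": "query", "prop": "links", "titles": title,
        "pllimit": "500", "format": "json"})

def categories_url(titles):
    # Step 2: fetch categories for the candidate pages; client-side,
    # keep those whose categories contain "musician" or "band".
    return API + "?" + urlencode({
        "action": "query", "prop": "categories",
        "titles": "|".join(titles),
        "cllimit": "500", "format": "json"})

step1 = disambig_links_url("Queen")
step2 = categories_url(["Queen (band)", "Queen regnant"])
```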
action=undelete limit on timestamps?
by Jim Tittsler 15 Dec '11

15 Dec '11
Is there a limit on the number of timestamps of a page that can be undeleted in a single API call?

--
Jim Tittsler          http://www.OnNZ.net/     GPG: 0x01159DB6
Python Starship       http://Starship.Python.net/crew/jwt/
Mailman IRC           irc://irc.freenode.net/#mailman
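Multi-value API parameters are typically capped at 50 values per request for normal users (an assumption to verify against your server for undelete timestamps specifically), so a defensive client can batch the list regardless of where the real limit sits:

```python
def chunks(seq, size=50):
    """Split a long list (e.g. of revision timestamps) into
    API-sized batches; 50 is the usual per-request cap for
    multi-value parameters, but verify it for action=undelete."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# 120 placeholder timestamps -> batches of 50, 50, and 20.
timestamps = ["ts%03d" % i for i in range(120)]
batches = chunks(timestamps)
```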

08 Dec '11
Hi,

I'm creating a .NET library to access the API, but I'm not sure how to approach selecting which properties the result should have. For example, when the user wants to access the “info” of some page, I want to offer him all the available properties. And if he selects “length” and “fullurl”, I should form a URL like

http://en.wikipedia.org/w/api.php?action=query&prop=info&titles=Main%20Page…

The “length” property is included by default, and “fullurl” is added by “inprop=url”. I can get most other information about the structure of the API by using “action=paraminfo”, but information about what result properties are available, and how they correspond to the prop values, seems to be missing.

Is this information available somewhere? Is trying the query and seeing what properties are returned the best I can do currently? Do you think it would be a good idea if I (or someone else) modified “action=paraminfo” to include this information in some form?

Petr Onderka
[[en:User:Svick]]
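Since paraminfo does not expose the result-property-to-parameter mapping, a client library ends up carrying a hand-maintained table. A minimal sketch of that design (in Python rather than .NET; the table entries are examples, not a complete list):

```python
from urllib.parse import urlencode

API = "http://en.wikipedia.org/w/api.php"

# Result property -> (parameter, value) that enables it.
# None means the property is returned by default.
INFO_PROPS = {
    "length": None,
    "fullurl": ("inprop", "url"),
    "editurl": ("inprop", "url"),
}

def info_url(title, wanted):
    params = {"action": "query", "prop": "info",
              "titles": title, "format": "json"}
    extras = {}
    for name in wanted:
        spec = INFO_PROPS.get(name)
        if spec:
            key, value = spec
            extras.setdefault(key, set()).add(value)
    for key, values in extras.items():
        params[key] = "|".join(sorted(values))
    return API + "?" + urlencode(params)

url = info_url("Main Page", ["length", "fullurl"])
```

The set-based merge matters because several requested properties ("fullurl" and "editurl" here) can map to the same parameter value and should not duplicate it.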
I'm computing the URL of an image by the following (the md5 of the filename, then the first char and the first two chars, concatenated):

    val md = MessageDigest.getInstance("MD5")
    val messageDigest = md.digest(fileName.getBytes)
    val md5 = (new BigInteger(1, messageDigest)).toString(16)
    val hash1 = md5.substring(0, 1)
    val hash2 = md5.substring(0, 2)
    val urlPart = hash1 + "/" + hash2 + "/" + fileName

Most of the time the function works correctly, but in a few cases it is incorrect. For "Stewie_Griffin.png", I get 2/26/Stewie_Griffin.png, but the real one is 0/02/Stewie_Griffin.png. The source file info is here:

http://en.wikipedia.org/wiki/File:Stewie_Griffin.png
http://upload.wikimedia.org/wikipedia/en/0/02/Stewie_Griffin.png

Any ideas why the hashing scheme doesn't work sometimes? I posted this question on Stack Overflow, but I might be able to get a better answer here:

http://stackoverflow.com/questions/8389616/does-wikipedia-use-differen…

--
@tommychheng
http://tommy.chheng.com
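The mismatch quoted above is consistent with a leading-zero bug: BigInteger.toString(16) drops leading zeros, so a digest beginning "02…" comes out as "2…", turning the correct 0/02 prefix into 2/26. A zero-padded hex digest fixes it; sketched here in Python, where hexdigest() is always 32 padded characters:

```python
import hashlib

def upload_path(file_name):
    """Upload-path prefix for a Wikipedia/Commons image: md5 of the
    name (spaces as underscores), then first hex char / first two
    hex chars / name.  hexdigest() keeps leading zeros, unlike the
    BigInteger.toString(16) approach in the question, which drops
    them and so mangles roughly 1 in 16 filenames."""
    name = file_name.replace(" ", "_")
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return "%s/%s/%s" % (digest[0], digest[:2], name)
```

In the Scala version, the equivalent fix is to format the digest bytes with zero padding (e.g. String.format("%032x", new BigInteger(1, messageDigest))) instead of calling toString(16) directly.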
Hello,

Question: is it possible to display the latest version of a wikipedia.org page in another MediaWiki site, simply by GET, using API:Import <https://www.mediawiki.org/wiki/API:Import>? Or is there an easier way?

More info: I am building my first wiki using MediaWiki 1.17 at http://fa.irfanpedia.org. I am a total MediaWiki newbie. The site will contain original research about Irfan, a form of Gnosticism similar to Sufism. As a convenience to users, I would like to display the latest content on Wikipedia about the subject. The exact pages have already been identified.

I've already read about XML export/import, but I need the latest version of the page, and I don't want to constantly export from Wikipedia manually. So is it possible to create a URL for each target page that pulls the content of a particular wikipedia.org page into the destination site on the fly? A 1:1 relationship.

If so, is this mailing list the right place to ask for technical help? I can't figure out the process for obtaining a token. I am also not sure whether the XML produced by action=import can be immediately displayed to the user, or whether it still has to go through another step before it is human-readable.

Thank you in advance.
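If the goal is only to *display* the live Wikipedia content rather than copy it into the local wiki's database, action=parse is a simpler route than import: it is a plain GET (no token needed for reads) and returns the rendered HTML of the current revision. A sketch that builds such a URL (attribution/licensing of the reused content still needs handling separately):

```python
from urllib.parse import urlencode

def latest_html_url(title):
    # action=parse renders the current revision of the page as HTML,
    # which can then be embedded in the destination site.
    params = {"action": "parse", "page": title,
              "prop": "text", "format": "json"}
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

url = latest_html_url("Irfan")
```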
