Mediawiki-api February 2012

mediawiki-api@lists.wikimedia.org
  • 16 participants
  • 11 discussions

20 Jan '26
Hi there,

I'm using the API to extract the raw wiki text from my pages, using the "?action=query&titles=Main_Page&export&exportnowrap" syntax. That works perfectly.

Now I would like to get the templates expanded out in the result, so I use: "?action=query&titles=Main_Page&prop=revisions&rvlimit=1&rvprop=content&rvexpandtemplates", which does the job, as expected, but it also strips out the comments.

My problem is that the comments are meaningful to me (I use them to help process the wiki text in subsequent steps).

Is there a way to expand templates with the API, but leave the comments intact?

Thanks,
Kevin
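A minimal Python sketch of the two calls described above, using the third-party requests library; passing flag parameters such as export as empty strings is an assumption about how the endpoint accepts them:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    # Raw wikitext via export: comments survive, templates stay unexpanded.
    raw = requests.get(API, params={
        "action": "query",
        "titles": "Main_Page",
        "export": "",
        "exportnowrap": "",
    }).text

    # Expanded revision content: templates are expanded, but HTML comments
    # (<!-- ... -->) are stripped along the way, which is the problem here.
    expanded = requests.get(API, params={
        "action": "query",
        "titles": "Main_Page",
        "prop": "revisions",
        "rvlimit": 1,
        "rvprop": "content",
        "rvexpandtemplates": "",
        "format": "json",
    }).json()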
Need to extract abstract of a wikipedia page
by aditya srinivas 23 Nov '23

Hello,

I am writing a Java program to extract the abstract of the wikipedia page given the title of the wikipedia page. I have done some research and found out that the abstract will be in rvsection=0.

So, for example, if I want the abstract of the "Eiffel Tower" wiki page, I query the API in the following way:

http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…

and parse the XML data we get back, taking the wikitext in the tag <rev xml:space="preserve">, which represents the abstract of the wikipedia page.

But this wikitext also contains the infobox data, which I do not need. I would like to know if there is any way I can remove the infobox data and get only the wikitext related to the page's abstract, or if there is any alternative method by which I can get the abstract of the page directly.

Looking forward to your help.

Thanks in advance,
Aditya Uppu
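For illustration, a Python sketch of fetching section 0 as described (JSON output is used here for brevity instead of XML; strip_leading_templates is a hypothetical, deliberately crude helper that drops a leading infobox by brace counting):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    resp = requests.get(API, params={
        "action": "query",
        "prop": "revisions",
        "titles": "Eiffel Tower",
        "rvprop": "content",
        "rvsection": 0,        # section 0 is the lead section ("abstract")
        "format": "json",
    }).json()

    page = next(iter(resp["query"]["pages"].values()))
    wikitext = page["revisions"][0]["*"]   # "*" holds the revision content

    def strip_leading_templates(text):
        """Crudely drop leading {{...}} blocks (e.g. an infobox) by brace counting."""
        text = text.lstrip()
        while text.startswith("{{"):
            depth = i = 0
            while i < len(text):
                if text.startswith("{{", i):
                    depth += 1
                    i += 2
                elif text.startswith("}}", i):
                    depth -= 1
                    i += 2
                    if depth == 0:
                        break
                else:
                    i += 1
            text = text[i:].lstrip()
        return text

    abstract = strip_leading_templates(wikitext)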
I want to add a widget to a web page that will allow users to enter search terms and search wikimedia for images that match the terms. I have implemented a similar widget for flickr, using their API, but am having trouble doing the same with wikimedia.

Basically, I would like to replicate the functionality of the commons.wikimedia.org search page. Ideally I would like to be able to get a Category listing (ex. http://commons.wikimedia.org/wiki/Chartres_Cathedral) or true search results (ex. http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=%22c…), but at this point I would be happy with either.

I've tried using the allimages list, but that is not adequate. Is there any other way to search images using the API?

I have also been looking at Freebase and DBPedia. These seem like they might do what I want, but RDF is completely new to me and I'm still trying to figure out the basics of it. If anyone can point me in the right direction for either of those resources, I would appreciate it.

Regards,
Tim Helck
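One possible way to approximate the Special:Search behaviour from the API, sketched in Python; the restriction to namespace 6 (the File: namespace) is the key assumption:

    import requests

    API = "https://commons.wikimedia.org/w/api.php"

    def search_images(terms, limit=20):
        """Full-text search limited to the File: namespace (6) on Commons."""
        resp = requests.get(API, params={
            "action": "query",
            "list": "search",
            "srsearch": terms,
            "srnamespace": 6,
            "srlimit": limit,
            "format": "json",
        }).json()
        return [hit["title"] for hit in resp["query"]["search"]]

    # Thumbnail URLs and metadata can then be fetched per title
    # with prop=imageinfo&iiprop=url.
    print(search_images("chartres cathedral"))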
convert API results to HTML
by Fred Zimmerman 23 Feb '12

Hi,

I am looking for a convenient way to convert individual documents returned from the MediaWiki API to standalone HTML documents. Currently I am retrieving documents via the action/render construct:

http://en.wikipedia.org/w/index.php?action=render&title="

I am encountering two problems:

1) I have to put a "shell" of crudely cut and pasted <html><head><body> etc. around the rendered html.
2) I need to strip out the external hrefs from the results, and I don't have a good way to do this.

I am taking a fresh look and wondering whether I should be retrieving the docs in JSON or XML and then using a conversion program to turn those into nice clean HTML docs. Does anyone have any suggestions or working examples?

FredZ

-----------------------------------------------------
Subscribe to the Nimble Books Mailing List
http://eepurl.com/czS- for monthly updates
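A rough Python sketch of the workflow described above: wrap the action=render fragment in a shell and drop external links with a regex. The class="external" match is an assumption about MediaWiki's link markup, and a regex is fragile here; an HTML parser would be more robust:

    import re
    import requests

    def render_standalone(title):
        # Fetch the parsed HTML fragment for one page.
        fragment = requests.get(
            "https://en.wikipedia.org/w/index.php",
            params={"action": "render", "title": title},
        ).text
        # Remove external links but keep their link text.
        fragment = re.sub(
            r'<a [^>]*class="external[^"]*"[^>]*>(.*?)</a>',
            r"\1",
            fragment,
            flags=re.S,
        )
        # The hand-rolled "shell" around the fragment.
        return ('<!DOCTYPE html><html><head><meta charset="utf-8"><title>'
                + title + "</title></head><body>" + fragment + "</body></html>")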
Spoof user during edit?
by Jim Safley 23 Feb '12

Is it possible to spoof the user IP address during action=edit? It sounds like an unsavory question, but let me explain. Let's say an anonymous user edits a page in my application, which, in turn, uses a MediaWiki API client to make the necessary token/edit requests. The resulting user is not the IP address of the anonymous user; rather, it's the IP address of the API client. As a result, every anonymous user that uses my application is truly, irrevocably anonymous. This is why I wonder if it's possible to send an arbitrary user IP address along with the edit request.

Jim
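For context, a minimal Python sketch of the token/edit round trip such a client performs (example.org is a placeholder wiki; the prop=info&intoken=edit token fetch matches the API of that era, while newer wikis use meta=tokens):

    import requests

    API = "https://example.org/w/api.php"   # placeholder wiki
    session = requests.Session()

    # Fetch an edit token for the target page.
    resp = session.get(API, params={
        "action": "query",
        "prop": "info",
        "intoken": "edit",
        "titles": "Sandbox",
        "format": "json",
    }).json()
    page = next(iter(resp["query"]["pages"].values()))
    token = page["edittoken"]

    # The wiki attributes this edit to the requesting client (its session
    # user or IP), not to the end user behind the application; action=edit
    # takes no parameter for supplying a different IP.
    session.post(API, data={
        "action": "edit",
        "title": "Sandbox",
        "appendtext": "\ntest",
        "token": token,
        "format": "json",
    })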
I invite you to the yearly Berlin hackathon. This is the premier event for the MediaWiki and Wikimedia technical community. We'll be hacking, designing, and socialising.

Our goals for the event are to bring 100-150 people together, with lots of people who have not attended such events before. User scripts, gadgets, API use, Toolserver, Wikimedia Labs, mobile, structured data, templates -- if you are into any of these things, we want you to come!

Some financial assistance will be available -- more details soon.

This event will be hosted by Wikimedia Germany (WMDE) and supported by the Wikimedia Foundation. Thank you, WMDE!

Dates: June 1-3 2012. Barely-started wiki page, no registration details yet: https://www.mediawiki.org/wiki/Berlin_Hackathon_2012 . Organizers: me and WMDE's Nicole Ebber, with assistance from Lydia Pintscher and Daniel Kinzler.

Mark your calendars!

--
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation
Most of the php bot frameworks, including some which are heavily used on en.WP, use serialized php.

I do not believe MW should make such a decision due to IE being a poorly-behaved browser, this being the only argument presented.

Amgine

08 Feb '12
Hi, this idea had floated around for quite some time, but now that bug 34257 [1] was added to the long list of problems, I would like to step up and start some progress. We [2] propose to remove the following formats [3]:

* WDDX - doesn't seem to be used by anyone. Doesn't look sane either.
* YAML - we don't serve real YAML anyway; currently it's just a subset of JSON.
* rawfm - was created for debugging the JSON formatter aeons ago, not useful for anything now.
* txt, dbg, dump - the only reason they were added is that it was possible to add them; they don't serve the purpose of machine/machine communication.

So, only 3 formats would remain:

* JSON - *the* recommended API format
* XML - evil and clumsy, but sadly used too widely to be removed in the foreseeable future
* php - this one is used by several extensions and probably by some third-party reusers, so we won't remove it this time. However, any new uses of it should be discouraged.

We plan to remove the aforementioned formats as soon as MediaWiki 1.19 is branched so that these changes will take effect in 1.20, but would like to hear from you first if there are good reasons why we shouldn't do it or should postpone it. Please have your say.

------
[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=34257
[2] Me and Roan Kattouw, one of the API's primary developers
[3] https://www.mediawiki.org/wiki/API:Data_formats

--
Best regards,
Max Semenik ([[User:MaxSem]])
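For client code, moving onto the recommended format is a one-parameter change; a small Python illustration:

    import requests

    resp = requests.get("https://en.wikipedia.org/w/api.php", params={
        "action": "query",
        "meta": "siteinfo",
        "format": "json",   # instead of yaml, wddx, rawfm, txt, dbg or dump
    })
    data = resp.json()
    print(data["query"]["general"]["sitename"])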

07 Feb '12
I posted this on the talk page for the API, trying to find an answer. I did try with 16.5, but still get the same error: it is reporting a doubled Content-Length. Where in the code is the content-length header formed? Or is it relying on the server?

Doubled Content-Length in HTTP Header

MediaWiki version: 16.0 - also tried 16.5
PHP version: 5.2.17 (cgi)
MySQL version: 5.0.91-log
URL: www.isogg.org/w/api.php

I am trying to track down a bug in the api which is causing a double content-length in the header. This is causing a lot of issues with a python bot. Here is the report from web-sniffer showing the content of the api.php call from this wiki. All other pages when called, i.e. the Main page, etc., only report 1 content-length. Is the api forcing the headers? Why is it doubling only the one?

Status: HTTP/1.1 200 OK
Date: Mon, 30 Jan 2012 14:31:25 GMT
Content-Type: text/html; charset=utf-8
Connection: close
Server: Nginx / Varnish
X-Powered-By: PHP/5.2.17
MediaWiki-API-Error: help
Cache-Control: private
Content-Encoding: gzip
Vary: Accept-Encoding
Content-Length: 16656
Content-Length: 16656

As you can see, this is a Nginx server. On an Apache server with 16.0, only one content-length is sent. Could that be the issue, and how do I solve it?

Thanks,
Tom
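One way to confirm the duplication from the client side, sketched in Python; http.client keeps repeated header fields, whereas many higher-level libraries silently merge them:

    import http.client

    conn = http.client.HTTPConnection("www.isogg.org")
    conn.request("GET", "/w/api.php")
    resp = conn.getresponse()

    # getheaders() returns every header field as (name, value) pairs,
    # so a doubled Content-Length shows up twice in this list.
    lengths = [v for k, v in resp.getheaders() if k.lower() == "content-length"]
    print(lengths)   # e.g. ['16656', '16656'] if the header is duplicated
    conn.close()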
Search acronym words
by Parsa 02 Feb '12

Dear Members,

I'm using the mediawiki API to get a result with the following code:

http://en.wikipedia.org/w/api.php?action=opensearch&search=pc&limit=20&form…

But if you look at this link:

http://en.wikipedia.org/wiki/PC

there is a description like this:

   PC most commonly refers to:
   * Personal computer, a computer whose original sales price, size, and capabilities make it useful for individuals
   * Political correctness, language or behavior that appears calculated to provide a minimum of offense

but with my API call, I cannot get such a result. Please help me: how can I change my API call to get a result like the above?

Actually, I want to send an acronym word (e.g. PC) via the API and get the suggestions (e.g. Personal computer, Political correctness). Please help me to find a solution for that; I searched a lot and could not find one.

Yours Sincerely,
Sasan Moshksar
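action=opensearch only returns matching page titles, so one possible workaround (an illustrative approach, not the only one) is to fetch the outgoing article links of the disambiguation page itself; a Python sketch:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    resp = requests.get(API, params={
        "action": "query",
        "titles": "PC",
        "prop": "links",
        "plnamespace": 0,     # article namespace only
        "pllimit": "max",
        "format": "json",
    }).json()

    page = next(iter(resp["query"]["pages"].values()))
    suggestions = [link["title"] for link in page.get("links", [])]
    # e.g. ['Personal computer', 'Political correctness', ...]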