Mediawiki-api, December 2015

mediawiki-api@lists.wikimedia.org
  • 4 participants
  • 8 discussions
MediaWiki API in specific usage
by Support Technique, 19 Feb '26
Hello,

I'm developing a Joomla! 3 plugin based on the MediaWiki API. The goal is to import all the data (titles and descriptions or definitions) from the wiki database into a Joomla! website database, because the Joomla! site will ultimately be used as a dictionary. Can you tell me how to proceed, please? Unfortunately, the available MediaWiki tutorials are not very clear on this use case. Thanks for answering as soon as you can!

--
*Modeste EKAMBI*
Senior Software Engineer - Internis Group <http://www.internis-group.com/>
Expert agency for web and IT projects - custom development - web and mobile development, applications and websites
Skype: internis.group
Phone: 699-789-772 / 679-998-516
Mail: support(a)internis-group.com
Blog: http://blog.internis-group.com
Site: http://www.internis-group.com
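A minimal sketch of the kind of export loop being asked about, in Python rather than Joomla/PHP: it walks every page via generator=allpages and pulls an intro extract for each one. The endpoint URL is a placeholder, and prop=extracts assumes the source wiki has the TextExtracts extension; prop=revisions with rvprop=content would return raw wikitext instead.

```python
# Sketch: iterate over all pages of a wiki and collect (title, intro text)
# pairs, ready to be written into another database.
import requests

API = "https://example.org/w/api.php"  # placeholder wiki endpoint

def iter_pages():
    params = {
        "action": "query",
        "format": "json",
        "generator": "allpages",
        "gaplimit": 20,
        "prop": "extracts",
        "exintro": 1,        # only the lead section
        "explaintext": 1,    # plain text, no HTML
        "exlimit": "max",
    }
    session = requests.Session()
    while True:
        data = session.get(API, params=params).json()
        for page in data.get("query", {}).get("pages", {}).values():
            yield page["title"], page.get("extract", "")
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow the API continuation

for title, text in iter_pages():
    print(title, "->", text[:60])
```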

20 Jan '26
Hi there,

I'm using the API to extract the raw wiki text from my pages, using the "?action=query&titles=Main_Page&export&exportnowrap" syntax. That works perfectly.

Now I would like to get the templates expanded out in the result, so I use "?action=query&titles=Main_Page&prop=revisions&rvlimit=1&rvprop=content&rvexpandtemplates", which does the job, as expected, but it also strips out the comments.

My problem is that the comments are meaningful to me (I use them to help process the wiki text in subsequent steps). Is there a way to expand templates with the API, but leave the comments intact?

Thanks,
Kevin
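One possibility worth testing (a sketch, not a confirmed answer): fetch the raw wikitext first, then expand it with action=expandtemplates, which in recent MediaWiki versions accepts an includecomments flag; whether it keeps the comments you rely on should be verified against your wiki.

```python
# Sketch: expand templates in two steps so HTML comments can be requested
# explicitly via action=expandtemplates&includecomments.
import requests

API = "https://en.wikipedia.org/w/api.php"
session = requests.Session()

# Step 1: the raw, unexpanded wikitext of the page.
raw = session.get(API, params={
    "action": "query", "format": "json",
    "titles": "Main_Page",
    "prop": "revisions", "rvlimit": 1, "rvprop": "content",
}).json()
page = next(iter(raw["query"]["pages"].values()))
wikitext = page["revisions"][0]["*"]

# Step 2: expand templates while asking the API to keep comments.
expanded = session.post(API, data={
    "action": "expandtemplates", "format": "json",
    "title": "Main_Page",
    "text": wikitext,
    "prop": "wikitext",
    "includecomments": 1,
}).json()
print(expanded["expandtemplates"]["wikitext"])
```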
Need to extract abstract of a wikipedia page
by aditya srinivas, 23 Nov '23
Hello,

I am writing a Java program to extract the abstract of a Wikipedia page given its title. I have done some research and found out that the abstract will be in rvsection=0. So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
and parse the XML data we get back, taking the wikitext in the <rev xml:space="preserve"> tag, which represents the abstract of the page.

But this wikitext also contains the infobox data, which I do not need. I would like to know if there is any way in which I can remove the infobox data and get only the wikitext related to the page's abstract, or if there is an alternative method by which I can get the abstract of the page directly.

Looking forward to your help. Thanks in advance,
Aditya Uppu
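A sketch of one alternative: English Wikipedia runs the TextExtracts extension, so prop=extracts with exintro and explaintext returns just the lead section as plain text, with the infobox and markup already stripped. The same parameters can be sent from a Java HTTP client; Python is used here only to keep the example short.

```python
# Sketch: fetch only the plain-text lead section of a page via TextExtracts.
import requests

resp = requests.get("https://en.wikipedia.org/w/api.php", params={
    "action": "query",
    "format": "json",
    "prop": "extracts",
    "exintro": 1,        # only the section before the first heading
    "explaintext": 1,    # plain text instead of HTML
    "titles": "Eiffel Tower",
})
page = next(iter(resp.json()["query"]["pages"].values()))
print(page["extract"])
```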
When list=allusers is used with auactiveusers, a property 'recenteditcount' is returned in the result. In bug 67301[1] it was pointed out that this property includes various other logged actions, and so should really be named something like "recentactions".

Gerrit change 130093,[2] merged today, adds the "recentactions" result property. "recenteditcount" is also returned for backwards compatibility, but will be removed at some point during the MediaWiki 1.25 development cycle.

Any clients using this property should be updated to use the new property name. The new property will be available on WMF wikis with 1.24wmf12; see https://www.mediawiki.org/wiki/MediaWiki_1.24/Roadmap for the schedule.

[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=67301
[2]: https://gerrit.wikimedia.org/r/#/c/130093/

--
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
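A hedged sketch of the client-side change the announcement asks for: read the new recentactions key and fall back to the deprecated recenteditcount when talking to wikis that predate the change.

```python
# Sketch: prefer "recentactions", falling back to "recenteditcount".
import requests

resp = requests.get("https://en.wikipedia.org/w/api.php", params={
    "action": "query",
    "format": "json",
    "list": "allusers",
    "auactiveusers": 1,
    "aulimit": 10,
})
for user in resp.json()["query"]["allusers"]:
    actions = user.get("recentactions", user.get("recenteditcount", 0))
    print(user["name"], actions)
```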
Dump corrupted ?
by Luigi Assom, 23 Dec '15
Hello,

Not sure if this is the proper mailing list to notify, but I am experiencing problems with corrupted data in a wiki dump. I tried three times to download the last English dump (multistream)
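A sketch of one way to rule out a bad download before blaming the dump itself: hash the local file and compare it with the checksum list that dumps.wikimedia.org publishes alongside each dump (the exact file names vary by dump date, so the path below is only an example).

```python
# Sketch: compute the MD5 of a downloaded dump file for comparison with
# the published *-md5sums.txt entry for that dump.
import hashlib

def md5_of(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

local = md5_of("enwiki-latest-pages-articles-multistream.xml.bz2")
# Compare against the matching line in the dump's published md5sums file.
print(local)
```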
If you're also on wikitech-l, you may have seen the recent announcement[1] of some work by WMF's Discovery team to improve the search results popped up by the search box in wiki pages. Assuming it continues to receive positive feedback, it's rather likely that the code behind their temporary "action=cirrus-suggest" API module will become the backing engine behind action=opensearch and list=prefixsearch.

This brought to my attention that "list=prefixsearch" has never been particularly well-documented, so Gerrit change 260383[2] was submitted and merged to clarify the issue. Note this isn't a change in behavior, it's a clarification of the existing behavior to avoid future confusion when that behavior is improved.

ApiQueryPrefixSearch was added in Gerrit change 123118[3] to be like action=opensearch but usable as a generator. If your client code is using list=prefixsearch in the expectation that it would work like the similarly-named web UI page Special:PrefixIndex, you should review your usage to make sure that it is actually behaving as you intend. If you are really wanting a list of titles with a prefix rather than a *search* for pages "matching" a prefix in some loosely-defined sense, you should be using list=allpages with the apprefix parameter instead.

[1]: https://lists.wikimedia.org/pipermail/wikitech-l/2015-December/084356.html
[2]: https://gerrit.wikimedia.org/r/#/c/260383/
[3]: https://gerrit.wikimedia.org/r/#/c/123118/

--
Brad Jorsch (Anomie)
Senior Software Engineer
Wikimedia Foundation
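A small sketch contrasting the two requests the announcement distinguishes: list=prefixsearch, a search for pages "matching" a prefix, versus list=allpages with apprefix, a literal listing of titles that start with the prefix.

```python
# Sketch: run both queries side by side and compare the titles returned.
import requests

API = "https://en.wikipedia.org/w/api.php"

search = requests.get(API, params={
    "action": "query", "format": "json",
    "list": "prefixsearch", "pssearch": "Eiffel", "pslimit": 5,
}).json()["query"]["prefixsearch"]

listing = requests.get(API, params={
    "action": "query", "format": "json",
    "list": "allpages", "apprefix": "Eiffel", "aplimit": 5,
}).json()["query"]["allpages"]

print("prefixsearch:", [p["title"] for p in search])
print("allpages/apprefix:", [p["title"] for p in listing])
```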
Long ago, the only mechanism for session management in MediaWiki was certain cookies set by the User class. When ApiLogin was written, in addition to setting these cookies as usual it also returned some of the values needed to construct these cookies on the client. Presumably this was to make things easier for clients that somehow couldn't handle the standard cookie headers.

Then CentralAuth came along. Now, constructing the cookies manually would log you in to the local wiki only, without taking advantage of the SUL mechanism.

Then T55032[1] happened, and clients that were using the manual-construction mechanism had to update their code because one of the cookie names changed and that wasn't part of the data being returned.

And soon, we'll have SessionManager and AuthManager, which will make it possible for login to easily happen in ways that don't involve cookies at all.

So it's time to eliminate the pretense that clients can manually construct the cookies instead of handling the standard HTTP cookie headers. The tentative plan is to deprecate them now and then remove them sometime during 1.28; if anyone objects to this schedule, please raise your concerns in https://phabricator.wikimedia.org/T121527.

[1]: https://phabricator.wikimedia.org/T55032

--
Brad Jorsch (Anomie)
Senior Software Engineer
Wikimedia Foundation
_______________________________________________
Mediawiki-api-announce mailing list
Mediawiki-api-announce(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce
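A minimal sketch of what the announcement recommends: let the HTTP client's cookie jar handle the Set-Cookie headers rather than rebuilding MediaWiki's session cookies by hand. The two-step action=login flow of this era is shown; the wiki URL and credentials are placeholders.

```python
# Sketch: log in through standard cookie handling instead of constructing
# session cookies from the ApiLogin response values.
import requests

API = "https://example.org/w/api.php"  # placeholder wiki
session = requests.Session()           # keeps cookies automatically

first = session.post(API, data={
    "action": "login", "format": "json",
    "lgname": "BotUser", "lgpassword": "secret",
}).json()

if first["login"]["result"] == "NeedToken":
    session.post(API, data={
        "action": "login", "format": "json",
        "lgname": "BotUser", "lgpassword": "secret",
        "lgtoken": first["login"]["token"],
    })

# From here, every request made through `session` carries the session
# cookies the server set; nothing is constructed client-side.
```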
Dear Mediawiki-api,

My web + hybrid mobile app uses the MediaWiki API to search for Wikipedia articles associated with a particular latitude/longitude. The searched coordinates are determined at run-time. In a session, the user may view several lists of article summaries and several Wikipedia articles.

Article summaries of nearby articles are displayed in a list and, when one of the summaries is clicked, the associated Wikipedia article is displayed as-is in a frame that is in my app.

I am not sure if this is acceptable to Wikipedia. Is it? Also, assuming it is, what sort of credit/attribution is appropriate?

Thanks,
Jim Andrews
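A sketch of the kind of coordinate lookup described above, assuming it is built on the GeoData extension's list=geosearch module (which is what Wikipedia exposes for latitude/longitude searches); the coordinates are just an example.

```python
# Sketch: find Wikipedia articles near a given latitude/longitude.
import requests

resp = requests.get("https://en.wikipedia.org/w/api.php", params={
    "action": "query",
    "format": "json",
    "list": "geosearch",
    "gscoord": "48.8583|2.2944",   # latitude|longitude, chosen at run time
    "gsradius": 10000,             # search radius in metres
    "gslimit": 10,
})
for hit in resp.json()["query"]["geosearch"]:
    print(hit["title"], hit["dist"])
```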