
Mediawiki-api April 2020

mediawiki-api@lists.wikimedia.org
  • 4 participants
  • 5 discussions
MediaWiki API in specific usage
by Support Technique 19 Feb '26

19 Feb '26
Hello,

I'm developing a Joomla! 3 plugin based on the MediaWiki API. The goal is to import all data (titles and descriptions or definitions) from a wiki database into a Joomla! website database, because the Joomla! site will ultimately be used as a dictionary. Can you tell me how to proceed, please? Unfortunately, the available tutorials on MediaWiki are not very clear on this usage situation. Thanks for answering as soon as you can!

--
Modeste EKAMBI
Senior Software Engineer - Internis Group
Expert agency for web and IT projects - Custom development - Web and mobile development, applications and websites
Skype: internis.group
Phone: 699-789-772 / 679-998-516
Mail: support(a)internis-group.com
Blog: http://blog.internis-group.com
Site: http://www.internis-group.com
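The bulk import described above can be sketched with the Action API's allpages generator joined with intro extracts (prop=extracts, which requires the TextExtracts extension). The endpoint, batch size, and helper names below are illustrative assumptions, not a tested Joomla! integration; the resulting (title, extract) pairs would still need to be written into the Joomla database separately.

```python
# Sketch: harvest page titles and intro extracts from a MediaWiki site,
# suitable as the source side of a wiki-to-Joomla dictionary import.
# Assumptions: the target wiki has the TextExtracts extension installed,
# and the API endpoint below is adjusted to the wiki in question.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"  # assumed endpoint

def build_query(cont=None):
    """Build one request URL: generator=allpages plus intro extracts."""
    params = {
        "action": "query",
        "format": "json",
        "generator": "allpages",
        "gaplimit": "50",        # pages per batch (assumed; tune as needed)
        "prop": "extracts",
        "exintro": "1",          # only the text before the first section
        "explaintext": "1",      # plain text instead of HTML
    }
    if cont:
        params.update(cont)      # continuation tokens from the previous response
    return API + "?" + urlencode(params)

def harvest():
    """Yield (title, extract) pairs, following API continuation until done."""
    cont = None
    while True:
        with urlopen(build_query(cont)) as resp:
            data = json.load(resp)
        for page in data.get("query", {}).get("pages", {}).values():
            yield page["title"], page.get("extract", "")
        cont = data.get("continue")
        if not cont:
            break
```

Following the `continue` object verbatim on each request is the documented way to page through large result sets; hard-coding offsets is not.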

20 Jan '26
Hi there,

I'm using the API to extract the raw wiki text from my pages, using the "?action=query&titles=Main_Page&export&exportnowrap" syntax. That works perfectly.

Now I would like to get the templates expanded out in the result, so I use:
"?action=query&titles=Main_Page&prop=revisions&rvlimit=1&rvprop=content&rvexpandtemplates",
which does the job, as expected, but it also strips out the comments. My problem is that the comments are meaningful to me (I use them to help process the wiki text in subsequent steps).

Is there a way to expand templates with the API, but leave the comments intact?

Thanks,
Kevin
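One possible approach, sketched below: fetch the raw wikitext first, then run it through action=expandtemplates, whose includecomments parameter asks the parser to keep `<!-- ... -->` comments in the expansion. Support for that parameter depends on the wiki's MediaWiki version, so treat this as an assumption to verify; the endpoint and helper names are illustrative.

```python
# Sketch: expand templates via action=expandtemplates while asking the
# parser to preserve HTML comments (includecomments=1, where supported).
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"  # assumed endpoint

def expand_request(wikitext, title="Main_Page"):
    """Build a POST body for action=expandtemplates that keeps comments."""
    return urlencode({
        "action": "expandtemplates",
        "format": "json",
        "prop": "wikitext",
        "title": title,            # context title for {{PAGENAME}} etc.
        "text": wikitext,
        "includecomments": "1",    # keep <!-- ... --> in the output
    }).encode()

def expand(wikitext, title="Main_Page"):
    """POST the wikitext and return the expanded result."""
    with urlopen(API, data=expand_request(wikitext, title)) as resp:
        return json.load(resp)["expandtemplates"]["wikitext"]
```

Sending the wikitext as a POST body (rather than in the URL) also avoids length limits on long pages.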
Need to extract abstract of a wikipedia page
by aditya srinivas 23 Nov '23

23 Nov '23
Hello,

I am writing a Java program to extract the abstract of a Wikipedia page given its title. I have done some research and found out that the abstract will be in rvsection=0. So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I query the API in the following way:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel…
and parse the XML data we get back, taking the wikitext in the <rev xml:space="preserve"> tag, which represents the abstract of the page.

But this wikitext also contains the infobox data, which I do not need. I would like to know if there is any way I can remove the infobox data and get only the wikitext related to the page's abstract, or if there is an alternative method by which I can get the abstract of the page directly.

Looking forward to your help. Thanks in advance,
Aditya Uppu
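A common workaround for the infobox problem is to strip top-level `{{...}}` template blocks from the section-0 wikitext before further processing. The brace-counting function below is a naive sketch (it assumes balanced braces and removes *all* top-level templates, not just infoboxes); where the TextExtracts extension is installed, requesting prop=extracts with exintro is a simpler way to get the abstract directly.

```python
def strip_templates(wikitext):
    """Remove top-level {{...}} template blocks (e.g. infoboxes) from wikitext.

    Naive brace-counting pass: tracks nesting depth of {{ / }} pairs and
    keeps only the text that sits outside any template. Assumes the input
    has balanced template braces.
    """
    out = []
    depth = 0
    i = 0
    while i < len(wikitext):
        if wikitext.startswith("{{", i):
            depth += 1
            i += 2
        elif wikitext.startswith("}}", i) and depth:
            depth -= 1
            i += 2
        else:
            if depth == 0:
                out.append(wikitext[i])
            i += 1
    return "".join(out).strip()
```

Nested templates (an infobox containing `{{convert|...}}`, say) are handled correctly because only the outermost braces toggle between "keep" and "discard".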
Question about using Swimranking API
by Vladimir Dabic 27 Apr '20

27 Apr '20
Hello,

I am a web developer and I was tasked with creating a new mobile app that would allow people to compare different swimmers and their times in various swimming styles. My questions are:

1. Can I use your API freely, or is it a paid service?
2. Can I store data that your API provides in my database?

Thank you
Since April 2010,[1] when no lgtoken is passed to the Action API action=login, it will return a "NeedToken" response including the token to use. While this method of fetching the login token was deprecated in January 2016,[2] it is still present for the benefit of clients that have not yet been updated and is not (yet) being removed.

The NeedToken response was also being returned when an lgtoken was supplied but could not be validated due to session loss. While this made sense back in 2010, when the NeedToken response was the only way to fetch the login token, these days it is mainly confusing[3] and a way for clients with broken cookie handling to wind up in a loop.

With the merge of Gerrit change 586448,[4] the API will no longer return NeedToken when lgtoken was supplied. If the token cannot be validated due to session loss, a "Failed" response will be returned with a message referring to session loss as the problem.

This change should be deployed to Wikimedia sites with 1.35.0-wmf.28 or later; see https://www.mediawiki.org/wiki/MediaWiki_1.35/Roadmap for a schedule.

Note that the change HAS NOT been deployed to Wikimedia sites as of the time of this email. If your client's ability to log in broke on 6 April 2020, the cause is most likely an unrelated change to Wikimedia's infrastructure that caused some HTTP headers to be output with HTTP/2 standard casing, i.e. "set-cookie" rather than the traditional "Set-Cookie". See https://phabricator.wikimedia.org/T249680 for details and further discussion of that situation.

[1]: https://www.mediawiki.org/wiki/Special:Code/MediaWiki/64677
[2]: https://lists.wikimedia.org/pipermail/mediawiki-api-announce/2016-January/0…
[3]: https://phabricator.wikimedia.org/T249526
[4]: https://gerrit.wikimedia.org/r/c/mediawiki/core/+/586448

--
Brad Jorsch (Anomie)
Senior Software Engineer
Wikimedia Foundation
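The behavior change above matters for client-side login loops: under the new rules, a lost session yields "Failed" rather than "NeedToken", so a client should refetch the login token instead of resubmitting blindly. The decision helper below is a hypothetical sketch of that branching; the field names match the documented action=login response shape, but `next_step` and its return values are illustrative, not part of any API.

```python
def next_step(result):
    """Decide what a client should do with an action=login response.

    Hypothetical helper: after Gerrit change 586448, a token that cannot
    be validated (e.g. due to session loss) produces "Failed" instead of
    "NeedToken", so the correct reaction is to refetch the token once,
    not to loop on NeedToken forever.
    """
    status = result.get("login", {}).get("result")
    if status == "Success":
        return "done"
    if status == "NeedToken":   # legacy flow: no lgtoken was sent
        return "retry-with-token"
    if status == "Failed":      # includes session loss under the new behavior
        return "refetch-token"
    return "abort"              # WrongToken, throttling, etc.: surface the error
```

A client that previously looped on NeedToken should cap its token refetches (one retry is enough); broken cookie handling will otherwise just turn the old NeedToken loop into a Failed loop.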