
Mediawiki-api, January 2020

mediawiki-api@lists.wikimedia.org
  • 11 participants
  • 10 discussions
MediaWiki API in specific usage
by Support Technique 19 Feb '26

Hello,

I'm developing a Joomla! 3 plugin based on the MediaWiki API. The goal is to import all data (titles and descriptions or definitions) from a wiki database into a Joomla! website database, because the Joomla! site will ultimately be used as a dictionary. Can you tell me how to proceed, please? Unfortunately, the available MediaWiki tutorials are not very clear about this usage situation. Thanks for answering as soon as you can!

--
*Modeste EKAMBI*
Senior Software Engineer - Internis Group
Expert agency for web and IT projects - Custom development - Web and mobile development, applications and websites
Skype: internis.group
Phone: 699-789-772 / 679-998-516
Mail: support(a)internis-group.com
Blog: http://blog.internis-group.com
Site: http://www.internis-group.com
2 1
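For a bulk import like the one described above, a common starting point is the `list=allpages` module, paged with `continue` tokens; each title can then be fetched and written into the target database. A minimal sketch, assuming a Wikipedia-style endpoint (the endpoint URL, User-Agent string, and function names are illustrative, not from the thread):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API = "https://en.wikipedia.org/w/api.php"  # replace with the source wiki's endpoint

def allpages_params(cont=None, limit=500):
    """Query parameters for one batch of list=allpages (main namespace)."""
    params = {"action": "query", "list": "allpages", "apnamespace": "0",
              "aplimit": str(limit), "format": "json"}
    params.update(cont or {})
    return params

def iter_all_pages(limit=500):
    """Yield (pageid, title) for every article, following 'continue' tokens."""
    cont = {}
    while True:
        url = API + "?" + urlencode(allpages_params(cont, limit))
        req = Request(url, headers={"User-Agent": "dict-import-sketch/0.1"})
        with urlopen(req) as resp:
            data = json.load(resp)
        for page in data["query"]["allpages"]:
            yield page["pageid"], page["title"]
        cont = data.get("continue")
        if not cont:
            break
```

The page text itself can then be requested per title, e.g. with `prop=revisions&rvprop=content` or the TextExtracts extension, before inserting into the Joomla database.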

20 Jan '26
Hi there,

I'm using the API to extract the raw wiki text from my pages, using the "?action=query&titles=Main_Page&export&exportnowrap" syntax. That works perfectly.

Now I would like to get the templates expanded in the result, so I use "?action=query&titles=Main_Page&prop=revisions&rvlimit=1&rvprop=content&rvexpandtemplates", which does the job, as expected, but it also strips out the comments. My problem is that the comments are meaningful to me (I use them to help process the wiki text in subsequent steps).

Is there a way to expand templates with the API, but leave the comments intact?

Thanks,
Kevin
4 5
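For reference, the standalone `action=expandtemplates` module takes an `includecomments` parameter that keeps `<!-- -->` comments in the expanded output, which addresses the question above. A sketch, with illustrative endpoint and function names:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API = "https://en.wikipedia.org/w/api.php"  # illustrative endpoint

def expand_params(wikitext: str, keep_comments: bool = True) -> dict:
    """Parameters for action=expandtemplates; the includecomments flag
    keeps <!-- --> comments in the expanded wikitext."""
    params = {"action": "expandtemplates", "text": wikitext,
              "prop": "wikitext", "format": "json"}
    if keep_comments:
        params["includecomments"] = "1"
    return params

def expand_wikitext(wikitext: str, keep_comments: bool = True) -> str:
    """Expand templates in a wikitext fragment via the API."""
    req = Request(API + "?" + urlencode(expand_params(wikitext, keep_comments)),
                  headers={"User-Agent": "expand-sketch/0.1"})
    with urlopen(req) as resp:
        return json.load(resp)["expandtemplates"]["wikitext"]
```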
Need to extract abstract of a wikipedia page
by aditya srinivas 23 Nov '23

Hello,

I am writing a Java program to extract the abstract of a Wikipedia page given the page's title. I have done some research and found that the abstract will be in rvsection=0. So, for example, if I want the abstract of the 'Eiffel Tower' wiki page, I query the API as follows: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Eiffel… and parse the XML we get back, taking the wikitext in the <rev xml:space="preserve"> tag, which represents the abstract of the page.

But this wikitext also contains the infobox data, which I do not need. I would like to know if there is any way to remove the infobox data and get only the wikitext related to the page's abstract, or if there is an alternative method by which I can get the abstract of the page directly.

Looking forward to your help. Thanks in advance,
Aditya Uppu
4 3
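Two common routes for the question above: the TextExtracts extension (`prop=extracts&exintro&explaintext`) returns the plain-text lead section with the infobox already gone; alternatively, the section-0 wikitext can be cleaned by stripping leading `{{...}}` blocks with brace matching (a plain regex breaks on nested templates such as `{{convert|...}}` inside an infobox). A sketch of the latter, purely illustrative:

```python
def strip_leading_templates(wikitext: str) -> str:
    """Remove leading {{...}} template blocks (e.g. infoboxes) from
    section-0 wikitext using brace matching, so only the prose remains."""
    s = wikitext.lstrip()
    while s.startswith("{{"):
        depth, i = 0, 0
        while i < len(s):
            if s.startswith("{{", i):
                depth += 1
                i += 2
            elif s.startswith("}}", i):
                depth -= 1
                i += 2
                if depth == 0:
                    break
            else:
                i += 1
        s = s[i:].lstrip()
    return s
```

This is a heuristic: it assumes the infobox is the leading template and that braces are balanced, which holds for typical article leads but not for every page.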
So <ns>0</ns> is only regular page right?
by Furkan Gözükara 02 Feb '20

If <ns> is anything other than 0, does that mean it is some other special page, like a category talk page, etc.? Is there any documentation for this?
3 7
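The authoritative answer for any given wiki comes from `meta=siteinfo&siprop=namespaces`, which lists every namespace id/name pair; namespace 0 is the main (article) namespace, and every other id is a talk or special-purpose namespace. A sketch (the dict shows stock MediaWiki defaults; individual wikis define more):

```python
# Stock MediaWiki namespace ids; individual wikis may add their own.
DEFAULT_NAMESPACES = {
    0: "(main/articles)", 1: "Talk", 2: "User", 3: "User talk",
    4: "Project", 6: "File", 10: "Template", 14: "Category",
    15: "Category talk",
}

def namespaces_params() -> dict:
    """Query parameters for meta=siteinfo&siprop=namespaces, which returns
    the full namespace table of the target wiki."""
    return {"action": "query", "meta": "siteinfo",
            "siprop": "namespaces", "format": "json"}

def is_article(ns: int) -> bool:
    """<ns>0</ns> in the export XML means a regular main-namespace page."""
    return ns == 0
```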

31 Jan '20
1: enwiktionary-20200120-pages-articles-multistream.xml.bz2 (854.6 MB)
2: enwiktionary-20200120-pages-meta-current.xml.bz2 (890.8 MB)

So does number 2 cover number 1, plus additional files?
2 1

31 Jan '20
Hello. I would like to learn if this is possible. For example, take the article "gelmek"; here is the link: https://en.wiktionary.org/wiki/gelmek

On that page it has conjugations. When we click edit, we see that it has the following template/module: {{tr-conj|gel|e|gelir|i|d}}

So can I parse this? That is, can I provide the page id and this template to get the parsed results via the API? Or is there any other way? E.g. https://en.wiktionary.org/w/api.php?action=query&titles=gelmek&parseTemplat…
2 2
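One way to do what the message above asks: `action=parse` accepts arbitrary wikitext via `text=` plus a `title=` for context, so a single template invocation can be rendered without editing the page. A sketch of the parameters (the function name is illustrative):

```python
WIKTIONARY_API = "https://en.wiktionary.org/w/api.php"

def parse_template_params(template_call: str, context_title: str) -> dict:
    """Parameters for action=parse rendering one template invocation,
    e.g. '{{tr-conj|gel|e|gelir|i|d}}', in the context of a page."""
    return {"action": "parse", "text": template_call,
            "title": context_title, "prop": "text",
            "contentmodel": "wikitext", "format": "json"}
```

Sending these parameters to WIKTIONARY_API returns the rendered HTML of the conjugation table under `parse.text` in the JSON response.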
edition performance
by wp1080397-lsrs wp1080397-lsrs 29 Jan '20

Dear friends,

We have been working for some months on a Wikidata project, and we have run into an issue with edit performance. I began with the Wikidata Java API, but when I tried to increase the edit speed the Java library held back edits and inserted delays, which reduced edit throughput as well. I then chose to edit with Pywikibot, but in my experience that reduced throughput even more.

In the end we used the procedure indicated here: https://www.mediawiki.org/wiki/API:Edit#Example with multithreading, and we reached a maximum of 10.6 edits per second. My question is whether anyone has experience achieving a higher speed. Currently we need to write 1,500,000 items, and at this rate we would require 5 working days for such a task.

Best regards,
Luis Ramos
Senior Java Developer (Semantic Web Developer)
PST.AG
Jena, Germany
2 6
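As a sanity check on the numbers in the message above, a sustained rate of 10.6 edits per second does work out to roughly five 8-hour working days for 1,500,000 items:

```python
def working_days(items: int, edits_per_sec: float,
                 hours_per_day: float = 8.0) -> float:
    """Working days needed to write `items` at a sustained edit rate."""
    return items / edits_per_sec / 3600.0 / hours_per_day

# 1,500,000 items at 10.6 edits/s is about 4.9 eight-hour days
```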
Request to block ActionApi-Client
by Frank Wunderlich 29 Jan '20

Hello,

In 2016 I wrote a small Android app that makes use of the Wikipedia Action API to search for articles at the user's current location. Due to legal considerations I am currently trying to take the app down. It is no longer available in the Google Play Store, but there are still installations out there. That's why I want to make these installations unusable by deactivating all backend services that the app uses. Unfortunately the app is (partially) communicating directly with Wikipedia servers and not via a proxy under my control.

The app sends a special User-Agent HTTP header with every request to identify itself: tagorama/v1.0.0.283-release (http://tagorama.rocks/; info(a)tagorama.rocks)

Is there any way for you to block requests from this app? Who would I contact?

Thanks for your help,
Frank Wunderlich
4 4
(no subject)
by Trixie Campbell 16 Jan '20

Thank you
1 1
accessing api of private wiki
by thomas.topway.it@mail.com 15 Jan '20

3 2