| This page is part of the MediaWiki Action API documentation. |
GET/POST request to parse content of a page and obtain the output.
| The following documentation is the output of Special: |
Parses content and returns parser output.
See the various prop-modules of action=query to get information from the current version of a page.
There are several ways to specify the text to parse:
title: Title of page the text belongs to. If omitted, contentmodel must be specified, and API will be used as the title.
text: Text to parse. Use title or contentmodel to control the content model.
revid: Revision ID, for {{REVISIONID}} and similar variables.
summary: Summary to parse.
page: Parse the content of this page. Cannot be used together with text and title.
pageid: Parse the content of this page. Overrides page.
redirects: If page or pageid is set to a redirect, resolve it.
oldid: Parse the content of this revision. Overrides page and pageid.
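Putting the input options together, here is a minimal sketch (page title and revision ID are illustrative, taken from the sample response further down) of the three mutually exclusive ways to choose what gets parsed:

```python
# Three mutually exclusive ways to tell action=parse what to parse.
# Any one of these dicts can be sent as the query parameters of a
# GET request to /w/api.php.

by_raw_text = {
    "action": "parse",
    "text": "'''Hello''' world",   # raw wikitext to parse
    "contentmodel": "wikitext",    # required when no title is given
    "format": "json",
}

by_page_title = {
    "action": "parse",
    "page": "Pet door",            # cannot be combined with text and title
    "format": "json",
}

by_revision = {
    "action": "parse",
    "oldid": 852892138,            # overrides page and pageid
    "format": "json",
}

# Each request uses exactly one of text / page / oldid as its source.
for params in (by_raw_text, by_page_title, by_revision):
    assert sum(k in params for k in ("text", "page", "oldid")) == 1
```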
prop: Which pieces of information to get. Notable values include:
  text: The parsed HTML of the wikitext.
  headhtml: The <html> and <head> elements and opening <body> of the page.
  modules: The ResourceLoader modules used on the page, to be loaded with mw.loader.using(). Either jsconfigvars or encodedjsconfigvars must be requested jointly with modules.
  jsconfigvars: The JavaScript configuration variables specific to the page, for use with mw.config.set().
  wikitext: The original wikitext that was parsed.
  headitems: Deprecated. Items to put in the <head> of the page.
wrapoutputclass: CSS class to use to wrap the parser output.
usearticle: Use the ArticleParserOptions hook to ensure the options used match those used for article page views.
parsoid: Generate HTML conforming to the MediaWiki DOM spec using Parsoid. Replaced by parser=parsoid.
parser: Which wikitext parser to use.
pst: Do a pre-save transform on the input before parsing it. Only valid when used with text.
onlypst: Do a pre-save transform (PST) on the input, but don't parse it. Returns the same wikitext after a PST has been applied. Only valid when used with text.
effectivelanglinks: Includes language links supplied by extensions (for use with prop=langlinks).
section: Only parse the content of the section with this identifier. When new, parse text and sectiontitle as if adding a new section to the page. new is allowed only when specifying text.
sectiontitle: New section title when section is new. Unlike page editing, this does not fall back to summary when omitted or empty.
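For instance, to preview how text would render when appended as a new section, a request might look like this (the text and heading are illustrative):

```python
# Preview a new section: section=new requires text, and sectiontitle
# supplies the heading (it does NOT fall back to summary).
params = {
    "action": "parse",
    "text": "Some comment on the article.",
    "section": "new",
    "sectiontitle": "My new section",
    "contentmodel": "wikitext",
    "format": "json",
}

# section=new is only allowed when text is specified.
assert params["section"] == "new" and "text" in params
```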
disablepp: Deprecated. Use disablelimitreport instead.
disablelimitreport: Omit the limit report ("NewPP limit report") from the parser output.
disableeditsection: Omit edit section links from the parser output.
disablestylededuplication: Do not deduplicate inline stylesheets in the parser output.
showstrategykeys: Whether to include internal merge strategy information in jsconfigvars.
generatexml: Generate XML parse tree (requires content model wikitext; replaced by prop=parsetree).
preview: Parse in preview mode.
sectionpreview: Parse in section preview mode (enables preview mode too).
disabletoc: Omit table of contents in output.
useskin: Apply the selected skin to the parser output. May affect the following properties: text, langlinks, headitems, modules, jsconfigvars, indicators.
contentformat: Content serialization format used for the input text. Only valid when used with text.
contentmodel: Content model of the input text. If omitted, title must be specified, and the default will be the model of the specified title. Only valid when used with text.
mobileformat: Return parse output in a format suitable for mobile devices.
templatesandboxprefix: Template sandbox prefix, as with Special:TemplateSandbox.
templatesandboxtitle: Parse the page using templatesandboxtext in place of the contents of the page named here.
templatesandboxtext: Parse the page using this page content in place of the page named by templatesandboxtitle.
templatesandboxcontentmodel: Content model of templatesandboxtext.
templatesandboxcontentformat: Content format of templatesandboxtext.
{"parse":{"title":"Pet door","pageid":3276454,"revid":852892138,"text":{"*":"<div class=\"mw-parser-output\"><div class=\"thumb tright\"><div class=\"thumbinner\" style=\"width:222px;\"><a href=\"/wiki/File:Doggy_door_exit.JPG\" class=\"image\"><img alt=\"\" src=\"//upload.wikimedia.org/wikipedia/commons/thumb/7/71/Doggy_door_exit.JPG/220px-Doggy_door_exit.JPG\" width=\"220\" height=\"165\" class=\"thumbimage\" srcset=\"//upload.wikimedia.org/wikipedia/commons/thumb/7/71/Doggy_door_exit.JPG/330px-Doggy_door_exit.JPG 1.5x, ... } }}
#!/usr/bin/python3

"""
    parse.py

    MediaWiki API Demos
    Demo of `Parse` module: Parse content of a page

    MIT License
"""

import requests

S = requests.Session()

URL = "https://en.wikipedia.org/w/api.php"

PARAMS = {
    "action": "parse",
    "page": "Pet door",
    "format": "json"
}

R = S.get(url=URL, params=PARAMS)
DATA = R.json()

print(DATA["parse"]["text"]["*"])
<?php
/*
    parse.php

    MediaWiki API Demos
    Demo of `Parse` module: Parse content of a page

    MIT License
*/

$endPoint = "https://en.wikipedia.org/w/api.php";
$params = [
    "action" => "parse",
    "page" => "Pet door",
    "format" => "json"
];

$url = $endPoint . "?" . http_build_query( $params );

$ch = curl_init( $url );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
$output = curl_exec( $ch );
curl_close( $ch );

$result = json_decode( $output, true );
echo( $result["parse"]["text"]["*"] );
/**
 * parse.js
 *
 * MediaWiki API Demos
 * Demo of `Parse` module: Parse content of a page
 *
 * MIT License
 */

const url = "https://en.wikipedia.org/w/api.php?" +
    new URLSearchParams({
        origin: "*",
        action: "parse",
        page: "Pet door",
        format: "json",
    });

try {
    const req = await fetch(url);
    const json = await req.json();
    console.log(json.parse.text["*"]);
} catch (e) {
    console.error(e);
}
/**
 * parse.js
 *
 * MediaWiki API Demos
 * Demo of `Parse` module: Parse content of a page
 *
 * MIT License
 */

const params = {
    action: 'parse',
    page: 'Pet door',
    format: 'json'
};

const api = new mw.Api();

api.get(params).done(data => {
    console.log(data.parse.text['*']);
});
| Response |
|---|
{"parse":{"title":"Wikipedia:Unusual articles/Places and infrastructure","pageid":38664530,"wikitext":{"*":"===Antarctica===\n<!--[[File:Grytviken church.jpg|thumb|150px|right|A little church in [[Grytviken]] in the [[Religion in Antarctica|Antarctic]].]]-->\n{| class=\"wikitable\"\n|-\n| '''[[Emilio Palma]]'''\n| An Argentine national who is the first person known to be born on the continent of Antarctica.\n|-\n| '''[[Scouting in the Antarctic]]'''\n| Always be prepared for glaciers and penguins.\n|}"}}} |
| parse_wikitable.py |
|---|
#!/usr/bin/python3""" parse_wikitable.py MediaWiki Action API Code Samples Demo of `Parse` module: Parse a section of a page, fetch its table data and save it to a CSV file MIT license"""importcsvimportrequestsS=requests.Session()URL="https://en.wikipedia.org/w/api.php"TITLE="Wikipedia:Unusual_articles/Places_and_infrastructure"PARAMS={'action':"parse",'page':TITLE,'prop':'wikitext','section':5,'format':"json"}defget_table():""" Parse a section of a page, fetch its table data and save it to a CSV file """res=S.get(url=URL,params=PARAMS)data=res.json()wikitext=data['parse']['wikitext']['*']lines=wikitext.split('|-')entries=[]forlineinlines:line=line.strip()ifline.startswith("|"):table=line[2:].split('||')entry=table[0].split("|")[0].strip("'''[[]]\n"),table[0].split("|")[1].strip("\n")entries.append(entry)file=open("places_and_infrastructure.csv","w")writer=csv.writer(file)writer.writerows(entries)file.close()if__name__=='__main__':get_table() |
| Code | Info |
|---|---|
| missingtitle | The page you specified doesn't exist. |
| nosuchsection | There is no section section in page. |
| pagecannotexist | Namespace doesn't allow actual pages. |
| invalidparammix | Parameters that cannot be used together were supplied (for example, page together with text and title). |