There is a great deal of publicly available, open-licensed data about Wikimedia projects. This page is intended to help community members, developers, and researchers who are interested in analyzing raw data learn what data and infrastructure are available.
If you have any questions, you might find the answer in the Frequently Asked Questions about Data. If you still have questions, you can email the Analytics mailing list (more information). You can also find a guide to basic concepts for researchers working with Wikimedia data on the data introduction page.
If you wish to browse pre-computed metrics and dashboards, see statistics.
If this publicly available data isn't sufficient, you can look at the page on private data access to see what non-public data exists and how you can gain access.
See also inspirational example uses.
Also consider searching for datasets at Zenodo, Figshare, Dimensions.ai, Google Dataset Search, Academic Torrents, DataHub (historical), or Hugging Face (see also the curated "Wikimedia Datasets" list on Hugging Face).
Dumps of all WMF projects for backup, offline use, research, etc.
Data Services allows Wikimedia Cloud Services users to query a sanitized copy of the Wikimedia MediaWiki databases.
Raw pageviews, unique device estimates, mediacounts, etc.
Reports based on data dumps and server log files.
DBpedia extracts structured data from Wikipedia. It allows users to run complex queries and link Wikipedia data to other data sets.
A collection of various Wikimedia-related datasets.
Differential privacy homepage
A collection of differentially-private datasets, released daily, weekly, or monthly.
Machine learning models homepage
A collection of production AI/ML models that power user-facing tools and features across Wikimedia projects. Each model has a corresponding model card with information about the model, including its training datasets.
The table below is a quick reference of data sources organized by data domain. For a more detailed overview of Wikimedia data domains and how to access data in each domain, use the links in the table or see Research:Data introduction.
WMF releases data dumps of Wikipedia, Wikidata, and all WMF projects on a regular basis, as well as dumps of other Wikimedia-related data such as search indices and short URL mappings.
- Page-to-page link lists (pagelinks, categorylinks, imagelinks, templatelinks tables)
- Lists of pages' links to external sites and other wikis (externallinks, iwlinks, langlinks tables)
- Media metadata (image, oldimage tables)
- Information about each page (page, page_props, page_restrictions tables)
- Titles of all pages in the main namespace (*-all-titles-in-ns0.gz)
- List of all redirects (redirect table)
- Log data, including blocks, protections, deletions, and uploads (logging table)
- Miscellaneous tables (interwiki, site_stats, user_groups)

See a more comprehensive list of what is available for download.
Dumps.wikimedia.org offers various other database dumps and datasets.
You can download the latest dumps for the last year (dumps.wikimedia.org/enwiki/ for English Wikipedia, dumps.wikimedia.org/dewiki/ for German Wikipedia, etc.). Download mirrors offer an alternative to the download page.
Due to large file sizes, using a download tool is recommended.
There are also archives. Many older dumps can also be found at the Internet Archive.
XML dumps are in the wrapper format described at Export format (schema). Files are compressed in gzip (.gz), bzip2/lbzip2 (.bz2), and 7-Zip (.7z) formats.
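As an illustration of working with these files, here is a minimal Python sketch that streams page titles out of a compressed pages-articles dump without decompressing it to disk. The file name is an example, and the schema namespace version varies between dump runs, so check the root element of your file.

import bz2
import xml.etree.ElementTree as ET

# Namespace of the export schema; the version number varies between
# dump runs, so check the root element of your file.
NS = "{http://www.mediawiki.org/xml/export-0.11/}"

with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as f:
    for _event, elem in ET.iterparse(f):
        if elem.tag == NS + "page":
            print(elem.findtext(NS + "title"))
            elem.clear()  # release memory for pages already processed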
SQL dumps are provided as dumps of entire tables, using mysqldump.
Some older dumps exist in various formats.
See examples of importing dumps into a MySQL database, with step-by-step instructions.
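For instance, a gzipped table dump can be streamed into a local MySQL database without unpacking it first. This sketch assumes a local mysql client and an already-created database named enwiki; the file name and credentials are examples.

import gzip
import shutil
import subprocess

# Stream the decompressed SQL into the mysql client; add -u/-p options
# to match your local setup.
proc = subprocess.Popen(["mysql", "enwiki"], stdin=subprocess.PIPE)
with gzip.open("enwiki-latest-page.sql.gz", "rb") as dump:
    shutil.copyfileobj(dump, proc.stdin)
proc.stdin.close()
proc.wait()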
Some tools are listed on the following pages, but they are mostly outdated and non-functional.
All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages.
The MediaWiki API provides direct, high-level access to the data contained in MediaWiki databases. Client programs can log in to a wiki, get data, and post changes automatically by making HTTP requests.
To query the database, you send an HTTP GET request to the desired endpoint (for example, https://en.wikipedia.org/w/api.php for English Wikipedia), setting the action parameter to query and defining the query details in the URL.
https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xml&titles=Main%20Page fetches (action=query) the content (rvprop=content) of the most recent revision of Main Page (titles=Main%20Page) of English Wikipedia (https://en.wikipedia.org/w/api.php?) in XML format (format=xml). You can paste the URL in a browser to see the output. To try out the API interactively on English Wikipedia, use the API Sandbox.
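The same request can be made from a script. The sketch below is one way to do it with the Python requests library, asking for JSON instead of XML; the rvslots and formatversion parameters reflect the current revisions API, and the User-Agent value is a placeholder you should replace per the API etiquette.

import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": "Main Page",
        "format": "json",
        "formatversion": "2",
    },
    # identify your client per API etiquette; this value is a placeholder
    headers={"User-Agent": "data-access-example/0.1 (you@example.com)"},
    timeout=30,
)
page = resp.json()["query"]["pages"][0]
print(page["revisions"][0]["slots"]["main"]["content"][:200])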
To use the API, your application or client might need to log in.
Before you start, learn about the API etiquette.
Researchers may be granted special access rights on a case-by-case basis.
All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL).
The Wiki Replicas (part of WMCS Data Services; see wikitech:Portal:Data Services) host sanitized versions of the Wikimedia production MediaWiki databases.
Users of various Wikimedia Cloud Services products can access the Wiki Replicas databases, which host sanitized copies of the databases of all Wikimedia projects, including Commons.
Explore the database schema of the MediaWiki software.
See the Wiki Replicas page on Wikitech for how to access the Wiki Replicas.
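As a sketch of what a query can look like from inside Wikimedia Cloud Services (for example, on Toolforge), the snippet below connects to the English Wikipedia replica with the third-party pymysql library. The host and database names follow the conventions documented on Wikitech, and the credentials file is the one provisioned for your account; treat these as assumptions to verify against the Wikitech documentation.

import pymysql

# Connect to the English Wikipedia replica; credentials live in the
# replica.my.cnf file provisioned in your Toolforge home directory.
conn = pymysql.connect(
    host="enwiki.analytics.db.svc.wikimedia.cloud",
    database="enwiki_p",
    read_default_file="~/replica.my.cnf",
)
with conn.cursor() as cur:
    cur.execute("SELECT page_title FROM page WHERE page_namespace = 0 LIMIT 5")
    for (title,) in cur.fetchall():
        print(title.decode("utf-8"))  # titles are stored as binary strings
conn.close()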
See wikitech:Help:Cloud Services introduction#Communication and support
See EventStreams to subscribe to Recent changes on all Wikimedia wikis. This broadcasts edits and other changes as they happen.
See wikitech:Event Platform/EventStreams/Powered By
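A minimal consumer can read the recent-changes stream described above as server-sent events. The sketch below uses the Python requests library against the public recentchange stream; a production client would use a dedicated SSE library with reconnection support, and the User-Agent value is a placeholder.

import json
import requests

resp = requests.get(
    "https://stream.wikimedia.org/v2/stream/recentchange",
    stream=True,
    # identify your client; this value is a placeholder
    headers={"User-Agent": "data-access-example/0.1 (you@example.com)"},
    timeout=60,
)
for line in resp.iter_lines():
    # each event arrives as a "data: {...}" line in the SSE protocol
    if line.startswith(b"data: "):
        change = json.loads(line[len(b"data: "):])
        print(change["wiki"], change["title"])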
Analytics Datasets on dumps.wikimedia.org offers stable and continuous datasets about web request statistics (including page views, mediacounts, unique devices), page revision history, data by country, and Wikidata QRanks.
Pageview statistics are one example. Each request for a page reaches one of Wikimedia's Varnish caching hosts; the project name and the title of the requested page are logged and aggregated hourly.
Files starting with "project" contain hourly total hits per project.
Per-country pageviews data is also available, sanitized for privacy reasons. See this announcement post (June 2023).
See the README for details on the format.
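As an illustration, the sketch below tallies the most-viewed English Wikipedia pages in one hourly pageviews file. The file name is an example, and the column layout (domain code, page title, view count, total response size) is the one described in the README; verify it against the README for the files you download.

import gzip
from collections import Counter

counts = Counter()
with gzip.open("pageviews-20240101-000000.gz", "rt", encoding="utf-8") as f:
    for line in f:
        parts = line.split(" ")
        # columns: domain code, page title, view count, total response size
        if len(parts) == 4 and parts[0] == "en":
            counts[parts[1]] += int(parts[2])
print(counts.most_common(10))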
You can interactively browse the page view statistics at https://pageviews.toolforge.org. More documentation on the Pageviews Analysis tool is available.
The Wikipedia clickstream dataset contains counts of (referrer, resource) pairs extracted from the request logs of Wikipedia.
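For example, the sketch below scans one monthly clickstream file for the referrers that most often led readers to a given article. The file name and target article are examples, and the tab-separated columns (referrer, resource, link type, count) match the published dataset description.

import gzip

with gzip.open("clickstream-enwiki-2024-01.tsv.gz", "rt", encoding="utf-8") as f:
    for line in f:
        prev, curr, link_type, n = line.rstrip("\n").split("\t")
        if curr == "London" and int(n) > 1000:
            print(prev, n)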
Thepublic "Geoeditors" dataset contains information about the monthly number of active editors from a particular country on a particular Wikipedia language edition (bucketed and redacted for privacy reasons). For some earlier years, similar data is available at[1]/[2], see alsoEdits by project and country of origin.
Additional datasets (mostly irregular or discontinued ones) are published at https://analytics.wikimedia.org/datasets/. These include Caching research data and the AS Performance Report.
Wikistats is an informal but widely recognized name for a set of reports which provide monthly trend information for all Wikimedia projects and wikis.
Wikistats offers many dashboards that display trends in reading, contributing, and content, broken down by project.
Data is presented as charts with the option to download the underlying data.
For more details on Wikistats, see wikitech:Data Platform/Systems/Wikistats 2.
DBpedia.org is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia and to link other datasets on the Web to Wikipedia data.
The English version of the DBpedia knowledge base describes millions of things, and the majority of items are classified in a consistent ontology (persons, places, creative works like music albums, films, and video games, organizations like companies and educational institutions, species, diseases, etc.). Localized versions of DBpedia in more than a hundred languages describe millions of things.
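As a sketch of such a query, the snippet below asks DBpedia's public SPARQL endpoint for large cities; the endpoint URL and the ontology terms used here should be checked against the current DBpedia documentation.

import requests

# Ask DBpedia's SPARQL endpoint for cities with a recorded population
# over five million; results come back in SPARQL JSON format.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:populationTotal ?population .
  FILTER (?population > 5000000)
} LIMIT 10
"""
resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])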
The Wikimedia organization on the Open Knowledge Foundation's DataHub was established by the Wikimedia Foundation around 2013 and contains a collection of datasets about Wikipedia and other projects, mostly dating from around 2013-2016.
Wikivoyage also maintains data on its own DataHub.
The WMF privacy engineering team uses differential privacy to release data that would otherwise be too sensitive to release. This data currently only includes pageview statistics; in the future, it will include statistics about editors, CentralNotice impressions and views, search, and more.
Differentially-private data is currently available in static TSV form at https://analytics.wikimedia.org/published/datasets/. Work to make this data available via API is ongoing.
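These TSV files can be loaded with standard tools. The sketch below uses pandas with a hypothetical file name; browse the directory listing above for the actual dataset paths.

import pandas as pd

# Load one published TSV file into a DataFrame. The file name below is a
# hypothetical placeholder; substitute a real path from the directory listing.
df = pd.read_csv("pageviews_dp_2024-01-01.tsv", sep="\t")
print(df.head())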
Differentially-private data and code are available under a Creative Commons Zero license.