Artificial intelligence (AI) is used on a number of Wikipedia and Wikimedia projects. It may be directly involved in creating text content, or serve in support roles such as evaluating article quality, adding metadata, or generating images. As with any machine-generated content, care must be taken when employing AI at scale or when applying it where the community consensus is to exercise more caution.
When exploring AI techniques and systems, the community consensus is to prefer human decisions over machine-generated outcomes until their implications are better understood.
The Objective Revision Evaluation Service (ORES) was started in 2015 as a project of the Wikimedia Foundation. It provides revision scores from machine learning models trained to assess article quality or detect vandalism. These scores are used in tools such as ClueBot NG to help immediately revert vandalism, and in evaluation tools like the Program and Events Dashboard to measure the outcomes of classwork, edit-a-thons, or organized editing campaigns.
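To illustrate how such scores were consumed by tools, the following minimal Python sketch queries the ORES v3 REST endpoint for one revision's article-quality and damaging scores. The revision ID is a placeholder, and ORES has since been superseded by the Wikimedia Lift Wing service, so the endpoint should be treated as historical and illustrative.

    import requests

    # Historical ORES v3 scoring endpoint for the English Wikipedia.
    # ORES has been superseded by Lift Wing; URL shown for illustration.
    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"

    # 123456789 is a placeholder revision ID; "articlequality" and
    # "damaging" were two of the models ORES exposed for enwiki.
    params = {"revids": "123456789", "models": "articlequality|damaging"}

    resp = requests.get(ORES_URL, params=params,
                        headers={"User-Agent": "ores-demo/0.1"})
    resp.raise_for_status()

    scores = resp.json()["enwiki"]["scores"]["123456789"]
    # Predicted quality class, e.g. "Stub", "Start", ..., "FA".
    print(scores["articlequality"]["score"]["prediction"])
    # Probability the edit is damaging, as used by anti-vandalism tools.
    print(scores["damaging"]["score"]["probability"]["true"])

Anti-vandalism tools typically compare such probabilities against a threshold before reverting an edit or flagging it for human review, rather than acting on the raw score alone.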
Guidance can be found at Help:Translation#English Wikipedia policy requirements. The Content Translation Tool, used across Wikimedia projects, can feed the output of machine translation services such as Google Translate from a Wikipedia article in one language into another. However, on the English Wikipedia, the tool currently states that "machine translation is disabled for all users and this tool is limited to extended confirmed editors." As a result, the tool supports only manual translation on the English Wikipedia, though some users have used translation to Simple English as a workaround. Relatedly, a section of the Help:Translation page gives the broad advice: "avoid machine translations." However, this guidance was last edited in 2016, and the state of the art in machine translation has advanced significantly since then, which may merit a re-examination of that advice.
The explosion of interest in ChatGPT in 2022 has led to increased curiosity about using generative AI to help compose Wikipedia articles. Machine-generated text from tools such as ChatGPT is generally accepted to be in the public domain, so copyright is not a legal obstacle to using the generated text. These issues are generally governed by Help:Adding open license text to Wikipedia#Converting and adding open license text to Wikipedia, which advises making sure content is adjusted for style and that reliable sources are used. Conversations on the Village Pump and in some test articles (e.g. Artwork title) have noted positive aspects of machine-generated text, along with a serious warning: content must be checked for facts and accuracy, and never used straight from ChatGPT.
Image metadata – There have been efforts from GLAM institutions to supplement image keyword data with machine learning. These include:
Computer-aided tagging – Started in 2019, "The computer-aided tagging tool is a feature in development by the Structured Data on Commons team to assist community members in identifying and labeling depicts statements for Commons files." See: c:Commons:Structured data/Computer-aided tagging
Metropolitan Museum of Art tagging – This project used Met Museum tagging data to train a machine learning system that predicts new "depicts" recommendations for Wikidata. It resulted in a new Wikidata Game that helped add more than 4,000 new depicts (P180) statements to Wikidata (a short query sketch follows this list). See the Met Museum blog post by Andrew Lih: "Combining AI and Human Judgment to Build Knowledge about Art on a Global Scale," March 4, 2019.[1]
Wikimedia Commons AI, a rejected proposal for a new Wikimedia sister project aimed at establishing a clear distinction between human-generated content and content produced by artificial intelligence
The four categories, an idea about dividing all images uploaded to Wikimedia Commons into one of four categories
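To make the depicts (P180) data model behind these tagging efforts concrete, the following minimal Python sketch queries the public Wikidata SPARQL endpoint for everything one well-known painting is recorded as depicting. The Mona Lisa item (Q12418) is used purely as an example, and the query shape is standard SPARQL rather than anything specific to the projects described above.

    import requests

    # Public Wikidata SPARQL endpoint.
    ENDPOINT = "https://query.wikidata.org/sparql"

    # List everything the Mona Lisa (Q12418) is recorded as depicting,
    # via the "depicts" property (P180).
    QUERY = """
    SELECT ?depicted ?depictedLabel WHERE {
      wd:Q12418 wdt:P180 ?depicted .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """

    resp = requests.get(ENDPOINT,
                        params={"query": QUERY, "format": "json"},
                        headers={"User-Agent": "depicts-demo/0.1"})
    resp.raise_for_status()

    for row in resp.json()["results"]["bindings"]:
        print(row["depicted"]["value"], row["depictedLabel"]["value"])

Statements added through efforts such as the Wikidata Game become queryable in exactly this way, which is what makes bulk machine-assisted tagging useful downstream.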
At the onset of the 2020s AI boom, Wikipedia's existing content policies already addressed many emerging AI-related concerns that would prompt other platforms and organizations to adopt dedicated new policies; consequently, Wikipedia has no single "AI use policy", "AI-generated content policy", "AI content guideline", et cetera. Wikipedia:Large language models § Risks and relevant policies (essay) aims to explain how the broad core content policies and the copyrights policy interact with the use of AI tools, mostly in the domain of text. However, there exist disparate portions of policies and guidelines which are specifically and explicitly about AI-generated content, and they are listed here as follows (as of August 2025):
Consensus that "it is within admins' and closers' discretion to discount, strike, or collapse obvious use of generative LLMs" (now in guideline: WP:AITALK)
"Most images wholly generated by AI should not be used." "Obvious exceptions include articles about AI, and articles about notable AI-generated images. The community objects particularly strongly to AI-generated images (1) of named people, and (2) in technical or scientific subjects such as anatomy and chemistry." (now in policy: WP:AIIMAGES)
The WMF announced that machine-generated summaries of articles would be presented to readers, but then put the project on hold in response to negative community feedback.