At the onset of the 2020s AI boom, Wikipedia's existing content policies already addressed many of the emerging AI-related concerns that prompted other platforms and organizations to adopt a dedicated new policy; consequently, Wikipedia has no single all-encompassing, detailed "AI use policy", "AI-generated content policy", "AI content guideline", et cetera. Wikipedia:Large language models § Risks and relevant policies (essay) aims to explain how the broad core content policies and the copyrights policy interact with the use of AI tools, mostly in the domain of text.
A dedicated guideline in this area does exist: Wikipedia:Writing articles with large language models (WP:NEWLLM). It is the closest thing to an explicit "AI policy" page on English Wikipedia, but it is intentionally very spartan, comprising one point: Large language models should not be used to generate new Wikipedia articles from scratch. Still, disparate portions of other policies and guidelines contain certain provisions that are specifically and explicitly about AI-generated content. The most important of these is the speedy deletion criterion Wikipedia:Speedy deletion § G15. LLM-generated pages without human review (WP:G15), which forms the policy basis to speedily delete pages that could only plausibly have been generated by large language models and can be assumed not to have undergone reasonable human review. Seen together, NEWLLM and G15 reflect the project's expectations that large language models are not to be used to originate articles and that the editor who adds LLM-originated text to the site (not limited to articles) should reasonably review it to ensure that it complies with all applicable policies and guidelines.
Other relevant (albeit non-dedicated) policies and guidelines are listed below (as of November 2025):
The following are not policies or guidelines, but still have some significance in this context: