In the age of artificial intelligence, the decline of traditional websites seems inevitable. Even a platform as vast as Wikipedia is seeing a steady drop in traffic, even though most AI models rely heavily on its content for training data.
The Wikimedia Foundation has now announced the launch of a paid API service for AI companies, giving them direct, subscription-based access to Wikipedia’s vast repository of knowledge, whether for building training datasets or powering real-time question-answering systems.
This paid API model is designed to serve a dual purpose: ensuring a sustainable revenue stream to keep Wikipedia operational, while simultaneously offering AI developers a more efficient and legitimate way to access structured information. The Foundation considers this a mutually beneficial solution, one that also alleviates the strain on its servers caused by unrestricted web scraping.
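To illustrate the kind of structured access the article describes, here is a minimal sketch against Wikipedia's free public REST API. Note the assumptions: the paid offering mentioned above is Wikimedia Enterprise, a separate high-volume product with its own endpoints and authentication, so the public `page/summary` endpoint below is only a stand-in, and the `summary_url` helper is a name invented for this example.

```python
from urllib.parse import quote

# Public REST API base (illustrative only; the paid Wikimedia Enterprise API
# described in the article is a separate product with different endpoints).
BASE = "https://en.wikipedia.org/api/rest_v1/page/summary/"

def summary_url(title: str) -> str:
    """Build the public REST endpoint URL for a page's summary."""
    # Wikipedia page titles use underscores instead of spaces.
    return BASE + quote(title.replace(" ", "_"), safe="")

# Fetching the JSON summary (requires network access):
# import json, urllib.request
# with urllib.request.urlopen(summary_url("Alan Turing")) as resp:
#     data = json.load(resp)
# print(data["extract"])  # plain-text summary of the article

print(summary_url("Alan Turing"))
```

Structured endpoints like this return clean JSON rather than rendered HTML, which is why the Foundation frames API access as lighter on its servers than bulk page scraping.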
At least for now, the Foundation has not explicitly threatened legal action or penalties against data scrapers. However, Wikipedia recently disclosed that some AI crawlers have begun masquerading as human users to evade detection systems.
After upgrading its bot-detection mechanisms, Wikipedia observed abnormally high traffic levels between May and June 2025, traced to AI crawlers attempting to bypass monitoring. During the same period, the platform reported an 8% decline in human traffic.
In response, Wikipedia is drafting guidelines for AI developers and providers, emphasizing that creators of generative AI systems should credit human contributors whose work forms the foundation of model outputs.
In its official statement, the Wikimedia Foundation declared:
“For people to trust information shared on the internet, platforms should make it clear where the information is sourced from and elevate opportunities to visit and participate in those sources. With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
If the Foundation fails to adapt, Wikipedia’s traffic could plummet further over time, leading to a shortage of human editors and verifiers. Without them, AI models would lose access to fresh, high-quality data, weakening both their training accuracy and their ability to provide reliable, up-to-date answers.
Related Posts:
- Wikipedia goes offline in several countries to protest the upcoming copyright law in the EU
- Wikipedia Warns of Existential AI Threat as Page Views Fall 8% Due to Chatbot Summaries
- Wiki-Slack Attack: The New Threat Lurking in Wikipedia Pages
- Google Requires JavaScript for Search: Bots and Crawlers Impacted
- Wikimedia Offers Free AI Dataset to Combat Relentless Web Scraping