People still love Wikipedia. Can it survive AI?


If you grew up with computers in your classroom, there’s a good chance you heard this instruction before starting a research paper: Don’t trust Wikipedia.

The reasoning? Anyone can go in and make changes to a Wikipedia page. This is mostly true, though pages that are subject to a high amount of abuse or vandalism can be locked. However, the notion that the website is unreliable or a playground of misinformation has been overstated in schools. The crowdsourced online encyclopedia relies on a community of volunteers, known as “Wikipedians,” who adhere to a rigorous editing process. Citations are available at the bottom of each article, and public-facing “talk pages” attached to every entry allow editors to discuss changes and try to reach consensus. And the site has an efficient monitoring system, with reputable editors and Wikipedia-approved bots watching entries in real time.

“The fact that we were all told not to use it in school is really frustrating because we just weren’t taught how to actually use it,” Dean, a 22-year-old content creator, told Vox.

Last December, Dean posted a TikTok urging his followers to utilize Wikipedia, emphasizing its importance in an era of rampant misinformation. He’s like many other creators and users on social media who are discovering the credibility and value of the 25-year-old platform in the same moment that sometimes-faulty AI chatbots are ascendant. For example, research conducted by the BBC in December 2024 found that major AI models like OpenAI’s ChatGPT and Microsoft’s Copilot inaccurately summarize news when prompted, and a Guardian investigation in January 2026 found that Google’s AI Overview was showing users false and misleading medical information that put their health at risk.

All of this makes a knowledge platform that’s human-generated and rigorously monitored look pretty appealing. If there’s any indication that the public understands its necessity, it’s that the Wikimedia Foundation, which funds and supports Wikipedia, raised a staggering $184 million in 2025, a $4 million increase from the previous year.

At the same time, the Washington Post reported in August 2025 that “suspicious edits, and even entirely new articles, with errors, made-up citations and other hallmarks of AI-generated writing keep popping up on the free online encyclopedia,” forcing human editors to find and fix them. And Wikipedia is now working directly with the large language models that many users see it as counterbalancing. In January, the organization announced a new batch of tech companies that will train their AI models using Wikipedia Enterprise, a paid product allowing partners to access its content at scale. This isn’t an unprecedented move, but it raises concerns about the future of the early-internet staple. How will it maintain its human-powered identity amid AI’s chokehold on the internet?

There’s still nostalgia for the “old internet”

Wikipedia might have had a big year for fundraising, but it’s faced the same struggles with visibility as other digital publishers. Last October, the organization reported that its monthly human page views had seen a roughly 8 percent decline compared to 2024 and attributed it to the uptick in people using generative AI — which, again, uses Wikipedia as a source and provides the info directly to users — and searching on social media when they need information. (Research has shown that one in five Americans regularly gets their news from TikTok now, while the number of Americans using ChatGPT has already doubled since 2023.)

That’s not to say Wikipedia’s gone out of fashion. It remains a top source listed in Google search results and AI summaries. Over the past decade, its articles have been viewed a total of 1.9 trillion times. It was the ninth most-visited website in the world in 2025.

There also seems to be a niche, nostalgic appeal to Wikipedia that persists online. It’s something that popular Instagram accounts like @tldrwikipedia and @depthsofwikipedia have been able to capitalize on over the past few years. The latter, run by Annie Rauwerda, features screenshots of the site’s more specific and bizarre pages and boasts 1.6 million followers.

On TikTok, there’s a dedicated Wikipedia fandom, with users spreading the gospel of the website (and its app) and sharing their affinity for browsing random articles. Last fall, Chisom, a recent grad and substitute teacher who prefers not to share her last name online for privacy reasons, posted a TikTok saying she “unironically bought a Wikipedia hat.” It received a million views and loads of positive comments.

Chisom, 22, told Vox she grew up believing that Wikipedia was unreliable until a 10th-grade teacher demonstrated how well the site’s monitoring system works and how quickly corrections are made in real time. Now, she said, she’s become “rabbit-hole Wikipedia girl” and finds it much more user-friendly than Google’s AI Overview.

“I definitely use it more,” she said. “I used to use Google, and they would have a little summary of a celebrity — who they’re married to, their kids. But since they started doing the whole AI summary thing, that’s so unhelpful to me.”

The threat of AI lingers, but humans offer hope

Despite renewed enthusiasm for Wikipedia online, the future of the site seems tenuous as AI creeps into more aspects of our everyday lives, especially because seeing the website establish relationships with AI companies feels at odds with its human-first principles. Large language models have been drawing on Wikipedia for a while now, famously without permission and at a high cost to the site.

Tech journalist Stephen Harrison, who covered Wikipedia for Slate for years, told Vox that he sees the LLM partnerships as “recognition” by tech companies that “their long-term future depends on nurturing projects like Wikipedia.” He’s more concerned about the political attacks the platform has faced recently from people like Elon Musk. (Last year, Musk criticized and called to defund Wikipedia after his entry was updated to note a gesture he made during Trump’s inauguration that was widely interpreted as a Nazi salute. He’s since launched the rival website Grokipedia, with entries generated by his AI company xAI.) Harrison is also concerned about internet users “forgetting” about Wikipedia if they’re mainly consuming the site’s content through AI summaries.


Hannah Clover, a Wikipedian who has been working with the site since 2018, told Vox her concerns about AI’s impact are a bit less obvious. It’s not that she believes AI will ever replace human editors, but that its prominence will make sourcing harder.

“I worry about it more in the sense that a lot of the sources that we cite might become unreliable in the future,” Clover, 23, said. “We have a perennial sources list, and sometimes you have sources that were previously reliable that become unreliable because they start publishing AI slop out of nowhere.”

These AI deals show that Wikipedia remains a critical knowledge base. But it will inevitably be up to the humans who love it to keep the site going. Clover acknowledges that a lot of young people struggling to pay their bills may not have the time or energy to become Wikipedians who edit the site, but that’s “not for a lack of interest.” Harrison, meanwhile, sees independent creators, like Depths of Wikipedia, as crucial in keeping Wikipedia’s brand alive. “Social media influencers rely on Wikipedia as a sort of invisible foundation for their knowledge,” he said. For now, all the “old internet” nostalgia on TikTok gives him some hope for a revival.

“I grew up when Wikipedia was considered the Wild West of the internet,” he said. “It’s really remarkable how Wikipedia has, in a lot of ways, become this storied institution that people have all these feelings of nostalgia and affection toward.”
