Perspective

With Grokipedia, Top-Down Control of Knowledge Is New Again

Ryan McGrady / Nov 3, 2025

Grokipedia, the AI-generated encyclopedia owned by Elon Musk's xAI, went live on October 27. It is positioned as, first and foremost, an ideological foil to Wikipedia, which for years has been the subject of escalating criticism by right-wing media in general and Musk in particular. With Grokipedia, Musk wants to produce something he sees as more neutral.

Much has already been written about the character of Grokipedia’s content. This essay aims to explore the nature of the project and its version of neutrality, as compared to Wikipedia. Technologically, it is one of many experiments designed to replace human-generated writing with LLMs; conceptually, it is less a successor to Wikipedia than a return to an older model of producing officially sanctioned knowledge.

Wikipedia and neutrality

Nearly every encyclopedia asserts some version of "neutrality." Wikipedia's definition is unusual: its "neutral point of view" policy aims not to pursue some Platonic ideal of balance or objectivity, but rather a faithful and proportional summary of what the best available sources say about a subject. Original ideas, reporting, and analysis on the part of its contributors are not allowed. Casting volunteers as "editors" and not "authors" is part of how "an encyclopedia that anyone can edit" is possible — by moving the locus of dispute from truth itself to which sources to use and how to incorporate them. As with the rest of Wikipedia, neutrality is less a perfect state than a continuously negotiated process wherein disputes are expected and common. While neutrality and sourcing discussions are often deeply fraught, with complicated histories that blur lines of reliability and result in lengthy discussions, they're also constructive — a 2019 study in Nature found that articles with many such conflicts tended to be higher quality in general.

On which sources to use, Wikipedia's guideline on identifying "reliable sources" details its priorities: a reputation for fact-checking, accuracy, issuing corrections, editorial oversight, separating facts from opinions, no compromising connection to the subject, and other traditional markers of information literacy that librarians have taught students and researchers for more than a century. Secondary and tertiary sources are preferred, deferring to them for the task of vetting and interpreting primary sources. Independent sources are also preferred for any non-trivial claim, as article subjects have a hard time writing about themselves objectively. Ideological orientation is not a factor except insofar as it affects this list of priorities. Both of the following statements can align with Wikipedia's definition of a "reliable source," even though they're opposed: "unicorns aren't real but I wish they were;" "unicorns aren't real and I'm glad they aren't." Either source would take priority over one that claims "unicorns are real," regardless of the author's pro- or anti-unicorn sentiment.

Primarying Wikipedia

However, sourcing is also at the center, implicitly or explicitly, of many allegations that Wikipedia is not actually neutral. Some of these claims focus on Wikipedia's "perennial sources list," which includes dozens of sources whose reliability is frequently discussed, highlighted according to the outcomes of those discussions. The idea is to be able to point to a central page where someone can find links and summaries of past discussions rather than have volunteers explain for the umpteenth time why e.g. InfoWars is not a reliable source.

I agree with criticism of this page to the extent that it has given rise to a genre of source-classification discussion applied not just to extreme cases like InfoWars but to sources that require some nuance, indirectly short-circuiting debates that should take place on a case-by-case basis. But even if the list were deleted altogether, that wouldn't turn unreliable sources (according to the guideline) into reliable ones; it would just require more of those debates to play out rather than letting someone point to a line in a table. There's an optics argument to be had, too: it's not that there are more unreliable right-wing sources than left-wing ones; it's that people try to use unreliable right-wing sources in Wikipedia articles more frequently, so they come up for discussion more often.

But in large part, allegations of bias are a straightforward extension of a decades-old argument: that academia, science, mainstream media, etc. are broadly biased towards the left and/or untrustworthy. Whether through Rush Limbaugh's "four corners of deceit" (government was the fourth corner) or some other articulation, the frame is well established. The extent to which it is true is outside the scope of this essay, but anyone who holds this view will inevitably see that bias in Wikipedia, which summarizes academia, science, and media. Musk made this point earlier this year when he called Wikipedia "an extension of legacy media propaganda."

It should not be surprising, then, that the sourcing used by Grokipedia is often radically different from Wikipedia's. It's not clear how reliably Grok will explain its own internal processes, but it should at least communicate the way its developers want Grokipedia to be seen. So I asked it to explain the way it prioritizes sources for different kinds of content, and it provided a table that's worth including here; see below.

The most obvious trend is its preference on most topics for primary, self-published, and official sources like verified X users' social media posts and government documents. These are placed on par with, or at higher priority than, peer-reviewed journal articles, depending on the category. The only examples it provides among high-priority sources, apart from X users, are arXiv (itself contending with an influx of LLM content) and PubMed for scientific/technical topics, and Kremlin.ru for historical events.

Some of Wikipedia's fiercest critics contend that its version of neutrality unfairly endorses "Establishment" views on issues like vaccines, climate change, or the results of the 2020 US presidential election, omitting minority positions or describing them in unfavorable terms. If many people hold a view, the argument goes, it is worth presenting on its own terms rather than deciding one set of sources is better than another. Grokipedia appears to align with this perspective, as its low-priority source criteria explain that it is sensitized to "emotional bias," labels like "pseudoscience," and anything that doesn't present alternative perspectives.

There is another characteristic of the sourcing that will be immediately apparent to anyone who has tried to do a literature review on a subject using a chatbot: it relies on sources available on the open web (or sources widely described by sources available on the open web). Commercial sites with good search engine optimization, apparent content farms, and personal blogs appear alongside traditional media sources. Grok can find extant text on the web faster than Wikipedia's human editors, but does it have access to the books and articles that aren't internet-accessible?

A return to the old way

All of this is ultimately subordinate to Grokipedia's unavoidable prime directive: neutrality is whatever Elon Musk says is neutral.

According to the New York Times, Musk has been directly involved with Grok's development, nudging it to the right on several issues. Not only does Grokipedia extol Musk's personal worldviews, but, as pointed out by many of the news articles about the project, it "breathlessly" promotes him and his products. At the end of the day, it doesn't really matter what the training data is, how it's weighted, or how it negotiates points of view when the last step is necessarily some sort of post-processing, output-filtering, or reranking intervention based on Musk's final word.

For much of Wikipedia's history, journalists and academics have enjoyed comparing it to historical encyclopedias like the Natural History, the Encyclopédie, and of course Encyclopaedia Britannica. Sometimes, like with Giles' influential 2005 Nature study, it's to compare their factual accuracy, but usually it's to look at their structural and conceptual differences: Wikipedia is larger; Wikipedia is online; Wikipedia is accessible for free by anyone with an internet connection; Wikipedia is editable by anyone. But the most important distinction frequently gets lost: unlike nearly all historic encyclopedias, Wikipedia doesn't need anyone's permission to publish. There is no ideological test for participation or publication. There is no emperor, bishop, investor, or CEO who must approve of ideas expressed within, and there is no owner.

Whether due to the great expense of producing, copying, and distributing voluminous works or because of tight control that structures of governance have exerted on sources of knowledge, encyclopedists as far back as Pliny the Elder, in the first century AD, have always needed the support and consent of powerful people (Pliny had relationships with both Vespasian and Titus) in order for their work to be read. In this way, while Grokipedia is technologically new, with enthusiasm in some ways reminiscent of Wikipedia's early days, its epistemic hierarchy is more old-fashioned.

Who is the audience?

That brings me to my biggest question: who is Grokipedia for, other than its owner? How big is the market for corporate, for-profit general knowledge sources that promote their own products and strictly adhere to the views of a billionaire founder? I know that if any corporation/billionaire has that kind of cachet, built-in audience, and resources for a sustained push, it's X/Musk. But what happens when other CEOs decide they don't like their article on Wikipedia or Grokipedia and get into the encyclopedia game? McDonaldspedia and BritishPetroleumpedia vying with Grok for dominance?

Beyond the corporate nature of Grokipedia, my impression is that most people are not eager to trade human-created knowledge sources entirely for machine-generated ones. The format of Grokipedia obscures that it is fundamentally just structured LLM generation, and it thus succeeds and fails in much the same way as any other chatbot query, trading the limitations of human judgment for the limitations of LLMs. Given how much AI resentment has been bubbling up in various corners of the internet, I'm frankly surprised "Slopipedia" wasn't trending from launch.

For better or worse, and I increasingly think it's for the better, Wikipedia has developed something of an allergy to AI in general and chatbots in particular. Don't use them to write articles, don't use them to illustrate articles, don't use them to prepare arguments on talk pages, etc., or risk getting banned. There are a handful of non-LLM AI uses, but Wikipedia is human-centric to such an extent that it may miss opportunities to scale labor and improve user experience.

Perhaps Wikipedians are a potential audience. Even if, as argued by 404 Media's Jason Koebler, Grokipedia "is not a 'Wikipedia competitor' [but] a fully robotic regurgitation machine," its experiments in LLM-based encyclopedism may be valuable as an example of what Wikipedia could do if it wanted to. Does Grokipedia shed any light on particular topics that are better suited to LLM-generation than others? Does it confirm Wikipedia's status quo that LLMs have no business writing articles at all?

The most instructive experiment may be the opening up of primary and self-published sources for use in articles. There is no shortage of companies, influencers, and politicians interested in having their own words used to craft an encyclopedia article about them. Relying on subjects' own words doesn't usually serve a general reader very well, but excluding them has a downside, too: it omits a lot of potentially useful detail. Take a journalist, for example. There's not a lot of independent writing about most journalists, but a policy that welcomes primary and self-published sources could draw information about the person and their work from their own writing, and the article would remain more up to date than one that has to wait for a secondary source. What else is worth comparing?

Conclusion

Wikipedia, for all its many flaws, has always aimed to "set knowledge free" — by giving volunteers the ability to create and apply principles from the bottom up, using technology both to create a knowledge resource and to give it away for free, based on the belief that free knowledge is empowering. Opinions will vary about how successful it has been and where its blind spots are, but it's hard to dispute its idealism. In contrast, Grokipedia's defining feature as an encyclopedic project is the use of technological power to re-exert top-down authority over information and knowledge.

This piece is published with a CC BY-SA 4.0 license by agreement with the author. You may share, republish, or adapt it freely, as long as you credit the author, clearly indicate any changes, link to this original article at Tech Policy Press, and distribute any derivative works with the same license.

[Table columns: Article Type | High-Priority Sources | Medium-Priority Sources | Low-Priority or Penalized Sources | Rationale]

Table generated by Grok in response to a query about how Grokipedia prioritizes sources.

Authors

Ryan McGrady
Ryan McGrady is Senior Research Fellow with the Initiative for Digital Public Infrastructure at the University of Massachusetts Amherst. His work focuses on public interest internet research, with special attention to YouTube, Wikipedia, and TikTok. He is also a Researcher with Media Cloud and the M...
