The miscellaneous section of the village pump is used to post messages that do not fit into any other category. Please post on the policy, technical, or proposals sections when appropriate, or at the help desk for assistance. For general knowledge questions, please use the reference desk.
For questions about a wiki that is not the English Wikipedia, please post at m:Wikimedia Forum instead.
Discussions are automatically archived after remaining inactive for 8 days.
unsurprisingly the text is wall to wall AI slop...
...except on the articles they just straight up scraped from wikipedia, e.g., buttocks. you're telling me grok can't write about asses?
the siren call of AI slop is so powerful it overcomes what I assume is the whole point of this thing: [The Anita Hill hearings], viewed by over 24 million Americans via television, underscored the need for a feminism attuned to intersectional power dynamics beyond gender alone.
For fun, I searched for 'Justapedia' on Grokipedia. The first link is, bizarrely, to Adam and Eve (disambiguation). And what has Justapedia got to do with creation narratives, one might ask? No idea, really, since this is what Grokipedia has to say:
From [web:54] Justapedia, but better: since IMDb not directly, but for truth, include with available.
The mind boggles. And having boggled, moves on. To somewhere where combinations of words bearing a vague resemblance to making some sort of sense can be found. AndyTheGrump (talk) 00:14, 28 October 2025 (UTC)[reply]
You want some real hilarity? Search Grokipedia for "Grokipedia" [1]: it doesn't even know what it itself is. Which, yeah, the whole point of Wikipedia is that it is a knowledge base that is made and seen and maintained and fixed by people. Grokipedia is made by people in the same way that Soylent Green is... OwlParty (talk) 12:46, 17 November 2025 (UTC)[reply]
The Washington Post and Engadget wrote articles about its launch Here and Over Here. Right now, it has 885,000 articles. But with the support of its billionaire founder, I guess it will stay online and keep adding more articles. I believe Grokipedia is for-profit, unlike Wikipedia, but am unsure here.
PS: AI reportedly powers Grokipedia, and AI is excellent at building medical models which human researchers might take years to construct on their own. But news queries to AI can produce disturbing results, according to this source. I don't know if Grokipedia is getting its information from news queries or from Wikipedia's content. Best, --Leoboudv (talk) 00:16, 28 October 2025 (UTC)[reply]
Please don't use vague terms like 'AI'. Given how usage has changed over the decades, it is essentially meaningless. And if you are going to refer to 'research', provide a link. AndyTheGrump (talk) 00:23, 28 October 2025 (UTC)[reply]
I use the term "Generative Algorithm", since that's what it is. It isn't intelligence, it simply generates using an algorithm, hence the name "Generative Algorithm". It'll never take me.
I doubt a non-profit will be set up for Grokipedia, so it would be for-profit. But I suppose the profit will probably be negative (just look at X/Twitter). Not much attention is deserved for a propaganda tool that lacks several crucial components that Wikipedia/Wikimedia has (Wikimedia Commons, Wikisource, Wiktionary...). MGeog2022 (talk) 14:11, 29 October 2025 (UTC)[reply]
since it's there, figured I might as well feed grokipedia's articles into the AI word/phrase frequency python script alongside their pre-mid-2022 wikipedia article counterparts; the main takeaways so far seem to be A) the same AI verbiage is overrepresented, B) except it really likes saying "causal" and "empirical" and generally being I Fucking Love Science coded and C) grok really really hates citing the New York Times Gnomingstuff (talk) 00:44, 28 October 2025 (UTC)[reply]
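A minimal sketch of this sort of frequency comparison (not the actual script mentioned above; it assumes the Grokipedia article and its pre-2022 Wikipedia counterpart have already been saved as plain-text files, and the file names and phrase list are purely illustrative):

import re

# Phrases to tally; swap in whatever "AI verbiage" you are tracking.
PHRASES = ["empirical", "causal", "multifaceted", "underscores", "delve"]

def phrase_rates(path):
    # Occurrences of each phrase per 1,000 words of the given text file.
    text = open(path, encoding="utf-8").read().lower()
    words = len(re.findall(r"[a-z']+", text)) or 1
    return {p: 1000 * len(re.findall(r"\b" + re.escape(p) + r"\b", text)) / words
            for p in PHRASES}

grok = phrase_rates("grokipedia_article.txt")      # hypothetical file
wiki = phrase_rates("wikipedia_article_2022.txt")  # hypothetical file
for p in PHRASES:
    print(f"{p:14s} grok {grok[p]:.2f}/1k   wiki {wiki[p]:.2f}/1k")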
Wow, that's crazy. I just tried to go to the page for the New York Times, but it wouldn't appear in the search box: it turns out that you have to search for _The New York Times_ with underscores, as that's how italics work in their software. Same goes for any other italic title, such as "_Oppenheimer_ (film)"
Elon wasn't very thorough, was he? His page says he is the founder of Tesla, while the pages for Tesla Motors and those of the actual co-founders do not claim Musk was a founder.
If you are referring to the Grokipedia 'Buttocks' page, it attributes Wikipedia at the bottom (of the article, I mean). Whether this is adequate, I'm not sure. AndyTheGrump (talk) 01:31, 28 October 2025 (UTC)[reply]
I missed that because I was looking at other pages. The attribution does not seem to be on Malaysia, for example. Striking this: looking more closely, while I recognised some of the text, that page does seem to have a lot of differences. I suspect the similarities may come from drawing on the training data? CMD (talk) 02:05, 28 October 2025 (UTC)[reply]
Sort of, but the relationship is fuzzier. For example, Grokipedia copies Piri Reis map pretty much word for word and attributes it. Its "newly generated" article on the cartographer himself, Piri Reis, mixes plagiarism of Wikipedia with hallucinations. Take this line, for example: "The Kitab-ı Bahriye, or Book of Navigation, is a detailed portolan atlas and sailing manual compiled by Piri Reis between 1521 and 1526, consisting of two versions: an initial edition with 130 chapters and a revised edition expanding to 210 chapters and 434 pages containing approximately 290 maps." That has two "citations" but neither one contains the 130 or 210 chapter count. One source contains 215 as a chapter count of a later copy. 434 pages is a hallucination. "detailed portolan atlas" is a description lifted from Wikipedia; the "cited" sources don't use the term "portolan". One calls it a "manual on the coastlines and islands of the Mediterranean Sea", which is true but does not equate to a portolan chart. The confused dating of the two versions likely comes from Grok not understanding "between 1511 and 1521" in the Wikipedia article and therefore correcting it into an error. Rjjiii (talk) 11:54, 28 October 2025 (UTC)[reply]
Grokipedia has quite good articles on non-controversial topics such as Ramesses II (aka Ramesses the Great), and I incorporated a small piece of information about the discovery of this king's original granite sarcophagus into Wikipedia's article on this king. I had known about this information but had forgotten about its rediscovery. But on so-called 'controversial' issues such as Lesbian, I notice Grokipedia uses words such as 'gender fluidity', which I disagree with, since a true lesbian would only love women. Or consider this quote under Grokipedia's paragraph titled 'Modern Definitions and Distinctions', where it says: "Modern discussions further delineate lesbianism from "political lesbianism," a 1970s radical feminist framework viewing same-sex relations as a deliberate political rejection of male dominance rather than innate desire, which empirical evidence on the biological and developmental origins of orientation—such as twin studies showing 20-50% heritability for female same-sex attraction—largely refutes by affirming its non-volitional nature." It is not really written from a neutral point of view (Wikipedia never uses such words) and cites this source in footnote 17. I have NEVER edited Wikipedia's article on Lesbian but must confess I don't even know what Grokipedia is saying here with this long quote, and I thought feminism was already a rejection of male dominance. Strange, --Leoboudv (talk) 04:27, 28 October 2025 (UTC)[reply]
While political lesbianism is/was a real thing promoted by radical feminists (it's very minor nowadays), that quote from Grok is very misrepresentative of political lesbianism, to the point of a strawman. Katzrockso (talk) 03:48, 29 October 2025 (UTC)[reply]
I started with Grokipedia's page on Pandeism ( https://grokipedia.com/page/Pandeism ) because its corresponding well-enough-developed Wikipedia page is something I know enough about to spot oddities. The problem here is that there aren't a lot of oddities to spot, because Grokipedia copied nearly the entirety. It did ditch the first header, and added links under "external links" to nonexistent webpages on "Pandeism: An Anthology" (a real book, but not found at the given webpage) and an even more nonexistent "The Pandeist Manifesto, Robert M. Avrett", which is purely a fiction. Wikipedia's page has 110 refs. The page copied from Wikipedia, which you'd think would copy those refs, instead offers six refs, one being an archived copy of an old version of Wikipedia's own page, the rest being either unrelated or totally made up links. Hyperbolick (talk) 05:41, 28 October 2025 (UTC)[reply]
LLMs by design have a hard time citing sources. Cursory checks sometimes fail verification: the facts are not wrong, but the cited source does not support them. Hyperbolick has found hallucinated citations. -- GreenC 06:30, 28 October 2025 (UTC)[reply]
The bigger headscratcher is that the copied Wikipedia page is very thoroughly sourced, as expected for an academic topic, so why does Grok copy just the body text but not the sources? Hyperbolick (talk) 07:07, 28 October 2025 (UTC)[reply]
Grokipedia not only copies WP articles, it also 'fact-checks' them. Now what really surprised me is that in the few articles I checked which were originally co-written by me, the corrections made by Grokipedia were actually on point! After diving into the sources, I even corrected the WP articles accordingly. I recommend that everyone check the Grokipedia versions of articles they have worked on and click on the 'See Edits' button in the top right corner. It gives you a succinct description of the 'issue', the 'fix', and the 'supporting evidence' Grok seems to have used. You of course need to check everything in the sources, but as an error-detector for WP articles it works beautifully. ☿ Apaugasma (talk☉) 15:32, 28 October 2025 (UTC)[reply]
I'd be very wary of doing that for anything the slightest bit controversial. Using intentionally-biased software as an error checker is a sure-fire way to introduce further systemic bias. It isn't looking for 'errors' in the abstract, but errors per its training & prompting. AndyTheGrump (talk) 15:44, 28 October 2025 (UTC)[reply]
@Apaugasma That's an interesting idea, you should spread it around. A while back a German newspaper ran a WP/AI fact-check experiment, and though it concluded that the AI was wrong as often as WP, it also found some errors that could be, and were, corrected by human Wikipedians. Gråbergs Gråa Sång (talk) 15:46, 28 October 2025 (UTC)[reply]
It might work better for articles on dry scholarly stuff. Agreed that for anything controversial it's perhaps not likely to be helpful. I've noticed too that it regards imdb as RS. It suggested some other unreliable stuff too, so it's really important to have a very firm grasp of what is reliable or not on the subject. But in other instances it suggested top-notch sources, and at least in one case (here) it used them to correct a mistake that I believe only a very few human experts on the planet would have spotted. ☿ Apaugasma (talk☉)16:15, 28 October 2025 (UTC)[reply]
I've seen it cite Facebook, Discogs, Fansly and WikiWand, too (and not, as we might rarely do, when discussing those sites or their users directly).
It suggested "no confirmation of detrimental effects" for treating epileptic seizures as demonic possessions with Florida Water for me so I'm going to remain mainly doubtful of its output—this is the type of "correction" that can kill people and I really think they should have kept anything medicine related off-limits for the model. Bari' bin Farangi (talk) 10:15, 2 November 2025 (UTC)[reply]
An interesting response from historian Kevin Kruse: "Took a look at the entry they have for me in Elon's Grokipedia. There are some surprisingly deep details, dredged up from interviews I'd long ago forgotten about, and then there are some incredibly big points that are completely wrong. ..." (and he gives some examples of the latter). So it's possible that the entries will sometimes point us to useful sources. I have no sense how often that will be the case though. FactOrOpinion (talk) 01:58, 29 October 2025 (UTC)[reply]
What? No images at all. I'm already waiting for Grokimedia Commons :-D
Jokes aside, if we focus on the positive aspects of this, it will be bringing content from Wikipedia to people who wouldn't otherwise be reading it. If some people were dominated by politics to the point that they didn't even use Wikipedia because of political prejudices, now they will use the 99% of Wikipedia content that is non-political. MGeog2022 (talk) 19:56, 28 October 2025 (UTC)[reply]
Of course, the negative aspect is that Grokipedia is highly political in nature. What I mean is that there will still be only one sum of all human knowledge, and I think this is very important. Whether capable of thinking independently of Elon or not (yes, it seems that some people belong to the second group), all people share the same knowledge base for non-controversial information: Wikipedia. MGeog2022 (talk) 20:14, 28 October 2025 (UTC)[reply]
I'm amused that Grokipedia is even copying disambiguation pages like One Piece (disambiguation): Grok version. The content is essentially identical (even down to the Wiktionary mention) though with no internal links it is pretty useless. That this page was copied from Wikipedia is not currently mentioned on Grok's page. Dragons flight (talk) 17:09, 28 October 2025 (UTC)[reply]
In a somewhat different case, Grokipedia apparently started with Wikipedia's Global warming (disambiguation) page, but then decided to elaborate it into a full page of prose with the same "disambiguation" title that oddly mixes the scientific topics with the cultural references from our disambiguation page. Dragons flight (talk) 17:36, 28 October 2025 (UTC)[reply]
To highlight an example of political bias, compare Peace Through Strength with Peace Through Strength. The Wikipedia version highlights the phrase's origin with Neville Chamberlain, as part of his failed policies of appeasement with Hitler. The Grok article does not mention Chamberlain at all. And for good reason: this policy has been the Republican party platform since the 1960s, most notably associated with Ronald Reagan and the Cold War. The Grok article fails to mention Richard Nixon, who used the slogan during the Vietnam War. But Grok criticizes Biden for withdrawing from Afghanistan, even though Biden never even used the phrase. The Grok article has many other problems, such as attributing a quote to Eisenhower that was actually made by Truman. The errors are harmful and clearly intentional. It's straight-up disinformation and historical negationism. -- GreenC 19:26, 28 October 2025 (UTC)[reply]
If you believe something on Grok is a copyright violation, where the text is derivative of Wikipedia but not transformative, you can send a form-letter takedown request: Standard CC violation letter .. it's free and easy. Citations and facts on their own are not copyrightable; it would be the prose wording. I see some sentences they copied from Wikipedia that I originally wrote. It would be nothing but fun to blast Grok with valid notices of copyright infringement, and we could document it somewhere as well. -- GreenC 19:59, 28 October 2025 (UTC)[reply]
The Grokipedia article "Blood alcohol content" is not attributed to Wikipedia, but the HTML source code includes
{\"id\":\"0e70995c99a7\",\"caption\":\"Breathalyser 'pint' glass - 2023-03-27 - Andy Mabbett\",\"url\":\"./_assets_/Breathalyser_'pint'_glass_-_2023-03-27_-_Andy_Mabbett.jpg\",\"position\":\"CENTER\",\"width\":0,\"height\":0},
I wonder if all of the "original" articles have their paper trails in the source code like that? Piri Reis has the image file names of all the images from the Wikipedia article and "captions" that are very close paraphrases of the Wikipedia alt text:
"images\":[{\"id\":\"119b1a6a56d6\",\"caption\":\"A photograph of a bust, stored at a museum, of a bearded and turbaned man\",\"url\":\"./_assets_/PiriReis_IstanbulNavalMuseum.JPG
{\"id\":\"c10eb78bfdb5\",\"caption\":\"A color map of the Venetian lagoon with major rivers, canals, and fortifications\",\"url\":\"./_assets_/Venice_by_Piri_Reis.jpg
Grokipedia's takes on Wikipedia's contentious articles (especially those that deal with the "culture war") are quite interesting to read. For instance, the last paragraph of J.K. Rowling's lead is possibly the most disputed part of that article.
From 2019, Rowling began making public remarks about transgender people, in opposition to the notion that gender identity differs from birth sex. She has been condemned as transphobic by LGBTQ rights groups, some Harry Potter fans, and various other critics, including academics. This has affected her public image and relationship with readers and colleagues, altering the way they engage with her works.
In the 2020s, Rowling emerged as a vocal advocate for recognizing biological sex as immutable and for preserving women's sex-based rights and single-sex spaces, citing concerns over self-identification policies eroding safeguards against male access, which has drawn accusations of bigotry from gender identity activists despite her explicit affirmation of trans people's right to live without discrimination.
Honestly, what irritates me the most is the lack of consistency among articles on parallel subjects. General fraternities (Sigma Nu, Alpha Sigma Phi, Delta Upsilon, etc.) have somewhat parallel articles on Wikipedia, but not on Grokipedia. Naraht (talk) 00:07, 29 October 2025 (UTC)[reply]
Naraht, check out the list articles. What seems to have happened is that something about the table formatting confuses their software. So Grokipedia has many "List of" articles similar to Wikipedia's. However, for the pages here that are table-heavy, Grokipedia instead has prose articles about the topic of a list of whatever. So "List of choking deaths" mirrors Wikipedia and starts with, "This is a list of notable people who have died by choking", but some amazing articles like "List of accidents and disasters by death toll" are bizarre:
Lists of accidents and disasters by death toll enumerate catastrophic events—ranging from natural occurrences such as earthquakes, floods, and cyclones to human-engineered mishaps including transportation wrecks, industrial releases, and structural collapses—ranked in descending order of verified or estimated fatalities, excluding deliberate acts like warfare or terrorism.[1] These compilations draw from historical records, governmental reports, and databases like EM-DAT, which define disasters as occurrences overwhelming local response capacities and necessitating broader aid, with death counts often encompassing direct trauma alongside indirect effects like disease and starvation.[1] Predominantly, the uppermost entries feature geophysical and hydrometeorological hazards in populous, underprepared regions, as evidenced by the 1931 Central China floods along the Yangtze and Huai Rivers, which inundated vast farmlands and urban areas, yielding estimates of 1 to 4 million deaths amid poor record-keeping and subsequent famines.[2] Such tallies [...]
As of 15 November, the weird text hasn't been removed yet. They also have weird text in the list of explosion incidents:
Industrial explosions arise from rapid chemical reactions or physical detonations involving stored or processed materials such as ammonium nitrate, flammable gases, vapors, or combustible dusts in manufacturing, chemical processing, and storage facilities. These events propagate through confined spaces or atmospheric ignition, generating overpressures that cause structural failure, fragmentation, and secondary fires, often amplifying fatalities beyond the immediate site.
Such text wouldn't be needed when you are just listing "explosion incidents". This shows that Grok is mainly about scraping the Internet and rewriting it, without caring about the readability or usability of the resulting article. ✠ SunDawn ✠ Contact me! 00:35, 15 November 2025 (UTC)[reply]
I also looked up JK Rowling out of the same curiosity, and what struck me is that the content is all sourced to her website. It's interesting that Musk's idea of combating bias necessitates doing away with basic verifiability principles. I also looked up a more niche topic I remember lots of details about (Ondřej Kúdela) and Grok's account of the racism row just gets stuff wrong, and cites sources that don't support the text at all - in other words it just hallucinates and makes shit up like all LLMs do. – filelakeshoe (t / c) 🐱 10:39, 29 October 2025 (UTC)[reply]
Lots of media coverage. PRWeek had an interesting angle [5], and Rolling Stone was pretty interesting.[6]
I'm not going to reproduce it here, but I have a thread on Twitter where I note that the rewriting of articles appears to be much more extensive than my initial impressions suggested. For many long articles on major topics, Grokipedia is completely rewriting them. One of the rather inhuman features of this is that Grok tends to completely redo the sourcing. When comparing Grokipedia reference lists to Wikipedia reference lists on heavily edited articles, it is common for <10% of the citations to appear in both articles. For example, Earth has 293 citations on Wikipedia and 312 citations on Grokipedia's Earth. Only 2 of the URLs referenced by Grokipedia actually appear in the Wikipedia citation list, and that's despite Grokipedia covering many of the same topics in a similar order. Obviously, Grokipedia still has pages that were copied from Wikipedia with few or no edits, but there is also a lot of divergence happening. Dragons flight (talk) 11:57, 29 October 2025 (UTC)[reply]
From what I've seen previously regarding both ChatGPT and Grok output when requested to produce a Wikipedia article (or for Justapedia, where they are actually instructing people to use LLMs for article creation), 'citations', even when not hallucinated or so thoroughly scrambled as to be useless without spending far too long trying to figure out what is messed up, tend frequently to be guesswork - not the source confirmed to be supporting a statement, but something with a title that suggests it might. It doesn't actually cite anything, in any meaningful sense. Instead it next-word-guesses what it thinks the reader would like to see, as a string of text, like any other LLM output. Possibly Grokipedia is tuned to do a little better than this, but regardless, it cannot check anything that isn't online, and almost certainly does not confirm that a cited source actually supports what it is supposed to. The Grokipedia LLM is most likely rejecting actual citations it can't access, and searching for replacement online stuff with vaguely relevant-sounding titles, per its usual MO. Sometimes such citations might be useful as a search result, but none can in the slightest be trusted to actually fulfil their intended purpose. AndyTheGrump (talk) 12:22, 29 October 2025 (UTC)[reply]
On the citations, the first citation on the Earth page somehow has a numeral wrong, so 149.7 is copied as 149.6, and neither the first nor the second source fully covers the content cited to them. The overall citation number has a variable relationship with the actual text. On the level of rewriting, the overall structure of the Earth article remains copied from Wikipedia, as you note. I find AndyTheGrump's guess as to why citations might change persuasive. CMD (talk) 12:23, 29 October 2025 (UTC)[reply]
For the 51 pages cited in purple on the Twitter thread, which had each been heavily rewritten by Grok, I've done an analysis looking at the domains being cited on both Wikipedia and Grokipedia. The result is not as bad as I might have initially expected, though there are some clear oddities. For example, Grokipedia seems comfortable citing Reddit, Quora, Facebook, and other user contributed sites that we would generally discourage for most uses per WP:RS. At the same time Grokipedia cites most traditional news media at moderately lower rates than Wikipedia (though some organizations like Washington Post, BBC, and The Independent have their citation counts cut sharply). Scientific sources appear to be cited by both, though the distribution is different with Grok really liking sciencedirect for some reason. And finally, Grokipedia seems unable to cite references that aren't online, leading most books to be excluded.
Of course, this doesn't establish that the Grokipedia citations are any good at all. As others have suggested, Grokipedia may just be adding links based on an expectation that links are needed, without clearly establishing that the linked pages support the referenced content. It is yet to be established that any of these links are useful, but looking at what sources Grokipedia is favoring may suggest something about the point of view that is being adopted.
The 100 most cited domains in a sample of 51 Grokipedia / Wikipedia pages
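A minimal sketch of the kind of domain tally behind a comparison like the one above, assuming each article's citation URLs have already been collected into plain-text files, one URL per line (the file names are hypothetical; requires Python 3.9+ for removeprefix):

from collections import Counter
from urllib.parse import urlparse

def domain_counts(path):
    # Count cited domains, treating "www.example.com" and "example.com" as the same.
    counts = Counter()
    for line in open(path, encoding="utf-8"):
        url = line.strip()
        if url:
            counts[urlparse(url).netloc.removeprefix("www.")] += 1
    return counts

wiki = domain_counts("earth_wikipedia_refs.txt")   # hypothetical file
grok = domain_counts("earth_grokipedia_refs.txt")  # hypothetical file
print(len(set(wiki) & set(grok)), "domains cited by both")
for domain, n in grok.most_common(20):
    print(f"{domain:30s} grok {n:3d}   wiki {wiki.get(domain, 0):3d}")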
Oh hey, reddit is being cited by grok, that's nice. When AI uses reddit as a source, it usually is good, such as gemini. Which is a really good AI with very few problems. Gaismagorm(talk)17:07, 29 October 2025 (UTC)[reply]
Not in this case. The article for "Woman" attributes some puffery about Toni Morrison -- Toni Morrison's Beloved (1987), drawing on the historical trauma of slavery, earned the Pulitzer Prize in 1988 and contributed to her 1993 Nobel Prize in Literature, emphasizing African American experiences through nonlinear storytelling -- to an 11-year-old Reddit thread that contains nothing more than "yeah Toni Morrison's my favorite author" type comments. Gnomingstuff (talk) 23:57, 3 November 2025 (UTC)[reply]
sciencedirect.com is Elsevier, which publishes a large number of scientific journals. I presume the discrepancy is likely due to different citation formats, i.e. Wikipedia citing a paper using journal name, publication date, etc., while Grokipedia just links to the version on sciencedirect.com. Giuliotf (talk) 17:16, 29 October 2025 (UTC)[reply]
Dragons flight, thanks. Grok does cite books but will link to the publisher website, for example oup.com - it's unknown where Musk got his training material from; it's costly to acquire and digitize books (see the Anthropic case). There were rumors he raided the Library of Congress, which had a few million digitized books. — GreenC 04:19, 2 November 2025 (UTC)[reply]
You can sort of see vestiges of what Grok's web search was looking for if you go into the HTML/network requests and look at the "description" field on the citations. A lot of them will have "Missing: _____" at the end, which I assume indicates the general ballpark of the search query it was using. Gnomingstuff (talk) 20:45, 29 October 2025 (UTC)[reply]
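A minimal sketch of scanning a saved page source for those "description" fields to surface the "Missing: ..." residue (the field name comes from the comment above; the file name and the exact escaping are assumptions):

import re

html = open("grokipedia_page.html", encoding="utf-8").read()
for desc in re.findall(r'\\"description\\":\\"(.*?)\\"', html):
    if "Missing:" in desc:
        print(desc)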
Yes, I was sloppy. If a whole site gets blacklisted, then an individual URL can be whitelisted for a specific page (if it meets the requirements). What I meant is: If you get grokipedia.com blacklisted, then editors won't be able to casually post links in discussions such as this one.
I noticed recently that the Table of Contents on Wikipedia articles had disappeared. I eventually, by accident, found the TOC by clicking on the doo-hicky at the top left of an article. What accounts for this change? Is the disappearance something I did or a change of policy? I'm not necessarily expressing disapproval of the change -- just curiosity. Smallchief (talk) 21:39, 2 November 2025 (UTC)[reply]
They haven't disappeared for me, and I'm almost certain that they're visible by default. Is it possible that you clicked on the "hide" button without realizing it? FactOrOpinion (talk) 04:09, 3 November 2025 (UTC)[reply]
It's a sticky pref, so if you clicked it once, it will stay hidden (and vice versa). Unfortunately, since accidental clicks look the same to a web page as intentional ones, even an accidental click that you don't notice at the time will have the effect of hiding the TOC. WhatamIdoing (talk) 21:41, 3 November 2025 (UTC)[reply]
Aha! Thanks! I had shifted from the standard to the wider page width, apparently by accident. But, that raises another question. The TOC used to be integrated into the text of the article, following the summary paras. Now it's off on the side of the page. That seems an improvement, but when and why did that change happen? My curiosity as to mysterious events is unquenched. Smallchief (talk) 11:23, 4 November 2025 (UTC)[reply]
Hi. I'm feeling kind of depressed because of Grokipedia. I checked a couple of articles I made on Wikipedia that also appear on Grokipedia, and the Grokipedia articles are much longer and the sources seem legit to me. So what's the point of editing Wikipedia anymore if AI is going to just make better and longer articles about topics? (Here are the articles I've made that also appear on Grokipedia: Indian burn/Indian burn and Peacocking/Peacocking.) --Pek (talk) 07:34, 4 November 2025 (UTC)[reply]
Other people will cover the current reliability of Grokipedia with more relevance than I could, so I'll stick to the emotional introspection portion of your question. What was your reason for editing Wikipedia before? If there was a perfect machine which could produce better articles than you ever could, would you be sad that you lost a hobby or happy that the quality and quantity of information has risen? If you're editing Wikipedia for fun, keep going until it stops being enjoyable. If you're doing it to produce quality articles, keep going as long as you feel your efforts are worth it. The best volunteer work doesn't feel like a sacrifice. ~2025-31035-62 (talk) 11:13, 4 November 2025 (UTC)[reply]
I've checked an article I contributed to (well, translated) and quickly compared the two versions: Risiera di San Sabba
The Grokipedia article is much longer, but it is very much padded out with repetitions:
At one point it states Originally built in 1913 as a multi-storey brick rice-husking factory, the site's industrial infrastructure facilitated its adaptation for detention purposes during World War II.[3]
It later states The Risiera di San Sabba complex was constructed in 1898 as a dedicated rice-husking facility in Trieste's San Sabba district, approximately 4 kilometers northeast of the city center.[4][5]
It also later states This initial conversion leveraged the site's existing multi-story brick structures, originally built between 1898 and 1913 for industrial rice processing, to house prisoners amid the rapid German annexation of the Adriatic Littoral region into the Operationszone Adriatisches Küstenland (OZAK).[1]
And here we have a problem where the Grokipedia article contradicts itself. It has cited sources for both claims and it has correctly used those sources to extract a construction date, but it doesn't know what to do when sources differ. While it might have eventually got to the right answer, it is easy to see that this won't always be the case if it doesn't know how to deal with conflicting sources, particularly on a fact that is more controversial than the year when a building was built.
What we have is The building complex was built between 1898 and 1913 in the periphery of Trieste in the San Sabba (or San Saba) neighbourhood and was first used for rice-husking, giving it the name Risiera.[10] but we have a local government source about the location.
Another criticism I have is that the Grokipedia article has a lot of extraneous information that would be a better fit in other articles, but I guess they have to do it this way as they haven't figured out how to include links yet; for example, the entire Italian Armistice and German Annexation of Trieste (1943) section should be removed from the page, as the content is either duplicated elsewhere in the article or would better fit in a page about e.g. Operation Achse
Later in the article Grokipedia says In early 1944, extermination at Risiera di San Sabba primarily involved executions by firing squad, hanging, bludgeoning, and gassing via carbon monoxide emissions from a truck engine piped into a sealed room disguised as showers.[1] The cited source says nothing about rooms disguised as showers. I have no idea where it got this claim from, other than it being widely written about for other death camps and therefore Grok AI assuming that it would belong here as well. This highlights another issue: LLMs are statistical models that guess the next word based on what was written before. This works well for very notable subjects which have had a lot written about them, but on more obscure subjects it starts to struggle, and if they are similar to other, much more notable subjects then you are tempting fate if you trust the LLMs.
To be fair to LLMs though, this mistake is one that a regular human who is familiar with the Holocaust, but not this camp in particular, might make. What is far more damning is when Grokipedia says: Higher claims of tens of thousands of on-site deaths have circulated in early antifascist accounts but lack corroboration from physical evidence, as the crematorium's limited capacity—capable of processing roughly one body per hour under optimal conditions—could not sustain such volumes over the camp's 18-month operation without extensive remains, which were not documented post-liberation.[1]
While parts of that statement may be true, none of it is backed up by the cited source, and I have no idea where they got it from.
Later: Some Italian sources equate it to extermination camps to emphasize Nazi atrocities in the Adriatic zone, yet causal analysis of operations reveals a hybrid police-concentration model, with executions as reprisals rather than systematic genocide machinery.[2] This nuance challenges narratives inflating its role, potentially influenced by institutional biases favoring antifascist interpretations over forensic realism. The source doesn't actually say that; it is mostly a series of witness statements (incidentally, one of them seemingly contradicts the previous claim that the crematorium could only dispose of 1 body per hour), and the conclusions drawn from it, combined with the previous statement, highlight a disturbing trend of trying to minimize Nazi crimes and cast anti-fascist statements as unreliable.
This stuff then becomes unhinged:
The anti-fascist framing persisted through the 1960s and 1970s, culminating in the site's designation as a national monument in 1965, but faced challenges from revisionist critiques questioning victim tallies and the camp's extermination status versus its transit function for Auschwitz deportations. Events like the 1976 trial of SS officer Joseph Stolic, convicted for war crimes at the Risiera, revived the narrative by spotlighting survivor testimonies of gassings and burnings, yet exposed tensions as right-leaning voices alleged politicized exaggerations to sustain partisan myths. Mainstream academic and media accounts, shaped by postwar institutional biases favoring leftist historiography, largely dismissed such debates as neo-fascist denialism, prioritizing the site's role in perpetuating a "civic religion" of Resistance over empirical reevaluations of operational records or comparative camp analyses.[43][44]
Setting aside that I could find no mention of a Josef Stolic anywhere (the only person convicted of anything in relation to the camp appears to have been Josef Oberhauser, so this is likely a hallucination), this is an absolutely ridiculous framing for the issue that is being discussed. There is a serious discussion to be had about how some in Italy tried to dismiss anything bad that happened as the sole responsibility of the SS, the behaviour of different partisan groups towards each other and civilians, and the Foibe massacres, but this (and a lot of other parts of the article) gives WP:UNDUE weight to what it refers to here as "right-leaning voices". The two sources cited here do look like serious academic works which I'm unable to access due to them being behind a paywall, but from the abstracts I seriously doubt they back up the claims or framing made on the Grokipedia page, e.g.: It explores the ways in which emphasis on the period of the lager's functioning in the Adriatic Littoral Operation Zone from September 1943 to April 1945 reinforces perceptions of Nazi culpability and avoids Italian national reckoning with the realities of Fascist ethnic persecution and violence in the region. It examines how the monument and museum cast the partisan struggle as a united multi-ethnic front against the Italian Fascists and then against the soldiers of Hitler's Reich, leaving aside the unique and long-term contributions of autochthonous Croatians and Slovenes, subjected to ethno-nationalist persecution for more than two decades, who fought to defeat fascism and authoritarianism in the region.
I could go on, but I think I've made my point: Grokipedia might be OK for uncontroversial topics which have a lot of coverage, though you would still need to keep an eye out for hallucinations, but more obscure topics are likely to see more and more hallucinations. If sources conflict, Grokipedia doesn't know how to deal with that, and Elon Musk's political bias is clearly showing in how some topics are portrayed, making the whole project untrustworthy. Wikipedia may have its flaws, and may not have the best coverage of some topics, but human eyes are better able to make sense of the available sources, and its transparent process makes it a lot more trustworthy than Grokipedia, which is a black box controlled by one man. Giuliotf (talk) 11:54, 4 November 2025 (UTC)[reply]
Well, for starters, they usually aren't. There are some exceptions, but that just means we need to step up our game and not give up. Besides, Grokipedia lacks links and images, so we are still better. And our load times are better. I firmly believe AI is a fad, and eventually it will die out and only be used by people who have legitimate uses for it. Gaismagorm (talk) 11:56, 4 November 2025 (UTC)[reply]
On topic areas that I have been working on, the Grokipedia article contains blatant misrepresentations, misinterpretations or blatant errors, as well as pushing positions contrary to scientific consensus. I have no doubt that if you inspected those articles closely you would find innumerable errors, poor sourcing, SYNTH or other more subtle errors like Giuliotf has listed. Katzrockso (talk) 12:05, 4 November 2025 (UTC)[reply]
But as a Wikipedian can see, it wasn't Wales, or Sunday, and it's talking about WP:GOLDLOCK. I could have forgiven the last one, but per RSP, Gizmodo is supposed to be "generally reliable for technology." Gråbergs Gråa Sång (talk) 13:10, 4 November 2025 (UTC)[reply]
Gizmodo added a correction: "Correction, 9:05 pm. ET: An earlier version of this article stated that Jimmy Wales himself had locked the article on the genocide in Gaza, which isn’t true. The article was locked before he commented on it. Gizmodo regrets the error." — Chrisahn (talk) 21:06, 5 November 2025 (UTC)[reply]
Its sourcing sucks. Not only does it cite Reddit and Quora and Facebook for stuff, but literally the first source I spot-checked was a hallucination. The "Cat" (Felis catus) article cites the text This places it among the small cats of the Felinae subfamily, characterized by conical pupils and agile, cursorial adaptations suited for stalking prey to this document; its entry for Felis catus is almost comically terse, the page contains nothing anywhere about conical pupils or adaptations to stalk prey, and its only mention of "cursorial adaptations" refers to cheetahs, which are not small cats. So while some of this might be true, the "citation" is complete bullshit. Gnomingstuff (talk) 17:59, 4 November 2025 (UTC)[reply]
also, it very occasionally "slips out of character," resulting in hilarious shit like this from "Ken Paxton": The race was competitive amid Paxton's ongoing securities fraud indictment, but he maintained support in Texas's Republican-leaning electorate. No, don't cite wiki. Remove that. Wait, rephrase: The election occurred during Paxton's facing of felony securities fraud charges from 2015, yet he prevailed narrowly in vote share.Gnomingstuff (talk) 18:07, 4 November 2025 (UTC)[reply]
Better? Pffft. I read the article on Larries there and most of the new content is cited to social media posts, like Tumblr and Reddit. Since there isn't a prominent active anti-Larrie presence online, or an organized community, the article seemed to skew in their favour, and reading through gives the impression that hey! There's so much proof, it must be true! Wikipedia has the upper hand when it comes to reliability. jolielover♥talk03:39, 5 November 2025 (UTC)[reply]
Dear @Pek, though I personally think that Grokipedia is not and will not become a better encyclopedia than Wikipedia, I understand it is important that you feel that way. I think that something to consider is that Wikipedia is not necessarily about being the "best" or "better" encyclopedia. It wasn't created, for example, to correct flaws or biases in Encyclopedia Britannica. It was created in service of the noble idea of the Wikimedia movement -- that the masses of humans can together, in a somewhat-democratic way, categorize and explain all knowledge and share it free for all. That idea was important then and is important now, especially as we feel more isolated from our fellow humans. Even if the idea doesn't work as successfully as Grokipedia (though we'll have to wait and see -- it didn't work as successfully as Britannica at first), it still should be developed because it's an idea that helps fulfill our human potential, that is enjoyable, that creates memories and experiences like no other. That won't change even as competitors get better, because we will still work in service of this idea. ✨ΩmegaMantis✨blather 17:56, 7 November 2025 (UTC)[reply]
Better? Grokipedia has no images (people like to look at pictures). It lacks navboxes - the well-organized navigational maps between articles in a topic tree, a major element of classic Wikipedia. Its uppercasing, lowercasing, and italics mistakes are all over the place, and does it lack links, categories, and other valuable Wikipedia features? I haven't spent enough time on it, but these 'lacks' are concerning and hopefully it will correct them (talking to you Elon, come on, step up all the way if you're going to offer a full encyclopedia). That they are copying Wikipedia articles is not a negative, it shows that humans are still in charge and can offer the best that money cannot buy. Randy Kryn (talk) 12:19, 7 November 2025 (UTC)[reply]
"Grokipedia has no images..." Lol, Okay, someone PLZ mention to Elon that his GrossiPedia should have pictures.. Here on Wikipedia we have to deal with pesky copyright issues when adding images, but just imagine the possibilities that are possible with an AI generated encyclopedia! AI generated images! No copyright issues--what could go wrong? While we're at it: AI generated page layouts, AI generated templates, heck, even AI generated Talk Pages!!! AI generated users! With AI generated nominations of Articles for Deletion! I figure if he wants to make a "better" version of Wikipedia, based on AI, he's already missed the point entirely, and deserves to watch it just implode under the weight of it's own artificial UN-intelligence. OwlParty (talk) 12:28, 17 November 2025 (UTC)[reply]
@FactOrOpinion I read your message published on "NOV/05/2025" at "01:20 UTC".
Grokipedia is full of hallucinations and what would be SYNTH. It also clearly borrows from Wikipedia. The Odessa pogroms page cites several sources about pogroms in Russia in general, not necessarily mentioning Odessa. [7] includes the line, about the 1821 pogrom, Russian authorities, under Governor Ivan Liprandi, intervened to suppress the unrest, arresting several Greek perpetrators, and it is cited to Grosfeld et al 2020 [8] which does not mention any Russian governor intervening. Ivan Liprandi was a Russian soldier and spy who spent time in Odessa, but it's not clear to me that he was the governor, and I certainly cannot find where he may have intervened in the 1821 pogrom. It seems the governor in 1821 was Louis Alexandre Andrault de Langeron, and it's not clear he is discussed by any of the cited sources or any sources I could find in English about the topic. Andre🚐 10:01, 16 November 2025 (UTC)[reply]
Exactly what the question says. Does wikitext support this for CSS? My userpage uses a lot of custom CSS and has a bunch of contrast issues depending on which colour mode a user is on, which I need to fix by creating overprecise CSS. thetechie@enwiki (she/they | talk) 18:11, 4 November 2025 (UTC)[reply]
Hi @Vyacheslav84. The lead should be a summary of the article. It's good for the lead to duplicate—or summarize, rather—some content from later sections. However, the coverage in the lead should be shorter and, for technical subjects, simplified. You can cut any new details from the lead (second paragraph) and paste them into Spacecraft_electric_propulsion#Dynamic_properties, including the <ref> tags. I'm not sure if this answers your question. See Help:List-defined references for additional guidance on references. For future reference, Wikipedia:Teahouse or Wikipedia:Help desk is usually better for this kind of question but I am happy to try and help. Let me know if you have further questions. —Myceteae🍄🟫 (talk) 22:17, 5 November 2025 (UTC)[reply]
Is there anyplace on Wikipedia where a serious discussion can be had about "the future of Wikipedia, given AI coming, and our own recent problems with ideological bias in hot-button articles"?
I'm not interested in unproductive arguments that merely signal or make wiki editors feel good. But as a 21-year editor with tens of thousands of contributions, I'm seeing that a new epoch could be dawning, and would really benefit from a serious discussion with serious editors about this topic. Where is the best place to do this? N2e (talk) 03:56, 7 November 2025 (UTC)[reply]
WP:GROKIPEDIA doesn't replace Wikipedia (<-- read the link for why). Musk's claims of Wikipedia bias are self-serving and largely untrue. Other than that, not much has changed. We continue to be the best there is, based on the same powerful ideals of peer review and transparency that have been around for hundreds of years. -- GreenC 04:47, 7 November 2025 (UTC)[reply]
The serious discussion is not mainly about any particular AI information source, so definitely not about Grokipedia. And that wiki essay says it "has not been thoroughly vetted by the community" in any case, as it notes in its lede. N2e (talk) 12:51, 7 November 2025 (UTC)[reply]
I suspect a useful frame for thinking about the coming of age of AI information sources will be to look at them through an economic lens.
How will the coming of (increasingly better over time) information sources from AI affect the "demand" for what Wikipedia has provided to global readers for the past couple of decades? Wikipedia was unique and amazing, and obviously filled a great need for information in the early 2000s. Wikipedia is, as Jimbo Wales has said, one of the jewels of the internet. But Wikipedia will not be immune to new tech and new offerings from services that will have different cost structures for producing that information than that of human volunteers curating/writing/clarifying it. And we should stay aware of it.
Good comparisons of the services are hard today. AI general info sources are too new and, of course, rapidly changing. But the fact that we cannot do a good academic comparison doesn't mean that the global readership for information will not, gradually over time, move to competitive offerings that AIs will produce. The topic is and should be a valid discussion. Let's start by creating metrics, and watch them over time.
How will the changes brought on by the coming of AIs affect our "supply" side? How will they affect our human editors and their willingness to write, to struggle, to create new articles, to fix poor articles? I don't know, but as a data point of one editor with 50k+ edits over 20+ yrs, I can say it is already affecting my willingness to work on certain articles and topics. One characteristic that can already be seen is that the vast decrease in the cost to supply encyclopedic information (AIs will do much more, with less direct human input) is markedly decreasing my interest in doing certain kinds of research and writing. Other editors will have myriad diverse reactions to it. But it is unlikely that this will have merely a small effect over time. Let's watch it, monitor it, and think hard about it, rather than wave it off with a schoolyard word fight that says "Wikipedia is better" (and, by implication, always will be).
My take is that human-mediated global information curation will continue to have a place in the future. But I do not think the English Wikipedia of 2035 will resemble today's version as closely as the 2025 Wikipedia resembles the 2015 version. Change is coming, and I suspect we are at or near an inflection point.
What do others think? Little difference from previous changes in technology and the human socialsphere (say, with smartphones, social media more broadly, etc.)? Or do you see substantive changes on the horizon? N2e (talk) 12:51, 7 November 2025 (UTC)[reply]
In terms of metrics, human page views are reportedly down about 8% compared to 2024 per WMF. Is there another metric you are interested in? Regarding the supply side, the WMF theory is that fewer views = fewer new editors. It's probably not that direct, there's bound to be some sort of selection bias in terms of the sort of person who would seek out information in a particular way and the sort of person who decides to edit, but that provides another implication to the view count metric. CMD (talk) 13:01, 7 November 2025 (UTC)[reply]
There is no particular metric that I think will sufficiently demonstrate the effect, CMD. There is likely an index of various empirical data that might usefully be generated (and 'human page views' + 'new editors' would no doubt be two of the datasets in the index) to allow interested people who care about Wikipedia to monitor the competitive loss during the decade 'til 2035, where I would expect to see rather profound differences. I would posit that no plurality of editors, and certainly no majority of the Wikimedia Foundation board, is ready to accept such a view today. N2e (talk) 17:45, 8 November 2025 (UTC)[reply]
Distinguishing between human page views and non-human page views might be a challenge. I guess non-humans mostly talk to the API right now, but that may change as agents with access to our devices improve. Sean.hoyland (talk) 03:22, 9 November 2025 (UTC)[reply]
I think it's important to look at metrics other than views. I've written about this at User:Thebiguglyalien/Wikipedia is not about page views. I had the same thought that the reader-to-editor pipeline is the main worry here, but also that the 8% were less likely to become editors in the first place. Of course, we're already doing so little to reach out to readers and encourage them to edit Wikipedia that I have trouble believing this is people's top priority. Thebiguglyalien (talk) 🛸22:18, 7 November 2025 (UTC)[reply]
Page views are up (slightly) over the last five years[9], active users[10] and editing[11] are flat but strong, the main issue is getting new registered users[12] (although those figures are skewed somewhat by the COVID lockdowns) but that is a long term issue. -- LCU ActivelyDisinterested«@» °∆t°18:53, 8 November 2025 (UTC)[reply]
The number of new editors has been going down longer than that.[13] There are seasonal patterns (e.g., fewer during June and July), but overall the trend for our next generation is downward. WhatamIdoing (talk) 00:25, 9 November 2025 (UTC)[reply]
That would be expected though, as when Wikipedia was created every generation was a potential source of editor recruitment, whereas now older generations have presumably moved much closer to the theoretical cap of new editors. CMD (talk) 02:04, 9 November 2025 (UTC)[reply]
I cut each link to the last five years to show a common data set. Longterm data is useful, but this is a discussion about recent trends. You could go back further[14] but it doesn't add more to the discussion. -- LCU ActivelyDisinterested«@» °∆t°13:51, 9 November 2025 (UTC)[reply]
Not sure it can be measured, but if we knew whether "important topics" are actually improving or are stuck in a "too boring", "not interested", or "not knowledgeable about" limbo, that is how we would know if this project was going well. On the other hand, it seems almost certain we will never lack for editors of 'today's sensation'. -- Alanscottwalker (talk) 14:33, 9 November 2025 (UTC)[reply]
The stats on Wikipedia usage, editors, etc. are all quite useful, especially as we watch for deviations from trend. But with new competition from encyclopedically presented information produced by AIs, at vastly less human cost (higher productivity), we can't ignore the systemic limitations we've built into Wikipedia over the past decade, intentionally or unintentionally, which resulted in the biases we now exhibit. Our coverage of many political and controversial topics is too one-sided (COVID, societal lockdowns, climate science, why so many conservative or right BLPs are "far right" but few progressive or left-leaning BLPs are "far left", gender issues, ...; just to name a few). I suspect our policies and practices of making large groups of sources "unreliable" while deeming other groups "reliable" have caused a lot of this. But the result is that we have strayed from NPOV, and this has turned off a part of our readers and resulted in increasing publications and vocal opposition to "Wikipedia bias".
But the point is, these AI-generated or AI-assisted competitors to Wikipedia will not suffer from the same accumulated detritus that we do, and this will open up an opportunity for them to out-compete Wikipedia in the free and open information arena. Of course, AIs will have their biases as well, but what will matter, as far as our human audience goes, is where human readers choose to go for such information over time (and the AIs will use our Creative Commons licensed information as well). Wikipedia's vaunted position of the past two decades is likely to change substantively in the next. 2035 will see a very different Wikipedia, and very different usage of Wikipedia. Will/can Wikipedia change to meet the moment? N2e (talk) 12:45, 10 November 2025 (UTC)[reply]
why so many conservative or right BLPs are "far right" but few progressive or left-leaning BLPs are "far left"
If Wikipedia ever wanted to be the one, true encyclopedia, it was a ridiculous goal from the start, practically megalomaniacal. What Wikipedia promises is not being the one true encyclopedia, nor even being completely reliable (see our disclaimer on every page); it is being transparent about what we do, inviting critical thinking. Editors are not going to give up human critical thinking about sources, nor will we ask readers to give up human critical thinking about the sources and what they read, here and everywhere. Alanscottwalker (talk) 15:06, 11 November 2025 (UTC)[reply]
I think one problem with the way you have framed this discussion is with the idea that LLMs are, in any way shape or form, actual artificial intelligence. I've long been annoyed that we (as a society) started calling the new and improved generation of chatbots AI in 2023. Admittedly, they are far better at giving the illusion of intelligence than any chatbots that came before, but at the end of the day, they aren't actually intelligent or conscious.
Even if they were actually intelligent or conscious - and put aside for a moment that we really don't have a good, well-defined definition of what intelligence and/or consciousness is and is not - the fact that they are non-corporeal means that they cannot generate new information, merely rework and repackage information provided by humans. Now, to an extent, this is what our policies require us to do on Wikipedia - we are not allowed to present our original research, merely rework and repackage secondary sources. But if you really pay attention to how they re-work information you'll realize they can be shockingly bad at it. They have no concept of what is important in a document and what is not, they don't know how to accurately combine information from multiple sources while maintaining source to text integrity and avoiding plagiarism (something human editors also often struggle with), and they completely make things up to fill in any perceived gaps.
Now, you have referred specifically to changes in the next 10 years. While I think in the next 10 years these chatbots will continue to improve in their ability to fool people into thinking they are intelligent, I do not think we will see true artificial intelligence, and we certainly won't see it compact enough to be packaged into a robot body that can function without an internet connection and give it some sense of what the actual corporeal world is like.
In other words, for the next 10 years at least, humans will be the primary generators of information, while all LLMs can do is repackage it, poorly. I think we have a history of overestimating how much and how quickly things change in "the future", and I think a lot of the hype about how AI will change things falls in that bucket. Will things change? Yes. But in the next 10 years I think things will not feel like they have changed as dramatically as all that. We won't be living in the world of I, Robot.
What does all this mean for Wikipedia? In the short term (and I think of the next 10 years as the short term), I don't think much will noticeably change. Look at how little has changed in the last 10 or 20 years on Wikipedia.
19 years ago, when I started editing, we were dealing with several persistent problems: 1. Juvenile vandalism (such as inserting, for example, the word Penis in articles). 2. People trying to use Wikipedia to sell something or promote their pet cause. 3. The struggle to craft policies and guidelines to ensure that the information contained in Wikipedia was as reliable as we could make it. 4. Human personalities clashing in the way they inevitably do when you get a group of 10 or more people together and try to get them to pull in the same direction.
8.5 years ago, when I became an admin, we were dealing with several persistent problems: 1. Juvenile vandalism (such as inserting, for example, the word Penis in articles). 2. People trying to use Wikipedia to sell something or promote their pet cause. 3. The struggle to hone and enforce our policies and guidelines and ensure that the information contained in Wikipedia was as reliable as we could make it. 4. Human personalities clashing in the way they inevitably do when you get a group of 10 or more people together and try to get them to pull in the same direction.
Today, as we speak, we are dealing with several persistent problems: 1. Juvenile vandalism (just last week I deleted several pages under CSD G3 that consisted of nothing but the word Penis over and over). 2. People trying to use Wikipedia to sell something or promote their pet cause (CSD G11 is probably the most-used speedy deletion criterion). 3. The struggle to hone and enforce our policies and guidelines and ensure that the information contained in Wikipedia is as reliable as we can make it (this goes to a point you have made several times about bias - it is worth wondering whether, in deprecating certain sources which have proven to be unreliable, we have overcorrected. However, our ability to rationally have that discussion is complicated by the fact that the people who are most vocal in objecting to this overcorrection tend to also come across as promoting their pet causes). 4. Human personalities clashing in the way they inevitably do when you get a group of 10 or more people together and try to get them to pull in the same direction.
The rise of these LLMs has complicated these problems, especially since our ability to distinguish LLM-generated garbage from human-generated garbage is as unreliable as the LLM-generated garbage itself. We've been struggling to figure out how to manage this problem for around 3 years now. As they continue to improve, we will find it even harder than it is today to distinguish LLM-generated garbage from human-generated garbage. We will need to continue to watch for people trying to use Wikipedia to sell something or promote their pet cause - just now they will be assisted by LLMs in doing so. We will need to continue to craft and hone and enforce our policies and guidelines in order to ensure that the information contained in Wikipedia is as reliable as we can make it - including by ensuring that the policies we craft around the use of LLMs are actually actionable, and not a knee-jerk reaction to what feels like an unmanageable increase in junk. Human personalities will continue to clash, but now some of them will get an LLM to do their arguing for them.
Here are what I see as the biggest challenges LLMs pose for Wikipedia over the next 10 years:
1. The massive amount of LLM slop on the internet will make it MUCH harder for editors to identify reliable sources - but this isn't exactly new. The fact that all our content is licensed under Creative Commons means that even back in 2006, when I started editing, we had problems with circular referencing and citogenesis. The fact that Wikipedia is one of the largest freely-licensed sources available to train LLMs means that a lot of their source information comes from Wikipedia, making large portions of their output similar to the circular referencing and citogenesis issues we've been dealing with for 20 years. We'll just have to get better at training people to look for the needles of good sources in the haystack of crap.
2. The fact that LLMs do have some actual use in helping to edit and refine human input, and that schools have essentially given up on preventing students from using them in favor of attempting to teach students to use them "correctly", means that we will need to be flexible in allowing some limited use of LLMs in the writing process, and not shame people who actually use them as tools rather than getting them to do their thinking for them. However, the trend I see here is people wanting to reflexively ban them entirely, instead of experimenting and playing around with them in a spirit of curiosity, seeing the areas where they can do the maximum good with the minimum harm, and crafting rules for use around those.
3. Casual readers who, say, want the answer to a trivia question at the bar will take Gemini's AI-generated summary instead of clicking through to read the Wikipedia article - but again, that's nothing new. Even before Gemini's AI-generated summaries were available, Google would output the answer to a question like "How old is Hillary Clinton?" in a little box, so people weren't clicking through then. And even before that, when people would click through to the article, they'd skim to find the information they wanted instead of reading the whole thing.
4. No one is getting any younger. 10 years from now, today's 40-year-olds will be 50, 50-year-olds will be 60, and 60-year-olds will be 70, and the list of Deceased Wikipedians will have grown. Meanwhile, today's 10-year-olds will be 20-year-olds. If you ask me, the two groups of people best positioned to be Wikipedia editors are retirees and university students. They have the time, the resources, and the education. Today's 10-year-olds - the ones who are running around annoying everyone by yelling six seven, and last year kept saying skibidi toilet - will be prime age to begin editing Wikipedia. But if the schools don't start teaching them to think, to research and write and cite sources, if the schools let them get away with letting ChatGPT do their homework for them, we will have a really hard time recruiting editors and maintaining quality as today's retirees drop off and join the graveyard.
Change is gradual. We won't see a huge change in the next 10 years; just an acceleration and amplification of the problems we've faced for the last 20. But I worry about the next 20, 30, 40 or more if we continue on this course. ~ ONUnicorn(Talk|Contribs)problem solving20:03, 10 November 2025 (UTC)[reply]
There is no way I can spend the time reading and digesting the above wallpost, so I dropped it into AI and asked for a summary, and that only took 5 minutes to read and absorb. I think you made numerous excellent points about AI amplifying existing problems, not solving them, and about the problem of a younger generation that gives up on traditional methods of research and writing as they lean on LLMs. -- GreenC22:26, 10 November 2025 (UTC)[reply]
Thanks for taking the time to engage. AIs will progress as they progress (and here, I'm using the descriptive linguistics sense of AI, as most use the term), and I have no dog in that fight. I'm skeptical of your argument about the effect of AIs/AI technology on Wikipedia over a decade, User:ONUnicorn. In any case, AIs, broadly considered, will present a massive competitive option to our readership in the ongoing natural human search for information, and in this way they will greatly affect Wikipedia. Moreover, as they improve from the state we find them in in 2025, 2 1/2 years after the initial non-reasoning chatbot LLMs of 2023, we who toil in the mines of writing Wikipedia will find that the very competition AIs offer our readers, by providing alternative options, will (at the margin) affect the return (satisfaction, sense of long-term value, etc.) that we editors get from writing on Wikipedia, and thus many of us will write less, at the margin. Some may write more, or do many other things to make up for this technology evolution, but change is coming nonetheless. N2e (talk) 02:37, 13 November 2025 (UTC)[reply]
I wonder if the younger generations (Generation Alpha, Generation Beta, and so on) will prefer to use ChatGPT or other AI chatbots for information over Wikipedia, or will view Wikipedia as unappealing and outdated, much like how we view paper encyclopedias today. If so, that won't bode well for the project's future. I can see the English Wikipedia surviving (albeit in a slow decline) for the next two decades (by 2045, the oldest members of Gen Z, Alpha and Beta will be around 48, 33 and 18, respectively), but after 2045? I'm not so sure. Some1 (talk) 02:57, 16 November 2025 (UTC)[reply]
The danger of so-called AI is not that it will be so good that it replaces humans, but that people wrongly trust it. LLMs, when used by someone who knows they lie and make mistakes, can be a useful tool. Do not make the mistake of thinking LLMs think, let alone think deeply. Andre🚐10:06, 16 November 2025 (UTC)[reply]
this news doesn't seem to bode well for the future of the site - according to that PDF, the FBI gave Tucows a month. I think a month is enough for the person behind it to flee to somewhere FBI-proof, if they're not there already. sapphaline (talk) 12:59, 10 November 2025 (UTC)[reply]
(I'm not sure where the best place to put this is. Attempts at resolving this on his talk page have been unsuccessful, mainly because the user has not replied to either of my last two messages there, and a message box at the top of Wikipedia:Categorization/Noticeboard told me to go to the village pump; if there is a better place for this, feel free to move it there.)
I will admit that most of the chemical categories that I have created have names on the longer side, and there was a different user who expressed concern about this. However, I have never seen JWBE give any hint that this was his reason, and even if it was, Wikipedia:Categories for discussion would still be a better option (especially because deletion is not the only possible solution) than clearing my categories in order to get them speedy deleted, thereby bypassing consensus out of a sense of superiority fueled by the fact that most Wikipedians who participate in categorization do not have a PhD in organic chemistry. Also, in the discussion about merging Category:(cyano-(3-phenoxyphenyl)methyl) 3-methylbutanoates into its subcategory, no one mentioned the length of the category's title as a factor to motivate merging, even after the other reason (underpopulation) no longer applied. In fact, the nominator proposed merging it into its child category, rather than merging the child into the parent, even though the child category had a longer name than the parent category; this suggests that most Wikipedia categorizers do not consider the length of my categories to be a problem (although the sample size is somewhat small).
Additionally, I shall mention that when two categories that JWBE had created (Category:Gamma-lactams and Category:Delta-lactams) were tagged for speedy deletion, JWBE reverted those edits while calling them vandalism (links for gamma and delta). However, looking at the edit history of their pages and subcategories reveals that none of those members were added to either of those categories until after the respective category was tagged for speedy deletion.[nb 1] The most generous interpretation that I can think of is that JWBE forgot that he hadn't populated those categories yet and therefore (incorrectly) thought that someone else must have cleared them, and meant to call the clearing of his categories vandalism (although this seems unlikely because JWBE had given each of Category:Gamma-lactams and Category:Delta-lactams an additional parent category just a few hours before they were tagged for speedy deletion, so he likely would have noticed that they were empty then), in which case his insistence (based on the fact that he reverted my edits to repopulate it and called them rubbish) on clearing my categories would be hypocritical. Even in the more likely case (where JWBE knew that he hadn't populated those categories yet but referred to tagging the categories for speedy deletion as vandalism anyway), the only reason I can see for why he would feel justified in reacting this strongly to what is essentially another user's failure to read his mind (likely from thousands of miles away), yet have no qualms clearing categories that had had multiple members and thereby getting them speedy deleted, is a sense of superiority (or even perceived infallibility, to the extent that anyone who disagrees with him or makes an edit that he doesn't like must either be a vandal or be creating rubbish) due to being a professional chemist. This type of mindset, with its consequent reinforcement of double standards, would seem incompatible with following established conventions. For example, if he meant to refer to my category as rubbish, the fact that he seems to think that most people who participate in Wikipedia:Categories for discussion should not have a say in the categorization of chemical articles would explain why he would want to bypass consensus in order to get my categories deleted.
Seems fine, if a bit long. I don't really follow ANI all that much, but it seems more cogently written than 90% of the posts there. I would try to summarize more; there's no need to speculate on the user's possible motivations. That seems only to invite issues of WP:ASPERSIONS being cast at you.
The discussion style of The_Nth_User is in fact extremely voluminous, to the point of being unreadable. He should stop any contributions in chemistry and find better places of personal interest. JWBE (talk) 22:17, 12 November 2025 (UTC)[reply]
I have also seen a *ton* of edits from new accounts of this form. Is this a new way of referencing anonymous accounts? Is the mechanism to display names broken? Or is this some sort of weird scripting vandal attack? KNHaw(talk)06:38, 11 November 2025 (UTC)[reply]
Yesterday, I published this post on Requested articles. I want someone to create a new article about the Munich German dialect. I tried to create it before, but the article was deleted because it wasn't professional enough. Can someone with more skill at creating good articles recreate it? Karamellpudding1999 (talk) 08:01, 12 November 2025 (UTC)[reply]
Hello, I'm a student at LUISS University in Rome, and I'm working on a presentation on Wikipedia's crowdsourcing process. One part of the work is to put myself in the shoes of a Wikipedia contributor and find out what they feel when editing or writing pages. The questions I would like to receive answers to are the following:
What does the editor think and feel:
What does the editor say and do:
What does the editor hear and see (about their surroundings):
What are their pains (what type of frustration does the user feel when contributing):
What are their gains (what makes them feel good when contributing):
Active support is really needed, so thanks in advance, and have a great day.
You have already been told to read WP:NOTALAB. In my opinion at least, your research is being conducted inappropriately. You have continued to spam multiple user talk pages, uninvited, for which you risk getting blocked. You are also asking (badly-worded) questions without regard for anonymity, which your university should almost certainly have warned you against. You would do well to rethink your research, and do it properly. AndyTheGrump (talk) 11:01, 12 November 2025 (UTC)[reply]
I grant anonymity, and the questions are the ones from the empathy map, which is a well-researched method. I'm trying my best to conduct good research, and more people than you think have responded in a gentle manner. Tartaluca (talk) 11:04, 12 November 2025 (UTC)[reply]
Yeah, I understood what you're saying. I'm sorry I posted something on my user page. If you have any tips on how to continue the research in the right way, tell me. Tartaluca (talk) 11:16, 12 November 2025 (UTC)[reply]
Do you mind if I ask what subject you are studying at university? Don't answer if you don't feel happy to, but it might help us guide you if we had a better idea of what you are trying to achieve. AndyTheGrump (talk) 11:52, 12 November 2025 (UTC)[reply]
That might explain why you seem not to have been given proper guidance. What you are doing is engaging in social science research, where students are (hopefully) given a little more advice before conducting surveys etc. You say you are using Empathy map (on which we have an article, but not, in my opinion, a good one at all, so not helpful to this discussion), but you don't explain what you intend to do with your results. As it stands, the answers you get are going to be a whole slew of very different responses to some ambiguous and open-ended questions (along, possibly, with a lecture or two on the gender-related aspects of these questions). How do you expect to condense that down and summarise it all? Research involves more than gathering data; you need to be able to do something with it at the end. AndyTheGrump (talk) 12:23, 12 November 2025 (UTC)[reply]
Technically, they could be said to have started a discussion at the village pump even if it wasn't the intention of this section. Alpha3031 (t • c) 13:01, 12 November 2025 (UTC)[reply]
That's to help guide student editing projects, not to help conduct research on Wikipedia itself. I agree with AndyTheGrump that the questions as given will not lead to much, but whatever the case any researcher might benefit from gaining at least a little familiarity with the subject through the interviews posted on the Wikimedia Tiktok channel. CMD (talk) 13:21, 12 November 2025 (UTC)[reply]
Hi, I was wondering about this question myself when I was using the Vital Articles template, and wanted to reach out to the user @SethAllen623 to ask if he still works on his list, and if that is the case, whether he would accept any help. If anyone can guide me, that would be much appreciated! ~2025-33093-42 (talk) 14:50, 12 November 2025 (UTC)[reply]
I was prepared to donate today, £15, then the prompts started: “would you like to add 60p to cover the transaction fee”, OK, fine. Then “would you consider making this an annual payment”. Then “would you switch this to £3 a month instead”, and then “can we please contact you”.
Good for you. I hope more people do the same and the WMF realises that you can't be ethical but then throw ethics out of the window when you are raising funds. Phil Bridger (talk) 22:54, 13 November 2025 (UTC)[reply]
Agree… we all understand the need for donations, but I too am getting very tired of the constant pop-ups. To now hear that the WMF do an additional “hard sell” when you do try to donate is discouraging. Blueboar (talk) 23:23, 13 November 2025 (UTC)[reply]
I would find it very annoying as well if there are 3 more questions after the initial donation attempt. With Grokipedia/Encyclopaedia Galactica coming, the WMF should be doing more to combat the threat. The WMF is winning by thousands of miles today, but we should not be complacent. And annoying donors is one of the things the WMF should not be doing. ✠ SunDawn ✠Contact me!06:47, 14 November 2025 (UTC)[reply]
Hi @GimliDotNet, I'm sorry you had a frustrating experience and it's very useful to get this feedback. It looks like you were giving in the UK or Europe, where we are required to ask for consent to send emails to donors. That, plus the additional suggested upgrades on your initial gift, introduced too much friction.
We have been running some extra, short tests this month in anticipation of the end of year push. This feedback is very actionable to us, and we can look for ways to streamline the options we put in front of donors like you. Thank you very much for considering a gift and for taking the time to share this input. SPatton (WMF) (talk) 20:48, 14 November 2025 (UTC)[reply]
Why do you have to be required to ask for consent? Surely you shouldn't dream of sending spam anyway? This is what I mean by my references to ethics above. Phil Bridger (talk) 16:46, 15 November 2025 (UTC)[reply]
It looks like you were giving in the UK or Europe, where we are required to ask for consent to send emails to donors. ... Are you saying you're only asking for permission to send emails because you are legally required to?! FaviFake (talk) 16:43, 17 November 2025 (UTC)[reply]
If someone gives their email address, I would presume that they are consenting to be sent occasional emails without checking an additional checkbox, unless the law requires it. – SD0001 (talk) 13:02, 18 November 2025 (UTC)[reply]
Wikipedia's donation banners have become somewhat of a meme among the general (online) public now... An r/interesting thread appeared on reddit's front page yesterday (titled "Jimmy Wales, Co-Founder of Wikipedia, quits interview angrily after one question." -- not sure if I'm allowed to link the reddit thread here) and has some funny comments, e.g.
Wikipedia is so dying, like we're so dead but it's for real this time. Please bro can you spare three fiddy?
Indeed. Perhaps the WMF should consider the long-term effects of their banners on the site's credibility, instead of using them to juice short-term metrics. But there's a good chance that the people responsible for the banners won't be WMF employees in 10 years' time, and therefore have no incentive to maintain the long-term sustainability of the encyclopedia. novovtalkedits23:46, 15 November 2025 (UTC)[reply]
They are squandering the reputation of Wikipedia, built over a quarter of a century by the work of volunteers, to get a bit more money today. Ita140188 (talk) 07:14, 18 November 2025 (UTC)[reply]
When looking at the local title blacklist for Wikipedia, I noticed there was a section just called "COLBERT". The only line in this section was ".*corn[- ]?hole", and there was no further elaboration. Who/what is Colbert, why did they/it warrant their/its own section, and how will this impact Wikipedia pages being made about Cornhole(s)? AndyShow1000000 (talk) 20:43, 15 November 2025 (UTC)[reply]
Wiki Loves Children will take place from December 1, 2025, to December 31, on Bangla Wikipedia to enrich its content. A central notice request has been placed to target both English and Bangla Wikipedia users, including non-registered users from Bangladesh and the Indian state of West Bengal. Thank you. — Al Riaz Uddin (talk) 14:56, 16 November 2025 (UTC)[reply]
When editing while logged out, this text appears: You are not logged in. Once you make an edit, your IP address will be hidden behind a temporary account but can be viewed by administrators and other trusted users to prevent abuse. If you log in or create an account, your edits will be attributed to a username, only checkusers will be able to view your IP address, and you will be able to continue to receive notifications about your edits after this account expires, among other benefits. This is misleading. If you create an account, the account won't expire. The temporary account will. Please let me know if there is a better place to report this issue. Jens Lallensack (talk) 22:18, 16 November 2025 (UTC)[reply]
I usually only want to replace default icons with specific images, to reduce banner blindness. In this case, the icon looks much more ancient than the ones proposed in the RfC (in which it was clear I wasn't the only supporter), and my suggestion was opposed, therefore I'm posting here. FaviFake (talk) 16:54, 17 November 2025 (UTC)[reply]
Voting in the Arbitration Committee elections is now open
I'll just make a single post here. Pontius Pilate once replied to Jesus with the famous statement: "What is Truth?"
Well, according to this 17 November 2025 Guardian UK website article Over Here ('White nationalist talking points and racial pseudoscience: welcome to Elon Musk’s Grokipedia'), Grokipedia is a repository of white nationalist talking points and racial pseudoscience. It normalizes white supremacists and Holocaust deniers, praises white Rhodesia's "effective resource management", says "ethnic homogeneity fosters higher levels of interpersonal trust and social cohesion compared to greater diversity," and justifies the "whites-only community of Orania in South Africa, which forbids any non-Afrikaner taking up residence."
I know that Zimbabwe (formerly Rhodesia) has become a basket case after Mugabe's disastrous rule and mass nationalising of white settler land, mostly to benefit his own supporters. I also know that the conciliatory and tolerant President Mandela visited Orania in 1995 and said the town was entitled to run itself "as they like", but Orania unfortunately runs against Mandela's vision of a rainbow nation where white, East Indian and black people can live together... even though there are very, very high crime rates in South Africa. Grokipedia's normalising of racist viewpoints just makes it a right-wing form of X/Twitter, but with Mr. Musk's billionaire support. If he gets to define the truth, then black might as well be white and vice versa, since facts and truth don't matter anymore. I hope others can read the free Guardian article. This is my only post on this topic. --Leoboudv (talk) 02:10, 18 November 2025 (UTC)[reply]
Temporary accounts have completely replaced IP editing. In other words, when unregistered users make an edit, their edits are no longer attributed to their public IP address, but instead to an automatically created "temporary account".
This major change has rendered the block templates specific to anonymous users / IP users pretty much useless. Anonymous users no longer receive notifications for messages posted on the talk page of their public IP address (I've tested this myself), and furthermore, temporary accounts that are blocked from editing wouldn't be able to edit that public IP's talk page, but only the talk page of their temporary account, hence making even the block instructions in the block notice potentially misleading/useless.
It is pretty much a waste of admin time and server resources now to still place these templates on IP talk pages when blocking an IP address from editing.
Hence, what do we do with all these IP user-specific block notice templates? Do we delete them (precisely, remove the "anon=yes" feature and delete uw-ablock and uw-ipevadeblock), or is there still some potential use for them? If they are still useful, what are some potential changes/updates that may need to be made, following the implementation of temporary accounts?
Templates affected:
All templates in this list that are applicable to IP addresses (e.g. {{uw-voablock}} wouldn't apply), and/or have the "anon=yes" parameter
{{uw-ablock}} (a wrapper for {{uw-block}} with the "anon=yes" parameter pre-enabled)
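(For anyone unfamiliar with how these notices end up on IP talk pages: they are substituted rather than transcluded, typically with something like the line below. This is only a rough sketch from memory of the template documentation; the reason and duration shown are hypothetical values, and the exact parameter names may differ, so check the template pages before relying on it.)
{{subst:uw-ablock|reason=Vandalism|time=31 hours}} <!-- hypothetical example values; see the template documentation for the exact parameters -->
As noted above, this is roughly equivalent to {{subst:uw-block|anon=yes|reason=Vandalism|time=31 hours}}, since uw-ablock simply pre-enables the anon parameter.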
Yes, I am well aware that IP addresses may still be blocked from editing in cases where there's too much disruption from a single address/range, or if a vandal has 'outsmarted' the temp account system. It's just that the anon-specific block templates placed on IP talk pages are pretty much obsolete now, due to the lack of any notification system and the blocked user's inability to access or edit that talk page.
Another point I almost forgot to mention in the original post: anonymous users also won't be able to (easily) go to their IP talk pages, unless they know their public IP address and know to type it, along with the "User talk:" prefix, into the search bar to access it, which very, very few people would do. — AP 499D25(talk)11:29, 18 November 2025 (UTC)[reply]
I don't know what the standard protocol is for this situation, but I'd think we should always keep the documentation of historical templates that were used as widely as these. Deletion would probably consume more resources than it would ultimately save anyway. —Rutebega (talk) 16:39, 18 November 2025 (UTC)[reply]
All instances of these user block notice templates are actually substituted (not transcluded), and so ultimately nothing will really happen to the thousands and thousands of pages that have these IP user block notices if we delete the templates listed above. As for the fate of all the substitutions of the templates, they definitely should be left alone, especially given that we aren't going after and deleting every single IP user talk page either.
Just to be very clear: I am not asking for removal of every single "Anonymous users from this IP address have been blocked ..." message that exists on IP talk pages, ever; I'm just making a case for if we should discontinue and deprecate the usage of (i.e. new substitutions of) those IP user block templates listed above, due to several key issues I have pointed out. — AP 499D25(talk)07:26, 19 November 2025 (UTC)[reply]
So... since I've been paying attention, we usually have roughly 125,000 active users. I feel like it goes up to 140,000 in Northern Hemisphere winter and down to 115,000 in Northern Hemisphere summer.
We are currently at 225,000 active users. I've also noticed a spike in passersby doing what I gather are listed as "beginner tasks" on some page somewhere, which manifest as adding 1–3 wikilinks to an article. Like if the word oxygen was unlinked, it gets a link to oxygen.
So what I'm saying is I suspect we are being flooded with sleeper agents and someone somewhere is up to no good. Thought I would swing by and mention. I'm guessing there's already a task force working on this but just in case...this is my human intuition warning system providing an alert to the community. jengod (talk) 05:31, 19 November 2025 (UTC)[reply]
I think it's the switch from IPs to temporary accounts. I'm not sure if temp accounts now count as active users, but even if they don't, I've been seeing a lot more new users than before. jolielover♥talk06:19, 19 November 2025 (UTC)[reply]
AHAAAA! This makes sense and is very interesting. Thank you for explaining. I'm actually so relieved it's not some diabolical scheme bc who even knows how to deal w that. Thanks again you guys.
The answer doesn't rule out a contribution from one or more diabolical schemes, so we probably shouldn't spoil it for all the people out there with vivid imaginations who enjoy explaining things that way. It's an increasingly popular hobby. Sean.hoyland (talk) 12:30, 19 November 2025 (UTC)[reply]