This is the Village pump (all) page which lists all topics for easy viewing. Go to the village pump to view a list of the Village Pump divisions, or click the edit link above the section you'd like to comment in. To view a list of all recent revisions to this page, click the history link above and follow the on-screen directions.
Addition: "Administrators may choose to further delay running in an RRFA or administrator election by up to 6 months after the recall petition is closed: they will be temporarily desysopped in the interim upon declaring such an intention. The temporary desysop will be reversed if they retain adminship within 6 months by the means described below: otherwise it is made permanent."
Removal: "; they may grant slight extensions on a case-by-case basis"
Additional background: A recent recall petition and the administrator's subsequent request to be allowed to run in the next administrator election, which would start outside the 30-day window specified in the policy, led to this extensive thread at the bureaucrats' noticeboard. I see no clear consensus there as to whether the specific delay in this instance is permissible, or as to how to handle this situation in the future. Rehashing this conversation for each subsequent recall seems to me to be undesirable. Vanamonde93 (talk) 19:55, 25 October 2025 (UTC) Addendum: it has been brought to my attention that in this instance there appears to be 'crat consensus to permit an extension. Vanamonde93 (talk) 04:05, 26 October 2025 (UTC)[reply]
Support, as proposer. As I noted at BN, the community clearly intended for administrator elections to be a path for retaining adminship. However, only offering it to those admins recalled within the arbitrary window of 30 days before each call for candidates feels inequitable. Given the tendency for regular candidates for adminship to choose AELECT over RFA, I suspect this matter will come up again, and we will have further lengthy discussions about how much delay is permissible, which this proposal will eliminate. It also gives recalled admins more time to choose their path and reconsider their approach before asking to retain the tools, while simultaneously restricting them from taking bad admin actions. Vanamonde93 (talk) 19:55, 25 October 2025 (UTC)[reply]
Noting that the emergence of 'crat consensus to allow UtherSRG an extended timeframe to run in the coming admin elections only strengthens my desire to enact this, because it highlights the potential for difficulty with longer delays, and creates the possibility that an administrator's popularity will affect the community's perception of the delay. Obviating the need for an extension is the most equitable solution. Vanamonde93 (talk) 04:05, 26 October 2025 (UTC)[reply]
Oppose Per the maxim Justice delayed is justice denied, it seems best to act expeditiously rather than spin things out. Six months seems quite a long time and I don't like the idea that an RfA candidate would retain the right to a discount on the % required for success for so long when other candidates, who hadn't given cause for complaint, were not given this advantage. If someone is too busy to attend to an RfA then they can just resign and try a regular RfA later at a time of their choosing.
Note also that there's a procedural problem with this RfC. WP:RFC states "There is no technical limit to the number of simultaneous RfCs that may be held on a single talk page, but to avoid discussion forks, they should not overlap significantly in their subject matter." This RfC obviously overlaps significantly with the Recall check-in RfC above. Tsk.
Oppose I supported reconfirmation by election to avoid the confusion of an admin that preferred WP:AELECT needing to resign to access it during their temp desysop. However, like many expressed in the initial approval of this option, we should not extend the admin's lenience at RRFA and AELECT just to ensure an election occurs within their limbo. If someone really prefers elections, they can pursue it like any other user. ViridianPenguin🐧 (💬) 22:26, 25 October 2025 (UTC)[reply]
Mostly support. Recall only works when it is fair to all parties, and allowing someone to wait until the next admin election is fair. Allowing crats discretion to extend is fair. Sticking to rigid arbitrary deadlines is not - why would we penalise someone for starting an RFA on the 31st day vs the 29th day due to personal circumstances? Thryduulf (talk) 22:27, 25 October 2025 (UTC)[reply]
Weak support (prefer 3 months) I understand and don't oppose the general idea of giving an admin some additional flexibility around the timing of their RRfA. That said, 6 months is a long time; I would support a shorter window for this extension as a first preference. 2601:540:200:1850:CC47:61C6:19C6:6028 (talk) 22:55, 25 October 2025 (UTC)[reply]
If the goal is to allow admins to use the election process to RRfA, perhaps that could be spelled out as an exemption to a 3-month limit: "The temporary desysop will be reversed if they retain adminship within 3 months by a Request for Adminship (RfA) or at the next regularly-scheduled Administrator Election, regardless of date; otherwise it is made permanent." 2601:540:200:1850:CC47:61C6:19C6:6028 (talk) 23:02, 25 October 2025 (UTC)[reply]
That's a fair concern. I chose 6 months to ensure the window would always encompass an admin election. AELECTs are supposed to be held every 5 months, plus some wiggle room with scheduling. Vanamonde93 (talk) 23:02, 25 October 2025 (UTC)[reply]
Support for 2 reasons. First, the community has been uniformly happy with giving administrators the option to reconfirm via election. This proposal prevents that from being an empty option 4/5ths of the time. Second, it gives an admin the option to step back, address a concern, show some personal growth from the process and then reapply for adminship. The current system of an RRFA in the immediate shadow of a petition-generating controversy feels difficult to pass, and transforms 25 signatures from a statement of concern to a de facto permanent desysop. As a pleasant side effect, this should also give clarity to the crats, who would otherwise have hard decisions anytime a candidate wanted an extension for running in an election. Tazerdadog (talk) 23:27, 25 October 2025 (UTC)[reply]
Support, though I agree with S Marshall that "until the next election" is probably the better way to phrase this. We should make it as painless as possible for recalled admins, and this is a step towards that goal. The admin is desysopped in the interim, so there is no chance of further misconduct with the tools. HouseBlaster (talk • he/they) 01:44, 26 October 2025 (UTC)[reply]
Support, although I agree with S Marshall and House that "until the next election" is the better wording. This is a reasonable proposal that will enact the community's will to allow AELECT for recall by giving more flexibility for admins to stand at the next election. GothicGolem29 (talk) 03:27, 26 October 2025 (UTC)[reply]
Oppose. Six months is a long time on the internet, and would allow whatever issues that led to the recall petition to quietly fade from memory. They of course would still be welcome to run in an election, they would just have to follow the same rules as us normal folk. ~~ Jessintime (talk) 03:50, 26 October 2025 (UTC)[reply]
Support, for either RRFA or AELECT, with the temp desysop. Worried that the petition makes admins do RfAs at an inconvenient time? This solves that! Worried that the petition was started by a bunch of bad faith socks? Now you've got potentially 6 more months to prove that, bring the evidence to the community, and watch some SPI blocks get dropped before they show up to RfA. Worried that your favourite vandal and sock blocking admin had gotten too jaded and wish there was an option between having them ignore community concerns and removing them permanently? Then Vanamonde's administrative leave plan may be just the thing you're looking for! More seriously, I do get the concerns around giving somebody desysopped for cause more time for the community to forget (lol, we're Wikipedians, we dig up books from the 1930s about abandoned settlements for fun), and I really do understand that there's an inherent unfairness in turning away a potential new recruit who hit 65% approval rating while letting somebody who was desysopped for cause 180 days ago sail through at 55%, which I really don't like, but at the end of the day, I don't actually want to desysop admins. I want good admins. I believe that incentivizing a long period of reflection and a period of time without tools, where you have to run every single admin action past your peers instead of cutting corners, can only be a good thing. GreenLipstickLesbian💌🦋04:19, 26 October 2025 (UTC)[reply]
Support for both RRFA and AELECT. This was proposed in the earlier discussion, and I wholeheartedly endorse this. This proposal retains accountability for the admins (they lose their bits) while reducing the "temperature" of RECALL. If an admin is sufficiently flawed, the voters will inevitably bring out their mistakes anyway. But this allows any good admins having a "bad time" to have a gap to improve their behaviour and prove themselves to the community. If passed, I also think this should retroactively apply to every admin who resigned instead of RRFA in the last 6 months. Soni (talk) 04:32, 26 October 2025 (UTC)[reply]
Very weak support for AELECT I agree with S Marshall that it should be "until the next election." I oppose for RRfA unless it's only 2-3 months, in which case it would also be a very weak support. fanfanboy(blocktalk)05:06, 26 October 2025 (UTC)[reply]
Weakish Support - This feels like tinkering around the edges of a bad system, but anything is better than nothing in this case. This definitely should not preclude other changes or indeed getting rid of the whole mess of an RRFA system. FOARP (talk) 06:07, 26 October 2025 (UTC)[reply]
Support per many others, especially GLL. I'm not sure if the "next election" wording is better than a hard limit (6 months), since the former varies with time, which is a criticism of the current system. It would also mean the time limit for an RfA and AELECT could be different, which is odd. Toadspike[Talk]02:09, 27 October 2025 (UTC)[reply]
Musing on your final point, does it matter if the RFA and AELECT deadlines are different (this is a genuine question)? You've also got me thinking about the minimum times between petition closure and the stand/don't stand decision deadline. I'll put my comments about that in the discussion section below. Thryduulf (talk) 02:49, 27 October 2025 (UTC)[reply]
Support. I think there probably needs to be some tinkering after the fact to make it more concise and flow better with what's already there, since it's weird to say you have 30 days and then at the end of the paragraph say that actually, it's effectively 6 months (presumably if declared within the 30 days?). I would honestly just make it opt-out instead of opt-in if the point of this is to make it easier for recalled admins to "rehabilitate" themselves, to use a criminal justice term. It gives the admin time to schedule a potentially busy week for an RFA/admin election so they can put their best foot forward on how to address the inevitable questions, and allows sentiments to cool off for both the admin and the community. It also allows the admin time to continue to edit and show that they're addressing the issues raised in the recall (e.g. tagging and declining CSDs properly if overzealous CSD deleting was an issue). Maybe if memory is an issue, just make a link to the recall petition mandatory. -- Patar knight - chat/contributions 06:46, 27 October 2025 (UTC)[reply]
Support principle but not for 6 months until RRFA. It's reasonable to allow the re-appointment discount for a little longer, giving the admin time to consider what happened, whether they want the bit and how to go about it. But per S Marshall and others, only until the next election if choosing AELECT and only for 3 months, not 6, for an RRFA. We do, after all, want memories of the events, discussions and petition to be reasonably fresh and comparatively accurate (which may favour the candidate or may not). Three months also happens to be a little more than the average time from a petition passing to the next AELECT, on current timing. NebY (talk) 16:41, 27 October 2025 (UTC)[reply]
See my comments below about "until the next election" - that could be just under 6 months away, it could be minutes, it could be anywhere in between. Thryduulf (talk) 17:32, 27 October 2025 (UTC)[reply]
Yes, I'd seen those comments. That's three months on average, but I also note Vanamonde's comment above, "AELECTs are supposed to be held every 5 months, plus some wiggle room with scheduling." NebY (talk) 17:43, 27 October 2025 (UTC)[reply]
"On average" is fine in the abstract but not when it comes to an individual administrator. What matters then is how long there is until the actual next election - if nominations close imminently that's very very different to the next election being 2-5 months away. Thryduulf (talk) 17:55, 27 October 2025 (UTC)[reply]
Perhaps you haven't read my full support comment. I do support allowing the discount at the next AELECT. However, I don't support allowing the discount for an RRFA for up to 6 months and support up to 3 months instead, for the reasons I stated. I then noted - and it's regrettable if my noting it misled you as to the previous points - that 3 months is also (a little more than) the average discount period created by allowing the discount at the next AELECT. NebY (talk) 18:07, 27 October 2025 (UTC)[reply]
I have read your full comment, and I still think that you're missing the point that I'm making. I cannot think how to say what I've been saying any differently though, so I'm just going to hope someone else can. Thryduulf (talk) 18:18, 27 October 2025 (UTC)[reply]
I view this as largely academic (since starting with 25 opposes dooms a RRFA from the start, and I suspect that's by design); but it doesn't make sense for there to be a longer possible wait time if you choose to use the venue that, so far, has always resulted in much less scrutiny. —Cryptic16:48, 27 October 2025 (UTC)[reply]
Support per Tazerdadog. This gives all recalled administrators the option of running in the next WP:AELECT rather than being forced to go use WP:RFA as their reconfirmation process unless they get lucky, and it also lets both the recalled admin and the community take a step back, reflect, and approach the RRfA after some introspection, rather than being forced to do it immediately after some controversy. Mz7 (talk) 04:53, 28 October 2025 (UTC)[reply]
To avoid confusion regarding whether the admin is being elected or re-elected when their first request used the open viewpoint process, personally I suggest staying with the term re-request for adminship, which can proceed either through the open viewpoint process or the election (or secret ballot) process. isaacl (talk) 00:44, 29 October 2025 (UTC)[reply]
I would say the best way to avoid confusion is to have REELECT be the term for admin elections and RRFA be the term for RFA. This matches each process better, with RRFA referring to the process involving RFA and REELECT referring to the process involving the admin elections. GothicGolem29 (talk) 15:53, 29 October 2025 (UTC)[reply]
Yes, but that isn't a compelling reason to reduce accountability. RfA will be difficult whether it happens sooner or later. Delaying it only serves to distance it from the reasons Recall was initiated and certified, and those reasons should be a key component of those processes. Iggy pop goes the weasel (talk) 14:57, 29 October 2025 (UTC)[reply]
"those reasons should be a key component of those processes." Yes and no. They should be a component of the processes, but only in the context of their adminship as a whole. "Occasional mistakes are entirely compatible with adminship" is an oft-repeated principle at arbitration, but finding 25 signatories to a petition in the immediate aftermath of an isolated controversial decision is likely going to be very easy, so there needs to be a period to allow tempers to cool and ensure that the ReRFA is a fair reflection of the admin, not just of one incident. However, it is equally likely that the cause for a petition is ongoing chronic inappropriate adminning (with or without an easily-pinpointable final straw), and in that case there shouldn't be too long a gap between petition and ReRFA. This means that the timescale needs to be a balancing act between these competing directions and also remain fair to both petitioners and the admin. I don't think 30 days is long enough, but contra WAID I do think a year is too long. If admin elections were not a thing, I'd probably be suggesting 3 months, but admin elections are a thing and the community consensus was strongly in favour of both a 5-month schedule and allowing admins who are the subject of a certified recall petition to choose to stand in an election. We cannot control when petitions are certified relative to the admin election schedule, so to ensure that the community consensuses are respected without unfairly forcing admins to stand immediately after a petition closes, we have to allow the election interval plus circa three weeks, which in round numbers is 6 months. Thryduulf (talk) 15:37, 29 October 2025 (UTC)[reply]
Nothing under the current system prevents the context of their adminship as a whole being discussed or taken into account at RRfA or AElect, two processes by which all are able to identify their support or lack thereof. The discussion sections of Recalls have proven this. Iggy pop goes the weasel (talk) 19:39, 29 October 2025 (UTC)[reply]
It's correct that nothing currently prevents that, but it does discourage that. The discussion sections of recall are irrelevant by design. Thryduulf (talk) 19:48, 29 October 2025 (UTC)[reply]
Oppose - six months is too long, and enough with coddling troublemaker admins. They can run for RFA anytime they want, and they can stand in any election. 30 days at a reduced threshold is already a lot of leeway. Nobody else whose perm gets pulled gets this kind of indulgence. Levivich (talk) 16:01, 29 October 2025 (UTC)[reply]
I am proposing a window of up to 6 months during which the admins will no longer be admins. That's not coddling in any sense of the word. Vanamonde93 (talk) 16:18, 29 October 2025 (UTC)[reply]
It's coddling because they get the benefit of the lower pass thresholds six months later instead of just 30 days later. I appreciate that the proposal would prohibit tool use during the six months, I think that aspect is good of course, but still, six months is too long. If an admin wants to run six months after their recall petition is certified, they can just do so, at the normal thresholds. I think it's coddling because you're giving them a six month window for a full community review of their actions while enjoying the lower threshold privilege. Nobody gets this. I didn't get to delay any of the arbcom cases where I was a party by six months to a time that was convenient for me. The last one happened over Christmas and New Years, nobody gave a crap that this was bad timing. I get having a little leeway like 30 days, but I don't see why admins should get so much leeway as six months. Imagine an ANI thread and the reported editor says "can we talk about this in six months? I promise not to edit the article in the meantime." Nobody gets this privilege on Wikipedia, no reason to give it to admins. Levivich (talk) 16:59, 29 October 2025 (UTC)[reply]
It isn't accurate to say nobody gets this "privilege": I can think of at least three admins who received similar grace periods, when desysop cases were opened by ARBCOM and suspended until such a time as the admin chose to resume them. It's not accurate to say we don't extend the privilege to editors either. We have certainly closed noticeboard reports based on a voluntary commitment to stay away from a particular conflict. Now maybe you think that's coddling too, and I won't argue with that. But there's certainly precedent. And I will emphasize for anyone following along at home that the "privilege" is only the lower passing threshold, not a retention of the mop. Indeed the proposal will likely reduce the length of time that an admin can hang on to the tools after a successful recall petition, by obviating the scenario we just had and limiting that grace period to 30 days plus the length of RRFA/AELECT. Vanamonde93 (talk) 19:21, 29 October 2025 (UTC)[reply]
Oppose I don't believe that AELECT has proven itself to be fit for the task of the recall system. It produces admins, but I don't really think the evidence is there that the marked lack of scrutiny isn't a problem. Affixing two new systems to each other isn't a good idea. Stick with 30 days for an RFA. Parabolist (talk) 22:35, 29 October 2025 (UTC)[reply]
@Parabolist: AELECT is already an option. But only if the 30-day window after the closure of a recall petition overlaps with the call for candidates of an AELECT or - as happened this week - the bureaucrats grant a discretionary delay. I am seeking to abolish that discretionary delay, which is primed for inequities. Vanamonde93 (talk) 00:47, 30 October 2025 (UTC)[reply]
Right, but extending this window essentially guarantees the choice of AELECT. The inequity is that poorly timed (by my personal standard) recalls can allow for less scrutiny in how the tools are reconfirmed. So this solution does solve that, but by making everyone have the worse outcome. For the record, I'm against the crats allowing the extension they're allowing in this case, so I'm at least consistent! Parabolist (talk) 00:58, 30 October 2025 (UTC)[reply]
Acknowledging that our data about AELECT is still limited, I genuinely do not think admins would necessarily choose to participate in AELECT over RFA. As I see it the major difference is in voter anonymity. It's an open question whether editors would be more likely to support a recalled admin if they are anonymous. I suspect it depends on the popularity of the admin and the nature of their transgressions. You're entitled to your opinion of course. Vanamonde93 (talk) 16:37, 30 October 2025 (UTC)[reply]
Support mainly because if I had had the chance, I'd have chosen an administrator election instead of the classical RfA process, and because I'd prefer a re-election to a re-RfA. Whether this can be discounted as a biased vote with a conflict of interest, or given additional weight as one made with experience others lack after having experienced both RfA and ACE, I don't know. ~ ToBeFree (talk) 01:38, 30 October 2025 (UTC)[reply]
If they want the lower threshold for success that the community consensus says they are entitled to, then they can only do this if an admin election happens to be scheduled within about 30 days of the petition being certified. As elections only happen every 5 months, that's only a (very approximately) 20% chance. Thryduulf (talk) 04:06, 30 October 2025 (UTC)[reply]
Though given that the level of discretion hasn't been fully spelled out, as far as I know, the 20% figure you gave could change a fair amount (to the point where I would say there isn't even a very approximate percentage, given how much it could change). GothicGolem29 (GothicGolem29 Talk) 16:37, 30 October 2025 (UTC)[reply]
The level of discretion is not formally bounded, but given the comments at BN regarding the current case I'd be very surprised if it were extended much further. For the sake of argument, if we assume that the crats said an extra 20 days was acceptable but 21 days was not (I think this is more generous than it would be in reality) then that gives a 50-day window during which admins can nominate themselves for AELECT with the reduced threshold. The duration of the nomination window is not specified in the policy but it has been 7 days every time so far. So the 50-day and 7-day windows need to overlap, and let's generously assume that every part of the 50 days is equally useful (in reality it won't be due to real life commitments, not having prepared a nomination statement in advance, etc). The 50 day window can occur at any time, the 7-day window occurs only once every 5 months - so a maximum of three times a year.
If my maths is correct (and I'd really like someone to double-check if it is) then there are 414 possible 50-day windows with at least 1 day in a non-leap year. Only 21 of those overlap with a nomination window, which is actually very slightly over 5% - and that's with very generous assumptions. Thryduulf (talk) 17:49, 30 October 2025 (UTC)[reply]
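Since a double-check was requested, here is a minimal brute-force sketch in Python. The nomination dates are placeholders (not the real election schedule), and the 50-day and 7-day figures are simply taken from the comment above; it counts how many 50-day windows touch at least one 7-day nomination window in a 365-day year.

```python
# Brute-force check of the window-overlap figures above.
# Assumptions (illustrative only): a 365-day non-leap year, 7-day nomination
# windows starting on days 1, 151 and 301 (roughly every 5 months), and a
# 50-day eligibility window counted whenever at least one of its days falls
# inside the year.
WINDOW_LENGTH = 50
YEAR_LENGTH = 365
NOMINATION_STARTS = (1, 151, 301)  # placeholder dates, not the real schedule

nomination_days = {
    day for start in NOMINATION_STARTS for day in range(start, start + 7)
}

total = overlapping = 0
# Window start days from -48 to 365 inclusive: every placement with >= 1 day in the year.
for start in range(2 - WINDOW_LENGTH, YEAR_LENGTH + 1):
    total += 1
    window_days = set(range(start, start + WINDOW_LENGTH))
    if window_days & nomination_days:
        overlapping += 1

print(f"{overlapping} of {total} possible {WINDOW_LENGTH}-day windows "
      f"overlap a nomination window ({overlapping / total:.1%})")
```

Tweaking NOMINATION_STARTS, the window length, or the overlap rule shows how sensitive the resulting percentage is to those assumptions.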
Oppose and propose instead that any admin who receives 25 signatures for RECALL is immediately desysopped, and prevented from running for admin again until 6 months has passed, after which they may run again for admin (with no reduced pass threshold). Tewdar 15:01, 30 October 2025 (UTC)[reply]
Oppose per Levivich. The current system does not need amendment. If a former admin wants to request re-adminship after 30 days, they are welcome to do so at RfA (under the regular thresholds). Ajpolino (talk) 19:07, 1 November 2025 (UTC)[reply]
Oppose. If you prefer AELECT over RfA, then you can wait, just like everyone else. If not having admin rights for a few months is unacceptable for you, then you should not be an admin. Thebiguglyalien (talk) 🛸02:13, 2 November 2025 (UTC)[reply]
Sorry, I could've been clearer. My opinion is that a desysopped admin, even "temporarily", is just a regular editor and I've yet to be convinced that special considerations need to be given. Thebiguglyalien (talk) 🛸16:22, 2 November 2025 (UTC)[reply]
Support One of the pluses of RfA is you can choose when it happens. RfA is one of the most stressful things I ever did (on par with taking the bar exam). This is a volunteer project after all, and we are struggling to recruit and keep editors. Giving folks a little more leeway to choose a time that fits their life best is humane and sensible. CaptainEek Edits Ho Cap'n!⚓ 20:45, 2 November 2025 (UTC)[reply]
Support I find it a little weird that whether admins get to run in an election with the lower threshold depends solely on whether they happened to be recalled at the right time. While I'm not suggesting anyone has done so, it could easily lead to concerns that an editor has chosen to start the recall precisely at a time that prevents an admin from choosing an election. More significantly, one of the concerns expressed by those opposed to the way recalls are currently working is that a successful recall means the admin is going to be permanently desysopped, partly because their chances are already low, and insofar as they might have a chance with the reduced threshold, the stress of being effectively required to run an RRfA in an emergency rather than at a time of their choosing means the reduced threshold is basically pointless. Frankly, I'd prefer an immediate desysop upon successful recall and the admin then getting 6 months to decide whether to try to confirm their adminship over the current system (by which I mean they have to start an RfA or enter an election). While I appreciate that even under the proposed change, if the timing is off, an admin might still have to run in an election in an emergency, which isn't ideal, it strikes a decent balance, although I wouldn't be opposed to extending it to 9 months to give an admin the chance to not have to run in an election in an emergency. Although I appreciate this does mean memories of the problems with an admin will be less fresh, I still feel it's a decent balance, noting also that most recalls seem to have been for longer-term problems. Nil Einne (talk) 13:26, 10 November 2025 (UTC)[reply]
As Stifle pointed out, specific proposals got a bit lost there, as tends to happen with a general temperature-taking exercise. This proposal isn't limited to AELECT though. Vanamonde93 (talk) 18:30, 26 October 2025 (UTC)[reply]
In the voting section, several editors have commented about setting the next admin election as the deadline for an admin who is the subject of a certified petition to decide whether to initiate a new RFA/AELECT with the reduced passing percentage versus a fixed deadline (whether that is the current 30 days or something longer). The next election could be as long in the future as almost 6 months (nominations closed just before the petition is certified) or as short as (in theory) minutes but more realistically a few hours - all of which could be in the middle of the night in the subject's timezone or during some other period where they are unable to look at Wikipedia. This means an admin could go from being in apparent good standing to desysopped with little or even no warning at all. Obviously in extreme cases the crats would uncontroversially use their discretion and not insist on the literal meaning of "next election" (doubly so if there was any indication of gaming the timing of the petition or its closure). However, given the ongoing discussion about discretion in UtherSRG's case, if we're going down the movable-deadline route we need to put some guidelines in place for the minimum time before the deadline. Hopefully even those who see nothing wrong with the current system can agree that 5 days or less is unarguably not fair on the admin, but what if the close of nominations is 29 days after the petition was certified? If those choosing RFA get up to 6 months, does that mean that's the minimum someone choosing AELECT gets? With the possible exception of those opposed to any recall procedure in principle, I can't see anyone agreeing that 11 months (6 months minimum, plus up to 5 further months for the next election) is within the spirit of the process. Where in the middle of the extremes does consensus lie though? It needs to be long enough to enable the admin to make a considered decision and, if they choose to stand, to write a good nomination statement, but not so long that an admin who is actually and actively causing harm to the project cannot be reasonably curtailed. I should stress that this is explicitly not trying to influence consensus either way regarding this option, I'm literally just surfacing questions that need answers before it could be implemented. Thryduulf (talk) 03:22, 27 October 2025 (UTC)[reply]
It is largely to avoid these sorts of questions that I proposed an unchanging six month window that should always encompass an admin election that's more than a few hours after recall. Vanamonde93 (talk) 04:48, 27 October 2025 (UTC)[reply]
AELECT is too young a process to know how often it will end up running over time.
Thryduulf, I might be able to support a year-long window. It might be nice if de-sysopped folks took a little while to reflect on what went wrong and whether they want to re-commit to a community that just rejected them. A decision made while emotions are still running high might not be the best for anyone. WhatamIdoing (talk) 06:28, 27 October 2025 (UTC)[reply]
Small point: it might be better not to frame recall petitions as rejection by the community; formally speaking at any rate, that would come at an RFA or AELECT. Seeing a petition that way might even be making emotions run higher. Otherwise yes, taking time to take stock should be encouraged, assisted and if possible normalised. NebY (talk) 12:49, 27 October 2025 (UTC)[reply]
Going to self-close since the end result is uncontested:
A bot (presumably AnomieBOT) may tag pages that are unambiguously U6-eligible (0 non-userspace edits by user & 0 non-bot-flagged edits in past 6 months). It will tag however many pages newly meet that standard as of a given day, plus 150 old ones. All other details can be sorted out at BRFA.
The bot will use a |bot_timestamp= parameter in {{db-u6}} (sandbox diff) for all taggings (a rough illustrative sketch follows below this list). This will add the page to a day-based subcategory, with deletion if still tagged after 7 days.
I will update WP:U6 to confirm the provisional guidance that humans are discouraged from mass U6-tagging by script. Human tagging behavior is otherwise unimpeded; there is no 7-day wait for these taggings.
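As an illustration of the tagging step in the second point, here is a minimal sketch (Python/pywikibot). The timestamp format, double-tag guard, and edit summary are assumptions rather than the approved implementation, and the actual behaviour will be whatever the sandboxed {{db-u6}} and the BRFA-approved bot end up doing.

```python
# Illustrative sketch only: prepend {{db-u6|bot_timestamp=...}} to a page the
# bot has judged unambiguously U6-eligible. The timestamp format and edit
# summary here are assumptions, not the approved implementation.
from datetime import datetime, timezone

import pywikibot

site = pywikibot.Site("en", "wikipedia")

def tag_for_u6(title: str) -> None:
    page = pywikibot.Page(site, title)
    if "{{db-u6" in page.text.lower():  # crude guard against double-tagging
        return
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    page.text = "{{db-u6|bot_timestamp=" + timestamp + "}}\n" + page.text
    page.save(summary="Tagging unambiguously [[WP:U6|U6]]-eligible page; "
                      "deletable if still tagged after 7 days")
```

Per the close, the template then uses |bot_timestamp= to sort the page into a day-based subcategory, so reviewing admins only need to work through the subcategories that have passed the 7-day mark.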
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
"User subpages of users who have made few or no edits outside of user space, which have not been edited by a human in at least six months, excluding redirects, .js pages, .css pages, and Wikipedia Books. Promising drafts may be moved to draftspace by any editor as an alternative to deletion."
— CSD U6 (wording as of 14:17, 30 October 2025 (UTC))
An RfC has closed with the enactment of CSD U6 and U7, which replace U5 for the handling of userspace material by non-contributors. With U6, which calls for procedural deletion of most such pages if they go 6 months without being edited, there are two implementation details I'd like to follow up on. I'm not making this a formal RfC, because no major new consensus is needed here, but I'll make subheadings below for the two questions I have. -- Tamzin[cetacean needed] (they|xe|🤷) 14:17, 30 October 2025 (UTC)[reply]
There is some room for variation in interpretation of U6's "few or no edits outside of user space" rule and "human edit" rule, so some pages will need manual review to see if they qualify. However, in cases where the user has no edits (including deleted edits) outside their own userspace and there are zero edits in the past six months except by flagged bots, a bot could easily tag such pages with near-zero risk of error. I propose a bot to do just that; it would also check that the page is not a redirect, that its title does not end in .js or .css, and that its wikitext does not start with {{saved book}}. Per U6's wording, reviewing admins (or anyone else monitoring the category) would still be able to draftify in lieu of deletion. The bot would ignore pages created before May 2025 (6 months prior to U6's enactment); see below for other pages.
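For illustration, a hedged sketch of that unambiguous-eligibility test against the public Action API (this is not Anomie's actual code, mentioned below): "userspace" is approximated as the User and User talk namespaces, six months as 182 days, and the deleted-contributions and {{saved book}} checks are only noted in comments.

```python
# Sketch of the "unambiguously U6-eligible" test described above. Illustrative
# only: a production bot would page through long histories, check deleted
# contributions (which needs extra rights), and exclude {{saved book}} pages.
from datetime import datetime, timedelta, timezone

import requests

API = "https://en.wikipedia.org/w/api.php"
session = requests.Session()

def api(params):
    return session.get(
        API, params={"format": "json", "formatversion": 2, **params}, timeout=30
    ).json()

def is_flagged_bot(username):
    users = api({"action": "query", "list": "users",
                 "ususers": username, "usprop": "groups"})["query"]["users"]
    return "bot" in users[0].get("groups", [])

def unambiguously_eligible(title, owner):
    if title.endswith((".js", ".css")):
        return False
    # Owner must have no live edits outside the User/User talk namespaces
    # (an approximation of "their own userspace").
    contribs = api({"action": "query", "list": "usercontribs",
                    "ucuser": owner, "uclimit": "max"})["query"]["usercontribs"]
    if any(c["ns"] not in (2, 3) for c in contribs):
        return False
    # No edits in the past ~6 months except by flagged bots, and not a redirect.
    cutoff = datetime.now(timezone.utc) - timedelta(days=182)
    pages = api({"action": "query", "titles": title, "prop": "info|revisions",
                 "rvprop": "timestamp|user", "rvlimit": "max",
                 "rvend": cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")})["query"]["pages"]
    page = pages[0]
    if page.get("missing") or page.get("redirect"):
        return False
    return all(is_flagged_bot(rev["user"]) for rev in page.get("revisions", []))
```

Anything that fails this test, or is ambiguous (e.g. possible alternate accounts or contributions on other wikis, as discussed below), would be left for human review.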
I put together some bot code for this and the "old pages" task just below, on the assumption that this plan will get consensus. Rather than "ignore pages created before May 2025", though, I'm having it treat any page last edited by a human in May 2025 or later as being a current page rather than an old page. I also decided to exclude .json subpages (there are 12 that would be eligible), as those seem likely to need more human attention in case they're loaded by a .js page or something. And rather than running this under AnomieBOT, I'll probably create a separate bot account for this one. Anomie⚔03:40, 31 October 2025 (UTC)[reply]
I don't think we should engage in systematic (bot- or human-) tagging of these articles. This should be a viable option for a page that an individual human editor happens to encounter and believes that keeping the article would be undesirable for a reason specific to the page's contents. For example, please delete (i.e., hide from non-admins' view) a page with insulting content, but don't waste time deleting simple test edits. "Leaving a test page alone" is better than "Test edit page + another copy to tag it (every edit makes a separate copy on the servers) + admin time to verify that it's eligible + log entry hiding ('deleting') the pages". I think the most value we could get from a bot is one that removes bad tags, especially if it can see deleted edits. WhatamIdoing (talk) 19:13, 31 October 2025 (UTC)[reply]
We just had a monthlong RfC resulting in consensus that these kinds of pages should be procedurally deleted (with the option of checking for draft quality but no obligation), so I think any attempt to say that that shouldn't be enforced systematically is a nonstarter. I respect that you disagreed with that proposal but I'm hoping we can keep this thread focused on implementing the consensus that the community reached. -- Tamzin[cetacean needed] (they|xe|🤷) 20:15, 31 October 2025 (UTC)[reply]
Did the RFC actually develop a consensus that it's important to delete a couple hundred thousand old User: subpages, and that we should do so as expeditiously as possible, or did the RFC merely provide an "optional option" that could be applied to as many or – importantly – as few of the eligible pages as we choose to tag? WhatamIdoing (talk) 22:00, 31 October 2025 (UTC)[reply]
The RfC explicitly framed this as a procedural mechanism that would apply to all non-contributor subpages with no edits in six months (minus a few excluded categories). "Procedural" was in the name, and my opening !vote included the sentences "The logical solution to this is to make the deletion of unmaintained pages in non-contributors' userspace procedural, the same as it is for unmaintained drafts in draftspace. This means that the vast majority of U5 cruft will be deleted without anyone needing to assess it on the merits." (emphasis original). I do not think there was a single participant, for or against U6, who interpreted it as something optional that would only apply ad hoc. But I'm happy to ping SilverLocust if he'd like to comment as closer. -- Tamzin[cetacean needed] (they|xe|🤷) 22:09, 31 October 2025 (UTC)[reply]
@WhatamIdoing: The entire premise of the RfC was that U6 would be modeled on G13, which is primarily enforced by bot. Furthermore, RfC-level consensus is not needed to authorize a bot to enforce existing policy. This implementation thread, in which you are the only person opposed to this being done by bot (or one of two if one reads Blueboar's "why" as rhetorical), should suffice as consensus for BRFA purposes. You're of course welcome to raise the matter at BRFA when it's filed, though. -- Tamzin[cetacean needed] (they|xe|🤷) 03:27, 10 November 2025 (UTC)[reply]
If the entire premise was that this would be enforced by bot, why didn't the word bot get mentioned by anyone, ever? It seems like if the goal were to have a bot to tag 161,000 pages for deletion, then someone would actually mention that, at least once. Since nobody did, I question whether editors who were supportive of being able to delete these were actually supportive of this kind of bot-based mass CSD tagging. WhatamIdoing (talk) 03:39, 10 November 2025 (UTC)[reply]
If you want to get particular about it, there was at least one comment explicitly about automation. I couldn't tell you why no one specifically used the word "bot", but I think everyone at WT:CSD knows what G13-style procedural deletion looks like, and it involves bots. You are welcome to contest this with the closer, in a close challenge at AN, or in a follow-up RfC at WT:CSD, but this is the thread for implementing the RfC consensus, and procedurally cannot overturn its outcome. -- Tamzin[cetacean needed] (they|xe|🤷) 03:47, 10 November 2025 (UTC)[reply]
By my estimate, about 13% of the 2,014,835 non-redirect subpages in userspace are eligible for U6 deletion, and in about two-thirds of those cases the eligibility will be unambiguous (per the same definition as used above). It would place an untenable load on admins were someone to go and tag all ~161,000 unambiguously eligible pages. At the same time, deleting them all in one fell swoop would make it unfeasible for people to go through and rescue salvageable drafts, as U6 allows them to do.
So what I propose is this: On the first of every month, a bot will generate a list of the 1,000 oldest U6-eligible pages. The pages will be tagged with a custom version of {{db-u6}} specifying the one-month timer and putting them in a distinct subcategory. People will then have a month to look through those pages and draftify anything salvageable. At the end of the month, the bot will run a second time to remove any listed pages that are no longer unambiguously eligible and then update some template that will flip the relevant CSD tags from "pending" to "due for deletion" and move them into a different category. An admin can then mass-delete.
A note would be placed in U6 advising users not to tag pages from before May 2025 with U6 if the page is unambiguously eligible. People could still manually U6-tag old pages whose eligibility requires human analysis. After about 3 years this would become obsolete once we catch up with May 2025 (incorrect, see below); we could make it faster by picking a higher number than 1,000.
Thoughts on that? The other option here is just allowing for all ~161k pages to get mass-tagged, which I don't think would be the end of the world, but I do like the idea of leaving some room for draft salvage if people want. -- Tamzin[cetacean needed] (they|xe|🤷) 14:17, 30 October 2025 (UTC), ed. 07:22, 31 October 2025 (UTC)[reply]
I like the 1000/month proposal. If any single human edit makes the page ineligible for deletion, then the custom U6 template can simply state that if someone believes the page should be kept but not moved to draft space then they should just remove the template. Perhaps a log-only edit filter (or some other method) could track such removals by the owner of the userspace so that a human can review and take it to MfD if they think it needs to be deleted. This seems like the fairest solution for everyone. Thryduulf (talk) 15:28, 30 October 2025 (UTC)[reply]
Well, a single edit would remove them from that month's list, but would just restart the U6 timer, since the idea is for this to be like G13, which is exempt from the no-retagging rule in cases where the six-month window lapses anew. But someone having removed a U6 template should probably keep a page from being tagged by bot, as discussed in the subsection above, since human review may be needed to determine if the removal is "Shouldn't be U6'd yet" or "Categorically ineligible for U6" (e.g. it's in the userspace of someone with significant contributions on another account). -- Tamzin[cetacean needed] (they|xe|🤷) 15:56, 30 October 2025 (UTC)[reply]
"Significant contributions on another account" has just made me think of the one question I had during the RfC, but didn't ask. What about significant contributions on another wiki? Like, say, a foreign-language Wikipedia admin who makes an edit notice for their enwiki talkpage, or an editor who tries to start translating one of their articles to enwiki by dumping a few sources into a sandbox? Would these be categorically ineligible? (I dug up a couple of examples of pages like this, then promptly lost my notes.) GreenLipstickLesbian💌🦋17:23, 30 October 2025 (UTC)[reply]
I don't love the "few or no edits" wording, to be clear; it's a holdover from U5 because we couldn't find a clear better alternative at VPIL that wouldn't risk tanking the proposal, but I'd support changing it to something else. In the case where someone has edits on another wiki, well, they'd be subject to U6 by the letter of the policy, but note that anyone can decline a CSD if they don't think deletion would be non-controversial (excluding a few special rules like G4 and G5). -- Tamzin[cetacean needed] (they|xe|🤷) 18:43, 30 October 2025 (UTC)[reply]
{{db-u6}} currently has a "contest this speedy deletion" button. Should it be changed to the {{db-g13}} format (If you plan to improve this subpage, simply edit this page and remove the {{Db-u6}} code.) plus mention moving it to draftspace as a possibility? Chaotic Enby (talk · contribs) 19:39, 30 October 2025 (UTC)[reply]
I don't like 1,000 in a single batch. If we were going to do this IMO unnecessary and time-wasting thing at all, it would make more sense to do 250 per week, or even 33 per day. And maybe have the bot check the size of the category, and only top it up to the limit. That way, if admins decline to bother with these, the bot won't keep dumping new entries on top of the old backlog.
OTOH, I think we've just found the perfect solution for WP:INACTIVITY: Just go delete a handful of User: pages, and now you've "used the tools". Less-than-ideally-active admins should remember that everyone needs to share, so please limit yourself to about 10 of these deletions. WhatamIdoing (talk) 19:22, 31 October 2025 (UTC)[reply]
"Oldest" by latest edit timestamp or first edit timestamp? Either way it's going to be very slow. Sorting by current page length, on the other hand, is practically instant. Currently-longest page for zero live non-userspace edits and less than a thousand edits total; there are 17819 such with length 0, i.e. blanked. (I'd post the query but suspect we'd wake up tomorrow to find that every hit had already been meatbot-deleted, with no checks for alt accounts, viability in the draft namespace, etc.) —Cryptic17:39, 30 October 2025 (UTC)[reply]
The code I've put together takes around 15 minutes to scan all pages for matches, querying the database for batches of 1000000 page IDs, so not too bad. I wound up sorting the results by latest human edit timestamp in post-processing. Anomie⚔03:40, 31 October 2025 (UTC)[reply]
I did some querying and found 319306 User-namespace pages that seem unambiguously eligible by the criteria described here, after filtering out all subpages of User:UBX and all flagged bots. Looks like another few hundred could be filtered out by excluding a few unflagged bots operating under WP:BOTUSERSPACE, which pretty much by definition will have zero edits outside of their userspace. Also of note is that 246726 of the subpages (77%!) are "/sandbox", 12256 are "/Sample_page", 4403 are "/TWA/Earth", 3299 are "/TWA/Earth/2", 2886 are "/be_bold", 2882 are "/Sandbox", 520 are "/citing_sources", 386 are "/Editnotice", 346 are "/Enter_your_new_article_name_here", 250 are "/new_article_name_here", 214 are "/sandbox2", 213 are "/Evaluate_an_Article", 182 are "/About_you", 160 are "/test", 148 are "/WikiProjectCards/WikiProject_Women_in_Red", 146 are "/sandbox/", 143 are "/citations", 130 are "/UserProfileIntro", and 106 are just "/". Anomie⚔18:02, 30 October 2025 (UTC)[reply]
Thanks for crunching the numbers, Anomie. Presumably (almost) any bot, flagged or not, will be operated by someone who does have non-userspace edits, and "user" in policies like U6 and U7 generally refers to the individual, not the account (although in practical terms keeping track of alts is hard and the occasional mistaken U6 will probably happen on that basis; fortunately U6s can be speedily reversed, like G13). Anyways, point is, we can probably safely just have the old-U6-listing bot sort usernames containing "bot" into a separate list for manual review; once a username is flagged as a bot or other alt, it can be added to a list and the listing bot can ignore it in the future. -- Tamzin[cetacean needed] (they|xe|🤷) 18:50, 30 October 2025 (UTC)[reply]
I wouldn't strongly object to exempting those, if you want to propose that at WT:CSD. I'd weakly oppose, though, on the basis that adding exceptions complicates enforcement, and there is some small increase in vandalism potential by having unwatched pages sitting around. Neither would be a reason to make specifically those pages eligible for a CSD, but taken together IMO are enough of a reason to not exempt them, given that the pages are entirely value-neutral (i.e. while they don't cause a direct harm, there's also no harm in deleting them). -- Tamzin[cetacean needed] (they|xe|🤷) 20:18, 31 October 2025 (UTC)[reply]
"After about 3 years this would become obsolete once we catch up with May 2025": I think the math is off on that estimate. At 1000/month, 3 years would only take care of 36k backlogged pages. The estimated 161k pages would have taken about 13 years, and the actual 319k will take 26 years at that rate. OTOH, 1000/week would get through 161k pages in about 3 years, and the 319k in a bit over 6 years. Anomie⚔20:34, 30 October 2025 (UTC)[reply]
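A quick back-of-the-envelope check of those clearance times (assuming 12 months or 52 weeks per year, and the page counts quoted above):

```python
# Rough check of the backlog clearance times quoted above.
for backlog in (161_000, 319_306):
    for periods_per_year, unit in ((12, "month"), (52, "week")):
        periods = backlog / 1_000          # tagging 1,000 pages per period
        print(f"{backlog:>7,} pages at 1,000/{unit}: "
              f"~{periods / periods_per_year:.1f} years")
```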
I wonder if it would make sense to do something like "tag 1000 more when the category gets to <10 entries" instead of 1000/month. That way if people work through them quickly, they won't have to wait for a new batch. And having reviewers delete or mark-for-deletion as they go, versus trying to flip any left-overs at the end of a month, could save some duplicate work too. Anomie⚔03:40, 31 October 2025 (UTC)[reply]
It is possible that a sleep-deprived Tamzin, in xyr haste to post this promptly after the RfC closed, thought that there are 52 months in a year. I like where you're going with this rolling-window proposal, but I worry it leaves too much room for an admin just steamrolling through the category (as was infamously a problem with U5), since they will all be deletable at admin's discretion. What if we had the old-U6 template work like a PROD? One-week window for someone to rehabilitate the page (including by just removing the template to kick the can six months), and at the end of the week the page is deleted if still tagged. Every time the category drops below 900, the bot can add another 100. This also avoids the overhead of having to have a page where everything's listed, because things will just either be in the "will be deleted after a week's pendency" subcat or the "can be deleted now" subcat. -- Tamzin[cetacean needed] (they|xe|🤷) 07:22, 31 October 2025 (UTC)[reply]
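A sketch of the top-up throttle being discussed here: the category name is a placeholder, the 900/100 numbers mirror the suggestion above, and tag_for_u6 is the tagging sketch shown earlier on this page.

```python
# Sketch of "top up the review queue only when it runs low". The category
# name is assumed, and the thresholds mirror the numbers suggested above.
import requests

API = "https://en.wikipedia.org/w/api.php"
REVIEW_CATEGORY = "Category:Pending U6 deletions (old pages)"  # assumed name
LOW_WATER_MARK = 900
BATCH_SIZE = 100

def category_page_count(title):
    pages = requests.get(API, params={
        "action": "query", "format": "json", "formatversion": 2,
        "prop": "categoryinfo", "titles": title}, timeout=30,
    ).json()["query"]["pages"]
    return pages[0].get("categoryinfo", {}).get("pages", 0)

def maybe_top_up(eligible_old_pages):
    """Tag another batch of old U6-eligible pages only if the queue has drained."""
    if category_page_count(REVIEW_CATEGORY) < LOW_WATER_MARK:
        for title in eligible_old_pages[:BATCH_SIZE]:
            tag_for_u6(title)  # per the tagging sketch earlier on this page
```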
That'd be a long year! Or short months. 😀 I don't think I'd want the bot to do the deletion automatically (which I don't think you're suggesting either). I can code "if it drops below 900, add 100" as easily as any other numbers (there'll be a cap for internal reasons, but not one likely to be hit unless someone is steamrolling), and I can have the bot put pretty much whatever tag-wikitext we want. Beyond the bot part, I don't spend much time doing deletions (speedy or otherwise) so I don't have much of a strong opinion on how exactly they're handled by human admins. Wouldn't a prod-like process have similar issues to the steamrolling admin though, with things getting deleted without really being reviewed? Anomie⚔17:19, 31 October 2025 (UTC)[reply]
@Anomie: Well my thinking was that the one-week PROD-like window (or make it a month or something else) would provide the window for review by admins and non-admins alike, ensuring that even if the deleting admin doesn't look closely, there's been some chance to salvage drafts and pages that should be preserved for any other reason. Part of my thinking here is based on the idea that, with undeletion being as cheap as for G13, and most of these belonging to accounts who will never edit again, and most of these pages not being salvageable drafts but rather mostly being worthless, the cost of the occasional misfire across 300k pages is not as high as for, say, incorrectly deleting a new article as A7. -- Tamzin[cetacean needed] (they|xe|🤷) 18:44, 31 October 2025 (UTC)[reply]
If it's G3 or G10 or something, you can go ahead and do that as always. And personally I'd think a normal amount of manual U6 tagging would be ok, particularly if you're looking at editors who've ever edited outside their own userspace (which the proposed bot here won't handle). The thing here is more that we don't want people to decide to tag thousands of easily identified pages (semi-)automatically, it makes more sense to have a bot do it at an agreed-upon rate. Anomie⚔03:40, 31 October 2025 (UTC)[reply]
I agree that editors should feel free to U6-tag pages that wouldn't be tagged by the bot, and that limited manual tagging of pages that would be tagged by the bot should be fine. I could see doing that, for instance, for a page that has BLP issues (but falls short of G10 or BLPDELETE), rather than waiting for it to wind its way to the front of the bot's queue. The policy wording I'm picturing here is something like "A special process exists for pages created before May 2025 where the creator has zero non-userspace edits and there are zero edits in the past six months except by flagged bots. It is generally not necessary to patrol such pages, and editors should not do scripted mass-tagging, but it is permissible to tag one if you encounter one. For all other eligible pages from before May 2025, editors may tag as normal." -- Tamzin[cetacean needed] (they|xe|🤷) 07:14, 31 October 2025 (UTC)[reply]
Makes sense. Let me know if tagging on my part is too much.
For what it's worth, I think manual triage is probably a much better way to tackle this, where vandalish stuff is at the top and drafts are at the very bottom. I don't love the idea of indiscriminate random bot tagging for older pages -- feels like there's too much possibility to sweep up useful drafts and edge cases, when there are probably lots of pages that couldn't possibly be interpreted as useful. Gnomingstuff (talk) 13:39, 31 October 2025 (UTC)[reply]
How about indiscriminate sequential bot tagging? 😀 Seriously though, this part of the bot is more intended as "delivering the backlog for review in smallish chunks". If humans want to specifically search for drafts to rescue or vandalism to CSD-tag, that's fine. We just don't want a meatbot deciding to tag 319000 easily-identifiable pages all at once, or a meat-admin-bot blindly deleting them without actually looking at them. Anomie⚔17:29, 31 October 2025 (UTC)[reply]
Why do we (apparently) feel a need to hide these pages? How about "don't create a bot, don't worry about it, and just do everything manually, when and if you see a page that really shouldn't be kept"? WhatamIdoing (talk) 19:18, 31 October 2025 (UTC)[reply]
Your definition of "harm" seems to be more expansive than mine. Sure, User:Ozaloy/sandbox is the kind of thing people should post on LinkedIn instead of on Wikipedia. But it got just five (5) page views in the ten years(!) before you complained about it last month. I rate this as zero harm.
I won't stand in the way of this, and I know enough to know I may well be proven wrong in the long run. However I strongly suspect we are fooling ourselves if we operate on the assumption that there is a rate of deletion that will both result in careful review of every individual page and clear the backlog on any kind of reasonable time scale. Yes some drafts will be rescued, but the input-to-output ratio is going to be rather unfavorable. I can see good arguments supporting both the tag manually as encountered and delete everything via script positions, but I think that in trying to split the difference we are going to end up reducing many of the advantages of those two approaches while incurring a new drawback in the form of a guaranteed workload in addition to the inherent disadvantages of both that are retained. 184.152.65.118 (talk) 20:39, 2 November 2025 (UTC)[reply]
(To avoid doubt: I have an account, but I haven't logged in for the previous ~2 weeks. I am posting this logged-out, because right now, I have neither time nor energy to go through my watchlist, notifications, etc. I will maybe respond to comments when I log in.)
Firstly, I don't like the recurrent month-long cycle of nominating, reviewing, deleting pages. I dislike the recurring deadline for checking all the month's pages. I would say that if we want to salvage prospective drafts, one month for reviewing 1000 pages is not always enough. (SD is a niche area. There wouldn't be any feedback telling us which pages have already been checked and deemed deletable.)
Secondly, here is my counter-proposal:
Anytime, any (sufficiently privileged) editor could carry out any of the following actions on any page deemed eligible under U6 criteria:
delete it
draftify it
mark it as "endorsed for deletion by a non-admin" (This would be equivalent to adding a SD template to that page.)
Anytime, an admin may mass-delete (without review) pages that simultaneously:
have been marked as "endorsed for deletion" for an amount of time (This time could either be fixed (e.g. 2 months), or decrease based on number of "endorsements" of that page.)
meet the unambiguous criterion mentioned above ([...] where the [creator] has no edits (including deleted edits) outside their own userspace and there are zero edits in the past six months except by flagged bots [...])
Point 1 ensures that there is always enough work available to do for everybody. Point 2 ensures that the backlog of endorsed deletions doesn't accumulate (when there is too much admin work to do). If the varying-time variant was chosen, it would encourage editors to supervise others rather than to exacerbate the existing backlog.
There is just one problem: what about the main user pages of contributors whose pages violate WP:UPNOT? I guess when it is a spambot we can delete under G11, maybe other general criteria apply for other cases, I don't know. Aasim (話す) 14:59, 30 October 2025 (UTC)[reply]
Correct. If any G-series criterion applies, a top-level userpage can still be deleted under that. And per the newly-added wording at WP:UPNOT, if a top-level userpage would be eligible for deletion under [U7] if it were a subpage, it may be blanked by any editor. -- Tamzin[cetacean needed] (they|xe|🤷) 15:06, 30 October 2025 (UTC)[reply]
@Tamzin: What, exactly, is the difference between U6 and U7 if they both apply to subpages of non-contributors only and need a six-month waiting period? I've read the policy page and can't find any. Initially I assumed that U7 might also apply to main userpages but when I read the new templates that seems to not be the case. With a few exceptions the main thing I used U5 for (on its own and not alongside G11) was lengthy profiles on main userpages of non-contributors that were obviously autobiographical, very resume-like, and would require a complete rewrite to be published (if we're going to WP:AGF and assume that the person is both notable and has misplaced a WIP draft), and I think that self-promotion of this nature should still be deleted. Passengerpigeon (talk)19:17, 4 November 2025 (UTC)[reply]
In spirit, U6 is more similar to G13 (have all abandoned userpages expire so we don't have to worry too much about edge cases), while U7 deals with the cases that would be a problem even if the pages were still "in use". Chaotic Enby (talk · contribs) 19:32, 4 November 2025 (UTC)[reply]
Should we have a limit for U7 similar to the one in the section above? Checking for U7 requires more triage than U6, but based on a few test search queries, there are probably a lot of eligible pages. (For comparison, there's an analogous speedy criterion on Commons, and that category sometimes gets up to a few hundred entries per day depending on whether anyone's working on the backlog.) Gnomingstuff (talk) 21:32, 30 October 2025 (UTC)[reply]
I don't think a limit is urgent. Most of the pages now eligible for U6 weren't speedy-deletable before; everything now a U7 should have previously qualified for U5 (but not vice-versa). —Cryptic21:36, 30 October 2025 (UTC)[reply]
other U7 question -- what exactly qualifies as "personal life"/"creative writing" stuff? doing a trawl currently, a large amount of it seems to fall in a gray area between U6 and U7, such as this or this. Then there's stuff that would probably fall under (c) but is pretty short, like most of the stuff here. Gnomingstuff (talk) 22:31, 30 October 2025 (UTC)[reply]
Well, both of those would count as U6 to begin with. Part of the idea with U6 is that we get to be agnostic as to the merits of the page, only needing to decide if it was inherently problematic in the event that a user requests undeletion. To these specific examples, the first one can be deleted under G3 (might be on the margins, but it's within discretion IMO) and the second is valid use of a sandbox for testing, so should not be deleted. U7 would only come into play if either of the pages was being actively maintained such that U6 couldn't apply, but I don't think either meets any of the U7 subcriteria. And short personal content like "I love my friends" is intentionally excluded from U7 because "limited autobiographical content" is permissible under the userpage policy. -- Tamzin[cetacean needed] (they|xe|🤷) 06:59, 31 October 2025 (UTC)[reply]
Got it. Trying to stay really conservative here (a lot more conservative than my similar edits on commons, certainly) but of course it's not always easy to calibrate that. Gnomingstuff (talk) 12:55, 31 October 2025 (UTC)[reply]
@Gnomingstuff: Regarding some of your recent taggings: There's nothing in U7 saying it can't be used on a page that is U6-eligible, and I don't think there needs to be, but just speaking as one admin, if you tag a page for U7 that would also meet U6, you'll probably find me deleting it under U6, for the simple reason that it's much easier for me to check whether it's eligible. In the event the user tries to REFUND it, the reviewing admin can always make the determination then to decline if they think U7 also applies. So you might find it easier for both yourself and CSD admins to tag such pages as just U6. -- Tamzin[cetacean needed] (they|xe|🤷) 20:27, 31 October 2025 (UTC)[reply]
If you're working on it, I'll add that the U6 and U7 templates are now set to only work on user subpages, and show up as a warning banner otherwise, so there's less risk of mistagging. Cryptic also envisioned an edit filter for that matter; I'll drop a note at WP:EFR, and you could add that to the FAQ if it goes through. Chaotic Enby (talk · contribs) 14:52, 31 October 2025 (UTC)[reply]
There is neither a U7 template nor a template message. I have no idea what wording should be used in one, so I can't create it myself, but I feel it was a massive oversight not to have one ready first. LakesideMinersCome Talk To Me!15:19, 30 October 2025 (UTC)[reply]
If a page has been around for 6 months, it can wait another few hours for someone to create {{db-u7}} (if one didn't want to use {{db|u7}}), as was done while I was called away by IRL things. Nobody has yet tagged a page for either, though there was one non-test deletion (that happened to be incorrect) via the dropdown menu. ~ Jenson (SilverLocust💬) 17:08, 30 October 2025 (UTC)[reply]
(Someone's tagged at least one U7, which also didn't qualify for any part of U7 except that its creator had few edits. (Just like they used to for U5!) I speedied it as G11. —Cryptic17:23, 30 October 2025 (UTC))[reply]
Looking above at §§ U6: Bot tagging of unambiguously eligible pages and U6: Handling of old pages, I see no opposition on the former, and on the latter I see a rough consensus for doing something to prevent runaway mass-tagging, but not for my own original proposal in particular. Usually when that happens and then a thread dies down it's because things are too complicated, so here is my simplified solution, which requires no tracking page or anything like that:
A bot may tag pages under U6 in cases where no subjective assessment is required
A |bot_timestamp= parameter will be added to {{db-u6}}, to be used both for old U6s and new ones. When specified, it will add the page to a day-based subcategory, like with CAT:PROD, and the template's wording will say something like "Any user may remove this tag to restart the 6-month window, or may move this page to draftspace. Otherwise, it will be deleted on <bot_timestamp+7d>".
The bot's priority in tagging will be: first, all pages that hit 6 months on that day; then, a number of older pages not to exceed a total of 150. Further prioritization can be left to the bot op / informal consensus.
None of this changes how human tagging works, except that humans are discouraged from mass U6-tagging by script. Things in the main CAT:U6 work like any other CSD. The fact that a human made the decision to tag the page supplies the level of review that the 7-day window for bot-tagging is meant to encourage.
@Anomie: I had the latter in mind when I wrote this, but I was actually just thinking as I went to sleep last night how that could seriously delay what's already looking like a 6-year process, depending on the volume of new bot taggings. So I think each daily subcat should consist of however many are newly eligible as of that day, plus 150 old ones. -- Tamzin[cetacean needed] (they|xe|🤷) 02:47, 10 November 2025 (UTC)[reply]
"Newly eligible as of that day" would potentially be unreliable, if e.g. the bot is down for a day. I liked the earlier definition of "tag any with a last-human-edit after May 2025", although we may want to shift that to the day the bot gets approved instead of May 1 to avoid dumping a month's worth (currently around 600, IIRC) all at once. BTW, looks like the per-day numbers for the first week of November would have been 58, 54, 60, 42, 70, 83, and 77. Anomie⚔13:26, 10 November 2025 (UTC)[reply]
Yeah that works for me. So basically: The bot tags any eligible pages it can find that were created more recently than <six months before its BRFA approval>, plus 150 pages from before that date. -- Tamzin[cetacean needed] (they|xe|🤷) 13:29, 10 November 2025 (UTC)[reply]
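For anyone sketching this out ahead of the BRFA, a minimal illustration of the selection rule agreed above follows. Everything in it is an assumption for illustration only: the approval date, the helper that supplies the eligible pages, and the data format are placeholders, not part of any actual bot. Each tagged page would then also get |bot_timestamp= set so that it lands in that day's dated subcategory.
<syntaxhighlight lang="python">
from datetime import date, timedelta

# Hypothetical values; the real ones would be fixed at BRFA.
BRFA_APPROVAL = date(2025, 11, 15)            # placeholder approval date
CUTOFF = BRFA_APPROVAL - timedelta(days=183)  # roughly six months before approval
DAILY_BACKLOG_CAP = 150                       # older pages to tag per run

def select_pages_to_tag(eligible_pages):
    """Pick one run's worth of unambiguously U6-eligible pages.

    eligible_pages is an iterable of (title, creation_date) tuples supplied by
    whatever database query the bot operator settles on (assumed here).
    """
    newer, older = [], []
    for title, created in eligible_pages:
        (newer if created >= CUTOFF else older).append(title)
    # Tag everything created after the cutoff, plus at most 150 backlog pages.
    return newer + older[:DAILY_BACKLOG_CAP]
</syntaxhighlight>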
With three days since I suggested this, three supports, and no opposes, I'll suggest that if no one has any objections in the next day or two, this whole thread can be closed, with any remaining details to be sorted out at BRFA, and an explanatory note added to WP:U6 including the discouragement of human mass-tagging. WT:CSD can of course change any of the implementation in the future, likewise without need for a full RfC since none of this changes the core of the criterion. -- Tamzin[cetacean needed] (they|xe|🤷) 17:10, 11 November 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Marked historical as unneeded, unenforced or lacking consensus?
If C or D are adopted, the following guidance at WP:NCPLACE#Belgium would be removed: The Brussels naming conventions should be used for articles related to Brussels.
If C or D are adopted, a discussion would be opened to determine the status of the Brusselsname talk page template.
This page was marked as a guideline in 2009 by Oreo Priest after discussion on the talk page and a much more substantial discussion at Talk:Brussels-Capital Region. For those who are not familiar with the subject matter, Brussels is now a majority-francophone city, but it was historically Dutch-speaking. Place names in the city are thus the subject of controversy. As shown in the discussion, this topic area seems to have been subject to a substantial dispute on Wikipedia prior to the creation of this page. More than a decade has passed, and the dispute is mostly forgotten. Recently, two editors have removed the guideline tag, saying that it should be properly situated as a Wikiproject advice page. To come to a consensus about what we should do with this page, I have opened this RfC. Yours, &c.RGloucester — ☎06:43, 31 October 2025 (UTC)[reply]
B – This page is useful, and served to quiet a long-standing dispute over place names in Brussels. However, the page itself is not suitable as a standalone guideline, because it provides no original guidance. Instead, it explains how editors came to a consensus in this topic area based on our other policies and guidelines. It is not suitable as a Wikiproject advice page either, though, because it is currently referenced at the WP:NCPLACE#Belgium guideline, which specifies that editors should follow its guidance. Therefore, I think the best option is to retain the page, break it off from the Wikiproject, and make it an explanatory supplement to NCPLACE. Yours, &c.RGloucester — ☎06:43, 31 October 2025 (UTC)[reply]
B – An INFOPAGE tag reflects both the nature of this page's content and its unclear level of consensus. It might be worth looking at Category:Wikipedia naming conventions, which only has the main category for guidelines and a subcategory for proposals, so it doesn't provide a place for supplementary pages.--Trystan (talk) 13:54, 31 October 2025 (UTC)[reply]
Option D – it's a solution that isn't needed. We are already supposed to use English names, or the names used in English per WP:USEENGLISH, so we don't need another policy that largely states the same thing but with an odd caveat that objects without Wikipedia articles should ignore that and instead use a dual name. We already have too many pointlessly precise guidelines where general guidelines already suffice. Traumnovelle (talk) 08:40, 10 November 2025 (UTC)[reply]
"This page was marked as a guideline in 2009 by Oreo Priest after discussion on the talk page and a much more substantial discussion at Talk:Brussels-Capital Region" – I cannot see discussion about marking this page as a guideline on either of those talkpages; could you link to an actual discussion rather than an entire talkpage? Caeciliusinhorto-public (talk) 11:01, 31 October 2025 (UTC)[reply]
This was an informal process of consensus making, which is why I linked the whole talk page, though sections 1, 2, 3 and 4 are the most relevant. The page was drafted by a variety of editors from WikiProject Belgium in 2009. If you are asking for a specific discussion that resulted in the guideline tag being added, that would probably be the Wikipedia talk:WikiProject Belgium/Brussels naming conventions#In conclusion section, which was immediately followed by Oreo Priest's action. If you are looking for a discussion that meets the current expected standard, i.e. WP:PROPOSAL, there is none. Yours, &c.RGloucester — ☎12:15, 31 October 2025 (UTC)[reply]
Which is to say, there was no discussion about marking it as a guideline. There were only comments from people who assumed that of course it was going to be a guideline. WhatamIdoing (talk) 19:01, 31 October 2025 (UTC)[reply]
Wikipedia:Naming conventions (Cyrillic) is not a Wikiproject advice page, but an information page, outside of any Wikiproject's control. It is not normal for a guideline to prescribe that editors should follow Wikiproject advice without any obvious consensus, because Wikiprojects are not rule-making organisations, per WP:PJ. Guidelines may sometimes link to Wikiproject pages as a reference, but that is different from prescribing that one should follow a given project's internal strictures.
Keep in mind, the removal of the guideline tag in this case was premised on 'simplifying our policies and guidelines'. Think of a random editor who encounters the guidance at WP:NCPLACE, or the talk page template above, which prescribes that one should follow the guidance at Wikipedia:WikiProject Belgium/Brussels naming conventions; one then arrives at the page and encounters a template that says that its contents are the mere 'opinion' of a Wikiproject that has not been vetted by consensus. This is beyond confusing, and one will be left wondering: should this guidance be followed or not? This is the opposite of simplification; it is confounding. Yours, &c.RGloucester — ☎00:15, 1 November 2025 (UTC)[reply]
Nope. I removed the {{guideline}} tag, and I did so not out of any desire to 'simplify our policies and guidelines', but solely because tagging it as a guideline was a violation of both the WP:PROPOSAL policy and the WP:PROJPAGE guideline.
It's true that I found the list of violations over at that WikiProject's talk page, but that was only a matter of where I happened to see it; I'd have done the same thing no matter when or where I found out about it. WhatamIdoing (talk) 00:59, 1 November 2025 (UTC)[reply]
I see. What I would point out to you again, as I have done before, is that merely removing or changing a tag without considering the impact of that change on adjacent articles, guidelines, and policies is not very helpful, if the end result is to make our guidelines even more confusing. The point of this RfC is to tidy up what is admittedly a mess, and ensure that there is a clear consensus for any result. No matter which option is adopted, the end result will be a simplification, a clarification, and that is something I think that even you should find laudable. I long for your constructive participation here, as your many years of experience in the topic area will be of great value in reaching a well-reasoned consensus. Yours, &c.RGloucester — ☎01:16, 1 November 2025 (UTC)[reply]
What I would point out to you again, as I have done before, is that changing this tag has no effect whatsoever on any guidelines or policies.
Wikipedia:Policies and guidelines#Content says "Policies and guidelines may contain links to any type of page, including essays and articles". Almost nobody is actually confused when they see that a guideline has linked them to an essay page, probably because almost all experienced editors have banner blindness, and those who don't are used to our practices.
For example, the introduction to WP:V has links to two essays (both of the "supplement" variety), and its first section has links to two "information pages" that "may reflect differing levels of consensus and vetting". The next section has links to four ordinary Wikipedia articles and two essays (one ordinary and one of the "supplement" variety). This happens in almost all of our policies and guidelines, and people are not confused by it. If you're genuinely confused by it, then you're confused by basically every policy we have. WhatamIdoing (talk) 01:42, 1 November 2025 (UTC)[reply]
As for reaching consensus: I don't actually care what the page's title ends up being or what it gets tagged with – so long as it isn't a {{guideline}} that implies it's WP:OWNED by any WikiProject.
IMO the only actual {{guideline}} that can have "WikiProject" in its name is WP:PROJGUIDE, and that's because WP:COUNCIL is a bit more like a weird meta-noticeboard for people trying to organize groups than like a real WikiProject. (Even then, if PROJGUIDE got moved to another title, that wouldn't break my heart.) WhatamIdoing (talk) 01:48, 1 November 2025 (UTC)[reply]
I agree with you entirely that Wikiprojects should not have any control over any guidelines, and this is a position I have consistently held in any discussion on the subject. However, there is nothing to be gained from narrowly focusing on the title of the page or procedural concerns without considering the page's actual value or function. As for 'links', yes, many guidelines and policies link or reference essays, as I said above. The issue is not a link or reference, but the guidelines' current prescription that editors should follow what is now tagged as a 'Wikiproject advice' page. This is clearly irregular, as it is basically delegating rule-making authority to a Wikiproject, something that is out of line with WP:CONLEVEL. Yours, &c.RGloucester — ☎04:13, 1 November 2025 (UTC)[reply]
It is appropriate and helpful to take corrective action and remove the guideline template from any page which is not a guideline. Recognition must be denied to the status quo to begin with. That is because a lack of consensus to "demote" the false guideline is not an acceptable outcome. Instead, the falsehood that a given page is a guideline needs to be addressed, and then the same page may be made into a guideline, or it may not become a guideline, and both of these outcomes are acceptable—whereas maintaining the falsehood that a page that is not a guideline is a guideline because of a lack of consensus to correct the falsehood is not acceptable. —Alalch E.13:32, 1 November 2025 (UTC)[reply]
I agree with Alalch, and I add that it's not "clearly irregular" to recommend good advice, no matter where it's found. For example, Wikipedia:Manual of Style/Medicine-related articles recommends advice pages from three different WikiProjects, and the absence of the exact word should in those sentences in no way lessens the recommendation about where to find the specialist advice. WhatamIdoing (talk) 17:29, 1 November 2025 (UTC)[reply]
If a guideline tag has been stable for more than ten years, that in and of itself is a form of consensus, per WP:EDITCON, though as WP:PROPOSAL says the tag itself does not grant guideline status. Whether the community wants the page to actually be a guideline or not can only be properly assessed in an RfC, and that is what is being done here. This incredibly narrow focus on the tag itself is bizarre, because Wikipedia is not a bureaucracy. You say that it is not 'irregular' to recommend 'good advice', but have not bothered to consider whether this page actually is 'good' advice, never mind that the page is written as if it were a guideline, and never mind that an actual guideline references the page not merely as a recommendation, but as almost mandatory, excluding the usual IAR exceptions, and that numerous Brussels-related pages currently have a talk page template that specifies that editors should follow this page. I understand what you are trying to do, but please consider the impact on actual articles. This is an encyclopaedia, and these sorts of pages don't exist in a vacuum. They only exist inasmuch as they help us build an encyclopaedia, and that is where your thoughts should go, not to some legalistic understanding of the meaning of the word 'guideline'. Yours, &c.RGloucester — ☎04:25, 2 November 2025 (UTC)[reply]
The word should does not mean "almost mandatory".
I am not particularly concerned about whether this page offers good advice. I assume that it does, but I don't really care whether it does, and I will not spend my time figuring out whether it does.
What I care about is whether the process for tagging the page was improper (answer: yes) in a way that misleads ordinary editors into thinking that it was actually vetted by the community (answer: yes) instead of being advice put together by a small group of editors (answer: yes). I fixed the misleading and procedurally improper parts. You may find it better to describe my focus in this process as bureaucratic rather than bizarre.
If you want to make a WP:PROPOSAL or otherwise pick an arrangement that is procedurally proper and results in a non-misleading status for the page, then be bold! But my chosen role doesn't extend to that point. I'm here for the "not wrongly marked as a community-wide guideline" part. What it ends up getting marked as is not important to me, so long as the result is not wrongly marked as a community-wide guideline.
I just stumbled upon this dusty old part of Wikipedia. While Category:Bibliographies by writer (i.e. lists of works of a particular writer) has some merit, I am concerned about Category:Bibliographies by subject, ranging from the more esoteric (Bibliography of hedges and topiary) to major topics like Bibliography of Canada or Bibliography of World War II. There are dozens (hundreds) of articles in the form of "bibliography of Fooian topic", and they are pretty much random laundry lists of books/articles/etc. related to Foo. The criteria are pretty loose; virtually all lists that I checked at best have claims saying "we list only important works" (as decided by whom?). Some larger topics (like countries) cannot be reasonably scratched by a single list. Most don't have any criteria (failing WP:LISTCRIT). This reminds me of MOS:TRIVIA and the dreaded "list of mentions of Foo in popular culture" that we have been steadily deleting at AfD for many years now (see Wikipedia:WikiProject Deletion sorting/Popular culture...); it is just that instead of works of fiction that the "foo" concept appears in, here we have a list of works of non-fiction. Other than that, it's the same principle (see also: WP:INDISCRIMINATE, WP:IPC, WP:NOTTVTROPES). I think these kitchen-and-sink lists are now-useless relics of the past (when we weren't sure what Wikipedia's scope was) that we need to, well, delete. An RfC may be in order, but perhaps we can judge early consensus here. PS. I checked AfD logs; such articles are AfDed with random outcomes. Deleted (hard and soft): Oakland, California, Tirana, psychology. Kept: Thomas Jefferson, American Civil War Union military unit histories, books critical of Islam. Those were the first six results in my search, and the outcome is very much no consensus (3 deleted, 3 kept). Sigh. (I have a feeling that more recent outcomes may be more deletionist, as our standards rise, but I haven't run the data...). Piotr Konieczny aka Prokonsul Piotrus| reply here13:24, 6 November 2025 (UTC)[reply]
I think most bibliographies should be merged or deleted. Articles are welcome to have modestly sized Further Reading sections, but when the list is quite long, as you say, there are no clear criteria for what should be included. We are not an indiscriminate card catalogue of every book in the library. If a book is significant, it should be cited or listed in the main article; beyond that, if there's no indication why a reader should care about a particular set of books that one could find in a variety of search engines and resources, it shouldn't be a standalone article. — Reywas92Talk15:12, 6 November 2025 (UTC)[reply]
It's possible that there are some notable collections of works about a particular subject, but in those cases there should be some prose about the collection that establishes why it is notable and the selection obviously won't be made by Wikipedians. The article title is also less likely to be "Bibliography of <subject>" Thryduulf (talk) 15:32, 6 November 2025 (UTC)[reply]
The only relevant bibliography I have to draw from in this subject is Wikipedia:Articles for deletion/Grmoščica bibliography, which explicitly looks at WP:NLIST (that the collection of works about a subject has to be described as a group independently to meet this notability standard). That seems like a reasonable standard. In this case the criteria are fairly narrow (there are only so many works about this Croatian hill), but I can see this being applied to a larger subject (e.g., "bibliography of the economy of the Maldives", where we might expect to find multiple metatextual works that examine the works examining the economy of the Maldives - review articles?). For an example that actually exists: would it be considered appropriate for an article called bibliography of work-related injuries to exist citing this article? -- Reconrabbit16:28, 6 November 2025 (UTC)[reply]
@Reconrabbit Regarding that example, this is a literature review. Most academic papers do one, though they are almost never thorough. We can find lit reviews on many topics, but I don't think they should be sufficient to say that bibliographic lists are notable. That said, I could see a compromise where each work on our list is cited to a secondary source of that type (a lit review, etc.). As in: "the criterion for inclusion on our list is being cited in a relevant secondary work". Piotr Konieczny aka Prokonsul Piotrus| reply here00:45, 7 November 2025 (UTC)[reply]
I don't see how this has anything in common with trivia sections. They are not any more indiscriminate than any other kind of article content on a broad topic is. They're books about a topic; it's more like further reading than anything, or categories. The fiction case is not analogous, because to be included the topic should be the sole or main part of the work, not at all similar to those lists of individual tropes or what have you; it's more like genres (which we do have lists for, e.g. list of dystopian literature). I would strongly oppose deleting them unless there is an extra problem with the subject (e.g. it's about a topic that is itself non-notable; the group is not discussed in sources (so, e.g., any topic where the group of books about it has been the subject of discussion would pass, which is extremely common for any topic; when looking for sources, I have never found one topic with enough books to sustain a bibliography that did not have some kind of meta aspect on the general circumstance of books about the topic and what they're like); or it's about a topic that has too few writings on it to sustain a bibliography), because they are very useful and I don't see the problem. Bibliographies are regularly part of print encyclopedias on specialized topics, so yes, it is inherently encyclopedic. Why should we be different from print encyclopedias in this regard? PARAKANYAA (talk) 18:29, 6 November 2025 (UTC)[reply]
I don't see a fundamental problem with bibliographies but individual pages may need some attention. Along the lines of what others have said, I can see the utility for somewhat specialized topics where the literature is not extensive but starts to get too long for a typical 'Further reading' section. I do question the selection criteria, especially for enormous topics like Canada. —Myceteae🍄🟫 (talk) 23:04, 6 November 2025 (UTC)[reply]
Including any book, even if it's only books related to a specific topic, is indiscriminate and a violation of policy. If we're going to have articles that are just lists of books, and they aren't split-out bibliographies of an author who is already notable, then we need those lists of random books on a topic to meet one particular criterion to make them discriminate lists. They need to be notable books that already have an article. That should be the criterion we're working with here. SilverserenC00:55, 7 November 2025 (UTC)[reply]
No, it is not. It is not any one of the things listed at WP:INDISCRIMINATE, any more than literally any notable list about any topic that has ever existed on Wikipedia is. Per the list guidelines there is no strict need to limit list contents to notable items unless there is consensus to do so. Also the individual entries being notable would not make any difference about it being indiscriminate, at all. PARAKANYAA (talk) 01:05, 7 November 2025 (UTC)[reply]
If we're making them "lists that reliable sources have also made lists about", that seems just as uncontrolled and random. And, even in such a case, such articles don't follow that requirement anyway as it stands. WhatamIdoing above points out reliable sources on lists of best WWII books. So, we should reduce that bibliography of WWII article to only those listed books, about 50 or so, correct? Should we also WP:SYNTH in any other book that has ever been on such a list? Is every list of books on a topic ever made in a reliable source now subject to having a bibliography article made on it? Is just a single reliable source list good enough? Is there a threshold now? What exactly are the criteria being used to determine inclusion on such a bibliography article? As it stands now, there are no criteria, and it's all books ever, it seems. SilverserenC01:45, 7 November 2025 (UTC)[reply]
It's just as uncontrolled and random as notability is, because notability is determined by the whims of the media and academia. That doesn't mean all notable things are indiscriminate. It is not synth to put a book that says it's about World War II on a list of books about World War II; by this logic, what is not synth? Would it not be equally synth to use that book as a source if another source did not say it was about WWII?
"Is every list of books on a topic ever made in a reliable source now subject to having a bibliography article made on it?" Is every notable topic subject to having an article made on it?
No, it is just to determine the notability of the list. We do not have to use only notability-proving sources for the content of the list, per Wikipedia:Stand-alone lists. Or would you argue that, to put someone on "list of French poets", we require that they be included in a "top 50 French poets" list? Or on a "list of [geographical feature] in X"? Ludicrous.
List of French poets still requires actual secondary reliable sourcing on someone being a French poet, however, or the entry is subject to removal per WP:V. And, no, using the books themselves as primary sources for themselves is not appropriate. SilverserenC01:55, 7 November 2025 (UTC)[reply]
Why not, if WP:V is your concern here? The book is certainly verifiable for that. Of course, you can limit it to only notable works if there is consensus for that, and so it may be useful on broader topics, but per the list guidelines that is not mandatory. And our coverage of books is abysmal; most books people use as sources or in bibliography listings are notable, they just don't have articles yet, because our coverage is terrible. PARAKANYAA (talk) 01:57, 7 November 2025 (UTC)[reply]
It's not true that List of French poets still requires actual secondary reliable sourcing on someone being a French poet. A secondary source for that would have to do some kind of analysis ("Is this person really French? Is their work really poetry? Did they do enough to be called a poet, rather than a writer who sometimes writes a poem? Analyzing it according to the P.O.E.T. model, this paper concludes that this person probably is a French poet...").
I believe they are. I'll agree that some bibliographies should be merged, especially shorter ones for which WP:GNG ("it has been discussed as a group or set by independent reliable sources") cannot be met. But the rule for a standalone bibliography article is simply that bibliographies of the same scope must have been published before. Bibliography of works on Georges Méliès, for example, a featured list, cites two annotated bibliographies of Georges Méliès to establish precedent. Not all bibliographies have bothered establishing precedent. For example, articles like List of bibliographies of works on Catullus, another featured list, do not, but that article could be considered split from Catullus bibliography for space reasons, so it doesn't necessarily need to. Some bibliographies, such as Bibliography of works on Madonna, another featured list, don't establish notability but easily could with a few added lines (it already cites Cowden but not the chapter "Madonna"). Raymond Chandler bibliography, another featured list, establishes notability by citing the monograph Raymond Chandler: A Descriptive Bibliography. Josephine Butler bibliography, another featured list, doesn't bother doing so, but a quick search reveals more bibliographies of Josephine Butler have been published than of any of the aforementioned except Catullus. The same goes for Agatha Christie bibliography, another featured list. Other featured bibliographies (for the curious):
As you can see, they are all author bibliographies. But there are many subject-specific bibliographies whose notability is supported by even more sources. See Bibliography of World War II#Bibliographies, for instance, which is a very incomplete list of bibliographies of WWII. Bibliography of Italy doesn't cite it, but there is a book, Bibliotheca bibliographica italiana (1889), with a 27 page chapter, Bibliografie di bibliografie, devoted entirely to bibliographies of bibliographies of Italy! Any Wikipedia bibliography article with a matching chapter in A List of Bibliographies of Special Subjects (1902) meets WP:NLIST. Bibliographies of many specific topics have also been covered in more than one encyclopedic aspect, such as the "History of bibliography of Subject X", i.e. A History of Bibliographies of Bibliographies (1955). Once a bibliography grows large enough, it can be split into multiple bibliographies, as happened with the Bibliography of WWII. Ⰻⱁⰲⰰⱀⱏ (ⰳⰾ) 02:03, 7 November 2025 (UTC)[reply]
You're correct that discussions aren't strictly votes. When people use templates like these, or simply type "+1", what they're saying is something like "I second the sentiment in this comment" or perhaps "I would have said this, but they beat me to it". You'll generally see these in open discussions where people are simply talking things out and expressing their thoughts. They're not as common if it's a formal poll where people are indicating whether they "Support" or "Oppose" a proposal. Thebiguglyalien (talk) 🛸22:31, 7 November 2025 (UTC)[reply]
Sometimes it's better to do it like that than spend unnecessary time saying the same thing in different words just for the sake of fostering discussion. Katzrockso (talk) 01:04, 9 November 2025 (UTC)[reply]
Suppose there is a clear academic consensus on a topic - all the academic sources agree with a certain position. However, there are no sources that state that this is the academic consensus such as literature reviews. What can and cannot Wikipedia say about academic opinion in this situation? Eldomtom2 (talk) 18:21, 8 November 2025 (UTC)[reply]
But if there is no dissent on the matter, it should just be stated as a fact - no need to include whether it's the academic consensus. If no sources say that planets are flat, then Planet just says "A planet is a large, rounded astronomical body ..." So it will depend on the specific situation. -- LCU ActivelyDisinterested«@» °∆t°19:16, 8 November 2025 (UTC)[reply]
That's likely too specific a question, and best discussed at each article's talk page. Is it misreported in a news article due to not understanding the complexity of the issue, or do non-academics disagree due to an issue with language (academics tend to use language very specifically, which at times can conflict with common usage), or is it one of many other issues? -- LCU ActivelyDisinterested«@» °∆t°13:25, 9 November 2025 (UTC)[reply]
That dissent has not been expressed doesn't mean it doesn't exist. When there isn't a positive statement of "scientific consensus", there are better ways to word things that don't invoke a consensus in a way that falls afoul of WP:RS/AC. Katzrockso (talk) 01:02, 9 November 2025 (UTC)[reply]
If everyone says something is a fact, then state it as a fact. To do otherwise would be against basic NPOV. Absolutely don't say that something is the scientific consensus if no sources say that; instead, just state it as fact if no dissent exists. If something doesn't exist in reliable sources because dissent has never been expressed, then to include or balance content based on something that doesn't exist in the sources is not NPOV. -- LCU ActivelyDisinterested«@» °∆t°13:22, 9 November 2025 (UTC)[reply]
That is exactly how we get in trouble with neutrality (and there's a very recent issue that this would apply to). We should not be trying to arbitrate what the truth is, particularly if what's at stake is still ongoing or very recent. Even if all reliable sources say only one stance, we shouldn't assume that's the truth and treat it as a fact until well after the dust has settled, when we can review sources at a far distance from the event and make a better judgement. Just because no RS discusses opposition to an idea doesn't mean the opposition doesn't exist, and in the short term we shouldn't be jumping to conclusions, particularly if we know that there is such opposition to some degree that is not covered in RSes.
In the hard sciences, there can be theories where there is no disagreement among reliable sources that the theory is true, but we still present it as a theory and not a hard fact if it's clear there are still other possible explanations, or that the sources cannot absolutely prove the truth but have found nothing to deny it. This type of attitude needs to apply to the rest of our coverage. Masem (t) 13:35, 9 November 2025 (UTC)[reply]
If something does not exist in a source, then including it in content, or in weighing content, is not neutral. Arguing that we should say that some sources state that planets are round is nonsense. By including something that doesn't exist in sources, editors are injecting their own opinions on the TRUTH. -- LCU ActivelyDisinterested«@» °∆t°13:56, 9 November 2025 (UTC)[reply]
Obviously, if no opposing view exists in RSes to discuss it, we can't include it, but the absence of that type of information does not automatically make the view covered by RSes the truth to be said in Wikivoice, particularly if it is something that cannot be proven or is highly subjective and contentious. Documenting the prevailing view outside wikivoice (with attribution) when we as editors see that it could be taken as contentious (like, in the midst of an ongoing event or in its immediate wake) does zero harm and keeps us neutral.
The "flat earth" issue is not a good example, because we have decades/centuries of proven evidence that the earth is spherical and thus can readily justify the use of Wikivoice to say its round and not flat. That's a clear case where FRINGE applies, and the documentation of "flat earth" is mainly due to coverage of groups that insist that. On the other hand, the origin of COVID is a prime example. The prevailing theory is that it did not come from the lab, the lab theory rejected by the bulk of reliable sources, but yet we still report COVID being zootrophic in nature as the prevailing theory. Maybe it will take a decade, or a century, before we can flip that to being factually the zootrophic origin, but we can't do that now, and thus take it out of Wikivoice. Masem (t) 14:04, 9 November 2025 (UTC)[reply]
But I've not argued for stating more than what exists in sources; if the sources state it as a theory, then follow the sources. But if all sources state it as fact, then not stating it as fact is against NPOV. Stating it as fact when all sources state it as theory is just as bad. The flat earth argument works for the former, the zoonotic origin of COVID for the latter. But in each case, stating something that is not in the sources is bullshit. -- LCU ActivelyDisinterested«@» °∆t°14:15, 9 November 2025 (UTC)[reply]
Taking something out of Wikivoice and adding some type of attribution is not "stating something that is not in the sources"; it's simply writing in a far more distant, neutral tone, and it requires common sense to consider, not blind adherence to the sources and nothing else. Masem (t) 14:19, 9 November 2025 (UTC)[reply]
That argument is saying that we should attribute the earth being round. Absolute adherence to 'everything must be attributed' is not sustainable as an argument. -- LCU ActivelyDisinterested«@» °∆t°14:21, 9 November 2025 (UTC)[reply]
Depends on the article… in our article on Flat Earth, where we are comparing the claims of various flat earth proponents to the scientific consensus, it does make sense to attribute the various viewpoints (so readers know who says what). In our article on Earth, we can omit the fringe claims of flat earth proponents and simply state that the Earth is globular in Wikivoice. Blueboar (talk) 14:40, 9 November 2025 (UTC)[reply]
Also it is not always more neutral. Adding attribution, when no source states it as anything but fact, is not more neutral - it is adding your own opinion. It's not about blindly following sources but about rejecting your own feelings about a subject. -- LCU ActivelyDisinterested«@» °∆t°14:31, 9 November 2025 (UTC)[reply]
Per RS/AC, it shouldn't be stated in Wikivoice that there is an academic consensus on a subject if there isn't an RS stating that explicitly. However, if there is no dissent or countering claim in RS, and all of the available RS, particularly academic sources, state one position, then you can (and probably should) state whatever the fact is in Wikivoice directly. To prevaricate on this sort of thing and make it only a list of "such-and-such says" opinion statements is absolutely not a proper way to showcase NPOV on whatever the subject is. No RS dissent means you can state the thing as a fact. Because it is a fact. Because there are no RS statements to the contrary. SilverserenC22:39, 9 November 2025 (UTC)[reply]
Generally, academic sources 100% trump news sources, which can be reliable for general information, but not if they conflict with what all of the academic sources say. We wouldn't, for example, use news sources that are credulous toward anti-vaccine stances to then claim there isn't a 100% stance of safety on the subject of vaccines, as the academic sources represent. I feel like if we're ever in a situation where all of the academic sources that exist have one stance and there are news sources claiming otherwise, the news sources should be counted as discredited on that specific topic. SilverserenC01:24, 10 November 2025 (UTC)[reply]
I think that holds for hard sciences, on questions for which there is generally one correct answer and the others are false, but in other situations, we really need to look at whether an "academic only" rule improperly excludes viewpoints in violation of WP:YESPOV. Particularly in the social sciences, it's possible for academic sources to disagree with (e.g.,) political sources or financial sources, and since those non-academic views have real-world consequences, it would be non-neutral for the Wikipedia article to report only one viewpoint. WhatamIdoing (talk) 02:58, 10 November 2025 (UTC)[reply]
I don't see why political or financial topics would be any different. If a political or financial event is covered in academic sources as meaning or representing one particular thing and some news sources claimed otherwise, we'd still consider the news sources to not be superior to the academic sources. In fact, we'd be likely to consider the news sources to be actively and likely purposefully biased in their representation of the topic because of that contradiction with the academic sources. News sources, on the whole, are not only not experts on the topics they cover, they're also often misinformed, credulous, and focused on breaking news rather than factual analysis. We use them in the interim when they are the best sources available, but once academic sources are written on a subject, they supersede those news sources completely. As is appropriate, because random news journalists do not trump actual academic analysis of factual reality. SilverserenC03:12, 10 November 2025 (UTC)[reply]
The problem is when academic sources don't cover a significant viewpoint. In the best-case scenario, we have:
Scholarly sources talk about their abstract viewpoint ("This will produce valuable social benefits")
Scholarly sources talk about the political viewpoint ("I oppose this because I want to be re-elected")
Scholarly sources talk about the financial viewpoint ("This could cause many small businesses to fail")
But sometimes we only have this:
Scholarly sources talk about their abstract viewpoint ("This will produce valuable social benefits")
Political sources talk about the political viewpoint ("I oppose this because I want to be re-elected")
Business news talk about the financial viewpoint ("This could cause many small businesses to fail")
In that latter case, it's not always reasonable to exclude significant viewpoints (which YESPOV says we need to include) just because the peer-review cycle hasn't gotten around to describing those non-academic POVs yet. WhatamIdoing (talk) 03:33, 10 November 2025 (UTC)[reply]
From what I'm seeing on this particular article in question based on the talk page discussions, the academic sources do address the "is it just a moral panic" question though. They frequently and often do and all state that, yes, it is a moral panic and not substantive in terms of being a real thing. That is addressing the political viewpoint. Just because political news sources would like to claim otherwise doesn't mean the academic sources very clearly all stating the opposite don't count on that aspect of the topic. SilverserenC03:40, 10 November 2025 (UTC)[reply]
Wikipedia covers such a broad range of topics that it's hard to state any one-size-fits-all rules. Generally speaking, as WP:BESTSOURCES states, basing content on the best respected and most authoritative reliable sources helps to prevent bias, undue weight, and other NPOV disagreements, and as WP:SOURCETYPES says, When available, academic and peer-reviewed publications, scholarly monographs, and textbooks are usually the most reliable sources. Putting those two together, if a topic is thoroughly covered by academic sources, the Wikipedia article on that topic should be based on those sources. So if you can write a whole article using only academic sources, ignore other sources. Take, for example, topics like quantum mechanics, democracy, or climate change: it really doesn't matter what non-academics have to say about any of those topics; the academic sources cover it.
If all the best sources say X, then Wikipedia should just say X, as WP:WIKIVOICE explains: Uncontested and uncontroversial factual assertions made by reliable sources should normally be directly stated in Wikipedia's voice, for example 'the sky is blue' not '[name of source] believes the sky is blue.' Unless a topic specifically deals with a disagreement over otherwise uncontested information, there is no need for specific attribution for the assertion, although it is helpful to add a reference link to the source in support of verifiability. Further, the passage should not be worded in any way that makes it appear to be contested.
Putting it all together, to answer your original query, if all the academic sources say X, and non-academic sources say not-X, Wikipedia should generally just say X, directly in Wikivoice, and the non-academic sources should be ignored. If not-X were a significant minority view, the academic sources would cover it; if they don't, it means the view is not significant.
Of course, as always, there will be exceptions. The most common is breaking news: academic sources may become outdated, sometimes suddenly, in which case we must rely on non-academic news sources to keep the article current. There were times in history when all the academic sources would have said that the USSR existed, or that no one had ever stepped on the moon, and they all would have been wrong, because they would have been superseded by recent events (until new academic sources were written).
There are also some topics where academic sources really don't cover the entire topic well or in an up-to-date manner even though they are available (e.g. video games, professional wrestling, music, film, sports, art, etc.). In such topic areas, academic sources may not be enough. That's why you need human editors to make case-by-case judgments. But, generally, prefer scholarship over non-scholarship. Some people say scholars shouldn't be weighed more than, say, governments or political commentators, but those people are wrong. Scholars will be the most reliable sources for almost every topic covered by scholarship. Levivich (talk) 23:38, 10 November 2025 (UTC)[reply]
The phrase if a topic is thoroughly covered by academic sources is the key one. Most topics are not thoroughly covered by academic sources, and sometimes editors seem to call for 'academic sources only' for the purpose of excluding POVs they disagree with. It's all well and good to say that 'only academic' voices matter for Climate change, but politicians and businesses, rather than professors, are the ones who control how bad it gets. WhatamIdoing (talk) 18:10, 11 November 2025 (UTC)[reply]
User:Silver seren, to put it bluntly, I agree with you. Academic sources are frequently and explicitly written with political intent and are not neutral observers. Pretending that they are and regurgitating their politics creates issues. But it looks like the consensus says that's what we must do, so I have rewritten Grooming gangs scandal accordingly.--Eldomtom2 (talk) 00:18, 11 November 2025 (UTC)[reply]
The discussion on this page doesn't apply here: [2]. We've got a topic that originally hit the news when the first criminal case led to a conviction, and news coverage was criticized AT THAT TIME by academics as 'it almost never happens, so people's concern is out of proportion to the event'. You're arguing to go against WP:LABEL and put "moral panic" in the lead, in wiki voice, which is not the content of what was said in those articles: [3]. Over the years, as the convictions have increased, more widespread coverage has happened, multiple government reports were written, the topic went international, academics wrote that the previous academics were wrong (which I provided evidence for), academics don't describe it that way any more (like the book from Oxford University that describes it as a failure of government), and those early academics moved on to new topics and just stopped writing about this one. I absolutely reject these changes and challenge the logic you've used to get there. Denaar (talk) 03:35, 11 November 2025 (UTC)[reply]
Dear WikiCleanerMan, the Government of the People's Republic of China, as you probably know much better than me, was proclaimed in 1949. The name "People's Republic of China" has been widely accepted by other governments, and the PRC was admitted to the United Nations under that name in the early 1970s. In my specialist field, invariably the references are to the "People's Republic of China", including Category:Military units and formations of the People's Republic of China. For example, the US DOD publishes a document known as the "Military and Security Developments involving the People's Republic of China."
You have started to try to change a large number of categories involving these terms. Before I start an RfC and/or an ARBCOM case over what appears to be a massive exaggeration of the commonality of the term, would you kindly present *why* you believe "China" has now been widely enough adopted as the *common* name of the People's Republic of China? I should say, I do not believe that having the main article at "China" creates enough precedent to change all the terms associated with the People's Liberation Army. Kind regards, Buckshot06(talk)00:25, 9 November 2025 (UTC)[reply]
To date, I've been of the understanding that non-WMF open wikis, such as many of those on Fandom, Conservapedia, and RationalWiki, are almost never appropriate in external link sections, but there is a key loophole for those with a substantial history of stability and a substantial number of editors. The problem is that there's little to go by as to what constitutes a substantial history of stability. For example, I'm inclined to think many popular Fandom sites like Wookiepedia, the Marvel Database, The Sims Wiki (Fandom version), as well as maybe Nookipedia.com, might fit the bill in that they're wikis with a lot of specialized content that would never pass for inclusion at Wikipedia, they have active communities, and there's not a tremendous amount of in-fighting or external attacks. RationalWiki, Conservapedia, Metapedia, Encyclopedia Dramatica, and poorly maintained wikis on Fandom, on the other hand, seem like sites that should never be included in an external link section, with the exception of the articles on those subjects, due to a history of serious in-fighting and serious external attacks from vandals and worse on the WP:OPENWIKI side, in addition to questionable content quality. I've found my removal of the latter sites challenged at least twice, and I think the main reason is disagreement over what a substantial history of stability looks like. What looks obvious to me apparently is not as obvious to others, or I'm just wrong in interpretation.
On the other side of this, I've noticed the content quality side of WP:ELNO leaves a little to be desired. For example, point number 2 would seem to exclude the obvious, such as deprecated sites and obvious fake news sites, but what about a link to a page about Chuck Schumer on GOP.com on his article, or a page about Donald Trump on Democrats.org on his article? What about a link to RationalWiki on Clarence Thomas's article? WP:OPENWIKI aside, it seems like common sense for an encyclopedia adhering to WP:NPOV to NOT link to RationalWiki on any article other than the site's own article (and possibly one about rationalism, if there were consensus that it passed WP:OPENWIKI), just as it seems like it would be common sense to not link to sites owned by political parties except for those parties' own articles and possibly articles directly related to that party's members. Yet there is nothing in WP:ELNO addressing bias or any other quality factor other than not to link to sites that publish blatantly bogus content or personal sites such as social media, and it seems like addressing this directly could reduce disputes over the matter (and not just in consideration of RationalWiki).
So to both points, I'd like to propose we amend WP:OPENWIKI to clarify exactly what constitutes "significant stability" and also add a new point addressing non-neutral external links, not necessarily requiring external links to perfectly follow WP:NPOV as we do, but to address sites with an obvious agenda to push, such as GOP.com, democrats.org, moveon.org, sites related to the TEA Party movement, RationalWiki, Conservapedia, Planned Parenthood's websites, etc. Obviously there are places where links to those sites are appropriate, but in my opinion their use should be limited.
PCHS Pirate Alumnus (talk) 01:25, 9 November 2025 (UTC)[reply]
Life is too complex for rules to be able to dictate what should happen in every circumstance. What is best for the encyclopedia has to be argued over for individual cases. I haven't looked at what RationalWiki has to say about Clarence Thomas, but I assume it would be an obvious fail of WP:EL. At any rate, there can't be a good definition of what external links are suitable other than what is at WP:EL. Johnuniq (talk) 02:05, 9 November 2025 (UTC)[reply]
Our goal: Send readers to pages that contain content that will interest them. Do not send readers to pages that will be worthless, a mess, defunct, usurped by that gambling company, etc.
First decision: Are you linking a single page (https://starwars.fandom.com/wiki/Darth_Bane to go in Darth Bane) or the whole site (https://starwars.fandom.com/ to go in Star Wars)? If the first, make sure that the individual page is worth reading. If the second, make sure that the landing page is good (e.g., informative) and that the site contains information (e.g., pictures) that you think a reader would be interested in. Don't send our readers to lousy pages. Every external link in every article should be justifiable, regardless of whether it's a wiki or some other kind of page.
Second decision: Is this a site you can trust to be in good shape in the coming months and years? Some little place that nobody's editing is not (e.g., because low participation means spam or vandalism isn't likely to get caught quickly), and we don't want to send our readers to a page that's at risk of being vandalized or spammed without anyone noticing. Consequently, we're looking for "a substantial history of stability" (if they've managed to keep good pages for a while, they'll probably manage in the future) and "a substantial number of editors" (more eyes on the wiki = lower risk of spam and vandalism going unreverted).
If it looks like a good page and a solid site, then consider linking it. Otherwise, don't. And when editors disagree about the specifics, then consensus is king.
If you personally need numbers, then look for a wiki that has been open for a couple of years (no brand-new groups, because the failure rate is high in the early days; no groups that were obviously taken over by another recently) and edits in Special:RecentChanges (or the equivalent for non-MediaWiki software) from at least 30 registered accounts in the last 30 days (=at least one person a day). But I hope you can use good editorial judgement instead of simplistic numbers. WhatamIdoing (talk) 02:34, 9 November 2025 (UTC)[reply]
a webpage arguing that the umpire's call was correct and the Kansas City Royals deserved to win and
a webpage arguing that the call was wrong and the St. Louis Cardinals should have won,
but not:
a webpage arguing that secret government research caused a wormhole to open up the multiverse and the umpire's call was correct in an alternate universe.
This request for comment proposes deprecating the Associated Press Stylebook as a naming authority within WP:USPLACE. The current guideline ties certain U.S. city article titles to whether the AP Stylebook lists them as not requiring a state name, a practice that dates back to Wikipedia’s early years. However, this external dependency conflicts with Wikipedia’s self-governed policy hierarchy and with the way other countries’ naming conventions are structured. No other national convention relies on an outside publication to determine article titles. This discussion invites editors to consider whether Wikipedia should instead base U.S. city naming solely on internal principles such as WP:TITLE, WP:COMMONNAME, and WP:PRIMARYTOPIC, supported by verifiable usage data such as pageviews and clickstreams.
Proposal
Deprecate the Associated Press Stylebook as a naming authority within WP:USPLACE. Future decisions about the inclusion or omission of state names in U.S. city article titles should be based solely on Wikipedia’s internal policies and verifiable usage evidence.
Replace the existing paragraph:
"Cities listed in the AP Stylebook as not requiring the state modifier in newspaper articles have their articles named 'City' unless they are not the primary topic for that name."
with:
"Cities are titled by the most common and unambiguous name used by readers and reliable sources, in accordance with WP:TITLE and WP:PRIMARYTOPIC. The inclusion or omission of a state name is determined by actual disambiguation need, not by external style guides.""
Add an explanatory note:
"References to the AP Stylebook in earlier versions of this guideline are deprecated. Wikipedia naming conventions should rely on internal policy and verifiable data, such as reader behavior or reliable source usage, rather than on external editorial manuals."
Background
The current wording of WP:USPLACE incorporates the Associated Press Stylebook as part of its reasoning for which United States cities are exempt from the “Placename, State” format. This reliance on an external publication is unusual within Wikipedia’s system of self-contained policies and guidelines. Other country-specific naming conventions (for example WP:UKPLACE, WP:CANPLACE, WP:NCAUST, WP:NCIND) rely only on internal policy principles such as WP:TITLE, WP:COMMONNAME, and WP:PRIMARYTOPIC.
Rationale
The AP Stylebook was created for journalistic brevity, not encyclopedic clarity. Wikipedia’s naming standards are designed for reliability and reader intent, not for newspaper copy.
No other country’s naming convention cites an external editorial manual as authority. The United States should not be an exception.
The AP list of cities without state modifiers is dated and arbitrary, reflecting mid-20th-century newspaper familiarity rather than modern global recognition.
Wikimedia’s pageview and clickstream data provide transparent, empirical evidence of what readers mean when they search for a city name.
This change aligns WP:USPLACE with WP:TITLE and WP:PRIMARYTOPIC, ensuring that the same principles apply worldwide.
Intended outcome
Consensus to remove or deprecate references to the Associated Press Stylebook from WP:USPLACE and clarify that U.S. city naming follows the same internally governed, data-based principles used for other countries. TrueCRaysball💬|✏️18:07, 10 November 2025 (UTC)[reply]
I strongly oppose something as broad as The inclusion or omission of a state name is determined by actual disambiguation need, not by external style guides. While I may agree with the principle that we needn't rely specifically on only the AP for which cities have standalone names, I believe nearly all US cities should still include the state name in the title, even if the city is the primary topic for that name or disambiguation isn't needed. Even if we could retain our discretion to deviate from the AP in particular in some circumstances, I see no issue with the current practice and this method helps avoid pointless move debates while maintaining consistency. I'd rather extend this practice of including a state name in the title to other countries, rather than the other way around. Reywas92Talk18:31, 10 November 2025 (UTC)[reply]
Isn’t that the entire point of a Village Pump discussion? To craft something better that we can all agree to through consensus? The AP standard is written for journalists, not encyclopedias, and in my view it has no place in our naming conventions. TrueCRaysball💬|✏️19:21, 10 November 2025 (UTC)[reply]
I've shared my opinion, others are welcome to contribute. I see no strong reason to change the current consensus, and even if the wording were changed not to prioritize just the AP, I strongly believe we should not start proposing to remove state names from other titles, which would be a huge waste of effort over something that works fine as it is. Reywas92Talk19:31, 10 November 2025 (UTC)[reply]
Oppose per Reywas. This reads like a solution in search of a problem. I have no objection to deviating from the AP in individual cases if someone can demonstrate a benefit from doing so, but as a general rule everything is working fine as it stands and I see no benefit to changing it after this many years without problems. Thryduulf (talk) 19:51, 10 November 2025 (UTC)[reply]
Oppose – There is no evidence of a problem with the existing scheme. It is clear, a long-standing consensus, and based on a reliable source. Implementing this change will result in the need to reconsider the article titles of thousands of pages, for no good reason, resulting in a waste of valuable editor time. See WP:TITLECHANGES and WP:BROKE. What will the reader gain from this change? As far as I can see, nothing. If the text of the guideline needs to be rewritten, that can be arranged: WP:CONSISTENT is one element of our article titles' criteria. As mentioned above, it is already possible to deviate from this guideline when consensus exists. Yours, &c.RGloucester — ☎00:32, 11 November 2025 (UTC)[reply]
Oppose Regardless of its intent, the AP Stylebook is still reputable, and our usage of it to help inform our guidelines, as others have stated, has not caused any issues as far as I'm aware. Lazman321 (talk) 04:08, 11 November 2025 (UTC)[reply]
Comment - Several of the opposes here rely on "if it ain’t broke, don’t fix it" reasoning or the assumption that editors can already make exceptions. However, that ignores the reality of how this actually functions in practice.
Every city move discussion in the United States is automatically opposed or SNOW-closed on the basis of WP:USPLACE, even when strong evidence and consensus-building attempts are presented. That means editors cannot meaningfully discuss exceptions. The policy itself shuts down the conversation before it can happen. My own RM of Orlando, Florida from last year is one of many examples.
Additionally, the claim that "it works fine" does not hold up when data says otherwise. Clickstream analytics show that thousands of readers type terms like "Orlando" expecting to reach the Florida city, only to land on a disambiguation page and have to click through. That is, by definition, a navigation failure. It proves the system is broken for readers, not just editors.
The workload objection is also a red herring. A simple "grandfather clause" could apply: existing titles remain until a discussion is individually initiated. No one is proposing a mass retitling campaign.
Finally, the AP Stylebook is written for journalists, not encyclopedias. Its inclusion in our naming conventions has no policy basis and should not function as an unchallengeable authority. We have robust internal guidelines like WP:COMMONNAME and WP:PRIMARYTOPIC that already handle naming consistently and logically without relying on external style manuals. TrueCRaysball💬|✏️04:46, 11 November 2025 (UTC)[reply]
That your proposed move was rejected does not indicate that anything is amiss with the guideline. What it means was that you failed to provide persuasive evidence of a 'good reason' to change the article title per WP:TITLECHANGES. In fact, in that RM, you failed to provide any evidence to support your claims, at all. I can see that you are now engaging with empirical data, such as Clickstream analytics. If you think you can make a better case now per WP:PRIMARYTOPIC, you are free to open a new RM discussion. Yours, &c.RGloucester — ☎05:46, 11 November 2025 (UTC)[reply]
I think the current guidelines would suggest that the proper RM if you're right about PTOPIC would be Orlando → Orlando (disambiguation), with Orlando turned into a redirect to Orlando, Florida. That way all the readers expecting to reach the city will get there right away, and a hatnote at the city page could send confused readers back to the dab page. It looks like this was last discussed here in May and there was consensus that the city is not the primary topic. Firefangledfeathers (talk / contribs) 14:29, 11 November 2025 (UTC)[reply]
Replying here since I realize my oppose was a little snippy and I think this comment makes it clearer what you are getting at. My understanding is that you feel that WP:USPLACE is causing undue knee-jerk opposes to RMs like Orlando, Florida -> Orlando that you think would benefit the wiki. But the actual RFC reads like you asked ChatGPT "write me an RFC that will stop wiki editors from using WP:USPLACE to oppose my RM". That's probably why this RFC is getting so many opposes - we don't like having our time wasted. It would be more helpful to present clearer arguments at your RM next time (maybe share some of this clickstream data you mention). -- LWGtalk17:32, 14 November 2025 (UTC)[reply]
Oppose I think there is benefit from nearly all US places having the state added. We also benefit from not discussing (too often) which cities should or shouldn't be exempted, which would definitely happen more if we pull in the list locally. I'd be more likely to support removing the AP list exemption and move the 29 cities to names with states. As mentioned above, primary redirects could still exist for cities whose names are the primary topic for that term. Skynxnex (talk) 19:10, 11 November 2025 (UTC)[reply]
One, no one is suggesting removing the "city, state" format. I suggested moving the standard to internal review/consensus for which use the state and which don't instead of relying on an external style guide. Two, the latter suggestion only makes sense if you're gonna do that with every country that also is broken down into counties or states, or even just a simple "city, country" format. Consistency is key here and is the entire premise of my starting this RfC. TrueCRaysball💬|✏️20:33, 11 November 2025 (UTC)[reply]
I never said anyone was proposing removing the City, State format. But given we have only 29 localities special cased currently (DC is its own thing), to me the implication is very strongly that this proposal is to allow more places to be named by just their name without state added.
I don't think that all countries need to have consistent rules for populated places. I think the US model might be good to apply to places like Canada and Australia (maybe others?) where the state-level subdivision matters more than in some countries. But in some places I believe it's generally seen as less of a part of the identity/name of the populated place. I think consistency within a country is more important, which is why I idly mentioned it, both as a reason to oppose this and maybe as a way to weigh people's willingness to rename things like Cleveland to Cleveland, Ohio. I doubt that is likely at this time.
I think you providing some examples of specific US place article titles that would be improved by this change may be helpful. But Myceteae's comment describing reasons why the status quo is probably better helps make specific examples somewhat unneeded. Skynxnex (talk) 01:57, 12 November 2025 (UTC)[reply]
Oppose. The current guidance is not broken and does not need fixing. Appealing to an external style guide is not inherently at odds with WP policy and practice. Much of the content in our naming conventions and MOS reflects and is consistent with external style guides and accepted conventions, even when these are not explicitly cited. Furthermore, consensus to adopt a particular external standard is valid. We do this explicitly in several places, such as (the admittedly controversial) MOS:FRENCHCAPS, and numerous naming conventions that refer to specific authoritative bodies to source appropriate article titles, such as WP:NCFILM and WP:MEDTITLE. The whole section WP:USPLACE does incorporate local (US) customs, as does the entirety of Wikipedia:Naming conventions (geographic names). This does result in discrepancies between how cities in different countries are handled, especially in English-speaking regions where WP:ENGVAR considerations prevail. The AP Style guidance is authoritative, appropriate, and represents a specific application of broader guidance like WP:COMMONNAME to a particular subject area. Referring to a respected external source simplifies decision-making, harmonizes article titles, and prevents endless battles about when to drop the state. —Myceteae🍄🟫 (talk) 00:06, 12 November 2025 (UTC)[reply]
This RfC fundamentally misunderstands how USPLACE operates. I don't know if it is a misreading of the guideline or something to do with an llm, but it is backwards. USPLACE ignores WP:PRIMARYTOPIC, setting the standard as "Place, State". The AP-exceptions are the only place where WP:PRIMARYTOPIC is considered. The proposed change leads to the opposite impact that the rationale seems to want, so I suggest the RfC is closed as it cannot as proposed actually lead to a consensus for change. CMD (talk) 04:09, 12 November 2025 (UTC)[reply]
Oppose I agree with RGloucester that this would lead to a waste of editor time for little to no benefit to readers, with Myceteae that there is no procedural problem with the current situation, and with CMD that this RFC doesn't seem to have a coherent purpose. -- LWGtalk16:00, 14 November 2025 (UTC)[reply]
Sympathize. I agree that the AP Stylebook is a pretty arbitrary way to determine which U.S. cities play by WP:PRIMARYTOPIC and which are exempt. I don't recall how I've !voted in the past, but it does seem like a cleaner solution would be to strike the AP stylebook, and either (1) apply WP:PRIMARYTOPIC as normal, or (2) require City, State for every U.S. city. If the argument is that "City, State" is the dominant convention, then there is no reason to have Baltimore coexist with Nashville, Tennessee. It should be Baltimore, Maryland, with Baltimore as a WP:PRIMARYREDIRECT. Or, allow Nashville as an article title (since it already redirects there). Either way would go a long way to eliminate the perennial move requests and RfCs like this one. The status quo is inherently unstable. But it's also very ingrained in Wiki-world. Dohn joe (talk) 21:49, 14 November 2025 (UTC)[reply]
Just want to get a reality check on this. Basically there's been a kerfuffle involving the BBC and it's now covered in the article above linked. It already had a controversial talk page notice but I've just thought about it and given the allegations are very specifically around:
Coverage of Israel during the ongoing Gaza war, with allegations it was deliberately pro-Palestinian/anti-Israel.
Coverage of trans issues within the UK, with allegations of suppression of "anti-trans" stories.
I'm myself convinced it's very much within the scope of CTs WP:CT/AP, WP:CT/AI, and WP:CT/GG so I've now put the appropriate template on the talk page which displays the ArbCom remedies as better safe than sorry. Obviously given how stringent those remedies are (particularly due to WP:CT/AI) can I just get the thoughts of some other experienced editors as to the suitability/appropriateness of this, lest it be considered overkill by the community. Rambling Rambler (talk) 22:54, 13 November 2025 (UTC)[reply]
I think we're at right result, wrong procedure here. I think you have to be an admin to actually add restrictions to a page under an AE case, and not a normal editor who has involved themselves on the talk page. That said, what's there looks sane, so ideally a passing admin takes it over, logs it at AE, and we all move on with our day. Tazerdadog (talk) 23:22, 13 November 2025 (UTC)[reply]
I don't think you need to be an admin to put the templates on because they're already restrictions in being (i.e. even on a page without a warning, if you edit material that refers to the AI area you're bound to those remedies). Basically from how I understand things to be it's a bit like "signing the Official Secrets Act" in that you don't actually need to sign it before being bound to it, simply that signing it signals you were consciously aware of its restrictions.
I agree with Rambling Rambler in that the restrictions have already been enacted for all pages within the scope of the contentious topic in question. Anyone can post a notice that a page is within scope. There can be disagreement, though, on whether or not a given page (or portion of a page) is within scope. If there is a dispute and a community discussion cannot reach a resolution, then it can be raised as a clarification request at the arbitration enforcement noticeboard. isaacl (talk) 23:36, 13 November 2025 (UTC)[reply]
Well the Montevideo Convention is something that only states in the Americas ever signed up for. Not the rest of the world. And the majority of micronations are not in the Americas. Plus micronations are not recognised internationally as states. And most micronations don't meet the criteria in that they don't have their own territory or population. A couple people coming out to live temporarily in an area does not constitute a population. At the end of the day micronations aren't really a thing, just the fantasies and creative fiction of some people. Canterbury Tailtalk19:38, 14 November 2025 (UTC)[reply]
Why should Wikipedia care in the slightest what 'Micronations' (i.e. almost nobody) wants? This isn't MIcroWiki, and we don't tailor content to suit the wishes of random fantasists. AndyTheGrump (talk) 19:44, 14 November 2025 (UTC)[reply]
WP:NCORP presently states that it is to help "determine whether an organization (commercial or otherwise), or any of its products and services, is a valid subject for a separate Wikipedia article"
Agree - NCORP is meaningless as a standard if it can simply be avoided by turning whatever WP:PROMO article it is you wish to write into a list of the goods/services of the company concerned. It simply does not make sense that you should be able to write an article listing the goods and services of a company (so basically an article about the company) based on local coverage, trade-press, primary news coverage based on press-releases and company announcements, when you are barred from doing so about the company itself or individual goods and services. Basically, there's no reason why we should be able to have an article entitled List of pizzas sold by Phil's Pizza Shop based entirely on press-releases, local news coverage, and trade-press, when Phil's Pizza Shop would be non-notable under such WP:SIRS-failing coverage. Even a straight-forward reading of NCORP, which states that it applies to "any" of an organisation's goods and services, indicates that it was always intended to mean lists of the same. FOARP (talk) 11:05, 15 November 2025 (UTC)[reply]
Disagree. NCORP is unambiguously about prose articles, the relevant standard for lists is WP:NLIST. Multiple people have told you this in multiple different discussions, it's time to drop the stick. Thryduulf (talk) 12:09, 15 November 2025 (UTC)[reply]
Sure, if Phil’s isn’t notable then their pizzas will not be notable. However, what if Phil’s is considered notable? At that point you have to consider why Phil’s is notable.
IF they are notable for their pizza, then a list of their pizzas might be appropriate. However, they might be notable for (say) the architecture of their building… or for some other factor. In which case a list of pizzas is inappropriate.
Apparently this thread was inspired by lists of airline destinations, where the airlines themselves are considered notable (so NCORP is not the issue). The next question is, why are these airlines notable? Are they notable for their destinations? Are they notable for the type of planes they fly? Are they notable for the luxury of their first class service? Etc. Blueboar (talk) 13:23, 15 November 2025 (UTC)[reply]
Disagree The concern that we will have a list of products of non-notable companies is completely hypothetical. It also has an easy compromise solution, allow a list of products of company X only if the company itself passes WP:NCORP. Ultimately, we should prioritize the readers in such discussions. A significant part of them use mobile and benefit from shorter and more to-the-point articles. Stand-alone lists are useful so they don't need to spend additional time navigating the parent article. The readers also won't benefit if we remove most of the entries in Category:Lists of products. Kelob2678 (talk) 13:26, 15 November 2025 (UTC)[reply]
Partial agree Given that NLIST doesn't necessarily require "products made by company X" to be a notable topic, nor company X to be notable if the list is, we still want WP:SIRS (sourcing requirements) from NCORP to be respected if we're just creating a list where individual products may be notable. Even if company X is NCORP-notable, a full list of their offered products or services without SIRS-type sourcing will still be a problem in failing the goal of NCORP, which is to avoid using WP for promotion or business purposes. If there is SIRS-type sourcing for every product, great (this to me would be a case for something like Apple iPhones, which absolutely do not go unnoticed by the general media). But if such a list is heavily relying on only press releases or similar first-party, dependent material, that's not acceptable. Masem (t) 13:34, 15 November 2025 (UTC)[reply]
Close this in favor of Wikipedia:Village pump (policy)/Airport destination lists Do we really need even more wikilawyering from people fighting over that topic? As for the question at hand, WP:NLIST seems the appropriate guideline to follow. If there's any reason that lists of a corporation's products and services can't effectively be handled by WP:NLIST, I doubt we'll find it buried in the airport destination list mess. Anomie⚔15:15, 15 November 2025 (UTC)[reply]
Bad RFC While I appreciate that the proposer does note below what induced them to start this, this feels like a roundabout form of forum shopping to get an answer to one question that he can apply to a different one. This question is a bit vague and does not include a specific proposal regarding language on that page. Anomie makes the right points, though I'll note that airport destination sections are very different from standalone airline destination lists in how they're presented and constructed. Anyway, I disagree and don't think the pages in Category:Lists of products or those the proposer has been nominating necessarily need to be deleted under these grounds. If a corporation is notable, it often makes sense to provide what makes them notable, be that what they manufacture or where they operate. We are generally able to address this kind of listcruft already without this RFC. Reywas92Talk17:05, 15 November 2025 (UTC)[reply]
This RfC is another episode in the saga about airline destination lists. Most of the recent AfDs regarding them were initiated by FOARP[4]. Earlier this year, the community expressed their doubts about whether WP:NOT applies to them[5]. Now, the issue is being pressed from the WP:NCORP perspective. The change discussed here was boldly added to the guideline[6] and was reverted[7]. In response, we got this RfC. As FOARP himself notes, we still have listings of airline services that don't pass either WP:NLIST or WP:NCORP.[8] So why do we even need to subject the lists to WP:NCORP? In my opinion, to make the discussion more focused, it's better to stick to WP:NLIST. Kelob2678 (talk) 13:26, 15 November 2025 (UTC)[reply]
"why do we even need to subject the lists to WP:NCORP" - to avoid WP:PROMO content based entirely on press-releases, local coverage, trade-press etc., just simply written as a list rather than as a prose-article. FOARP (talk) 15:04, 15 November 2025 (UTC)[reply]
The core question here is the “group or set” requirement of NLIST… are airline destinations as a set notable? To answer that, we need to ask: Are there independent reliable sources that discuss the concept of airline destinations as a set? Blueboar (talk) 14:03, 15 November 2025 (UTC)[reply]
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Updates for editors
Administrators will find that Special:MergeHistory is now significantly more flexible about what it can merge. It can now merge sections taken from the middle of the history of the source (rather than only the start) and insert revisions anywhere in the history of the destination page (rather than only the start). [9]
For users with "Automatically subscribe to topics" enabled in their preferences, starting a new topic or adding a reply to an existing topic will now subscribe them to replies to that topic. Previously, this would only happen if the DiscussionTools "Add topic" or "Reply" widgets were used. When DiscussionTools was originally launched existing accounts were not opted in to automatic topic subscriptions, so this change should primarily affect newer accounts and users who have deliberately changed their preferences since that time. [10]
Scribunto modules can now be used to generate SVG images. This can be used to build charts, graphics and other visualizations dynamically through Lua, reducing the need to compose them externally and upload them as files. [11]
Wikimedia sites now provide all anonymous users with the option to enable a dark mode color scheme, featuring light-colored text on a dark background. This enhancement aims to deliver a more enjoyable reading experience, especially in dimly lit environments. [12]
Users with large watchlists have long faced timeouts when editing Special:EditWatchlist. The page now loads entries in smaller sections instead of all at once due to a paging update, allowing everyone to edit their watchlists smoothly. As part of the database update, sorting by expiry has been removed because it was over 100× slower than sorting by title. A community wish has been created to explore alternative ways to restore sort-by-expiry. If this feature is important to you, please support the wish! [13]
View all 31 community-submitted tasks that were resolved last week. For example, the persisting highlighting when using VisualEditor find and replace during a query has been fixed. [14]
Updates for technical contributors
Since 2019 the Wikimedia URL Shortener at https://w.wiki has been available for all Wikimedia wikis to create short links to articles, permalinks, diffs, etc. It is available in the sidebar as "Get shortened URL". There are 30 wikis that also install an older "ShortUrl" extension. The old extension will soon be removed. This means /s/ URLs will not be advertised under article titles via HTML class="title-shortlink". The /s/ URLs will keep working. [15]
On Thursday, October 30, the MediaWiki Interfaces and SRE Service Operations teams began rerouting Action API traffic through a common API gateway. Individual wikis will be updated based on the standard release groups, with total traffic increased over time. This change is expected to be non-breaking and non-disruptive. If any issues are observed, please file a Phabricator ticket to the Service Ops team board.
MediaWiki Train deployments will pause for the final two weeks of 2025: 22 December and 29 December. Backport windows will also pause between Monday, 22 December 2025 and Thursday, 2 January 2026. A backport window is a scheduled time to add things like bug fixes and configuration changes. There are seven deployment trains remaining for 2025. [16]
In 2025, the Wikimedia Foundation reported that AI systems and search engines increasingly use Wikipedia content without driving users to the site, contributing to an 8% drop in human pageviews compared to 2024. After detecting bots disguised as humans, Wikimedia updated its traffic data to reflect this shift. Read more about current user trends on Wikipedia in a Diff blog post.
HTML is a very bad tool to try to communicate the relationships in a graph. SVG isn't much better emitted-wise but would be significantly cleaner for the programmer. I agree with Sapphaline that some of these are clear and obvious rewrite targets ({{climate chart}} is another one). Izno (talk) 19:33, 4 November 2025 (UTC)[reply]
aesthetic - no. accessibility - yes, considering that some of these pseudo-graph templates are still hacking <table>s (I'm talking about template:clade and template:cladogram). include size - I think <img alt="..." src="data:image/svg+xml;base64,..."> is less HTML output than something like
Another reason to prefer SVGs is that they are "real" images that can be copied, exported, downloaded and indexed by search engines. It also contributes almost nothing to PEIS. The <img> tag isn't actually part of post-expand output – it's replaced by a strip marker which adds just about 26-30 bytes to PEIS (regardless of SVG size). – SD0001 (talk) 09:21, 7 November 2025 (UTC)[reply]
From the accessibility point of view, the SVG images don't do anything by themselves. You should think of them like any other image, so they should come with alt text and/or a caption. Often it may be helpful to display the same data in another format elsewhere on the page, e.g. the Climate chart template could generate an SVG chart with a simple HTML/wikitext table below it, perhaps collapsed.
Redoing the existing templates as SVG images without also adding alt text or other alternative presentation would probably be an accessibility regression; although I couldn't make sense of something like the Clade template using a screen reader, I imagine a more experienced user could, and at the very least it's possible to copy and read the text from it (which is no longer possible when the text is inside an image). Redoing them with some alternative presentation would definitely be an improvement. Matma Rextalk21:26, 4 November 2025 (UTC)[reply]
There are two reasons why SVG text is not selectable, and neither is the fault of SVG itself. One reason is that some people use text that is actually just paths and thus not selectable. People should instead use the fonts at meta:SVG_fonts. The other reason is that SVG thumbnails are not shown as SVGs, but as PNGs. As for proof that SVG can have selectable text, see for example https://upload.wikimedia.org/wikipedia/commons/d/d0/IsisPapyrus.svg . There has been some progress on sending SVGs to the browser in the subtasks of phab:T5593. Snævar (talk) 23:32, 4 November 2025 (UTC)[reply]
The Scribunto modules that we're talking about emit SVGs in <img> tags, so the text in them is not selectable anyway. Making them interactive (including text selection) would be covered by T407783, but doing this securely is difficult. Matma Rextalk19:40, 5 November 2025 (UTC)[reply]
That sounds like an easy and useful enhancement, assuming you have some use-cases in mind. It would be analogous to a long-standing feature of {{yesno}}. DMacks (talk) 17:38, 6 November 2025 (UTC)[reply]
Or we could avoid adding complexity to a simple module when {{yesno}} already exists:
{{yesno|{{#invoke:If any equal|main|a|b|c|d|value=c}}|yes=output for yes|no=output for no}} → output for yes
{{yesno|{{#invoke:If any equal|main|a|b|c|d|value=r}}|yes=output for yes|no=output for no}} → output for no
I’m Eliza Blackorby from the WMF’s Reader Growth team. A few weeks ago, WMF posted here about declining pageviews to Wikipedia – that’s what our team is working to address. We want both new and existing readers to return to Wikipedia because they find it a compelling place to learn. Over and over, a top request from readers is that they wish for “more images/photos” on Wikipedia, as demonstrated in surveys of global internet users. As a result, we want to show readers more images and display images in a more enriching way. Our hypothesis is that by making it easier to explore images already in articles, readers may find Wikipedia more engaging and return more frequently, with some of them eventually becoming editors.
What idea are we testing?
A few weeks ago we shared how we were considering a test of a sliding gallery view of all an article’s images at the top of the article that readers can then click to jump to that part of an article, inspired by Community Wishlist requests for improved discovery of media. We’ve since built a prototype, called Image browsing, that takes your feedback into account. You can try it by adding the url parameter ?imageBrowsing=1 to the end of any URL on the mobile view for enwiki. For example: https://en.wikipedia.org/wiki/Hummingbird?useskin=minerva&useformat=mobile&imageBrowsing=1
What stage is this project in?
Our initial discussions with you constituted phase 0 of our reader experiment phases. We now want to enter phase 1: launching a small test with an early version of these ideas. It’s not yet clear whether this feature will be an improvement for readers, so we want to test it to determine whether to proceed into Phase 2: building a feature.
What is the timeline?
We will A/B test this version with 0.05% of mobile readers on English Wikipedia starting the week of November 17 and ending four weeks later on December 17.
What does the experiment include?
This test will include a gallery at the top of an article that shows all the article’s images. The feature will be available for any article that has three or more images. Tapping on any image will open a browsing experience with the image enlarged, its caption, and options to view it on Commons (if available). Readers will see images and paragraph-excerpts from the article itself in the gallery and will be able to switch back to where the image appears within the article. At the bottom of this experience, readers will be able to view images selected by editors for the same article in other Wikipedias.
Screenshots:
Below are three separate screenshots of the test's different aspects to demonstrate the experience when a user clicks through and scrolls.
What input are we looking for from you?
While this round of the experiment is focused on simply testing if readers are interested in image browsing, there are still issues that would need to be resolved before developing this into a feature, including those around bad images and cross-wiki images as it relates to conflicting policies or cultural sensitivities. We invite you to help us continue to identify concerns like this. In the collapsed box you'll find a summary of the feedback and risks we heard from you in September, along with how we're thinking through them.
Feedback from Phase 0
Mixed feelings on showing images from other projects
Commons: Some editors liked the idea of pulling more images from Commons, while others felt there was too much risk to showing Commons images without editor oversight. For this test, we have decided to only use images that have been added to at least one Wikipedia.
Images from other wikis: Some editors felt that allowing readers to click to see images from other wikis for the same article could pose a risk to editor oversight. For the purposes of gathering information in this test, we are including the ability to view images from other wikis, and will carefully observe and share the results with you for future conversations.
Risk of showing inappropriate images
We’ve set up this first A/B test so that you can exclude page images by adding the tag for exclusion, but we agree there’s still some risk. If we decide to proceed with this idea after the test, we’ll review ways we can expand this list to include further editorial oversight.
Risk of showing irrelevant images
Here, we’ll be using the same classes as MediaViewer. Instructions on how to add these classes are available on this page. Images already excluded from Media Viewer will not appear in the experience.
We agree this is a risk when displaying images from across wikis since not all wikis have the same level of moderation. We’ll be reviewing this piece with our legal team and current policy to make sure everything is aligned.
The guidelines in the Manual of Style for images focus on how images are presented within article content. Since this experience is more like a navigation or browsing experience outside the main content space, similar to how images appear in Media Viewer, we’re not sure how or whether to apply the MOS here, so let’s keep talking about that.
Since this work is still experimental, we expect to refine and adjust this idea based on your feedback. We’d love for you to try the feature on a few articles using the url parameter above. Your input will help us decide how to improve it if we move forward after the test. Also, stay tuned for the test results. We’ll share them with you and discuss together whether it makes sense to continue with this idea into Phase 2, and if so, what additional changes we will need to make before proceeding. Please share your thoughts and questions here, and for more info, see our project page.
When I go to the hummingbird page and click on the article image that has the caption "Adult male bee hummingbird, Cuba", I am taken to a page that contains the image, along with credit and a link to the license information. It is my understanding that CC licenses require that information to be linked to and displayed when the image is clicked. When I click on the same image in the slide show, I do not see any licensing information. This may be a problem.
The other obvious issue, of course, is that the images are displayed without their captions until you click them individually. As User:DarthVader might have said, I find your lack of context disturbing. – Jonesey95 (talk) 01:21, 7 November 2025 (UTC)[reply]
Hi @Jonesey95, thanks so much for flagging. You raise some important points. We've taken them into conversation with colleagues in the Legal department, and agree that we would need to address them if we end up building a future feature out of this experiment. I'll follow up more here if/when that happens. EBlackorby-WMF (talk) 22:31, 7 November 2025 (UTC)[reply]
Is it really OK with the legal department to knowingly violate the terms of CC-BY-SA, even on an experimental basis? I'm not a lawyer, so I don't know if it really is a violation, but it doesn't seem like something that would normally be allowed here at en.WP. If someone tried to roll out a template that behaved in this way, I think it might get some license-related pushback. – Jonesey95 (talk) 00:15, 8 November 2025 (UTC)[reply]
@Jonesey95 I disagree that there is any semblance of a violation here. There is a prominently displayed link to Commons off to the side, which imo counts as attribution. I also disagree that if an editor made a similar choice, they would get any license-related pushback. The practice of using images as backgrounds and then overlaying attribution text is pretty common across userpages (an example of this would be Sigma's userpage) Sohom (talk) 13:30, 11 November 2025 (UTC)[reply]
The Creative Commons licences do not require a specific method for providing attribution. The licence states that it may be reasonable to meet the attribution requirement by providing a link to a page that has all the required information. Since clicking on a gallery image displays an expanded image with an overlaid link to the attribution information, personally I feel this is a reasonable approach to provide attribution. isaacl (talk) 06:02, 8 November 2025 (UTC)[reply]
I find the workflow to jump to the section where the image is located to be awkward. After swiping through the gallery at the top of the article, I select an image, and see an expanded image at the top of the page, but there's no link to the section. I have to scroll down through the list of images (with the little summaries) until I reach the image I originally selected, and then I can select "View in article". I think it would be better if the little summary and "View in article" link appeared directly below the expanded image, so the context and jump link would be immediately available. (I think it should still appear within the comprehensive list as well.) isaacl (talk) 06:11, 8 November 2025 (UTC)[reply]
Hi @Isaacl, thanks for this feedback! It's helpful to hear your thoughts and ideas around page navigation. We're still figuring out the best way for anchor links to behave on the page in a way that more seamlessly connects images with context via their summary and their place in the article. Our design team will take a closer look with this in mind. Do you think the summary and "view in article" link would work best overlaid on top of the expanded image, underneath where the caption currently is? Or do you have something different in mind? EBlackorby-WMF (talk) 20:48, 13 November 2025 (UTC)[reply]
Due to the length of the blurb (I guess it's an excerpt, not a summary), I think it would be better to appear below the image, as I suggested. From a UI perspective, since the "view in article" link will bring you to the text in the excerpt, I think it would be better for the link to appear floated to the right at the top of the excerpt. I suggest leaving a bit of extra space at the bottom so the top of the excerpt and the link is visible without scrolling. I think it is better for the text to appear distinct from the caption, so prefer the text not to appear as though it is floating over the image. isaacl (talk) 22:43, 13 November 2025 (UTC)[reply]
I think the general idea here is good. If we can increase the visibility of media in a way that enhances the usefulness of Wikipedia to readers, then I'm all for it. The current implementation also seems to be decent, although some kinks may have to be worked out as pointed out above.
Even if directly pulling images from Commons is ruled out, what about simply providing a link to the Commons category? We already have {{Commonscat}}, so I don't see how this would be controversial. The link could look something like this:
Hi @~2025-32228-23, thanks for the note, glad you like the idea so far. Potential connections with Commons is something we are thinking about a lot, and this idea you've posed is a good one for future investigation. How do you feel about offering a link to view media from other projects besides Commons (like other language wikis)? EBlackorby-WMF (talk) 21:01, 13 November 2025 (UTC)[reply]
If I edit via the API, how does that interact with temporary accounts? There's (probably) no cookie management going on, so does a new TA get created on every API call? RoySmith(talk)13:44, 9 November 2025 (UTC)[reply]
Not on every API call. But if the API process isn't handling cookies (and isn't logged in with OAuth or the like), then yes, they'd get a new TA created on every edit or other action that triggers creation of a temp account. And probably quickly hit the daily temp-account creation rate limit. Anomie⚔15:27, 9 November 2025 (UTC)[reply]
I know that most client-side HTTP libraries have the ability to handle cookies, but many real-world applications don't set that up. For example, the popular requests package for Python only handles cookies if you go to the trouble of creating a session, and people often don't bother. And even if you did, if your client is multi-threaded (or multi-process), each thread will have its own session and thus its own cookie jar.
Do API calls just start failing? I'd think so, but I haven't tested it. 🤷 But really, any API-using thing that's making logged actions shouldn't be doing it while logged out anyway, and could probably be blocked per WP:Bot policy even if it is properly managing sessions. Anomie⚔16:06, 9 November 2025 (UTC)[reply]
@RoySmith I haven't tested it, but I believe that temp accounts are only created on their home wiki on first edit, and are created on subsequent wikis on read requests. --Ahecht (TALK PAGE)15:04, 10 November 2025 (UTC)[reply]
Yes, but that's not a true account creation, just an autocreate, same as with named accounts. And you'd need to have cookies to do that anyway. There are some actions that can cause MediaWiki to reserve a temporary account name (like previewing an edit) but not fully create the temporary account, but I'm not sure any of those apply to the API. Unless you use the acquiretempusername API, of course. AntiCompositeNumber (they/them) (talk) 00:41, 11 November 2025 (UTC)[reply]
Hey All. We have rolled out one method of interactive visualization for OWID based on an all-Commons SVG approach. You can see the integration of these graphs here.
With this technique we are able to bring over limited functionality of the OWID "grapher" visualizations; however, we are unable to support their newer and more complex explorer visualizations, which one can see here: MDWiki:WikiProjectMed:OWID_popup#Explorer.
Wikimedia Foundation and Our World in Data technical staff are interested in meeting with interested Wikipedians to pitch possible usage to this community. We do not have a date set, as I first want to gauge interest in such a discussion, plus any concerns folks have. The WMF supports its use, has a signed MOU, and is happy from a security and licensing perspective.
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Updates for editors
Example of a talk page with the new design, in French.
MediaWiki can now display a page indicator automatically while a page is protected. This feature is disabled by default. It can be enabled by community request. [18]
Using the "Show preview" or "Show changes" buttons in the wikitext editor will now carry over certain URL parameters like 'useskin', 'uselang' and 'section'. This update also fixes an issue where, if the browser crashed while previewing an edit to a single section, saving this edit could overwrite the entire page with just that section’s content. [19][20][21]
Wikivoyage wikis can use colored map markers in the article text. The text of these markers will now be shown in contrasting black or white color, instead of always being white. Local workarounds for the problem can be removed. [22]
The Activity tab in the Wikipedia Android app is now available for all users. The new tab offers personalized insights into reading, editing, and donation activity, while simplifying navigation and making app use more engaging. [23]
The Reader Growth team is launching an experiment called "Image browsing" to test how to make it easier for readers to browse and discover images on Wikipedia articles. This experiment, a mobile-only A/B test, will go live on English Wikipedia in the week of November 17 and will run for four weeks, affecting 0.05% of users on English wiki. The test launched on November 3 on Arabic, Chinese, French, Indonesian, and Vietnamese wikis, affecting up to 10% of users on those wikis. [24]
View all 27 community-submitted tasks that were resolved last week. For example the inability to lock accounts on mobile sites has been fixed. [25]
The JWT subject field in OAuth 2 access tokens will soon change from <user id> to mw:<identity type>:<user id>, where <identity type> is typically CentralAuth: (for SUL wikis) or local:<wiki id> (for other wikis). This is to avoid conflicts between different user ID types, and to make OAuth 2 access tokens and the sessionJwt cookie more similar. Old access tokens will still work. [27]
A REL1_45 branch for MediaWiki core and each of the extensions and skins in Wikimedia git has been created. This is the first step in the release process for MediaWiki 1.45.0, scheduled for late November 2025. If you are working on a critical bug fix or working on a new feature, you may need to take note of this change. [29]
The process for generating CirrusSearch dumps has been updated due to slowing performance. If you encounter any issues migrating to the replacement dumps, please contact the Search Platform Team for support. [30][31]
MediaWiki can now display a page indicator automatically while a page is protected. This feature is disabled by default. It can be enabled by community request.
@Malcolmxl5 Enwiki already has a pretty extensive version of this of its own. It might be possible to convert that, but we first need to analyze how much functional overlap there is (or not). It's not a very high priority; it is especially useful for wikis that don't want to make and maintain their own version of a protection indicator. —TheDJ (talk • contribs) 07:25, 11 November 2025 (UTC)[reply]
Hello techies. I'm using User:Amalthea/userhighlighter.js, in a way that leaves the background color of admins' names transparent and instead adds a little crown symbol after them. Please see my User:Bishonen/common.css, right at the top. I love that silly effect, but the script seems to have just stopped working, and Amalthea has left the project. On the script's page, near the top, I find the advice "Consider using User:Theopolisme/Scripts/adminhighlighter.js instead, a better version of this script". Maybe I should do just that, but I have no idea how to modify it to get my desired effect, i. e. the effect that Amalthea's script used to give. (Theopolisme has also left the project.) Should I uninstall Amalthea's script, install Theopolisme's, and attempt to give it the same specifications as Amalthea's script now has for me? I'm a little scared of fiddling with it and making a mess, incompetent as I am. Any help, please? And might there be a third script, more up-to-date and actively maintained, that does what I want? Bishonen | tålk03:52, 11 November 2025 (UTC).[reply]
Installing Theo's script and using this in your common.css:
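A minimal sketch of the kind of rule meant here, assuming Theo's script marks admin user links with a CSS class (.adminhighlight below is a placeholder, not necessarily the class the script actually applies):
/* sketch: keep the highlighter's background transparent and append a small crown after admin links */
/* .adminhighlight is a placeholder -- substitute whatever class the script really adds */
.adminhighlight {
	background-color: transparent !important;
}
.adminhighlight::after {
	content: " \2654"; /* U+2654 white chess king, standing in for the crown */
}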
Thanks, Writ! I see that the little crown is back now. Should I still do it, do you think? (On the principle that Theopolisme's script is supposed to be better?) Or leave well enough alone? Bishonen | tålk07:40, 11 November 2025 (UTC).[reply]
Eh. Switching over isn't a bad idea, but honestly, I'm of the "if it ain't (currently) broken, don't fix it" opinion. Might be useful to keep this CSS in 'zilla's back pocket, in case of future need. Writ Keeper⚇♔13:47, 11 November 2025 (UTC)[reply]
A few hours ago, trying to view a page, I received a message: Sorry! This site is experiencing technical difficulties, in large letters.
It then displayed, in medium-size letters, "Try waiting a few minutes and refreshing", and,
(Cannot access the database: Cannot access the database: Database servers in cluster31 are overloaded. In order to protect
application servers, the circuit breaking to databases of this section have been activated. Please try again a few seconds.)
I waited a few minutes, as advised, and then was able to view the pages normally.
My question is whether this is any cause for special concern, or whether the system is responding as designed to a maximum load. If this is a normal response to a peak load, I will ignore it.
@Robert McClenon This morning we had a massive network outage leading to loss of a whole row which overloaded practically everything and triggered our circuit breakers (they intentionally kill a certain subset of requests to try to keep the services up in general). It did recover in ten minutes. We are still investigating the root cause of that network incident and I will try to update here later. Thanks to theDJ for the ping. Ladsgroupoverleg11:48, 11 November 2025 (UTC)[reply]
Does Wikipedia support inline prefers-color-scheme?
I've moved this here as suggested by WhatamIdoing:
Exactly what the question says. Does wikitext support this for CSS? My userpage uses a lot of custom CSS and has a bunch of contrast issues depending on which colour mode a user is on which I need to fix by creating overprecise CSS.
Wikitext has little support for CSS that requires media queries (though I note the existence of CSS light-dark() which might work?). What can be done regardless is for you to make a Template:TemplateStyles sandbox (see WP:TemplateStyles) and then move it to a subpage of your user page and then you have access to all the media queries you might like. Izno (talk) 05:28, 12 November 2025 (UTC)[reply]
You can make dark mode-specific styles using html.skin-theme-clientpref-night, e.g.
#mw-content-text {
	--discussion-threads-style-border-colour1: rgb(85%, 85%, 100%); /* left border colour */
}
/* dark mode */
@media screen {
	:where(html.skin-theme-clientpref-night) #mw-content-text {
		--discussion-threads-style-border-colour1: rgb(5%, 5%, 30%);
	}
} /* user-selected dark colour scheme */
@media screen and (prefers-color-scheme: dark) {
	:where(html.skin-theme-clientpref-os) #mw-content-text {
		--discussion-threads-style-border-colour1: rgb(5%, 5%, 30%);
	}
} /* OS-selected dark colour scheme */
#mw-content-text /* ... other selectors ... */ {
	border-left: 5px solid var(--discussion-threads-style-border-colour1);
}
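For completeness, a minimal sketch of the light-dark() approach Izno mentioned above. It assumes the skin exposes a matching color-scheme for its theme (or that you declare one yourself), and .my-box is a placeholder class rather than anything defined on-wiki:
/* one rule covers both schemes: the first colour is used in light mode, the second in dark mode */
#mw-content-text .my-box {
	color-scheme: light dark; /* may be redundant if the skin already sets it */
	border-left: 5px solid light-dark(rgb(85%, 85%, 100%), rgb(5%, 5%, 30%));
}
Note that light-dark() only works for colour values, so anything beyond colours still needs the media-query or clientpref-class approach above.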
This is Eliza from the Reader Experience team at the Wikimedia Foundation. The team’s focus is on making Wikipedia a more engaging and valuable place for our existing readers, encouraging them to come back to explore and learn more. We see this work as a part of addressing the decline in pageviews on Wikipedia we’ve talked about in the past. You may have recently seen my post about the Reader Growth team’s image browsing experiment. This is a separate idea focused on a different part of the reading experience.
As part of our explorations into ways to encourage readers to be more active in their experience and return to Wikipedia more frequently, next week the team will be launching a test of an early-stage version of what could become a “reading list” feature on the desktop and mobile web browser experience. This feature, which is already on the Wikipedia mobile apps, would allow readers to save articles they want to come back to later. Articles would be organized in a list that will be accessible in the top-right navigation for logged-in readers.
Why are we working on this?
We think that when readers are more active in their reading experience, they could become closer to making their first edit. We first want to explore the simplest approach to curation – allowing readers to save an article for reading later. Reading lists and saved tabs are a couple of the most popular features on the Wikipedia Android and iOS apps. Readers who use them frequently ask to sync their saved articles to their desktop reading experience.
The test is designed to help us understand if readers are interested in saving articles to read later, and whether they use these articles after they’ve saved them. The feature will be available to a small portion of randomly selected readers who have accounts with zero edits and nothing saved to their watchlists on English, Arabic, Chinese, French, Indonesian, and Vietnamese Wikipedias, accounting for about 15% of all logged-in users on these wikis.
What stage is this project in?
We are currently in Phase 1: launching a small test with an early version of these ideas. It’s not yet clear whether this feature will be an improvement for readers, so we want to test it to determine whether to proceed into Phase 2, building a feature. After the A/B test, we’ll come back here to discuss the results with you and decide together whether to proceed.
Because reading lists are an established feature on the Apps, we used learnings from the Apps team as well as informal conversations at in-person events and on Discord to serve as our research and background for Phase 0.
This is an example of what a user in the reading lists desktop experiment would see when they go to save the "Dog" article.
What is the timeline?
We will test the prototype starting the week of November 17. The test will run for 4 weeks and we will stop collecting data on December 17; however, the feature will still be accessible to the users who received it, so that they don’t lose their saved articles.
This is an example of how the reading list feature currently looks in the mobile app, where it is already deployed.
What are we testing?
The main capabilities that will be in the feature are:
Informing readers that the feature is available and indicating where to click to save an article and where to click to view the list of saved articles
Allowing readers to save articles to a private list for reading later
Allowing readers to access their list and remove articles that are no longer relevant to them
We will measure how often people are using the new feature and whether this has any positive effect on the number of pages readers open per session. If this test shows positive results on these two metrics, we will then discuss adding additional functionality, such as the ability to sync lists with the apps, creating multiple custom lists, and more.
What input are we looking for from you?
Would you as an editor find reading lists personally useful, either for your reading or editing work? If so, how?
If the experiment is successful, we will need to ensure designs avoid confusion between reading lists and watch lists. For this test, we’re only including logged-in users who don’t use the watchstar. However, if we decide to proceed with this idea in the future, we want editors as well as readers to be able to use both reading lists and watchlists.
Icons: The icons for reading lists and watchlists are similar. What is your opinion on the current icon for watchlist? Do you think it could be improved and if yes, how?
Placement: In the experiment, users don’t see the buttons for reading lists and watchlist next to each other, but instead the watchlist button is in the tools menu. Do you have any thoughts about that placement?
Have you used reading list/saved pages/playlist type features before, whether on the Wikimedia app or on other websites? If so, what types of functionality do you think could be useful for readers?
I just added a hatnote to Lexiphanes, and found that no hatnote appears on the page when viewed in mobile form through a mobile browser (I tried it on iOS through both the Safari and Firefox apps; Firefox showed it only with the "desktop site" option enabled), though it shows up normally on a regular computer (in my case Firefox 144.0 on Windows 10). I have no idea what is causing this problem. - LaetusStudiis (talk) 23:17, 12 November 2025 (UTC) This was the result of my own mistake in placing the note, as User:PrimeHunter said. - LaetusStudiis (talk) 03:00, 13 November 2025 (UTC)[reply]
@LaetusStudiis: I looked again and it actually happened because you placed the hatnote after the taxobox in the code. The mobile version didn't do anything wrong. I have moved the hatnote to the top where it belongs per MOS:ORDER. PrimeHunter (talk) 00:37, 13 November 2025 (UTC)[reply]
The coordinates at A3055 road were wrong, so I corrected them here, as far as it is possible to locate a road by single coordinates. When I click through from the new numeric coordinates at the top right of the article to the Google map, the location is OK. However, the pin in the "infobox" does not seem to have changed correspondingly. So where is the "infobox" getting its coordinates? I can't see where else in the article they are coming from. ITookSomePhotos (talk) 23:27, 12 November 2025 (UTC)[reply]
I don't understand why ClueBot does not switch to a new archive on my talk page, despite maxarchsize being exceeded. Any idea? Anyone is invited to fix it directly. Leyo 16:49, 14 November 2025 (UTC)[reply]
Thank you. I changed it. BTW: There are currently 69 talk pages and user talk pages that use insource:/maxarchsize=10000[^0]/. --Leyo17:21, 14 November 2025 (UTC)[reply]
The most recent run of Special:WantedCategories featured a redlinked Category:Taxon listed on CITES Appendix II that was being autogenerated and transcluded by the {{Population taxobox}} template on Scottish wildcat. I can't find any other "Taxon listed on X" categories at all (nothing exists at the plural "taxons" either), so I suspect this is most likely a misgeneration of a category that's supposed to exist at a completely different naming format, or possibly even a straight-up error that isn't supposed to exist at all under any naming format — but the template itself doesn't actually contain any code that would generate that category organically, and is instead calling out other modules that are smuggling the category in.
But that means I don't know how to find and fix where that category is coming from, and I had no choice but to create it as a hidden category to get it out of the reds — and even worse, with no clues as to what parent category to file that category under due to the lack of any locatable siblings, I had to create it as an uncategorized hidden category.
So could somebody with more knowledge of this stuff find where the category is coming from, and either fix the naming format if it's being named incorrectly or just make it go away if it's not supposed to exist at all? Thanks. Bearcat (talk) 15:08, 15 November 2025 (UTC)[reply]
Well, regardless of what the expected plural form is, why's the template even passing through a singular instead of plural form here in the first place? But also, there's still no category for Category:Taxa listed on CITES Appendix II, so this still isn't a thing that should be getting passed through at that form either. Bearcat (talk) 16:22, 15 November 2025 (UTC)[reply]
Ironically I'm asking about something in the FAQ above, but hear me out. We have a self-reference hatnote at search telling people to go to special:search. Would there be a way to embed one of those javascript scripts in that specific hatnote to be able to append "or [use the search box]" there, with the link just switching their cursor focus to the search box? (In the population of users who somehow manage to land at the "search" page while looking for the internal search, I'm guessing this might be helpful.) --Joy (talk) 15:33, 15 November 2025 (UTC)[reply]
There have been some perennial discussions about removal of |slogan= from various infoboxes, but I could not find a case that discussed making WP:SLOGAN essentially policy.
Now, WP:SLOGAN is just an essay, which I know many people object to; hence this RFC. I encourage everyone to read the essay, but here are the key points (copied from WP:SLOGAN):
Mission statements generally suffer from some fundamental problems that are incompatible with Wikipedia style guidelines:
Even though mission statements are verifiable, they are written by the company itself, which makes them a primary source.
Per this search there are at least 37 infoboxes that have some form of slogan in them. The question is: should all of those be removed? This does not mean that slogans cannot be mentioned in the body of an article; that is another conversation about whether they are notable and encyclopedic. My question is purely: do they belong in the infobox?
In addition to this, what about mottos? It seems as though they are used rather interchangeably in Infoboxes... This search shows at least 72 infoboxes with a motto type parameter. Should some of those be removed? Personally I'd say keep it for settlement type infoboxes, but the way it is used on {{Infobox laboratory}} or {{Infobox ambulance company}}, it is performing the same functionality as a slogan and has the same issues.
No A slogan is one of those trivial things people go on Wikipedia to find out. (What company's slogan is "leave the driving to us"?) The claim that they conflict with Wikipedia style guidelines is nonsense. Quoting a slogan isn't endorsing it, any more than the quotations in Mein Kampf endorse Nazism. --Isaac Rabinovitch (talk) 01:19, 21 October 2025 (UTC)[reply]
I don't care how this cookie crumbles, but slogans coming from primary sources, or "not being verifiable though third party sources", really is irrelevant to whether or not to include them. Headbomb {t · c · p · b}02:29, 21 October 2025 (UTC)[reply]
No (and doubly no for mottos). But I do think editors should use some discretion when deciding whether to include one. Nike can have Just Do It. Apple Inc. can have Think different. Disneyland can have "Happiest Place on Earth". M&M's should have "Melts in Your Mouth, Not in Your Hands". But slogans that almost nobody recognizes should be excluded through editorial judgement, not through removing the option entirely from the infobox. WhatamIdoing (talk) 02:40, 21 October 2025 (UTC)[reply]
No. Mottos are absolutely often promotional, but oftentimes so are names/logos/etc. They can still be essential pieces of information about an organization. I'd rather we encourage tight editorial discretion about which mottos are notable enough to warrant inclusion than ban them outright by removing the fields for them. Perhaps a good minimum standard would be secondary coverage (i.e. a source explicitly noting that they have a particular motto). Sdkbtalk04:51, 21 October 2025 (UTC)[reply]
No, each use should be determined on a case-by-case basis. If it is a famous slogan (finger licking good) or (the fish others reject), then we may as well include it. But if it is excessive or ridiculous, then omit it. Graeme Bartlett (talk) 08:25, 21 October 2025 (UTC)[reply]
Comment the RFC question is not neutral -- it has a deletionist bias. If the arguments given in the No-votes above gain consensus, the slogan parameter should be restored in the infoboxes it was removed from. This would be a new global consensus overriding the local consensus at the infobox talk page archive. Joe vom Titan (talk) 14:14, 21 October 2025 (UTC)[reply]
No per Sdkb and Isaac Rabinovitch, and restore any that have been removed without a specific consensus discussion per Joe von Titan. Thryduulf (talk) 21:28, 21 October 2025 (UTC)[reply]
I'm with Blueboar on this: if secondary sources are mentioning it, then we should too. I'd also add that we are a global site, writing for a global audience; I doubt all of these slogans are global, or even consistent across the Anglosphere. ϢereSpielChequers 09:55, 23 October 2025 (UTC)[reply]
No. Slogans, mission statements, etc. are a basic piece of information about a company. They are reasonable to include and inclusion is not really promotional. We include logos and mention marketing stylization like all-caps but don't consider these promotional. A primary/self-published source is fine for this. Readers know what a slogan is and seeing one reproduced in an infobox is not going to be interpreted as Wikipedia declaring its accuracy. Secondary sources should be used to resolve any discrepancies or doubt, I suppose. All that said, I don't know that every company article needs to have the slogan included. Individual cases should be discussed on talk. —Myceteae🍄🟫(talk) 01:01, 25 October 2025 (UTC)[reply]
Yes per MOS:INFOBOXPURPOSE. The infobox is not for every true fact about a topic, it is about basic, uncontroversial facts, and ideally the kind that change rarely. If a slogan is relevant, great, cover it in prose. It doesn't have to be in the infobox. SnowFire (talk) 18:58, 30 October 2025 (UTC)[reply]
Slogans are clearly notable, we have articles on some of them (Category:Slogans), and clearly where appropriate it would be part of our prime directive to include mention of them, as indicated in the WP:MISSION essay which prompted this discussion: "Slogans may be worth mentioning briefly as part of a description of the organization's marketing approach." As such we shouldn't set about forbidding editors to include these details. A company's mission statement may also be worth noting, even though most are not - it comes down to judgement and consensus of the editors working on the article. I feel the Mission/Slogan essay is a useful guideline, leaving decision to editors working on the articles. I don't think we should be about imposing restrictions which may limit or restrict appropriate, notable, and useful encyclopedic knowledge. Blind, sweeping restrictions are rarely useful. So, of course, No. SilkTork (talk) 14:53, 31 October 2025 (UTC)[reply]
RFC: What should be done about unknown birth/death dates
With the implementation of Module:Person date, all |birth_date= and |death_date= values in Infoboxes (except for deities and fictional characters) are now parsed and age automatically calculated when possible.
With this implementation, it was found that there are a large number of cases (currently 4537) where the birth/death date is set to Unk, Unknown, ? or ##?? (such as 19??). Full disclosure, Module:Person date was created by me and because of an issue early on I added a number of instances of |death_date=Unknown in articles a few weeks ago. (I had not yet been informed about the MOS I link to below, that's my bad).
Per MOS:INFOBOX: If a parameter is not applicable, or no information is available, it should be left blank, and the template coded to selectively hide information or provide default values for parameters that are not defined..
There is also the essay WP:UNKNOWN which says, in short, Don't say something is unknown just because you don't know.
So the question is what to do about these values? Currently Module:Person date is simply tracking them and placing those pages in Category:Pages with invalid birth or death dates (4,537). It has been growing by the minute since I added that tracking. Now I am NOT proposing that this sort of tracking be done for every parameter in every infobox... There are plenty of cases of |some_param=Unknown, but with this module we have a unique opportunity to address one of them.
I tried to find a good case where the |death_date= truly is Unknown, but all the cases I could think of use |disappeared_date= instead. (See Amelia Earhart for example).
The way I see it there are a few options
Option A - Essentially do nothing. Keep the tracking category but make no actual changes to the pages.
Option B - Implement a {{preview warning}} that would say This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. (Obviously open to suggestions on better language).
Option C - Take B one step further and actually suppress the value. Display a preview warning that says This value "VALUE" is invalid per MOS:INFOBOX & WP:UNKNOWN. It will not be displayed when saved. then display nothing on the page. In other words treat |death_date=Unknown the same as |death_date=. (Again open to suggestions on better language for the preview warning).
We definitely shouldn't be using things like "Unk" or "?" - if we want to say this is not known we should explicitly say "Unknown". Should we ever say "unknown" though? Yes, but for births only when we have reliable sources that explicitly say the date is unknown to a degree that makes values like "circa" or "before" unhelpful - even "early 20th century" is more useful imo than "unknown". "Unknown" is better than leaving it blank when we have a known date of birth but no known date of death (e.g. Chick Albion). I'm not sure how this fits into your options. Thryduulf (talk) 00:24, 22 October 2025 (UTC)[reply]
Agreed. There are cases where no exact date is given but MOS:INFOBOX and WP:UNKNOWN do not apply because the lack of known date can be sourced reliably. If the module cannot account for this, I really think only option A is acceptable. —Rutebega (talk) 18:15, 22 October 2025 (UTC)[reply]
@Rutebega and Thryduulf: So I can very easily make it so that |..._date=Unknown<ref>... is allowed but just plain |..._date=Unknown is not. That is just a matter of tweaking the regular expression. Not hard to do at all. That being said (mostly for curiosity's sake), can you give me an example of a page where the lack of a known date can be sourced reliably? Every case I could think of (and I really did try to find one) either has a relevant |disappeared_date= (so you don't need to specify that |death_date=Unknown) or you can at least provide approximate dates (i.e. {{circa|1910}}, 1620s or 12th century). Zackmann (Talk to me/What I been doing) 18:23, 22 October 2025 (UTC)[reply]
Metrodora isn't quite date unknown, but the only fixed date we have is the manuscript which preserves her text (c.1100 AD), and her floruit has been variously estimated between the first and sixth centuries AD. Of course, so little is known for certain about Metrodora that every single infobox field would be "unknown" were it filled in, and therefore there's little point having an infobox at all.
Corinna's dates are disputed: she was traditionally a contemporary of Pindar (thus born late 6th century and active in the fifth century BC) but some modern scholars argue for a third-century date. If the article had an infobox, a case could be made either for listing her floruit as either "unknown", "disputed", "5th–3rd century BC", "before 1st century BC" (the date of the first source to mention her) or simply omit it entirely.
@Caeciliusinhorto-public: thanks for some real examples. I think your point that so little is known that Infoboxes don't make sense is a good one... If there were other info that made sense to have in an Infobox, I think the dates would still be able to be estimated (even if the range is hundreds of years). You could still put |birth_date=5th-3rd century BC or, of course, just leave it blank! Leaving it blank to me implies that it is Unknown, though it does leave ambiguous whether it is Unknown because no editor has taken the time to figure it out or whether it is Unknown because the person lived some 2,200 years ago and we have no real way of knowing when they were born... Zackmann (Talk to me/What I been doing) 09:05, 23 October 2025 (UTC)[reply]
This is above my pay grade, but can you give us an idea of how much "It has been growing by the minute"? The scale of those additions may inform our view as to how best to deal with it. Lukewarmbeer (talk) 16:34, 22 October 2025 (UTC)[reply]
@Lukewarmbeer: so this is mostly a caching issue. I don't think very many new instances of this are being created each day, it just takes a while for the code to propagate. I really don't have an objective way of saying how many new instances are being created daily... Zackmann (Talk to me/What I been doing) 17:13, 22 October 2025 (UTC)[reply]
FWIW, about 15% of our biographies of living people have unknown birthdates (based on a count by category I did in 2023). I would assume that deceased biographies are perhaps more likely to miss this data, so we're looking at a number in the low hundreds of thousands? Not all of those will have infoboxes, of course. Andrew Gray (talk) 20:39, 22 October 2025 (UTC)[reply]
@Andrew Gray: when you say have unknown birthdates do you mean "no birthdates are given"? Because that is NOT what we are talking about here... We are talking about |birth_date=Unknown, where someone has specifically stated that the date is Unknown, not just left it blank. Zackmann (Talk to me/What I been doing) 20:42, 22 October 2025 (UTC)[reply]
@Zackmann08 ah, right - I think I misunderstood, apologies. If the module does nothing when the birthdate field is blank or missing, that sounds good.
Perhaps the problem is the multiple meanings of "Unknown". Some may have filled it meaning "nobody knows about the early life of this historical guy, only that he became relevant during the X events, already an adult", and others "unknown because I don't know". We may make it so that "Unknown" has the same effect as an empty field, and require a special input for people with truly unknown dates. And note that any biography after whatever point birth and death certificates became ubiquitous should be treated as the second case. Cambalachero (talk) 14:09, 23 October 2025 (UTC)[reply]
Option D The variant on option C where it's permitted iff there's a citation seems like a good solution to me. By a similar argument to WP:ALWAYSCITELEAD, I think a citation should always be required to assert that someone's date of death is outside the scope of human knowledge. From WP:V we should always cite material that is likely to be challenged, and I think the assertion that someone's date of death is "unknown" falls well within that scope; in particular I myself will always challenge it if unsourced. lp0 on fire()16:32, 23 October 2025 (UTC)[reply]
I think whether someone's date of birth or death being unknown falls into the category of material that is likely to be challenged is partly a factor of when and where they were born and the time, place and manner of their death and how much we know about them generally. It is not at all surprising to me that we don't know the date of birth or death of a 3rd century saint or 18th century enslaved person, or when a Peruvian athlete who competed in the 1930s died; we do need a citation to say that we only know the approximate date of death for Dennis Ritchie and Gene Hackman. Thryduulf (talk) 16:50, 23 October 2025 (UTC)[reply]
Do you think the citation always needs to be inside the infobox? Our article about Metrodora has a couple of paragraphs about which century she might have lived in. There's no infobox at the moment, but if we added one, would you insist that the citations be duplicated into the infobox? WhatamIdoing (talk) 18:40, 24 October 2025 (UTC)[reply]
Option D Allow Unknown but not other abbreviations. Require citations for dates. Rationale: Looking at Sven Aggesen, it’s easy to see that “Unknown” is helpful because it’s communicating that the person is dead. In my opinion it’s still stating a fact. So Unknown should be allowed; “?” should not. It seems like dates of birth and death should always be cited. Thanks for your work on this!! Dw31415 (talk) 17:54, 23 October 2025 (UTC)[reply]
In the case of Sven Aggesen I think we could reasonably expect a reader to infer from "born: 1140? or 1150?" that he is probably dead! In the case of people born recently enough that there might be confusion, I can't imagine there are many cases where both (a) they are known to be dead and (b) their date of death is known so imprecisely that we don't have a more useful value than "unknown" for the infobox. Caeciliusinhorto (talk) 20:35, 23 October 2025 (UTC)[reply]
Option A - needs more study - The category seems flawed; the concern seems more a flaw in the process or concept of the template itself. Looking at a few pretty random clicks from Category:Pages with invalid birth or death dates (4,537), I see that maybe listing them as bad is instead an indication that, when the context is historical or a short stubby article, we just should not expect modern and detailed precision. And there was at least one simple typo to remind me that articles are imperfect.
Carlos Altés 3 Sept 1907 to unknown -- there obviously is a death, but that the death date is unknown is perhaps a correct statement of fact.
Georgios Anitsas born 1891, died unknown -- well it's a stub article about a 1924 Olympics shooter based on two sports cites.
Æthelbald of Mercia King of Mercia died 757 - the death would be known from the succession, though exact day not so much, and the birth before rising to Kingship even less so.
Vicente Albán born 1725, died unknown - and typo on birthplace "Viceroyalty if New Gramada"
Po Aih Khang King of Panduranga died 1622, born ? - the death would be known from the succession, though again the exact day not so much, and the person is only known from historical chronicles, so the birthdate being a question mark seems an informal ask for someone who knows to put one in ...
In the actual instances: Option B seems a nonstarter since the pages already exist and such a flag seems meaningless; Option C suppression seems in many cases to hide the simple fact of what is not known or what is only known to the year without an exact day; and Option D ... I don't have a fix for the cases other than to say 'needs more study' and/or 'things are about as good as can be done with what is shown, so just leave it'. Cheers Markbassett (talk) 19:35, 13 November 2025 (UTC)[reply]
@Markbassett: not really sure how the concept of the template itself is flawed... Again per MOS:INFOBOX and WP:UNKNOWN we should not be putting Unknown in the infobox... Your comments don't really address that... You give a few examples where the date should supposedly be inferred, but don't address the underlying issue here... Zackmann (Talk to me/What I been doing) 19:40, 13 November 2025 (UTC)[reply]
I think it's complicated and depends on context, this RFC was missing too many questions and too many cases to start trying to make a conclusion.
The simple list of 'Category:Pages with invalid birth or death dates' has many different situations and various templates -- and some of the fields might well be a good usage or the best that can be expected. Needs considerably more study, maybe the category needs to look at things by-template and by-era for example, or maybe separate out those that are from very short articles with less than 4 cites.
See my remark for Carlos Altés - "that the death date is unknown is perhaps a correct statement of fact."
If the date is not known and not knowable - does that mean the template 'Infobox football biography' was a bad one to use or that the template is incomplete? Per template guidance: "Do NOT use this template when the person's exact date of death is disputed or unknown; consider death year and age instead."
Does this mean that the template death year and age needs to add guidance for if the year is unknown? Should date fields have some text values allowed as options to distinguish 'nobody knows' from 'unknown from limited cites' from 'someone please put in a value'?
I could ask similarly if 'Infobox royalty' date fields should allow simply year-of values or specify some text values as options, because the template Birth date defaults to Birth year and no further, but in just these few examples I'm seeing that centuries-ago kings often seem not to have a known year of birth.
Perhaps the category list is showing a few thousand places of questions more than issues -- if you broke it out by which template is used from Wikipedia:List of infoboxes it might be reduced to mostly just a few where birth-date is an issue, or perhaps it would emerge that the template birth-year needs a mod. I don't know, but I think nobody knows without considerably more study -- and meanwhile no change. Cheers Markbassett (talk) 20:46, 13 November 2025 (UTC)[reply]
Again Markbassett you have not bothered to read the beginning of this RFC where the following is clearly stated...
Per MOS:INFOBOX: If a parameter is not applicable, or no information is available, it should be left blank, and the template coded to selectively hide information or provide default values for parameters that are not defined..
There is also the essay WP:UNKNOWN which says, in short, Don't say something is unknown just because you don't know.
Umm, obviously you're making a claim without the ability to know, but I did read that -- and then looked further at the unstated and perhaps unseen flaws in Category:Pages with invalid birth or death dates (4,537) by looking at some specific cases there, and chased through a couple of various Infobox templates with a subfield of birth-date etcetera. Though I don't know why the templates' count of issues is small when yours is big, I do know that 'it's complicated'. There's a lot of different situations and different infoboxes, and maybe the wrong infobox was used or maybe the wrong fill was used, or maybe just maybe this is just too superficial and generic a study so far to start trying for conclusions. I didn't propose INVALID RFC, but suggested it needs a deeper look and made a couple of suggestions. I am not excluding that perhaps the infoboxes need to address an area that's not going well and edit there -- in which case removing the indication would be a bad thing. Cheers Markbassett (talk) 22:26, 13 November 2025 (UTC)[reply]
Option D - estimate to a few decades of precision. Wikipedia is unusual for broadly covering global human history. Because of this, unusually as compared to other publications, readers browse biographies not even knowing a person's century or country. Consider the American Civil War veteran Francis A. Bishop. Is this person American, born in the 1800s, and male? The article lacks sources to establish such things, but still, this kind of demographic information is important for categorizing people in Wikipedia. If we can determine a person's century of birth then that is helpful, and if we can narrow it to within a few decades then that also is helpful. Placing biographies in time is critical, and even when we lack an explicit source to WP:V the claim, I favor doing the WP:OR to place this person into visibility in categories and data structures. Bluerasberry (talk) 20:06, 13 November 2025 (UTC)[reply]
The documentation for Module:person date is really only useful to the person who wrote it. It's very difficult to figure out where this fits in the infobox ecosystem. But my best guess is that it is only invoked if templates such as {{Death date and age}} and {{Birth date}} are used in the infobox. These templates ONLY support dates in the Gregorian calendar. The earliest possible Gregorian date was 15 October 1582. The discussion above makes reference to a number of examples from antiquity. These articles should not be using any of these date templates. I can't see how mentioning these people in this discussion makes sense. Jc3s5h (talk) 20:27, 13 November 2025 (UTC)[reply]
User:Jc3s5h for the record you can set |birth_date= to ANY value... It has long been preferred that you use a template such as {{birth date and age}}, but with the creation of Module:Person date even THAT is no longer necessary for modern, Gregorian calendar dates. The real issue here is what to do with dates that claim to be Unknown. I would argue there is not an example where the date is COMPLETELY unknown. While you may not know the EXACT date, you at least know a decade or a century in which the person was alive. You can simply say |birth_date=6th century or |death_date={{circa|610}}... Zackmann (Talk to me/What I been doing) 20:33, 13 November 2025 (UTC)[reply]
I can't understand your reply without a more complete context. If I have the following:
{{Infobox Christian leader
| type = Pope
| birth_date = c. 530
| birth_place = [[Blera]], [[Eastern Roman Empire]]
| death_date = 22 February 606 (aged 75–76)
}}
It would seem to me such an infobox would not invoke Module:Person date and so would not be a suitable example for this discussion. Jc3s5h (talk) 20:49, 13 November 2025 (UTC)[reply]
@Jc3s5h: you are correct; if you use the code you provided, it would NOT invoke Module:Person date. My point is that if you had |death_date=Unknown, I have yet to find a case where that cannot be replaced with SOME information. We may not know the exact date, or even the exact year, but you should be able to replace Unknown with c. 123 or 15th century and thus resolve the problem of it appearing in the category. I have yet to find a page where there is literally NO CLUE about when the person lived, not even a century.
The root of the question is for those pages that DO use Unknown should we display some sort of {{preview warning}} message to editors that essentially says "Hey this isn't a valid value, you need to put SOMETHING (a decade, a century, a 'circa') or (per MOS:INFOBOX) simply leave it blank". This is the goal of Options B & C. Hope that helps... - Zackmann (Talk to me/What I been doing) 21:59, 13 November 2025 (UTC)[reply]
Should the community harmonize the rules that govern community-designated contentious topics (which are general sanctions authorized by the community) with WP:CTOP? If so, how? 19:55, 22 October 2025 (UTC)
Background
Before 2022, the contentious topics process (CTOP) was instead known as "discretionary sanctions" (DS). Discretionary sanctions were authorized in a number of topic areas, first by the Arbitration Committee and then by the community (under its general sanctions authority).
In 2022, ArbCom made a number of significant changes to the DS process, including by renaming it to contentious topics and by changing the set of sanctions that can be issued, awareness requirements, and other procedural requirements (see WP:CTVSDS for a comparison). But because the community's general sanctions are independent of ArbCom, these changes did not automatically apply to community-authorized discretionary sanctions enacted before that date.[a]
In an April 2024 RfC, the community decided that there should be clarity and consistency regarding general sanctions language and decided to rename community-authorized discretionary sanctions to "contentious topics". However, the community did not reach consensus on several implementation details, most prominently whether the enforcement of community CTOPs should occur at the arbitration enforcement noticeboard (AE) instead of the administrators' noticeboard (AN), as is now allowed (but not required) by ArbCom's contentious topics procedure.[b]
Question 1: Should the community align the rules that currently apply in community-designated contentious topics with WP:CTOP, mutatis mutandis (making the necessary changes) for their community-designated nature?
Question 2: Should the community authorize enforcement of community contentious topics at AE (in addition to AN, where appeals and enforcement requests currently go)?
^Specifically, AE may consider "requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community." – Wikipedia:Arbitration Committee/Procedures § Noticeboard scope 2
The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
Yes to both questions. For almost three years now, we have had two different systems called "contentious topics" but with different rules around awareness, enforcement, allowable restrictions, etc. In fact, because WP:GS/ACAS follows the new CTOP procedure but without AE enforcement, we actually have three different systems. We should take this chance to make the process meaningfully less confusing. There is no substantive reason why the enforcement of, for example, WP:GS/UYGHUR and WP:CT/AI should differ in subtle but important ways. As for using AE, AE is designed for and specialized around CTOP enforcement requests and appeals. AE admins are used to maintaining appropriate order and have the benefit of standard templates, word limits, etc., while AN or ANI are not specialized around this purpose. As a result of WP:CT2022, ArbCom now specifically allows AE to hear requests or appeals pursuant to community-imposed remedies which match the contentious topics procedure, if those requests or appeals are assigned to the arbitration enforcement noticeboard by the community. We should take them up on the offer as Barkeep49 first suggested at the previous RfC. FYI, I am notifying all participants in the previous RfC, as this RfC is focused on the same topic. Best, KevinL (aka L235·t·c) 19:57, 22 October 2025 (UTC)[reply]
Yes to both - I don't see a downside to this standardization, and it would appear to both make the system as a whole easier to understand, and allow admins to take advantage of the automated protection logging bot for the currently-GS topics. signed, Rosguilltalk20:01, 22 October 2025 (UTC)[reply]
Yes to both. The CTOP system is complicated even without these three different regimes and confuses almost everyone involved. AE can be a great option for reducing noise in discussions, compared to AN. —Femke 🐦 (talk) 20:20, 22 October 2025 (UTC)[reply]
Yes to both but as I said in the previous RFC, if we're going to go in this direction, we should also be moving towards a process where the community eventually takes over older ArbCom-imposed CTOPs, especially in areas where the immediate on-wiki disruption that required ArbCom intervention has mostly settled down but the topic itself remains indefinitely contentious for off-wiki reasons. ArbCom was intended as the court of last resort for things the community failed to handle; it's not supposed to create policy. Yet currently, huge swaths of our most heavily-trafficked articles are under perpetual ArbCom sanctions, which can only be modified via appeal to ArbCom itself, and which are functionally the same as policy across much of the wiki. This isn't desirable; when ArbCom creates long-term systems like this, we need a way for the community to eventually assume control of them. We need to go back to treating ArbCom as a court of last resort, not as an eternal dumping ground for everything controversial, and unifying ArbCom and community sanctions creates an opportunity to do so by asking ArbCom to agree to (with the community's agreement to endorse them) convert some of the older existing ArbCom CTOPs into community ones. --Aquillion (talk) 20:51, 22 October 2025 (UTC)[reply]
Yes to both per nom. Consistency is great, and eliminating the byzantine awareness system (where you need an alert every 12 months) is essential. WP:AE is a miracle of a noticeboard (how is the noticeboard with the contentious issues the relatively tame one?), and we as a community should take advantage of ArbCom's offer to let us use it. Best, HouseBlaster (talk • he/they)22:10, 22 October 2025 (UTC)[reply]
Yes to both, and a full-throated "yes" for using AE in particular. The other noticeboards are not fit for purpose with respect to handling CTOP disruption. Vanamonde93 (talk) 22:24, 22 October 2025 (UTC)[reply]
Yes to both – This has been a mess for more than a decade. Harmonising the community and ArbCom general sanctions regimes will cut red tape, and eliminate confusion over which rules apply in any given case. I am also strongly in favour of allowing community sanctions to be enforced at WP:AE. Previously, there were numerous proposals to create a separate board for community enforcement, such as User:Callanecc/Essay/Community discretionary sanctions, but all failed to go anywhere. In my opinion, the most important aspect of community sanctions (as opposed to ArbCom sanctions) is that the community authorises them, and retains control over their governance. Enforcement at AE does nothing to reduce the community's power to enact sanctions; if anything, it will ensure that these regimes are enforced with the same rapidity as ArbCom sanctions. It would be foolish to not take advantage of ArbCom's offer to allow us to use their existing infrastructure. Yours, &c.RGloucester — ☎23:54, 22 October 2025 (UTC)[reply]
Yes to both. I was in favor of this during the March 2024 rfc but was reluctant to push it too hard since I was then on arbcom. I am no longer on arbcom and thus can freely and fully support this thoughtful and wise proposal for the same reasons I hinted at in the previous discussion. Best, Barkeep49 (talk) 02:00, 23 October 2025 (UTC)[reply]
Yes to both, and future changes to either sanction procedure should be considered for both. Not to be unduly repetitive of others above, but the system is more complex than it needs to be. AE as an additional option is a positive. CMD (talk) 04:38, 23 October 2025 (UTC)[reply]
Yes to both. We already have overlapping CSes (Arbcom-imposed) and GSes (community-imposed) - A-A and KURD, at least, where the community chose to impose stricter sanctions on a topic area than ArbCom mandated (in both of those cases, the community chose to ECR the topic area). This has caused confusion for me as an admin a few times, for a regular user it can only be more so. Harmonizing the restrictions, with the only difference being who imposed them, can only make sense. - The BushrangerOne ping only20:02, 23 October 2025 (UTC)[reply]
Yes and Yes - The same procedures should apply to topics that the ArbCom has found to be contentious as to topics which the community has found to be contentious. The differences have only caused confusion. Robert McClenon (talk) 20:56, 23 October 2025 (UTC)[reply]
No I understand what Arbcom is per WP:ARBCOM and it seems to be a reasonably well-organised body with good legitimacy due to it being elected. But what's the community? Per WP:COMMUNITY and Wikipedia community, it seems to be any and all Wikipedians and this seems quite amorphous and uncertain. Asking such a vague community to do something is not sensible. In practice, I suppose the sanctions were cooked up at places like WP:ANI which is a notoriously dysfunctional and toxic forum. That's not a sensible place to get anything done.
I looked at one of these community sanctions as an example, and it was some special measure for conflict about units of measurement in the UK: WP:GS/UKU. Now I'm in the UK and so might easily run afoul of this, but this is the first I heard of this being an especially hot topic. And I've been actively editing for nigh on 20 years. Our general policies about edit-warring, disruption and tendentious editing seem quite adequate for such an issue and so WP:CREEP applies. That sanction was created over 10 years ago and so should be expired rather than harmonised. The other general sanctions concern such topics as Michael Jackson, who died 16 years ago, and that too seems quite dated.
So, I suggest that all the general sanctions be retired. If problems with those topics then recur, fresh sanctions can be established using the new WP:CTOP process and so we'll then all be on the same page.
I will note that policy assigns to the community the primary responsibility to resolve disputes, and allows ArbCom to intervene in serious conduct disputes the community has been unable to resolve (Wikipedia:Arbitration/Policy § Scope and responsibilities) (emphasis added). That is to say, ArbCom's role is to supplement the community when the community's efforts are unsuccessful. I think that's why there should be some harmonized community CTOP process that can be applied for all extant community CTOPs. I understand that it may be time to revisit some of the community-designated CTOPs, which I support – when I was on ArbCom, I was a drafter for the WP:DS2021 initiative which among other things rescinded old remedies from over half a dozen old cases. But that seems to be a different question than whether to harmonize the community structure with ArbCom's. Best, KevinL (aka L235·t·c) 14:32, 29 October 2025 (UTC)[reply]
At this time there is no consensus to lift these sanctions, with a majority opposed. People are concerned that disputes might flare up again if sanctions are removed: Give them an inch and they will take a kilometer ... — User:Sandstein00:00, 15 November 2020 (UTC)
Yes to both - If we have two systems with the same name, we should avoid differences in the rules. I say this because if the rules are different, then a user will need to be aware of who designated an area as a contentious topic before reporting or handling reports. For example, if we had the two systems use the same rules but different reporting pages (with no overlap in which pages can be used), then I expect that users will mistakenly post to the wrong pages. Dreamy Jazz talk to me | my contributions 20:56, 1 November 2025 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Question 3. How should we handle logging of community contentious topics?
Create a new page such as Wikipedia:General sanctions/Log which would only log enforcement actions for community contentious topics (subpages would be years)
Continue logging at each relevant page describing the community contentious topics (Wikipedia:General sanctions/Topic area), and if 2 or 3 are chosen, the page would transclude these relevant pages.
2+3+4 as proposer, one of the problems I do notice is that loading WP:AELOG does take a lot of time because the page has a lot of enforcement actions. The advantage of 2 is having a single page that can be quickly searched. Aasim (話す) 20:42, 22 October 2025 (UTC)[reply]
BTW, except for 1, the other options are not mutually exclusive. If option 1 is chosen, options 2-4 are irrelevant. I am not asking people to pick one and be done; people can choose any combination. Aasim (話す) 21:32, 23 October 2025 (UTC)[reply]
2 > 1 – Both ArbCom and community CT are forms of general sanctions (see my incomplete essay on the subject); the only distinction is who authorises them. For this reason, '3' does not make sense. Eliminating the sprawling log pages that currently exist for community-authorised regimes should be a priority if our goal is to eliminate red tape, therefore '4' does not make sense either. That leaves me with 2, which allows for a centralised log for both forms of sanctions. I am perfectly fine with creating subpages as needed, but centralisation is paramount in my mind. Yours, &c.RGloucester — ☎00:02, 23 October 2025 (UTC)[reply]
In the past, there have been concerns raised about it being clear if the enacting authority is the arbitration committee or the community. Thus I do not feel option 1 is the best choice.
Regarding searching: I feel the typical use case is to search for actions performed within a specific topic area. If necessary, Wikipedia search with a page prefix criterion can be used to search multiple subpages. isaacl (talk) 16:19, 23 October 2025 (UTC)[reply]
2 I am in favor of fewer, larger pages because they are easier to find and to search. If a searcher needs to confirm that something isn't there, for example, fewer pages, even if very large, are much easier to work with. Darkfrog24 (talk) 13:33, 23 October 2025 (UTC)[reply]
1 - in keeping with the spirit for Q1 and Q2, the whole point here is to merge everything into a single system that is simpler to follow. We already have a practice of splitting off subpages when specific sections in the log get too large. signed, Rosguilltalk13:52, 23 October 2025 (UTC)[reply]
2 as a first choice, as centralization is helpful, but the current WP:AELOG is ultimately an ArbCom page and shouldn't have jurisdiction over community sanctions. I agree with Rosguill's point about splitting off subpages, and I presume this would be encouraged to a greater extent here. I could also be convinced by 1 (to avoid an unnecessary transclusion, although it should be made clear that it isn't an ArbCom-only page anymore) or by a temporary 3 (to avoid a lag spike until the main subpages are sorted out). Chaotic Enby (talk · contribs) 17:04, 23 October 2025 (UTC)[reply]
Actually, I'm realizing that 2 doesn't help with centralization compared to 3, and creates a bit of an inconsistency between some topics being directly logged there and others being transcluded. Count 3 as my first choice, with the possibility of a combined log transcluding both for reference. Chaotic Enby (talk · contribs) 19:34, 23 October 2025 (UTC)[reply]
1 > 3 > 4, but my actual preference is to delegate this to a local consensus of those who are involved in implementing this. 1 is my preference, like Rosguill, because centralizing where the existing logs live promotes simplicity and would avoid the need for admins to check which types of CTOPs are which (one goal I have is for the community CTOPs and ArbCom CTOPs to feel almost identical). Not to mention, it would preserve compatibility with tools like WP:SUPERLINKS that check AELOG but not other pages. The biggest hurdle in my mind is that #1 would require ArbCom approval, which I think is likely but not certain (given that ArbCom allows AE for community CTOPS, why not AELOG?). Best, KevinL (aka L235·t·c) 19:29, 23 October 2025 (UTC)[reply]
1 > 2 These should be standardized as much as possible. It's already the most confusing and obfuscated system of policies on Wikipedia; we should strive to eliminate as much confusion and pointless red tape as possible. Apart from where actions are logged, there are now pretty much no practical differences between ArbCom and community CTOPs: they are imposed by different bodies, enforced identically, and logged in different places. I agree with others that these systems should feel identical; this would have the additional advantage of making Aquillion's vague long-term proposal, to have old ArbCom topics "expire" into community ones if deemed no longer pertinent, seem like a realistic option. lp0 on fire()22:49, 10 November 2025 (UTC)[reply]
Comment I understand the functional difference between an AE sanction and AN sanction is that an AE sanction can be removed only by a) the exact same admin who placed it, called the "enforcing admin", or b) a clearly-more-than-half balance of AE admins at an AE appeal, while a sanction placed at AN can be removed by c) any sufficiently convinced admin acting alone. To give an example of how this would change things, I found myself in a situation in which I was indefinitely blocked at AE and then the enforcing admin left Wikipedia, which removed one of my options for lifting a sanction. Some of our fellow Wikipedians will think making it easier to get a sanction lifted is a good thing and others will think it's a bad thing, but we should be clear about that so we can all make our decision. Am I correct about how these changes would affect those seeking to have sanctions removed? Darkfrog24 (talk) 13:31, 23 October 2025 (UTC)[reply]
@Darkfrog24: I think this is incorrect. As it stands now, restrictions imposed under community CTOPs are only appealable to the enforcing administrator or to AN (see, e.g., WP:GS/Crypto, which says Sanctions imposed may be appealed to the imposing administrator or at the appropriate administrators' noticeboard.). Q1 is about aligning the more subtle but still important differences between community CTOPs and ArbCom CTOPs, while Q2 is about adding AE as a place (but not changing the substantive amount of agreement needed) for enforcement requests and appeals. Best, KevinL (aka L235·t·c) 13:43, 23 October 2025 (UTC)[reply]
Comment: Is there any way that we could implement the semi-automated logging process that is used for page protection of CTOPS here? Is there any expectation that if any of these options were chosen, that process would revert to manual? ⇒SWATJesterShoot Blues, Tell VileRat!18:17, 23 October 2025 (UTC)[reply]
Comment – If we are to create a separate log for community-authorised contentious topics as in alternative 3, it should not be a subpage of Wikipedia:General sanctions. 'General sanctions' is a broad category that includes ArbCom sanctions, and also non-contentious-topic remedies such as the extended confirmed restriction. This is a recipe for confusion. Please consider an alternative naming scheme. Yours, &c. RGloucester — ☎ 00:19, 24 October 2025 (UTC)[reply]
The title can always be different. The title I named was just an example title to explain the purpose of the question. Aasim (話す) 01:30, 24 October 2025 (UTC)[reply]
Given this has now passed (aside from the nitty-gritty of logging), does this mean community GSes imposing ECR now conform to Wikipedia:Arbitration Committee/Procedures#Extended confirmed restriction, specifically the portion about Non-extended-confirmed editors may use the "Talk:" namespace only to make edit requests related to articles within the topic area, provided they are not disruptive? Because the fact that, at least previously, that did not apply to community-imposed GSes has tripped me up in the past. - The BushrangerOne ping only23:32, 4 November 2025 (UTC)[reply]
The extended confirmed restriction is a separate kind of general sanction, not part of contentious topics. Nothing in this discussion should apply to community-imposed extended confirmed restrictions. Yours, &c.RGloucester — ☎23:39, 4 November 2025 (UTC)[reply]
Support – Articles that contain obvious evidence of unreviewed AI use are evidence of a competence issue on the part of their creator that is not compatible with the GA process. Having reviewers perform a spot check for obvious signs of AI use will help militate against the recent problem whereby AI-generated articles are being promoted to GA status without sufficient review. Yours, &c.RGloucester — ☎10:08, 26 October 2025 (UTC)[reply]
Support Per nomination. This would not prohibit AI use per se, but would rule out promoting any low effort usage of AI. AI use in this manner could be argued to be a failure of GA criteria 1 and 2 as well, but explicitly stating as such will give a bit more weight to reviewers' decisions. --Grnrchst (talk) 10:45, 26 October 2025 (UTC)[reply]
Oppose. GAs should pass or fail based only and strictly on the quality of the article. If there are AI-generated references then they either support the article text or they don't; if they don't, then the article already fails criterion 2 and the proposal is redundant. If the reference does verify the text it supports then there is no problem. If there are left-over prompts then it already fails criterion 1 and so this proposal is redundant. If the AI-generated text is a copyright violation, then it's already an immediate failure and so the proposal is redundant. If the generated text is rambly, non-neutral, veers off topic, or has similar issues, then it already fails one or more criteria and so this proposal is redundant. Thryduulf (talk) 12:20, 26 October 2025 (UTC)[reply]
As I see it, this proposal as-written is actually quite limited in scope and is not doing anything beyond saving resources. Obvious unreviewed AI use will not meet all criteria, but at the moment a reviewer of the GAN is still expected to do a full review. This proposal if passed would effectively codify that obvious AI is considered (by consensus of users of the GA process) to mean the article has insurmountable issues in its current state and should be worked on first before a full review. Kingsif (talk) 14:07, 26 October 2025 (UTC)[reply]
Oppose per IAWW and Thryduulf. All issues arising from AI use are already covered by other criteria, and there are legitimate uses of AI, which should not be prohibited. Kovcszaln6 (talk) 12:33, 26 October 2025 (UTC)[reply]
Oppose. I agree with this in spirit, but I don't think it would be a useful addition. If a reviewer spots blatant and problematic AI usage (e.g. AI-generated references), almost all would quickfail the article immediately anyway. I can't imagine this proposal saving any additional reviewer time or reducing the handful of flawed articles that slip through that process. But if a nominator used AI for something entirely unproblematic and left an edit summary saying something like "used ChatGPT to change table formatting" or "fixed typos identified by ChatGPT", that would be obvious evidence of LLM usage and yet clearly doesn’t warrant a quickfail. MCE89 (talk) 12:52, 26 October 2025 (UTC)[reply]
I think that would be slightly better, but I still don't really see what actual problem this proposal is trying to solve. If an article consists of unreviewed or obviously problematic LLM output and contains things like fake references, reviewers aren't going to hesitate to quickfail it (and potentially G15 it) already. I don't see any signs that GAN is currently overwhelmed by AI-generated articles that reviewers just don't have the tools to deal with. And given that lack of a clear benefit, I'm more worried about the potential for endless arguments about process rather than content in the marginal cases (e.g. Can an article be quickfailed if the creator discloses that they used ChatGPT to help copyedit? What if they say they've manually verified and rewritten the LLM output? What is the burden of proof to say that LLM usage is "obvious", e.g. could I quickfail an article solely based on GPTZero?) MCE89 (talk) 15:11, 26 October 2025 (UTC)[reply]
About the problems, I have a lot of thoughts and am happy to discuss; perhaps we should move this to the section below? I also assume and hope people take obvious to mean obvious: if it’s marginal, it’s not obvious. Genuine text/code leftovers from copy-pasting LLM output are obvious; having to ask a different AI isn’t. Kingsif (talk) 15:29, 26 October 2025 (UTC)[reply]
Oppose largely per Thryduulf, except that I don't believe that AI content necessarily violates criterion 1. AI style is often recognisable, but if it's well written then I wouldn't care, and we should instead investigate whether the sources were hallucinated. Fake references (as opposed to incomplete/obscure/not readily available references) should be an instafail reason. Szmenderowiecki (talk) 13:13, 26 October 2025 (UTC)[reply]
Support Per my comments in discussion and here. I also see no objection that couldn’t be quelled by the proposed text already having the qualifier “obvious”: the proposal includes benefit of the doubt, even if I personally would take it much further. Kingsif (talk) 14:12, 26 October 2025 (UTC)[reply]
Support. On a volunteer-led project, it is an insult to expect a reviewer to engage with the extruded output of a syntax generator and not the work of a human volunteer. I am not interested in debating this; please don't ping me to explain that I'm being a Luddite in holding this view. ♠PMC♠ (talk) 14:52, 26 October 2025 (UTC)[reply]
Oppose per MCE89. LLM use isn't necessarily problematic (even if it often is), and the proposed wording would discourage people from disclosing LLM use in their edit summaries. Anne drew (talk · contribs) 15:28, 26 October 2025 (UTC)[reply]
Weak support -- I did not realize this discussion had been ongoing; I noped out because I was frankly way too exhausted to Sisypheanly re-explain things I had already tried to explain. Anyway, I don't object to these criteria per se but this is a really low bar. What I would really support is mandatory disclosure of any AI use, because if AI was used then the spot-checking that is required in GA review is not going to be nearly enough. Nor is the problem really fake sources anymore; the problem is "interpretations" of sources that might not seem worth checking if you don't know what AI text sounds like, but if you do know what AI text sounds like, are huge blaring alarms that the text is probably wrong. Here's an example (albeit for a Featured Article and not a Good Article). All the sources were real, but the text describing the sources was fabricated. And it took me about 15 minutes to zero in on the references that were likely to have issues because I know how LLMs word things; without AI disclosure, reviewers are likely to spot-check the wrong things (as happened here). Gnomingstuff (talk) 17:34, 26 October 2025 (UTC)[reply]
Weak oppose While I fully agree with the intent of this proposal, in practice I am concerned that this is subject to misuse by labeling anything as "AI". I agree with Thryduulf and others that any sort of poorly done AI use (which is almost all of it) will already be failable per the existing GA criteria. I share others' concern about the proliferation of AI generated articles and reviews but I'm not convinced this is the solution. Trainsandotherthings (talk) 18:37, 26 October 2025 (UTC)[reply]
Weak oppose per my comments at WT:GAN. I also agree that LLM-generated articles are problematic, but the existing criteria already cover most of what's proposed - for instance, evidence of persistent failed verification is already a valid reason to quickfail. I'm concerned that a reviewer would use an LLM detector to check an article, the detector would incorrectly say that the article is AI, and the reviewer would then fail the article on that basis. AI detectors are notoriously unreliable - you can run a really old document, like the United States Declaration of Independence, through an AI detector to see what I'm talking about. (Edit - I would support changing WP:GACR criterion 3 - It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags - to list {{AI-generated}} as an example of a template that would merit a quickfail, since AI articles can already be quickfailed under that criterion. 13:01, 27 October 2025 (UTC)) Epicgenius (talk) 20:18, 26 October 2025 (UTC)[reply]
They have high numbers of both false positives and false negatives. See, for instance, this study: "Looking at the GPT 3.5 results, the OpenAI Classifier displayed the highest sensitivity, with a score of 100%, implying that it correctly identified all AI-generated content. However, its specificity and NPV were the lowest, at 0%, indicating a limitation in correctly identifying human-generated content and giving pessimistic predictions when it was genuinely human-generated. GPTZero exhibited a balanced performance, with a sensitivity of 93% and specificity of 80%, while Writer and Copyleaks struggled with sensitivity. The results for GPT 4 were generally lower, with Copyleaks having the highest sensitivity, 93%, and CrossPlag maintaining 100% specificity. The OpenAI Classifier demonstrated substantial sensitivity and NPV but no specificity." The link you provided says "annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text". This is about human writers detecting AI, not AI detectors detecting AI. That is not what I am talking about. Other studies like this one state that human reviewers have significant numbers of false positives and false negatives when detecting AI: "In Gao et al.’s study, blind human reviewers correctly identified 68% of the AI-generated abstracts as generated and 86% of the original abstracts as genuine. However, they misclassified 32% of generated abstracts as real and 14% of original abstracts as generated." – Epicgenius (talk) 02:57, 27 October 2025 (UTC)[reply]
The study also contains a chart comparing the performance of automatic AI detectors such as Pangram, GPTZero, and Binoculars. As you would have noticed if you read it fully. Gnomingstuff (talk) 16:03, 27 October 2025 (UTC)[reply]
If you'd read to the conclusion you'd see "While AI-output detectors may serve as supplementary tools in peer review or abstract evaluation, they often misclassify texts and require improvement." The limitations section also notes that paraphrasing the AI output significantly decreases the detection rate. This clearly indicates they are not fit for the purpose they would be used for here - especially when the false positive rate is sometimes over 30%. We absolutely cannot afford to tell a third of users that their submission was rejected because they used AI when they didn't actually use AI. Thryduulf (talk) 17:42, 27 October 2025 (UTC)[reply]
Support It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems. Even without a rule, if you see evidence of AI, say so in the review, so that everyone can see the AI rabbit hole has been found. Nobody is obligated to go down that warren; note it and pass it by. Heck, make some warning templates or essays, so future reviewers understand their obligation. It should take 10+ hours to correctly verify an AI article; it requires reading all sources and understanding the topic in depth. -- GreenC 20:52, 26 October 2025 (UTC)[reply]
"It's not fair to submit GA checkers to the noxious task of checking everything in a long detailed article for AI problems" – they don't have to at the moment. If there are problems, the review is already failed regardless of whether or not the problems result from AI use. If there are no problems then whether AI was used is irrelevant. Thryduulf (talk) 20:57, 26 October 2025 (UTC)[reply]
Also, I should note that if a reviewer finds so many issues that the article requires 10+ hours to fix, it is already acceptable to quickfail based on these other issues. GA is supposed to be a lightweight process; reviewers already can fail articles if they find things like failed verification or issues needing maintenance banners, and determine that the issues can't be reasonably fixed within a week or so. The proposed GA criterion is well-intentioned, but I think focusing on the means of writing the articles, rather than the ends, is not the correct way to go about it. Epicgenius (talk) 22:40, 26 October 2025 (UTC)[reply]
With AI you don't even know errors exist. It took me 7 days once to find all the problems in an AI-generated article. It turned out to have a reasonable-sounding but nationalistic bent, supported by errors of omission. How do you know this without research on the topic? This is why so many are against AI: it's incredibly difficult to debug. Normally a nationalistic writer is easy to spot, but AI is such a good liar that not even the operators realize what it is doing. Not to say AI is impossible to use correctly, with a skilled, disciplined, and intellectually honest operator. — GreenC 23:41, 26 October 2025 (UTC)[reply]
I agree, and based on some known AI model biases, any controversial topic (designated or just by common sense) should probably have AI use banned completely. Kingsif (talk) 23:50, 26 October 2025 (UTC)[reply]
I do see, and agree with, the point that you would have to very carefully examine all claims in an article that is suspected of containing AI content. However, WP:GAQF criterion 3 (It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags ) already covers this. If an article is suspected of containing AI, and thus deserves (or has) {{AI-generated}}, it is already eligible for a quick fail under QF criterion 3. – Epicgenius (talk) 02:53, 27 October 2025 (UTC)[reply]
Weak oppose While I am against the use of AI in GA and the GANR process, I think this is a somewhat misguided proposal, as it covers things that would already fall under the quickfail criteria and does not actually identify the scope of the issues (i.e., what is considered obvious evidence of AI use?). I would be able to support a non-redundant and more detailed proposal, but it would need to be more fleshed out than this. IntentionallyDense (Contribs) 02:18, 27 October 2025 (UTC)[reply]
So you want people to quick-fail a nomination on the basis of @Headbomb's script, about which the documentation for the script says it "is not necessarily an issue ("AI, find me 10 reliable sources about Pakistani painter Sadequain Naqqash")". That sounds like a bad idea to me. WhatamIdoing (talk) 06:20, 27 October 2025 (UTC)[reply]
Your wording says "6. It contains obvious evidence of LLM use, such as AI-generated references or remnants of AI prompt." This tells me that you can’t have AI prompts in your writing and no AI-generated references. Okay… so both of those would be covered by the current criteria. It didn’t mention the Headbomb script. And I definitely would not support any quickfail criterion that relies on a user script, especially when the script states “This is not a tool to be mindlessly used.” Also, on what basis from the HB script are we quick failing?
The current proposal tells me nothing about what is considered suspicious for AI usage outside of the existing quickfail criteria. It gives me no guidance as a reviewer as to what may be AI unless it is blatantly obvious. IntentionallyDense (Contribs) 13:18, 27 October 2025 (UTC)[reply]
I agree. We do have some pretty solid parameters around what is a red flag for AI, but I believe any proposal around policy/guidelines for AI needs to incorporate those and lay out what that looks like before we take action on it. I would just like an open conversation on what editors think the signs of AI use are; once we gain some consensus around indications of AI, it will be a lot easier to implement policy on how to deal with those indications.
My main issue with this proposal is that it completely skipped that first step of gaining consensus about what the scope of the problem is and jumped to implementing measures to resolve said problem that we have not properly reached consensus on. IntentionallyDense (Contribs) 20:26, 28 October 2025 (UTC)[reply]
Oppose. If a GAN is poorly written, it fails the first criterion. If references are made up, it fails the second criterion. If the associated prose does not conform with the references, then it fails the second criterion. We shouldn't be adding redundant instructions to the good article criteria. I don't want new reviewers to be further intimidated by a long set of instructions. Steelkamp (talk) 04:53, 27 October 2025 (UTC)[reply]
Oppose. As I pointed out in other discussions, and as many have already pointed out here, GA criteria should be focused on the result, not the process. But besides that, I see a high level of anti-AI sentiment in those discussions and fear that, if those proposals are approved, they will be abused. Cambalachero (talk) 13:16, 27 October 2025 (UTC)[reply]
Support. Playing whack-a-mole with AI is a problematic time sink for good-faith Wikipedia editors because of the disparity in how much time it takes an AI-using editor to make a mess and how much time it takes the good-faith editors to figure it out and clean it up. This is especially problematic in GA where even in the non-AI cases making a review can be very time consuming with little reward. The proposal helps reduce this time disparity and by doing so helps head off AI users from gaming the system and clogging up the nomination queue, already a problem. —David Eppstein (talk) 17:29, 27 October 2025 (UTC)[reply]
Please rewrite that. By writing "good-faith Wikipedia editors" to mean editors who do not use AI, you are implying that those who do are acting in bad faith. Cambalachero (talk) 17:37, 27 October 2025 (UTC)[reply]
"Consensus-abiding Wikipedia editors" and "editors who either do not know of or choose to disrespect the emerging consensus against AI content" would be too unwieldy. But I agree that many new editors have not yet understood the community's distaste for AI and are using it in good faith. Many other editors have heard the message but have chosen to disregard it, often while using AI tools to craft discussion contributions that insist falsely that they are not using AI. I suspect that the ones who have reached the stage of editing where they are making GA nominations may skew more towards the latter than the broader set of AI-using editors. AGF means extending an assumption of good faith towards every individual editor unless they clearly demonstrate that assumption to be unwarranted. It does not mean falsely pretending the other kind of editor does not exist, especially in a discussion of policies and procedures intended to head off problematic editing. —David Eppstein (talk) 18:31, 27 October 2025 (UTC)[reply]
Support. People arguing that any article containing such things would ultimately fail otherwise are missing the point. The point is to make it an instant failure so further time doesn't need to be wasted on it - otherwise, people would argue e.g. "oh that trace of a prompt / single hallucinated reference is easily fixed, it doesn't mean the article as a whole isn't well-written or passes WP:V. There, I fixed it, now continue the GA review." One bad sentence or one bad ref isn't normally an instant failure; but in a case where it indicates that the article was poorly generated via AI, it should be, since it means the entire article must be carefully reviewed and, possibly, rewritten before GA could be a serious consideration. Without that requirement, large amounts of time could be wasted verifying that an article is AI slop. This is especially true because the purpose of existing generative AI is to create stuff that looks plausible at a glance - it will often not be easy to demonstrate that it is a long way from meeting any one of the six good article criteria, wasting editor time and energy digging into material that had little time and effort put into it in the first place. That's not a tenable situation; once there is evidence that an article was badly generated with AI, the correct procedure is to immediately terminate the GA assessment to avoid wasting further time, and only allow a new one once there is substantial evidence that the problem has been addressed by in-depth examination and improvement. Determining whether an article should pass or fail based only and strictly only on the quality of the article is a laborious, time-intensive process; it is absolutely not appropriate to demand that an article be given that full assessment once there's a credible reason to believe that it's AI slop. That's the entire point of the quickfail criteria - to avoid wasting everyone's time in situations where a particular easily determined criterion means it is glaringly obvious that the article won't pass. --Aquillion (talk) 19:46, 27 October 2025 (UTC)[reply]
Support in principle, although perhaps I'd prefer such obvious tells to be in the same criterion as the copyvio one. Like copyvio, the problems might not be immediately apparent, and like copyvio, the problems can be a headache to fix. LLM problems are possibly even much more of a timesink; checking through and potentially cleaning up LLM stuff is not a good use of reviewer time. This QF as proposed will only affect the most blatant signals that LLM text was not checked, which has its positives and negatives but is worth noting when thinking about the proposal. CMD (talk) 01:34, 28 October 2025 (UTC)[reply]
Support. Deciding on the accuracy and relevance of every LLM's output is not sustainable on article talkpages or in articles. Sure, it could produce something passable, but there is no way to be sure without unduly wasting reviewer time. They're designed to generate text faster than any human being can produce or review it and designed in such a way as to make fake sources or distorted information seem plausible.--MattMauler (talk) 19:34, 28 October 2025 (UTC)[reply]
Oppose This proposal is far too broad. This would mean that an article with a single potentially hallucinated reference (that may not have even been added by the nominator) would be quickfailed. Nope. voorts (talk/contributions) 01:19, 29 October 2025 (UTC)[reply]
That's why I said potentially hallucinated. I'm worried this will be interpreted by some broadly and result in real, but hard to find, sources being deemed hallucinated. Also, sometimes editors other than the nominator edit an article in the months between nomination and review. We shouldn't penalize such editors with a quickfail over just one reference that they may not have added. voorts (talk/contributions) 19:34, 2 November 2025 (UTC)[reply]
"Whoever wants to know a thing has no way of doing so except by coming into contact with it, that is, by living (practicing) in its environment. ... If you want knowledge, you must take part in the practice of changing reality. If you want to know the taste of a pear, you must change the pear by eating it yourself.... If you want to know the theory and methods of revolution, you must take part in revolution. All genuine knowledge originates in direct experience." – Mao Zedong
Editors who have never done a GA review or who have done very few should consider that they may not have adequate knowledge to know what GAN reviewers want/need as tools. It seems to me like a lot of support for this is a gut reaction against any AI/LLM use, and I don't think that's a good way to make rules. voorts (talk/contributions) 15:48, 29 October 2025 (UTC)[reply]
I like the way you’ve worded this, as it is my general concern as well. While I’m not too high up on that list, I’ve done 75-ish reviews and have never encountered AI usage. I know it exists and do see it as a problem; however, I don’t feel it deserves such a hurried reaction to create hard and fast rules. I would much prefer we take the time to properly flesh out a plan to deal with these issues, one that involves community input from a range of experiences and reviewers on the scope of the problem, how we should deal with it, and to what extent. IntentionallyDense (Contribs) 20:32, 29 October 2025 (UTC)[reply]
Yes. Recently, I've noticed a lot of editors rushing to push through new PAGs without much discussion or consideration of the issues beforehand. It's not conducive to good policymaking. voorts (talk/contributions) 20:35, 29 October 2025 (UTC)[reply]
I echo this sentiment. In my 100+ reviews done in the last year I have only had a few instances where I suspected AI use, and I can't think of any that had deep rooted issues clearly caused by AI. IAWW (talk) 22:10, 29 October 2025 (UTC)[reply]
This rule would’ve been useful years ago, when we had users who really wanted to contribute but couldn’t write well enough: their primitive chatbot text was poor and they were unable to fix it, and keeping a review open to go through everything was the response, because they didn’t want it closed and insisted it just needed work. As gen AI use is only increasing, addressing the situation before it gets that bad is a good thing. Kingsif (talk) 19:13, 30 October 2025 (UTC)[reply]
Cool, but I am easily highest up that list (which doesn’t count the reviews I did before it, or took over after) of everyone in this discussion, so your premise is faulty. Kingsif (talk) 19:07, 30 October 2025 (UTC)[reply]
I don't think my premise is faulty. I never said everyone who does GAN reviews needs to think the same way, nor do I believe that, and I see that you and other experienced GAN reviewers disagree with me. My point was that editors who have never done one should consider whether they have enough knowledge to make an informed opinion one way or the other. voorts (talk/contributions) 19:12, 30 October 2025 (UTC)[reply]
While you didn’t speak in absolutes, your premise was based in suggesting the people who disagree with you aren’t aware enough. Besides being wrong, you must know it was unnecessary and rather unseemly to bring it up in the first place: this is a venue for everyone to contribute. Kingsif (talk) 19:19, 30 October 2025 (UTC)[reply]
That wasn't my premise. I just told you what my premise is and I stand by it. I felt like it needed to be said in this discussion because AI/LLM use is a hot button issue and we should be deliberative about how we handle it on wiki. If editors who have never handled a GAN review want to ignore me, they can. As you said, anyone can participate here. voorts (talk/contributions) 19:51, 30 October 2025 (UTC)[reply]
Forgive me for disagreeing with your point, then, but I don’t think it even really requires editing experience in general to have an opinion on “should we make people waste time explaining why gen AI content doesn’t get a Good stamp or just let them say it doesn’t” Kingsif (talk) 20:11, 30 October 2025 (UTC)[reply]
Oppose. The proposal lacks clarity in definitions and implementation and the solution is ill-targeted to the problems raised in this and the preceding discussion. Editors have stated that the rationale for new quick fail criteria is to save time. On the other hand, editors have said it takes hours to verify hallucinated references and editors disagree vehemently about the reliability of subjective determinations of AI writing or use of AI detectors. Others have stated that it is already within the reviewer's purview to quick fail an article if they determine that too much time is required to properly vet the article. It is not clear how reviewers will determine that an article meets the proposed AI quick fail criterion, how long this will take, or that a new criterion is needed to fail such articles. Editors disagree about which signs of AI writing are "obvious" and as to whether all obvious examples are problematic. The worst examples would fail, anyway, and seemingly without requiring hours to complete the review so again it is unclear that this new criterion addresses the stated problem. Editors provided examples of articles with problematic, (allegedly) AI-generated content that have passed GA. New quick fail criteria would not address these situations where the reviewer apparently did not find the article problematic while another felt the problems were "obvious". Reviewers who are bad at detecting AI writing or don't verify sources or whatever the underlying deficit is won't invoke the new quick fail criterion and won't stop AI slop from attaining GA status.—Myceteae🍄🟫 (talk) 01:42, 29 October 2025 (UTC)[reply]
Support in the strongest possible terms. This is half practical, and half principle: the principle being that LLM/AI has no place on Wikipedia. Yes, there may be a few edge cases where AI is useful on Wikipedia. But one good apple in a barrel of bad apples does not magically make the place that shipped you a barrel of bad apples a good supplier. For people who want an LLM-driven encyclopedia, Grokipedia is thataway →. For people who want an encyclopedia actually written by and for human usage, the line must be drawn here. - The Bushranger One ping only 01:53, 29 October 2025 (UTC)[reply]
Oppose per IntentionallyDense and because detecting AI generation isn't always "obvious", and because the nom's proposed method for detecting LLM use to generate the article's contents will also flag people who use (e.g.) ChatGPT as a web search engine without AI generating even a single word in the whole article. Also: if you want to make any article look very suspicious, then spam ?utm_source=chatgpt.com at the end of every URL. The "AI detecting" script will light up every source on the page as being suspicious, because it's not actually detecting AI use; it's detecting URLs with some referral codes. I might support adding {{AI generated}} to the list of other QF-worthy tags. WhatamIdoing (talk) 02:09, 29 October 2025 (UTC)[reply]
Oppose. If it's WP:G15-level, G15 it (no need to quickfail). Otherwise, we shouldn't go down the rabbit hole of unprovable editor behaviour and should focus on the actual quality of the article in front of us. If it has patently non-neutral language or several things fail verification, it can already be quick-failed as being a long way from the criteria. ~ L 🌸 (talk) 07:01, 29 October 2025 (UTC)[reply]
Oppose per MCE89. If you use an LLM to generate text and then use it as the basis for creating good, properly verified content, who cares? It's not as if a reviewer has to check every single citation — if you find one that's nonexistent, that alone should be sufficient to reject the article. Stating "X is Y"<ref>something</ref>, when "something" doesn't say so or doesn't even exist, is a hoax, and any hoax means that the article is a long way from meeting the "verifiable with no original research" criterion. And if we encounter "low effort usage of AI", that's certainly not going to pass a GA review. And why should something be instantly failed just because you believe that it's LLM-generated? Solidly verifying that something is automatically written — not just a high suspicion, but solidly demonstrating — will take more work than checking some references, and as Whatamidoing notes, it's very difficult to identify LLM usage conclusively; we shouldn't quick-fail otherwise good content just because someone incorrectly thinks that it was automatically written. I understand that LLMs tend to use M-dashes extensively. I've always used them a lot more than the average editor does; this was the case even when I joined Wikipedia 19 years ago, long before LLMs were a problem this way. Nyttend (talk) 10:44, 29 October 2025 (UTC)[reply]
Support per PMC, David Eppstein and Aquillion. A lot of the opposes look to me like they are completely missing the point of a useful practical measure over irrelevant theoretical concerns. I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria. Choucas0 🐦⬛⋅💬⋅📋 15:25, 29 October 2025 (UTC)[reply]
"I also do find it absolutely insulting to not give reviewers every possible tool to deal with this trash, making them waste precious time and effort to needlessly satisfy another of the existing criteria." I've reviewed a lot of GAs and oppose this because it's vague and a solution in search of a problem. I see that you've completed zero GAN reviews. voorts (talk/contributions) 15:44, 29 October 2025 (UTC)[reply]
You are entitled to your opinion, but so am I, and I honestly do not see what such a needlessly acrimonious answer is meant to achieve here. The closer will be free to weigh your opposition higher than my support based on experience, but in the meantime that does not entitle you to gate-keep and belittle views you disagree with because you personally judge them illegitimate. Choucas0 🐦⬛⋅💬⋅📋 15:58, 29 October 2025 (UTC)[reply]
You are entitled to your opinion. But when your opinion is based on the fact that something is insulting to a group to which I belong, I am entitled to point out that you're not part of that group and that you're not speaking on my behalf. I don't see how it's acrimonious or gate-keep[ing] or belittl[ing] to point out that fact. voorts (talk/contributions) 16:06, 29 October 2025 (UTC)[reply]
That is not what my opinion is based on (the first half of my comment pretty clearly is), and I did not mean to speak on anyone's behalf; I apologize if it was not clearer, since it is something that I aim to never do. I consider being exposed to raw LLM output insulting to anyone on this site, so I hope what I meant is clearer now. On the other hand, your comment quoting Mao Zedong immediately after your first answer to me clearly shows that you do intend to gate-keep this discussion at large, so you will forgive me for being somewhat skeptical and not engaging further. Choucas0 🐦⬛⋅💬⋅📋 16:31, 29 October 2025 (UTC)[reply]
I'm not sure how pointing out that editors should think before they opine on something with which they have little to no experience is a form of gatekeeping. That's why I didn't say "those editors can't comment" in this discussion. It's a suggestion that people stop and think about whether they actually know enough to have an informed opinion. voorts (talk/contributions) 16:44, 29 October 2025 (UTC)[reply]
I've seen a couple people suggest this, and... I don't really get how this is different at all? Anything under the proposed criterion can be tagged as AI-generated already, this would just be adding an extra step. Gnomingstuff (talk) 20:25, 31 October 2025 (UTC)[reply]
Oppose. If it has remnants of a prompt, that's already WP:G15. If the references are fake, that's already WP:G15. If it's not that bad, further review is needed and it shouldn't be QF'd. If AI-generated articles are being promoted to GA status without sufficient review, that means the reviewer has failed to do their job. Telling them their job is now also to QF articles that have signs of AI use won't help them do their job any better - they already didn't notice it was AI-generated. -- asilvering (talk) 15:56, 29 October 2025 (UTC)[reply]
Oppose. The article should be judged on its merits and its quality, not the manner or methods of its creation. The judgment should be based only on its quality. Any AI-generated references will fail criterion 2. If the AI-generated text is a copyright violation, it would be an instant failure as well. We don't need to write up new rules for things that are forbidden in the first place anyway. Another concern for me is the term "obvious". While there may be universal agreement that some AI slop is obviously AI ("This article is written for your request...", "Here is the article..."), some might not be obvious to other people. The use of em-dashes might not be an obvious sign of AI use, as some ESL writers use them as well. The term "obvious" is vague and will create problems. Obvious AI slop can be dealt with via G15 as well. ✠ SunDawn ✠ Contact me! 02:55, 30 October 2025 (UTC)[reply]
Support - too much junk at this point to be worthwhile. Readers come here exactly because it is written by people and not Grokipedia garbage. We shouldn't stoop to that level. FunkMonk (talk) 13:38, 30 October 2025 (UTC)[reply]
Support If there is an obvious trace of LLM use in the article and you are the creator, then you have no business being anywhere near article creation. If you are the nominator, then you have failed to apply a basic level of due diligence. Either way the article will have to be gone over with a fine comb, and should be removed from consideration. --Elmidae (talk · contribs) 13:54, 30 October 2025 (UTC)[reply]
Support GAN is not just a quality assessment – it also serves as a training ground for editors. LLM use undermines this; using LLMs just will not lead to better editors. As a reviewer, I refrain from reading anything that is potentially AI generated, as it is simply not worth my time. I want to help actual humans with improving their writing; I am not going to pointlessly correct the same LLM mistakes again and again, which is entirely meaningless. LLM use should be banned from Wikipedia entirely. --Jens Lallensack (talk) 15:59, 30 October 2025 (UTC)[reply]
Oppose. The Venn diagram crossover of "editors who use LLMs" and "editors who are responsible enough to be trusted to use LLMs responsibly" is incredibly narrow. It would not surprise me if 95% of LLM usage shouldn't merely be quickfailed, but actively rolled back. That said, just because most editors cannot be trusted to use it properly does not mean it is completely off the table - using an LLM to create a table in source markup from given input is fine, say. Additionally, AI accusations can become a "witch hunt" where, just because an editor's writing style includes m-dashes or bold, it gets an AI accusation - even though real textbooks may often also use bolding and m-dashes and everything too! If a problematic LLM article is found, it can still be quick-failed on criterion 1 (if the user wrote LLM-style rather than Wikipedia-style) or criterion 2 (if the user used the LLM for content without triple-verifying everything to real sources they had access to). We don't need a separate criterion for those cases. SnowFire (talk) 18:40, 30 October 2025 (UTC)[reply]
Oppose – either the issues caused by AI make an article a long way from meeting any one of the six good article criteria, in which case QF1 would apply, or they do not, in which case I believe a full review should be done. With the current state of LLMs, any article in the latter category will be one that a human has put significant work into. Some editors would dislike reviewing these nominations, but others are willing; I think making WP:LLMDISCLOSE mandatory would be a better solution. jlwoodwa (talk) 04:20, 31 October 2025 (UTC)[reply]
But wouldn't those reviewers that are possibly willing to review an LLM generated article be primarily those that use LLMs themselves, have more trust in them, and probably even use them for their review? A situation where most LLM-generated GAs are reviewed by LLMs does not sound healthy. --Jens Lallensack (talk) 12:00, 31 October 2025 (UTC)[reply]
LLM usage is a scale. It is not as black-and-white as those who use LLMs vs those who don't. I am of the opinion that LLMs should only be used in areas where their error rate is less than humans. In my opinion LLMs pretty much never write adequate articles or reviews, yet they can be used as tools effectively in both. IAWW (talk) 13:22, 31 October 2025 (UTC)[reply]
Oppose. GA is about assessing the quality of the article, not about dealing with prejudice toward any individual or individuals. If the article is bad (poorly written, biased, based on rumour rather than fact, with few cites to reliable sources), it doesn't matter who has written it. Equally, if an article is good (well written, balanced, factual, and well cited to reliable sources), it doesn't matter who has written it, nor what aid(s) they used. Let's assess the content not the contributor. SilkTork (talk) 12:15, 31 October 2025 (UTC)[reply]
Oppose. Focuses too much on the process rather than on the end result. Also, the vagueness of 'obvious' lays the ground for after-the-event arguments on such things as "I already know this editor uses LLMs in the background; the expression 'stands as a ..' appears, and that's an obvious LLM marker". MichaelMaggs (talk) 18:20, 31 October 2025 (UTC)[reply]
Support GA is a mark of quality. If you read something and you can obviously tell it is AI, that does not meet our standards of quality. Florid language, made-up citations, obvious formatting errors a human wouldn't make, whatever it is that indicates clear AI use, that doesn't meet our standards. Could we chalk that up to failing another criterion? Maybe. But it's nice to have a straightforward box to check to toss piss-poor AI work out, and to discourage the poor use of AI. CaptainEek Edits Ho Cap'n! ⚓ 19:34, 2 November 2025 (UTC)[reply]
Support In my view, if someone is so lazy that they generate an entire article to nominate without actually checking if it complies with the relevant policies and guidelines, then their nomination is not worth considering. Reviews are already a demanding process, especially nowadays. Why should I or anyone else put in the effort if the nominator is not willing to also put in the effort? Lazman321 (talk) 03:10, 3 November 2025 (UTC)[reply]
This proposal would impact those people, but it would also speedily fail submissions by people who use (or are suspected of using) LLMs but who do put in the effort to check that the LLM output complies with all the relevant policies and guidelines. For example:
Editor A uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support but doesn't remove the associated LLM metadata from the URL. This nomination is speedily failed, despite being unproblematic.
Editor B uses an LLM to find a source, verifies that that source exists, is reliable, and supports the statement it is intended to support, and removes the associated LLM metadata from the URL. This nomination is speedily failed if someone knows or suspects that an LLM was used, it is accepted if someone doesn't know or suspect LLM use, despite the content being identical and unproblematic.
Editor D finds a source without using an LLM, verifies that that source exists, is reliable, and supports the statement it is intended to support. This nomination is accepted, even though the content is identical in all respects to the preceding two nominations.
Editor D adds a source, based on a reference in an article they don't know is a hoax without verifying anything about the source. The reviewer AGFs that the offline source exists and does verify the content (no LLMs were used so there is no need to suspect otherwise) and so the article gets promoted.
Whoops, the second should obviously be Editor E (I changed the order of the examples several times while writing it, obviously I missed correcting that). Thryduulf (talk) 01:23, 4 November 2025 (UTC)[reply]
Oppose "Obvious" is subjective, especially if AI chatbots become more advanced than they are now and are able to speak in less stilted language. Furthermore, either we ban all AI-generated content on Wikipedia, or we allow it anywhere, this is just a confusing half-measure. (I am personally in support of a total ban, since someone skilled enough to proofread the AI and remove all hallucinations/signs of AI writing would likely just write it from scratch, it doesn't save much time). Or if it did, they'd still avoid it out of fear of besmirching their reputation given the sheer amount of times AI is abused. ᴢxᴄᴠʙɴᴍ (ᴛ) 11:31, 3 November 2025 (UTC)[reply]
A total ban of AI has not gained consensus, in part because there are few 'half-measures' in place that would be indicative that there is a widespread problem. The AI image ban came only after a BLP image ban, for example. CMD (talk) 11:53, 3 November 2025 (UTC)[reply]
Adding hypocritical half-measures just to push towards a full ban would be "disrupting Wikipedia to make a point". As long as it's allowed, blocking it in GAs would make no sense. It's also likely that unedited AI trash will be caught by reviewers anyway because it's incoherent, even before we get to the AI criterion. ᴢxᴄᴠʙɴᴍ (ᴛ) 15:46, 4 November 2025 (UTC)[reply]
I'm not sure where the hypocrisy is in the proposal. Whether reviewers will catch unedited AI trash is also not affected by the proposal, the proposal provides a route for action following the catch of said text. CMD (talk) 16:01, 4 November 2025 (UTC)[reply]
Support – I think at some point LLMs like to cite Wikipedia whenever they spit out an essay or any kind of info on a given topic. Then an editor will paste this info into the article, which the AI will cite again, and Wikipedia articles will basically end up ouroboros'd. User:shawtybaespade (talk) 12:01, 3 November 2025 (UTC)[reply]
There is extensive explanation above of why this is not a good proposal, so this comment just indicates you haven't read anything of the discussion, which is something for the closer to take note of. Thryduulf (talk) 21:25, 3 November 2025 (UTC)[reply]
I urge everyone not to make inferences about what others have read. The wide diversity of opinions makes it clear that different editors find different arguments compelling, even after reading all of them. isaacl (talk) 23:58, 3 November 2025 (UTC)[reply]
If Stifle had read and thought about any of the comments on this page it would be extremely clear that it is not "obviously" a sensible idea. Something that is "obviously" a sensible proposal does not get paragraphs of detailed explanation about why it isn't sensible from people who think it goes too far and from those who think it doesn't go far enough. Thryduulf (talk) 01:27, 4 November 2025 (UTC)[reply]
Oppose I feel the scope that this criterion would cover is already covered by the other criteria (see Thryduulf's !vote). Additionally, I am concerned that this will raise false positives for those whose writing style is too close to what an LLM could generate. Gramix13 (talk) 23:02, 3 November 2025 (UTC)[reply]
Grokipedia is uncontrollable AI slop where no one can control the content (except for Elon Musk and his engineers). Wikipedia's current rules are enough to stop such a travesty without adding this quickfail category. GA criteria #1 and #2 are more than enough to stop the AI slop. G15 is still there as well. No need to put rules on top of other rules. ✠ SunDawn ✠ Contact me! 04:37, 4 November 2025 (UTC)[reply]
Oppose For the statements made above and in the discussion that the failures of AI (hallucination) are easily covered by criteria 1 and 2. But, additionally, because I am not confident that AI is easily detected. AI-detector tools are huge failures, and my own original works on other sites have been labeled AI in the past when they were not. So I personally have experience being accused of using AI when I know my work is original, all because I use em-dashes. And since AI is only going to improve and become even harder to detect, this criterion is most likely going to be used to give false confidence to over-eager reviewers ready to quick-fail based on a hunch. Terrible idea.--v/r - TP 01:40, 4 November 2025 (UTC)[reply]
Support after consideration. I do not love how the guideline is currently written - I think all criteria for establishing "obvious" LLM use should be defined. However, I would rather support and revise than oppose. Seeing multiple frequent GA reviewers !vote support also suggests there is a gap with the current QF criteria. NicheSports (talk) 04:58, 4 November 2025 (UTC)[reply]
Support. The fact that people are starting to write like AI/bots/LLMs means that "false positives" will, in some cases, be detecting users who are too easily influenced by what they are reading. Let's throw those babies out with the bathwater. Abductive (reasoning) 05:16, 4 November 2025 (UTC)[reply]
It's literally the opposite of that. LLMs and GenAI are trained on human writing. They mimic human writing, not the other way around. And are you suggesting banning users for writing in similar prose to the highly skilled published authors that LLMs are trained on? What the absolute fuck?!?--v/r - TP 15:21, 4 November 2025 (UTC)[reply]
LLMs are trained on highly skilled published authors? Pull the other one, it's got bells on. I didn't know highly skilled published authors liked to delve into things with quite so many emojis. Cremastra (talk·contribs) 15:26, 4 November 2025 (UTC)[reply]
Yeah, I know that. Dial down the condescension. But they're trained on all published works, including plenty of junk scraped from the internet. Most published works aren't exactly Terry Pratchett-quality either. Cremastra (talk·contribs) 00:56, 5 November 2025 (UTC)[reply]
You want me to dial down the condescension on a request that anyone whose prose is similar to that of the material the AI is trained on, including published works, be banned? Did you read the top level comment that I'm being snarky to?--v/r - TP 00:59, 5 November 2025 (UTC)[reply]
I did, and it isn't relevant here. What's relevant is your misleading claim that AI writing represents the best-quality writing humanity has to offer and is acceptable to be imitated. In practice, it can range from poor to decent, but rarely stellar. Cremastra (talk·contribs) 01:07, 5 November 2025 (UTC)[reply]
First off - I made no such claim. I said AI is trained on some of the best quality writing humanity has to offer. Don't put words in my mouth. Second off - Even if I did, calling for a ban on users who contribute positively because their writing resembles AI is outrageous. Get your priorities straight or don't talk to me.--v/r - TP 22:17, 5 November 2025 (UTC)[reply]
Modern LLMs are trained on very large corpuses, which include everything from high-quality to low-quality writing. And even if one were trained exclusively on high-quality writing, that wouldn't necessarily mean its output is also high-quality. But I agree that humans picking up speech patterns from LLMs doesn't make them incompetent to write an encyclopedia. jlwoodwa (talk) 22:30, 5 November 2025 (UTC)[reply]
Support per Aquillion and Lf8u2; most editors, I assume, would not want Wikipedia to become Grokipedia, a platform of AI slop. LLMs such as Grok and ChatGPT write unencyclopedically or unnaturally and cite unreliable sources such as Reddit. Alexeyevitch (talk) 07:55, 4 November 2025 (UTC)[reply]
Users !opposed to this proposal are not supportive of AI slop or a 'pedia overrun by AIs. It's just a bad proposal.--v/r - TP 15:22, 4 November 2025 (UTC)[reply]
We can either wait for the 'perfect' proposal, which may never come, or try something like this, so as to have some recourse. It has been years since ChatGPT arrived. If there are some problems that arise with this criterion in actual practice, they can be dealt with by modifying the criterion through the usual Wikipedia process of trial and error. The point is that there is value merely in expressing Wikipedia's stance on AI in relation to good articles. I hope you can understand that users who support this proposal think something is better than nothing, which is the current state of affairs. Yours, &c. RGloucester — ☎ 22:02, 4 November 2025 (UTC)[reply]
There is already something. 1) That sources in a GA review are verified to support the content, and 2) That it follows the style guide. What does this new criterion add that isn't already captured by the first two?--v/r - TP 00:48, 5 November 2025 (UTC)[reply]
Adding this criterion will make clear what is already expected in practice. Namely, that editors should not waste reviewer time by submitting unreviewed LLM-generated content to the good articles process, as Aquillion wrote above. It is true that the other criteria may be able to be used to quick-fail LLM-generated content. This is also true of articles with copyright violations, however, which could logically be failed under 1 or 3, but have their own quick-fail criterion, 2. I would argue that the purpose of criterion 2 is equivalent to the purpose of this new, proposed criterion: namely, to draw a line in the sand. The heart of the matter is this: what is the definition of a good article on Wikipedia? What does the community mean when it adds a good article tag to any given article? Adding this criterion makes clear that, just as we do not accept copyright violations, even those that are difficult to identify, like close paraphrasing, we brook no slapdash use of LLMs. Yours, &c. RGloucester — ☎ 01:34, 5 November 2025 (UTC)[reply]
I disagree that the quickfail criterion, as proposed, would make that clear. Not all obvious evidence of LLM use is evidence of unreviewed LLM use. jlwoodwa (talk) 01:43, 5 November 2025 (UTC)[reply]
Any 'successful' use of LLMs, if there can be such a thing, should leave no trace behind in the finished text. If the specified bits of prompt or AI-generated references are present, that is evidence that whatever 'review' may have been conducted was insufficient to meet the expected standard. Yours, &c. RGloucester — ☎ 07:25, 5 November 2025 (UTC)[reply]
If someone verifies that an article's references exist and support the claims they're cited for, I would call that a sufficient review of those references, whether or not there are UTM parameters remaining in the citation URLs. jlwoodwa (talk) 07:47, 5 November 2025 (UTC)[reply]
No, because not only the references would need to be checked. If such 'obvious evidence' of slapdash AI use is present, the whole article will need to be checked for hallucinations, line by line. Yours, &c. RGloucester — ☎ 07:53, 5 November 2025 (UTC)[reply]
Unfortunately I think it might be. Malacca dilemma for example, claims a pipeline has been "operational since 2013" using a source published in 2010 (and that discusses October 2009 in the future tense); in fact most of that paragraph seems made up around the bare bones of the potential for these future pipelines being mentioned. I assume the llm is drawing from other mentions of the pipelines somewhere in its training data, or just spinning out something plausible. Another llm issue is that when actually maintaining the transfer of source content into article prose, such as the first paragraph of background, it can be quite CLOPpy. CMD (talk) 11:59, 5 November 2025 (UTC)[reply]
That's actually found in Chen, but with 2013 given as the planned operational date: "China-Myanmar 2000 Kunming 20, for 2013 1.5 Oil Pipeline 30 years... Construction of the two pipelines will begin soon and is expected to be completed by 2013. China National Petroleum Corporation (CNPC), the largest oil and gas company in China, holds 50.9 per cent stake in the project, with the rest owned by the Myanmar Oil and Gas Enterprise (MOGE). (Sudha, 2009)" I've wikilinked the article on the pipeline and added another source for becoming operational in 2013. That was definitely an issue, but would you call that a quick fail? ScottishFinnishRadish (talk) 12:12, 5 November 2025 (UTC)[reply]
What's found in Chen are parts of the text, the bare bones I mention above. The rest of the issues with that paragraph remain, and it is the presence of many of these issues, especially with the way llms work by producing words that sound right whether actual information or not, that is the problem. CMD (talk) 12:35, 5 November 2025 (UTC)[reply]
Support: Yes, this is redundant, but it also saves time, and discourages editors who have used AI from submitting their (really, the LLM's) articles at GAN. --not-cheesewhisk3rs ≽^•⩊•^≼ ∫ (pester) 10:08, 9 November 2025 (UTC)[reply]
Oppose per Myceteae. I see several arguments advanced in this discussion in support of the proposal.
1) That AI usage is bad and this proposal addresses it. I think all editors here agree that uncritical use of AI is an increasing problem facing Wikipedia. What editors don't agree on is whether this proposal effectively addresses the issue in a way that doesn't introduce new problems.
2) This will save editor time when reviewing for GAR. I find this very unconvincing: I haven't seen any examples or explanations of an article for which this guideline would save editor time; rather, it would magnify debates over whether or not particular aspects of an article are "evidence of AI usage". Editors argue that there is a class of articles that would be quickfailed under this guideline but currently are not, and that this detracts from editor time. Whether or not this class of articles exists (something I'm skeptical of given the lack of examples), the application of the guideline would not effectively reduce editor time because there is no clear way to apply this guideline - the lack of specificity in the proposal defeats its functionality.
In general, I haven't seen any evidence that there is a problem here. If a problem exists with how GA reviews function, that evidence is still forthcoming. I agree with editors who point out that this guideline is a solution in search of a problem.
Katzrockso (talk) 22:38, 12 November 2025 (UTC)[reply]
I sometimes use AI as a search engine and link remnants are automatically generated. I'd rather not face a quickfail for that. I'm also not seeing how the existing criteria are not sufficient; if links are fake or clearly don't match the text, that is already covered under a quickfail as being a long way from demonstrated verifiability. Can a proponent of this proposal give an example of an article they would be able to quickfail under this that they can't under the current criteria? Rollinginhisgrave (talk | contributions) 10:47, 26 October 2025 (UTC)[reply]
The purpose of this proposal is to draw a line in the sand, to preserve the integrity of the label 'good article', and make clear where the encyclopaedia stands. Yours, &c. RGloucester — ☎ 12:55, 26 October 2025 (UTC)[reply]
In a nutshell the difference is that with AI-generated text, every single claim and source must be carefully checked, and not just for the source's existence; GA only requires spot-checking a handful. The example I gave above was a FA, not GA, but it's basically the same thing. Gnomingstuff (talk) 17:56, 26 October 2025 (UTC)[reply]
Thank you for this example, although I'm not sure how it's applicable here, as it wouldn't fall under "obvious evidence of LLM use". At what point in seeing edits like this are you invoking the QF? Rollinginhisgrave (talk | contributions) 21:23, 26 October 2025 (UTC)[reply]
The combination of "clear signs of text having been written by AI..." plus "...and there are multiple factual inaccuracies in that text." Or in other words, obvious evidence (#1) plus problems that suggest that the output wasn't reviewed well/at all (#2). Gnomingstuff (talk) 02:20, 27 October 2025 (UTC)[reply]
I've spent some time thinking about this. Some thoughts:
What you describe as obviously AI is very different to what RGloucester describes here, which makes me concerned about reading any consensus for what proponents are supporting.
I would describe what you encountered at La Isla Bonita as "possible/probable AI use" not "obvious", and your description of it as "clear" is unconvincing, especially when put against cases where prompts are left in etc.
If I encountered multiple substantial TSI issues like that and suspected AI use, I would be more willing to quickfail, as I would have less trust in the text's verifiability. I would want other reviewers to feel emboldened to make the same assessment, and I think it's a problem if they are not currently willing to do so because of how the QF criteria are laid out.
I see no evidence that this is actually occurring.
I think that the QF criteria would have to be made more broad than proposed ("likely AI use") to capture such occurrences, and I would like to see wording which would empower reviewers in that scenario but would avoid quickfails where AI use is suspected but only regular TSI issues exist (for those who do not review regularly, almost all spot checks will turn up issues with TSI).
Not a fan of RGloucester's criteria, tbh; I don't feel like references become quickfail-worthy just because someone used ChatGPT search, especially given that AI browsers now exist.
As far as the rest this is why I !voted weak support and not full support -- I'm not opposed to quickfail but it's not my preference. My preference is closer to "don't promote until a lot more review/rewriting than usual is done." Gnomingstuff (talk) 05:00, 1 November 2025 (UTC)[reply]
It's correct that every single claim and source needs to be carefully checked, but it needs to be checked by the author, not the GA reviewer. The spot check is there to verify the author did their part in checking. – Closed Limelike Curves (talk) 01:23, 6 November 2025 (UTC)[reply]
What's "obvious evidence of AI-generated references"? For example, I often use the automatic generation feature of the visual editor to create a citation template. Or I might use a script to organise the references into the reflist. The proposal seems to invite prejudice against particular AI tells but these include things like using an m-dash, and so are unreliable. Andrew🐉(talk) 10:53, 26 October 2025 (UTC)[reply]
Yeah, it's poorly written. "Obvious evidence of AI-generated references" in this context means a hallucination of a reference that doesn't exist. Viriditas (talk) 02:43, 28 October 2025 (UTC)[reply]
What about something similar to WP:G15, for example: "6. It contains content that could only plausibly have been generated by large language models and would have been removed by any reasonable human review." Kovcszaln6 (talk) 10:59, 26 October 2025 (UTC)[reply]
This would be the first GA criterion that regulates the workflow people use to write articles rather than the finished product, which doesn't make much sense because the finished product is all that matters. Gen AI as a tool is also extremely useful for certain tasks, for example I use it to search for sources I may have missed (it is particularly good at finding multilingual sources), to add rowscopes to tables to comply with MOS:DTAB, to double check table data matches with the source, and to check for any clear typos/grammar errors in finished prose. IAWW (talk) 11:05, 26 October 2025 (UTC)[reply]
It's irrelevant to this discussion, but I don't think it's right to call something "extremely useful" when the tasks are layout formatting, source-finding, and copy editing: skills you can and should develop for yourself. You will get better the more you try, and even when just pretty good, you will be better than a chatbot. You also really don't need gen AI to edit tables; there are completely non-AI tools to extract datasets and add fixed content in fixed places, tools that you know won't throw in curveballs at random. Kingsif (talk) 14:24, 26 October 2025 (UTC)[reply]
Well, "extremely useful" is subjective, and in my opinion it probably saves me about 30 mins per small article I write, which in my opinion justifies the adjective. I still do develop all the relevant skills myself, but I normally make some small mistakes (like for example putting a comma instead of a full stop), which AI is very good at detecting. IAWW (talk) 14:55, 26 October 2025 (UTC)[reply]
You still don’t need overconfident error-prone gen AI for spellcheck. Microsoft has been doing it with pop ups that explain why your text may or may not have a mistake for almost my whole life. Kingsif (talk) 15:02, 26 October 2025 (UTC)[reply]
I have no idea how accurately IAWW can manually check spelling, grammar etc. That wasn't the alternative offered however, which was to use existing specialist tools to do the job. They can get things wrong too, but rarely in the making-shit-up tell-them-what-they-want-to-hear way that generative AI does. AndyTheGrump (talk) 21:40, 26 October 2025 (UTC)[reply]
Generative AI can do that in certain situations, but things like checking syntax doesn't seem like one of those situations. Anyway, if the edits IAWW makes to Wikipedia are accurate and free of neutrality issues, fake references, etc. why does it matter how that content was arrived at? Thryduulf (talk) 21:48, 26 October 2025 (UTC)[reply]
'If' is doing a fair bit of work in that question, but ignoring that, it wouldn't, except in as much as IAWW would be better off learning to use the appropriate tools, rather than using gen AI for a purpose other than that it was designed for. I'd find the advocates of the use of such software more convincing if they didn't treat it as if it was some sort of omniscient and omnipotent entity capable of doing everything, and instead showed a little understanding of what its inherent limitations are. AndyTheGrump (talk) 23:02, 26 October 2025 (UTC)[reply]
To me - and look, as much as it's a frivolous planet-killer, I am not going to go after any individual user for non-content AI use, but I will encourage them against it - if we assume there are no issues with IAWW's output, my main concern would be the potential regression in IAWW's own capabilities for the various tasks they use an AI for, and how this could affect their ability to contribute to the areas of Wikipedia they frequent. E.g. if you are never reviewing your own writing and letting AI clean it up, will your ability to recognise in/correct grammar and spelling deteriorate, and therefore your ability to review others' writing. That, however, would be a personal concern, and something I would not address unless such an outcome became serious. As I said, with this side point, I just want to encourage people to develop and use these skills themselves. Kingsif (talk) 23:21, 26 October 2025 (UTC)[reply]
why does it matter how that content was arrived at? Value? Morality? If someone wants ChatGPT, it's over this way. We're an encyclopedia. We have articles with value written by people who care about the articles. LLM-generated articles make a mockery of that. Why would you deny our readers this? I genuinely can't understand why you're so pro-AI. Do you not see how AI tools, while they have some uses, are completely incompatible with our mission of writing good articles? Cremastra (talk·contribs) 01:57, 28 October 2025 (UTC)[reply]
Once again, Wikipedia is not a vehicle for you to impose your views on the morality of AI on the world. Wikipedia is a place to write neutral, factual encyclopaedia articles free of value judgements - and that includes value judgements about tools other people use to write factual, neutral articles. Thryduulf (talk) 02:17, 28 October 2025 (UTC)[reply]
Your refusal to take any stance on a tool that threatens the value of our articles is starting to look silly. As I say here, we take moral stances on issues all the time, and LLMs are right up our alley. Cremastra (talk·contribs) 02:28, 28 October 2025 (UTC)[reply]
That LLM is a tool that threatens the value of our articles is your opinion, seemingly based on your dislike of LLMs and/or machine learning. You are entitled to that opinion, but that does not make it factual.
If an article is neutral and factual then it is neutral and factual regardless of what tools were or were not used in its creation.
If an article is not neutral and factual then it is not neutral and factual regardless of what tools were or were not used in its creation. Thryduulf (talk) 02:52, 28 October 2025 (UTC)[reply]
You missed two: If an article is not neutral and factual and was written by a person, you can ask that person to retrace their steps in content creation (if not scan edit-by-edit to see yourself) so everyone can easily identify where the inaccuracies originated and fix them. If an article is not neutral and factual and you cannot easily trace its writing process, it is hard to have confidence in any content at all when trying to fix it. Kingsif (talk) 03:01, 28 October 2025 (UTC)[reply]
It’s irrelevant to this discussion but I don’t think it’s right to call a calculator “extremely useful” when the tasks are division, exponentiation, and root-finding skills you can and should develop for yourself. – Closed Limelike Curves (talk) 01:18, 6 November 2025 (UTC)[reply]
The following discussion has been closed. Please do not modify it.
Are you being deliberately obtuse and dragging back a thread that ended a week ago just to make a WP:POINT? Tut tut. And all for an incorrect "nootice" too. Humans are inherently better at determining source usefulness and copyediting than a computer will ever be. Computers, however, were literally created to do routine but lengthy mathematical calculations. Kingsif (talk) 20:42, 10 November 2025 (UTC)[reply]
Honestly, I think it's very interesting you think someone questioning your pointless-besides-provocation comment is less civil than (regardless of date) yourself making the pointless-besides-provocation comment. Kingsif (talk) 21:34, 11 November 2025 (UTC)[reply]
I find it interesting you continue to use a condescending tone after calling a colleague "obtuse" and shushing them like a child. Lots of things are interesting; some are even so interesting they might get administrators' attention. Cremastra (talk·contribs) 21:54, 11 November 2025 (UTC)[reply]
That was an honest comment, mate (very literally noted as such), and even re-reading I don't see where you find condescension at all. But should I take your mimicry to be a deliberate use of condescension if that is how you apparently believe it's used? If so, much like CLC's, your comment has an edge of provocation with no discussion value. Why? Kingsif (talk) 23:14, 11 November 2025 (UTC)[reply]
Now, relevantly, this proposal clearly does not regulate workflow, only the end product. It only refers to the article itself having evidence of obvious AI generation in its actual state. Clean up after your LLMs and you won’t get caught and charged 😉 Kingsif (talk) 14:28, 26 October 2025 (UTC)[reply]
The "evidence" in the end product is being used to infer things about the workflow, and the stuff in the workflow is what the proposal is targeting. IAWW (talk) 14:50, 26 October 2025 (UTC)[reply]
Y’all know I think gen AI is incompatible with Wikipedia and would want to target it, but I don’t think this proposal does that. If there’s AI leftovers, that content at least needs human cleanup, and that shouldn’t be put on a reviewer. That’s no different to identifying copyvio and quickfailing saying a nominator needs to work on it rather than sink time in a full review. Kingsif (talk) 14:59, 26 October 2025 (UTC)[reply]
Regarding "fake references", I can see the attraction in this being changed from a slow fail to a quick fail, but before it can be a quick fail there needs to be a reliable way to distinguish between references that are completely made up, references that exist but are inaccessible to (some) editors (e.g. offline, geoblocked, paywalled), references that used to be accessible but no longer are (e.g. linkrot), and references with incorrect details (e.g. typos in URIs/dois/ISBNs/titles/etc). Thryduulf (talk) 12:56, 26 October 2025 (UTC)[reply]
I think this is the problem: The proposal doesn't say "a reference that doesn’t work". It says "AI-generated references". Now maybe @RGloucester meant the kind of ref that's completely fictional, rather than real sources that someone found by using ChatGPT as a type of cumbersome web search engine, but that's not clear from what's written in the proposal.
This is a bit concerning, because there have been problems with citations that people can't check since before Wikipedia's creation – for example:
Proof by reference to inaccessible literature: The author cites a simple corollary of a theorem to be found in a privately circulated memoir of the Slovenian Philological Society, 1883.
Proof by ghost reference: Nothing even remotely resembling the cited theorem appears in the reference given.
Proof by forward reference: Reference is usually to a forthcoming paper of the author, which is often not as forthcoming as at first.
– and AI is adding to the traditional list the outright fabrication of sources: "Proof by non-existent source: A paper is alleged to exist, except that no such paper ever existed, and sometimes the alleged author and the alleged journal are made-up names, too". These are all problems, but they need different responses in the GA process. Made-up sources should be WP:QF #1: "It is a long way from meeting any one of the six good article criteria" (specifically, the requirement to cite real sources). A ghost reference is a real source, but what's in the Wikipedia article {{failed verification}}; depending on the scale, that's a surmountable problem. A forward reference is an unreliable source, but if the scale is small enough, that's also a surmountable problem. Inaccessible literature is not grounds for failing a GA nom.
If this is meant to be "most or all of the references are to sources that actually don't exist (not merely offline, not merely inconvenient, etc.)", then it can be quick-failed right now. But if it means (or gets interpreted as) "the URL says ?utm=chatgpt", then that's not an appropriate reason to quick-fail the nomination. WhatamIdoing (talk) 06:10, 27 October 2025 (UTC)[reply]
Perhaps a corollary added to existing crit, saying that such AI source invention is a QF, would be more specific and helpful. I had thought this proposal was good because it wasn’t explicitly directing reviewers to “this exact thing you should QF”, but if there are reasonable concerns (not just the ‘but I like AI’ crowd) that the openness could instead confuse reviewers, then adding explicit AI notes to existing crit may be a better route. Kingsif (talk) 16:05, 27 October 2025 (UTC)[reply]
Suggestion: change the fail criterion to read "obvious evidence of undisclosed LLM use". There are legitimate uses of LLMs, but if LLM use is undisclosed then it likely hasn't been handled properly and shouldn't be wasting reviewers' time, since more than a spot-check is required as explained by Gnomingstuff. lp0 on fire() 09:17, 27 October 2025 (UTC)[reply]
Making disclosure mandatory would also be very hard to implement in practice. Heavy rollout means some users may not even know when they've used it. Left Google on AI mode (or didn't turn it off…)? Congrats: when you searched for a synonym, you "used" an LLM. Kingsif (talk) 16:12, 27 October 2025 (UTC)[reply]
I took evidence to mean things in the article. I hope no reviewer would extend the GA crit to things not reviewed in the GAN process - like an edit reason or other disclosure. I can see the concern that this wording could allow or encourage them to, now that you bring it up. Kingsif (talk) 15:56, 27 October 2025 (UTC)[reply]
A difficult part of workshopping any sort of rule like this is you have to remember not everyone who uses it will think the same way you do, or even the way the average person does. What I'd hate to see happen is we pass something like this and then have to come back multiple times to edit it because of people using it as license to go open season on anything they deem AI, evidence or no evidence. I don't mean to suggest you would do anything like that, Kingsif, but someone out there probably will. Trainsandotherthings (talk) 01:52, 28 October 2025 (UTC)[reply]
I didn't think you were suggesting so ;) As noted, I agree. As much as obvious should mean obvious and evidence should be tangible evidence, and the spirit of the proposal should be clear... I still support it, as certainly less harmful than not having something like it, but I can see how even well-intentioned reviewers trying to apply it could go beyond this limited proposal's intention. Kingsif (talk) 01:59, 28 October 2025 (UTC)[reply]
I mentioned this above in my !vote, but isn't this already covered by WP:GAQF #3 ("It has, or needs, cleanup banners that are unquestionably still valid. These include {{cleanup}}, {{POV}}, {{unreferenced}} or large numbers of {{citation needed}}, {{clarify}}, or similar tags")? Any blatant use of AI means that the article deserves {{AI-generated}} and, as such, is already QF-able. All that has to be done is to modify the existing QF criterion 3 to make it explicit that AI generation is a rationale that would cause it to be triggered. – Epicgenius (talk) 01:44, 28 October 2025 (UTC)[reply]
To keep it short, isn't QF3 just a catch-all for "any clean-up issues that might not completely come under 1 & 2" and theoretically both those quickfail conditions come under it and they're unnecessary? But they're important enough to get their own coverage? Then we ask is unmonitored gen AI more or less significant than GA crit and copyvio. Kingsif (talk) 02:06, 28 October 2025 (UTC)[reply]
Suggestion: combine the obvious use of AI with evidence that the submission falls short of any of the other six GA criteria (particularly criterion 2). Many of the current Opposes reflect a sentiment that this policy would sweep in too much: instead of reflecting the state of the article, it punishes those who use AI in their workflow. This suggestion would cover a quickfail of articles with AI-hallucinated references (so, for instance, if a reviewer notes a source with a ?utm_source=chatgpt.com tag and determines that the sentence is not verifiable, they can quickfail it); however, this suggestion limits the quickfail potential for people who use AI, review its outputs, and put work into making sure it meets the guidelines for a Wikipedia article. Staraction (talk | contribs) 07:41, 30 October 2025 (UTC)[reply]
Sorry, I don't think I worded the quoted part well. I mean that, if there is obvious use of AI and any evidence at all of a hallucinated source, unverified citation, etc., the reviewer is allowed to quickfail.
An edit filter can perform certain actions when triggered, such as warning the user, disallowing the edit, or applying a change tag to the revision. However, there are lesser known actions that aren't currently used in the English Wikipedia, such as blocking the user for a specified amount of time, desysopping them, and something called "revoke autoconfirmed". Contrary to its name, this action doesn't actually revoke anything; it instead prevents them from being "autopromoted", or automatically becoming auto- or extended-confirmed. This restriction can be undone by any EFM at any time, and automatically expires in five days provided the user doesn't trigger that action again. Unlike block and desysop (called "degroup" in the code), this option is enabled for use on enwiki, but has seemingly never been used at all.
Fast forward to today, and we have multiple abusers and vandalbots gaming extended confirmed in order to vandalize or edit contentious topics. One abuser in particular has caused an edit filter to be created for them, which is reasonably effective in slowing them down, but it still lets them succeed if left unchecked. As far as I'm aware, the only false positive for this filter was triggered by PaulHSAndrews, who has since been community-banned. In theory, setting this filter to "revoke autoconfirmed" should effectively stop them from being able to become extended confirmed. Some technical changes were recently made to allow non-admin EFMs to use this action, but since it has never been used, I was told to request community consensus here.
@ChildrenWillListen: Does current policy/guideline prohibit edit filter managers from using the "block autopromote" setting? I am looking at WP:EF#Basics of usage and WP:EF#Recommended uses and it makes several references to "block autopromote" as an available option. It seems to me that edit filter managers already have discretion to use that setting under the current guidelines. Is there a particular edit filter that you feel the setting would be useful? Mz7 (talk) 05:08, 28 October 2025 (UTC)[reply]
Got it, thanks for that context. Unless I am missing something, there does not seem to be any written rule that prevents edit filter managers from using the "block autopromote" setting. However, it seems the reason why edit filter managers are hesitant to use it is because it is rarely actually helpful. Looking at 807 in particular, I see it listed at Template:DatBot filters, meaning a bot will automatically report filter hits to WP:AIV—maybe that takes care of the need to use "block autopromote"? Mz7 (talk) 05:33, 28 October 2025 (UTC)[reply]
For example, see Utube2 who is active right now and has triggered this filter. If we set it to prevent them from becoming extended confirmed, we wouldn't have to worry about this. This has been happening every single day for the last three months. ChildrenWillListen (🐄 talk, 🫘 contribs) 06:11, 28 October 2025 (UTC)[reply]
@Daniel Quinlan: "Revoke autoconfirmed" doesn't actually... revoke autoconfirmed (or anything else), as strange as it sounds. See [32], [33], and [34]. The name is extremely misleading and I'll look to get that changed.
Also, I wasn't aware of $wgAutopromoteOnce until now, and since that code path doesn't call into HookRunner::onGetAutoPromoteGroups, this might not even block the extended-confirmed autopromotion. ChildrenWillListen (🐄 talk, 🫘 contribs) 13:22, 28 October 2025 (UTC)[reply]
If (you think) the proposal there can't be implemented for technical reasons then you should note that in the discussion so participants and the closer are aware. Thryduulf (talk) 21:11, 28 October 2025 (UTC)[reply]
The blockautopromote filter action works fine (after the above discussion, I tested it on test.wikipedia.org) although it's worth clarifying that it doesn't technically "revoke" permissions in the way people are used to thinking about permissions. It helps to understand that autopromotion doesn't actually add users to a group. MediaWiki dynamically checks whether a user meets the conditions for autopromotion when it's checking a user's rights or groups. The blockautopromote action prevents those conditions from being met for a temporary period of time (five days). The rights are effectively revoked during that period, but once the period ends, autopromotion works normally and the "revoked" rights return to the user. The confirmed permission can also be granted manually during that period and edit filter managers also have an interface to undo blockautopromote mistakes.
I will also mention that enabling blockautopromote for one or two filters as proposed in the WT:PP RFC will have another immediate effect (i.e., without any additional configuration changes): it will lower the edit rate throttle for the five day period from the user edit rate limit to the newbie edit rate limit. Based on the current settings, that would shift the rate limit from 90 edits per 60 seconds to 8 edits per 60 seconds.
If the discussion at WT:PP reaches a consensus against that proposal (or does not reach a consensus), then I'd say that a discussion at that or similarly prominent venue (such as a village pump) would be required to start using the option. If that discussion is concluded with a consensus in favour then anything similar would probably be fine with just an EFN discussion, but anything significantly different would probably benefit from a more prominent discussion.
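For anyone trying to picture the blockautopromote mechanics and the rate-limit side effect described above, here is a minimal toy model in Python. This is an illustration only, not MediaWiki code: the helper names are hypothetical, the 10-edit/4-day autoconfirmed thresholds are stated as an assumption, and the rate-limit figures are the 90-per-60-seconds and 8-per-60-seconds values quoted above.

```python
# Toy model (not MediaWiki code) of the behaviour described above: "autoconfirmed"
# is recomputed on the fly rather than stored on the account, the blockautopromote
# flag simply makes that check fail until it expires (five days), and while it is
# active the account falls back to the stricter newbie edit throttle.
from datetime import datetime, timedelta

BLOCKAUTOPROMOTE_DURATION = timedelta(days=5)
RATE_LIMITS = {"user": (90, 60), "newbie": (8, 60)}  # (edits, seconds), per the current settings

def is_autoconfirmed(user: dict, now: datetime) -> bool:
    blocked_until = user.get("blockautopromote_until")
    if blocked_until and now < blocked_until:
        return False  # conditions treated as unmet while the flag is active
    return user["edits"] >= 10 and user["age_days"] >= 4  # assumed thresholds

def edit_rate_limit(user: dict, now: datetime) -> tuple:
    return RATE_LIMITS["user" if is_autoconfirmed(user, now) else "newbie"]

now = datetime(2025, 10, 28)
account = {"edits": 600, "age_days": 90,
           "blockautopromote_until": now + BLOCKAUTOPROMOTE_DURATION}
print(is_autoconfirmed(account, now), edit_rate_limit(account, now))      # False (8, 60)
later = now + timedelta(days=6)
print(is_autoconfirmed(account, later), edit_rate_limit(account, later))  # True (90, 60)
```

Extended confirmed is deliberately left out of this sketch, since (as noted earlier in the thread) it is granted via $wgAutopromoteOnce and behaves differently from the dynamically computed autoconfirmed status.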
Does this perhaps overlap too much with [the discussion at WT:PP]? I'm inclined to say yes, but others might disagree. Certainly I can't see the utility in this discussion before that one is closed.
I don't think this is really prohibited so much as it's not something anyone particularly wants to use without a heavy level of specific community consensus. It certainly could be used, but in the modern age unless it's an emergency it's almost certain to go through a long discussion first. I don't think there's a lot of EFMs who are eager to use that option without a lot of code review and being confident of consensus in favor first. EggRoll97(talk) 01:30, 29 October 2025 (UTC)[reply]
In general, this proposal seems highly dangerous, and policy shouldn't change. Just go to WP:STOCKS and you'll find some instances in which misconfigured filters prevented edits by everyone; imagine that these filters also included provisions to block or revoke rights from affected editors. However, the proposal seems to be talking about a filter for one particularly problematic user; I could support a proposal to make an exemption for egregious cases, but I think such an exemption should always be discussed by the community, so the suggested reconfiguration is the result of community consensus. Nyttend (talk) 10:51, 29 October 2025 (UTC)[reply]
Come to find out there are a whole slew of Wikipedia:Outlines that, I guess, are supposed to be some sort of cross between CliffsNotes and DMOZ. They are classified as "lists", but they aren't lists. They are, I guess, the private project of a few people who seem to be maintaining parallel versions of main-topic articles, but without any narrative structure.
Why do these exist? How are they controlled editorially? Should they all be merged into other articles?
How so? Categories are a structured and hierarchical data type. Outline articles seem to be attempts to force a topic into something like an article, but without narrative or prose. jps (talk) 01:36, 30 October 2025 (UTC)[reply]
I look at it from another perspective - it's an enhanced version of categories where you very briefly describe what's going on and what the reader is likely to see of relevance to the outlined topic when they land on any given page (The comparison I like is .txt file and HTML). We don't give sources in categories, and outlines are supposed to do the same. I think we can obviate this in a way by forcing categories to show short descriptions for each article; but for example Greta Thunberg is mentioned in Outline of autism but this would be inappropriate info for a short description, at least at the current understanding of what short descriptions should be ("autistic climate activist" sounds denigrating)
I would be more comfortable with them in their own namespace, since the ostensible subject of the article is often not notable: the "outline of Wikipedia", for example, which, I gather, has no notability outside of Wikipedia's invention of the outline structure. jps (talk) 02:56, 30 October 2025 (UTC)[reply]
I've always thought categories should have a more user-oriented view by default with the shortdescs and thumbnail, like the Vector 2022 search suggestions. The current view with the plain links would be accessible with a quick toggle, and it'd remember your preference so it wouldn't be a burden on existing editors that prefer the current layout. novov (talk · edits) 03:30, 30 October 2025 (UTC)[reply]
Outlines show up when you search Wikipedia, which is important because we want people to be able to find them easily. Portals don't show up in searches by default, and when they are included, their subpage entries make the search results very hard to read (because their many subpages clutter the results).
I think it's fair to ask whether individual outlines, or outlines as a whole, are serving their purpose, but they clearly are intended to live alongside and complement lists and articles. —Myceteae🍄🟫 (talk) 04:47, 30 October 2025 (UTC)[reply]
Fair question about portals. I've seen other editors say they think they should be in article space. The quote above mentions the navigation issues with portal subpages cluttering search results. I don't know the history of portals or what went into their creation and placement into a dedicated namespace. —Myceteae🍄🟫 (talk) 18:28, 30 October 2025 (UTC)[reply]
Doubtful. I see no examples on which to draw. Outlines are the precursor to writing. The ones I'm looking at look like they are stubs of the main articles. jps (talk) 02:55, 30 October 2025 (UTC)[reply]
You can outline something before expanding that outline into a full article, or you can summarize existing information with an outline. This is the latter. Aaron Liu (talk) 23:57, 30 October 2025 (UTC)[reply]
I'm vaguely aware of outlines but have rarely ever looked at them. I don't think we should delete them and I'm struggling to see the problem. Of course, individual outlines can be nominated for deletion when they have issues. I anticipate that deletion will be a hard sell when the topic being outlined is notable and the outline contains relevant links to many notable articles, similar in a way to how the notability of standalone lists is assessed. —Myceteae🍄🟫 (talk) 02:46, 30 October 2025 (UTC)[reply]
I don't see an issue with them, though I don't use them personally. They're in mainspace for the same reason disambs are in mainspace, both being non-article content... they are still a useful navigational aid for the encyclopedia. PARAKANYAA (talk) 06:23, 30 October 2025 (UTC)[reply]
At least disambigs serve a navigational purpose. Outlines are just a bunch of links some Wikipedians think belong together, as far as I can tell. How does one decide what does or does not belong in an outline? I see no means to adjudicate the content whereas with disambiguation, one can refer to the outside world or the spelling of the term as a means to decide what belongs on the page. I dunno, I am just having a really hard time wrapping my head around the use case. jps (talk) 17:43, 30 October 2025 (UTC)[reply]
How does one decide what does or does not belong in an outline? Consensus. i.e. how the content of every page on Wikipedia is decided. Thryduulf (talk) 17:51, 30 October 2025 (UTC)[reply]
Sure, but we sometimes labor under the illusion that there are certain principles worked out that we attempt to adhere to. I'm just not clear what the principles are for writing outlines, and, yes, I read WP:OUTLINES. Still clear as mud to me, but apparently there are a buncha others who get it even as I might not understand what they're saying. jps (talk) 18:00, 30 October 2025 (UTC)[reply]
It's basically a hierarchical overview ("outline") of a subject. They don't appear to get much attention, so it's quite possible that individual outlines or the concept as a whole is poorly developed. If I thought a specific outline needed work I would edit it myself or start a discussion on talk. If that wasn't satisfactory I would reach out to Wikipedia:WikiProject Outlines or a more specific WikiProject or noticeboard related to the topic or type of issue. Or post here, as you've done. —Myceteae🍄🟫 (talk) 18:42, 30 October 2025 (UTC)[reply]
Outlines do obviously serve a navigational purpose. And yes, consensus, like everything else. How do you decide what goes in a category? The same way. PARAKANYAA (talk) 18:00, 30 October 2025 (UTC)[reply]
"Obviously" is a pretty strong word. They look to me like study guides or something, but I struggle to understand how they are part of an encyclopedia intead of, say, Wikiversity or something. jps (talk) 18:06, 30 October 2025 (UTC)[reply]
I can say the same about our mainspace nav pages, categories, and navboxes. Outlines help navigate like a set-index article, therefore it is part of the encyclopedia. Aaron Liu (talk) 01:51, 31 October 2025 (UTC)[reply]
They are intended to give an outline of a topic so the fact that they are reminiscent of a study guide or crib sheet doesn't seem off. —Myceteae🍄🟫 (talk) 02:12, 31 October 2025 (UTC)[reply]
No, they're not visible but they're really enjoyable to read. They surface pages I never knew about before, and also topics I wouldn't have known to look for. The only real issue is that anyone who isn't a wikipedia editor is probably never going to find one, and that won't be solved by deleting them. Mrfoogles (talk) 17:28, 5 November 2025 (UTC)[reply]
I strongly agree with this. Maybe the solution would be to integrate them with their much more visible cousin, the sidebar, somehow? Maybe we could automatically generate sidebars from outlines. – Closed Limelike Curves (talk) 01:55, 12 November 2025 (UTC)[reply]
I think these articles could, in theory, be some of the most important ones on Wikipedia, if done properly: outlines like these are an important part of any encyclopedia (though they're usually called an "index"). The issue is they really need better visibility, and that in terms of information, they're often redundant with sidebars—maybe we could automatically generate sidebars from outlines, so they're kept up-to-date and in sync? – Closed Limelike Curves (talk) 01:53, 12 November 2025 (UTC)[reply]
I think User:Closed Limelike Curves is on to something. There's really not a lot of these introduction articles today. And there wouldn't be, long term relative to the whole project, because one could service a hundred or more other articles. But having a properly built out sidebar (at sidebar scale) that can then branch to "Introduction" pages, while carrying all the deeper/myriad other articles, could be a great connective tissue for readers.
And it's got that Intelligence template on the right side.
That's got a lot of articles in the template; expand/show all of the fields.
Now stick a great Introduction to intelligence article directly under the little graphic of the spy man. That writ large could even be a Wikipedia:Did You Know type creation arms race to highlight an Introduction page on the front page, for like a week at a time.
I don't know if the sidebar is the right move, but I agree it would be nice to somehow increase the visibility of outlines. These do have a lot of potential to serve readers. More visibility also means more editor attention, which would inspire improvements that address some of the (largely overblown) concern about the content and organization of these pages. —Myceteae🍄🟫 (talk) 14:56, 12 November 2025 (UTC)[reply]
We also have indexes, which are different. Indexes of articles and redirects with possibilities are alphabetically arranged and are used in much the same way as a printed index, within a defined scope. Outlines are grouped by logical connection, similar to categories, but in a way that the logical structure of topic coverage is apparent, generally only for existing articles, not usually redirects, and only for the specific topic, so usually a clearer way to show coverage of a field. Both are useful for assessing coverage of a field, and both partly overlap with navboxes and search engines. Their usefulness in very large fields is probably limited, as the size can get out of hand. Both can make use of annotated links to make them more informative, as can categories. No-one is obliged to use them, but if they are useful they can stay, we are not short of space. I find them all useful for spotting gaps in coverage. YMMV, Cheers, · · · Peter Southwood(talk): 19:24, 12 November 2025 (UTC)[reply]
Indexes do actually seem like they should be merged into categories, but that'd require a nicer interface for browsing articles in categories (some way of listing all the members of a category recursively, alphabetically). – Closed Limelike Curves (talk) 06:08, 13 November 2025 (UTC)[reply]
Often categories are added to an article, or articles to a list, without a source, because the grouping is not disputed and sourcing it would be hard. As long as you can add Category:Protected areas of the United Kingdom to Environmentally sensitive area, you can add that article to "List of protected areas of the United Kingdom" or the relevant outline section. Though I agree with you on facts mentioned in passing, like the claim that there are 33 shires of Scotland: even though they are mostly not WP:Likely to be challenged, they would do well with a source. Aaron Liu (talk) 15:53, 1 November 2025 (UTC)[reply]
I do not use outlines often, but I find them very useful as a reader. I don't think they need sources: ideally they're more of a collection of links rather than the quasi-prose Doug linked above. Toadspike [Talk] 20:11, 4 November 2025 (UTC)[reply]
Outlines are a very bizarre shadow Wikipedia that basically exists because of one very prolific contributor duplicating the idea of portals and pushing them. There's fundamentally no scope to outlines—you could create an "Outline of" on any article despite there not being any notability criteria to use. The fact that they are decoupled from the actual state of the articles they shadow, that they rarely have required citations, and that they are a manual, brute-force way of doing things—like the aforementioned 'hierarchies of a subject' idea—means they would be better as a dynamically-generated product. Most of the arguments about them devolve into WP:ITSUSEFUL; it's telling that there are no keep arguments in Wikipedia:Articles for deletion/Outline of Wikipedia that actually cite any guideline or policy for why it should be kept. As far as I know, outlines were never actually formally discussed and ratified as A Thing by the community. Der Wohltemperierte Fuchs (talk) 20:37, 5 November 2025 (UTC)[reply]
Do they need to be ratified, or is the community consensus of allowing them to build and evolve over ~15 years sufficient endorsement? It's just a visual, sorted, tree-based view of articles around a subject. — Very Polite Person (talk/contribs) 20:50, 5 November 2025 (UTC)[reply]
Well, we're discussing them now. And crucially, there wasn't much in the way of a coherent delete rationale presented at AfD. —Myceteae🍄🟫 (talk) 21:42, 5 November 2025 (UTC)[reply]
The keep argument was that there was no deletion argument. What's needed is consensus to delete. Outlines were originally created as "List of basic topics of...", which, being lists, needed no additional affirmation. Besides the mass-move discussion to "Outline of...", which I remember finding once but can't find right now, consensus that outlines should be a thing was also established at Wikipedia:Village pump (proposals)/Archive 78#Alternative Outline Articles Proposal, whose parent proposal (closing down outlines) was {{CENT}}-listed.
Some of their content may need refs, but much of their content is lists of bluelinks, which do not need refs, and annotated bluelinks, which may or may not need refs depending on how they are annotated. Many annotations are Wikipedia:Short descriptions, which are expected to be derived from properly referenced content in their home article, and displayed via {{Annotated link}}, which do not need additional refs in the outline page, in much the same way that content in the lead does not have to be cited. · · · Peter Southwood(talk): 19:56, 12 November 2025 (UTC)[reply]
Agreed. Outlines should follow the guidance at MOS:SOURCELIST. It is unsurprising per the scope and purpose of outlines that most contain minimal references. Of course, editors can cite, tag, remove, or discuss any disputed content per the normal editing standards. —Myceteae🍄🟫 (talk) 20:12, 12 November 2025 (UTC)[reply]
RfC: Increase the frequency of Today's Featured Lists
Increase the frequency of Today's Featured Lists from 2 per week to 3 or 4 per week, either on a trial basis, with the option to expand further if sustainable, or without a trial at all. Vanderwaalforces (talk) 07:02, 2 November 2025 (UTC)[reply]
Background
Right now, Today's Featured List only runs twice a week, on Mondays and Fridays. The problem is that we've built up a huge (and happy?) backlog: there are currently over 3,400 Featured Lists that have never appeared on the Main Page (see category). On top of that, according to our Featured list statistics we're adding about 20 new Featured Lists every month, which works out to around 4 to 5 a week. At the current pace of just 2 TFLs per week (roughly 100 a year, against roughly 240 new promotions), we would never get through what we already have, and the backlog will only keep growing.
Based on prior discussion at WT:FL, I can say we could comfortably increase the number of TFLs per week without running out of material. Even if we went up to 3 or 4 a week, the rate at which new lists are promoted would keep things stable and sustainable. Featured Lists are one of our high-quality content types, yet they get less exposure than WP:TFAs or WP:POTDs; so trust me, this isn't about numbers, and neither is it about FL contributors being jealous (we could just be :p). Giving them more space would better showcase the work that goes into them. We could run a 6‑month pilot, then review the backlog impact, scheduling workload, community satisfaction, etc.
Of course, there are practical considerations. Scheduling is currently handled by Giants2008, the FL director, and increasing the frequency would mean more work, which I think could be handled by having one of the FL delegates (PresN and Hey man im josh) or another experienced editor help with scheduling duties. Vanderwaalforces (talk) 07:03, 2 November 2025 (UTC)[reply]
Options
Option 1: Three TFLs per week (Mon/Wed/Fri)
Option 2: Four TFLs per week (e.g., Mon/Wed/Fri/Sun)
Option 3: Every other day, with each TFL staying up for two days (This came up at the WT:FL discussion, although it might cause imbalance if comparing other featured content durations.)
Option 4: Three TFLs per week (Mon/Wed/Fri) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
Option 5: Four TFLs per week (e.g., Mon/Wed/Fri/Sun) as a 6‑month pilot and come back to review backlog impact, scheduling workload, community satisfaction, etc.
Generally supportive of an increase, if the increase has the support of Giants2008, PresN, and Hey man im josh. Could there be an elaboration on the potential main page balance? TFL seems to slot below the rest of the page, without the columnar restrictions. CMD (talk) 10:01, 2 November 2025 (UTC)[reply]
@Chipmunkdavis Per the former, yeah, I totally agree, which is why I suggested earlier that one of the FLC delegates could help share the load; alternatively, an experienced FLC editor or someone familiar with how FL scheduling works could assist. Per the latter, nothing actually changes: the slot for TFL remains the same, viewers just get to see more FLs than under the status quo. It might fascinate you that some editors do not even know that we have TFLs (just like TFAs) on English Wikipedia, whether because they have never viewed the Main Page on a Monday/Friday or for some other reason. Vanderwaalforces (talk) 17:06, 2 November 2025 (UTC)[reply]
Option 1, for two main reasons: (1) there is no reason to rush into larger changes (we can always make further changes later), and (2) FL topics tend to be more limited and I think it's better to space out similar lists (e.g., having a "List of accolades received by <insert movie/show/actor>" every other week just to keep filling slots would get repetitive). Strongly oppose any option that results in a TFL being displayed for 2 days; this would permanently push POTD further down, break the patterns of the main page (no other featured content is up for more than 1 day), and possibly cause technical issues for templates meant to change every day. RunningTiger123 (talk) 18:08, 2 November 2025 (UTC)[reply]
Option 1 – Seeing the notification for this discussion pop up on my talk page really made me take a step back and ponder how long I've been active in the FL process (and my mortality in general, but let's not go there). I can't believe I'm typing this, but I've been scheduling lists at TFL for 13 years now. That's a long time to be involved in any one process, as this old graphic makes even more clear. Where did the time go? Anyway, I agree with RunningTiger that immediately pushing for 4+ TFLs per week when we may not have enough topic diversity to support that amount would do more harm than good, but I think enough lists are being promoted through the FL process to support an increase to three TFLs weekly. In addition, I agree with RT that we don't need to be running lists over multiple days when none of the other featured processes do. While I'm here, I do want to address potential workload issues. My suggestion is that, presuming the delegates have the spare time to take this on, each of us do one blurb per week. With the exception of the odd replaced blurb once in a blue moon, I've been carrying TFL by myself for the vast majority of the time I've been scheduling TFLs (over a decade at this point). If I take a step back and ignore the fact that I'm proud to have had this responsibility for the site for this many years (and that the train has been kept on the tracks fairly well IMO), it really isn't a great idea for the entire process to have been dependent on the efforts of a single editor for that long. I just think it would be a good sign of the strength of the TFL process for a rotation of schedulers to be introduced. Also, in the event of an emergency we would have a much better chance of keeping TFL running smoothly with a rotation. Of course, this part can be more thoroughly hammered out at TFL, but I did want to bring it up in case the wider community has any thoughts. Giants2008 (Talk) 01:42, 4 November 2025 (UTC)[reply]
Option 1, though I would support any permanent increase to the frequency of TFLs as long as the coords or other volunteers have the capacity for that. Toadspike [Talk] 20:13, 4 November 2025 (UTC)[reply]
Option 1. Slow changes are better. Also, this doesn't explicitly need to be a pilot (option 4), since we can always switch back to the status quo ante if unforeseen problems crop up. -MPGuy2824 (talk) 14:20, 12 November 2025 (UTC)[reply]
Option 1, in agreement with others. I would be open to an increase in frequency after some time, with input from editors involved in TFLs about the impact of the initial change. —Myceteae🍄🟫 (talk) 20:15, 12 November 2025 (UTC)[reply]
Paraphrasing allowed for species descriptions of 'obscure' and 'newly described' species
Hello all, I am running into a problem. I am adding articles about beetles. Many beetle species are very poorly studied. Hence, often there are only a few or even just one source available that gives a description of the species (i.e. its appearance). Another editor stated that I am not allowed to use close paraphrasing when adding an article about a species to Wikipedia, and stated he intends to remove all of these paraphrased statements. I do not agree with his stance in this matter, because there is literally no other way to add these species descriptions. To make it clear: I do not just copy-paste the species description, and I only use one or two sentences (a typical modern species description is about 1 page long). I will give two examples of what the other editor thinks is not acceptable, but I think is:
I changed this: Original - "The head (except black mandibles and labrum), antennae (except antennomeres 8-11 black), and legs chestnut-brown; eyes and scutellum black; pronotum shiny reddish-brown with medial 3 black (with bluish reflections) longitudinal vittae- 1 medial and 2 lateral;elytra shiny reddish-brown with 3 shining black oblique vittae from lateral to sutural margins; venter and legs reddish." into this: Wikipedia entry - "The head, antennae and legs are chestnut-brown, while the pronotum is shiny reddish-brown with three black vittae with bluish reflections. The elytra are shiny reddish-brown with three shining black vittae." That is not a copyvio in my mind. How else should I ever get a species description onto Wikipedia?
This second source is in German and I translated and changed it: Original - "Beschreibung. Länge 7,4-7,7 mm, Elytrenlänge 5,4-5,7 mm, Breite 4,7-4,8 mm. Körper eiförmig oval, dunkel kastanienbraun, Oberfläche mit matter Beschichtung, Labroclypeus, Tarsen und Schienen glänzend, bis auf laterale Bewimperung und einige Borsten auf dem Kopf kahl." Wikipedia entry - "Adults reach a length of about 7.4-7.7 mm. They have a dark chestnut brown, oval body. The dorsal surface is dull and glabrous, except for the lateral cilia and some setae on the head."
I think that this would be OK, IF it is a species for which only very few sources are available to work with (for a lot of these species there is one source with an actual description, and some listings in checklists and databases, but nothing else).
By the way, I am not the only one who feels that species descriptions should be free of restrictions. The database/website Plazi.org follows this reasoning about the legality of using species descriptions published in copyrighted journals: [35]. I searched for any (legal) challenges to Plazi and could not find one. I did find this: Scientific names of organisms: attribution, rights, and licensing | BMC Research Notes | Full Text. It is mainly about databases and checklists, but also states this: "Taxonomic treatments are not copyrightable: Taxonomic treatments and descriptions of species are not copyrightable because they lack creativity of form. Rather, they are presented with a standardized form of expression for better comprehension."
They also drafted a 'blue list', which includes components of names and taxonomy that are not subject to copyright:
- A hierarchical organization (= classification), in which, as examples, species are nested in genera, genera in families, families in orders, and so on.
- Alphabetical, chronological, phylogenetic, palaeontological, geographical, ecological, host-based, or feature-based (e.g. life-form) ordering of taxa.
- Scientific names of genera or other uninomial taxa, species epithets of species names, binomial combinations as species names, or names of infraspecific taxa; with or without the author of the name and the date when it was first introduced. An analysis and/or reasoning as to the nomenclatural and taxonomic status of the name is a familiar component of a treatment.
- Information about the etymology of the name; statements as to the correct, alternate or erroneous spellings; reference or citation to the literature where the name was introduced or changed.
- Rank, composition and/or apomorphy of taxon.
- For species and subordinate taxa that have been placed in different genera, the author (with or without date) of the basionym of the name or the author (with or without date) of the combination or replacement name.
- Lists of synonyms and/or chresonyms or concepts, including analyses and/or reasoning as to the status or validity of each.
- Citations of publications that include taxonomic and nomenclatural acts, including typifications.
- Reference to the type species of a genus or to other type taxa.
- References to type material, including current or previous location of type material, collection name or abbreviation thereof, specimen codes, and status of type.
- Data about materials examined.
- References to image(s) or other media with information about the taxon.
- Information on overall distribution and ecology, perhaps with a map.
- Known uses, common names, and conservation status (including Red List status recommendation).
- Description and/or circumscription of the taxon (features or traits together with the applicable values), diagnostic characters of taxon, possibly with the means (such as a key) by which the taxon can be distinguished from relatives.
- General information including but not limited to: taxonomic history, morphology and anatomy, reproductive biology, ecology and habitat, biogeography, conservation status, systematic position and phylogenetic relationships of and within the taxon, and references to relevant literature.
- It would appear that no copyright law is infringed if a user extracts elements of the blue list from material that lacks legitimate user agreements.
They argue all of the above is not copyrightable. I can imagine Wikipedia would not just want to accept that as truth; however, I do feel this supports the argument that we could at the very least paraphrase these copyrighted sources, if we stick to one or two sentences, rewrite them, and only do it for 'obscure' species (so not for species like a kangaroo, a duck, etc., where countless sources are available, but for species like a mosquito that is endemic to one forest in Sumatra, or a mollusk described last year, etc.). B33tleMania12 (talk) 18:23, 2 November 2025 (UTC)[reply]
It'd probably be more pointful to ping people who know something about copyright, like Diannaa.
Facts are not copyrightable, so (e.g.,) a fact "about the etymology of the name" is not copyrightable. But the expression of a fact can be (=is not always) copyrightable. Editors should write in their own words and sentences. However, if the expression is simple enough ("E. expertia was named after Alice Expert"), then even though Wikipedia wants you to write in your own words, that sentence wouldn't constitute a copyvio. WhatamIdoing (talk) 03:53, 3 November 2025 (UTC)[reply]
Long-term, if you'd like that to be a rule for all articles, then I suggest getting an actionable definition of "significant coverage" into the GNG. We still have disagreements about whether SIGCOV is about importance or volume, or if it is determined by the number of words in a source or the number of facts that could be used in an encyclopedia article. To give you an idea of how this matters, see User:WhatamIdoing/Database article, where I've written a 225-word-long Wikipedia article from a source that does not contain a single complete sentence about the subject of the article. Some editors say that source is SIGCOV, because obviously it covered enough facts for me to write a Start-class article about the subject, easily meeting the goal of SIGCOV as explained in WP:WHYN. And others say that it's not, because it's obviously impossible to have SIGCOV if the source presents the information about the subject of the article in any form other than multiple consecutive sentences of prose. WhatamIdoing (talk) 05:41, 3 November 2025 (UTC)[reply]
Yes, I know it's against NSPECIES. My view is that it's a bad guideline because it leads to mass generation of low quality articles that are poorly watched and maintained. (t · c) buidhe05:43, 3 November 2025 (UTC)[reply]
Well the idea was to make the article longer than just one sentence saying where it lives, but I cannot if I am not allowed to use anything else. There is enough to write an article that is actually saying something about the species in the original description, but if there is no way to use it, the article will indeed stay a stubby sub-stub until someone else writes something about it. B33tleMania12 (talk) 07:41, 3 November 2025 (UTC)[reply]
Regarding "We shouldn't have a wp article if there is only one source with significant coverage", which you argue "lead to mass generation of low quality articles that are poorly watched and maintained." This article is what you can do with a single source: Maladera cardamomensis (luckily it is CC-BY, so no issues with using the species description). I think this is substantial enough to deserve an article. In essence, this could be done for every species, because there will always be a species description. But then again: we must be allowed to use it (hence this discussion) B33tleMania12 (talk) 11:34, 3 November 2025 (UTC)[reply]
It is probably worth finding something more than a stub for future "This article is what you can do with a single source" arguments. Much more is possible, with a good enough source. CMD (talk) 16:15, 3 November 2025 (UTC)[reply]
@B33tleMania12: The important thing is to not copy-paste anything that could be remotely considered to be a creative choice. In your first example, I would replace "shiny" with the synonym "glossy" (or "reflective"). In your second example, I would not copy-paste "chestnut-brown", but instead say "reddish-brown" (and pipe-link that to "chestnut (color)"), which is also more accessible to lay readers, and this is the term you use in your first example. More importantly, try to reduce/explain technical language (see WP:MTAU). The goal is to rephrase this to make it as understandable as possible. For example, three shining black vittae need to be explained; something like "On the elytra there are [your explanation], called vittae, that are black" and you will have a very different sentence. You could also change the structure by first describing general features rather going section by section. For example, you could write something like "Both the pronotum ([explanation of term]) and the elytra ([explanation of term]) are red-brown with a reflective surface", followed by the details of these parts, and that would be very different from the source and easier to understand than the highly technical and formalized way the source puts it. --Jens Lallensack (talk) 08:43, 3 November 2025 (UTC)[reply]
Could I formally request that @The Knowledge Pirate: hold off on trimming any content he deems a copyvio until this discussion is done? When I started, I did not always add CC-BY and PD US Government tags. Adding these is of course no issue, but there are also many articles I made using sources that are not under a 'free' licence. Following his reasoning, these would be copyvios, and thus be removed. However, if the conclusion of this discussion is that they are not, they would have been removed and rev-del'd for nothing. B33tleMania12 (talk) 14:26, 3 November 2025 (UTC)[reply]
I agree it's better to have it rephrased in a more accessible format, but overall I think it should be ok to just copy-paste the defining species description, as long as this is legal of course -- it's much better to have than not, and rephrasing/explaining is a lot of work that can be done slowly after the description has been copied in: there are a lot of beetle articles. Mrfoogles (talk) 17:47, 5 November 2025 (UTC)[reply]
This is something that needs to be done properly rather than hedged around. A species definition is really important, because apart from the type-specimen, bottled in some collection, it is the definition of the species. From the outset, the person who publishes a description expects that description to govern what individuals are identified as belonging to the species, so if they describe Collywobbula thingii as "6 legs, brown thorax, pointy bit at end" they are expecting everyone else to report that they've found a C. thingii because it has 6 legs, a brown thorax, and a pointy bit at the end. There is no creativity, and there is a community understanding that this description is now common knowledge used by all in exactly the terminology stated. Once you start reporting it as having 6 appendages, a tan middle-section and a tapering terminus, all sorts of things go wrong. Things with 4 legs and two funny-looking stick-like bits, a dark-yellowy thorax and a round end that's thinner than the rest start to match the description. Further, a lot of the terms used in species descriptions are very formalised and themselves defined. If you ask two taxonomists/plant anatomists to describe the same plant, they will use the same terminology, and their descriptions are likely to sound like very close paraphrasing; they will automatically describe a flower as having three sepals and five petals, and a leaf as being glabrous on the abaxial surface because sepals, petals, three, five, abaxial and glabrous are the right words for the job, and petal-number, sepal-number, hairiness, are all characteristics that typically allow identification of plants. There is very little flexibility; it's a bit like accusing two kids of copying each other's homework because when they were asked to write what colour grass is, they both said "the grass is green" Elemimele (talk) 17:56, 12 November 2025 (UTC)[reply]
Thanks for this. I think it's important for editors to understand that facts (e.g., grass=green) cannot be copyrighted, nor does rephrasing a bullet list ("* grass color: green") into a sentence ("the grass is green") violate the list's copyright. When the order of facts is prescribed/conventional (e.g., describe animals from head to tail, top to bottom), the order can't be copyrighted either. Selection of facts (I put in the interesting bits; I leave out the boring bits) can sometimes be copyrighted.
IMO we should not copy species descriptions word for word (especially from sources that aren't writing in complete sentences!), but we should also expect all of the nouns and adjectives in both the source description and our article to be exactly the same.
Concur with the voice of reason. Sometimes messing with the correct terminology messes with the meaning and we end up with unintended original research by the Wikipedia definition. Cheers, · · · Peter Southwood(talk): 20:12, 12 November 2025 (UTC)[reply]
Thanks from me too. To me this sounds like moving towards outright encouraging the use of these species descriptions? I would be happy if we did. B33tleMania12 (talk) 21:26, 12 November 2025 (UTC)[reply]
When a new COI edit request is posted, it appears on Category:Wikipedia conflict of interest edit requests. When a volunteer starts to address the request, it can be tagged with the {{started}} template. But we still have to click through to each request on the talk page to see if it's been tagged with "started" yet. It would save time if the presence of the started template triggered some kind of visual alert on the category page. Currently, a lot of real estate and color coding is devoted to showing that an article is edit protected, but that has very little impact on most editors handling these requests. Instead, if a field could be used to simply say "started", or "new" (default), it would make it easier for volunteers to clear the queue by highlighting new requests that aren't already being worked on by someone else. STEMinfo (talk) 23:46, 4 November 2025 (UTC)[reply]
@Jlwoodwa: Yes - I didn't know there was another location for the queue. On the link you shared, there's even more empty space, so it seems there would be room to put in a "started" icon or the word "started" in a dedicated column to help the volunteers. STEMinfo (talk) 00:07, 8 November 2025 (UTC)[reply]
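For illustration only, here is a minimal user-script sketch of the kind of check being asked for, assuming the category name above and a literal {{started}} template; the 50-page limit and console output are placeholders, and a real script would need paging, error handling, and a sensible display.
<syntaxhighlight lang="javascript">
// Hypothetical sketch: list pages in the COI edit-request category and note
// whether each one already contains {{started}}.
mw.loader.using( 'mediawiki.api' ).then( function () {
    new mw.Api().get( {
        action: 'query',
        generator: 'categorymembers',
        gcmtitle: 'Category:Wikipedia conflict of interest edit requests',
        gcmlimit: 50, // illustrative; a real script would page through results
        prop: 'revisions',
        rvprop: 'content',
        rvslots: 'main',
        formatversion: 2
    } ).then( function ( data ) {
        ( data.query.pages || [] ).forEach( function ( page ) {
            var wikitext = page.revisions[ 0 ].slots.main.content;
            var started = /\{\{\s*started\b/i.test( wikitext );
            console.log( ( started ? '[started] ' : '[new]     ' ) + page.title );
        } );
    } );
} );
</syntaxhighlight>
Whether something like this should live in a gadget, in a bot that updates a report page, or as an extra column on the existing queue is a separate question.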
@STEMinfo, when was the last time you had an actual problem with wasted work because someone else was answering the same request that you picked?
There are usually about 200 open requests on that page, and I would be surprised if there were even 10 editors using the list (the cat gets about 20 page views per day). I estimate the odds of a conflict as being significantly less than 1% per article chosen, especially if you're picking an article that isn't one of the very top or very bottom entries. WhatamIdoing (talk) 18:25, 12 November 2025 (UTC)[reply]
I do not believe this message, which appears when a temporary account attempts to exit its session, is necessary. The wikilinks in the message are currently broken due to T409630, and no good faith user would believe that it is ok to disrupt Wikipedia, evade a block or ban, or to avoid detection or sanctions. The exit session dialogue is already cluttered enough, and the message can come across as assuming bad faith. Catalk to me!13:15, 8 November 2025 (UTC)[reply]
We have disabled system messages before; simply replacing them with a - is usually enough to hide them. As for the message itself, I'm all for simplifying interface messages (as long as they're still informative enough) so I have no major issues with this message being hidden for us. —k6ka🍁 (Talk · Contributions) 14:05, 8 November 2025 (UTC)[reply]
Ah yes, that feature wasn't too well documented. Yes, users of temporary accounts can use the "End Session" button to essentially log out of their temporary account (forever), no cookie-clearing required. I suppose there is a concern that it could be used for abuse, but it's not like a warning message would stop determined malice anyway. —k6ka🍁 (Talk · Contributions) 16:48, 8 November 2025 (UTC)[reply]
At a minimum, I support disabling the "Exit session" feature for blocked temporary accounts. Even if this only stops less determined vandals, removing the feature would still reduce the anti-vandalism workload. — Newslingertalk16:15, 10 November 2025 (UTC)[reply]
I agree that being "logged in" to a temporary account offers a worse visual experience than being logged out. As someone who spends a lot more time reading than editing, I'll log out of a temporary account after making an edit to get back to normal. ~2025-32801-03 (talk) 11:24, 11 November 2025 (UTC)[reply]
I don't think it is reader-friendly or useful if a template e.g. has a link to "The Princess of Hanover" when we actually mean Princess Caroline of Monaco, to "The Dowager Princess of Sayn-Wittgenstein-Berleburg" when we mean Princess Benedikte of Denmark, "The Emperor Emeritus" when we mean Akihito, or to "The Duke of Sussex" when we mean Prince Harry, Duke of Sussex. I would propose as a rule that these templates should use the article titles they link to (minus unnecessary disambiguation if applicable) instead of the formal titles. Thoughts? Fram (talk) 09:45, 10 November 2025 (UTC)[reply]
Hardly a good reason to keep these, as their position in the tree will often change anyway when the title holder changes (e.g. switching of King and Queen in the UK a few years ago). Fram (talk) 14:10, 10 November 2025 (UTC)[reply]
In many cases these are not really "positions" that transfer from one person to another. For example, there was no Duke of Sussex for 175 years until the title was recreated for Harry in 2018. It's possible there will never be another Duke of Sussex when he no longer holds the title. I think Fram is right here, using the article titles is much better for clarity and ease of navigation. I'll also point out that these are borderline WP:EASTEREGGs: in the British template, the link text is "The Duke of Edinburgh" but does not connect readers with Duke of Edinburgh (etc). —Rutebega (talk) 21:18, 13 November 2025 (UTC)[reply]
Should we adopt the new "protection padlock" feature?
Requires extra editor attention. On the English Wikipedia, bots and scripts are used to add the {{Protection padlock}} template.
Clutters the wikicode of the page, especially since it is placed at the top.
Adds two extra edits to a page's history (one when the page is protected to add the template, and a second one, after the protection expires, to remove it) in addition to the protection revision history lines
Inconsistent behavior across wikis causes confusion. For admins on the English Wikipedia a common pattern is: a page is protected with Twinkle, automatically adding the {{Protection padlock}} template, but the page then needs to be reverted to remove vandalism, which also removes the template and requires yet another edit to re-add it.
MediaWiki can now display a page indicator automatically while a page is protected. This feature is disabled by default. It can be enabled by community request.
Starting with MediaWiki 1.43, protection indicators (small lock icons at the top of a page) can be shown automatically while a page is protected. This feature can be enabled using the setting $wgEnableProtectionIndicators.
The ability to distinguish between edit protection and move protection. The "protection indicator" feature seems to allow customization by protection level (full, extended, semi) and duration (finite vs. indefinite), but not by edit vs. move protection.
How much do these features actually matter? As a lowly IP editor temporary account holder, no idea. Admins would presumably have a better idea of that. Technically, some of these features could probably be added to the new "protection indicators" with additional CSS and templates, but the new system would end up no simpler than the present one. ~2025-32085-07 (talk) 18:44, 11 November 2025 (UTC)[reply]
My first thought is that there would be no harm in adding a large banner in addition to the automatic indicator if one is justified, so the change would be positive for those pages which only have a small icon (most) and neutral for those which have a banner. However, that doesn't account for the categorisation and tooltip issues brought up by 32085-07. Thryduulf (talk) 19:48, 11 November 2025 (UTC)[reply]
$wgEnableProtectionIndicators is currently set to 'true' on az.wikipedia and sr.wikipedia so I checked those sites to get a better feel for what this could look like. Here are some examples of protected pages from the Azerbaijani and Serbian Wikipedias:
Summary of what I figured out from reading the documentation and looking at its use in production:
The feature appears as a page indicator icon near the page title.
By default, the icon is a black padlock which does not vary by the level of protection (as on sr.wikipedia).
Using CSS it is possible to set up different icons for semi-protected pages, extended-protected pages, fully protected pages, etc. (as on az.wikipedia); a rough client-side sketch of this kind of per-level customization follows after this list.
It only seems to deal with edit protection. A fully move-protected page without edit protection has no indicator icon whatsoever. Perhaps that feature will be added in a future version.
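As a rough illustration of the "additional CSS and templates" idea mentioned above: a small local gadget could read the protection levels that core MediaWiki already exposes to scripts (wgRestrictionEdit / wgRestrictionMove) and add body classes, which site CSS could then target for per-level icons or to flag move-only protection. This is a sketch only; the class names are invented for the example, and it says nothing about the built-in feature's own markup.
<syntaxhighlight lang="javascript">
// Illustrative gadget sketch, not part of the new feature: expose the page's
// protection levels as body classes so local CSS can vary the display.
var editLevels = mw.config.get( 'wgRestrictionEdit' ) || [];
var moveLevels = mw.config.get( 'wgRestrictionMove' ) || [];

editLevels.forEach( function ( level ) {
    // e.g. page-edit-protected-autoconfirmed, page-edit-protected-sysop
    document.body.classList.add( 'page-edit-protected-' + level );
} );

if ( moveLevels.length && !editLevels.length ) {
    // The built-in indicator reportedly ignores move-only protection.
    document.body.classList.add( 'page-move-protected-only' );
}
</syntaxhighlight>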
I find the different colors and letters on the protection padlock icons very useful. According to the anon's research above, this new feature somehow doesn't provide these basic distinctions. I don't think the advantages outweigh the missing features yet, but we should keep an eye on it to see if this is a feature that the WMF continues to develop or abandons half-built. If there are phab tasks for this new feature, someone could add a link to our protection icons so that they can catch up with what our volunteers have developed here at en.WP. – Jonesey95 (talk) 01:33, 12 November 2025 (UTC)[reply]
With appropriate global CSS, it looks like the feature could handle the generic protection cases. You can see this on azwiki, where they've done some of that. Anything else, including where we want the title-text on the icon to talk about BLP or the like (e.g. {{pp-blp}}) instead of a generic message or when we want custom categorization, would still need a template overriding the feature. Anomie⚔02:51, 12 November 2025 (UTC)[reply]
This specific feature was written by a volunteer, not the WMF. And oppose enabling it here - this seems like a situation where it would be better to leave things working in the way they have been for decades rather than having two different ways to do the thing. * Pppery *it has begun...21:01, 12 November 2025 (UTC)[reply]
As far as I can tell, the new protection indicators can not add pages to categories, so automatic categorization into the Wikipedia protected pages category tree would not work anymore. I'm guessing quite a few workflows (both from human editors and bots) depend on these categories? How big of an issue would that be? --rchard2scout (talk) 10:06, 12 November 2025 (UTC)[reply]
Good-faith new users often interact with experienced editors mostly via templated warnings, declines, or draftification notices. Ideally, we would personalise these messages more, but there's insufficient capacity for that. I don't think these types of notices are that effective in teaching new editors how we do stuff.
What if we were to include an optional quiz as part of these notices? This could for instance test somebody's understanding of the text of the notice, and show where their understanding might still be lacking. For instance, for GNG, we might ask 3 questions where they assess if a source counts towards notability. For copyright violations, we can e.g. ask them a question about what to do if the source doesn't explicitly have a copyright notice.
I think this might have the potential to teach more newbies how Wikipedia works, and hopefully lead to fewer reverts or even blocks. Curious to hear what others think. Is there already a way to A/B test changes like this? —Femke 🐦 (talk) 16:37, 15 October 2025 (UTC)[reply]
It may work, but the problem is that if you ask those questions of two experienced Wikipedians you are likely to get seven different answers. Phil Bridger (talk) 16:51, 15 October 2025 (UTC)[reply]
A good quiz would likely directly get examples from the relevant policies or other simple examples where we do agree. It's about teaching the basics, not the complicated stuff. —Femke 🐦 (talk) 17:04, 15 October 2025 (UTC)[reply]
That assumes we even agree about what's directly in the policies. 😉 There are a decent number of things I think people have snuck into various policies to give them an easier time arguing against things than they'd have actually discussing whether the sources are reliable and so on. Anomie⚔17:49, 15 October 2025 (UTC)[reply]
The suggestion that we should quiz editors implies that we should take action against them if they answer the quiz incorrectly? But doesn't that presume that they would continue to make the same kinds of edits that led to them being warned, which may not be a valid assumption? DonIago (talk) 16:56, 15 October 2025 (UTC)[reply]
Perhaps it could be arranged a bit like the DYK section of the main page. A declined AfC on the basis of poor sourcing could be accompanied by a friendly "Did you know: in 1832 the Wikipedia community decided that the Daily Mail is as reliable as a chocolate teapot and anyone attempting to use it as a source would be ridiculed mercilessly?" (only replaced with accurate and less facetious text, of course). Or copyright template warnings could come with "Did you know: copyright extends not just to copy-pasting blocks of text, but also to close paraphrasing?" (probably accompanied by appropriate links to help-pages and policies). Elemimele (talk) 17:04, 15 October 2025 (UTC)[reply]
Some of our notices are a big block of shouty text. Too much bolding, linking etc to convey the message. Designing them better with a highlighted example might make sense. Examples are always good didactically. If we try this too, can we already A/B test this? —Femke 🐦 (talk) 18:00, 15 October 2025 (UTC)[reply]
The idea is that this is to make it easier to learn stuff, in the first instance. If this works, we might consider adding it as part of the unblock process, where it forms the first step for quizzable 'offenses' (copyvio etc). But that's a question for later. —Femke 🐦 (talk) 17:07, 15 October 2025 (UTC)[reply]
I think you need to clarify your idea of how this process would work, because there's a big difference between giving a new editor a non-required quiz after they've received one warning, and giving a non-new editor a required quiz as a condition of their being unblocked. In your OP you specified "newbies", and I think it may fall afoul of WP:AGF to require new users to have to pass quizzes to resume editing; at worst you're effectively blocking them for a single offense. DonIago (talk) 17:17, 15 October 2025 (UTC)[reply]
The idea is that this is optional. They get a warning. As part of that warning is a shiny button saying 'test your knowledge'. When they make a mistake, they get a two-sentence explanation of how the policy works in that example. If they prefer to instead just click the link and read the relevant policy, that's fine too.
If this works, we might want to include something in the unblock process as well, where it might not be optional. But it needs to be thoroughly tested as an optional system before any of that could happen. —Femke 🐦 (talk) 17:56, 15 October 2025 (UTC)[reply]
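As a feasibility sketch only: the 'test your knowledge' button described above could be a small gadget attached to the warning template's wrapper. Everything here is invented for illustration (the "quiz-notice" class, the sample question and explanation, the prompt-based UI); it is not a proposal for specific quiz content.
<syntaxhighlight lang="javascript">
// Hypothetical sketch: attach an optional one-question quiz to warning
// notices that carry a (made-up) "quiz-notice" class.
$( '.quiz-notice' ).each( function () {
    var $notice = $( this );
    // Example question; real questions would come from the template itself.
    var quiz = {
        question: 'A blog post written by the article subject: does it count towards notability?',
        answers: [ 'Yes', 'No' ],
        correct: 2,
        explanation: 'Sources written by the subject are not independent, so they do not count towards the notability guideline.'
    };
    $( '<button>' )
        .text( 'Test your knowledge' )
        .appendTo( $notice )
        .on( 'click', function () {
            var reply = window.prompt( quiz.question + '\n1) ' + quiz.answers[ 0 ] + '\n2) ' + quiz.answers[ 1 ] );
            if ( reply === null ) {
                return; // user cancelled
            }
            if ( parseInt( reply, 10 ) === quiz.correct ) {
                mw.notify( 'Correct!' );
            } else {
                mw.notify( quiz.explanation ); // the "two-sentence explanation"
            }
        } );
} );
</syntaxhighlight>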
Sorry, I somehow lost track of this. Also, sorry if I missed your original stipulation that this would be optional, as that makes this more of a fun learning tool rather than anything else, which obviously changes the scope of it. I'm open to the possibility of this, though it might be more complicated than we anticipate to come up with quizzes with unambiguously right answers. Still, I don't have any objection to pursuing it, though I agree with you that there's going to need to be some thorough testing involved. Even a question as ostensibly simple as, "Does statement X need to be accompanied by a citation?" is the kind of question that different readers may interpret in ways that could lead to conflicting responses. DonIago (talk) 03:03, 27 October 2025 (UTC)[reply]
I don't think that's the intention. I see it as a way to identify areas for improvement in editors (particularly, those who are genuinely trying to help but don't know much about our policies), not a punishment. Rosaece ♡ talk ♡ contribs12:28, 16 October 2025 (UTC)[reply]
Well, for copyright specifically, I often make a follow up comment to give the editor a link to the WMF student training module on plagiarism. Unclear success rate, given that people don't tell me if they've done it, but at least it makes me feel better if I have to AN/I them later. I certainly don't think it would be a bad idea to include links to the other modules in standard warning templates. They're not half bad, and they already exist. GreenLipstickLesbian💌🦋18:12, 15 October 2025 (UTC)[reply]
I think this is a great idea.
Quizzes can be optional and voluntary too. For example, a quiz about core content policies.
I think this is a great idea. Currently warnings describe what an editor has done wrong, but most of them don't really explain why it's wrong. Encouraging editors to learn from their mistakes is a good form of editor retention. Rosaece ♡ talk ♡ contribs12:35, 16 October 2025 (UTC)[reply]
In principle I like it a lot. I think it is worth a try, and suggest going ahead and providing a few samples for us to consider. I also suggest a database/list of appropriate questions connected with specific editing errors as a way to workshop it. · · · Peter Southwood(talk): 05:58, 23 October 2025 (UTC)[reply]
An alternative is not to use templates. Or at least not shiny, icon-laden, box-surrounded templates. New users often assume these are automatically generated and pay no attention. All the best: RichFarmbrough13:47, 2 November 2025 (UTC).[reply]
Another thing that would be lovely to test, if we can do A/B testing, is making those notices seem hand-written and less formal. Would that make people pay more attention? —Femke 🐦 (talk) 13:49, 2 November 2025 (UTC)[reply]
I saw this thread yesterday and I wanted to chime in with this idea I had, but I waited too long to act on it and now it's archived. So I guess I'll have to make a new thread.
It's clear that lots of new editors struggle making good content with AI assistance, and something has to be done. WP:G15 is already a good start, but I think restrictions can be extended further. Extended confirmation on Wikipedia is already somewhat of a benchmark to qualify editors to edit contentious articles, and I think the same criteria would do well to stop the worst AI slop from infecting mainspace. As for how this would be implemented, I'm not sure - a policy would allow human intervention, but a bot designed like ClueBot NG might automate the process if someone knows how to build one. Koopinator (talk) 10:50, 18 October 2025 (UTC)[reply]
I don't see a practical way to enforce that. I also don't think that people's skill level with AI can transfer to an assessment of their skill level on Wikipedia. —TheDJ (talk • contribs) 11:31, 18 October 2025 (UTC)[reply]
Regarding enforcement, I would suggest:
1. Looking at whatever process ClueBot uses to detect and evaluate new edits, and adding an "extended confirmed/non-ec" clause (a minimal sketch of such a check follows below).
1.1. I will admit I'm not entirely sure of how this would work on a technical level, which is why I posted this idea in the idea lab.
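For what the "extended confirmed/non-ec" clause could look like in practice, here is a minimal client-side sketch (not ClueBot's actual code, which is a separate codebase): it simply asks the API whether a given editor holds the extendedconfirmed flag. The username is a placeholder.
<syntaxhighlight lang="javascript">
// Sketch only: look up an editor's groups and check for extended confirmed.
new mw.Api().get( {
    action: 'query',
    list: 'users',
    ususers: 'ExampleUser', // hypothetical username
    usprop: 'groups',
    formatversion: 2
} ).then( function ( data ) {
    var user = data.query.users[ 0 ] || {};
    var groups = user.groups || [];
    var isExtendedConfirmed = groups.indexOf( 'extendedconfirmed' ) !== -1;
    console.log( ( user.name || 'ExampleUser' ) + ' extended confirmed: ' + isExtendedConfirmed );
} );
</syntaxhighlight>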
Too sweeping an opinion in my opinion. First you would have to be talking about specifically using unsupervised AI to write articles. Secondly I think it would be "insistence" rather than "willingness". And thirdly it could well be a WP:CIR or user education issue rather than a NOTHERE one. All the best: RichFarmbrough18:03, 6 November 2025 (UTC).[reply]
I would say it's a reasonable inference. Here's what I can say:
We can expect that extended-confirmed users are more likely to be familiar with Wikipedia's policies and guidelines, by virtue of having been here longer.
[43] LLM edit with no sources, survived for almost 2 months. Was created by an editor who was neither confirmed nor extended confirmed.
[44] Personal project by yours truly, AI assistance was used, careful review of text-source integrity of every sentence as I constructed the page in my sandbox over the course of 59 days before airing it.
I admit none of this is hard evidence.
I do feel LLM has its place on the site (otherwise I wouldn't have used ChatGPT assistance in constructing a page), but if it's allowed, the barrier for usage really should be heightened. Wikipedia's content translation tool is also restricted to extended-confirmed users.
LLM detection for text is very hard and has far, far too many false positives, especially for non-native speakers and certain wavelengths of autism. Aaron Liu (talk) 16:41, 18 October 2025 (UTC)[reply]
^ This is my experience. Also, a lot of edits are too brief for the already-dodgy AI "detectors" to assess reliably.
@Koopinator, you've made around 2,000 mainspace edits in the last ~2 years. Here's a complete list of all your edits that the visual editor could detect as being more than a handful of words added.[45] It's 78 edits (4% of your edits) – less than once a week on average. And I'd guess that half of your content additions are too short to have any chance of using an anti-AI tool on, so the anti-AI tool would check your edits two or three times a month. Why build something, if it could only be useful so rarely? WhatamIdoing (talk) 00:58, 19 October 2025 (UTC)[reply]
Well, how would that tool's frequency scale across the entire Wikipedia community? I'd imagine it'd be used at least a little bit more often then. (or, I imagine, multiple orders of magnitude) Koopinator (talk) 05:55, 19 October 2025 (UTC)[reply]
For brand-new editors, it might capture something on the order of half of mainspace edits. High-volume editors are much more likely to edit without adding any content, so it'd be much less useful for that group. WhatamIdoing (talk) 19:54, 23 October 2025 (UTC)[reply]
It should be possible to detect low-hanging-fruit AI text, based on certain common features. Raw AI inference cut and pasted from a chat bot is going to be easier to detect. I agree that the type of user doing this probably has no reputation at stake, doesn't care very much, and is more likely to be a newbie and/or a non-native speaker from another wiki. I don't know about policy, but a bot could send a talk page notice, or flag the edit summary with a "[possible ai]" tag. No one is already working on this? -- GreenC17:10, 18 October 2025 (UTC)[reply]
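To make "low-hanging fruit" concrete, here is a toy sketch of the kind of surface patterns such a bot might look for in newly added text. The patterns are invented for illustration; as noted elsewhere in this thread, anything like this has serious false-positive problems and would need far more care before tagging edits or notifying anyone.
<syntaxhighlight lang="javascript">
// Toy heuristic only: flag text that looks like raw chatbot output pasted in.
var llmTells = [
    /\bas an ai language model\b/i,
    /\bi hope this helps\b/i,
    /\bhere(?:'|’)s a (?:summary|breakdown|overview) of\b/i,
    /\bit(?:'|’)s important to note that\b/i,
    /\bknowledge cutoff\b/i
];

function looksLikeRawLlmPaste( addedText ) {
    return llmTells.some( function ( re ) {
        return re.test( addedText );
    } );
}

// e.g. looksLikeRawLlmPaste( 'As an AI language model, I cannot ...' ) === true
</syntaxhighlight>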
mw:Edit check/Tone Check uses a Small language model to detect promotionalism. (See tagged edits.) I'd guess that it would be possible to add an AI detector to that, though the volume involved would mean the WMF would need to host their own or pay for a corporate license and address the privacy problems.
I think AI edits should be mandatory for everyone to disclose, both in articles and talk pages. There could be a box where you check it if your content comes from AI or is mostly AI, similar to how you can check minor edits. Bogazicili (talk) 18:40, 21 October 2025 (UTC)[reply]
I agree: Either it will allow the material to be posted and thus legitimize LLM use, or it won't allow the material to be posted and cause people to tell lies so they can get it posted. WhatamIdoing (talk) 02:18, 22 October 2025 (UTC)[reply]
LLM-generated content is a cancer on Wikipedia, and it will only get worse. "AI detectors" have many false positives, as do checks made by editors themselves, but just because we can't reliably detect something today doesn't mean we shouldn't implement a policy against it. I support mandating the disclosure of LLM-generated contributions by all users. We don't treat WP:GNG differently on articles created by extended-confirmed users or others, we shouldn't do it here either. Merko (talk) 22:21, 21 October 2025 (UTC)[reply]
If you think original content generated by a program is a negative to that extent, then I don't think requiring disclosure is the appropriate approach, since that would only be a prelude to removal. We should skip straight to requiring editors not to use programs to generate original content. isaacl (talk) 04:38, 22 October 2025 (UTC)[reply]
IP editing actually isn't that much of a problem here -- in my experience almost all AI text I find came from someone with a registered account. Off the top of my head I'd say less than 10% of it comes from IPs.
I came here to propose pretty much the same thing (policy, not bot). Having a blanket rule would be hugely helpful in dealing with editors, since it can get very tedious explaining why each AI edit they claim to have checked is in fact problematic. I might even go so far as to propose a separate user right (or pseudo-right?) called something like LLM user, for editors who can demonstrate they are sufficiently competent with content policies and have a legitimate use case. I don't think such a right should convey any actual abilities, but users found to be using LLMs without it could then be much more easily censured and guided towards other forms of editing. Applying exactly the same system but tying it to extended confirmation seems like it minimizes potential rule creep, but it's a blunter filter which might not be as effective, since I'm sure there are plenty of extended confirmed users who lack the requisite understanding of policy. lp0 on fire()21:03, 10 November 2025 (UTC)[reply]
Can we consider changing the license of codepages (such as MediaWiki:Common.js) from GFDL to GPL (or another appropriate software license)?
As one who is on the more technical side of things on Wikipedia, GFDL does not appear to be a suitable license for programming code, nor does CC BY-SA.
According to this page, we may be able to relicense CC BY-SA code under GPL 3, since CC BY-SA content is compatible with GPL 3, but not the other way around. It also allows for the use of several alternatively licensed programs on wiki, such as MIT and Apache. The only problem I see is that use is limited to noncommercial use, but that might be solved with the Lesser GPL. Aasim (話す) 16:04, 25 October 2025 (UTC)
I think this would be helpful especially for user scripts, templates, modules, and other wikitext that describe programs rather than articles. Aasim (話す) 16:04, 25 October 2025 (UTC)[reply]
I'm honestly surprised that we haven't had a discussion like this yet. Is this change needed? The only software license compatible with CC BY-SA is GPL, so I'd assume people would be incorporating our code under GPL.
use is limited to noncommercial use
No, that's CC BY-SA-NC, not CC BY-SA. And anyhow, you cannot use a CC BY-SA work under solely Lesser GPL terms, because the latter is not a compatible license. Aaron Liu (talk) 16:37, 25 October 2025 (UTC)[reply]
I was talking about the GPL. All the way at the bottom it states that one cannot use GPL in proprietary code (oh wait I mixed up proprietary with commercial). Okay that seems consistent with CC BY-SA which requires that any modifications be also released under CC BY-SA. Aasim (話す) 16:42, 25 October 2025 (UTC)[reply]
There's little benefit, and considerable drawbacks. Anyone wanting to reuse our javascript under the GPLv3 can already do so; anyone wanting to reuse post-migration changes to our javascript under CC-BY-SA-4.0 would be unable to. The latter includes, in particular, other-language Wikipedias and other Wikimedia projects. —Cryptic17:03, 25 October 2025 (UTC)[reply]
I was wondering if it would require a Wikimedia-wide RfC, since we would not be able to unilaterally change the license on just one wiki (WMF could pull the WP:CONEXEMPT card and refuse to make these changes).
The reason I was discussing this goes back to this thread where Enby and L235 licensed the unblock wizard under MIT rather than CC BY-SA. I was under the impression that MIT might be inherently compatible with CC BY-SA, as all it requires is that the same copyright notice be published on all source copies of the work. A lot of code useful to Wikimedia projects is on GitHub, GitLab, etc. under a variety of different licenses (WP:UV and earlier WP:RW are both Apache licensed, and they had to be specifically licensed under CC BY-SA for use on Wikimedia). This licensing mess with code can be avoided if we either (a) allowed users to import code in a similar manner that we allow them to upload files under a compatible license (and display that license prominently on the code page), or (b) chose a license so that no matter which code was imported it could be relicensed under that license. Aasim (話す) 18:47, 25 October 2025 (UTC)[reply]
The reason for requiring all uploads to have specific licenses is so that any reusers know they only have to deal with those licences. Your proposal B would break this goal. Proposal A is already possible. isaacl (talk) 18:53, 25 October 2025 (UTC)[reply]
I don't know if it is possible with code pages and templates right now. Currently what it says is "Text is available under the Creative Commons Attribution-ShareAlike 4.0 License; additional terms may apply" which if displayed at the bottom of code pages would imply code pages are also licensed as such (which I presume they are). I have posted stuff with an MIT license header on Wikipedia for example (see User:Awesome Aasim/rcpatrol.js and User:Awesome Aasim/CatMan.js) following what I saw with RedWarn. The only way to use off wiki scripts that are licensed under a different free license other than CC BY-SA would be to post them on wiki and use them under the CC BY-SA.
IMHO the GFDL makes less sense for code pages than GPL though (we have deprecated GFDL for most media files already).
We could also change the footer in user, template, module, and MediaWiki space to say "Content licensed under CC BY-SA unless otherwise noted" (as on Fandom wikis). Aasim (話す) 19:39, 25 October 2025 (UTC)[reply]
We don't prevent contributors from dual licensing their work under both MIT and CC BY-SA. If you mean upload someone else's code, then if the licence permits the work to be shared under CC BY-SA it can be done. As I said, the whole point is to allow easy reuse by knowing everything is licensed the same way. Adding more complexity in figuring out licensing makes it harder for reusers, not easier. isaacl (talk) 20:44, 25 October 2025 (UTC)[reply]
If it's a single author, or just a few people who're still active, they could probably agree to multi-license the code under the GPL or other licenses in addition to the standard GFDL and CC BY-SA licenses applied to all text content. There's also the fact that CC BY-SA 4.0 is one-way compatible with GPL 3. If there are many contributors, though, as for something like MediaWiki:Common.js, getting that agreement may be difficult. Anomie⚔17:15, 25 October 2025 (UTC)[reply]
By its nature, Javascript/Lua/wikitext code on Wikipedia is inherently available, which makes one of the motivations for using a software-specific licence less compelling. In theory there could be a different licence for pages hosting code, but it might be unduly confusing to less technically oriented contributors. Specifically regarding GPL, the question is whether or not it's desirable to require any incorporated libraries also be GPL. Mandating that code must be GPL-licensed would open up the possibility of using GPL-licensed libraries, but remove the possibility of using code that is not GPL compatible. isaacl (talk) 17:39, 25 October 2025 (UTC)[reply]
Maybe there is a more easily understandable default software license that is appropriate for code pages and user scripts. Or a cheat sheet showing which software licenses are compatible with CC BY-SA (I have not found any such page yet).
I feel like you're not trying to engage with the discussion about the purpose for licences. Concerns specific to compiled software aren't relevant to the interpreted code that is placed on Wikipedia pages. For reusers, fewer licences to deal with is better than more. I feel this follows a pattern for some of your proposals: they exhibit a partial understanding of the overall context, and your followup discussion fails to acknowledge explanations of this context. (Note Creative Commons maintains the list of licences it has evaluated to be compatible with specific versions of its licences. It's a short list.) isaacl (talk) 17:52, 30 October 2025 (UTC)[reply]
Maybe I am doing a terrible job at explaining myself.
With interpreted code, one can copy the software by copying the source code. I don't really understand the whole Concerns specific to compiled software then, as software licenses are often applied to the source code as well. If one has a proprietary program, and they reverse engineer and redistribute the source code, that source code is still infringing. One can change all of the variable names and it would still be infringing.
Isaacl is saying that the building and distribution concerns mentioned over using CC-BY for code don't apply to our code exactly because of what you say about our interpreted code. (Unsure about the modifiability part. And then the patent thing is still there, but probably not a concern.) Aaron Liu (talk) 23:09, 30 October 2025 (UTC)[reply]
If you're going to propose a change with the reasoning that GFDL does not appear to be a suitable license for programming code, nor does CC BY-SA, then you need to understand why that's the case. That's why I said your proposal isn't taking into account the overall context. Licences written for compiled software not only cover the human-readable source code, but the resulting work products, and have corresponding conditions. It would be more effective if you would investigate the context more fully before persisting in arguing for your proposals. (As I recall, I'm not the first person to say something to the same effect about your proposals.) isaacl (talk) 23:57, 30 October 2025 (UTC)[reply]
Once context is revealed, then it would be more effective to investigate further to understand that context, rather than persisting in arguing without taking that context into account. Otherwise, it feels like responses are pointless, since the proposer isn't considering them fully. isaacl (talk) 00:12, 31 October 2025 (UTC)[reply]
We can use GPL on any kind of work as long as we are clear as to what constitutes "source code" for that work (i.e. for a wiki the source code could be defined as what appears inside of the wikitext editor when one clicks the "Edit source" button)
GFDL (the current license in addition to the CC BY-SA) is suitable for reference works (like us); any scripts needed to render the document should also be licensed under GFDL (but can be dual licensed under GPL). But otherwise, it is recommended to use GPL.
This means that we probably have a default license that we can propose for user scripts (GPL) and gadgets (such as MediaWiki:Gadget-Twinkle.js), as well as some Lua functions that don't directly render text (such as Module:Yesno).
I see the primary benefit of adopting GPL as permitting users to import scripts from a variety of sources whose license is also compatible with GPL (such as MIT, Apache 2.0, etc.) without having to worry if the license is compatible with CC BY-SA. CC BY-SA only lists a few licenses as compatible, and only for derivative works. Perhaps Creative Commons' website is not up to date with all known compatible licenses.
I also did find this table on a GitHub-managed website. It's a lot of information, but it may be relevant if we discuss this in the WMF Village Pump (or even in a Wikimedia-wide RfC). Aasim (話す) 01:34, 31 October 2025 (UTC)[reply]
I asked you on your talk page to present a concrete example of a problem you are facing that would be helped by your proposal. Are you trying to reuse Javascript code, CSS, wikitext template markup, or Scribunto Lua code that is under GPL? Given that everyone who submits code to English Wikipedia pages has released it under CC BY-SA, there's no issue with reusing any of it in other English Wikipedia edits. The thread to which you linked regarding Chaotic Enby's script was not a problem precisely because all code submitted to Wikipedia has a known licence.
As Aaron Liu states, the Creative Commons list is the definitive list of licences that are compatible with their licences. It's written right in the licence itself; I recommend that you read it. If you aren't interested in discussing why Wikipedia, by design, requires all contributions have specific licences, nor understanding what compatibility means (editors cannot upload GPL code and re-licence it as CC BY-SA, as the GPL licence is more restrictive), then this discussion isn't going to progress. isaacl (talk) 03:26, 31 October 2025 (UTC)[reply]
I asked you on your talk page to present a concrete example of a problem you are facing that would be helped by your proposal. Okay, it is the use of MIT and Apache and similarly licensed content from external repositories on Wikipedia.
I do believe MIT license is compatible with CC BY-SA (and in fact have imported MIT content myself), as all it requires is that the copyright notice be posted crediting the authors (which probably can be done by linking to a copy of the license). However, if it is not, then:
If we need to use code from the over 50% of GitHub repositories that use the MIT license, we cannot.
If a script author wanted to expand and port their own version of an MIT licensed library that is bundled with MediaWiki (such as OOUI), they cannot.
On the other hand, there are other libraries licensed under a GPL compatible license (such as Material Symbols & Icons - Google Fonts) that, again, if Apache 2.0 is not compatible with CC BY-SA, cannot be put directly into any Wikipedia scripts.
Another example: p5.js (licensed under LGPLv2.1).
This problem can be solved with external loading, but loading external scripts can expose a user to cross site tracking, especially if the code is hosted off of Wikimedia. Also I believe if someone wants to host on Wikimedia, they need access to Wikimedia Cloud Services (WMCS).
All of these examples are known to be compatible with GPL (and thus could be used in Wikipedia scripts only if Wikipedia licensed code under a different license from the rest of the site). Aasim (話す) 04:24, 31 October 2025 (UTC)[reply]
And if you want to know where I imported this MIT content from: Fandom (specifically Fandom Dev Wiki which has the note "Community content is available under CC-BY-SA unless otherwise noted." at the bottom of nearly every page). Aasim (話す) 04:27, 31 October 2025 (UTC)[reply]
There are different versions of the MIT licence; to which are you referring? Regarding the variants that require that the MIT notice be preserved: CC BY-SA doesn't ensure this will happen. So the CC BY-SA licence alone is insufficient to meet the conditions for re-distributing the code under the MIT licence; the MIT licence needs to be kept for the applicable code. Is it possible to discuss a specific code example? Is it something that can be recreated independently?
Lesser GPL is problematic, since it would require the entire derived work incorporating it to be lesser GPL. To maintain separation, it would be better to serve lesser GPL-licensed Javascript libraries from a MediaWiki server for use by MediaWiki projects. This would make the library unmodifiable by website users, so licensing for derived works wouldn't be an issue. However I don't know the Foundation's policy on acceptable licences for software it serves. isaacl (talk) 04:58, 31 October 2025 (UTC)[reply]
I apologize for being imprecise. Knowing that Wikipedia content is uniformly available for re-use under a specific set of licences is better than having some pages available under some licences, while other pages are available under another. isaacl (talk) 23:45, 30 October 2025 (UTC)[reply]
Now that I've seen your reply above: The list is the licenses that CC BY-SA is compatible with, not the licenses that are compatible with CC BY-SA. Aaron Liu (talk) 02:33, 31 October 2025 (UTC)[reply]
This isn't the first time I've thought about how we could improve our blocking system (courtesy ping to Chaotic Enby who has been helping me with the unblock wizard). But I don't think the name of indefinite block really gets across to the average person that you aren't permanently banned. Obviously we don't want to never indef people à la Larry Sanger, but I do think it's probably better if we rename indefs to something like conditional block to make it clearer that you basically need to stop doing whatever it is that got you blocked to come back. I'm not sure if there'd need to be an additional "infinite" category when we already have arbcom blocks/community bans, but please let me know if I'm missing something obvious here. Clovermoss🍀(talk)12:50, 26 October 2025 (UTC)[reply]
Oh, sockpuppetry is probably the big exception to why getting rid of infinite blocks entirely wouldn't work (even if the master gets unblocked the socks wouldn't). So keep indefinite as an option but encourage a new category of conditional in block templates etc? Because I really do think this phrasing change would be a gamechanger. Clovermoss🍀(talk)12:56, 26 October 2025 (UTC)[reply]
Regarding conditional blocks, we already have WP:CONDUNBLOCK as a process, so that could work for blocks where a conditional unblock has been suggested (or similar situations such as username softblocks), but might be confusing for cases where there isn't a straightforward unblock condition the user can agree to. I agree with the general spirit of making it clearer that indefinite blocks can be appealed, but the issue is that these blocks often exist on a spectrum of how feasible they are to appeal, and not all of them are as simple as "agreeing to not do the same thing again". Since there isn't a clear-cut distinction between these, we need to find a word that invites blocked users to work on learning from their block and ultimately appeal instead of giving up, but doesn't give false hopes to users in tougher cases, where a successful appeal might be months or years down the line. Chaotic Enby (talk · contribs) 13:09, 26 October 2025 (UTC)[reply]
Any ideas for how to go about doing that? I don't see expanding conditional unblocks as necessarily being in conflict with the current process but I do want whatever we're coming up with to be practical yet helpful. Clovermoss🍀(talk)13:17, 26 October 2025 (UTC)[reply]
I would tend to agree with Thryduulf's suggestion of making "indefinite is not infinite" more prominent. It is true that these two words are quite similar-looking, which might lead to some confusion otherwise. Chaotic Enby (talk · contribs) 13:39, 26 October 2025 (UTC)[reply]
It's stated clearly in every block template that someone can appeal. If people see the word indefinite and stop reading the unblock template after that word, that's their problem. There will always be someone who finds something confusing or unclear. I'm not sure a change in terminology would fix any problems here. voorts (talk/contributions) 21:04, 27 October 2025 (UTC)[reply]
Might be too far in the other direction, but maybe "appealable block" or "fixable block" or "curable block" to distance from partial blocks/tbans, and differentiate from blocks like sockpuppetry/community bans/timeouts after appeals have become tendentious. Tazerdadog (talk) 13:21, 26 October 2025 (UTC)[reply]
All blocks are appealable, so that doesn't work. Partial blocks, tbans and at least some full blocks of finite length are also fixable/curable so I don't think that terminology is helpful either. Rather than changing the terminology, I think we need to make Indefinite does not mean "infinite" or "permanent" (from WP:INDEF) a lot more prominent. Thryduulf (talk) 13:27, 26 October 2025 (UTC)[reply]
The most practical way of doing that would be editing what's said in the Twinkle block templates. I think that would be a good idea and possibly easier to accomplish than renaming what the type of block is called. I wasn't expecting the idea to be as controversial as it was. Clovermoss🍀(talk)22:50, 26 October 2025 (UTC)[reply]
It may be treadmilling if we keep trying to come up with more "clear" language as newer people, only familiar with the latest language, become experienced and decide that the language they're used to isn't "clear" enough for even-newer people. Anomie⚔14:01, 26 October 2025 (UTC)[reply]
In this case, "7 years ago" does put you in the "only familiar with the latest language" group, as "indefinite" replaced "infinite" well before that.I do see how 7-years-ago-you might not interpret "indefinite block" as "block of indefinite duration", instead struggling to make sense of it as meaning something like "block that is vague or uncertain" or "block designating an unspecified or identified target". Until you encountered terminology like "temporary block" or "36-hour block" that should have pointed you in the right direction, or clicked a link like the one to Wikipedia:Blocking policy#Indefinite blocks in {{uw-block|indef=yes}} or the like that explains it directly. Anomie⚔14:27, 26 October 2025 (UTC)[reply]
My argument is that if you have to explain to someone that something does not mean what you think it does (indefinite is not a commonly used word and most people are going to assume they're blocked forever when hearing it), that's not ideal. I don't think we should give up trying to change things just because we've changed them before and have the survivorship bias of eventually learning what it means. Clovermoss🍀(talk)14:50, 26 October 2025 (UTC)[reply]
🤷 "People are too dumb to know what 'indefinite' means, or to look it up, or to read the links explaining it" isn't a claim that's worth arguing over. Anomie⚔14:54, 26 October 2025 (UTC)[reply]
Yeah, Wikipedians tend to be pretty great at giving words definitions that have little to nothing to do with their IRL definitions (see: WP:R3 "Recently created, implausible typos", our speedy deletion criteria for normal typos) - indefinite, however, means the same thing. I mean, there's no shame in not knowing a word, especially if you joined Wikipedia at a young age and perhaps had never come across it before, but this is one that I think most people should know how to look up in a dictionary. GreenLipstickLesbian💌🦋18:42, 26 October 2025 (UTC)[reply]
Why would you look up something when you are fairly sure you know what it means? I suspect the common understanding of indefinite for new people is infinite. Which is why we had to make that WP:INDEF. If most people are thrown by it, even if they are in the wrong, it is not ideal and creates unnecessary misunderstandings. 4.7.212.46 (talk) 19:00, 31 October 2025 (UTC)[reply]
Oh, no, you wouldn't - grew up in an immigrant house, and was literally just ranting this morning about how monocultural people seem so loath to look past their own idiolect. But, well, at least for "indefinite", the word is used the same way IRL as it is on Wikipedia - it means that something will stay in a condition until some factor changes. Yes, people will still misunderstand it - but many people also believe that something becomes their "own work" when they copy it or screenshot it - which is why we have Wikipedia:OWN WORK. Is that because we've chosen words that create 'unnecessary misunderstandings'? There's a point where, no matter how simple or monosyllabic the words are, you can't stop misunderstandings. In this case, I actually don't suspect that "indefinite"="infinite" is a common misunderstanding, and nor do I suspect that most people are "thrown" by it. What I suspect is that people get freaked out by the actual act of being blocked. And I'm not opposed to making that message clearer, but I don't see how. Adding more words? well, panicked people won't read more words - speaking as somebody with anxiety, the longer you make the block message, the less accessible it would be to me. (YMMV). Similarly, the longer and more complex a sentence is, the harder it is to read in your second language - for a simple example, I can pick up any dictionary and go "標準時"? Oh, that just means "time zone" - but replace it with "ある国家または広い地域が共通で使う地方時をいう" in a sentence, and now you've got to learn multiple grammar points and other words, then successfully push them together. Again, I don't think our block messages are that great - the second line "If you believe that there are good reasons for being unblocked" is the major sticking point for me, though. What on earth does that mean, "good reason"? An unfair block? Well, let's say the block was fair. So, there's no good reason - so okay, time to leave forever. Ditto "appeal", the word everybody is using in this conversation as if it's the least bit applicable, but, IRL, you only appeal a decision if it is flawed. But what if the choice to block wasn't flawed, I (as the blocked user) really did create a sock account, or add content cited to unsuitable sources? Then what's the point of appealing? There's none. In wiki speak, reversing a block often just means undoing it, I think, but not in the vast majority of contexts. Removing a word because it's long and could possibly be confused with "infinite", and replacing it with a shorter Wiki-word that makes no sense to the outside world... I'm not on board with that. I will save you from an even longer message, but I've had this "this word makes no sense in this context" response to all the alternatives. I mean, I don't know how to make the block message more clear. "You have been blocked [for OO time/indefinitely]. If you understand why you were blocked and promise not to break the rules again, you may ask to be unblocked. If you believe the block was unfair, you may appeal and your case will be reviewed by an uninvolved administrator" works for me, but would that work for other people? I don't know. GreenLipstickLesbian💌🦋19:54, 31 October 2025 (UTC)[reply]
If many people are resorting to legal threats because they don't understand what an indef block is, then it sounds like they don't have the temperament to edit here in any case and blocking them was a good idea. DonIago (talk) 17:36, 26 October 2025 (UTC)[reply]
Perhaps split indefs into 2 categories based on the actions needed to lift the block? a "quick-fixes block" for username issues, newbies who missed a memo on their first dozen edits, or veterans who need a rolled up newspaper, versus an "introspection needed block" for when the community is at the end of its rope, bigger issues, or where a simple acknowledgement of what went wrong and promise not to repeat it no longer suffices. Tazerdadog (talk) 14:37, 26 October 2025 (UTC)[reply]
That could be a good start, and formalize what is already the case to some extent, although some blocks are on a continuum between the two. If a block for a minor issue (say, a username softblock, or a block to get a user to communicate on their talk page) leads to more serious issues being discovered, would the user be "reblocked"? Clarifying the situation (and new expectations) to the user would certainly be helpful either way, but the software block itself shouldn't have to be changed. This does move the parameters of the block beyond the mere technical and towards the social (see Wikipedia:Blocks and bans, with community-consensus blocks being considered de facto CBANs due to their appeal requirements). However, this is already the case to some extent with the idea that blocks don't apply to an account but to a person, and this could serve to build a framework that could unify, alongside bans, the "social" aspect of blocking that a software block enforces, and sort them out in a more understandable way. Chaotic Enby (talk · contribs) 14:50, 26 October 2025 (UTC)[reply]
Is it really so common to think that "indefinite" means "infinite" or "permanent"? "Indefinite" simply means "not for a definite period". I would have thought that anyone thinking it means something else would not understand English well enough to be writing an English encyclopedia anyway. Phil Bridger (talk) 17:20, 26 October 2025 (UTC)[reply]
I'd say people use it often enough as a euphemism for "permanent", as in "postponed indefinitely". I shortcut the definition in my mind to "without end" from "without any current plans for an end, although an end may be possible in the future". I know what it actually means, but I also know how people use it. If someone says "You're banned for the foreseeable future", it's easy to take that to mean you'll never be allowed back again, even if that's not what it literally means. 207.11.240.38 (talk) 15:46, 28 October 2025 (UTC)[reply]
But the notices do also present options for appealing blocks, which to me undercuts the idea that they're for the foreseeable future, unless one considers the possibility of a successful appeal to be unforeseeable? Now I'm mildly curious as to how many blocks get overturned on their first (sincere) appeal. DonIago (talk) 16:12, 28 October 2025 (UTC)[reply]
All of the suggested new names are less clear than the original name. The blocks for a dozen socks with abusive usernames are not particularly well described as "conditional", and making two categories of indefinite blocks is a massive complication with little demonstrated benefit (if any). —Kusma (talk) 17:32, 26 October 2025 (UTC)[reply]
I think people can disagree on whether we should try this but I do believe that more people understanding that blocks aren't necessarily in place for eternity has huge benefits with few drawbacks. Clovermoss🍀(talk)18:20, 26 October 2025 (UTC)[reply]
My initial thoughts on the different types of bans that are enforced with indefinite blocks:
conditional bans have a very specific, easy to verify condition for unblocking. A username change is an example.
behavioural bans are made due to behaviour that is counter to English Wikipedia policy. The blocked user needs to convince the enacting authority that they can behave appropriately if unblocked.
site bans are made when the user is no longer welcome to participate in the community, due to a lack of trust that they will be able to behave appropriately
An advantage to focusing on the type of ban rather than the technical mechanism used to block a user is that it should lessen ambiguity. Today sometimes users propose a community indefinite block, not understanding that this has the same effect as proposing a site ban. Using categories based on the difficulty of appeal would make the consequences of enacting a ban more evident. isaacl (talk) 17:41, 26 October 2025 (UTC)[reply]
The blocked user needs to convince the enacting authority that they can behave appropriately if unblocked
Isn't that true for all blocks though? The main difference is which authority - cbans go to the community, arb bans go to arbcom, blocks by a single admin go to any random admin; the actual trust/welcomeness factor may not be all that relevant. For example, the blocks of editors like ClemRutter, while the actual editor is welcomed by many, are ultimately CIR blocks that aren't going to be undone again, likely ever. Creating a system that puts him in a lesser category than "idiot ten year old who made a bunch of socks, came back at age 13 and is trying to be a productive editor" just creates ambiguity, confusion, and false hope - putting him in a greater category is just going to cause needless offense and pain. (The second is also a real example; not linking because I had to forward that one to an OS, and neither of us seemed to think a block was called for despite the ban evasion.) GreenLipstickLesbian💌🦋18:25, 26 October 2025 (UTC)[reply]
When you said that some editors aren't ever going to get unblocked, I was under the impression you meant that there are some bans where the banned user isn't ever going to convince the enacting authority that they can behave appropriately. Thus, I don't think it is true for all bans. An indefinite block is the tool for enforcing a restriction, not the actual restriction itself. I think the best way to communicate the route to return to editing is to explain the restriction and the reason for it, rather than focusing on the tool enforcing the restriction, which can cover multiple situations. isaacl (talk) 00:52, 27 October 2025 (UTC)[reply]
I was under the impression you mean that there are some bans where the banned user isn't ever going to convince the enacting authority that they can behave appropriately.: Aside from self-disclosed pedophiles, criminals, etc, I'm of the mind that most bans involving on-wiki conduct are reversible given time and reflection. For example, Wonderfool, who deleted the Main Page twice here and several times on Wiktionary, was recently unblocked (now editing as Vealhurl). If Willy on Wheels somehow comes back and requests a convincing unblock, I'm sure the community would agree. ChildrenWillListen (🐄 talk, 🫘 contribs) 01:04, 27 October 2025 (UTC)[reply]
Yes, I also think that focussing on the reason for block or ban & discussing it with the editor is far more important than deciding what we're going to call any given rose, if you want to get the editor back.
To clarify the first point, no I do mean that it's easier to appeal certain bans than certain blocks or quasi-bans. I was disagreeing with your categorization system, specifically where you only applied the idea of "convincing the authority" to one type of block. GreenLipstickLesbian💌🦋01:11, 27 October 2025 (UTC)[reply]
Conditional bans with a very clear condition don't need convincing. Site bans are ones where there is no foreseeable path to return to editing. Thus with this categorization, convincing the enacting authority plays no role with these two categories. (To clarify, what is currently called a site ban would end up being split across the behavioural ban and "never coming back" site ban categories.)
I think it would better to tell people they are banned for specific reasons, with pointers to how they might be unbanned for cases where that is feasible. "Block" should only be used afterwards to describe how they are technically limited from editing. isaacl (talk) 01:27, 27 October 2025 (UTC)[reply]
I apologize for being confused. In each statement I made I discussed how it would be better to focus on the restriction rather than the technical tool being used, and how this would clarify the route to being unbanned. You agreed that it would be better to focus on the reason for the ban. Perhaps you can let me know where additional clarification would help? isaacl (talk) 16:21, 27 October 2025 (UTC)[reply]
Oh, yeah, and I'm still mostly stuck back on the entire idea of dividing the blocks into categories like 'banned for behaviour' or 'banned for behaviour, but in a way that annoyed the community' or 'banned for technical reasons' - I think there are too many edge cases to actually formalize that (even username blocks can require some degree of convincing), and the actual line between 'blocked for violating a particular policy' and 'annoying one too many people' is very subjective indeed. We already do tell blocked editors that they need to work on the issues for which they were blocked. We already do mostly focus on the actual reason for the ban far more than the technical side of things, at least from my perspective of watching the unblocks queue like a puma for the better part of a year & looking through historical blocks, so it's not a new idea. The issue is getting said user to actually understand what part of a very abstract set of rules they broke, why it's important, and how they can avoid doing so again - and I just don't see how creating a somewhat arbitrary classification system for blocks could help with that? GreenLipstickLesbian💌🦋16:46, 27 October 2025 (UTC)[reply]
You stated that we shouldn't give editors false hope about being unbanned. I think a lot of the arguing today over whether someone saying "support indefinite block" meant they supported a site ban is because people want an option where someone is banned from all editing but is given a path to return. But because we don't distinguish between different kinds of site bans, there is no option for this distinction. I think breaking down site bans into "bye for now" and "goodbye" bans would provide this distinction and help with the false hope problem. I appreciate this is more work to figure out, but the only way to avoid giving false hope is to do the work. In my view, it's not a question of the community being annoyed, but of whether it no longer feels there is a path to trust the editor again, whether due to repeated poor behaviour or sufficiently egregious behaviour. I think conditional bans would just provide a simple descriptor for bans where admins say "any admin who verifies this condition has been met can unban". isaacl (talk) 17:11, 27 October 2025 (UTC)[reply]
Alright, I think I see where you're coming from now - I can maybe see what you're getting at by saying that there could be benefits to creating two types of site bans; the problem is that this would require the community to take up such an option, and for an admin to be OK with unilaterally lifting any form of block that had community consensus. After all, in cases with any degree of subjectivity (POV pushing, source-text integrity issues, promotional editing, close paraphrasing), who is to say that the condition has been met? In this hypothetical world, is the guy who promotes his video game, gets told off by an admin, takes it to AN/I only to find himself boomerang condition-banned OK to be unblocked when he agrees not to edit about his video game anymore? What if, for example, his next edit is to an article about a competitor? I'd argue that's still promotional; many other editors wouldn't. How about an edit to the article on a record label associated with the composer he hired? Nothing to do with the video game, of course - but there's a valid argument that this is promotional, and a valid argument that it isn't. An admin might, quite reasonably, think the condition to unban has been met - but oopsie, the community didn't agree. From their POV, is it worth jeopardizing their adminship on behalf of a new editor with NOTHERE/SPA tendencies? On the other end of the spectrum - let's just say that the community conditionally bans an experienced editor for making personal attacks or creepy comments to other editors. The editor has a lot of friends, so the closer did a little bartending and said that it was a conditional ban until the editor agrees not to make any more personal attacks. Let's say they make an unblock appeal six hours later, agreeing not to make such attacks again - does that mean an individual admin friend, who didn't participate in the AN/I thread, can lift the ban, credibly claiming that they verified the unban conditions had been met? In my second example, there's a much greater incentive to risk adminship & hide behind the shield of "verification" (after all, you get your friend back) than there is in the first example, which I'd argue is the type of incidental cban that occurs more often, and which neither you nor I is entirely comfortable with. GreenLipstickLesbian💌🦋17:32, 27 October 2025 (UTC)[reply]
My thoughts on categorizing bans weren't about changing the appeal process (just as I don't believe the initial post was about changing process), just better documenting the intent of the community. There is no change to who has authority to lift an editing restriction: it remains within the authority of who enacted the restriction, or within the scope of the governing policy (such as restrictions imposed as arbitration enforcement). So a community-imposed editing restriction has to be appealed to the community. isaacl (talk) 17:51, 27 October 2025 (UTC)[reply]
But you can't change one without the other? Any change to how blocks are categorized will impact appeals, just because the type of block is what most people with little to no familiarity with the underlying situation are going to look at. Formalize a category of conditional bans that can be undone the moment some criterion is met? Well, okay, who decides that? The community? You can't legislate community response. Any individual admin? Same issue: most people (especially our admins) are reluctant to go against community consensus (high risk) to unblock somebody who was a poor enough editor to get blocked (low reward). Somebody else? No matter which way you cut it, you're creating (whether intentionally or not) a new appeal system - and one that's a lot more confusing to non-Wikipedians (the average people) than it is to top AN/I and project space posters. Also ditto Thryduulf - my brand new non-OS example of a "this is technically one kind of block, but the actual edits made it much more complicated" is Misterjamesveitch - softblocked to prevent impersonation of James Veitch (comedian). The AGF explanation for his edits is that it was actually him, but if he hadn't verified his identity that would have had to turn into a hardblock for serious misconduct. GreenLipstickLesbian💌🦋18:20, 27 October 2025 (UTC)[reply]
We can add categories to articles without changing the process for writing articles. Categorizing types of site bans is for our convenience. It doesn't dictate process. We already have restrictions where the admin says that any admin can lift it if a given condition is met. The categories aren't inventing new types of restrictions. isaacl (talk) 18:28, 27 October 2025 (UTC)[reply]
And I'm arguing that the actual act of introducing labels would impact the process - also, categories absolutely can impact the writing process. That's why we have categories for stuff like ENGVAR or dates. Yes, they are meant to be descriptors, but "I spent years switching all the spellings in this article to American because the categories told me I could" is a totally valid excuse to avoid being sanctioned, even if the only reason the article is in the category is because of subtle vandalism. Conversely, categories that have no impact are going to have no impact period - I don't see how trying to classify blocks is going to make solving the issue which led to the block any different, which is what actually matters, and not hundreds of editor hours wasted over what exactly to categorize something as. Also, the idea that we have "restrictions where the admin says that any admin can lift it if a given condition is met" is fictitious, ultimately. When an admin says that any other admin can lift a block once a condition is met, it means that they won't raise an objection or they themselves would unblock in such a case - they can't actually dictate that other admins not unblock. But we don't have a formal restriction system in place, and, given that admins are all fallible volunteers with minimal oversight, can never have one. GreenLipstickLesbian💌🦋18:45, 27 October 2025 (UTC)[reply]
We have the option to do either: we could change the process and have categories that reflect the changes, or we could not change the process, and define categories as we please to reflect current process. I'm looking at the latter, not the former. I was just laying out some initial thoughts on how, within the current process, bans could be categorized, rather than renaming a tool used to enforce many kinds of bans, with the goal of enabling the community to distinguish between site bans that aren't likely to get lifted versus those where there is a path to lifting the ban. So to me a discussion about how the process can be changed is a different discussion. It might be a fruitful one, but not one I'm trying to address with my thoughts. isaacl (talk) 01:14, 28 October 2025 (UTC)[reply]
And I suppose where I'm at is that I don't think it's possible to separate the process of blocking from the process of appealing - they're simply too dependent on each other. Change what you call a ban, and the appeals process changes to match, even if you don't mean it to. The actual act of labeling impacts it. So, at least from my perspective, you can't talk about one but not the other. GreenLipstickLesbian💌🦋18:24, 28 October 2025 (UTC)[reply]
There are also blocks that are not clearly one or the other. For example editors who engage in promotional editing with a promotional username - especially when you need the context of the edits to see that the username is promotional.
More than one of my Oversight blocks have been of minors significantly oversharing while engaging in self promotion - sometimes they even spam their self-promotional material. While requests for unblock following oversight blocks are handled by arbcom rather than any random admin, the block log will typically just say "oversight block" and I'm sure the same applies to normal blocks too. Thryduulf (talk) 17:52, 27 October 2025 (UTC)[reply]
The examples you raise are, using current terminology, site bans which the enacting authority is willing to lift in favour of a topic ban. Unless otherwise stated, the enacting authority is the one who evaluates the response of the banned user. Within the categorization framework I raised, they are behavioural bans that the enacting authority is willing to lift in favour of a topic ban. isaacl (talk) 17:59, 27 October 2025 (UTC)[reply]
I don't consider this a problem and am perfectly happy with the current situation; however, if we need to make it exceedingly clear to those who may think that indefinite means perpetual, I propose calling indefinite blocks "blocks without a fixed duration". Everything else that's been proposed so far is liable to introduce even more confusion, in my opinion. Salviogiuliano18:49, 26 October 2025 (UTC)[reply]
I agree with others stating that most of the ideas presented thus far seem like a step backwards with respect to the intended purpose. To be honest, I think "indefinite" is so well suited to this kind of situation that I've started using it in similar contexts outside of Wikipedia, to no confusion as far as I am aware. signed, Rosguilltalk22:53, 26 October 2025 (UTC)[reply]
I think the point here is that we have a single block "period" that encompasses two very different situations. What we call "indefinite" blocks are called "infinite blocks" in the database, so it is entirely reasonable for people who are blocked for a "curable" reason to believe that they have been banned forever. Realistically, there are a lot of indefinitely blocked accounts that we have zero reason to think will ever be unblocked. At the same time, we also have a lot of accounts that are indefinitely blocked because they need to assure the community that they understand the reason for their block and will not repeat the behaviour that resulted in the block. Quite honestly, I don't actually see any benefit in time-limited blocks. Our blocking policy says that we shouldn't be giving "cool-down" blocks, but that is exactly what a 24 or 36 hour block is. Arbcom stopped giving out time-limited blocks way back in 2009, and has since that time made unblocks conditional on behavioural change. I can't see any reason why "conditional block" would be confused with "partial block". Risker (talk) 23:05, 26 October 2025 (UTC)[reply]
Respectfully, I don't think the average person has the slightest clue what blocks are recorded as in the database; I don't see how that could be a source of confusion. GreenLipstickLesbian💌🦋23:11, 26 October 2025 (UTC)[reply]
The average person doesn't get blocked, either indefinitely or infinitely. I hold our administrators in high enough esteem that they can differentiate between making a block that can be cured by the account and one that cannot. Even if that opinion isn't a widely-held one, I think that none of our dropdowns should use the term "infinite" anywhere, or that it should at least be a separate alternative to indefinite/conditional. Risker (talk) 23:37, 26 October 2025 (UTC)[reply]
Changing the dropdowns seems fine. However, this conversation started out with a claim that editors who got blocked were confused by the term "indefinite" (see the OP: But I don't think the name of indefinite block really gets across to the average person that you aren't permanently banned, emphasis mine); I don't see how changing the admin interface has much, if anything, to do with that? GreenLipstickLesbian💌🦋23:49, 26 October 2025 (UTC)[reply]
It's an idea lab, that means that we should iterate on the idea. There is no such thing as an idea that is fully formed on its first legs. Let's work on looking at the idea and talk about how we can improve on the idea, not just have knee-jerk reactions that something won't work. Some of the ways we can do that might start with "why did we choose these terms in the first place? when did we do that?" We've come up with lots of good ideas over the years, and improved on old ideas. Back in the day, there was no such thing as community bans, or blocks longer than a certain specific time, or admins handing out blocks longer than a month or so. It is good that we have given the space for people to come up with these ideas and helped them to develop them, and to figure out how to shut down experiments that haven't really worked. Please be charitable. The Wikipedia of 2025 is massively different than the one of 2002, or 2010, or 2015, and a lot of those positive changes have started out as seeds like this. Risker (talk) 00:16, 27 October 2025 (UTC)[reply]
I'm not trying to shut you down? You said that you thought the database could cause people who were blocked to think they were blocked forever, the OP was also talking about confusion for average editors, but when I asked you about that, you started saying that the average person didn't get blocked? I'm trying to follow your train of thought and see where you're going with this by asking you for clarification? GreenLipstickLesbian💌🦋00:21, 27 October 2025 (UTC)[reply]
Technically, people are blocked forever (with a duration of infinite) until someone decides to lift their block. The MediaWiki source code does not have any expectations on whether someone would come along and unblock a user. The problem here is a social one; most normal people don't seem to understand that they are able to appeal their indefinite blocks instead of engaging in sockpuppetry and/or making legal threats. The first thing most users see is Template:Blocked text, and the next is a Template:Uw-block placed on their talk page. Non-admins can't see what the dropdowns say, nor would most users worry about what's in their block log, so all changes, if any, must be made to these two templates. ChildrenWillListen (🐄 talk, 🫘 contribs) 00:34, 27 October 2025 (UTC)[reply]
...and, a quick look at CAT:RFU reveals that most new editors have the impulse to use LLMs to generate their unblock requests, which get declined almost instantly, leaving the users frustrated and unsure of what to do next. Keep in mind that most people use AI-powered tools daily, especially in the Global South, where people may not be confident in their ability to write in English on their own (even though many are actually pretty good at it.) A good first step would be to add clear instructions in the Unblock Wizard (do people even use that?) or elsewhere to refrain from using LLMs. ChildrenWillListen (🐄 talk, 🫘 contribs) 00:53, 27 October 2025 (UTC)[reply]
The unblock wizard is more of an idea than something that has actually been implemented at this time. Chaotic Enby created it after a discussion I started here expressing a desire for it, because I've cared for a long time about how blocked users don't necessarily understand the template/what they can do to get unblocked very well, and I was inspired by the edit request wizard to see if we could maybe do something different. But an RfC needs to happen before it can be used in the way I envisioned. Clovermoss🍀(talk)09:23, 27 October 2025 (UTC)[reply]
The only dropdown I see that has "infinite" as an option comes from MediaWiki:ipboptions, of which Special:Diff/880298592 indicates it's that way because we can't have two options with the same label and says it still shows up as "indefinite" in the logs. Are there others? Anomie⚔00:08, 27 October 2025 (UTC)[reply]
So...it appears that "infinite" was added with no discussion, as a result of some sort of OOUI change? Why not simply change the dropdown back to indefinite then? There is no discussion that indicates why the word "infinite" was selected. Risker (talk) 00:16, 27 October 2025 (UTC)[reply]
"indefinite" is already at the start of the list. To have an indef option at the end too, some other name was needed. As to why "infinite", I have no idea. Anomie⚔00:19, 27 October 2025 (UTC)[reply]
We should be wary of introducing a second set of vocabulary. The names of blocks currently reflect their direct practical impact on the blocked user: partial, X-hour/day/month, indefinite. Naming blocks after the reason blocks were given, or the expected unblock path, or similar may make the jargon even more jargony. CMD (talk) 06:28, 27 October 2025 (UTC)[reply]
My view: If an editor thinks "indefinite" means "forever", they need to improve their vocabulary. "Indefinite" is the clearest way to say it—it literally means "not definite". See dictionary entry for "indefinite". Sure, clarify the PAGs as necessary. ―Mandruss☎ 2¢ IMO. 06:46, 27 October 2025 (UTC)[reply]
Except a "not definite" block is forever unless you successfully appeal it, and a lot of people have no idea that you can, whether it's because the word isn't used that often, they assume it's something like "inflammable", or they don't understand the concept of a block being "indefinite" because other websites just permanently ban people and there isn't a block expiration time like there is for the other blocks. I hate to bring Larry Sanger up because I don't think his "9 theses" are practical and they're out of touch at best, but stuff like "get rid of indefs" is one of those ideas people have been talking about elsewhere online. I've seen so many people discuss how they basically did stupid teenage things and don't have the secret arcane knowledge of Tamzin's essay because they think it means "game over forever". Given that Sanger describes the practice as Wikipedia’s draconian practice of indefinite blocking—typically, permanent bans—is unjust. This is no small problem. Nearly half of the blocks in a two-week period were indefinite. This drives away many good editors. Permanent blocks are too often used to enforce ideological conformity and protect petty fiefdoms rather than to serve any legitimate purpose, he seems to think that too. I press x to seriously doubt that admins hand out indefs for "ideological conformity", but the fact that the average person's reaction to that statement is not the Wikipedia line of "but it's not technically an infinite block even though it is until you appeal successfully" is a problem worth remedying imo. I'm going to refrain from commenting further because I don't want to bludgeon, but it took me a while to figure out "how do I express what I'm trying to say here?". Clovermoss🍀(talk)09:44, 27 October 2025 (UTC)[reply]
Haven't read this whole thread, but FWIW, I think the best way to bring policy in line with practice (the practice that's reflected in my mildly heretical essay) is to make it explicit that WP:CLEANSTART is allowed five years after an indefinite block, provided that the block was not to enforce a community or ArbCom sanction, and was not a block that no reasonable admin would lift without community consensus; and that post-block cleanstarts on shorter timeframes may be tolerated on a case-by-case basis if there is no continuation of the underlying disruptive behavior, but that this is not something anyone should rely on. -- Tamzin[cetacean needed] (they|xe|🤷) 09:56, 27 October 2025 (UTC)[reply]
...and if we could trust the indef'd editor to correctly apply all of those provisional criteria to their own situation, they'd probably not be the kind of editor who got indef'd in the first place. WhatamIdoing (talk) 07:44, 28 October 2025 (UTC)[reply]
I have to wonder whether any amount of renaming blocks would really make a difference to that sort of misconception, considering studies have also shown that many people also don't realize that it's possible for them to edit Wikipedia in the first place. Anomie⚔13:33, 27 October 2025 (UTC)[reply]
I believe that editors should spend about as much time finding ways to simplify editing as we spend finding ways to complicate it. I'd estimate that this ratio is traditionally about 1-to-10. ―Mandruss☎ 2¢ IMO. 14:35, 27 October 2025 (UTC)[reply]
I wonder about the truthiness of statements like 'blocks drive people away'. Accounts are blocked. Wikimedia doesn't have the tools to block people. People come back with new accounts or as unregistered IPs or both. There is currently no way to stop them. If they are 'good' editors determined to edit Wikipedia and stay out of trouble, they are likely to have a de facto cleanstart of their own making. Sean.hoyland (talk) 10:51, 27 October 2025 (UTC)[reply]
For the standard "indef, not appealable for 1 year" sorts of blocks I think the current terminology is perfectly fine. I do think we should probably split off "indef immediate appeal" blocks for username issues or newbies doing something dumb from "true" indefs though. Loki (talk) 18:47, 2 November 2025 (UTC)[reply]
Perhaps instead of looking at the name used within discussions among editors, we should look at the templates posted to blocked users, and work on clarifying their messages. The name of the technical tool used to enforce the imposed editing restriction doesn't matter, as long as the message clearly explains the reason for the restriction, and the path to have the restriction removed. isaacl (talk) 17:18, 27 October 2025 (UTC)[reply]
+1. The same issues also apply for definite but long blocks (months to years). We'd prefer the editor to clean up their act instead of waiting out the block, no matter whether the block has definite or indefinite duration. —Kusma (talk) 17:59, 27 October 2025 (UTC)[reply]
+3. A blocked editor who has sufficient competence with the English language to constructively edit the English Wikipedia should always be able to clearly understand why they were blocked. They can disagree that that should be something people are blocked for, and they can disagree that what they were saying/doing was an example of that reason for being blocked, but they should always understand what the reason given means. Thryduulf (talk) 19:11, 27 October 2025 (UTC)[reply]
There is Wikipedia:Unblock wizard, which I recently discovered as it was mentioned in the nomination statements of one of the current RfAs. It's a pretty cool idea, and while I think there is room for improvement in its current form, it could make the process of appealing indefinite blocks much less daunting than it might currently be. Maybe something like this (User:Mz7/sandbox/uw-blockindef-wizard):
If you believe that there are good reasons for being unblocked, please review Wikipedia's guide to appealing blocks, then use the "request an unblock" button below.
How about something like "You have been blocked from editing for [reason]. This block does not have an expiry date set, but if you believe that there are good reasons for being unblocked you may appeal. If you do wish to appeal, please review..."? Thryduulf (talk) 11:09, 29 October 2025 (UTC)[reply]
Speaking as an American, I prefer "does not have a set expiration date. If you believe...". Otherwise, while I still think it's a little silly that people misconstrue "indefinite" as "infinite", this wording probably is more easily understood. DonIago (talk) 14:21, 29 October 2025 (UTC)[reply]
I see where you two are going, but those sentences both rate as much more difficult on a bunch of the online grade level/text difficulty checkers I measured them against when compared w/ "You have been blocked indefinitely". Also, the new versions may register as easier than they actually are. Most people learn what expiration means in the context of food products remaining good to eat, while indefinite pretty much just has the one meaning. Again, I do get why people might confuse it, but indefinite was ranked as an elementary school level word, so you should really know what it means by the time you're twelve, or you should know how to look it up in a dictionary. It's a lot easier than other words we expect people to know, like 'citation', 'plagiarism', and 'consensus', all of which got ranked as college-level. GreenLipstickLesbian💌🦋16:41, 29 October 2025 (UTC)[reply]
I'm admittedly surprised to hear that those alternatives would be considered much more difficult to parse. Is there perhaps a middle ground? "Indefinite" may be ranked as an elementary school level word (and as I've expressed, I personally don't see how it's all that ambiguous), but it's clearly tripping up a number of people, so it seems worth considering options that may trip up fewer people. DonIago (talk) 18:09, 29 October 2025 (UTC)[reply]
learn what expiration means in the context of food products remaining good to eat
That should be enough to understand what an expiration of a block means. And "indefinite" is probably way more obscure than any of "expire"'s meanings (in fact it seems more a middle-school word to me).
I'm also surprised that these are regarded as more confusing; although I also don't regard "indefinite" as problematic, it's clear that some people do. If "expiry date" or "expiration date" are problematic, would "end date" be better? Thryduulf (talk) 18:34, 29 October 2025 (UTC)[reply]
Wikipedia:Readability tools tend to over-focus on the number of syllables in a word or the number of words in a sentence without regard to whether the words are familiar or make sense in context. (Different systems have different metrics.)
If you just split the middle into two sentences:
You have been blocked from editing for [a reason]. This block does not have a set expiration date. If you believe that there are good reasons for being unblocked, you may appeal. If you do wish to appeal, please review...
then that will make a big difference to some of the tools, though not so much to the reality. Expiration, with four syllables, will be rated as difficult by several tools, and you could change it to end, but unless you're expecting a younger child to be reading this, it probably won't make any actual difference.
Alternatively, just try a different reading tool. They're wildly inconsistent, with different tools producing a range of "correct" ratings that can differ by 10 years of education or more for the same text. If you don't like the answer you got with the first tool, then pick a different one until you get the answer you want. Wikipedia:Readability tools links to about 10, if you want to try them out. WhatamIdoing (talk) 00:16, 30 October 2025 (UTC)[reply]
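For illustration, here is a minimal sketch of one widely used metric, the Flesch-Kincaid grade level, which depends only on words per sentence and syllables per word; the syllable counter below is a deliberately naive vowel-group heuristic rather than what any real checker uses, so the absolute numbers should not be taken seriously.
<syntaxhighlight lang="python">
# Rough sketch: why syllable-heavy words raise a readability grade.
import re

def count_syllables(word: str) -> int:
    """Deliberately naive estimate: one syllable per group of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

for sample in (
    "You have been blocked indefinitely.",
    "This block does not have a set expiration date.",
):
    print(round(fk_grade(sample), 1), sample)
</syntaxhighlight>
The formula mechanically rewards shorter words and shorter sentences, regardless of how familiar a word like "expiration" or "indefinitely" actually is to the reader, which is the over-focus described above.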
I'll throw in the idea I had: how about "Appeal only block" or "Appeal required block"? It gets the info you want right out front: that they can appeal, and that it's the only way to remove the block. HypnoticCringer (talk) 02:39, 29 October 2025 (UTC)[reply]
"Indefinite block" is a logical name for a block of indefinite duration. We could call them permanent blocks, but indefinite works better than any other suggestion I've heard so far. If we want a change to the system I would rather look at the blocking of IP addresses when we hard block accounts. I think such IP blocks are permanent and it would probably make sense and greatly reduce collateral damage to make these "intelligent blocks" either of fixed duration or O/S dependent. ϢereSpielChequers20:24, 31 October 2025 (UTC)[reply]
I think such IP blocks are permanent: Nope, underlying IP addresses are autoblocked for a duration of 24 hours, regardless of the block duration you apply on the account. That's why sockpuppetry is so common. ChildrenWillListen (🐄 talk, 🫘 contribs) 20:27, 31 October 2025 (UTC)[reply]
TL;DR: I got tagged... As a former user who deleted the main page (not sorry about that), I always thought "indef" wasn't quite right. "Long-term block" would always make more sense. Vealhurl (talk) 16:07, 1 November 2025 (UTC)[reply]
I still like my original suggestion of conditional block. Simple yet concise. However, if nothing about the name of the block itself is changed, I agree that making the Twinkle templates as clear as possible is a good idea. Clovermoss🍀(talk)17:06, 2 November 2025 (UTC)[reply]
Yes, but indefinite is clearer and also includes asking for clemency. Appeals are for mistaken blocks; after 6 months you can promise to obey the rules as per the standard offer, and in most cases that will get you unblocked. ϢereSpielChequers17:25, 2 November 2025 (UTC)[reply]
How about "until removed" for MediaWiki:Infiniteblock? Again this does not mean that the block will be removed, it just means that the block will go on until an administrator or the community decides to revoke that block, which could literally mean never. But it could also be less bitey to newcomers and other users who did not realize that their editing was harmful and give them a second chance at productive editing. Aasim (話す) 20:47, 5 November 2025 (UTC)[reply]
Well, my first thought is that, in the contexts it's currently used, it sounds a bit tautological - all blocks remain until they are removed; it's just a question of whether the removal is automatic (time-limited blocks) or not. So I personally don't find it any clearer. I'd be curious to know what others think, though.
Right now {{blocked text}} (the template used for all block-related messages) is configured to not display the block expiry if the block is indefinite. This means the only thing this will affect right now is Special:BlockList and not the blockedtext system message (for now). Aasim (話す) 16:10, 6 November 2025 (UTC)[reply]
I agree that all blocks remain until removed, and so this doesn't add any additional descriptiveness. I don't agree that most people will interpret this to implicitly exclude automatic removal. isaacl (talk) 16:35, 6 November 2025 (UTC)[reply]
In the articles about Wikipedia's potential sources, such as The New York Times, mention the RSP rating in some fashion; e.g. "Wikipedia considers the Times a generally reliable source."
If it's permitted to link from mainspace to WP space, the article could even link to the RSP rating.
If this were placed in a separate section with a standard heading (similar to See also), that would make the information that much easier to find in the article. I know, many editors dislike one-sentence sections, and there's probably a guideline discouraging it. I think an IAR exception would be justified in this case.
If a citation |work= parameter linked to the article (I believe it should), a reader could see what we think of the source. That would support verifiability and improve transparency. ―Mandruss☎ 2¢ IMO. 13:27, 28 October 2025 (UTC)Edited per discussion 23:47, 30 October 2025 (UTC)[reply]
I doubt this will gain acceptance (whether Wikipedia considers something a reliable source or not really isn't relevant to the topic in the majority of cases), but it'll be interesting to see how many bad policy and guideline references people use when opposing it. Anomie⚔15:01, 28 October 2025 (UTC)[reply]
Alternatively, create a new CS1 citation parameter that produces an icon in the rendered citation, indicating its RSP rating. Green check mark, etc. That would be ideal, but it would require more work for both Trappist and general editors. ―Mandruss☎ 2¢ IMO. 15:09, 28 October 2025 (UTC)[reply]
That idea sounds like something better suited to a user script than a CS1 parameter. In fact, I'd be a little surprised if such a script doesn't already exist, as it seems like something people doing FA reviews and new article patrolling would find really helpful. Anomie⚔15:14, 28 October 2025 (UTC)[reply]
I'm leaning towards opposing this on the basis that "generally reliable"/"generally unreliable"/"deprecated" are Wikipedia jargon that most of our readers will misunderstand. signed, Rosguilltalk15:21, 28 October 2025 (UTC)[reply]
I don't think it's in the encyclopedia's interest to shield readers from understanding of Wikipedia content policy. If they "misunderstand", it's because they haven't been educated. Readers aren't stupid, for the most part. ―Mandruss☎ 2¢ IMO. 16:34, 28 October 2025 (UTC)[reply]
It’s not about readers being stupid; my concern is that it’s not possible to explain the nuances of how we treat this in practice while holding to WP:DUE. Anecdotally, RS tend to only discuss RSP when they’re talking about Wikipedia; I am skeptical that RS coverage of NYTimes, for example, is ever going to center on Wikipedia’s assessment of it as a source. signed, Rosguilltalk16:56, 28 October 2025 (UTC)[reply]
I agree with Rosguill.
Also, who says that readers actually want to spend any part of their life "being educated" about Wikipedia's content policy? Most people are just looking for a quick fact: What's the website for this company? What's the name of that actor in that film? WhatamIdoing (talk) 17:02, 28 October 2025 (UTC)[reply]
I admit that I am a little wary of RSP ratings, because folks treat them as a binary yes/no when WP:RSCONTEXT is still a guideline. The New York Times for example would not be a reliable source for medical claims. Even the most fake of sources are reliable sources for their own claims (although WP:DUE is then often a problem). Jo-Jo Eumerus (talk) 16:49, 28 October 2025 (UTC)[reply]
In line with what Jo-Jo Eumerus says here, several of us have explicitly opposed a "cheat sheet" or "quick look up" that would give only the name of the source and its general category (this suggestion would have a link to further information, but we fear that most editors would only look for the color coding and not care about the details. For example, we've got one "GUNREL" and one "deprecated" news source whose explanatory text says that their sports coverage is okay – but you won't notice that, if you just look for the colored icon and believe that it applies to everything). No source is reliable for everything, and any source can be reliable for something. WhatamIdoing (talk) 17:05, 28 October 2025 (UTC)[reply]
I have contemplated writing a user script to indicate a media outlet's status from @Headbomb's WP:UPSD or RSP. (Just an idea at this stage. I haven't thought about how to implement it yet.) Due to the current PEIS issues, if I see an unfamiliar source mentioned in a discussion I am more likely to look for an article describing it than try to load the whole RSP. I agree that RS assessments generally shouldn't be included in articles as they could introduce more confusion or misunderstanding among non-insiders. ClaudineChionh (she/her · talk · email · global) 22:29, 28 October 2025 (UTC)[reply]
Of course, it's become indispensable for identifying problematic references just by glancing at a reference list, but it doesn't (AFAIK) tell me anything about a source when I'm reading an article about that source. ClaudineChionh (she/her · talk · email · global) 09:57, 30 October 2025 (UTC)[reply]
Usually, by the time I'm looking at a Wikipedia article about a source, I want to know about the source, rather than about which discrete pigeonhole an RFC shoved the source in. We have many sources at RSP that are "generally reliable, except for X" or "generally unreliable, except for Y", and several sources that have to be divided up (e.g., there are three separate rows in RSP for Fox News – politics and talk shows aren't reliable, but ordinary, non-political news may be okay). WhatamIdoing (talk) 00:45, 1 November 2025 (UTC)[reply]
This sounds a lot like navel gazing. Wikipedia doesn't consider a source to be reliable or unreliable; there may be a consensus of editors that a source is unreliable or reliable for the purposes of Wikipedia. Although discussions tend to use the former wording, the actual meaning is always the latter. That some editors on Wikipedia consider a source more or less usable when writing articles doesn't seem like something that should be included in an article about the source. -- LCU ActivelyDisinterested«@» °∆t°21:03, 30 October 2025 (UTC)[reply]
Consider putting the rating on the talk page instead? If we did that, then I bet some helpful person could make a script, so opting-in editors would see deprecated sources highlighted in %_colour, generally reliable ones in %_colour, and the ones in between in %_colour (hopefully user-configurable and thus colour-blindness friendly). Would be helpful for me to glance at an article's reflist and see that.—S MarshallT/C22:40, 5 November 2025 (UTC)[reply]
If I'm skimming this correctly, the OP wants a mention on the source article page. I have thought about that for a while, and thought maybe it could go on the talk page. But this is yet another problem with RSPS. There is an ongoing RFC on that, I think. ← Metallurgist (talk) 04:55, 13 November 2025 (UTC)[reply]
Non-technical summary section or page for technical topics
So many math pages (and other technical ones) are only readable by people in the field, even though the concepts are valuable and searched for by quite a few people. Could we make them more readable with a new type of section that is implemented as standard, or with different types of page (i.e. a toggle between non-technical and technical)? Otherwise/additionally, could we have some initiative to copy-edit them to be more readable?
Explanation:
As it stands, from when I was younger and using Wikipedia up till now, when I look up more technical topics (such as in math or computer science) I am met with an absolute mish-mash of word salad and formulae.
This does, most of the time, serve formally as a definition, but unless I have prior knowledge of the topic it helps me about as much as a dancing chihuahua in a raincoat (or less, for at least the chihuahua is entertaining, whereas this simply leaves me slightly befuddled and frustrated before I go on to search for a more useful explanation elsewhere).
Advancing from a standpoint of clarity, simplicity, and usefulness, I would propose that a new section be commonly implemented across some of the more trafficked technical pages (or ones useful for many people). Alternatively, we could have pages which have two different parts, a non-technical part and a formal part.
An example: scope, which 'in short' as it currently stands defines itself as 'the part of a program where the name binding is valid,' and meanders through about 4 paragraphs of text which is all formally (blah blah blah) important for its definition and usage (and yes completely understandable to many programmers). However, for those who are newer to the topic, it means hot sh- I mean... for beginners in code looking to understand scope it also doesn't help much. Those 4 paragraphs practically can be condensed into: "Scope determines where a variable (like x = 1) can be accessed in your code. It's usually determined by where you define the variable. If you define a variable in a function, usually its scope is only within said function; if you define it in the file, it can be accessed in that file etc." Which I hope my fellow devs can agree is a more effective definition practically. In fact, all this boondoggling of the formal definition would just confuse me - it's not just useless, it's anti-helpful!
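To make that concrete, here is a minimal illustration using Python's scoping rules, which broadly match the plain-English description above:
<syntaxhighlight lang="python">
x = 1  # defined at the top level of the file (module scope): usable anywhere below

def show():
    y = 2          # defined inside the function: its scope is this function only
    print(x + y)   # x is visible here because the enclosing module scope is searched

show()      # prints 3
print(x)    # fine: x is still in scope at module level
print(y)    # NameError: y existed only inside show()
</syntaxhighlight>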
Another (more minimal) formulation of this idea could be that many practically useful ideas that were once technically developed (such as Bayesian statistics) are locked behind 'intellectual paywalls' or gatekept (unknowingly). Taking the concept and expressing it as it matters practically disseminates the knowledge more effectively, and helps people develop their understanding of the world. There could be some initiative to translate important, useful or fun technical concepts for a wider audience.
Yes, I can be bold and do this myself; in fact I probably will. However, it seems a useful thing for Wikipedia to do in general. The most interesting contents of human knowledge on this site are usually (if they involve formulae) incomprehensible to anyone outside the field. We should make them accessible!
"An idiot admires complexity, a genius admires simplicity." At least I just want to verbalise a problem I've had (perhaps my own simple mindedness): that it seems many technical pages are correct in definition but are incorrect in transmission. Briefly noting I'm new to excuse me from any mistakes if I have made them here :)
Hello Julius Chandler. Our current guidelines already say that the article's lead should be understandable to a wide audience, per WP:EXPLAINLEAD. But we often fail to comply with that guideline. So instead of giving two different leads, these overly complicated leads need to be rewritten to be understandable to an interested layperson. There's a project to rewrite the WP:Understandability guideline, where I'm looking for one more person to give feedback during the workshop phase before launching Wp:request for comment asking people whether they prefer the old or new version of the guideline. If you have time, feedback on the workshop text would be welcome on its talk page. When you encounter an article with an overly complicated lead and you don't have the ability to simplify it, feel free to tag it with {{Technical}}. —Femke 🐦 (talk) 14:22, 30 October 2025 (UTC)[reply]
That makes sense. See my comments on the talk page and someone feel free to archive this. Also, any editors interested in making technical concepts from mathematics and other fields more accessible should really get together on some initiative, if only to rewrite the 80/20 articles (if you get what I mean, a la views versus ease of rewriting). I'm not really sure how that works on here though. Julius Chandler (talk) 16:43, 30 October 2025 (UTC)[reply]
Thanks Julius Chandler! There are a couple of ways to organise people. One effective way is to organise a contest (which has been lost on my to-do list), as gamifying stuff is the best way to really get momentum on Wikipedia. My idea was to make it a contest where people work in pairs (one layperson, one expert) to tackle our most-read articles across all technical topics. If you like that idea, and have the bandwidth to organise, I'd be happy to help set this up (feel free to leave me a message on my talk page). Another way is to see if you can get people at WP:WikiProject Mathematics interested in a project by posting on that Wikiproject's talk page. —Femke 🐦 (talk) 16:52, 30 October 2025 (UTC)[reply]
WPMATH has tried to do this in the past. What they mostly need is people who are both willing and able to do the work.
Sometimes the simple English wikipedia does a better job. I would agree, our maths articles are generally incomprehensible. I asked a mathematician about this, and her comment was "that's because they're not written by mathematicians, they're written by grad students who want to show off", which is possibly a rather harsh generalisation. The difficulty is that if you try to edit maths articles, you encounter the arguments "we are not a textbook", "anyone who knows anything about the subject will understand", and "your edit that squares have four corners is wrong because in a Hoffmannian semi-polar set of isobaric coordinates the tertiary quadrilateral apex disappears to negative infinity". My feeling is that articles that can only be understood by someone who already knows exactly what they are trying to say, are utterly pointless. There is a fundamental difference between a textbook and an encyclopedia, but it's a bit subtle, and lost on hard-core maths article editors - who tend to turn the articles into extreme secondary reviews with little context or background. I don't know the solution. Elemimele (talk) 17:34, 4 November 2025 (UTC)[reply]
Textbooks and annotated texts: the purpose of Wikipedia is to summarize accepted knowledge, not to teach subject matter. Articles should not read like textbooks, with leading questions and systematic problem solutions as examples...
Perhaps we need guidance somewhere on what a technical article ought to contain. I think Integral is an example of a good article (although it failed its last GA assessment). Euclidean vector is another good one. These articles both give an overview, they give a historical context, and they describe why the thing (integral; vector) is actually useful - what its relevance is, in the world. Sequence is a contrast; it's not a good article. It gets bogged down in quite complicated nomenclature far too early, it has no historical context, it doesn't say why sequences are interesting, and it does things back-to-front (there's a section on definition by recursion before the main section on formal definition) - it basically just spirals into a hole of ever-more-complicated nomenclature and then peters out into a long list of one-sentence links between the concept of sequences and other articles in maths, links that will mean nothing to anyone who doesn't already know that the link exists (e.g. "A metric space is compact exactly when it is sequentially compact.", which is offered with no further explanation or context). I feel that some sort of (flexible) template guideline would help: who is the expected audience (is this a record of the current state of theory for fellow mathematicians, a bit like I'd see (as a biologist) a review article; or is it a summary of the field aimed at an intelligent amateur?); what parts should it contain (overview, historical context, relevance/application, definition, further information about how it works, summary of its interaction with related concepts). This might help people work out what's missing from less-good technical articles. Otherwise, people can only look at the GA criteria, which don't actually make technical articles good. You can satisfy all the requirements of a GA article and still have an incomprehensible turgid text that doesn't help readers. The other thing that would help would be to formalise team-editing: to turn a technical article into a good technical article you need someone who understands the technical stuff really well (to get it right) and someone who doesn't (to make sure the result is understandable). Elemimele (talk) 17:08, 5 November 2025 (UTC)[reply]
Affine space is another really good example of a bad article. Someone's tried really hard to explain what an affine space actually is (good) but there's nothing to explain why they're called "affine" (bad: encyclopedias ought to include this non-mathematical information too), there's nothing to explain why anyone would find affine spaces useful (bad: we're supposed to say why a concept is relevant to the world), there's no summary of the history of the usage of affine spaces, the development of the concept (bad: this is also encyclopedia-stuff), and again it spirals into an unstructured heap of barely-connected concepts (very bad). Elemimele (talk) 17:25, 5 November 2025 (UTC)[reply]
Femke (and whatamidoing), agreed.
I think a beefier (or more concise) definition is required here. I might try to draft up a new or slightly edited version of that guideline.
Elemimele, you say that the description of 'grads trying to sound smart' is harsh, but to be frank, it is in my opinion a correct diagnosis, or one approximating it.
Indeed, the "not a textbook" guideline seems to be one quoted often at people trying to make math articles anything other than a heaped mess of hot s- equations.
However, I would not give up hope. In fact, the current guidelines, if you read them, simply say we shouldn't provide content to work through, ask leading questions... etc.
That does not preclude attempts to turn what are mostly just lists of words barely comprehensible even to their authors (and sometimes not even that - I see them copying in lines from somewhere that are just wrong, without understanding 🫠) into actually readable content. In fact, if it did, I would propose removing that guideline.
As it stands, I would still suggest an edit to them, as Femke suggests. I may draft one up in the next few days or after some more feedback.
Of course, Chesterton's fence here, there are good reasons we don't want to be a textbook. However, it needs to be made clear that
Simplicity and ease of understanding, especially in earlier sections, are necessary
An encyclopedia is supposed to help people understand a topic, and develop *joined up thinking* which can't happen if you expect the average Joe to understand that the polar coordinates of my embedded covariance normalisation are simply intuitively the same as the eigenvector of the set of all the times I sneezed.
I would remove/revise the rule about not asking leading questions. Certainly we don't want to pretentiously act like a textbook, get people to walk through exercises etc. However, some small rhetorical questions should be fine. Or rather:
Emphasise that a conversational, professional, clear tone is preferable to that of an academic who has recently graduated and, while hopped up on something, decided to go on a "Euler's top hits 100" writing spree to prove to all the journals that rejected them that they are indeed the next coming of Gauss.
Editors usually dislike rhetorical questions, and they really hate SEO-related questions (like section headings saying ==Who wrote this song?==).
NOTTEXTBOOK could get another obvious 'negative' ("questions or problem sets for students at the end of the page"), but adding some 'positive' ("Do add examples and explanations in plain English") might also help. Maybe Wikipedia:Wikipedia is not a textbook (a redirect) should be turned into an explanation? WhatamIdoing (talk) 20:07, 5 November 2025 (UTC)[reply]
I looked today for the first time at the article wizard. It seems to be no more than a set of warnings to ignore and click through, followed by free-form editing. That's not a wizard, it's just a slow-motion nag screen. A real wizard forces you to make appropriate choices in an appropriate order, and a real wizard would be nice to have on Wikipedia. In my opinion, probably the best thing it could do is disallow writing any text until you have cited several reliable sources, and then only allow you to type "under" the sources you've specified, so that you are required to attribute every word you type to one of your sources, and no freestanding sentences are possible. (With, of course, an "add new source" button easily accessible.)
The concept of writing an article "backwards" is often mentioned as a problem. The point of my suggestion is that the function of the article wizard ought to be to force people to write "forwards" - not to give warnings and advice and then turn them loose to do whatever. TooManyFingers (talk) 05:56, 1 November 2025 (UTC)[reply]
Sounds like good UI design. Well-designed wizards etc. can make various types of errors impossible, including the ones stated above. ―Mandruss☎ 2¢ IMO. 06:00, 1 November 2025 (UTC)[reply]
A wizard that is giving blank pages to end users is not ideal. I would like one where you have sections pre-arranged in a template, and this template would also include a place to paste a reference. LDW5432 (talk) 03:46, 4 November 2025 (UTC)[reply]
No, I was looking at it to think of ways it could be made even more useful than it currently is, by actually compelling some good habits rather than just recommending them during the "click through and ignore" preliminary messages. (Of course I know no one ought to click through and ignore, but it's clear to both of us that that frequently happens.) TooManyFingers (talk) 05:20, 4 November 2025 (UTC)[reply]
I think headings are barely even a concern. If someone gets them wrong, they're easy to change. IMO, forcing people to adhere to their source material - or else be shut out of writing until they specify the sources - could be extremely helpful.
If I see an article with no headings, I can often give it some semi-reasonable headings in just a few minutes of very light work (depending on its length and complexity of course). But an article that isn't written according to its sources is a discouraging, time-consuming, conflict-filled mess to sort out. TooManyFingers (talk) 05:39, 4 November 2025 (UTC)[reply]
Please note: this is not a proposal for AI editing of Wikipedia articles, but rather for AI annotation via templates to aid human reviewers.
Wikipedia has always had the principle of supporting material in articles with citations to reliable sources. But do those citations actually support the content they are attached to? It's easy to put citations into articles, but does anybody check them? This problem has also recently been exacerbated by the introduction of AI editing that generates pseudo-articles with bogus citations.
This is an ideal opportunity for the use of LLM technology. Here's the idea:
a bot reads a Wikipedia article and retrieves all the cited sources that are fetchable at that moment
point by point, it compares each paragraph/sentence in the article with the cited sources. If it's all fine, it just marks the article with a review template that states that the article has been auto-reviewed, and when.
If any material is either unsupported by the cited material or contradicted by it, it surrounds that material with some variation of {{citation needed span}}, with parameters that specify when it was auto-reviewed and what's wrong with it. Maybe from a small range of choices: "source disagrees", "source does not support", and with a free-text comment. Perhaps it also puts in a short checksum (say 6 hex digits) of the enclosed content, so that changes to that content are easily detectable in later scans. The article is also marked by an invisible template in the same way as above. It could also generate "source unavailable" annotations, or edit URLs if sources get moved.
And that's it for the automation. Now comes the human part. Once articles have been marked, they will automatically be put into categories by the template, marking them for human review. Human editors can then confirm whether the bot is right, by removing the bot metadata from the template, turning it into a human review, or by removing or amending the material, in the normal Wikipedia fashion.
So this is bot-annotation, not bot-editing: the bot should never make any changes to actual article text other than adding the templates. We can set the threshold for false positives quite high, so it should generate very few of them. And we can also make the bot respect human annotation: if it flags something as bogus, and a human editor disagrees and removes its annotation, the bot won't keep on making the same warning over and over again.
All this fits entirely within the existing Wikipedia ecosystem of bots, templates and categories.
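To make the flow just described concrete, here is a minimal sketch in Python of the annotation step only. The llm_supports() helper is a hypothetical stand-in for the actual model comparison (reduced here to a naive substring check so the sketch runs), and the auto-reviewed and checksum parameters are illustrative additions, not existing syntax of {{citation needed span}}.
<syntaxhighlight lang="python">
import hashlib

def llm_supports(sentence: str, source_text: str) -> str:
    """Hypothetical stand-in for the LLM comparison call.

    A real implementation would ask a model whether the source supports the
    sentence; this placeholder just does a naive substring check so that the
    sketch is runnable."""
    return "supported" if sentence.lower() in source_text.lower() else "source does not support"

def span_checksum(text: str) -> str:
    """Short fingerprint (6 hex digits) so later scans can tell whether the flagged text has changed."""
    return hashlib.sha1(text.encode("utf-8")).hexdigest()[:6]

def annotate(sentence: str, source_text: str, reviewed_on: str) -> str:
    """Return the sentence unchanged if supported; otherwise wrap it in a flagging template."""
    verdict = llm_supports(sentence, source_text)
    if verdict == "supported":
        return sentence  # never alter article text that checks out
    # The extra parameter names below are illustrative only.
    return ("{{citation needed span|text=" + sentence
            + "|reason=" + verdict
            + "|auto-reviewed=" + reviewed_on
            + "|checksum=" + span_checksum(sentence) + "}}")
</syntaxhighlight>
A human reviewer would then either remove the template or amend the flagged text, exactly as described above.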
Running this bot on millions of pages may work out quite expensive, but that's what grant funding is for. Even if it only costs $0.01 per article, that's still about $70,000 to scan the whole encyclopedia of roughly seven million articles - but the gains in reliability and authenticity should be well worth the cost. Whether this is a small amount of money or a large amount of money depends entirely on which end of the telescope you are looking down.
Some extra comments:
Wikipedia Library access could provide access to references which are behind paywalls
While it's at it, it can also check that the given citation template actually matches the content of the cited source - author, title, publication date, etc.
This is the holy grail, right? And if it can verify existing citations, why not also find sources for uncited material? People are working on this latter idea right now, with a human in the loop. But I think your idea makes sense; it only requires test cases to see how well it works in practice, and whether the false positive rate is low enough that editors trust it. — GreenC 17:01, 3 November 2025 (UTC)[reply]
I think putting sources for uncited material into articles would be dangerous. The bot should only flag, never edit. What it might perhaps do is to add suggested source recommendations to talk pages to allow human editors to review those sources themselves. There should always be a human in the loop, or it just becomes encyclopedia-slop, and that's all too easy to generate these days. — The Anome (talk) 17:05, 3 November 2025 (UTC)[reply]
The Anome, "with human in the loop" is confirmed, what I said right. Nobody is advocating for fully automated AI anything, that's obviously a bad idea. — GreenC21:38, 3 November 2025 (UTC)[reply]
@GreenC I've had bad experiences with the latter. There are facts that I know are true due to first-hand experience, but when I've asked an LLM to find me reliable sources so that I can add them to Wikipedia, it confidently feeds me a bunch of links to websites that don't actually verify the statement. --Ahecht (TALK PAGE)19:14, 3 November 2025 (UTC)[reply]
I am not confident in the ability of LLMs to verify citations once the citation use moves further from close paraphrase towards uses that require actual textual understanding. There would need to be a pretty convincing display to support editing articles directly, even if it is just adding a template. Marking this for human review risks creating a whole new backlog as long as the encyclopaedia, plus an explicitly bot template implicitly suggests to readers that LLMs are involved in the editing process. The way I've envisioned such a tool being most useful is something similar to WP:EARWIG, where a report can be generated on request for easy review, perhaps in two neat columns. This would help with things like GAN spotchecking. CMD (talk) 17:16, 3 November 2025 (UTC)[reply]
You might be right: perhaps report generation is the right way to go, rather than direct editing of articles. But I think it should be a within-wiki process (perhaps on the talk page?) rather than an outside-wiki process. Putting it on talk pages would also mean that it could perhaps be flagged for the attention of relevant WikiProjects. — The Anome (talk) 17:28, 3 November 2025 (UTC)[reply]
The talkpage might work for short articles the way some bots post there, but it would be unworkable for longer articles. If you intend it to be something that can be updated to take into account human review (eg. noting that source X actually does support text Y) I could see how it might function on an onwiki subpage that can be updated, but that brings its own set of additional coding complications that a one-off post would not have. CMD (talk) 17:45, 3 November 2025 (UTC)[reply]
In early stages, you'd probably want to run it on a single ==Section== of an article at a time. Nobody's going to actually check hundreds of sources to see whether the AI got it right. WhatamIdoing (talk) 04:18, 4 November 2025 (UTC)[reply]
Given my experience with LLMs, I am not confident in their ability to understand and interpret sources well enough to have any use for this sort of project.
Recently Acrobat has incorporated an LLM that will summarize key points of a PDF document. I tried it on some reports from work and the results were less than inspiring. It did not understand what the most important parts of the document were, did not know the meaning of phrases, at times giving the opposite of what they were saying, and was generally worthless in summarizing the document. ~ ONUnicorn (Talk|Contribs) problem solving 17:46, 3 November 2025 (UTC)[reply]
Based on my understanding of LLM function and architecture, I don’t think there’s any reason to believe that they are suited to directly do the operation proposed by Anomie; perhaps as part of a larger piece of software that incorporates LLM functionality alongside small language model heuristics, it could work. But as a general rule, LLMs don’t verify things, they extrapolate guesses. signed, Rosguilltalk17:50, 3 November 2025 (UTC)[reply]
No. The sheer number of hallucinated references/references that do not support the content they are cited for in LLM-generated articles is convincing proof that LLMs cannot ensure source-to-text accuracy. The systems have no concept of "correct" and "incorrect", only "likely" and "not likely".
I find merit in the idea of a bot which can identify and tag articles for bias by looking for emotional language. Then a human can review it and stop hallucinations. LDW5432 (talk) 03:41, 4 November 2025 (UTC)[reply]
@LDW5432 assuming it wouldn't be too much of a stretch for me to interpret you saying emotional language above to mean "non-neutral language," I wonder if you might find value in the editcheck-tone tag.
I think, based on my recent real-world experience on other projects, LLMs might well do much better at verifying specific claims as they relate to specific documents, rather than verifying them against their rather nebulous knowledge of the world. Using 'thinking' and asking them to explain their rationale for their decision, and then running a separate checker pass to verify that explanation before coming to a final conclusion, should give very much better results than 'true or not?'. This is because where they really excel is as language transformers, not as oracles. I've got API accounts on a variety of LLMs, and doing the Python coding isn't really hard - perhaps I should do some experiments, and see how well this works compared to human review, before people jump to conclusions about how well it would work. — The Anome (talk) 21:55, 3 November 2025 (UTC)[reply]
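As a rough sketch of that two-pass idea (a verdict with rationale, followed by a separate checker pass over the rationale), with call_llm() as a hypothetical placeholder for whichever model API is actually used:
<syntaxhighlight lang="python">
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM API is in use (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def verify_claim(claim: str, source_text: str) -> dict:
    # Pass 1: ask for a verdict *and* the reasoning behind it, with verbatim quotes.
    first_pass = call_llm(
        "Does the following source support this claim? Answer SUPPORTED, "
        "UNSUPPORTED or CONTRADICTED, then explain your reasoning using "
        f"verbatim quotes from the source.\n\nClaim: {claim}\n\nSource:\n{source_text}"
    )
    # Pass 2: a separate checker pass reviews the explanation itself before a final verdict.
    final = call_llm(
        "Below is a verdict and rationale about whether a source supports a claim. "
        "Check whether the rationale is actually consistent with the source and "
        f"give a final verdict.\n\nRationale:\n{first_pass}\n\nSource:\n{source_text}"
    )
    return {"first_pass": first_pass, "final": final}
</syntaxhighlight>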
I think something like this is a good idea but I'm not super convinced by the specific implementation above.
In particular I don't want a bot adding {cn} tags to the actual article itself. If the point of this is to do bot-annotation not bot-editing, a {cn} tag is absolutely bot-editing. It's taken widely to mean that the content that is tagged is dubious, and for good reason. (Also I'd like to point out here that if we were going to do this the actual template we'd want to use is {failed verification}.)
Ideally we'd put this information into a separate list somewhere so a human can check it before any editing to the article actually happens. If that's not practical, the tag we'd actually want is a custom tag that says something like [a bot reviewed this claim and thinks it failed verification], though obviously shorter than that. Loki (talk) 22:53, 3 November 2025 (UTC)[reply]
@LokiTheLiar: could you please say a bit more about what you "see" this list looking like and being used for? Asked another way: what information can you imagine being available within this list? How/when would it get updated? Where could you imagine this list living? Who is looking at this list and what action(s) are they taking on each item within it? PPelberg (WMF) (talk) 22:59, 7 November 2025 (UTC)[reply]
Verifying specific claims is definitely something that would be interesting, and, while I'm not certain that this will work out, I do believe that it is absolutely worth a shot to try to develop it. "Is X sentence supported by Y text" is a much more specific task than "write a Wikipedia article about Z", and one for which LLMs could potentially be used (and even, if needed, fine-tuned). It will take some time before we have something that is ready (and trusted enough) to be run at the scale of the encyclopedia, and it might not turn out to be reliable enough to be worthwhile, but it might just work, and I would be glad to help with this project if you want to go forward! Chaotic Enby (talk · contribs) 03:31, 4 November 2025 (UTC)[reply]
Yes. It's the specific nature of the problem that is interesting here, and makes it more plausible that this might actually work, by avoiding treating the LLM as an oracle. Thanks for the offer of help, I'll see what I can do. — The Anome (talk) 03:39, 4 November 2025 (UTC)[reply]
I was unaware of this discussion when I vibecoded today a Python script that pulls the text of a list of Wikipedia articles, inputs it into the ChatGPT model of your choice (I used gpt-5 mini), looks for 1 factual inaccuracy, and spits out the results into a wikitext table. With an n of 4, I found no issues: one article where it didn't find anything, 2 articles where it found clear inaccuracies, and 1 article where it found something that, while supported by a source, may be incorrect based on the weighting of other sources. Because I wrote it to use the OpenAI API, I didn't run it too widely, though my back-of-envelope calculations suggest it could be run fairly economically (especially if I adjusted the prompt to cut down on the verbosity of the output, which is the most costly part). Best, Barkeep49 (talk) 03:52, 4 November 2025 (UTC)[reply]
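For anyone curious what such a script looks like in outline, here is a minimal sketch (not Barkeep49's actual code, which may differ): it pulls an article's plain-text extract from the MediaWiki API, asks an OpenAI chat model for at most one suspected inaccuracy, and emits a wikitext table row. The model identifier is left to the caller.
<syntaxhighlight lang="python">
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_plain_text(title: str) -> str:
    """Fetch an article's plain-text extract from the MediaWiki API."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "prop": "extracts", "explaintext": 1,
                "titles": title, "format": "json"},
        timeout=30,
    )
    pages = resp.json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

def one_inaccuracy(title: str, model: str) -> str:
    """Ask the model for a single suspected factual inaccuracy (it may report none)."""
    chat = client.chat.completions.create(
        model=model,  # pass whichever model identifier you use
        messages=[{"role": "user",
                   "content": "Identify at most one likely factual inaccuracy in this "
                              "Wikipedia article text, briefly, or say 'none found':\n\n"
                              + fetch_plain_text(title)}],
    )
    return chat.choices[0].message.content

def wikitext_row(title: str, finding: str) -> str:
    """One row of the output wikitext table."""
    return f"|-\n| [[{title}]] || {finding}\n"
</syntaxhighlight>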
I am much less comfortable with using LLMs to measure bias, as it is less likely that they will correctly weigh dozens of RS, and they might just as well flag words that carry some emotional weight without checking whether sources justify them. Especially since sentiment analysis is a much more common task and one which the model is likely to mix up with bias analysis. Plus, it's much harder to get an AI to search for, retrieve and synthesize many sources vs to read one given source they get as input. Chaotic Enby (talk · contribs) 04:04, 4 November 2025 (UTC)[reply]
In addition to what CE said, the biggest issue with using LLMs for bias analysis is that what sounds neutral to an outside observer and what is actually demanded by our WP:NPOV policy can be wildly different.
So for instance, the last sentence of the first paragraph of Zionism:
Zionists wanted to create a Jewish state in Palestine with as much land, as many Jews, and as few Palestinian Arabs as possible.
is not neutral-sounding at all, but because the NPOV policy is about reflecting the balance of sources and not some kind of view-from-nowhere, it's not only in compliance with NPOV but NPOV basically forces us to say it like that. The number of scholarly sources that support that statement is more than I've seen for any other claim on the whole wiki, so there's really no way we could even hedge it.
And especially in articles in contentious topic areas we have tons of cases like this, where high quality scholarly sources agree on something that doesn't sound particularly neutral in a lay political context. Loki (talk) 04:18, 4 November 2025 (UTC)[reply]
You don't even have to look at contentious topics. "There is no such thing as ghosts" is not "neutral" to the billions of people worldwide who believe in ghosts.
I spent several years at Breast cancer awareness helping editors grasp the difference between what reliable sources said on the subject and what the popular opinion is. After all, neutral is what the best sources say, and while all significant viewpoints need to be represented, those viewpoints are best supported by scholarly sources instead of fundraising/promotional sources. (With Komen's near collapse a few years back, the pressure of Pinktober has decreased.)
Towards the end of every October, I check Poisoned candy myths, because we sometimes have people who are just sure that it's "not neutral" to plainly state that no child has ever died because a stranger gave poisoned candy to trick-or-treaters. And almost every December, there's someone complaining that Santa Claus is not neutral, since it (gently!) says that Santa is "legendary" instead of "real". Most of them are afraid that their children will read the Wikipedia article and discover the facts (but kids who are capable of understanding that article are old enough that believing in Santa would be an age-inappropriate belief). WhatamIdoing (talk) 04:46, 4 November 2025 (UTC)[reply]
As others have said, you cannot just ask an LLM for fact checks or reliable sources that are new (as in, external to Wikipedia's current text) using a pretrained model with its existing knowledge base and a limited tool ability to web search or call the Wikipedia API. It will provide the same old ones (from Wikipedia) or it will hallucinate new ones that don't exist. But what you can do, and it works reasonably well, is download a bunch of PDFs or web pages and upload them and tell the LLM to read them all and provide you with verbatim quotes and page numbers and authors and dates for everything alongside whatever new generated text it makes - a report, or summary, or fact checks or tasks, in a constrained mode. Then you can check those with non-LLM code or by hand to eliminate hallucinations. Some will even highlight the PDF to show the relevant passages and make checking easier; YMMV. You can also give a document to an LLM, along with a statement, and ask it if the document supports the statement, and to provide verbatim proof. This produces fewer hallucinations and they are caught. I think having a bot to do this is a good idea. It could leave messages on a talk page or in its own set of user pages or in an interface. It would speed up improvement of thinly patrolled and maintained articles and it's a way to use LLMs for good without actually generating the article text itself, which does not work well and shouldn't be done. Andre🚐 05:14, 4 November 2025 (UTC)[reply]
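The non-LLM checking step mentioned above (confirming that the model's "verbatim" quotes really occur in the uploaded document) is easy to automate. A minimal sketch, using nothing more than whitespace-normalised substring matching; the names here are illustrative:
<syntaxhighlight lang="python">
import re

def normalise(text: str) -> str:
    """Collapse whitespace and lowercase so minor formatting differences don't matter."""
    return re.sub(r"\s+", " ", text).strip().lower()

def quotes_check_out(source_text: str, quotes: list[str]) -> list[tuple[str, bool]]:
    """Return each claimed verbatim quote with whether it actually occurs in the source."""
    haystack = normalise(source_text)
    return [(quote, normalise(quote) in haystack) for quote in quotes]

# Any quote the model invented comes back flagged as False, e.g.:
# quotes_check_out(pdf_text, ["the committee voted 7-2 in favour"])
</syntaxhighlight>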
Nope, it doesn't work that way. Even when given sources, AI "summaries" usually introduce their own interpretations of the material -- which frequently follow the same contours as the usual WP:AISIGNS slop, just in this case put in someone else's mouth.
Here's an example from Grokipedia (choosing it because we know unambiguously it's AI text, because it really likes to claim it "fact-checks" everything, and also to dunk on Grok). This sentence from their "Woman" article -- Toni Morrison's Beloved (1987), drawing on the historical trauma of slavery, earned the Pulitzer Prize in 1988 and contributed to her 1993 Nobel Prize in Literature, emphasizing African American experiences through nonlinear storytelling -- is cited to this Reddit poll. Nothing after "Toni Morrison's Beloved" appears in that thread. Gnomingstuff (talk) 07:34, 4 November 2025 (UTC)[reply]
That is exactly what AI is incapable of doing: checking sources. Don't believe me? Pick a big topic on Crockipedia. Go through the "sources" at the bottom in the form of raw URL links. Start counting how many are inaccurate or utter fabrications. Have fun with it. We need to keep AI as far away from WP as possible as its enshittification of the internet proceeds apace. Carrite (talk) 08:47, 4 November 2025 (UTC)[reply]
Pointing to one particular human editor who is doing a bad job on Wikipedia, to criticize the whole of the editors collectively, would be absurd. I believe it's similarly absurd to use Grok as an example and extrapolate the claim to all AI. Nobody here is saying Grok is good, and nobody is suggesting we use Grok. JezzaHehn (talk) 14:40, 12 November 2025 (UTC)[reply]
On the contrary, the motivation behind this is to avoid exactly the sort of errors found on Grokipedia. This is a bug detector, not an article generator. I've been experimenting with the following procedure:
Select an article using 'Random article'
Get Claude to perform a review of that article, giving it the article's wikitext as an input (Claude, and I imagine other LLM agents, has been blocked from accessing Wikipedia directly.)
Based on that, tell it to perform a set of web searches to find sources to confirm or deny any factual errors it thinks it may have found. (It incorrectly 'believes' that it cannot access the web unless actually told to.) It is forbidden to use Wikipedia as a source. I may later add more stringent criteria on sources.
Based on the output of those searches, perform a review of the claims based on the evidence it has found
Finally, based on that, select the single correction out of the remaining errors that it is most confident about.
This multi-stage systematic approach has worked very well. Among other things, it has successfully made Claude detect and correct its own initially mistaken error reports, on finding that the sources actually back the article, leaving only valid reports of minor typographical errors. I've hand-reviewed all the remaining error reports, and every one of them has been accurate.
This should work as well with any other LLM, and it would probably make sense to use different LLMs to perform reviews to eliminate common-mode errors. — The Anome (talk) 10:45, 6 November 2025 (UTC)[reply]
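Roughly, that staged procedure can be expressed as a small pipeline. In the sketch below, ask() is a hypothetical stand-in for a call to Claude (or any other LLM) with web-search tools enabled, and the prompts are paraphrases of the steps above rather than the exact ones used:
<syntaxhighlight lang="python">
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM with web-search tools enabled."""
    raise NotImplementedError

def review_article(wikitext: str) -> str:
    # Stage 1: initial review of the raw wikitext only.
    candidate_errors = ask("Review this Wikipedia article's wikitext and list possible "
                           "factual errors.\n\n" + wikitext)
    # Stage 2: targeted web searches to confirm or refute each candidate error.
    evidence = ask("For each possible error below, search the web (excluding Wikipedia) "
                   "for sources that confirm or refute it, quoting them.\n\n" + candidate_errors)
    # Stage 3: re-review the claims against the gathered evidence, discarding
    # initial reports that the sources actually contradict.
    remaining = ask("Given this evidence, which of the reported errors still stand?\n\n" + evidence)
    # Stage 4: pick the single correction the model is most confident about,
    # which is then handed to a human for review.
    return ask("From the remaining errors, select the single one you are most confident "
               "about and explain why.\n\n" + remaining)
</syntaxhighlight>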
This sounds promising, but I think it would be better if the prompt also included the full text of all of the available sources. Then you could ask it to verify each claim based off the citation and output chunks of the source which verify the claim. That would massively speed up the manual process of verification, but still leave it to humans to make the final evaluation. Hopefully including the sources would reduce the hallucinations as it would only be working with the prompt. SmartSE (talk) 13:41, 6 November 2025 (UTC)[reply]
I wonder if it's possible to compare the reliability of such a process to human reliability. AIs are error-prone, but more error prone than human reviewers? I am not so sure, especially when dealing with long articles where humans get tired/overwhelmed/eyes-glazing-over after a while. Granted, my personal idea of using AI would be to compare each claim to its reference, not the entire article as a whole. That denies us several benefits of whole-article comparisons but might (or might not) produce fewer hallucination errors. Jo-Jo Eumerus (talk) 08:20, 7 November 2025 (UTC)[reply]
When it comes to the field of medicine, especially but not only radiology, AI is being rolled out to the "real world" quite tangibly, including GPTs and other LLMs, not just image analysis models. (citation for this claim) So assuming we do not consider the accuracy of Wikipedia to be more vital than the accuracy of medicine being practiced on humans, I'm confident LLMs can be used to increase the accuracy of human editors who are trying their best to add good citations. JezzaHehn (talk) 01:13, 11 November 2025 (UTC)[reply]
I'm not sure this claim really makes sense (even though I'm fairly positive about this idea in general).
By analogy, computers have been used to do math in all fields including medicine for a long time, and have been highly reliable at that since the 50s. However, despite this fact, computers have never been useful at writing encyclopedia articles.
Just because an LLM is good at one thing doesn't mean it's good at some totally different thing. We're not doing medical scans here, so proof that AI is good at medical scans isn't very relevant. Loki (talk) 01:35, 11 November 2025 (UTC)[reply]
To clarify my point, I bring up the distinction between "image analysis models" and "GPTs and other LLMs" because the use of AI in radiology is not limited to the analysis of the pixels in scans, but also includes textual summaries of medical information, intended to be read by medical professionals. The crux of my point is that if LLMs, when handled with an appropriate amount of delicacy, are good enough for medicine with the highest demands of accuracy, then we would have a great bit of hubris to claim that a well-handled LLM isn't good enough for Wikipedia. JezzaHehn (talk) 14:17, 12 November 2025 (UTC)[reply]
The key part in that last sentence is well-handled. The community is hostile to LLMs mainly because poorly handled LLM-generated output is bad, and the vast majority of editors using LLMs do not properly check the output. More often than not, users just copy-paste raw LLM output to create articles; this page, which I preserved in my userspace, is an example of what this raw copy-pasted LLM output looks like, complete with communication intended for the user and Markdown formatting instead of wikitext.
I feel as if I should add onto this. I am the one who originally created Wikipedia:WikiShield and added the LLM detection. Firstly, there are actually two different LLM prompts being used: one that checks the edit, and one that checks for usernames that go against Wikipedia:UAA. Both have issues, and need to be fact checked. But, when used properly, they can be incredibly useful. There have been multiple cases where I have missed a username that the LLM caught, and there have also been multiple times where I would have missed some pretty obvious vandalism if it weren't for the LLM detection. However, it still has a LOT of issues. One of the funniest ones is that the Wikipedia:UAA LLM flagged @Pro-anti-air (one of the co-creators of Wikipedia:WikiShield) not once, but twice. So, yes, LLMs can be useful; but they still require a lot of checking on the human's part. My stance is this: The LLM should not directly change your mind on anything. There is a reason we have human editors and not a bunch of AI bots. What LLMs can be used for, is to present information you may have missed, which can then be taken into account and analyzed by the human editor. – LuniZunie ツ (talk) 01:59, 13 November 2025 (UTC)[reply]
I agree that some type of LLM add-on will increase neutrality on Wikipedia. And if a human can review what the LLM does then even better. LDW5432 (talk) 01:44, 11 November 2025 (UTC)[reply]
Should Wikipedia have a neutrality policy that gives more specific guidance on how to describe genocide or alleged genocide? If so, what should it say? (A new WP:GENOCIDE was proposed on Talk:Gaza genocide, where many comparisons have been made to other genocide articles. I am not expressing an opinion on this question, just moving the conversation here.) -- Beland (talk) 01:43, 4 November 2025 (UTC)[reply]
To clarify (and based on comments so far), this is not necessarily a suggestion that Wikipedia should come up with its own definition of genocide and just use that (but advocate for that if you want). It could instead be a guide that points editors to common definitions used in the field, documents technicalities and sensitivities of various terminology, helps identify expert sources, and helps editors apply NPOV and other policies to statements. On the reader side, we have some of this information in List of genocides and Genocide definitions. We could also make this broader than just genocide to include other violent or otherwise sensitive types of event, add some words to Wikipedia:Manual of Style/Words to watch, or abandon this whole idea because of instruction creep or some other reason. -- Beland (talk) 09:27, 4 November 2025 (UTC)[reply]
A serious question: why create a guideline for this particular label and not others? What purpose would it achieve given that guidelines are superseded by policies that are usually diligently applied to contested labels? M.Bitton (talk) 01:54, 4 November 2025 (UTC)[reply]
Would you prefer to see genocide, or some broader list of labels added to MOS:LABEL? Or if we're creating a new page, would you prefer it to cover a broader scope, like violence or government acts or something else? -- Beland (talk) 02:04, 4 November 2025 (UTC)[reply]
Oppose. Using a different standard for genocide than for other events would itself be a form of bias, in favor of those who argue that genocide is exceptional rather than a recurrent theme in history. (t · c) buidhe 01:56, 4 November 2025 (UTC)[reply]
One option is to write up guidance that maintains the current policy, but just explains in more detail how it applies to genocide. Another option would be to write an explanatory extension with a broader scope - I think some editors suggested violent acts in general. Genocide and murder, for example, have technical legal definitions which make them different from mass killing and individual killing; it might help editors to have an explanation of the special considerations around those terms for that reason. -- Beland (talk) 02:06, 4 November 2025 (UTC)[reply]
From my conversations with genocide scholars, the legal definition(s) of genocide are highly criticized for a number of reasons. I don't know if enshrining these into Wikipedia policy is a good idea (and indeed, as buidhe notes may itself violate NPOV). Katzrockso (talk) 02:11, 4 November 2025 (UTC)[reply]
It sounds like you're thinking the guidance would be "if it's not legally considered a genocide, Wikipedia can't say it's a genocide". That doesn't have to be the case. Given this concern, what would you advise editors on how to use the word "genocide" in Wikipedia's voice? -- Beland (talk) 02:15, 4 November 2025 (UTC)[reply]
No different than the advice for editors on any other topic - through consensus-building and the weighted balance of reliable sources. That this topic area needs a specific guideline is not clear to me at all. Katzrockso (talk) 04:16, 4 November 2025 (UTC)[reply]
Yes, it would be a form of bias to pick and choose between the many definitions of genocide used in academic research. Relatively few genocide scholars besides the lawyers actually use the UN Convention definition. (t · c) buidhe05:58, 4 November 2025 (UTC)[reply]
It's also possible to write a guideline that's biased in favor of those who argue genocide is not exceptional, depending on the wording, no? If you were going to advise editors on how to use the word "genocide" in Wikipedia's voice, given this concern, what would you say? -- Beland (talk) 02:52, 4 November 2025 (UTC)[reply]
Support Emotionally charged words like "massacre" and "genocide" should have a NPOV guideline for the project. The resulting policies should be added to WP:WORDS and other relevant sections. LDW5432 (talk) 03:18, 4 November 2025 (UTC)[reply]
I would oppose having one focused on a single term and giving our own definition of the term (which might not follow what reliable sources use), although a wider guideline about emotionally charged words would absolutely be helpful. I don't think it should be part of Wikipedia:Manual of Style/Words to watch as these aren't words that should be avoided (and we don't want to introduce bias by toning down language), but it could be a separate guideline cross-linked from there. As for the content of the guideline, it shouldn't write our own definition for these terms, but give indication as to how we should best follow reliable sources. For example, how much weight should be given to experts vs media vs governments, what level of consensus (affirmative consensus vs silent consensus) is enough for these labels, when to attribute claims vs use wikivoice, or whether we should have separate guidelines for titles and prose (cf. Tamil genocide for an example where the title uses "genocide" but the prose clarifies it as a specific framing rather than a consensus). Chaotic Enby (talk · contribs) 03:26, 4 November 2025 (UTC)[reply]
I agree with this take. There are cases where having a literal flowchart can be useful (see WP:DEATHS) but I don't think this is one of them. Instead we should have a more general guideline about what kinds and numbers of sources we need to have to justify charged terms. Loki (talk) 04:22, 4 November 2025 (UTC)[reply]
As I questioned above (funny that you replied to me there as I was writing my comment here), what necessitates an extended guideline for this topic that isn't already covered by our existing policies and guidelines? All of the things you describe seem to be adequately covered by our existing guidelines, from what I have seen. At best, what you describe here seems to warrant an essay or a page on Wikipedia:WikiProject Genocide, not a guideline. Katzrockso (talk) 04:22, 4 November 2025 (UTC)[reply]
While I agree that it would proceed from our existing guidelines, there have been recurrent discussions about how exactly they should be applied to represent sources about possible genocides, and it could be good to have some reference points to avoid circling around the same arguments again and again. Chaotic Enby (talk · contribs) 04:28, 4 November 2025 (UTC)[reply]
I agree that it would be useful to have somewhere to collect common thinking on the topic to avoid repetitive discussions, but this is when an essay is warranted, not a new guideline. Katzrockso (talk) 04:32, 4 November 2025 (UTC)[reply]
Community consensus on the application of our existing guidelines can be established through the discussions on each particular page in question; there is still no compelling reason for a new guideline on this topic area. How particular guidelines and policies are applied within particular topic areas is typically covered by essays (I only see topic-specific guidelines at WP:LGL for naming conventions, notability, and style - largely to the extent that these formalize other non-Wikipedia guidelines), which don't "carry the weight of community consensus", but still fulfill the rationale you provided for having a guideline. I worry about instruction creep and the fact that this seems like it might be the first content guideline that applies to a contentious topic area. Katzrockso (talk) 04:48, 4 November 2025 (UTC)[reply]
Good information can be helpful no matter what tag is at the top of the page. For example, I suspect that many editors would benefit from a handy summary of the difference between various legal definitions of genocide vs current scholarly understandings. WhatamIdoing (talk) 04:54, 4 November 2025 (UTC)[reply]
I see both of your arguments, and agree that a guideline might be too heavy-handed for this, although I'm still worried that an essay might be ignored as, well, "just an essay" even if it carries broad community consensus. Instruction creep is definitely something to be careful of, so I'm absolutely open to non-guideline alternatives. Chaotic Enby (talk · contribs) 05:01, 4 November 2025 (UTC)[reply]
It's also possible for an essay to become a guideline if it's widely supported and followed. Just getting something out there that can be iterated may be more productive than arguing too long about what might be said in an abstract way. -- Beland (talk) 07:34, 4 November 2025 (UTC)[reply]
Yes, I started writing something earlier about how I'd rather see specifically what we are talking about in order to evaluate whether or not it should be a guideline. It's very difficult to support such an abstract idea of a guideline, for me. Katzrockso (talk) 09:52, 4 November 2025 (UTC)[reply]
Truthfully, I feel we are really too close to this conflict and that everyone has their own biases in determining whether or not the Gaza War is a genocide. While the discussion on that talk page has raised examples of sources pushing back on terms used to describe the Armenian genocide and similar massacres/genocides, other scholarly content assessing these events was also produced decades after the event, and with sufficient distance to discuss the event objectively. Right now, I feel there's really too much emotion across all parties (and potentially some antisemitic/anti-Israel/Islamophobic bias) to really properly assess the conflict, especially since this is part of a broader contentious topic.--ZKang123 (talk·contribs) 04:03, 4 November 2025 (UTC)[reply]
Oppose. I don't think it has been established that a guideline is necessary here vs the already existing guidelines and policies on this topic that address this adequately. Katzrockso (talk) 04:28, 4 November 2025 (UTC)[reply]
Oppose as it's not really a good idea to treat genocide differently from similar words like massacre, etc. Instead, what we should be doing is not trying to rush to name such events in Wikivoice until many years have passed and we can then judge what the academic consensus is, assuming there is one. It is the same approach as how we handle scientific topics (for example, we do not assert COVID-19 was zoonotic, but instead say the scientific consensus is that it was zoonotic and did not have a lab origin). Masem (t) 04:49, 4 November 2025 (UTC)[reply]
Comment - I am not opposed, but I expect the discussion over how to formulate the policy will be heavily weighed upon by the question of whether the Gaza genocide will make the cut. Additionally, I worry that the definition which comes out of this will be such that it is effectively impossible to call a genocide in Wikivoice until decades after the fact. There seems to be a group of editors (Jimbo included) which believe that the opinions of directly implicated governments and affiliated NGOs should weigh strongly against the designation. Such a policy would be very corrosive to our ability to describe objective reality. StereoFolic (talk) 05:04, 4 November 2025 (UTC)[reply]
As you point out, I suspect that any such discussion will simply be the relitigation of every previous genocide discussion combined and multiplied. I am not sure how productive such a discussion could be or whether meaningful consensus could result from it. Katzrockso (talk) 05:06, 4 November 2025 (UTC)[reply]
Yes, I worry about this too. There might be a push to adhere to strict rules, for example confining it to genocides that have been litigated at the ICJ or that have had warrants issued for genocide at the ICC or other tribunals, which would ignore extensive studies into genocides of Native Americans, for example, just because they predated certain international conventions. Or there might be an appeal to restrict such calls to events where enough time has passed for consensus, maybe putting into question something like the Yazidi genocide. If there is to be a consensus, it will never be just custom-made to exclude this one event; it will inevitably lead to more genocide denial down the line. Tashmetu (talk) 08:35, 4 November 2025 (UTC)[reply]
One way to resolve these worries would be to propose text you do want to see, and make some enlightened arguments. I think if a guideline has to cover all genocides and alleged genocides, it becomes difficult to argue for an unfair rule to favor a preferred outcome in a partisan fight without that becoming somewhat transparent as a tactic as it fails to fit less controversial cases. Or if the drafting process goes off the rails and produces something unacceptable, there's always the option to vote against making it a guideline. -- Beland (talk) 08:53, 4 November 2025 (UTC)[reply]
@Beland: What would it cover that WP:LABEL and other guidelines already don't? Let me give a somewhat related example: there has been a liberal use of dictator being added to a lot of BLPs and otherwise without discussion, sources or the weighing thereof. But that is perfectly countered by extant guidelines like LABEL, which I have argued for and used in discussion. Would we then need a separate WP:DICTATOR guideline? I think not.
But to add, I think both the genocide proposal and the dictator example given by me can be covered in use cases (for when and how to voice these) at the extant guideline pages. Gotitbro (talk) 06:35, 4 November 2025 (UTC)[reply]
I think "dictator" is a good example of a value-laden label, but I disagree that "genocide" functions in the same way. Whether or not someone is a dictator is not typically the subject of significant scholarly analysis (there are exceptions here and there, especially in the historical literature), but whether or not an event is a genocide is. This makes "genocide" distinct in that while it may have value-laden implications, the actual usage of the term in Wikipedia should be governed by e.g. our other content guidelines that emphasize WP:RS. Another important distinction is that genocide refers to "events", while MOS:LABEL examples refer to people/groups. Part of the justification for MOS:LABEL is WP:BLP, which doesn't apply here for an event (genocide). Katzrockso (talk) 07:38, 4 November 2025 (UTC)[reply]
Well, there are governments being accused of genocide, which are made of living people, some of whom have international criminal warrants issued against them, and some of whom are aghast at what has been happening.
But it's true that events are just a different class of thing than people, which may require different advice. For example, whether to describe an event as a death, killing, murder, or manslaughter has to take into account whether the cause of death was indisputably another person, and whether a specific legal category has been assigned to the killing through a conviction. Labeling a shooting as a terrorist attack or militant action or liberation attempt may have similar considerations to labeling someone a terrorist. Is this transportation event a collision or an accident?
I can actually brainstorm a fair number of event-related words and phrases to watch: direct action, sabotage, protest, activism, eco-terrorism; civil war, rebellion, insurgency, terrorism, resistance; strike, supply disruption, work stoppage, lockout; occupation, liberation, invasion, annexation, reunification, restoration; coup, revolution, liberation, regime change, change in power; parade, protest, demonstration, riot, uprising, insurrection, rebellion.
There are also more people-related words we don't cover, but which are sensitive: refugee, asylum seeker, alien, immigrant; homeless, unhoused person; "discovery" of the Americas.
We could just expect editors to educate each other about the technical considerations and connotations and cultural sensitivities around various words and otherwise expect them to follow sources or common sense, or try to document terminology for sensitive events for reference and to guide discussions toward faster and more predictable consensus. We could also scope such an expansion broadly - whatever we can think of that's been the subject of e.g. a page move dispute or lede RFC - or narrowly, just for words where there's a burning need to ensure they are treated consistently across many articles, either because we are being inconsistent or we are just arguing too much and codifying where we always land would save time.
While yes, governments are made up of people and warrants have been issued for arrests, the application of the term "genocide" to events doesn't have such a direct implication on people in a way that is relevant to Wikipedia. From determining that any particular event is a genocide, I don't believe we have gone on to attach these labels to individual people; we still keep the type of attribution requested by MOS:LABELS.
WRT the event-related words to watch, this is good pushback; I think many of those words could be broadly construed as sensitive and value-laden labels that are subject to the same sorts of disputes as the ones in MOS:LABELS, so parts of my argument aren't quite as strong there. I do think that genocide, ironically, is unique in that its extension is uniquely studied in academia - other than maybe terrorism, I can't think of any large body of academic research that consistently studies whether or not any particular event constitutes a given type of event. In this case, genocide is if anything the one category of event that does not need a specific guideline to govern its use per MOS:LABELS or anything similar, imo. Katzrockso (talk) 09:38, 4 November 2025 (UTC)[reply]
LABEL is broad enough to cover instances beyond bios/orgs to include events. That is the reason MOS:TERRORISM points to it, for example. These to me are close enough to not warrant a separate adjudication.
@Beland: "Are you voting for adding "genocide" and "dictator" as examples at MOS:LABEL, then?" Yes. And if it is not considered bloaty most of the rest of the examples given by you above. Gotitbro (talk) 10:49, 4 November 2025 (UTC)[reply]
Oppose: We should primarily go with what the reliable genocide scholars say regarding each case, and follow the official standards in this world, not develop our own standards that go against them, especially as this can easily be abused to lessen said standards so much that we do not recognise serious crimes against humanity. David A (talk) 08:29, 4 November 2025 (UTC)[reply]
Do you think it would be helpful to have a WP:GENOCIDE that documents various genocide definitions that should be referenced by articles? And maybe gives some advice about where to look for reliable genocide scholars or how to figure out which are and aren't reliable? Do editors need advice on how to evaluate statements made by scholars and what sort of sources to discount from "scholarly consensus" or to report with attribution (like governments involved in a conflict)? -- Beland (talk) 08:47, 4 November 2025 (UTC)[reply]
My next question was going to be, what are the major definitions we should be highlighting? Then I thought, oh, maybe we could just link to Genocide definitions...but there are so many definitions there! It sounds like the 1948 Genocide Convention is almost universally used for legal purposes. Do scholars tend to only reference that, or are there other common definitions used in the academic literature? Or in other reliable sources, for that matter? -- Beland (talk) 09:00, 4 November 2025 (UTC)[reply]
List of genocides may be a good anchor; that list is scoped to only include events "recognized in significant scholarship as genocides". Perhaps if it isn't on that list, it shouldn't be described in wikivoice as a genocide. That's partly just a matter of synchronization, but it could also serve as a public documentation of what our threshold for that is, with sources that can be used for easy comparison.
That list article also has a good summary of definitional controversies. It seems we now think of ethnic cleansing and politicide as distinct atrocities from genocide, and I'm not sure how "forced pregnancy, marriage, and divorce" is treated in modern times. The article also says: "The academic social science approach does not require proof of intent, and social scientists often define genocide more broadly." I find that a bit mysterious and it may help editors to clarify that, and help readers to explain how that relates to inclusion on the list.
See my comment below on how it may be described outside of the legal definition. I was active in crafting the new inclusion criteria for that article. I would just clarify that while "the academic social science approach does not require proof of intent", most definitions from this area still include intent, and treat it with some primacy. -- Cdjp1 (talk) 09:25, 4 November 2025 (UTC)[reply]
While the legal definition is engaged with regularly and thoroughly in the literature, it is also highly contentious, as the majority tend to view it as too restrictive (due in part to the political climate it was developed under), though a minority also view it as too broad. These views have existed since prior to the adoption of the Convention, and are not just "humanities and social science scholars" but are also expressed, again regularly, by legal scholars in literature. There are a couple of definitions (more aptly called frameworks, in my opinion) that scholars will gravitate towards, and these definitions come from the more prominent individuals in the field. But there is no singular standard alternative used instead of the legal definition. -- Cdjp1 (talk) 09:19, 4 November 2025 (UTC)[reply]
Yes, there has long been a discussion around how Lemkin expended every last bit of his political capital to get the UN to adopt a definition of genocide that gutted its meaning. Katzrockso (talk) 09:45, 4 November 2025 (UTC)[reply]
Comment - While I support the notion in theory, and have been mulling over the idea of starting one myself for a few years now (I do have a draft), I have not pushed forward with it as it seems as though we would ultimately end up in OR territory with it. If we do start working on an essay (with the view of it eventually becoming a guideline or policy) I will engage with the matter, but for now, I can not make a vote either way to it existing. -- Cdjp1 (talk) 09:14, 4 November 2025 (UTC)[reply]
OR because we'd be coming up with our own definition of genocide? That's not the sort of thing we really do; as David A and I were talking about above, I would expect it to be more about looking at existing definitions of genocide and helping editors navigate them and apply NPOV and other policies to them. -- Beland (talk) 09:18, 4 November 2025 (UTC)[reply]
I understand that isn't what we (should or otherwise) do, as I said though, when I try to play out pushing and developing such an essay, I ultimately end up seeing us discussing the matter in ways I consider to be within OR territory. This view could (and hopefully) be ultimately wrong, but is the reason why I have not pushed forward on it. -- Cdjp1 (talk) 09:23, 4 November 2025 (UTC)[reply]
Editors seem to be pretty good at yelling "original research!" and deleting as needed, so I expect we'd be able to distinguish between that and making better editorial decisions, which actually still requires some thinking and occasional guidance. I think we're at the point of working on this now...it seems better to be concrete and vanquish fears about what might happen by going ahead and not doing the wrong thing or demonstrating we can recover from it. So I'd welcome a draft even if we decide it's not a direction we want to go in. -- Beland (talk) 09:34, 4 November 2025 (UTC)[reply]
There were suggestions of making up our definition/analysis of genocide in the other thread, I believe VPP suggested something like this. Katzrockso (talk) 09:40, 4 November 2025 (UTC)[reply]
@Very Polite Person:, I think this is referring to you. Are you interested in a guideline that says "if reliable sources say X, Y, and Z have happened, it can be called a genocide in Wikivoice", or something that references existing legal and academic definitions and helps editors look for reliable sources that reference those (and maybe documenting considerations and sensitivities around terminology, etc.)? -- Beland (talk) 09:50, 4 November 2025 (UTC)[reply]
Yes, this was in reference to Very Polite Person from this comment in particular, and I hope I didn't misrepresent their position (which is legitimate even if I disagree with it). Thanks for tagging them into this discussion. Katzrockso (talk) 09:55, 4 November 2025 (UTC)[reply]
This topic is contentious and argued enough that at least an essay with some centralized guidance and summaries of previous discussions and community consensus would be useful. Not as a tool for winning arguments or forcing specific practices, but as a shortcut to common understanding. ~2025-31078-40 (talk) 12:23, 4 November 2025 (UTC)[reply]
Support - My vision for such a centralized policy page would be a place to explain the synthesis of various Wikipedia policies related to covering the topic of genocide on Wikipedia. It should explicitly not be trying to define genocide. Instead, it should focus on addressing commonly raised issues. For example, it can explain that Wikipedia policy does NOT require that the ICJ declare that an action is genocide in order for Wikipedia to use Wikivoice to refer to it as genocide. It should clarify that genocide studies is an academic field and that the opinions of scholars in that field should be given more weight (per NPOV) than government officials asserting denial. It should explain that Wikipedia is not limited to only using the legal definition of genocide, and instead it is up to reliable sources to use the word, which we can then attribute. It should explain that Wikipedians should refrain from original research and avoid synthesis of facts to conclude genocide or lack thereof (and that talk pages should not be filled with Wikipedians soapboxing their own assessments of events, such as "the low/high number of deaths means that it [is] [is not] genocide!"). These are just a few suggestions, but the general theme is that it should not be trying to (1) authoritatively define genocide or (2) declare certain events as genocides. JasonMacker (talk) 15:03, 4 November 2025 (UTC)[reply]
i agree with this. it could also address "it's a war not a genocide" and "not enough people have been killed to count as a genocide" Rainsage (talk) 23:15, 4 November 2025 (UTC)[reply]
It seems to me the root of the matter is that some folks don't understand that current enwiki consensus/practice is usually to weigh academic consensus higher than government and news sources. This can cause a lot of confusion and even indignance in topics where there are a lot of news and government sources saying X, and a lot of academic sources saying Y, and then we write our articles from the Y POV. The same thing happens constantly in WP:FRINGE topics such as COVID-19 lab leak theory. Perhaps the fix is as simple as strengthening the "academic consensus is always superior to other types of sources" wording in the various policies such as WP:NPOV, WP:RS, WP:FRINGE, etc. It is currently a bit weak, with only a sentence here and there (WP:SOURCETYPES, WP:BESTSOURCES). I say "academic consensus" instead of "academic sources" because we need to be careful not to elevate junk academic sources such as single studies (WP:SINGLESTUDY). What we're really interested in is review articles, textbooks, and policy statements from respected international organizations that summarize the academic consensus. WP:MEDRS does a great job of this for sciences. For humanities topics, we'd probably need to add to the list books by experts in the field. –Novem Linguae (talk) 03:40, 5 November 2025 (UTC)[reply]
The thing is, academic sources are just as much an opinion as anything else. And recent scholarly assessments of genocide have been sorely lackluster and confounded by confirmation bias, railroading, and argumentum ad populum, among a laundry list of other issues. If an alleged genocide is disputed by sources other than the alleged perpetrator and there is a large number of uncertain or hesitant opinions, it shouldn't be considered a genocide. The credibility and reliability of sources should also be assessed, especially when some claim it is a genocide before a month has even elapsed in a conflict, or sources claim that the genocide started on day one or two of a conflict. They simply have no credibility, especially when responding to a genocidal massacre is called a genocide.
It also would be worth looking at the largely undisputed genocides of history and seeing how they were decided as such, and when. Darfur genocide wasn't even written until a decade after it ended. Maybe we need a moratorium on deeming something a genocide for a period of years after the conflict has ended. That would not preclude coverage of those claiming something is a genocide, but that might be better written as "allegations" or "question". It also would be worth looking at why some wars are considered genocidal, but other wars that saw millions of deaths and disproportionality are not. Why is Anfal campaign not a genocide? Why were the bombings of Hiroshima and Nagasaki not genocides? Why is the bombing of Dresden not a genocide? ← Metallurgist (talk) 04:47, 13 November 2025 (UTC)[reply]
Why is Anfal campaign not a genocide? Why were the bombings of Hiroshima and Nagasaki not genocides? Why is the bombing of Dresden not a genocide? Those sound like questions for those articles' talk pages. Perhaps they have also had extensive discussions and RFCs on the topic, same as Talk:Gaza genocide. I imagine their talk page watchers went through a similar process and decided that the majority of their best sources did not call it a genocide. –Novem Linguae (talk) 09:22, 13 November 2025 (UTC)[reply]
Interested to hear what people think about this, I know there's been lots of discussion on reforming WP:ANI (like here and here) but I can't see that this has been suggested before from the archives. I think that when !voting on sanctions at ANI that are to be imposed by the consensus of the community, people who are involved in the underlying dispute should preface their !votes with something indicating that they're involved (like {{nacmt}}). This could be limited to the underlying dispute preceding escalation to ANI and historical disputes with that editor, or could be broadened to meet WP:INVOLVED (ie. disputes in the topic area). Reasoning is the same as at INVOLVED, involved [editors] may be, or appear to be, incapable of making objective decisions in disputes to which they have been a party or about which they have strong feelings; having some identifier makes it easier for newcomers to the report to analyse the discussion.
Imo the benefits of this are that it would encourage transparency and honesty, make it easier for newcomers to the report, and hopefully make ANI fairer and slightly more functional (at the very least make it appear fairer, more so to the reported editor). Whether closers ought to weigh involved !votes less or the same as uninvolved, idk. Downside is that it takes admin time to 'enforce' and could derail reports with people arguing back-and-forth about whether they're involved (maybe it could be written somewhere that this should be discussed on user talk pages instead). Thanks for reading Kowal2701 (talk) 19:05, 4 November 2025 (UTC)[reply]
First thought: I don't think we need a bunch of little discussion templates for this. I don't think we should make this "a rule". But I think that it would be an okay thing to model and to encourage, particularly in longer, more vote-like threads.
Second thought: It's sometimes difficult to decide whether you're involved or uninvolved. We see editors sometimes saying that they're "semi-involved", and there's the difficult case of an editor whose views are clear but who hasn't technically been involved in this specific dispute. For example, I had a userbox on my User: page for years that said I dislike comma splices. If Alice and Bob have a dispute at ANI about a comma splice, then am I "involved"? We might normally say that I'm not involved, but if someone's closing a contentious RFC, there is a preference for people who have never expressed an opinion on the subject, because we want people who are uninvolved. WhatamIdoing (talk) 21:41, 5 November 2025 (UTC)[reply]
As another unclear case, I was actively involved with an RFC about a topic, arguing strongly for one option, and have expressed similar views in related discussions, but did not participate in a second RFC about the same topic shortly afterwards (I wasn't aware of it). The closure of the second RFC was brought to ANI for review - am I involved or not? Thryduulf (talk) 21:50, 5 November 2025 (UTC)[reply]
Yeah it's probably better as a norm, that way people can do it at their own discretion, but idk how we'd encourage it without jotting something down in an essay/guideline like Wikipedia:ANI advice. WP:CBAN does say Discussions may be organized via a template to distinguish comments by involved and uninvolved editors, and to allow the subject editor to post a response, though I've never seen that done, maybe advice along these lines could be added there? Kowal2701 (talk) 17:03, 6 November 2025 (UTC)[reply]
Wikipedia:Banning policy#Community bans and restrictions says in part the community may impose a time-limited or indefinite topic ban, interaction ban, site ban, or other editing restriction(s) via a consensus of editors who are not involved in the underlying dispute (emphasis added). !votes on sanctions from involved editors shouldn't really be considered at all according to this. I agree with WhatamIdoing that disclosure probably shouldn't be "a rule" but if you notice it I wouldn't have a problem with mentioning it briefly, as we typically do with WP:SPAs. Ultimately though I think it's up to the closer to weigh consensus appropriately, especially in cases where an editor's involvement may be marginal. —Rutebega (talk) 21:55, 5 November 2025 (UTC)[reply]
This makes me think about the role of reputation. You don't really want editors to be posting "This editor has previously supported ____ in many other discussions", because that kind of comment promotes drama, and yet if I were closing a dispute, I'd probably take the person's reputation into account, if I happened to know it. WhatamIdoing (talk) 00:15, 6 November 2025 (UTC)[reply]
I'm surprised at that; I've seen very clearly involved editors even make CBAN proposals, and haven't gotten the impression their !votes aren't weighed. Yeah, little notes citing WP:INVOLVED for unambiguous cases would probably be okay and hopefully uncontroversial, and would help the closer out.
I was going to add to the OP that admin !votes should be weighed more heavily than those of non-admins. While we tend to stress consensus is based on quality of argument, I'm sure things like reputation and social capital contribute to weight. Like how WP:NHC says (bold mine) If the discussion shows that some people think one policy is controlling, and some another, the closer is expected to close by judging which view has the predominant number of responsible Wikipedians supporting it, not personally select which is the better policy. ("predominant number" seems to encourage vote-counting?) Kowal2701 (talk) 17:16, 6 November 2025 (UTC)[reply]
If there was no vote-counting of last resort, 50% more of our disputes would result in no consensus.
Otherwise, there are decisions that could never be made at all, because there is no written rule saying that Image A belongs in the infobox and Image B in the first section or vice versa, or that editors should prefer to merge or split content that could equally be one long article or three shorter ones. — User:WhatamIdoing 23:56, 11 June 2025 (UTC)
It's not really "their !votes aren't weighed" at all. It's more like "Of course this notorious WP:CPUSHer would say that, so I'll count that less (but not zero)". Consider a CTOP subject such as a geopolitical dispute. We know that some editors occasionally try to 'win' by getting editors who disagree with them kicked out of the community. If one of them turns up at ANI claiming that their opponent hates kittens, etc., then you need to take the context of their relationship and their POVs into account. WhatamIdoing (talk) 20:22, 10 November 2025 (UTC)[reply]
This is a great idea. Most are unaware that "involved" !votes should be ignored, and those "involved" very often influence others with their arguments - you can't simply ignore them. In the real world it is sometimes called jury tampering or vote tampering, to use legal terminology; maybe call it !tampering in the same way it is a !vote. -- GreenC 16:51, 8 November 2025 (UTC)[reply]
Please, let's not extend the programming logic negation (!) jargon to more words. It's already opaque to those unfamiliar with programming or the reason why it's being added in front of "vote". It doesn't provide any additional concision (almost every instance in this section, for example, could be replaced with "comment"). Additionally, putting a negation in front would, by analogy with "!vote", convey the meaning that "this is an opinion expressed in the form of tampering but is not actually tampering". Assuming for the sake of argument that tampering is an apt word, I don't think negating it helps. isaacl (talk) 17:41, 8 November 2025 (UTC)[reply]
The m:SUBREF syntax <ref name=foo details="bar" /> does not support templates, unlike the original <ref extends=foo>bar</ref> proposal. This makes it more difficult to have standard rendering of, e.g., quotations, in subreferences, much less proper metadata. There should be some way to use a template within a subreference. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 09:29, 8 November 2025 (UTC)[reply]
^Double-checking to make sure: Cite error: Unknown parameter "details" in <ref> tag; supported parameters are dir, follow, group, name (see the help page).
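To make the limitation concrete, here is a minimal sketch contrasting the two syntaxes quoted above. The Smith2020 name and the {{harvnb}} call are placeholders, a full <ref name=Smith2020>...</ref> citation is assumed to be defined elsewhere, and the details= form is the m:SUBREF syntax, which (as the test above shows) is not yet recognised on this wiki:

```wikitext
<!-- Original extends proposal: the subreference body is ordinary wikitext,
     so a citation template and a quotation can be rendered normally -->
<ref extends=Smith2020>{{harvnb|Smith|2020|p=45}} Quoted passage supporting this sentence.</ref>

<!-- m:SUBREF details= syntax: the subreference content is an attribute value,
     so a template placed there is not expanded, per the limitation described above -->
<ref name=Smith2020 details="{{harvnb|Smith|2020|p=45}} Quoted passage supporting this sentence." />
```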
At WP:TOOBIG, it states that at over 9,000 words an article "Probably should be divided or trimmed, though the scope of a topic can sometimes justify the added reading material." Many editors argue that the scope of their article justifies this "added reading material". Recent examples include Washington Monument and Wilmington massacre, although more examples can be produced if requested.
Long articles are discouraging for the average reader, who is more likely to be interested in an overview of the topic than in specific details. They also make it hard for the reader to navigate the article, especially on mobile, and to find the most important information on a topic. Long articles can usually be spun out, have information summarised more effectively, or have excess detail removed. Articles with large scopes such as Philosophy, Earth, and Bacteria are able to keep their word count below 8,000.
I would like to change the phrasing of this sentence so that the guideline makes it clearer that articles under 9,000 words are encouraged and that very few articles can justify an extended length. One phrasing I thought of was, "> 9,000 words: Probably should be divided or trimmed. While the scope of a topic can sometimes justify the added reading material, these are rare exceptions." I am open to alternate wording, and hope to bring this as a formal proposal in a couple of days or weeks. Thoughts? Feel free to ping me. Z1720 (talk) 15:29, 8 November 2025 (UTC)[reply]
these are rare exceptions - How rare is it? It would be interesting to see stats for those categories (6k, 8k, 9k, 15k). -- GreenC 16:38, 8 November 2025 (UTC)[reply]
wikiget -w "Washington Monument" -p | wc -w = 10,899 words .. this is plain text no markup (calls API:TextExtracts). Only need run this command on 7 million articles, sort the results into categories by size. In all the years, nobody has yet made a report of largest articles by word count? -- GreenC18:25, 8 November 2025 (UTC)[reply]
Eh, I would oppose; I think this guideline is already too restrictive and I don't feel the need to make it stricter. "Long articles are discouraging for the average reader, who is more likely to be interested in an overview of the topic than in specific details" - I don't agree with that. Spinning out can be impossible because sometimes there is no notable subtopic to spin out. There are quite a few topics where >9,000 words is justified. PARAKANYAA (talk) 19:38, 8 November 2025 (UTC)[reply]
Yeah, I agree – I know this isn't the way we envision our articles being read, but I think most people read the first paragraph/lead and then, if they need more, use the TOC to navigate to the section they're interested in. The size of an article is much more relevant when it comes to loading articles, which actually doesn't have all that much to do with word count. As an example, my laptop would rather die than edit United States at the 2024 Summer Olympics ever again, but that's at only 4k words. It's fine with the 9-10k word biographies. GreenLipstickLesbian💌🦋 20:13, 8 November 2025 (UTC)[reply]
@GreenLipstickLesbian: I agree with your description of how readers read the articles, but come to a different conclusion: larger articles will produce larger TOCs, making navigation harder for our readers. By shortening the articles, we help our readers find the information that they are looking for. Spinning out articles helps readers go down the wiki-rabbit hole and makes them more likely to read the lead of the new article. Larger articles are harder to load on slow connections, less appealing to readers, and less likely to have their editors' work actually read. Reducing word count will also help with other loading issues: a smaller article has less space for images and is less likely to need templates to help explain prose. Z1720 (talk) 02:29, 9 November 2025 (UTC)[reply]
Yeah, definitely different conclusions. I find that I'm more likely to read something if everything is on the same page; going down wiki-rabbit holes (while something I very much enjoy doing!) is frustrating to me when I'm looking for a specific piece of information. It's often why I !vote "merge" at AfD. Even if everything was integrated very well across multiple articles, I think we'd still have the same issues wrt the length of the TOC - but I'd lose my ability to CTRL+F. Similarly, spin-offs tend to have fewer viewers & fewer page watchers, meaning serious issues are much more likely to slip under the radar. Tobacco smoke, for example, is an arguably standalone topic - it got deleted through WP:CP/N, but for 10 years we had a very highly viewed article where an editor used some very old papers to propose a somewhat... outdated view on tobacco smoke, cancer, and cigarette companies - the type of view that would have been removed instantly if you'd added it to the main cigarette, tobacco, or tobacco smoking articles.
While it's true that on first glance a longer article has more room for reasonable tables, images, etc., I wonder how true that actually is? Many articles, even the smaller ones, lend themselves to a lot of images and very little text. Similarly, and maybe this isn't so true at the FAC/GA level, but many editors I've encountered seem to have no issue adding massive sections of images to tiny articles. And, again, some of the worst offenders I've come across in terms of page load times have been articles like the Olympics one I linked above - over 300,000 bytes for just 4k words and 4 images. I'd rather have a maximum byte-size rule than a strict wordcount; something that would apply to all articles equally. I think we could trust editors to determine how to split that between prose and non-prose content. GreenLipstickLesbian💌🦋 03:20, 9 November 2025 (UTC)[reply]
In re sometimes there is no notable subtopic to spin out: Maybe Wikipedia:Notability needs to have an explicit sentence authorizing the article about a subtopic that is the result of an article split. The point would be to stop all the wrangling, not to make unwanted or inappropriate content exempt from deletion or re-merging. I think this would be particularly helpful for splits resulting in a list, because WP:LISTN is a section that editors have wildly different interpretations of. WhatamIdoing (talk) 19:39, 10 November 2025 (UTC)[reply]
A: This article is much, much too big! Let's WP:SPLIT it.
B: Good idea. The last section is just a list, so how about we split that off into a List of X?
C: Sounds good. I'll do that now.
D: Mwa ha ha, I'm taking the List of X off to AFD, because nobody already cited SIGCOV IRS in the List of X to prove that organizing information about X in the form of a List of X is separately notable from X itself! Wikipedia should not have such unimportant unencyclopedic unwanted content, no matter what the Wikipedia:Editing policy says to the contrary!
Maybe this could be in WP:PAGEDECIDE - sometimes it is worth creating an article that is not necessarily independently notable but for size considerations is worth splitting off. Independent notability is an interesting consideration here that people bring up a lot but isn't really covered by the guidelines. Katzrockso (talk) 19:46, 11 November 2025 (UTC)[reply]
That would probably work.
Yes, we've never really written this down as a rule, but the existence of thousands of discography articles, few of which have sources talking about the albums both as a group and separate from the band/musician that made them, pretty much proves that we actually do this in practice. WhatamIdoing (talk) 20:15, 11 November 2025 (UTC)[reply]
A better change may be to provide more relevant advice. "divided or trimmed" doesn't really work, division gives the impression of cutting in half or thirds etc. which is not really what we look for at all. Trimming is fine on the margins but doesn't do more than that. What we really want is for subtopics to be spun out, we want an article to be summarised, rather than divided. CMD (talk) 03:13, 9 November 2025 (UTC)[reply]
@Chipmunkdavis: Your proposal would probably require rewording the whole table, which I am open to considering. Here's how that might be phrased:
Some useful guidelines for article length:
Readable prose – What to do
> 15,000 words – Almost certainly should be reduced in size. Sections that are notable should be spun out, overly detailed information removed, and information summarised more effectively.
> 9,000 words – Almost certainly too large. Although the scope of a topic can sometimes justify the added text, these are rare exceptions. Consider summarising text more effectively and removing overly detailed or less important information.
> 8,000 words – Might be too large: the likelihood goes up with size. Consider summarising the text more effectively to reduce the word count before adding new text.
< 6,000 words – The article is within the target length. Summarising information more effectively will help with readability, although this might not be a high priority.
< 150 words – The article is probably missing key aspects of the topic. Consider expanding the text to add important details that will allow the reader to better understand the topic.
Overly long articles are a problem, but I am concerned about arbitrary limits applied equally to 7 million articles. It works better with soft wording and the option for each article to reach its own local consensus. The guideline is a starting point for discussion, not a hammer looking for nails. -- GreenC 06:36, 9 November 2025 (UTC)[reply]
@GreenC: Several editors state that an article's scope justifies a longer length when the article has unnecessary detail or phrases that could be summarised more effectively. This change is trying to encourage editors to copyedit the article first before making that statement about the scope. Z1720 (talk) 15:18, 9 November 2025 (UTC)[reply]
unnecessary detail .. what goes into an article can be controversial. Is it unnecessary? It should follow the wiki method of editing. BTW, I edited Washington Monument a few months ago, reducing detail in one section by completely rewriting it. Nobody objected because it was a clear improvement. There was no need for a guideline about word count. I don't want this guideline to force changes that are controversial; that's a recipe for dispute. Thus the wording could be softer, not hard-line ("certainly too large"). -- GreenC 17:12, 9 November 2025 (UTC)[reply]
I'm not certain that the change will have the intended effect. The heart of the matter is what, if any, circumstances justify going above 9000 words. A considerable percentage of articles could contain more than that much material. To minimize argumentation over whether a specific article should constitute an exception I'd think we need to lay out potential rationales as well. Vanamonde93 (talk) 18:32, 11 November 2025 (UTC)[reply]
Only about 1 in 500 articles has between 9K and 15K words. Here are some that are in that range, depending on which tool you use for counting:
Many more could be made that long while still making reasonable editorial decisions. That Earth isn't 20,000 words long is because editors were able to make it comprehensive with less than 9000 words. And if we can do that with one of our most important articles, what warrants exceeding that length? If we simply say exceptions are rare, any editor can argue that their pet article warrants the length, particularly if local consensus supports it. At the same time, there's clearly some recognition that longer articles are sometimes justified, otherwise we would propose a hard cutoff. So when are they justified? Vanamonde93 (talk) 20:57, 11 November 2025 (UTC)[reply]
@Vanamonde93: I can't find a discussion where this is clearly defined. I have my own thoughts but not sure if this is the appropriate place to have the discussion. I might consider Scotland in the early modern period as an article that can justify its length (although I might do a copyedit to see if the word count could be reduced). If others are interested, I can explain why somewhere. Z1720 (talk) 21:23, 11 November 2025 (UTC)[reply]
@Z1720: I'm not sure that we have defined it with intention, though related ideas are implicit in many parts of the MOS. I have my own ideas of course, and I'm happy to chat here or elsewhere. I don't oppose your proposal, I just don't see it as going far enough to meaningfully move the needle. Vanamonde93 (talk) 22:15, 11 November 2025 (UTC)[reply]
@Schazjmd: I think that's a good thing. It means more detailed information is available in other articles, and Earth can remain a broad introduction. Editors who want to read more information can go to the article that interests them. Z1720 (talk) 23:26, 11 November 2025 (UTC)[reply]
When reading articles about geographic locations in desktop mode, I am slightly annoyed if the coordinates are not available in a convenient and predictable spot near the article title. This forces me to hunt for the coordinates in the infobox or article body. It also means that the article will not be correctly geotagged.
Conversely, when browsing on mobile, coordinates added using |display=title alone aren't visible at all. For some examples of articles with this issue, see Islandmagee, Ostia (Rome), and Matthias Church.
To avoid both of these problems, I would tentatively propose that |display=inline,title should be preferred in most* articles about settlements or geographic features. It seems that it would be possible to use a bot or semi-automated script to enforce this rule.
Perhaps my proposal is already the accepted approach and the articles above have just unintentionally deviated from it, but I'm not sure. MOS:COORDS doesn't really seem to address this issue and I couldn't find any other relevant guideline. This issue has probably been discussed before; links to past threads would be appreciated.
* There are obviously cases where |display=inline is appropriate. For example, the article Extreme points of the United Kingdom discusses several different points and it would be wrong to geotag the entire topic to any specific one. There are likely other edge cases I haven't thought of. I'm only referring to how to format the "main coordinates" in articles about uniquely identifiable locations: villages, mountains, buildings, etc. ~2025-32085-07 (talk) 23:36, 9 November 2025 (UTC)[reply]
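For concreteness, the difference is only the value of the |display= parameter of the {{Coord}} template; the coordinates below are placeholders (roughly the area of one of the examples above):

```wikitext
<!-- display=title only: geotags the page, but (per the above) is not visible at all in the mobile view -->
{{Coord|54.84|-5.70|type:city|display=title}}

<!-- display=inline,title: also shown where the template is placed (for example in the infobox),
     so the coordinates remain visible on mobile while the page is still geotagged -->
{{Coord|54.84|-5.70|type:city|display=inline,title}}
```

In many settlement articles the inline half is supplied through the infobox's |coordinates= parameter rather than placed directly in the body text, which is one place a semi-automated check could look.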
Hello. In my opinion, the title is a goofy spot for coords and we should list them only in the infobox alongside all the related metadata about a place. It's a weird historical artifact and anachronism that the coords get such special placement and their special page placement has been a constant headache for years with different views and different skins, as you note. Is there a reason coords are so special that they can't be put in the infobox? The coords seem as relevant to Pittsburgh as its population. --MZMcBride (talk) 20:47, 10 November 2025 (UTC)[reply]
Coordinates are still somewhat “special” in that they link to an external tool. However, I personally don’t think that’s reason enough to separate them. novov talk edits 00:02, 12 November 2025 (UTC)[reply]
They don't require this; we make a choice (we can also show them with the built-in maps), but it's difficult to change something that has been around for as long as this. They are mostly special in that they have to directly relate to the primary topic of the page, and the page has to describe a specific spot that is not too large or otherwise vague. —TheDJ (talk • contribs) 11:33, 13 November 2025 (UTC)[reply]
Recently I have learned that there is an International Mentoring Day on 17 January. The UK and the US also have national commemorations to celebrate mentoring and thank mentors of all sorts (i.e. in corporate mentoring programmes; adult-led youth groups; and teaching). In the UK, this is 27 October; in the US, the entire month of January.
With this in mind, I would like to propose that Wikipedia:
Start an annual commemoration on January 17 of this coming year with notification about the day somewhat in advance, and encouragement to all editors to take a few minutes to thank their mentors whether current or past, as well as those who offer guidance as Teahouse, Help Desk, and Village Pump staff;
Share stories about how mentoring helped; and
Offer "Did You Know?" tidbits around and on January 17 about how the commemorations came about in the UK and the US.
As we are a little over 9 weeks away from January 17, there would be adequate time to plan for its commemoration on Wikipedia if the decision is taken to carry this idea forward. ~2025-33078-41 (talk) 17:52, 12 November 2025 (UTC)[reply]
Hello all. I've been working on a bit of a proposal with some admins, which I've included below.
While the viewdeleted bundle of three userrights (browsearchive, deletedhistory, and deletedtext) is currently only accessible to administrators, administrators are not necessarily the only group whose workload would benefit from having access. For example, those working in copyright, edit filters, SPI, and many other areas dealing with content likely to be deleted due to disruption or other reasons would benefit immensely from having direct access to deleted revisions. That also includes a swath of people who simply do not wish to be an admin, for whatever reason, but would benefit from this in anti-abuse workflows. I propose that a process be established to grant some viewing permissions to those qualified to view deleted revisions but not necessarily needing the full admin toolkit. I'm aware this is unbundling, though I believe it avoids the problems of the perennial unbundling proposals by not touching the delete, block, or protect tools at all, and instead focusing on the bundle's intended purpose.
Thus I propose that a History Viewer group be added, with the following permissions:
Search deleted pages (browsearchive)
View deleted history entries, without their associated text (deletedhistory)
View deleted text and changes between deleted revisions (deletedtext)
View log entries of edit filters marked as private (abusefilter-log-private)
The group would be grantable and revocable by admins, and the process for requesting the permission would be to post on a dedicated PERM page, with a request that remains open for a period of at least one week. The discussion must be advertised at AN, VPR, and BN. If the administrator closing the request finds that there is consensus to grant, they will add the permission to the requesting user. Editors applying should have a minimum of 2,500 edits and at least 6 months' tenure.
See Wikipedia_talk:Requests_for_adminship/Archive_269#WMF_reply_about_userrights, particularly the response from Joe there. I think the general consensus is that the issue is trust. An RfA process with community votes implicitly proves that the user has this trust from the community. While the risk of deleted content containing extremely private information is low, it is not zero, and as such we'd not be comfortable allowing users access to this without first proving they have the trust of the community. I believe this process would be adequate to ensure trust of the community. EggRoll97(talk) 23:31, 12 November 2025 (UTC)[reply]
If you want to view deleted content then you need to either pass RFA, pass an equivalent process (e.g. an admin election) or be granted the permission by arbcom. So a request for this new right would require the support of a majority of those commenting and at least 25 supporters. I don't see the benefit in creating a new process when we already have RFA and AELECT. Thryduulf (talk) 23:30, 12 November 2025 (UTC)[reply]
Over the last few years, we have made relatively large strides in making adminship more accessible to more members of the community. I suspect that many of the people who could pass an RfA-like process which would be required to gain access to a permission like this could just go straight for RfA or AELECT and get the full toolset anyway. We want to encourage that too: I fear a permission like this could negatively affect admin recruitment if people feel like they need to go through this intermediate hoop first. Mz7 (talk) 23:48, 12 November 2025 (UTC)[reply]
I disagree; your argument could be applied to any user right, because an admin has it. Most admin candidates have some form of advanced permissions anyway. Tenshi! (Talk page) 16:18, 15 November 2025 (UTC)[reply]
This permission is a core sensitive spot for why adminship is turning into a big deal. A while back, I tried to unbundle everything except this userright to make a patroller permission - IIRC the primary objection was that it wasn't technically possible. Tazerdadog (talk) 23:58, 12 November 2025 (UTC)[reply]
... and that's a bug, right? I didn't know this was a thing. I would be surprised if that were intentional. Otherwise why not write a user script to make deletedhistory trivially available to everyone? Mz7 (talk) 13:45, 13 November 2025 (UTC)[reply]
No, it's not a bug. This goes back to 2019, bringing parity with access available in Toolforge since 2013. And as I noted above, you need deletedhistory to see comments (edit summaries) of deleted revisions and to see revision-deleted usernames and comments. Anomie⚔20:26, 13 November 2025 (UTC)[reply]
I wasn't particularly sure where to put advertisement requirements, since it would need to be widely advertised to satisfy the WMF. I guess maybe a watchlist notice would suffice, similarly to RfA? EggRoll97(talk) 06:10, 14 November 2025 (UTC)[reply]
It seems to me that the whole Wikipedia:Vital articles concept, while probably useful in the early days of enwiki, has long since outlived its usefulness and is now just a timesink for a small group of editors, but without any actual current positive impact on the encyclopedia. It doesn't matter one bit whether an article is a Vital Article level 4 or 5 or not at all, readers and editors have their own priorities and don't need to be spoonfed which articles supposedly matter the most, as decided by at best the votes of a few people. Before starting a formal proposal / RfC, I would like to get some input on how others feel about this. I'll also inform the VA talk page of course. Fram (talk) 10:43, 14 November 2025 (UTC)[reply]
I haven't looked into this project before, but from a cursory look all I see is the elevation of topics to "vital" or removal thereof on a basis that seems to reproduce Eurocentric bias in Wikipedia. Katzrockso (talk) 11:01, 14 November 2025 (UTC)[reply]
Broadly agree with your comments - it is currently a timesink with few benefits for the wider project. The way this could be beneficial is if it drove forward improvements to the articles which have been identified as vital, but I don't see any of that happening — Martin (MSGJ · talk) 11:31, 14 November 2025 (UTC)[reply]
I guess the question is how much time Vital articles actually takes up. Like with WikiProject assessments, there's editor-facing value in knowing (roughly) what level of quality articles are at and (roughly) how important they are. Certainly after Level-4 vital it's a random grab bag of kinda-important stuff that's pretty squishy, and I could see the argument that it's so diffuse it's of limited utility at that level, but from browsing the talk page it seems it's not a world of edit conflicts and disputes that requires mothballing. Der Wohltemperierte Fuchs talk 12:36, 14 November 2025 (UTC)[reply]
The upper levels are used by the Core Contest, but I am unaware of other uses. I believe VA was originally linked to WikiProject importance ratings, which are not really used much either. CMD (talk) 12:38, 14 November 2025 (UTC)[reply]
Thanks. I guess VA could easily be replaced by "articles which are high or top importance for at least one project" or something similar, would be equally valid or invalid as a selection criterion. Fram (talk) 13:39, 14 November 2025 (UTC)[reply]
Agree with the proposal. I do not see how labelling an article as "Vital" is a net-benefit to the project right now. The label is applied on the article talk page (a place few readers even know about) and the amount of time discussing an article's vital status could be better spent improving the encyclopedia. Z1720 (talk) 16:48, 14 November 2025 (UTC)[reply]
I opened a proposal about creating a top icon for levels 1 and 2. The editors within the project seem to more or less support it, but there is apprehension that the broader community would oppose it. It feels like there are attempts to quarantine the project from being used in applications. The list itself is pretty fascinating from a purely scientific standpoint when you start looking at the trends and broader statistics. I would rather see attempts to brainstorm uses of and improvements to the resource than closing it. GeogSage (⚔Chat?⚔) 21:10, 14 November 2025 (UTC)[reply]
I would support such a proposal. There are so many layers of subjectivity in deciding whether a topic is "important enough". If no consensus arises for full deprecation, I would support deprecating levels 4 and 5 at least. Ca talk to me! 17:35, 14 November 2025 (UTC)[reply]
I have the opposite view: levels one and two are filler for the more important lower levels of three to five. Wikipedia is a project too big to have just one hundred subjects be the "most important". It only starts to make sense at a larger sample size of 1000+. Plus the vital articles contest doesn't use those higher levels IIRC. -1ctinus📝🗨 01:30, 15 November 2025 (UTC)[reply]
The list has been useful for me also in the maintenance of other language versions and other wikiprojects, such as Commons. --Thi (talk) 17:51, 14 November 2025 (UTC)[reply]
Volunteer editors are generally free to spend time on whatever initiatives they like as long as it doesn't have an undue effect or burden on those not participating. In my view, this initiative hasn't met that threshold yet. An occasional interpersonal conflict arises with all collaborations. And although I think my reasoning against increasing the prominence of vital articles is compelling, I appreciate that there is a non-negligible number of people who disagree with me, so it's not unreasonable for a proposal to be made from time to time. isaacl (talk) 18:41, 14 November 2025 (UTC)[reply]
The Vital articles represent a very interesting qualitative dataset. I've been researching them for a while, specifically getting data for the articles in the project based on page statistics and creating a "vital index" (read here, still in early analysis though). There are some interesting trends that emerge from a purely quantitative perspective. There are some major issues with western bias, and major issues brought about by lack of participation, but the dataset remains a unique resource for understanding, if nothing else, the priorities of the Wikipedia:Wikipedians participating. Levels 1 and 2 are okay, level 3 needs some work, and levels 4 and 5 are in flux; they mostly serve as a filter for the higher levels though. Attempts to make more use of the dataset/project have generally not gone over well. For example, the discussions titled How can we increase visibility of this page to readers? and Add topicons to levels 1 and 2 vital articles were met with general acceptance from active project members but apprehension about the wider community. I had proposed merging it or partnering with Wikipedia:Articles for improvement, but got little feedback from anyone on the two projects. On article talk pages, multiple projects rank an article's "priority", but this is usually done by one editor and never looked at again in my experience. The Vital Articles at least have a system where people cast votes before adding an article. I think we should brainstorm how to use the resource of the Vital Articles before tossing it. It's a fairly unique dataset. I am adding a table below that shows article statistics by level; there are some clear quantitative trends that emerge showing it isn't complete poppycock.
Table showing the average value for each variable by level:
I agree we should just get rid of it; it has no practical use (or at least none comparable to how much community time it sucks up). Also look at this research done by 1ctinus. To say it's Eurocentric is a huge understatement. Kowal2701 (talk) 01:09, 15 November 2025 (UTC)[reply]
Makes me sad to see my work for a project I care deeply about and spent countless hours researching potentially go the way of the dodo. I agree that the state of the vital articles is flawed (and biased), but I don't see the purpose in archiving it. Most of the bias comes from a lack of diversity in authorship, NOT the methodology. Most people editing come from the US or CANZUK; we naturally know more about our home countries than about other countries.
In my opinion, it just needs better marketing somehow. I don't know what that would look like. This list is meant for editors, not readers, so it can't be mentioned in the main space.
I'd hate to see it go, mostly because I just think a list of 50,000 important things is interesting to read about and contribute to. -1ctinus📝🗨 01:25, 15 November 2025 (UTC)[reply]
I think Vital levels 1 and 2 are pretty useless. Their existence is mostly semantic. I don't see how having a list of the 100 most important articles benefits the encyclopedia—it's too narrow. -1ctinus📝🗨 01:27, 15 November 2025 (UTC)[reply]
Ideally, in my opinion, we should be using levels 1 and 2 to focus on the criteria "Essential to Wikipedia's other articles" and "Coverage." Essentially, at level 5 we have 11 categories: People, History, Geography, Arts, Everyday life, Philosophy and religion, Society and social sciences, Biology and health sciences, Physical sciences, Technology, and Mathematics. These are subdivided further into subcategories. Ideally, level 1 should have the parent article(s) for most of these 11 categories, and level 2 the subcategories within them, with the list becoming more general as it approaches level 5. The project sort of does this, but oftentimes popular articles are elevated above the broader category. GeogSage (⚔Chat?⚔) 01:36, 15 November 2025 (UTC)[reply]
Again, personally I don't have an issue with volunteers spending time on whatever they like, as long as it doesn't affect anyone else or impose a burden on others. That being said, my personal opinion is that the rigid numerical limits aren't a good fit for a scenario where there is no inherent reason for a fixed limit. It makes sense for physical media, where there are practical limits so a cutoff has to be made somewhere. On the web, though, there isn't a compelling reason to have a hard cutoff of 100, versus a more flexible threshold. Note, though, that a lot of the interest in such lists is in the debate itself regarding the selection of topics, rather than the end list. I'd be more interested in figuring out ways to capture different approaches for weighing and evaluating the relative importance of articles. In spite of "top ten X" lists typically being web click bait, something like that might be a better way to give readers and editors different ways of looking at articles that could bring some less-well known ones to the forefront. isaacl (talk) 02:04, 15 November 2025 (UTC)[reply]
Our proposal in a nutshell: Temporary accounts offer improved privacy for users editing without an account and improved ways to communicate with them. They have been successfully rolled out on 1046 wikis, including most large Wikipedias. English Wikipedia has defined the criteria for the Temporary Accounts IP Viewer (TAIV) right and granted it to 100+ users. We plan to launch temporary accounts on enwiki on November 4th (the earlier dates of October 7th and October 21st were postponed). If you know of any tools, bots, gadgets, etc. using data about IP addresses or being available for logged-out users, please help test that they work as expected and/or help update these.
Hello, from the Product Safety and Integrity team! We would like to continue the discussions about launching temporary accounts on English Wikipedia. Temporary accounts are relevant to logged-out editors, whom this feature is designed to protect, but they are also very relevant to the community. Anyone who reverts edits, blocks users, or otherwise interacts with logged-out editors as part of keeping the wikis safe and accurate will feel the impact of this change.
Temporary accounts have been successfully deployed on almost all wikis now (1046 to be precise!), including most large Wikipedias. In collaboration with stewards and other users with extended rights, we have been able to address a lot of use cases to make sure that community members experience minimal disruption to their workflows. We have built a host of supporting features like IP Info, Autoreveal, IPContributions, Global Contributions, User Info etc. to ensure adequate support.
With the above information in mind, we think everything is in good shape for deploying temporary accounts to English Wikipedia in about a month, preferably October 7th [update: on November 4]. We see that your community has decided on the threshold for non-admins to access temporary accounts IP addresses, and there are currently over 100 non-admin temporary account IP viewers (TAIVs).
The wikis should be safe to edit for all editors irrespective of whether they are logged in or not. Temporary accounts allow people to continue editing the wikis without creating an account, while avoiding publicly tying their edits to their IP address. We believe this is in the best interest of logged-out editors, who make valuable contributions to the wikis and who may later create accounts and grow the community of editors, admins, and other roles. Even though the wikis do warn logged-out editors that their IP address will be associated with their edit, many people may not understand what an IP address is, or that it could be used to connect them to other information about them in ways they might not expect.
Additionally, our moderation software and tools rely too heavily on network origin (IP addresses) to identify users and patterns of activity, especially as IP addresses themselves are becoming less stable as identifiers. Temporary accounts allow for more precise interactions with logged-out editors, including more precise blocks, and can help limit how often we unintentionally end up blocking good-faith users who use the same IP addresses as bad-faith users. Another benefit of temporary accounts is the ability to talk to these logged out editors even if their IP address changes. They will be able to receive notifications such as mentions.
How do temporary accounts work?
When a logged-out user completes an edit or a logged action for the first time, a cookie will be set in this user's browser and a temporary account tied with this cookie will be automatically created for them. This account's name will follow the pattern: ~2025-12345-67 (a tilde, year of creation, a number split into units of 5). All subsequent actions by the temporary account user will be attributed to this username. The cookie will expire 90 days after its creation. As long as it exists, all edits made from this device will be attributed to this temporary account. It will be the same account even if the IP address changes, unless the user clears their cookies or uses a different device or web browser. A record of the IP address used at the time of each edit will be stored for 90 days after the edit. Users with Temporary Accounts IP viewer right (TAIV) will be able to see the underlying IP addresses.
This increases privacy: currently, if you do not use a registered account to edit, then everybody can see the IP address for the edits you made, even after 90 days. That will no longer be possible on this wiki.
If you use a temporary account to edit from different locations within a 90-day period (for example at home and at a coffee shop), the edit history and the IP addresses for all those locations will now be recorded together, for the same temporary account. Users who meet the relevant requirements will be able to view this data. If this creates any personal security concerns for you, please contact talktohumanrights@wikimedia.org for advice.
For community members interacting with logged-out editors
A temporary account is uniquely linked to a device. In comparison, an IP address can be shared with different devices and people (for example, different people at school or at work might have the same IP address).
Compared to the current situation, it will be safer to assume that a temporary user's talk page belongs to only one person, and messages left there will be read by them. As you can see in the screenshot, temporary account users will receive notifications. It will also be possible to thank them for their edits, ping them in discussions, and invite them to get more involved in the community.
We have recently released the User Info card feature on all wikis. It displays data related to a user account when you tap or click on the "user avatar" icon button next to a username. We want it to help community members get information about other users. The feature also works with temporary accounts. It's possible to enable it in Global Preferences. Look for the heading "Advanced options".
For users who use IP address data to moderate and maintain the wiki
For patrollers who track persistent abusers, investigate violations of policies, etc.: Users who meet the requirements will be able to reveal temporary users' IP addresses and all contributions made by temporary accounts from a specific IP address or range (Special:IPContributions). They will also have access to useful information about the IP addresses thanks to the IP Info feature. Many other pieces of software have been built or adjusted to work with temporary accounts, including AbuseFilter, global blocks, Global Contributions, User Info, and more.
For admins blocking logged-out editors:
It will be possible to block many abusers by just blocking their temporary accounts. A blocked person won't be able to create new temporary accounts quickly if the admin selects the autoblock option.
It will still be possible to block an IP address or IP range.
Temporary accounts will not be retroactively applied to contributions made before the deployment. On Special:Contributions, you will be able to see existing IP user contributions, but not new contributions made by temporary accounts on that IP address. Instead, you should use Special:IPContributions for this.
See our page Access to IP for more information about the related policies, features, and recommended practices.
If you know of any tools, bots, gadgets etc. using data about IP addresses or being available for logged-out users, you may want to test if they work on testwiki or test2wiki. If you are a volunteer developer, read the documentation for developers, and in particular, the section on how your code might need to be updated. If you know of tools, bots or gadgets that have not yet been updated and you don’t know of anyone who can update these, please reach out to us.
If you want to test the temporary account experience, for example just to check what it feels like, go to testwiki or test2wiki and edit without logging in.
Tell us if you know of any difficulties that need to be addressed. We will try to help, and if we are not able, we will consider the available options.
To learn more about the project, check out our FAQ – you will find many useful answers there. You may also look at the updates and subscribe to our new newsletter. If you'd like to talk to us off-wiki, you will find me on Discord and Telegram.
We would like to thank stewards, checkusers, global sysops, technical community members, enwiki functionaries and everybody else who has contributed their time and effort to this project. Thank you for helping us get here. NKohli (WMF) and SGrabarczuk (WMF) (talk) 11:38, 11 September 2025 (UTC)[reply]
It's still not clear to me what we would be allowed to discuss publicly.
Temp account X seems the same as Temp account Y
Temp account X seems the same as older IP editor Y
We should rangeblock IP addresses X to stop temp accounts A, B and C
Temp account X is a school account for school X / a government account for department Y / ...
...
Should all these only be had "behind closed doors" somewhere, or are these allowed in the same circumstances as we would discuss them now (SPI, ANI, ...)? Fram (talk) 11:53, 11 September 2025 (UTC)[reply]
Thanks @Fram. First we wanted to emphasize, to make it clear to everybody around, that temp accounts are just a different paradigm; they don't match 1:1 with IPs, and in some cases it doesn't make sense, or there's no need, to link them with IPs 1:1.
These restrictions only apply when you (1) use data from the IP reveal tool to make the link and (2) discuss publicly. All of the above can be discussed in a private venue where only TAIV users can see the information. Also if the link is only behavioral, then any user, including those who have TAIV, can make the link publicly. But if you do have TAIV and talk publicly, there may be an implication that you used the tool to make the link. CUs often get around this by declining to comment about IPs if they have run CU on a user, so they can avoid the implication that they linked the IP and user together using CU data.
Now to your questions:
This is OK if necessary for anti-abuse purposes, and you can even say "Temporary account X is using the same IP address as temporary account Y" as long as you don't mention the specific IP.
Not publicly, unless the link is made purely through behavioral evidence (i.e. edits).
Not publicly. You can, however, say "Please block the common IP ranges used by temporary accounts A, B, and C" publicly where the admin could use IP reveal to find which range you were talking about. Another option for non-admin TAIVs is to say "Please block this IP due to abuse from temp accounts" (without naming the accounts).
If you are using access to IP addresses to get this information, then probably not okay. If using edits, then okay.
Finally, a very important note just for context: on other projects, including large Wikipedias, we have seen a significant decline in IP blocks, indicating that temporary account blocks are often effective remedies for one-off abuse. Even if we agree that English Wikipedia is unique and whatnot, there is a pattern and hopefully discussions about blocking IPs won't be that frequent (phab:T395134#11120266).
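To illustrate the kind of public report that would still be possible under these rules, here is a minimal sketch; the temporary account names and the target page are placeholders, and the wording simply follows the guidance above about not naming specific IPs or ranges:

```wikitext
== Disruption from a group of temporary accounts ==
[[User:~2025-12345-67]], [[User:~2025-12345-68]] and [[User:~2025-12345-69]] are making the same
kind of disruptive edit to [[Example]]. They appear to share an IP range; per the guidance above
I am not naming it here, but an admin can confirm it via IP reveal / [[Special:IPContributions]]
and apply a range block if warranted. ~~~~
```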
Thanks. So access to IP addresses is treated as CU access, basically? That seems like a severe step backwards in dealing with vandalism, sockpuppetry, LTAs, ... It cuts both ways of course: we now also exonerate people with things like "the IPs used by that vandal all located in country X, but this new IP comes from country Y, making it unlikely to be a sock". This happens in standard ANI discussions and the like, not requiring any CU access, but will no longer be possible for most editors.
Your "Finally, a very important note just for context: on other projects, including large Wikipedias, we have seen a significant decline in IP blocks, indicating that temporary account blocks are often effective" seems like a non-existent advantage. We had many "single" IP blocks, these will be changed to "single" TA blocksn this is not an advantage or disadvantage of TAs. The issues are rarely with the simple straightforward cases.
A very simple example: when I look at the revision history of [46] I immediately see that the last three IP edits are made by the same person, using two IP addresses. If we are lucky, in the future this would be one temp account. If we aren't lucky, then these would be two completely unrelated temp accounts.
Or take this edit history for a school. Since March, I see different IPs in the 120.22 range; it seems likely that this is either the school or the village or city, so no socking – unless these 4 were all from an IP provider in, say, France, in which case it's much more likely to be the same person in each case. From now on, there are no more means to raise such issues or notice them if you are not one of the few (and if you are, you can't raise it publicly).
Or to make it more concrete still: we have this current ANI discussion where a non-admin raises an issue related to completely disparate IP addresses: "a certain editor who has been editing over several months from various IPs, all geolocating first to South Korea, then more recently to Japan." If said editor disables or removes cookies, there is no way that most of our editors would be able to adequately see or raise such issues; they would just have to say "there is a range of temp accounts, no idea if there is any connection between them".
@Fram With respect to Your "Finally, a very important note just for context: on other projects, including large Wikipedias, we have seen a significant decline in IP blocks, indicating that temporary account blocks are often effective" seems like a non-existent advantage. We had many "single" IP blocks, these will be changed to "single" TA blocksn this is not an advantage or disadvantage of TAs. [sic] The point being made here is that even on larger wikis there has not been a significant requirement to resort to IP blocks (which are still going to be allowed). It appears that based on the trends WMF is monitoring, there is evidence that most typical vandals are not shifting across temporary accounts by disabling or removing cookies. Sohom (talk) 15:08, 11 September 2025 (UTC)[reply]
I understand the point being made, and I don't see the importance of it. Most IP blocks that are now being made also don't require CU, SPI, ANI discussions, ... Basically, for the "easy" IP problems nothing changes, but the more complicated ones get harder to spot, discuss, ... "Most typical vandals" are not the ones I am talking about.
A report like this one from this month could no longer be publicly posted. In the future, the editor who posted it (who has temp IP rights) could notice that a group of temp accounts is from "This large IP range in Australia", but wouldn't be allowed to post this fact. They link to an IP range edit log[47], which would no longer be possible in such a discussion, as that would disclose the IPs of the temp accounts. It would lead to such discussions being had in back chambers, out of view of most editors, and, more importantly still, being impossible for most editors to initiate. That kind of stuff is the issue, not the "one-off vandals will get a 31h block on the temp account instead of on the IP". Fram (talk) 15:19, 11 September 2025 (UTC)[reply]
There's also a lot of "appears to be a one-off-vandals" that with a quick check of some small ranges turns out to be someone vandalizing for months or years. That visibility will be gone, too. ScottishFinnishRadish (talk) 15:21, 11 September 2025 (UTC)[reply]
@ScottishFinnishRadish, @Fram You will still be able to list temp-account edits by IPs and ranges at [[Special:IPContributions/<insert IP address here>]]. I don't understand how we would suddenly be unable to make the requests that you are pointing to. Sohom (talk) 15:49, 11 September 2025 (UTC)[reply]
I will not request the temp IP viewer right under the above rules. I have had one ridiculous outing block for coupling someone's handle to someone's real name, even though they were listed as such on their Wikidata page and they used both in combination elsewhere as well; I will not risk getting another block because I somehow "outed" an IP address I learned through that right but was not allowed to share with the masses, no matter how useful that might be. And no admin c.s. will be allowed to show such an IPContributions list when they may not reveal the IP address behind the temp account name. Fram (talk) 15:54, 11 September 2025 (UTC)[reply]
That's on you, the above directive is pretty explicit that you can report "hey Special:IPContributions/192.168.0.0/16 (not exactly that, but you get the drift) is a bunch of school kids, can a admin block it" or "hey Special:IPContributions/192.168.0.0/24 appears to a bunch of temporary accounts with very similar disruptive edits to game engines". It's a change of vocabulary yes, but the kinds of reports you are talking about are definitely doable and not being explicitly disallowed. Sohom (talk) 16:05, 11 September 2025 (UTC)[reply]
""hey Special:IPContributions/192.168.0.0/24 appears to a bunch of temporary accounts with very similar disruptive edits to game engines". " That makes no sense. IPs are not temporary accounts. And in any case you restrict such reports and the checking of such reports!), now made by regular editors (see my link to such a report in the current ANI) to a much smaller group of people. By the way, the people with the right can see the IP address belonging to a temp account: but can they easily do the reverse? Fram (talk) 16:14, 11 September 2025 (UTC)[reply]
"appears to be a bunch of temporary accounts" – sorry for the typo. (An IP range can map to multiple temporary accounts since a TA corresponds to a machine). Also, you do realize that almost anyone with rollback or NPR will be able to make the same report with no problems. The set of people who will be able to take action (i.e. block, revert) is already limited, and almost all of the folks who can respond will already have TAIV (or will be handed TAIV at PERM with zero questions). Sohom (talk) 16:28, 11 September 2025 (UTC)[reply]
Yes, and when they state "Special:IPContributions/192.168.0.0/24 includes temp accounts X, Y and Z, two of which have been blocked already" or some such, they should get blocked for outing, as making that claim publicly (linking an IP to a temp account name) will be disallowed. If we follow the WMF rules on this, people will need to be very, very careful not to accidentally break them. Even claiming "temp accounts X, Y and Z all locate to Perth, Australia, so are likely socks" is not allowed, as one can only know that through the IP addresses, and publicly stating anything learned by seeing the IP addresses is, again, not allowed. Fram (talk) 16:49, 11 September 2025 (UTC)[reply]
So that'll add how many seconds to the average task that is done 10,000 times a month by a few dozen people? A ten second increase adds dozens of hours per month to an already overwhelmed workflow. Or this extra stuff doesn't get checked anymore, which is more likely, and everyone wastes even more time dealing with unmitigated vandals. ScottishFinnishRadish (talk) 16:44, 11 September 2025 (UTC)[reply]
@ScottishFinnishRadish, If your gripe here is "this adds 10 seconds to an existing workflow", I see that as an acceptable tradeoff compared to the alternatives, which are "WMF (and Wikipedia) gets sued out of existence by frivolous GDPR lawsuits" or "we legitimately lose a significant chunk of good contributions from IP addresses by blocking all IP editing". Sohom (talk) 18:10, 11 September 2025 (UTC)[reply]
You forgot "there's not enough labor available to keep up with the increased workload and trying to keep up leads to administrator burnout and even less labor available for the increased workload which leads to increased burnout..." ScottishFinnishRadish (talk) 18:14, 11 September 2025 (UTC)[reply]
@ScottishFinnishRadish Admin burnout and not electing enough admins is an "us" problem. The fix is nominating folks at WP:AELECT, WP:RFA (the very same processes you are defending as set in stone) and fighting to make it easier for community members to elect worthy candidates to adminship, not arguing against the implementation of a system that has been brought in to prevent us from being sued out of existence and where the WMF has put significant effort into reducing the friction to 10 seconds. Sohom (talk) 18:22, 11 September 2025 (UTC)[reply]
Uh, the very same processes you are defending as set in stone, what?
We don't actually know what the additional time required will be, and having worked with the interface to place over 13,000 blocks, I think 10 seconds is on the low end of the scale. Editor and administrator time is not cheap, and putting a system in place that will result in a huge increase in labor cost without looking at the available labor is probably going to be worse than what we've seen at ptwiki.
We're routinely dealing with bot attacks that use multiple IPs a minute and will require an IP block as well as a temporary account block. The end result of increasing the workload of defending against these attacks is no one actually doing the work. ScottishFinnishRadish (talk) 18:34, 11 September 2025 (UTC)[reply]
Uh, the very same processes you are defending as set in stone, what? - We are talking about the inflexibility of community processes to deal with TAs and why they might not scale.
We don't actually know what the additional time required will be, and having worked with the interface to place over 13,000 blocks, - While I respect your opinions here, I think you are overestimating the amount of time involved: you see a bunch of edits across different TAs, a non-TAIV editor reverts and reports at AIV that a bunch of TAs are making similar edits, an admin looks at the IP addresses for a few accounts (two or three extra clicks more than normal), clicks on the IPContributions and widens the search space until all the TAs listed in the AIV report are covered, blocks the IP range, and we are done. (If a TAIV editor sees the same edits, they directly report the IP address range and an admin blocks it.) I do understand your point about friction, but I don't see it: in the vast majority of cases we aren't adding anywhere near the amount of friction where folks will "just not do it". (And I assume with time user-scripts will be developed to make the process smoother with fewer clicks.) Sohom (talk) 19:42, 11 September 2025 (UTC)[reply]
There is Autoreveal mode for users with the checkuser-temporary-account-auto-reveal right, which reduces friction for users who need to be able to scan a list of IP addresses of temporary accounts when viewing logs. KHarlan (WMF) (talk) 21:03, 11 September 2025 (UTC)[reply]
Thanks. So if there is an IP vandalizing pages infrequently over months or years, and once discovered, I would like to go back to check whether their previous edits were also reverted, that would now be impossible? ARandomName123 (talk) Ping me! 17:39, 12 September 2025 (UTC)[reply]
That was my thought as well. Even if someone who cleans up/investigates copyvio has TAIV, the lookback seems quite limited, so you would have to hope that each temp account is doing something obvious on a behavioral level to link them. And then that circles back to whether you can name a CCI after an IP address & list the temp accounts there. @SGrabarczuk (WMF): I'm not comfortable with "it may actually be OK to document IPs" - there should be definitive clarification one way or the other before the rollout occurs. Sariel Xilo (talk) 20:03, 12 September 2025 (UTC)[reply]
Even if we agree that English Wikipedia is unique and whatnot, there is a pattern and hopefully discussions about blocking IPs won't be that frequent (phab:T395134#11120266). I hope we agree that if EnWiki isn't unique, it's unique in size (though I would argue that EnWiki, like all other large projects, actually is unique in its practices and challenges, even if much is common). And so even if the number of range blocks decreases, the scale of exceptions may cause more problems here than on even the other large projects. Best, Barkeep49 (talk) 15:09, 11 September 2025 (UTC)[reply]
I think this line of thinking (disabling IP editing) is short-sighted and will lead to an eventual demise of the project (if we don't let people know we allow editing, we lose potential new editors/contributors). We should not be making it harder for people to edit; instead we should be looking at ways to make it easier for folks to engage and edit our content (especially in the context of the fact that a lot of our content is being indiscriminately remixed by AI). Sohom (talk) 13:07, 11 September 2025 (UTC)[reply]
Bizarrely, the only major test we have had of this has not in any way led to the demise of said project. Portuguese Wikipedia has disabled IP editing since October 2020 (according to the Temp Accounts FAQ, question "Would disallowing or limiting anonymous editing be a good alternative?"), where the WMF claims "there is evidence that this came at the cost of a significant reduction in non-reverted edits, weakening the growth of content in the Portuguese Wikipedia, and potentially leading to other negative long-term effects."
These claims seem false or at the very least severely overstated (no surprise, sadly, to see this kind of thing when the WMF wants to promote what they want or suppress what they don't want): there is no reduction in the number of editor edits[48] compared to e.g. 2019 (2020-2021, the Covid years, are a bad comparison). The same can be seen for the number of new pages[49]. The number of new editors is stable as well[50].
So contrary to what the WMF claims and what you predict, there are no negative effects from disabling IP editing (on the one large wiki that has done this). Fram (talk) 15:33, 11 September 2025 (UTC)[reply]
The number of new editors is stable as well The chart you linked to shows a slow decline/downward trend from 2020 to the present day (August 2023 was 9K, August 2025 is 7K). Again, this is not a Freenode-style sharp drop-off we are talking about, but a slow downward decline not unlike Stack Overflow. Sohom (talk) 15:55, 11 September 2025 (UTC)[reply]
Er, August 2023 was 7894, not 9K. August 2025 was 7227. For comparison, enwiki August 2023 was 93052, August 2025 was 85195. So Pt is at 91.5%, and enwiki is at, hey, 91.5%. Frwiki 11989 / 10656, or 88%. Dewiki 5919 / 5594 = 94.5%. So it seems like the decline for ptwiki is exactly in line with that of other large Wikipedia versions in general, and identical to the enwiki one. Fram (talk) 16:07, 11 September 2025 (UTC)[reply]
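A minimal sketch reproducing the ratio arithmetic above, using only the new-editor counts quoted in this thread:
<syntaxhighlight lang="python">
# New-editor counts quoted above: (August 2023, August 2025).
counts = {
    "ptwiki": (7894, 7227),
    "enwiki": (93052, 85195),
    "frwiki": (11989, 10656),
    "dewiki": (5919, 5594),
}

for wiki, (aug_2023, aug_2025) in counts.items():
    print(f"{wiki}: {aug_2025 / aug_2023:.1%} of the August 2023 figure")
# -> ptwiki ~91.6%, enwiki ~91.6%, frwiki ~88.9%, dewiki ~94.5%
#    (the comment above rounds slightly differently)
</syntaxhighlight>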
I misspoke, I meant June 2024. I think we can quibble over statistics for a hot second, but there is significant anecdotal and UX research behind the fact that when you present people with a "sign up before doing the thing" screen, you see steady user attrition at that stage of the funnel. If you are telling me that Wikipedia is somehow so special that this doesn't apply, I'm going to need to see a lot more data than what you are showing me at the moment. Sohom (talk) 16:22, 11 September 2025 (UTC)[reply]
So you have no evidence for your claims, you compared apples and oranges, but according to you it should happen as you predict, and I somehow need more than figures from the past 5 years to prove that what didn't materialize actually didn't materialize? Perhaps what you and your "significant anecdotal research" haven't taken into consideration is that there may be many more editors who stick around because they no longer have to deal with lots of IP vandalism?
Anyway, "I misspoke, I meant June 2024"? Oh right, that month with 7880 new editors, that makes all the difference in explaining how you came up with 9K... Please don't make such a mistake a third time or I will have to consider it deliberate. Fram (talk) 16:37, 11 September 2025 (UTC)[reply]
Fram, your message above is extremely adversarial and abrasive. I will refrain from engaging in this particular thread any further unless you reword your statement, because your point here appears to be to engage with me personally rather than with the issue more broadly. Your comment implies that I'm trying to deliberately misrepresent information in some way, which I sincerely am not, and that is an assumption of bad faith.
To explicitly answer your question, there is a clear slow decline visible and yes, I misspoke, I meant June 2023. Also, here is the other thing: we do need some kind of IP masking, otherwise we open ourselves to lawsuits related to GDPR. I do not have access to any data about editor attrition due to IP masking, but the whole reason the WMF is doing IP masking is to make sure admins and patrollers have the tools they need to still continue doing anti-vandalism even with the legislation-required changes. Best, Sohom (talk) 17:30, 11 September 2025 (UTC)[reply]
I see no reason to change anything in my statement when you cherrypick one month and twice fail to pick the right one to boot. June 2023 is also a thousand up from June 2022, so what's that supposed to prove? One doesn't check trends over 5 or more years by comparing one month from midway in the set with one from near the current end, unless one wants to prove some otherwise unsupported point. Fram (talk) 18:20, 11 September 2025 (UTC)[reply]
If temporary accounts go poorly - something that seemingly hasn't happened on other large projects - that seems like a logical response for the community to make. However, many people have been in favor of turning off IP editing for a while, and so temporary accounts aren't forcing those people, or the community, to that position. I have seen the value of IPs on their own merits, and seen from the Editor reflections that many editors with registered accounts started as IPs, and so we should be careful about turning off that gateway. Best, Barkeep49 (talk) 14:25, 11 September 2025 (UTC)[reply]
This is what's known as the Brussels effect. It's why, for example, caps on bottles are tethered in the UK, even though that's only required under EU law. Countries outside the EU may get the same treatment, so that companies don't have to maintain separate EU and non-EU product lines. JuniperChill (talk) 18:18, 11 September 2025 (UTC)[reply]
I’m concerned that IP info will disappear after 90 days. This will make it difficult to address long term abusers with stable addresses, of which there are a significant number. Instead, we’ll be playing whack a mole every 90 days or so, unless we can somehow retain info on IP use. — rsjaffe🗣️15:10, 11 September 2025 (UTC)[reply]
As I’m thinking about this some more, one way to retain the ip record is to block the ip rather than the temp account when we suspect a long term abuser with perhaps a stable ip. If the block of the ip isn’t sufficient, then block the temp account. — rsjaffe🗣️15:33, 11 September 2025 (UTC)[reply]
As I understand it, I as a CU cannot do this because the Ombuds have decided this is the same as the longstanding prohibition on connecting IPs to an account. But I hope non-CU admins could without jeopardizing the right. Best, Barkeep49 (talk) 15:42, 11 September 2025 (UTC)[reply]
Even CUs can block on behavioural similarities, unless that's changing too. A bigger question is perhaps, if an IP is blocked, is that block visible on the temp account and can others see the reason for the block as they do now? CMD (talk) 15:45, 11 September 2025 (UTC)[reply]
Correct. I could block a temp account based on behavior. But I can't do what SFR and rsjaffe are mooting: block the IP as a signal before blocking the temp account (or at least can't without obfuscating it in some other way). Best, Barkeep49 (talk) 15:50, 11 September 2025 (UTC)[reply]
If the IP is blocked but not the temporary account: All temporary accounts on that IP address will be prevented from editing, because all IP address blocks apply to temporary accounts (even if the IP address block isn't a hardblock)
If the temporary account is blocked but not the IP: The temporary account targeted by the block will be unable to edit. Additionally, if autoblocking is enabled on the block targeting the temporary account then:
The last IP used will be autoblocked for 1 day (in the same way as autoblocking works for registered accounts)
Attempts to edit using that blocked temporary account will also cause an autoblock to be created
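For readers trying to follow the interaction, here is a minimal sketch that simply encodes the rules listed above; the function and field names are hypothetical and this is not MediaWiki's actual implementation:
<syntaxhighlight lang="python">
# Hypothetical sketch of the block interaction described above.
# Names and data structures are illustrative only.
def can_temp_account_edit(temp_account, ip, blocked_ips, blocked_temp_accounts, autoblocked_ips):
    # Any IP address block applies to temporary accounts on that IP,
    # even if it is not a hardblock.
    if ip in blocked_ips:
        return False
    # A block on the temporary account itself prevents editing.
    if temp_account in blocked_temp_accounts:
        return False
    # An autoblock (e.g. the 1-day autoblock on the last IP used by a
    # blocked temporary account) also stops edits from that IP.
    if ip in autoblocked_ips:
        return False
    return True

# Example: the IP is blocked but the temporary account is not.
print(can_temp_account_edit("~2025-1234", "192.0.2.7",
                            blocked_ips={"192.0.2.7"},
                            blocked_temp_accounts=set(),
                            autoblocked_ips=set()))  # -> False
</syntaxhighlight>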
Thanks very much, hopefully this can all be collated somewhere. I suppose the remaining question is whether other users see IP blocks and their reasoning, and if so how. CMD (talk) 16:32, 11 September 2025 (UTC)[reply]
Blocks placed on IP addresses will continue to be visible on Special:BlockList and other places that show blocks. However, a user wouldn't be able to see that a temporary account is blocked by an IP address block, unless they use IP reveal (TAIV) to get the IP address and then look for the block targeting that IP (such as opening the contributions page for that IP). WBrown (WMF) (talk) 08:41, 12 September 2025 (UTC)[reply]
Really? If that is the case, how are admins expected to handle say vandalism reports of a temporary account where an IP is already blocked? Always block the temp account as well? CMD (talk) 09:58, 12 September 2025 (UTC)[reply]
If the IP address is blocked, then the temporary account cannot edit. Therefore, the admin wouldn't need to take additional blocking action on the temporary account. However, if the temporary account switches IP addresses then they will be able to edit.
Given that, if the target of the block is intended to be the temporary account the admin should block the temporary account. This will usually mean that it is better to block the temporary account first as opposed to the IP address.
We have seen that blocks of temporary accounts on other wikis have been enough to prevent abuse in most cases. Generally an admin would want to block the underlying IP address(es) if:
The user has evaded blocks by logging out, waiting for the autoblock to expire, and making another edit
Multiple temporary accounts are editing for a sustained period on the same IP (therefore, it's easier to block the IP than multiple temporary accounts)
The issue I raise is vandalism reports, as given we now can't see whether an editor is blocked, multiple reports could be made. I suppose an admin could reply "Already IP blocked" and that wouldn't disclose the IP connection, but I suspect if multiple reports come in, a dual block of the temporary account as well will provide the clearest information. CMD (talk) 12:25, 12 September 2025 (UTC)[reply]
Yeah, a dual block would be the most clear. Blocking just the temporary account should be enough for any user that has not used TAIV to view the associated IP address.
This is useful information. Is there any compendium of lessons learned so far? That would help reduce the disruption that I’m sure will occur as we learn over time how to address this new way of tracking unlogged-in users. — rsjaffe🗣️13:19, 12 September 2025 (UTC)[reply]
Or rotate their IPv6 address by simply restarting their router. Autoblocks should inherit the block settings of the TA, and if they are using IPv6 addresses, they should apply across the /64 range as well. ChildrenWillListen (🐄 talk, 🫘 contribs) 20:04, 12 September 2025 (UTC)[reply]
@Barkeep49, I can't block the IP as a signal before blocking the temp account - I'm pretty sure you can, I'd like somebody else to confirm it but as far as I know, this happens on other wikis, it's a tradeoff Legal is OK with. SGrabarczuk (WMF) (talk) 16:17, 11 September 2025 (UTC)[reply]
I'm glad to hear admins can. But (and I would hope @RoySmith or some other Ombud reading this corrects me if I'm wrong) the Ombuds have written that I cannot as a checkuser. They did so in a message sent to checkusers in March, and when I wrote in reply I find the implication that CUs will have to take similar measures to blocking two connected IPs as we do to blocking a registered account and an IP address to be incredibly surprising, no one corrected me or said I was misunderstanding in any way. Best, Barkeep49 (talk) 16:32, 11 September 2025 (UTC)[reply]
It's great that we'll be able to block the temporary account MAB or Salebot is using, then spend additional time to view the IP and check if it's a proxy before placing the proxy block, and if we're lucky finish that process before their bot has moved onto the next temporary account on another IP that will require twice as many blocks and three times as much time to take care of. Or, as Barkeep points out, since we've gotten conflicting information I might have to block the temporary account, find an active checkuser or other trusted editor I can disclose the IP to, have them block it, and waste multiple people's time. ScottishFinnishRadish (talk) 16:49, 11 September 2025 (UTC)[reply]
@Barkeep49 I don't remember your specific comment, but I assume it was in response to the OC's email of 17 March, which is reproduced for public view at meta:Ombuds commission/2025/Temporary Accounts. I encourage anybody reading that to note that it's full of weasel words like "limited experience", "initial", "preliminary guidance", "evolving landscape", "current understanding", etc. I should also point out that just like ArbCom, the OC doesn't make policy; we (again, like ArbCom) just get blamed for trying to enforce it. RoySmith(talk)17:16, 11 September 2025 (UTC)[reply]
On the other hand, if an LTA comes back within 90 days on a new temp account and we can behaviorally link it to the prior temp account, and find that both are on the same ip, then we can go for a prolonged ip block. I think there’s going to be a significant learning curve to this as we figure out how to address chronic abusers. — rsjaffe🗣️16:02, 11 September 2025 (UTC)[reply]
There's also this: When it is reasonably believed to be necessary, users with access to temporary account IP addresses may also disclose the IP addresses in appropriate venues that enable them to enforce or investigate potential violations of our Terms of Use, the Privacy Policy, or any Wikimedia Foundation or user community-based policies. Appropriate venues for such disclosures include pages dedicated to Long-term abuse. If such a disclosure later becomes unnecessary, then the IP address should be promptly revision-deleted. (Source) SGrabarczuk (WMF) (talk) 17:28, 11 September 2025 (UTC)[reply]
You can configure your browser to reject cookies, and in that case, a new temporary account will be created for every edit you make. See this FAQ entry. Note that if you do this, you can edit only 6 times/day before you have to create a real account, per this FAQ entry. OutsideNormality (talk) 03:30, 12 September 2025 (UTC)[reply]
Hi! I want to note that we are not implementing any tracking cookies in your browser. Tracking cookies are used to track your browsing history and activities, typically across multiple websites. We are adding a cookie to attribute your edits to an anonymized username. And your data (IP address) will be stored for a limited amount of time and be exposed to a smaller group of individuals. We have a similar cookie for registered accounts, except that it lasts for a longer time period. -- NKohli (WMF) (talk) 09:32, 12 September 2025 (UTC)[reply]
Cookies don’t anonymize edits, they de-anonymize them. They enable activity to be tracked across IP addresses. (Or whatever you want to call it that isn’t “tracking”—haha, gotcha! It’s totally not tracking because we defined tracking as something you do with muffins, not cookies!) This cookie has no other purpose and I don’t want it. 98.97.6.48 (talk) 00:51, 13 September 2025 (UTC)[reply]
The alternative is to expose your IP address with every edit. The purpose of temporary accounts is to de-anonymize your activities on Wikipedia (which must be done in some way so blocks apply to the same person) while hiding your real-life identity, the latter of which is what the WMF probably means by "anonymize". Aaron Liu (talk) 11:41, 17 September 2025 (UTC)[reply]
Some questions about temporary accounts:
Would there still be a way for an unregistered user to view all of their own IP's (post-rollout) contributions, or equivalently the list of their own IP's past temporary accounts?
Some questions about temporary account viewers:
If an unregistered user only edits constructively and without engaging in vandalism, trolling, or similar shenanigans, then would it be against the rules for a TAIV to check their IP address, or could they just decide to do it on a whim?
What's stopping a rogue TAIV user from programmatically checking the IP of every single temporary account that has edited in the last 90 days and dumping that list somewhere? Would there be ratelimits put in place or something? 98.170.164.88 (talk) 05:49, 12 September 2025 (UTC)[reply]
To answer your first question about temp accounts, what do you mean by "their own IP"? :) This was a fundamental concern with how we handle unregistered editors. IPs can change, sometimes very rapidly. We cannot say IP 1.2.3.4 is always User ABC.
Contributions made before the launch of temp accounts will not be affected. So a user can see edits made by logged-out editors from an IP/range from before the rollout. Post-rollout, a temporary account holder can look at their contributions from their temp account. If they happened to have other temp accounts in the past, they'll need to remember which ones those are if they want to see their contributions from those temp accounts.
To answer your questions about temporary account viewers:
The policy lays this out so please refer to it. We tried to make it as succinct and clear as we could. If you have clarifying questions about anything outlined in the policy, please let me know. Happy to answer.
There is a log in place but we do not have any rate-limits. We trust that editors with this right will exercise their judgement and act in the best interests of the project. We also expect that admins will ensure users who are granted this right truly need this right to carry out anti-vandalism efforts.
Chipmunkdavis, Rsjaffe, and other interested parties: I have made an attempt to document the answers to questions in this discussion at User:Perfect4th/Temporary accounts. It's roughly topical; anyone who wishes to or has a better understanding than what I wrote is free to correct it, reorder in a way that makes sense, add further answered questions, etc. Happy editing, Perfect4th (talk) 18:23, 12 September 2025 (UTC)[reply]
Nice. Thanks for making this. Perhaps you should consider moving it to projectspace, or someone should create something similar to it in projectspace. I think a projectspace page to put tips, tricks, and notes on temporary accounts is going to be needed to help get everyone up to speed. –Novem Linguae (talk) 21:07, 12 September 2025 (UTC)[reply]
I'm just going to comment here that anyone who wants to see how temp accounts work in action can look at the other wikis, particularly Simple English for those who aren't bilingual (myself included). QuicoleJR (talk) 12:03, 20 September 2025 (UTC)[reply]
I'll admit I've for some time now been rather dubitante over this whole change, and my overall assessment is almost certainly irrelevant to the people responsible anyway, but I'm still not sure it's ever been explained by the WMF or anyone else why masking schemes that preserve ranges were disfavored; if that has been explained somewhere, a pointer would be welcome. 184.152.65.118 (talk) 21:17, 21 September 2025 (UTC)[reply]
Will it be possible for editors on a temporary guest account to "upgrade" their guest account to a "proper" account during the 90 days, retaining their editing history? I can imagine quite a lot of editors might start as guests, but find they are making good progress, finding it fulfilling being part of the project, and want to keep going. It would benefit both the community and the individual if they can move seamlessly to a named account. That way, we have continuity in any ongoing discussions in which they're taking part, and in interactions concerning their edits, and they can still go back to their older contributions, which will count towards their extended-confirmed status. In fact if they get kicked off, start a named account, and immediately reinforce a view they've expressed somewhere controversial, we have to make sure they don't get instantly accused of socking. Elemimele (talk) 14:46, 27 September 2025 (UTC)[reply]
Disclosing another's IP would run afoul of TAIV rules. I'm sure whoever chooses to use the process would be warned about the chance risks. Aaron Liu (talk) 19:43, 27 September 2025 (UTC)[reply]
I don't believe that's possible, in the same way it's not possible now with an IP. But there's nothing to stop somebody from making an account and noting "I used to edit as ~2025-12345-99" on their user page if they want to. RoySmith(talk)14:51, 27 September 2025 (UTC)[reply]
ChildrenWillListen, I don't think so; I was under the impression that the main point of having the temporary accounts was to conceal the IP of the person using them, so that they offer logged-out users a better level of privacy, consistent with the way privacy law is going. Yes, RoySmith, they can do so, but from a community perspective it still means we need to look back at a separate place for their edit history, and from their perspective they're back to square one for anything like extended-confirmed (not that that's tragic). Elemimele (talk) 17:46, 27 September 2025 (UTC)[reply]
Hey @Elemimele, I just wanted to confirm that it's not possible to "upgrade" into a registered account. Instead, temporary account holders will be (perhaps they already are, periodically) encouraged to create registered accounts. SGrabarczuk (WMF) (talk) 12:10, 28 September 2025 (UTC)[reply]
There's a certain IP address whom I work with, as they periodically make NFL drafts and I come along to improve and publish them (I find the drafts by checking their contributions). How will I be able to continue working with the IP editor if this change goes through? BeanieFan11 (talk) 16:35, 7 October 2025 (UTC)[reply]
Ask them to create an account and edit using that. We are long past the "if this change goes through" point. It is going to happen, it's just a matter of the exact rollout date for enwiki having changed. RoySmith(talk)16:58, 7 October 2025 (UTC)[reply]
If they insist on not registering a "permanent" account, at least they will get a user page for the temporary account where they could note that they previously edited from an IP address. If I were regularly collaborating with an IP editor I'd find it hard to resist reminding them that a regular account would make communication and collaboration much easier. ClaudineChionh (she/her · talk · email · global) 21:30, 7 October 2025 (UTC)[reply]
What I don't understand is the resistance to registering an account. Let's say, like Beanie's collaborator, you have a long-term static IP address that people know you by. That's essentially the same as having a registered account except that 1) there's a few editing rights you can't have and 2) anybody can find out where you are by looking you up in one of the public geolocation databases. So what are you gaining by not registering?
I'm not being facetious here; I really do want to understand why people are opposed to registering an account. Given the disadvantages of IP editing, there must be some offsetting advantages which makes it the right choice for some people. RoySmith(talk)21:46, 7 October 2025 (UTC)[reply]
@RoySmith: We have our reasons. And there are more of us still around than you might at first think.
Not everyone is so philosophical about it, and as tongue-in-cheek as WP:WNCAA may be, it's not actually unpersuasive on its own merits; for some that's sufficient. For others remaining unregistered is their way of trying to help others get involved with the project and indeed to induce future registrations [51].
Contrarianism also plays a role. And of course historically, and unsurprisingly given the type of people attracted to editing here, there were some who saw their role as challengers of perceived injustice, and others who thought that by pushing back against needless demands (being difficult, if you will) they were performing a genuine public service by keeping the project from becoming too rules-bound and authoritarian. Never all that common, and rare now, but every so often I do catch a familiar old scent.
As for myself, I'll concede accounts offer greater anonymity, and access to some additional tools, though making use of some script assistance while logged out isn't that hard if one is so inclined. Even so, my ideals, however dated, are what they are. Editing unregistered is a matter not only of habit, but of fidelity. We all have our roles, and for some of us that means being forever IPs. Not that in my case it really matters much given how limited my activity has been over the last decade.
I suppose one day all of us relics will be gone. Long-term trends are negative, quite the contrast with early wiki culture, which was largely hostile to registration when it was even permitted; some will ascribe that merely to the outsize influence of Ward's Wiki, but believe me when I say it ran deeper. The net as a whole has just not come along as free and open as hoped. So some day yes, but not yet. 184.152.65.118 (talk) 01:21, 3 November 2025 (UTC)[reply]
I was looking at the contributions pages for temporary accounts on other language editions of Wikipedia to get a feel for how this will work. I discovered that, under the new temporary account system, every time an unregistered editor merely reads a Wikipedia edition they had not previously accessed, the date, time, and language of their reading will be publicly logged.
See for example Special:CentralAuth/~2025-54321-0, who made one edit to Polish Wikipedia on 7 September, then merely read an article on German Wikipedia on 3 October (without making any edits in German). Another example: Special:CentralAuth/~2025-100123, who edited Polish Wikipedia and then read the German, Serbian, and Chinese Wikipedias about 13 hours later.
I find this odd. Why does the software have to keep track of mere reads instead of actual edits? If the goal of the new system is to improve privacy, then this seems like a step backwards. 98.170.164.88 (talk) 04:51, 15 October 2025 (UTC)[reply]
Hello. This is a quirk of the account registration system in MediaWiki. Every time a user visits a project that they haven't visited before, the account "attaches" to the new wiki. This works similarly for registered accounts too (example). Since temporary accounts use the same mechanism to generate accounts, the same behavior applies to temporary accounts. -- NKohli (WMF) (talk) 09:27, 15 October 2025 (UTC)[reply]
It does the same for us logged-in users too. This is currently the only way to automatically log in on wikis you haven't visited before that share your account, because the account-creation date is a required field on each local wiki too. Aaron Liu (talk) 11:32, 15 October 2025 (UTC)[reply]
I have some questions regarding not-logged-in participants in deletion discussions and other situations where consensus is being determined. Under the old system it is allowed for these editors to participate, but it is not allowed for one person to create the appearance of being multiple people by using multiple accounts. (1) If comments by multiple temporary accounts appear to be in good faith, are we therefore forbidden from checking whether they are actually the same IP? That is, does this count as "investigation of or enforcement against vandalism, abuse, spam, harassment, disruptive behavior" or do we have to have more direct evidence of disruptive behavior to look at the IPs? (2) If a not-logged-in editor participates in good faith in a discussion but ends up getting multiple accounts (because their 90-day window expired or they disallow cookies), should they be required to disclose that these accounts are the same editor, or if not how are other editors without IP view access supposed to know this? (3) If they are not required to disclose the continuity of their identity in such cases, how are we supposed to distinguish a good-faith not-logged-in editor with multiple temporary accounts from a not-good-faith editor who is deliberately creating multiple temporary accounts to create the appearance of being multiple participants? —David Eppstein (talk) 05:17, 31 October 2025 (UTC)[reply]
@David Eppstein: (1) I would say that you need some kind of reason to suspect disruptive editing in order to look at the IPs, so if comments by multiple temporary accounts appear to be in good faith and are not disruptive, then it doesn't seem like we need to check the IPs. However, if there is some kind of reason to suspect that the temporary accounts might be the same person masquerading as multiple people, we can look at the IPs. It seems to me that the bar for looking at the IPs of a temporary account with TAIV is lower than the bar for looking at the IPs of a regular account with CheckUser: you do not have to explicitly provide a reason in the log to check the IP of a TA like you would for CU, and the log of temporary account IP accesses is only available to checkusers, and it does not seem like anyone is really auditing it at the moment. (This is merely an observation... not an encouragement for people to start revealing all IPs willy-nilly.) (2) I see this as not too different from before, in the case where a user was on a dynamic IP address that changes frequently through no fault of the user. If one person ends up with multiple TAs in the same discussion, they should not intentionally participate in a way that suggests they are multiple people. Relevant policy here would be WP:EWLO: I don't think they would necessarily be required to always make a disclosure, but they should not be actively trying to deceive or mislead us. (3) We would look at the behavior to distinguish between good-faith and bad-faith users of multiple temporary accounts. Specifically, are the accounts engaging in the behaviors that are prohibited at WP:ILLEGIT? Are they deliberately trying to mislead us, or did their TA change naturally? If we ask them directly, do they answer truthfully? For what it's worth, the criteria for WP:TAIV access is pretty intentionally low, so hopefully it should be easy to get IP access for most members of the community that would need it. Mz7 (talk) 18:43, 5 November 2025 (UTC)[reply]
I cannot emphasize this enough: with a level of privacy now afforded to logged-out editors, certain admins need to seriously stop letting schoolchildren who insert random characters into articles hurt their feelings, because now if they're not careful they're outing the identity of a minor with the block logs (whereas before the minor did it to themselves). If they're on a vandalism spree, that needs to be addressed, but a little kid writing "hi" or "poop" on an article one time (and self-reverting on top of it) after the IP (which represents thousands of users) came off of a ten-year block is not "persistent vandalism" (it's arguably not even vandalism under policy; "test editing" isn't considered vandalism). Personally, I think there needs to be some training for some of these networking-illiterate (for lack of a nicer term) folks who can't tell the difference between the inevitable (i.e. out of over 60,000 people, someone in Port Charlotte will vandalize or test edit Wikipedia once in a while, and ditto for a school district/university population of comparable size) and a problem that needs to be addressed, and there need to be consequences for anyone who decides to be heavy-handed. Just my two cents. PCHS Pirate Alumnus (talk) 00:35, 5 November 2025 (UTC)[reply]
It depends on the range. On one hand, if an admin blocks a temporary account along with a /24 belonging to the Okaloosa County School Board (for example) in response to a complaint at WP:AIV, or if that temporary account stops editing shortly after the range block, people can connect the dots and it suddenly becomes easier to identify Jane who is in fourth grade and is obsessed with carrot cake, because someone can find a Pinterest profile of a person named Jane Doe located in Okaloosa County, Florida who is obsessed with carrot cake, or maybe a Minecraft user whose handle is her real name Jane Doe and has a carrot cake-themed world. Now in an absolute worst-case scenario the news channels could be reporting that a little kid connected with a predator because he was able to identify her partially because her IP was leaked over a series of dumb Wikipedia edits that were easy enough to recognize as unhelpful and revert with the click of a mouse (or even the action of a bot). The headline would be ridiculous, but it's still something we don't want to deal with. CheckUsers already have to be careful about accidentally making such connections, and now a lot more people with less experience are going to have to exercise the same discipline. On the other hand, if an admin is blocking a /16 range for a major ISP like CenturyLink or Comcast to block one school, maybe the admin is being a little overzealous with his/her blocks. There's no reason to block millions of people to stop one kid or even 50 kids from writing something silly on an article. PCHS Pirate Alumnus (talk) 13:06, 5 November 2025 (UTC)[reply]
Users stop editing for far more reasons than simply a range block being issued. And you'll find a ton of users that become inactive all the time, and some of these inactivities are bound to coincide with a rangeblock. Not everybody with an extremely common name that stops editing after a range block did so because they fell within the range block. Far more commonly, they stop editing simply because they do not decide to continue editing. Aaron Liu (talk) 14:12, 5 November 2025 (UTC)[reply]
I'm speaking from experience with CheckUsers issuing blocks on IPs; there have been times I've been able to identify the IP address of a user just by looking at their block log. The exact details of how or why do not matter as much as the obvious increased probability of someone's identity being outed with less experienced administrators (and even non-administrators) now having access to private information (which would have been public before and therefore no big deal). Also, I'm well aware of wide range blocks. To say I think they're detrimental to the project is sugar-coating it for the sake of civility. PCHS Pirate Alumnus (talk) 14:31, 5 November 2025 (UTC)[reply]
Sometimes you can assume who a rangeblock is intended for, but the common practice is to have another CU place a rangeblock or long-term IP hardblock for you to obfuscate such connections. Most of those possible connections you've assumed are likely a CU picking up a block for another CU. As far as the type of disruption that the wide rangeblocks are most often preventing, here's a recent example from my own talk page: I will work to ensure you never feel safe at an WMF event ever again... I'll find your wife and tell her exactly what you are, a pedo£%ile protecting scumbag who tried to cover his tracks when he got caught. Maybe she'll be the one to shoot you? There is an enormous amount of serious abuse and threats that too many editors have to deal with on a day to day basis. Having to block wide ranges is unfortunate, but they are a necessary part of dealing with abuse on Wikipedia. Or maybe people should have to just deal with death threats and abuse? I guess that's always an option, too. ScottishFinnishRadish (talk) 16:14, 5 November 2025 (UTC)[reply]
I should clarify that there is a huge difference between enacting a large rangeblock to attempt to stop a dedicated idiot like the one you have described and ephebiphobically enacting a large rangeblock to stop sporadic test editing and silly vandalism from elementary and high schools (and blocking public libraries, research hospitals, Ivy League universities, and whatever else falls in that wide /16 range in the process), something I know for a fact some admins do. I've been on the receiving end of what you are describing and am well aware of the disruption someone like that can cause. PCHS Pirate Alumnus (talk) 17:11, 5 November 2025 (UTC)[reply]
Another factor is that many good editors get fed up and leave after spending a year defending the articles they wrote or maintain from test edits and other idiocy. Many passing IPs (now TAs) change dates or remove critical words like "not" to change the meaning of a sentence. Hilarious, but significant numbers of very good editors abandon Wikipedia because they feel that they are not supported. In addition to what has been described above, a range block can help retain good editors. Johnuniq (talk) 04:40, 6 November 2025 (UTC)[reply]
Just checking — The cookie will expire 90 days after its creation means that the cookie expiration is not refreshed by subsequent visits by the same browser? So an "IP editor" will get a series of user names – a new name per browser every 90 days? Which means that any discussions in the user's talk page will need to be linked or moved to the new account if the discussion is to continue? Is the cookie lifetime 90 days on all wikis? — GhostInTheMachinetalk to me15:35, 11 September 2025 (UTC)[reply]
Since it's not possible to delete an account on Wikipedia due to attribution issues, does it mean temporary talk pages will be kept after 90 days? Messages from IP users get deleted after a few years, but remain visible in the edit history. JuniperChill (talk) 18:13, 11 September 2025 (UTC)[reply]
It is up to each wiki to decide how they want to handle the talk pages of old temporary accounts (leave them unchanged, blank them, or delete them). I don't expect enwiki to delete them. jlwoodwa (talk) 23:56, 11 September 2025 (UTC)[reply]
@KHarlan (WMF) "Yes, the cookie is per-browser..." If a user has 5 or 6 browsers, like one of my test computers has, this means they would get a different temp account on each browser, right? Does this have any real ramifications?
I don't see how this is any different from people with dynamic IPs completely changing their identity (my old ISP would give me an IP on one of several /16 ranges, for example) in the same amount of time. Plus things have changed a lot since the 2000s (which is where a lot of people on the wiki seem to be stuck, no insult intended): a person can walk through downtown somewhere and access a variety of IP ranges like flavors of ice cream at Baskin-Robbins, thanks to Wi-Fi using different ISPs at different businesses. PCHS Pirate Alumnus (talk) 13:23, 5 November 2025 (UTC)[reply]
To the best of my understanding: 1. it is 90 days after the temporary account was created (globally, not locally), which is public information, and 2. it has the same effect as blocking it for the remainder of its lifetime (modulo a brief difference in autoblock behavior at the end, perhaps). jlwoodwa (talk) 01:06, 12 September 2025 (UTC)[reply]
To answer your first question: When a temporary account has expired, this information is shown publicly on Special:CentralAuth. For example, at testwiki the temporary account ~2024-10120 is shown as having expired. I am not aware of an interface that shows when a temporary account is expected to expire (though you could estimate this by looking at when the account was registered and comparing it to the current date).
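As an illustration of that estimate, here is a sketch using the action API's public registration date and assuming the 90-day lifetime runs from account creation (an assumption; the exact expiry rules may differ):
<syntaxhighlight lang="python">
# Sketch: estimate when a temporary account expires by adding 90 days to its
# registration date, per the suggestion above. The 90-day lifetime counted
# from (global) creation is an assumption; actual expiry mechanics may differ.
from datetime import datetime, timedelta
import requests

API = "https://test.wikipedia.org/w/api.php"  # any wiki's action API

def estimated_expiry(temp_account_name: str) -> datetime:
    data = requests.get(API, params={
        "action": "query",
        "list": "users",
        "ususers": temp_account_name,
        "usprop": "registration",
        "format": "json",
    }, timeout=10).json()
    registration = data["query"]["users"][0]["registration"]  # e.g. "2024-05-01T12:00:00Z"
    created = datetime.fromisoformat(registration.replace("Z", "+00:00"))
    return created + timedelta(days=90)

print(estimated_expiry("~2024-10120"))
</syntaxhighlight>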
To answer your second question: Any block placed on a temporary account for longer than its remaining lifetime will succeed. We do not prevent the blocking of temporary accounts for more than 90 days. One advantage of this is that there may be a need to track block evasion. For example:
A temporary account is editing disruptively and an admin decides to block the user behind the temporary account indefinitely (intentionally)
The admin communicates that this block is indefinite and editing the wiki again would be considered block evasion
The user ignores this and, after waiting until their old temporary account expires and waiting for any autoblocks to expire, they edit again, getting a new temporary account
A different admin receiving the report of block evasion can more easily see that there is still an active block on the first temporary account that applies to the user behind the account. Without a block longer than the expiry time of the temporary account, the other admin would need to check that the intention was to block the user for more than the lifetime of their old temporary account
If there is no need to block the user behind the temporary account, then a block of 90 days as standard would be enough to always ensure that they are prevented from editing throughout the lifetime of that temporary account
"If there is no need to block the user behind the temporary account, then a block of 90 days as standard would be enough to always ensure that they are prevented from editing throughout the lifetime of that temporary account" Under what circumstances would we ever block a temp account without the need to block "the user behind the account"? Blocks (excluding some username blocks, which aren't relevant here) are always for the user behind the account, and not for the account itself. Fram (talk) 09:21, 12 September 2025 (UTC)[reply]
Yes, I agree that blocks are intended for the user behind the account and so in probably all cases the best approach would be to block the temporary account indefinitely.
I mentioned the last point primarily from the point of view that some wikis have requested that we change the default blocking period for temporary accounts on their wiki to 90 days (T398626). Without a change in blocking policy to indicate 90 day blocks apply to the user indefinitely, these 90 day blocks would no longer prevent that user from editing under the blocking policy after their original temporary account expires. WBrown (WMF) (talk) 09:38, 12 September 2025 (UTC)[reply]
Perhaps one nice thing about temporary accounts will be that they can be blocked like regular users, without special rules about block duration. There are many IPs out there that have only gotten 36 hour blocks or one week blocks, when a full account would have normally been indef'd. In other words, it simplifies blocking. (And of course the normal indef appeals process can be used. Indefinite is not infinite.) –Novem Linguae (talk) 21:47, 12 September 2025 (UTC)[reply]
Are blocks on previous temporary accounts only visible to admins, or are they visible to all editors with TAIV permissions? CMD (talk) 03:14, 25 September 2025 (UTC)[reply]
@Chipmunkdavis the latter - TAIV gives you access to other temp accounts from the same IP.
To check the blocks on previous temp accounts from the same IP, use IP reveal, check the list of temp accounts using the IP, and then see if any have been blocked.
In addition (thanks to @WBrown (WMF) for this part), if the active temporary account is editing similar pages to other inactive temporary accounts, you could initially assume that these older temporary accounts are the same person as the active temporary account (especially if the topic isn't that active for editors). You could then confirm this by using IP reveal to look up the IPs of the temporary accounts you found and compare to the active temporary account.
Hey @ChildrenWillListen, yeah, I'm almost certain I've seen that question too but I'm not sure what you mean. A couple of thoughts:
How would you like to have them flagged, given that there are so many IP ranges? You can see different temp accounts using the same IP on Special:IPContributions (it will be blueified once temp accounts get introduced). You can read more about this page in the guide Temporary Accounts/Access to IP.
Definitely it's not possible to flag any connections publicly.
In the context of tracking abusers, we're trying to move away from treating IPs as the main identifiers. The connection between a person and a temp account, their editing patterns and other metadata is much tighter than that between the user and the IP. As an example, we expect that IP reputation filters will be useful in mitigating abuse without needing to target a specific IP address.
If the address is an IPv6, any temp account within the same /64 should be flagged. It is practically impossible for it to belong to someone else. We can filter by user agent here, particularly for IPv4 addresses, since there's a possibility they're behind a NAT and the address is shared with multiple households.
Why not? We're just linking ~2025-3999-1 with ~2025-4002-3. No IP info is revealed. I'm not a lawyer, so I could be totally wrong here.
Currently, for people with TAIV access, you need two operations to find temporary accounts within an IP range, much like with the CheckUser tool. The more time you spend combating abuse, the less time you have to, well, build an encyclopedia. If this feature is introduced, a person can simply see at a glance that these accounts belong to the same network, and report/block if needed, which also reduces the number of IP reveals needed, improving temporary account privacy in the long run.
As for the connection between a person and a temp account, their editing patterns and other metadata is much tighter than that between the user and the IP, while this may be true in the short term, people can and will change their behavior, and sometimes technical evidence is the only way you can link them.
The answer to 2 is because you could link one temporary account to multiple IPs, e.g. home and work. However, I agree with 3. Regarding How would you like to have them flagged, it would be useful if a temporary account's contributions page included any underlying blocks for IPs, and this could just include the type of block and reason without specifying the IPs. Similarly, any IP contributions page should include blocks given to linked temporary accounts (presumably there is no need to hide the account name that way around). CMD (talk) 13:18, 26 September 2025 (UTC)[reply]
The answer to 2 is because you could link one temporary account to multiple IPs, eg. home and work.: No, because no IPs are revealed in the process. All you see is ~2025-3999-1 and ~2025-4002-3 share the same IP addresses. Unless you use the TAIV tool to reveal the actual IP addresses, you cannot come to that conclusion. ChildrenWillListen (🐄 talk, 🫘 contribs) 13:21, 26 September 2025 (UTC)[reply]
Has it always been permissible to link IPs to accounts publicly? For example, if an IP user gets blocked, and a new user exhibits the exact same behaviour (or vice versa)? Of course, CUs are not allowed to use the tool to link IPs with accounts. JuniperChill (talk) 00:41, 27 September 2025 (UTC)[reply]
If the behavioral evidence used to come to that conclusion is all public, then I believe it's always been allowed. Unlike things like a logged in user's IP address, there is no expectation of privacy for two accounts that behave the same and someone simply points out the similar behavior. WP:DUCK comes to mind. –Novem Linguae (talk) 03:32, 27 September 2025 (UTC)[reply]
@SGrabarczuk (WMF): This is definitely a welcome step, but I have a few more comments:
numbers are bucketed to protect privacy: 0, 1-2, 3-5, 6-10, 11+: How does this protect privacy? If the exact number of TAs is leaked, how would a bad actor be able to find the IP address of a temporary account?
We should *not* provide any details about which specific temporary account names are active on the same IP / IPv6 /64 range. Again, why? The whole point of temporary accounts is to prevent most users from seeing the IP addresses of anonymous contributors. It is not meant to conceal connections between different accounts operating under the same IP address/range. This information cannot be used to find the IP addresses of the underlying TAs.
Even if you don't agree with the above statement, it would be nice for people with TAIV access to be able to list the specific accounts with one click, since they would be able to do that manually anyway.
@ChildrenWillListen thanks for these questions and my apologies about the delayed response. Bucketing the numbers was a suggestion from Legal. We are talking with them about making these exact and we should have an update soon. Same goes for the second point you made -- we are currently looking into a way to show connected temporary accounts. I will be able to share more details once I have more clarity from engineering and legal about these. Thanks. -- NKohli (WMF) (talk) 11:10, 7 October 2025 (UTC)[reply]
@ChildrenWillListen: There is nearly the same probability of two or more people sharing an IPv6 /64 as being NATted behind a single IPv4 address. Homes and small organizations typically get a temporary IPv6 /64 assignment from their ISP for use on their internal network. All devices connected to the same internal network interface use one or more IPv6 addresses from the assigned /64. If you block the /64, all of the people connected to the internal network interface where it is assigned will be blocked. If you block a dynamically assigned /64 and it gets reassigned before the block expires, all of the people connected to the internal network interface where it gets reassigned will be blocked. 216.126.35.228 (talk) 01:07, 16 October 2025 (UTC)[reply]
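For anyone unfamiliar with the /64 notation being discussed, here is a minimal sketch of checking whether two IPv6 addresses fall within the same /64 (and so would be covered by the same /64 range block); the addresses are illustrative only:
<syntaxhighlight lang="python">
# Check whether two IPv6 addresses sit in the same /64, i.e. would be
# covered by the same /64 range block. Addresses are examples only.
import ipaddress

def same_slash_64(addr_a: str, addr_b: str) -> bool:
    net_a = ipaddress.ip_network(f"{addr_a}/64", strict=False)
    return ipaddress.ip_address(addr_b) in net_a

print(same_slash_64("2001:db8:1234:5678::1", "2001:db8:1234:5678:abcd::9"))  # True
print(same_slash_64("2001:db8:1234:5678::1", "2001:db8:1234:9999::1"))       # False
</syntaxhighlight>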
Some mobile providers will assign the same /64 to all users within a certain area, though those are often already getting anon-blocked for long periods. Workplaces, yes, but Wikipedia editing is not a common enough hobby for that to be an issue unless the number of employees on a network is large, and of course many employers don't want their employees editing on the clock anyway. That same relative rarity usually prevents issues when /64s are reassigned between end-users.
There have been a handful of genuine cases where one user within a household was blocked but not the other, but it is, to be sure, exceedingly rare. Admittedly I'm not around much, but I can't think of a case where someone's been unambiguously subject to a mistaken block for that reason since Roger Hui, and that was almost a half-decade ago. 184.152.65.118 (talk) 00:20, 3 November 2025 (UTC)[reply]
I do not think those APIs currently return whether a temporary account is considered expired. We can expose this information via the API if this would be useful for scripts (the pagetriagelist API returns whether temporary accounts shown in the API results are expired, so it should be easy to replicate this in another API).
Could you file a feature request on Phabricator for this, along with any relevant use cases you see for it? If you'd prefer not to, I can file it (if you give me a ping with any relevant examples you would see it being used for). Thanks and happy editing, WBrown (WMF) (talk) 20:51, 4 November 2025 (UTC)[reply]
WMF, in the FAQ it is claimed in the section "Would disallowing or limiting anonymous editing be a good alternative?" that this is "unlikely" because at the Portuguese wikipedia "On the other hand, there is evidence that this came at the cost of a significant reduction in non-reverted edits, weakening the growth of content in the Portuguese Wikipedia, and potentially leading to other negative long-term effects." As I described above, these claims seem false, and the growth or decline of ptwiki seems exactly in line with that of other large Wikipedia versions. There is no significant extra loss of new articles, user edits, or new editors compared to these other Wikipedias. See e.g. the number of active editors[52]. So based on what numbers do you claim these statements to be true? Fram (talk) 16:30, 11 September 2025 (UTC)[reply]
I'm very curious about this as well. Because the public research I've seen suggests it didn't harm ptwiki, but I have had multiple conversations with various WMF staffers who firmly believe it did. While I expressed reasons other than this above why I supported keeping IP editing, that was before I realized that, no matter what, temp accounts reset after 90 days. So understanding what evidence we have about this would be important for me in any such discussion about disabling IP editing. Best, Barkeep49 (talk) 16:47, 11 September 2025 (UTC)[reply]
I too am interested in this question, and share Fram's concern: causal inference in statistics is very hard, and at minimum a proper difference-in-differences model is necessary to attempt to capture the causal effect of disallowing IP editing on content, which we don't seem to have. KevinL (aka L235·t·c) 17:57, 11 September 2025 (UTC)[reply]
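For readers unfamiliar with the term, a minimal difference-in-differences sketch using pandas and statsmodels; the wikis, months, and edit counts below are entirely made up for illustration, and a serious analysis would also need seasonality controls, more comparison wikis, and a check of the parallel-trends assumption:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly edit counts for a "treated" wiki (IP editing disabled
# after month 3) and a comparison wiki; all numbers are invented.
df = pd.DataFrame({
    "wiki":  ["pt"] * 6 + ["es"] * 6,
    "month": list(range(1, 7)) * 2,
    "edits": [150, 148, 152, 130, 128, 131,   # pt: drop after month 3
              160, 161, 159, 158, 162, 160],  # es: roughly flat
})
df["treated"] = (df["wiki"] == "pt").astype(int)
df["post"] = (df["month"] > 3).astype(int)

# Classic two-way DiD: the coefficient on treated:post estimates the effect,
# under the parallel-trends assumption.
model = smf.ols("edits ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```

Without something like this, a raw before/after comparison on ptwiki alone cannot separate the effect of the change from background trends shared with other wikis.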
Hello! I want to first clarify about the metric. The leading metric we looked at for ptwiki is Net non-reverted content edits - defined as the number of content (main-namespace) edits that were not reverted within 48 hours, excluding bot edits, reverted edits, and edits that reverted other edits. We chose this metric because we felt it was most representative of the impact on the community's content health as a result of this change. Unfortunately, this metric is not displayed by default on stats.wikimedia.org.
We have measured the impact of this change three times since the change was implemented: In August 2021, June 2022 and April 2024. Each time we saw a similar downward trend in Net non-reverted content edits. You can see how the numbers compare over the four years in the most recent report, Table 6. In Q1 of this year we saw a decline of as much as 36% compared to pre-restriction days. We also compared this trend with Spanish, German, French and Italian Wikipedias and did not see the same trend on those wikis.
You are right in noting that there have been many positive outcomes from this change as well - fewer blocks, reverts, and page protections - all pointing to a decrease in vandalism on the project. The feedback from the survey was quite positive as well. However, we do not think these benefits are worth the trade-off of the decline in net non-reverted content edits. @Benjamin Mako Hill and his team wrote about the Value of IP Editing to offer their perspective on this too, in case you haven't seen it.
Lastly, I want to point out that before embarking on temporary accounts our team seriously considered turning off logged out editing as a viable alternative. Some of you might recall that we put out a call to communities that want to experiment with this change. The Farsi Wikipedia experiment was a result of this call. If this option did turn out to be viable, it would have been the easier way out - way less work than temporary accounts. Unfortunately the results from ptwiki and fawiki were not what we had hoped for. -- NKohli (WMF) (talk) 13:19, 12 September 2025 (UTC)[reply]
I find it disingenuous that you never mentioned the only metric that matters: the editors of ptwiki are happy with banning IP edits, and they have no intention of going back. Moreover, the metric you do focus on, net non-reverted content edits, clearly shows that ptwiki was already in decline before the change. Tercer (talk) 14:43, 12 September 2025 (UTC)[reply]
I disagree that editor happiness is the only metric that matters. I am here to serve our readers and so if our readers are being hurt by having old information, when new information would be possible, or (more importantly) incorrect information when correct information would be possible, that matters a great deal to me. It also matters a great deal to me about whether turning off IP editing harms the pipeline to gaining more new registered editors. Best, Barkeep49 (talk) 17:30, 12 September 2025 (UTC)[reply]
I generally tend to express my unhappiness instead of leaving right away if there is something I don't like, since I have invested a lot in the project. I imagine it's the same on other wikis. Ita140188 (talk) 16:51, 15 September 2025 (UTC)[reply]
Perhaps you are a very dedicated editor that will stay no matter what, but you can't generalize this to everyone. Editors come and go all the time. I don't think there's really any doubt about whether unhappy editors tend to leave the project.
And in this particular case WMF has already been clear that it will push through regardless of editors' opinions, so "expressing your unhappiness" won't make any difference. Tercer (talk) 15:52, 16 September 2025 (UTC)[reply]
Also, it's not a good look reputation-wise if readers are exposed to more vandalism or long-term abuse, which they most likely will be in the long run with the temporary accounts feature. Not all vandalism is reverted quickly. For instance, to pick a relatively low-stakes example, if temporary accounts had been active here in May, I would never have discovered this edit because of the 90-day cutoff for retrieving IP information (see this comment of mine for more context). If push came to shove I would absolutely support discontinuing IP editing ... but we're basically damned if we do, damned if we don't. When I was invited to participate in the WMF's let's talk program, one of the reasons I agreed to do so was to bring up my concerns about this cutoff. But I well know why it's been implemented. Graham87 (talk) 09:17, 14 September 2025 (UTC)[reply]
I think the only big issue with this is that everyone's complaining about traceability, since this doesn't really affect the reverting vandalism side of things aside from tracing. And this whole TA thing is literally reducing traceability, so you can't really get around that despite any attempt to do so. The alternative would be to set no expiration or longer expiration to the cookies, but then it would be basically 'we replaced IPs with something that looks a bit better but functions like an IP' 2A04:7F80:6E:D2B:900C:A6A9:FD99:F70 (talk) 14:31, 18 September 2025 (UTC)[reply]
Traceability is my main concern as well (see my comments above). In their FAQ, the WMF said that they are "open to extending" the 90 day period for IP retention. Maybe it should be increased?
In the same answer, they mention we could use "behavioral evidence or patterns of editing", but that's a bit hard to do for occasional vandals with few edits. ARandomName123 (talk) Ping me! 14:57, 18 September 2025 (UTC)[reply]
I could also see cases where an IP user on a changing range who clears cookies could, either purposely or accidentally, sock in a low-volume way that would be really hard to notice based on behavioral evidence alone. In a recent AfD, I encountered an IP who nominated the article and then later voted when their range changed slightly. I don't think they intended to be malicious, but I was able to flag that I thought these two edits were by the same user. There wasn't really anything behavioral that stood out to connect the two edits, and in the case of temp accounts, I wouldn't have been able to identify them as being from the same editor. Non-admins who frequently close discussions should probably have TAIV. Sariel Xilo (talk) 16:28, 18 September 2025 (UTC)[reply]
I wouldn't say the metric was already in decline before the date. It seemed to be just jumping up and down within the same range, but after that there was a very clear downward trend. Aaron Liu (talk) 11:47, 17 September 2025 (UTC)[reply]
There were roughly 195k non-reverted edits in 2017, 132k in 2019 (the baseline), and 107k in 2023. The decline from 2017 to 2019 was roughly 32% (2017 was about 47% higher than 2019), still larger than the roughly 19% decline from 2019 to 2023 (reported by the WMF as 22%) that it considers so disastrous.
I don't think these numbers can be correct. I just checked, and the last 5000 non-minor mainspace non-bot edits on ptwiki[53] go back 2 days and 3 hours, which works out to some 70,000 edits a month. This would mean that about half of those edits are not counted as "net non-reverted content edits", despite the much lower revert rate since disabling IP editing (the revert rate in 2024 was below 6%). Is there any explanation anywhere of what they actually consider to be "content edits"? Fram (talk) 16:37, 25 September 2025 (UTC)[reply]
Content edits are identified based on whether they are from a content namespace. This is the monthly average data (no idea why it's the heading of the first column instead of the last four); the numbers average all the monthly totals within that quarter instead of being a sum total. Aaron Liu (talk) 01:48, 26 September 2025 (UTC)[reply]
I'm not sure I agree with We chose this metric because we felt it was most representative of the impact on the community's content health as a result of this change. If community systems are overwhelmed in a community that has IP editing (with or without temp accounts), the edits that stay unreverted may be, on the whole, a net negative to the project and to its readers. Put another way: if a community is overwhelmed, then the pre-change net non-reverted edit count is inflated above what policies and guidelines would suggest it should be, and if the community is then not overwhelmed afterwards, the lower numbers may be showing the true rate. I am also not sure I agree that it is the only metric worth looking at - as I indicated above, statistics about overall community health in terms of editor registration, retention, and "moving up the ranks" also feel worth examination. I would suggest English Wikipedia is not currently overwhelmed and so we do have a good baseline - something I don't know was the case for ptwiki - but I do worry that these changes will overwhelm the system because of the extra work it is going to require to deal with unregistered accounts. Best, Barkeep49 (talk) 17:38, 12 September 2025 (UTC)[reply]
@Barkeep49 I did not mean to imply that this was the only metric worth looking at. As you can see in the report, we did examine multiple other metrics and also carried out community survey(s) to assess how editors feel about the change. However, this metric stands out as important to us because it indicates a sustained loss in high-quality contributions and has consistently been in decline on ptwiki since the restriction was put in place.
I would also like to add that our team has been continually working on delivering tools to assist with anti-vandalism work. hCaptcha, GlobalContributions, IP Info, AbuseFilter improvements (including IP reputation filters), UserInfo card etc to name a few. We strongly care about moderator burden and this is reflected in our team's priorities. If you have ideas for how we can do these better, your thoughts are welcome on the talk page. -- NKohli (WMF) (talk) 11:15, 15 September 2025 (UTC)[reply]
@NKohli (WMF) I did read the report and did see other metrics. In the most recent report, the two other takeaways were favorable on disabling IP editing. The fact that the foundation has decided that the one metric which showed a decrease is so alarming as to call it a failure suggests that the WMF does think it's the only metric that matters. I appreciate you answering my question - I really do - but I think my original assessment, the public research I've seen suggests it didn't harm ptwiki, needs to be amended to the public research I've seen suggests mixed results on ptwiki, which does not, for me, justify the labeling the Foundation has chosen to attach to it. Best, Barkeep49 (talk) 14:36, 15 September 2025 (UTC)[reply]
It's also worth noting that @MuddyB: complained about the surge of vandalism on the Swahili Wikipedia (where he is an admin) following the enabling of temporary accounts, though as I understand it, IP editing may previously have been disabled outright on that wiki. [54] Hemiauchenia (talk) 23:32, 12 September 2025 (UTC)[reply]
Temporary accounts are going to be "rammed" down everyone's throats as they are being made for legal reasons. For better or worse, office actions exist for these sorts of matters. (And curiously Swahili Wikipedia was another one that had Vector2022 imposed over the wishes of the community. That said, Vector2022 has also now become universal across all wikis, as temporary accounts will also.) CMD (talk) 10:18, 13 September 2025 (UTC)[reply]
Please read the thread before commenting. The subject is banning IP edits as an alternative to introducing temporary accounts. It would also solve the legal problem. Tercer (talk) 11:25, 13 September 2025 (UTC)[reply]
You are misunderstanding the implications of the proposal. Banning IP edits is not an alternative to temporary accounts; the two actions are technically independent of each other. Temporary accounts are being implemented whether IP edits are allowed or not. Even if en.wiki responds by banning article editing by IPs, we will still have to figure out how to work with temporary accounts on talk pages. CMD (talk) 14:35, 13 September 2025 (UTC)[reply]
Of course it is an alternative. If IP edits are banned there's no longer a legal reason for implementing temporary accounts. Are you claiming that WMF would nevertheless implement temporary accounts? Just out of spite? I find that hard to believe. Tercer (talk) 14:49, 13 September 2025 (UTC)[reply]
Why do you assume that when people say we should ban IP editing they are only referring to mainspace? But, yeah, as a practical matter, anonymous editing exists (and thus temporary accounts also exist) and that's not going to change any time soon. So the community needs to figure out how to handle them. RoySmith(talk)14:50, 13 September 2025 (UTC)[reply]
I think the WMF would implement temporary accounts because they already exist and have already been rolled out and will continue to be rolled out as a standard part of the underlying software for every wiki, whatever en.wiki does, rather than out of spite. I assume that in general the IP editing bans will likely be called for with the main space in mind, because of the consistent raising of the pt.wiki precedent, as well as on-wiki precedent regarding how we currently handle protections and even weird situations like the ARBPIA ECP talk page restrictions. CMD (talk) 15:08, 13 September 2025 (UTC)[reply]
To be fair I don't think the latest surge in vandalism on swwiki is related to temporary accounts. WP:LTA/Wikinger decided to target swwiki in the past weeks/months on an almost daily basis. The LTA uses rapidly changing proxy IPs which is a burden to admins with or without temporary accounts.
I did a quick check and it seems to me that none of the swwiki admins enabled their access to temporary account IPs which also means they can't use features like IP autoreveal – and have no way of knowing (except based on behaviour) if a temporary account is a newbie or a potential LTA.
@Johannnes89 It's chaos, completely. Temp account actors are now on my blog, commenting gibberish. Good thing WordPress doesn't allow comments without approval. I'm ditching them every now and then. Muddyb (talk) 13:21, 18 September 2025 (UTC)[reply]
I can’t see the blog comments but I bet all of them were written by WP:LTA/Wikinger. I don’t think the situation on your home wiki would be much different without temporary accounts (except that some tools currently require a few more clicks). Wikinger has annoyed different projects for years and unfortunately he currently chooses to annoy swwiki. You might want to check mw:Extension:IPReputation/AbuseFilter variables in case that’s helpful to fight against his proxy abuse (unfortunately many open proxy IPs are not known to IPoid). Johannnes89 (talk) 17:24, 18 September 2025 (UTC)[reply]
What's the point of comparing different months in different years (August 2021, June 2022 and April 2024)? This will not eliminate seasonality effects. Maybe it's not that 2024 saw fewer edits, but that April generally has fewer edits than August or June? Ita140188 (talk) 16:49, 15 September 2025 (UTC)[reply]
I think it inevitable that Wikipedia projects will disable anonymous editing in the future. As projects grow, the opportunity for anonymous editors to do anything productive continues to shrink. (1) The level of knowledge necessary to contribute positively to the projects keeps increasing: more policies, more guidelines, more standards, more templates. This growth in required knowledge is glacially slow but inexorable. (2) There is an ever-increasing lack of ability for editors to contribute in general, due to the (ever unattainable, thankfully) goal of completing the project. The lack of productive work possibilities gives ever-decreasing opportunities for anonymous users to contribute positively. (3) The ratio of administrators to the amount of work administrators need to do continues to worsen. Those are just a few of the factors in play that are driving this reality. Imagine, if you will, Wikipedia 50 years from now. There will always be growth, to be sure, but the opportunity for anonymous users to do anything will be almost absent. There needs to be a long-term strategy to reverse these trends, or the flow of new blood coming into the projects will dry up. We're already in a long-term drought. --Hammersoft (talk) 14:52, 12 September 2025 (UTC)[reply]
There won't be any Wikipedia 50 years from now. What Wikipedia does is harness the energy of many people to read books, newspapers, journal articles, etc., and distill them into encyclopedia articles. In way less than 50 years from now, AI will be good enough to make that an obsolete concept. RoySmith (talk) 01:26, 13 September 2025 (UTC)[reply]
Yeah, 50 years is quite optimistic. I can see the project lasting for another decade or two, but beyond that... I'm not so sure. Some1 (talk) 01:43, 13 September 2025 (UTC)[reply]
AI will be good enough to decide what is true? Leaving this to AI (ie. most likely to a private corporation) will never be acceptable, no matter how "good" the AI is. Wikipedia works because it's based on consensus among people. Ita140188 (talk) 16:37, 15 September 2025 (UTC)[reply]
Nah, there's enough spare capital that Wikipedia won't go under. Most likely in 50 years it will be like Jane's Fighting Ships: minimal readers, but a powerful, nostalgic legacy project the caretakers won't let go of. Server load drops with readership, and hosting fees are minimal. I would even venture the brand will be around in some form a century from now. 184.152.65.118 (talk) 23:55, 2 November 2025 (UTC)[reply]
I would second what Aaron Liu says. Doubtless editing has become more difficult than it was on average even a decade ago. At the same time we are always building new things and updating old ones. Massive amounts of information are added by casual and one time editors and quite often fully in compliance with policy, or easily tweaked to be so, without those doing the edits knowing the name of even one policy here. 184.152.65.118 (talk) 23:46, 2 November 2025 (UTC)[reply]
I think, let's see how it goes, but certainly keep this option open if it proves untenable, as I suspect it well might. I'm not tremendously impressed with WMF over this whole thing; there was a lot of pushback on the idea, so they came up with "We're legally required to do this!", but then when they were asked "By what law, where?", they wouldn't answer that. SeraphimbladeTalk to me08:46, 16 September 2025 (UTC)[reply]
‘any information relating to an identified or identifiable natural person’, including an online identifier that identifies the person directly or indirectly. Like an account identifier created just for that person that persists across IP addresses? I still think it's a lot of work to have gone through just to have communities disable IP/anon editing entirely. ScottishFinnishRadish (talk) 10:48, 16 September 2025 (UTC)[reply]
I think the defense is that it is the equivalent of an anonymized, randomly generated username. The data is (for all intents and purposes) anonymized, and after 90 days a "random party" will not be able to map your TA to you with any level of certainty. Sohom (talk) 16:01, 16 September 2025 (UTC)[reply]
@Seraphimblade, You are framing the events in the wrong order: "We're legally required to do this" (WMF does nothing for a while) -> "We had some light regulator scrutiny" -> "WMF scrambles to implement IP Masking" -> "Community outrage at the initial idea" -> "WMF slows down, spends a lot more time building some anti-abuse tooling around it" (and now here we are). Also, the reason the WMF is cagey about why they need to implement it is because it's typically bad legal strategy to publicly proclaim "we are currently breaking this exact provision of GDPR". (And, yes we are probably flouting multiple privacy laws including but not limited to GDPR's absolute stance on IP addresses and CCPA's slightly more nuanced take on IP addresses) It's frustrating as a volunteer but I think understandable from the point of the view of the WMF. Sohom (talk) 16:16, 16 September 2025 (UTC)[reply]
It’s also bad legal strategy to publicly proclaim that you’re going to violate the ePrivacy Directive instead of the GDPR, by openly admitting in an FAQ that your cookies are not strictly necessary. But here we are. 98.97.4.79 (talk) 02:35, 17 September 2025 (UTC)[reply]
This cookie is necessary iff you choose to edit. (If your argument is somehow "we should not ever set a cookie", I'd like to see you defend the concept of a session identifier.) Regarding the ability to refuse the cookie, you are welcome to refrain from editing, and the cookie will not be set at all. (And if you clear your cookies regularly, a new one will be set every session.) If you do edit, you will be given an anonymous identity that will be destroyed/expired after 90 days. I don't see how any of this violates the ePrivacy Directive. Sohom (talk) 02:55, 17 September 2025 (UTC)[reply]
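For context, a generic sketch of what a 90-day identifier cookie looks like at the HTTP level; this is not MediaWiki's actual implementation, and the cookie name and value are invented for illustration:

```python
from http.cookies import SimpleCookie

# Generic illustration of a 90-day cookie, not MediaWiki's real cookie;
# name and value format are invented for this sketch.
cookie = SimpleCookie()
cookie["tempAccountToken"] = "example-opaque-token"
cookie["tempAccountToken"]["max-age"] = 90 * 24 * 60 * 60  # 90 days in seconds
cookie["tempAccountToken"]["path"] = "/"
cookie["tempAccountToken"]["secure"] = True
cookie["tempAccountToken"]["httponly"] = True

# The header a server would send; once it expires (or the browser clears it),
# a later edit would get a fresh token and therefore a new temporary account.
print(cookie["tempAccountToken"].OutputString())
```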
The cookie is not “strictly necessary for the delivery of a service requested by the user”, because the FAQ admits that editing will work just fine even if a browser discards the cookie. The purpose of the cookie is apparently to reduce the number of extra database entries created by the WMF’s own software, which is not a service requested by the user, so users must be presented with the option to accept or decline it. Sites can’t just say that cookies are “necessary” for their own private reasons; the law would have no effect if that were the case. 98.97.4.79 (talk) 03:14, 17 September 2025 (UTC)[reply]
Again, you have the explicit ability to click cancel on an edit or not edit at all, which would be a declination of the cookie (and a declination does not adversely affect your reading experience). Also, the cookie is required not because of "the number of extra database entries" created, but rather for attributing the edit to a user, a service you request and agree to by clicking the big blue "Publish changes" (or the large "Reply") button. By doing that, you are agreeing that the cookie essential for attribution is set on your device. Your argument does not make sense in this context. Sohom (talk) 03:29, 17 September 2025 (UTC)[reply]
Also, to put things into context, Wikimedia's infrastructure is largely open-source in a way that no other top-10 website is. The Foundation does not share any identifiers, and has a privacy policy that is much more detailed than any other top-10 site. If you are looking for technical privacy violations, you are barking up the wrong tree here. The search engine you used to get to this site probably collects an order of magnitude more data about you than Wikimedia will ever get from its temporary account rollout. Sohom (talk) 03:38, 17 September 2025 (UTC)[reply]
Access to specific website content may still be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose.
Surely "the service as can be provided with a specific amount of labor" is the service? We could, in a strictly literal sense, serve pages as printed paper via FedEx, employing millions of clerks and envelope-stuffers, and this would require no cookies at all, but I scarcely think this would prove they were unnecessary all along. jp×g🗯️09:09, 17 September 2025 (UTC)[reply]
@Sohom Datta I wonder if the cookie would be unnecessary if it were technically possible to store all the device-identifying info at the host. Or is that even worse from a data collection standpoint? Of course, the mechanism won't change this late in the game. David10244 (talk) 06:34, 11 November 2025 (UTC)[reply]
So the takeaway here is that, despite claims to the contrary, there is no evidence at all that the disabling of IP editing at Portuguese Wikipedia had any actual negative consequences (i.e. effects not also felt at languages which didn't disable IP editing, or which weren't already present at Portuguese Wikipedia before the disabling)? It seems that Portuguese wiki flourishes just as well (or as badly) as other languages in all meaningful statistics, that they are not considering reversing their choice, and that they have a lot less vandalism to revert. I suppose the WMF will adapt their FAQ and other documentation to correctly present this? Fram (talk) 10:26, 22 September 2025 (UTC)[reply]
The "Portuguese wiki flourishes just as well (or as badly)", I would not term a "reduction in good faith unreverted edits" in such a manner. Yes, editor morale is up, but there are less contributions overall potentially having less vandalism to revert but also potentially hurting the readers in terms of how updated the information is (or not?), make of that what you will. The answer is up for debate and to my understanding the WMF has decided to take the more pessimistic interpretation of data here (which is still valid within this context and does not constitute a misrepresentation). From your POV, you want to take the more positive interpretation due to your entrenched position/expected outcome of "turning IP addresses should not cause problems and instead will improve morale". What you have identified are a bunch of threats of validity, but these threats of validity are coming from a position of "I expected to see a different result" and the real answer is "there are indicators of a reduction in the number of edits but we don't really know for sure". Sohom (talk) 12:36, 22 September 2025 (UTC)[reply]
They have taken the one metric which vaguely supports their position if you don't consider that the same trend was visible before IP editing was disabled. And from that, they decide "Would disallowing or limiting anonymous editing be a good alternative? Unlikely." But sure, my "entrenched position", which is not the "real answer", is the issue here. Your "we don't really know for sure" is not the same as stating "unlikely".
I do wonder how many of the "non-reverted edits" prior to the disabling of IP edits were just unconstructive edits which were not found because the other editors couldn't catch them all. When I e.g. think back to the time IP article creation was allowed on enwiki, I recall that while many poor creations were found quickly, we still had a much larger number of unacceptable new articles which lasted for longer than 48 hours. If the same applies to "non-reverted edits" on ptwiki, then the decline in that number is even less of a sign of a problem. It's too bad that this metric happens to be the one we can't compare for ourselves (unlike the other stats, which turn out to indicate no problems at ptwiki compared to other wikis). Fram (talk) 15:36, 22 September 2025 (UTC)[reply]
What I'm saying is that I think the data is up for interpretation; you are interpreting the data in a very specific way (one that reflects your biases) and then heavily implying/making loaded assumptions about the WMF's intentions based on that, and trying to strong-arm that conclusion. To be blunt, I think the WMF's interpretation of the data is also a valid perspective on the data (which does not invalidate other perspectives, including yours). While I disagree with your heavily implied conclusion of "they cherry-picked data", I agree that the WMF should have done a better job of distinguishing between subtle vandalism and good-faith edits, but I view that as a much more subjective metric that can be infinitely bikeshedded and argued about, so I do understand why the WMF went with the specific parameter that they did. Sohom (talk) 16:06, 22 September 2025 (UTC)[reply]
You should read again the FAQ entry. WMF wrote that "The results have been largely harmful", and "we cannot say that disabling logged-out editing on any project is a beneficial solution". Such strong conclusions simply do not follow from this ambiguous data. And yes, it does reek of ideological blindness. Tercer (talk) 17:20, 22 September 2025 (UTC)[reply]
I have read the FAQ; you are quoting editorialized text out of context. Most studies/reports/research present a broad conclusion ("we found X"), while underlying that are always caveats and assumptions about other factors possibly being (ir)relevant. If we decided to demand the level of rigour that you are demanding from the WMF, such that no statement can ever be made unless every possible confounder was fully resolved, we'd need to start revising a large majority of the academic literature. Yes, the WMF should have done a better job of representing the other relevant factors, but that does not detract from the fact that the interpretation is valid within the data they had, and you are within your rights to disagree with that conclusion since you interpret the data differently. Sohom (talk) 17:55, 22 September 2025 (UTC)[reply]
Trying to get my head around the WMF claims about why the Portuguese experiment is not successful. Their 2024 study[55] claims:
Revert rate decreased by 47%
Non-reverted edits is 20% lower
Non-reverted mainspace edits 22% lower
They compare 2019/2020 with 2023/2024, which is partially a bad fit because 2020, the Covid year, was an outlier in nearly all statistics. So let's compare 2019 to 2023. This[56] is the total number of edits by editors per month. Comparing 2019 to 2023, we get
January: 155613 vs. 183959
February: 137478 vs. 156183
March: 155340 vs. 174412
April: 139152 vs. 144277
May: 169299 vs. 155512
June: 160814 vs. 150569
July: 175301 vs. 148916
August: 162330 vs. 151748
September: 158192 vs. 146822
October: 151785 vs. 156850
November: 158451 vs. 139514
December: 139242 vs. 147410
Total: 2019 = 1862997, 2023 = 1856172. I hope I have not miscalculated anything, please check!
So 6 of the 12 months show an increase in edits, 6 show a decrease. In total we have an extremely feeble decrease of 6825 edits, or less than 0.5%!
But we also have a 47% "revert rate decrease". Each reverted edit represents at least 2 edits that no longer need to be made (1 or more edits that get reverted, plus the revert itself). We sadly only have a percentage for this, but the revert rate dropped from about 10% of edits to about 6%. That 4-point drop, doubled, means roughly 8% of the edits made in 2019, some 150,000 edits, no longer need to be made in 2023 (and of course this also means that some 75,000 revert edits, which would count as "non-reverted content edits", no longer need to be made: perhaps these are the "missing" edits in the WMF reasoning?)
Overall, it seems that we have an actual clear increase in good edits on ptwiki between 2019 and 2023, instead of the decrease the WMF claims and uses as its basis to declare the ptwiki disabling of IP editing inadvisable. Fram (talk) 16:58, 24 September 2025 (UTC)[reply]
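For anyone who wants to check the arithmetic above, here is a small sketch reproducing it; the monthly totals are the figures quoted from the stats site, and the 10%/6% revert rates are the rough approximations used in this comment, not official WMF numbers:

```python
# Monthly totals quoted above; revert-rate figures are the comment's own
# approximations, not official WMF numbers.
edits_2019 = [155613, 137478, 155340, 139152, 169299, 160814,
              175301, 162330, 158192, 151785, 158451, 139242]
edits_2023 = [183959, 156183, 174412, 144277, 155512, 150569,
              148916, 151748, 146822, 156850, 139514, 147410]

total_2019, total_2023 = sum(edits_2019), sum(edits_2023)
print(total_2019, total_2023, total_2019 - total_2023)   # 1862997 1856172 6825
print((total_2019 - total_2023) / total_2019)             # ~0.004, i.e. under 0.5%

# Each reverted edit implies roughly two edits (the bad edit plus the revert),
# so a drop in revert rate from ~10% to ~6% frees up roughly 8% of 2019's edits.
freed = 2 * (0.10 - 0.06) * total_2019
print(round(freed))                                        # roughly 149,000 edits
```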
Edits reverting vandalism were not counted as "non-reverted content edits":
Net non-reverted content edits are defined as the number of content (main-namespace) edits that were not reverted within 48 hours, excluding bot edits, reverted edits, and edits that reverted other edits.
Thanks. Then I understand even less where they have found such a drop, when the total number of human edits is nearly the same and the number of vandal-revert pairs clearly dropped. Fram (talk) 17:38, 24 September 2025 (UTC)[reply]
Hello again, on behalf of the Product Safety and Integrity team. First, thank you for all the comments above and all the effort you are putting into making this a smooth change. We wanted to acknowledge all the discussions here and on Discord, changes to existing tools, updates to meta-pages, the mention in yesterday's Signpost, and other steps you've taken. We are grateful for your openness and curiosity about temporary accounts and new tools.
Technically, everything appears to be ready for deployment next week. However, we have decided to postpone the deployment to October 21st (by two weeks). We are going to take this time to hold more discussions – we want to meet with you to discuss the deployment and clarify anything about the tools you may still be unsure about. We will also put together some additional guidance and documentation to help you prepare to use the new system.
Taking this opportunity to look back at all the discussions, we wanted to comment on a couple of points:
Users who currently can block IP addresses will still be able to see and block IP addresses from temporary accounts.
From our deployments so far, we do not see evidence that volunteers are experiencing increased burden in managing abuse from logged-out editors. Since 2023, we've been working with stewards and other trusted volunteers to figure out what is needed to effectively handle abuse from temporary accounts. This appears to have been successful on other wikis, and we would not be proposing deployment if we were seeing evidence that this was going to increase community burden.
Since this project was first announced years ago, our approach has changed. Initially we called it IP Masking, which focused on just one problem – IP addresses being so visible. Now, it's called Temporary Accounts, which is not only about hiding IPs – it's an additional and separate layer, with new tools built specifically to allow more precise actions (per Wikipedia:IP addresses are not people).
Some tips on the tooling:
We updated AbuseFilter to support matching against the IP address of a temporary account, though this isn't technically bi-directional support.
Special:IPContributions allows viewing all edits and temporary accounts connected to a specific IP address or IP range. (Bear in mind that a temporary account may be using multiple IPs though.) A small API sketch for looking up a single known temporary account's contributions follows at the end of this post.
We expect that IP reputation AbuseFilter filters will be useful in mitigating abuse from logged-out editors, without needing to target a specific IP address.
The User Info card makes it possible for anyone to see the bucketed count of temporary accounts active on the same IP address range.
If you'd like to test some of your workflows with temporary accounts enabled or learn more:
We have set up a Patchdemo. Log in as user: IP Viewer, password: password321!. You may also patrol temporary accounts via Test1 and Test2 history pages.
We invite you to read the Access to IP page and suggest changes to it.
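As referenced in the tooling tips above, here is a hedged sketch of pulling a single known temporary account's contributions through the standard Action API (list=usercontribs), on the assumption that a temporary account name is accepted like any other username. This is separate from Special:IPContributions, which requires the IP-viewer right, and ~2025-12345-67 is the placeholder name used elsewhere on this page, not a real account:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

# Assumes list=usercontribs treats a temporary account name like any other
# username; no IP-viewer right is needed for this kind of lookup.
params = {
    "action": "query",
    "list": "usercontribs",
    "ucuser": "~2025-12345-67",  # placeholder temporary account name
    "uclimit": 50,
    "format": "json",
    "formatversion": 2,
}
response = requests.get(API, params=params).json()

for contrib in response["query"]["usercontribs"]:
    print(contrib["timestamp"], contrib["title"], contrib.get("comment", ""))
```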
My primary ask is that you seriously consider extending the duration IP addresses are available (currently will be set at 90 days). Please monitor this experience closely, as I believe that erasure will cause us to lose some control over persistent threats. Among other things, we'll be unable to assess collateral damage from blocks as readily as we currently do, and we'll lose the ability to track periodic IP hoppers to identify the proper breadth of a range block. — rsjaffe🗣️21:01, 3 October 2025 (UTC)[reply]
It seems highly unlikely that will happen. However, I could see it making sense to spin up a TAIV wiki, similar to checkuser-wiki, where TAIVs could maintain data on an as-needed basis. I suspect there will be some pushback to that idea, but consider that if we provide people with a secure and convenient way to store the data, they will use that. If we don't provide that, the data will still get stored, except now it'll be on post-it notes, files on people's laptops, Google Docs, and all sorts of other places where we have less control over it. RoySmith(talk)22:43, 3 October 2025 (UTC)[reply]
I agree. Most permanent accounts have few to no edits. Of the almost 50 million accounts we have on Wikipedia, only 2.5 million are autoconfirmed, and even then, the vast majority of them are inactive. The original reason I wanted to create an account was so that I wouldn't reveal my IP address, but I didn't realize I would have other benefits, like editing semi-protected pages (although I almost never edit protected pages even when I have the ability to), moving/renaming pages directly, and gaining additional permissions (I have page mover and template editor, both of which have <500 users). I only made a few edits in the month I created the account, before editing properly from 2023. JuniperChill (talk) 17:50, 5 October 2025 (UTC)[reply]
We've said why multiple times in this thread, if you didn't take the time to read through it, please do before commenting. Sohom (talk) 02:05, 4 October 2025 (UTC)[reply]
I have an idea to help track IP hoppers while still preserving privacy. My initial thoughts are that, when a temporary account is blocked, to add a block log entry to the underlying IP(s) as well, so that retrospective analysis could detect the amount of disruption occurring from a specific IP. To protect identity, the log entry could have a different time (e.g., rounded to the hour or to the day), and would only list the type (partial vs regular) and duration of block. Omitting the text entry from the original block prevents leakage of any identifying info in the narrative the blocking admin added. Any thoughts? — rsjaffe🗣️18:52, 12 October 2025 (UTC)[reply]
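To make the idea concrete, a minimal sketch of what such a privacy-reduced mirrored entry could look like; all field names and the rounding rule are illustrative only, not an existing MediaWiki feature:

```python
from datetime import datetime, timezone

def mirrored_log_entry(block_time, block_type, duration):
    """Build a privacy-reduced copy of a temporary-account block for the
    underlying IP's log, as sketched in the proposal above. Field names are
    invented for illustration; the narrative/reason is deliberately omitted."""
    rounded = block_time.replace(minute=0, second=0, microsecond=0)  # round down to the hour
    return {
        "timestamp": rounded.isoformat(),
        "type": block_type,        # "partial" or "sitewide"
        "duration": duration,      # e.g. "31 hours"
    }

entry = mirrored_log_entry(datetime(2025, 10, 12, 18, 52, tzinfo=timezone.utc),
                           "sitewide", "31 hours")
print(entry)  # {'timestamp': '2025-10-12T18:00:00+00:00', 'type': 'sitewide', 'duration': '31 hours'}
```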
A minor addendum to the list of feature changes - we also updated Nuke so that when temporary accounts are deployed, administrators entering an IP in the tool will fetch all pages created from any temporary account which used that IP. Samwalton9 (WMF) (talk) 07:14, 6 October 2025 (UTC)[reply]
Ah yes, that FAQ, where you still use the debunked claims about Portuguese Wikipedia to dismiss calls to simply disable IP editing instead. Please see the section right above this one on the discrepancy between the numbers used by the WMF (from the sole metric which supposedly supported this), and the actual numbers from this metric, and the evidence from other metrics. If you can't present this fairly, then why should we believe any of your other claims about experiences with temp accounts on other wikis? Fram (talk) 09:43, 6 October 2025 (UTC)[reply]
What’s even the point of all this? Wouldn’t it be objectively more efficient, and far less time and energy consuming when it comes to fighting vandalism, to simply allow only registered accounts to edit? The way this entire process is being handled feels like the WMF is forcing temporary accounts on everyone without genuinely considering the many meaningful and well-reasoned concerns and proposals raised by numerous editors. Based on evidence from the Portuguese Wikipedia, it seems clear to me that disabling IP editing had no negative consequences and actually freed up a significant amount of time and energy for editors and administrators there. I genuinely don’t understand the rationale behind the WMF’s refusal to consider proposals allowing only registered accounts to edit. — EarthDude (wannatalk?) 14:58, 6 October 2025 (UTC)[reply]
A very minor counter-vandalism consequence is that you now have to use regex search if you want to monitor talk pages by individual IP range (or do any other kind of searching-based review of temporary account edits), since the only searchable identifier unique to temporary accounts is a tilde character. Temporary account viewing permissions won't do anything to help there. Monitoring all new talk page edits by date still seems to work, though. Gnomingstuff (talk) 20:20, 11 October 2025 (UTC)[reply]
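If you do end up regex-matching temporary account names, here is a minimal sketch based on the naming scheme described further down this page (a tilde, the year of creation, then a number split into groups of up to five digits); the exact server-side format is an assumption here, so treat the pattern as illustrative:

```python
import re

# Pattern assumed from the naming scheme described elsewhere on this page;
# the real server-side format may differ slightly.
TEMP_ACCOUNT = re.compile(r"~\d{4}(?:-\d{1,5})+")

text = "Reverted edits by ~2025-12345-67 and ~2025-98765-43-21 to the last clean version"
print(TEMP_ACCOUNT.findall(text))  # ['~2025-12345-67', '~2025-98765-43-21']
```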
I'm really late on this (but still aware that this has been going on for a while), and I honestly think that this change, while probably necessary, basically kills my motivation to continue tracking LTAs on Wikipedia (I've already been inactive due to real life stressors, but I still sometimes have a few LTAs/anon vandals I look at).
As one example, I often check multiple IP ranges for evidence of a specific LTA or vandal, and with the hiding of IPs, I'll have to make way more assumptions about a particular anonymous user rather than saying "yeah, they're editing these pages + I know they've used this IP range before, so this is probably X vandal." This especially will happen when I talk to admins and "vandal fighters." I'm not sure if I'll be able to use the TAIV right to keep checking specific ranges for vandals. wizzito | say hello!16:05, 1 November 2025 (UTC)[reply]
e.g. I check the range 50.48.0.0/16 (block range · block log (global) · WHOIS (partial)) pretty regularly, because I am aware of two disruptive editors on there - one (actually active right now) makes bad copyedits to racism and hate crime-related articles (see User:Beyond My Ken/Bad copyediting IP) and the other makes unsourced edits to mostly voice actor, Pokemon, and political articles (although they did acknowledge their behavior relatively recently). I guess I won't be able to look at that /16 anymore and have to monitor specific pages for specific behavior, which is hard but somewhat doable. wizzito | say hello!16:10, 1 November 2025 (UTC)[reply]
Nope, they won't. Yes, looking at the range will be logged (for compliance purposes), but a reason will not need to be provided. Sohom (talk) 16:22, 1 November 2025 (UTC)[reply]
Well, I mean, you have a pretty good justification, and they're not going to check on every single person accessing IP addresses unless they detect bot-like behavior or something like that. This is all security theater for legal purposes, not something actually meant to protect the IP addresses of anonymous contributors. ChildrenWillListen (🐄 talk, 🫘 contribs) 16:22, 1 November 2025 (UTC)[reply]
@NKohli (WMF), EMill-WMF, SGrabarczuk (WMF), and Samwalton9 (WMF): we have tried to duplicate the figures you (WMF) have provided to justify why the ptwiki example shouldn't be followed, but no matter how hard we try, we don't come anywhere near the given reduction in non-reverted content edits which is used as the sole justification for this. Depending on how we count, we get no reduction at all or a very minimal one, not the 20%+ one you use (see the latter parts of the above section, with calcs by me and by User:Aaron Liu). I (and judging from the above discussion quite a few others) really would like a better answer to this before proceeding with this. Fram (talk) 08:29, 8 October 2025 (UTC)[reply]
I wasn't involved in that experiment, just the work to ensure Nuke would continue to work & expand its capabilities for temporary accounts. Hopefully someone with more insight can get back to you soon, though it may take some time since this is about the details of data analysis. Samwalton9 (WMF) (talk) 09:55, 15 October 2025 (UTC)[reply]
@Fram, Tercer, and EarthDude: You're experienced editors, and I'm an intermediate editor. Beginner editors can't find this thread. If IP editing gets disabled, what's stopping experienced editors from demanding that registration also get disabled and put behind a referral system?
Why is the top 0.01% of editors ignoring the curse of knowledge and speaking for the bottom 99.99%? When impatient new users face the user friction of mandatory registration, they also become unhappy and leave, and the project dies. On the internet, there are many existing projects that frustrate power users, but projects that frustrate new users are dying.
@Graham87: Reputation-wise, vandalism and partisanship facilitated by anonymous editing is a relatively minor criticism of Wikipedia. Immediately after, the lead section criticizes clique behavior (from contributors as well as administrators and other top figures), social stratification between a guardian class and newer users, excessive rule-making, ... and ..., which would be worsened by disabling IP editing. Most vandalism isn't the sneaky kind that would affect our factual reliability. Furthermore, opponents of TAs appear mostly male, so how would disabling IP editing affect women wanting to start editing, and the systemic bias along gender ... lines? 173.206.134.138 (talk) 00:50, 9 October 2025 (UTC)[reply]
(edit conflict) With respect, I don't believe that your comparison to Nupedia is relevant. Nupedia was not a wiki and was written predominantly by SMEs. Wikipedia and its sister projects are by nature collaborative, so I don't think that anyone will demand...that registration also get[s] disabled. The disabling of IP editing on ptwiki was a choice by that community, and they have seen a reduction in vandalism on that project. Keep in mind that ptwiki is a lot smaller of a project than enwiki. I concur with CWL in that your accusations of gender bias in the opposition here also appear to be unfounded; opponents of TAs appear mostly male is roughly in line with the overall gender bias of the project, which is mostly male, though in recent years involvement by female and LGBTQ+ editors has steadily increased. Aydoh8 [what have I done now?] 01:01, 9 October 2025 (UTC)[reply]
To answer the direct question, there is nothing stopping experienced editors demanding anything. Self-selection towards experienced users is always going to happen naturally. However, experienced editors are the group who have developed a variety of onboarding or outreach tools, so the potential that they would decide to end registration seems a small concern. As for the idea that social stratification would be worsened by disabling IP editing, that seems quite back to front. Disabling IP editing means there won't be a "class" of users flagging themselves as new and/or unwilling to be a recognised member of the community. CMD (talk) 03:42, 9 October 2025 (UTC)[reply]
Oh no, Wikipedia is going to die if we require users to create an account. Like almost every single website on the internet. That's why the internet is dead, right? So much friction! One has to think of a username... and a password. No, that's too much effort we would be demanding, clearly our very survival depends on implementing temporary accounts. Tercer (talk) 09:05, 9 October 2025 (UTC)[reply]
I am an experienced editor, but I will oppose any proposal demanding that registration also get disabled and put behind a referral system and I would expect others to do the same whatever happens to IP editing. That's what stops it happening. Phil Bridger (talk) 17:53, 9 October 2025 (UTC)[reply]
With the rollout of temporary accounts in just over a week, I think it would be a good idea to send a mass message to the user talk pages of all administrators about this imminent substantial change. We can use Wikipedia:Administrators/Message list for this purpose. For reference, in the past we have used the list to inform administrators about the extended-confirmed protection level back when it was brand new; see e.g. Wikipedia talk:Protection policy/Archive 17#Draft mass message to administrators. If we sent one for that, then I think it makes sense to send one for this as well. I'm thinking we could send something like this, much of which is copy-pasted from above (see above for attribution):
Hello, {{subst:ROOTPAGENAME}}. This message is being sent to remind you of significant upcoming changes regarding logged-out editing.
Starting 4 November 2025, logged-out editors will no longer have their IP address publicly displayed. Instead, they will have a temporary account (TA) associated with their edits. Users with some extended rights like administrators and CheckUsers, as well as users with the temporary account IP viewer (TAIV) user right will still be able to reveal temporary users' IP addresses and all contributions made by temporary accounts from a specific IP address or range.
How do temporary accounts work?
Editing from a temporary account
When a logged-out user completes an edit or a logged action for the first time, a cookie will be set in this user's browser and a temporary account tied with this cookie will be automatically created for them. This account's name will follow the pattern: ~2025-12345-67 (a tilde, year of creation, a number split into units of 5).
All subsequent actions by the temporary account user will be attributed to this username. The cookie will expire 90 days after its creation. As long as it exists, all edits made from this device will be attributed to this temporary account. It will be the same account even if the IP address changes, unless the user clears their cookies or uses a different device or web browser.
A record of the IP address used at the time of each edit will be stored for 90 days after the edit. Users with the temporary account IP viewer (TAIV) user right will be able to see the underlying IP addresses.
As a measure against vandalism, there are two limitations on the creation of temporary accounts:
There has to be a minimum of 10 minutes between subsequent temporary account creations from the same IP (or /64 range in case of IPv6).
There can be a maximum of 6 temporary accounts created from an IP (or /64 range) within a period of 24 hours.
Temporary account IP viewer user right
How to enable IP Reveal
Administrators may grant the temporary account IP viewer (TAIV) user right to non-administrators who meet the criteria for granting. Importantly, an editor must make an explicit request for the permission (e.g. at WP:PERM/TAIV)—administrators are not permitted to assign the right without a request.
Administrators will automatically be able to see temporary account IP information once they have accepted the Access to Temporary Account IP Addresses Policy via Special:Preferences or via the onboarding dialog which comes up after temporary accounts are deployed.
Impact for administrators
It will be possible to block many abusers by just blocking their temporary accounts. A blocked person won't be able to create new temporary accounts quickly if the admin selects the autoblock option.
It will still be possible to block an IP address or IP range.
Temporary accounts will not be retroactively applied to contributions made before the deployment. On Special:Contributions, you will be able to see existing IP user contributions, but not new contributions made by temporary accounts on that IP address. Instead, you should use Special:IPContributions for this (see a video about IPContributions in a gallery below).
Rules about IP information disclosure
Publicizing an IP address gained through TAIV access is generally not allowed (e.g. ~2025-12345-67 previously edited as 192.0.2.1 or ~2025-12345-67's IP address is 192.0.2.1).
Publicly linking a TA to another TA is allowed if "reasonably believed to be necessary". (e.g. ~2025-12345-67 and ~2025-12345-68 are likely the same person, so I am counting their reverts together toward 3RR, but not Hey ~2025-12345-68, you did some good editing as ~2025-12345-67)
Excellent idea, thanks @Mz7 for working on this! Tbh I had the same idea and was going to start a draft today :D I was/am going to send this message to CUs, TAIVs, basically anybody with access to temp account IP addresses. SGrabarczuk (WMF) (talk) 14:56, 13 October 2025 (UTC)[reply]
Agree that this should be a MassMessage (in addition to a watchlist notice). That MassMessage should give a brief rundown of Wikipedia:Temporary account IP viewer#What can and can't be said. If you can be desysopped for sharing information with unauthorized parties, you should get a clear warning; not all admins are active every month of the year, so a watchlist notice would be insufficient in my view. On the other hand, to alert non-admins, a WLN would be great. Maybe include a "do one now" link for WP:PERM/TAIV applications, hopefully decreasing the rush? HouseBlaster (talk • he/they) 16:58, 15 October 2025 (UTC)[reply]
This is a good idea. I am a little worried the mass message might be getting a little too long, but I do think it is important to note that directly connecting IPs to temp accounts is going to be against the rules. Mz7 (talk) 02:19, 16 October 2025 (UTC)[reply]
I'm trying to get my head round the "Guide to temporary accounts" that we have now received, and what I see is that it just got a lot harder to be an admin. I haven't noticed this question being put above or below: isn't there any concern that the new system will lead to a drain of active admins present and future? Especially of the not-very-technically-minded admins such as myself. (We exist, and we even have some uses.) I'm really not sure I want to be an admin any more, with this tricky roundabout method for avoiding the disallowing of IP edits. Compare the discussion of Portuguese wikipedia above. Bishonen | tålk14:51, 31 October 2025 (UTC).[reply]
Same for non-admins with TA access, the chances of e.g. inadvertently "outing" someone (by e.g. linking temp accounts through stating their IP addresses where it is no longer allowed) seem to have increased significantly, and I will not ask for that right to avoid just such issues. I still see zero benefits from this whole system over disallowing IP editing completely (or in the mainspace at the very least). Fram (talk) 15:10, 31 October 2025 (UTC)[reply]
Per the comment below, there are some improvements on the way to reduce the number of clicks required to reveal IP information. IP auto-reveal should be on for 3 months at a time, which for all intents and purposes should mostly return the state of things back to normal(-ish). My personal thought is for us to at least give the feature a try. Sohom (talk) 15:12, 31 October 2025 (UTC)[reply]
Bish, you can just ignore them or pass it to someone else to deal with, e.g. me while I'm around! Not sure where the tools menu to turn it on permanently is meant to be.
I don't know from trick or treaters, but I did see an orange T-Rex shuffling down the street this afternoon. Which is kind of weird because I thought T-Rex season was over by now. RoySmith(talk)01:00, 1 November 2025 (UTC)[reply]
Personally, I'm planning on not enabling the ability to view the IPs, and leaving it all to people who're comfortable being mini-checkusers with all the restrictions that implies. I don't do much work in the areas of adminship that deal with IPs as anything other than just-another-identifier anyway. Anomie⚔17:40, 31 October 2025 (UTC)[reply]
That discussion is disappointing to me. There was all this talk about sending out a mass message to all admins, making a guide for this, discussion about changes before this happens, and even videos to help guide people with using the system. And yet, the same users avoided the questions asked of them, with one exception, who said they were not involved.
Personally, that is making me distrustful of this whole thing. Hopefully things go smoothly, as it seems like there is the potential for this to be a problem. --Super Goku V (talk) 23:09, 1 November 2025 (UTC)[reply]
The mass message was sent out, the guide was community driven (there is a bunch of WMF docs and those are being improved) and the changes below are being worked on. Sohom (talk) 00:13, 2 November 2025 (UTC)[reply]
Yep. My point was that so much was done here by certain editors, except answer the questions regarding the Portuguese Wikipedia numbers. That is why I am starting to be distrustful as the claims from the FAQ are not being backed up. --Super Goku V (talk) 03:08, 2 November 2025 (UTC)[reply]
There have been many questions and I believe many people have been doing their best to answer them. @Super Goku V: Is there a specific question that you want to be better addressed? jlwoodwa (talk) 01:09, 2 November 2025 (UTC)[reply]
There is a question, but it can only be addressed by certain users. The question is to the WMF employees about how they got the 20% reduction in edits for the Portuguese Wikipedia. There was one employee who did answer, but only to say that they were not involved with that. --Super Goku V (talk) 03:24, 2 November 2025 (UTC)[reply]
This does seem helpful. Thank you very much for including it, Sohom.
(I will need more time to fully read it, but having done a quick read I find the line We find evidence for this tradeoff even within the editing activity of editors registered prior to the cutoff, demonstrating how participation by unregistered editors stimulates activity across the board to be the most significant so far.) --Super Goku V (talk) 15:01, 3 November 2025 (UTC)[reply]
Chiming in here because I got the admin mass message; I wanted to share a few things that I think are relevant. Many years ago at the Foundation, I did some work with data scientists to measure anonymous editing and experiment with inviting (not requiring) anonymous editors to log in. I had a hypothesis that many anonymous editors simply hadn't realized or even considered signing up. It essentially didn't work: it increased registrations, but resulted in a net loss of total unreverted edits. We even tried several different approaches, like prompting both before and after someone saved.
Very interesting to note as well is that the results differed slightly across English, German, French etc. (though none of them worked overall). This is the most important lesson to me, because it reinforces that there are significant differences in how policy or software changes work on different Wikipedias. For context, ptwiki is heavily dominated by Brazilian editors. In Brazil, culture is extremely social and it's the number two country by total time spent online per person [57][58]. When we talked to Brazilian editors, many experienced editors said that they wouldn't even mind if we offered Facebook login to make signing up for Wikipedia easier (obviously this wouldn't be allowed by the privacy policy and we never even considered doing it).
TL;DR: Even if you think disabling IP editing was good for Portuguese Wikipedia, it might not have the same impact on English-speaking readers and editors. In my view, it is pretty likely to be more negative here, given the cultural dynamics of ptwiki in general. Our past experiments indicate that even optionally asking people to log in just distracts them and doesn't increase high quality edits. Steven Walling • talk02:48, 3 November 2025 (UTC)[reply]
Which may all be true, but it doesn't explain why the WMF needs to make apparently false claims about ptwiki and the results to justify their decisions. Not the first time their "research" and "claims" turn out to be incorrect and skewed to support the WMF narrative. And then they wonder why some people are so negative and distrustful about the WMF... Fram (talk) 10:17, 3 November 2025 (UTC)[reply]
I think you've just done the same thing you're criticizing the WMF for, which is also harmful. The WMF hasn't made false claims; it's made incomplete or, if one wants to reach for the maximally strong word, misleading claims. Best, Barkeep49 (talk) 12:39, 3 November 2025 (UTC)[reply]
They claim a 20% reduction in productive edits, but this isn't true. When asked about this, no reply, and no change to their claims. There is nothing "incomplete" about these claims, and yes, they are misleading, because they are wrong. And the longer they remain silent about this, the more it looks as if they are deliberately wrong. But if it makes you feel better to claim that pointing this out is the same thing as making these claims, be my guest. Fram (talk) 13:08, 3 November 2025 (UTC)[reply]
Thank you for providing the link to the report and for trying to answer this, Barkeep49. However, this was already discussed above in the same areas I was talking about.
If the WMF staff who were asked could answer the questions or explain why they cannot, then I would appreciate it, as that would at least begin to address my concerns. --Super Goku V (talk) 14:37, 3 November 2025 (UTC)[reply]
Hello all, sorry about the radio silence here. We are taking some time to review the different points being made in this thread and asking our research analysts to evaluate some of these questions, on top of their existing work. Our apologies that this is taking a bit of time but we do still plan to post some substantive thoughts here once we have them. --
I appreciate the acknowledgement of this. While this isn't an answer, it does explain why there hasn't been one so far. (As an aside, thank you to the analysts for reviewing this.) --Super Goku V (talk) 19:24, 7 November 2025 (UTC)[reply]
Update: Removing clicks and tightening rate limits
Hey again! This is another update from the Product Safety and Integrity team. We took the time to meet with functionaries about how to make the temporary accounts deployment go more smoothly for your community.
A big theme of these discussions was that requiring users to make even a small number of clicks and choices can add up to real time and cognitive load piled on top of a community's anti-vandalism work. We also identified some relatively low-lift technical improvements to make it a little harder for vandals to engage in common block-evasion techniques.
Based on our talks with functionaries, we made two decisions:
We are introducing technical changes to significantly cut down clicks and choices needed to show IP addresses, and to tighten up how temporary accounts are rate limited.
To avoid these last-minute changes creating bugs or instability, we will delay deployment one final time, to November 4th. This lets us deploy these changes through our normal processes.
The biggest change is that we will allow IP auto-reveal to last for up to 3 months to reduce the practical and cognitive load involved in showing IP addresses. (T407222) This doesn't change who can see temporary account IP addresses, but should make the work easier for many of those who do.
We're also updating the onboarding dialog to allow users to turn on 3-month auto-reveal at the same time as they opt into generally having access to temporary account IPs. (T407257) This dialog is displayed to all users who can view temporary account IP addresses, the first time they visit relevant pages.
For rate limiting, we added a 10-minute limit to temporary account creations on top of the existing rate limits of 6 accounts per IP per day. (T405565) We are also now applying IPv6-based rate limits to an entire /64, rather than a single unique IPv6 address. (T406710) These changes are already deployed, so please do let us know if you see any issues.
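(For anyone unfamiliar with what rate-limiting "an entire /64" means in practice, here is a minimal sketch. It is an illustration only, not MediaWiki's actual code, and the function name rate_limit_key is invented: it shows how an IPv6 address can be collapsed to its /64 network so that every address in that range shares one account-creation bucket, while IPv4 addresses are bucketed individually.)

```python
import ipaddress

def rate_limit_key(ip_string: str) -> str:
    """Return the bucket key for account-creation rate limiting (illustrative only).

    IPv4 addresses are bucketed individually, while IPv6 addresses are
    collapsed to their /64 network, so every address in the same /64
    shares one rate-limit bucket.
    """
    ip = ipaddress.ip_address(ip_string)
    if ip.version == 6:
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return str(ip)

# Two IPv6 addresses from the same /64 map to the same bucket:
assert rate_limit_key("2001:db8:1:2::10") == rate_limit_key("2001:db8:1:2:ffff::1")
# An IPv4 address is its own bucket:
print(rate_limit_key("203.0.113.7"))  # -> 203.0.113.7
```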
These are the last changes we will make before deployment. We know that last-minute changes and delays are not ideal, but we felt that on balance it was worthwhile to take this bit of extra time to remove more friction and respond to community feedback.
We have also edited Mz7's draft of a mass message for admins to help introduce the feature and these changes. We'll coordinate on who will send a similar message for other users with access to temporary account IP addresses.
Finally, we've also created some instructional videos, to better explain how to work with temporary accounts:
10-minute limit to temporary account creations Should there be an exemption for the second account of the day? MediaWiki bugs gave me multiple TAs. Using Firefox, I received different TAs for frwiki and mediawiki.org. I also got blocked on zhwiki as a bot for clicking "Show preview" too many times in Firefox. After switching to Chrome, my first TA had an error with SUL cookies in incognito mode. 66.49.187.185 (talk) 04:00, 17 October 2025 (UTC)[reply]
SGrabarczuk (WMF), aside from the five projects who are converting from LiquidThreads to read-only Flow and from November 4 being the TA deployment date for this project, there is no specified date for the implementation of temporary accounts for the remaining projects (Wikimedia Commons, Wikidata, etc.) on phab:T340001. Codename Noreste (discuss • contribs) 04:50, 17 October 2025 (UTC)[reply]
Hey @Codename Noreste, that's correct, Commons and Wikidata must go after most large Wikipedias, and Spanish and Russian had their reasons to be excluded from the earlier deployments. We'll talk to these communities soon; we're just focusing on English now. Do you have a specific question about the remaining deployments? SGrabarczuk (WMF) (talk) 21:47, 17 October 2025 (UTC)[reply]
And it seems like according to the Phabricator link you posted, TAs are coming to the other WMF sites by the end of November (after English Wikipedia's introduction in five hours), with the notable exception of Russian Wikipedia. JuniperChill (talk) 19:02, 3 November 2025 (UTC)[reply]
I was gonna ask the same question (but say 00:01 to reduce the confusion between the start/end of day). But anyway, I would assume so given that computers (as well as Wikipedia itself) use UTC due to issues with DST in some regions. If so, TA implementation would begin in 4 hrs and 40 mins. UTC is also called GMT, but the latter is not used in the context of computers.
So in the US (assuming it starts at midnight GMT), it would begin 3 Nov at 19:00 (ET) or 16:00 (PT). In NZ, it's 4 Nov at 13:00. Edited 23:27 GMT: Actually, it turns out that TA implementation is from 4 Nov 08:00 UTC (midnight PT, 03:00 ET), per the comments below. JuniperChill (talk) 19:18, 3 November 2025 (UTC)[reply]
That's why I said ET and not EST/EDT (I even checked if NY is 5 hours behind London, which it is). It's even more confusing that in the UK, winter time is called GMT while in the summer it's BST (British Summer Time). So when I see EST, I thought it meant Eastern Summer Time, not Eastern Standard Time (see the article Eastern Time Zone for more). Plus the UK changes clocks on the last Sunday of March and the last Sunday of October, while the US does it on the second Sunday of March and the first Sunday of November. This means that NY would be 4 hours behind London for a short period of time, until the UK catches up. Australia has three time zones in winter, but five in summer, because not all states observe DST. Therefore, this is where the confusion lies regarding DST; hence why UTC exists. JuniperChill (talk) 22:02, 3 November 2025 (UTC)[reply]
Everybody should just use UTC and let people do their own local conversions if they want. This is why I have a UTC clock on my toolbar (in addition to the local clock). Computers are good at adding and subtracting. People, not so much. (Obligatory whine about why my damn car, which has 47 more computers in it than any vehicle should, makes me fix the dashboard clock manually twice a year). RoySmith(talk)22:35, 3 November 2025 (UTC)[reply]
I struggle with the conversion, so I just leave it in local time so at least I know how long ago a comment was. (As an aside, you do not want a device that assumes the time change, in case they change the rules. I had an alarm clock as a kid that I had to change four times a year.) --Super Goku V (talk) 02:04, 4 November 2025 (UTC)[reply]
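(Since the thread above turned into timezone arithmetic: a minimal Python sketch of the kind of conversion Roy describes, using the 4 November 08:00 UTC deployment time mentioned above. The zone names are standard IANA identifiers; the printed results are what the standard library produces for that date.)

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# The deployment time mentioned above: 4 November 2025, 08:00 UTC.
deploy_utc = datetime(2025, 11, 4, 8, 0, tzinfo=timezone.utc)

for tz in ("America/New_York", "America/Los_Angeles", "Pacific/Auckland"):
    local = deploy_utc.astimezone(ZoneInfo(tz))
    print(f"{tz:22} {local:%Y-%m-%d %H:%M %Z}")
# America/New_York       2025-11-04 03:00 EST
# America/Los_Angeles    2025-11-04 00:00 PST
# Pacific/Auckland       2025-11-04 21:00 NZDT
```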
(I'm the same person as the previous message, but I closed my incognito tab between that message and this one. When I tried to post this in my new incognito tab, I saw the message "Visitors to Wikipedia using your IP address have created 1 accounts in the last 24 hours, which is the maximum allowed in this time period. As a result, visitors using this IP address cannot create any more accounts at the moment." Isn't the limit meant to be 6 accounts per 24 hour period?) ~2025-30907-85 (talk) 08:26, 4 November 2025 (UTC)[reply]
This appears to be a bug. We're looking into it. There is a limit of 1 every 10 minutes with a maximum of 6 temp account creations allowed every 24 hours. -- NKohli (WMF) (talk) 09:30, 4 November 2025 (UTC)[reply]
Yeah, I don't think I would have been aware either if I didn't watch some admin talk pages many years ago. In other issues, I'm interested to see how LTA behavior will change in response to this. For example, will Andrew5 start clearing his cookies or use throwaway accounts instead (as I personally believe he already has, but this isn't a place to discuss that in detail)? wizzito | say hello!13:22, 4 November 2025 (UTC)[reply]
They usually aren't, except if they file an unblock request on their user talk page (but in this case they can only edit their user talk page, and only as long as the option to block talk page access isn't enabled) -> phab:T398673. Johannnes89 (talk) 17:44, 4 November 2025 (UTC)[reply]
This also adds significantly more time than I originally estimated. As a for instance, you need another page load to go from the temporary user IP contribution page to the IP contribution page if you want to use twinkle to place a block. ScottishFinnishRadish (talk) 16:33, 4 November 2025 (UTC)[reply]
Sounds like something that probably should have been addressed before a major rollout. This isn't a minimum viable product situation, this is the production environment on one of the top 10 visited websites in the world, and this seriously affects anti-abuse efforts which are entirely undertaken by volunteers. ScottishFinnishRadish (talk) 16:45, 4 November 2025 (UTC)[reply]
WP:AUTOBLOCK: There is an internal autoblock expiry time variable, which is set to 24 hours, meaning that autoblocks that are automatically applied will only last for that amount of time and will expire afterwards. That's just another issue: you have to look at the IP to see the history of editing to determine how long a block should be. It essentially adds a "check the IP" step to every temporary account block, which then adds an additional "Legacy IP edits" step to see the history from the IP, as the legacy IP editing history is on a different page from the IP's editing history since temporary accounts were created. Luckily this task only has to be completed ~10,000 times a month, so adding 30 seconds to a minute only adds 80-160 hours of volunteer burden a month. ScottishFinnishRadish (talk) 16:52, 4 November 2025 (UTC)[reply]
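(A quick back-of-the-envelope check of the figures above; the 10,000 blocks per month and 30-60 seconds are the commenter's estimates, not measured values.)

```python
# The 10,000 blocks/month and 30-60 second figures are the commenter's estimates.
blocks_per_month = 10_000

for extra_seconds in (30, 60):
    hours = blocks_per_month * extra_seconds / 3600
    print(f"{extra_seconds}s extra per block -> ~{hours:.0f} hours/month")
# 30s extra per block -> ~83 hours/month
# 60s extra per block -> ~167 hours/month
```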
Why do you need to do that? When you encounter a vandalizing account you also don’t perform CU to check if there are other sockpuppets. Just block the TA indefinitely like any regular vandal account and only care about their IP / IP range if you observe a pattern of abusive behavior with multiple TA. Johannnes89 (talk) 16:55, 4 November 2025 (UTC)[reply]
The vast majority of reports at AIV were IPs. I look at the history of each to determine the correct block length. The point is to prevent disruption, not to allow someone to vandalize until they are blocked (assuming they're not changing their temporary account to avoid scrutiny and never get blocked) every 24 hours. ScottishFinnishRadish (talk) 17:09, 4 November 2025 (UTC)[reply]
You should treat vandalizing TA the same way as vandalizing accounts. If you would issue an indefinite block for a regular account, do the same with the TA. There's no point looking up the IP unless you suspect long-term abuse. Just as we rely on autoblock when it comes to blocking regular vandal accounts, autoblock is also sufficient in most cases when it comes to TA vandals.
~330x on frwiki (but ~200 of those from a single non-admin patroller who seems to use IP reveal much more often than anyone else on any project I've seen so far)
Those numbers used to be higher when TA got introduced on each project, but after some time people realized that they need to look up the IP less often than we thought back in the pre-TA days. Johannnes89 (talk) 18:07, 4 November 2025 (UTC)[reply]
"You should treat vandalizing TA the same way as vandalizing accounts. If you would issue an indefinite block for a regular account, do the same with the TA. There's no point looking up the IP unless you suspect long-term abuse." That's entirely incorrect. If we're trying to prevent disruption, then checking the editing history of the IP is a must. That log is also very inaccurate: I just checked the IPContributions page (revealing the temp accounts) of half a dozen IPs, and it logged a single action. As I said, literally the first TA I looked at today was abusing the ability to reset their account to commit personally targeted bigoted vandalism. They had far surpassed the point where they would have been at AIV, but due to the way temp accounts work, they hadn't been reported. ScottishFinnishRadish (talk) 18:42, 4 November 2025 (UTC)[reply]
There's simply no need to check the IP for every TA just to make sure they haven't been operating other TAs in the past – based on your way of thinking, you would also run CU on every vandalizing account just to make sure they haven't used other vandal accounts?
I would be curious to see which TA you are talking about (we could also continue discussing the specific example offwiki). "Personally targeted bigoted vandalism" – that sounds to me like something people could report no matter how many times the TA vandalized. Blocking one TA would have stopped the vandal due to autoblock – no matter how many other TA they created (blocking old TA once the vandal has moved on is also not needed, by the way; they can't re-use old TA once they have abandoned them). Johannnes89 (talk) 19:01, 4 November 2025 (UTC)[reply]
Is anyone else getting a weird bug of usernames getting cut across the linebreak in mobile watchlist view? I suspect it might be an issue only for admins and/or editors with temporary account viewer permissions? signed, Rosguilltalk15:46, 4 November 2025 (UTC)[reply]
Is there really no way to see ips short of tracking down the edit on recentchanges or the watchlist? Nothing in history or diffs or Special:Contributions (for the temp account - though at least on that, I get the ipinformation collapsed box, with everything except the ip itself), and even turning on autoreveal doesn't do anything. —Cryptic17:18, 4 November 2025 (UTC)[reply]
Way up above, I was one of several people to express concerns about the fact that IP addresses are only stored for ninety days. I just found and reverted some fairly severe vandalism from May last year; if temporary accounts had been used then, that edit would've been even harder to track down without going to the article in question and checking the page history. Here's what happened; today I came across an IP address on my watchlist that turned out to be part of a problematic school IP range, 168.212.0.0/16. I noticed that the range had been blocked several times. I know many people wouldn't go to the lengths of doing this, but I audited all edits from that range since the expiry of the last block in April 2024, which is how I found that vandalism among other things that escaped the three-month window. I messaged the admin who had previously blocked the range and he re-blocked it. These sorts of problems are why I think that with temporary accounts the way they are now, we should be more severe with blocks of school IP addresses, because by default they'll be firehoses of vandalism. Feel free to move this message if it'd be more suitable somewhere else. Graham87 (talk) 15:19, 8 November 2025 (UTC)[reply]
And some of our worst IP hopping vandals, including this one, are just blasting right through temporary accounts; I hoped this rollout would help stop some of these people, given that cookies are connected to accounts now, but oh well... wizzito | say hello!23:55, 11 November 2025 (UTC)[reply]
One thing we should have done was made people unable to log out of temporary accounts unless they forced it themselves (e.g. clearing browser cookies). This would have probably stopped some vandalism. wizzito | say hello!00:07, 12 November 2025 (UTC)[reply]
Another fun fact: it looks like a TA gets created when you try to make an edit, even if it fails. In my case, I opened an incognito window, tried to edit WP:Sandbox and hit an edit conflict. Ended up creating a TA with zero contributions and zero log entries. RoySmith(talk)02:50, 12 November 2025 (UTC)[reply]
Is there a reason we want this, and is there any indication to the TAs that if they do not exit session it will boot them out anyway in 90 days? CMD (talk) 14:57, 12 November 2025 (UTC)[reply]
The link shows 15 edits, with 14 of them by one account and 1 edit by another account. Are there other accounts associated with this vandal? For vandals who change IPs, is the activity you've linked to similar to what one would have seen with legacy IP edits, the difference being that the activity is partially obscured because there are two different accounts externally visible, rather than a single IPv6 /64 range? KHarlan (WMF) (talk) 07:18, 12 November 2025 (UTC)[reply]
You are right, sorry about that. Yes, the first account edited over the course of ~60 minutes, then a second account edited once ~90 minutes after the first one stopped editing, and a third one was created 11 minutes after the second one. KHarlan (WMF) (talk) 07:33, 12 November 2025 (UTC)[reply]
KHarlan (WMF) The thing is: the first TA was blocked a minute after that 60 minute period. Another account showed up about 2 hours after that and kept on editing, presumably because the user was able to switch to another device or simply... log out? wizzito | say hello!09:58, 14 November 2025 (UTC)[reply]
Or even block the IP in general, since only one IP on the /64 was used for all 15 edits. Given how IPv6 works, it looks to me like the vandal just logged out to avoid scrutiny instead of switching devices. wizzito | say hello!10:02, 14 November 2025 (UTC)[reply]
As an aside, long-term I think we will want to move away from using Twinkle for blocks and have everyone use Special:Block. I wrote the Twinkle block module 10 years ago as a way to block and issue a talk page template at the same time. That was the extent of it. Over the past decade, we've painfully been trying to maintain feature parity with Core. Now we're at a point where it's the opposite, and Core is missing functionality that's in Twinkle (phab:T392857). It will still be a "while", but the plan is to continue on with that effort and eventually there will be no need for Twinkle at all.
I guess just keep this in mind. Anything that you think Twinkle does better, please file a task or let me know, and we'll get it tracked. Ultimately I hope that, apart from browsing contributions or checking talk pages, admins will never need to leave Special:Block to do their job. — MusikAnimaltalk19:01, 4 November 2025 (UTC)[reply]
Twinkle's present killer features are prefills and templates. It's also convenient that it's a dialog rather than a whole separate page. Izno (talk) 19:09, 4 November 2025 (UTC)[reply]
This is one of those things that happens 10,000+ times a month, so seconds add up to hours pretty quickly. From the contribs page, it takes about 3-5 seconds to block through twinkle, including selecting the reason and filling out the template. Just having to click and load Special:Block would double that time, and that's without filling things out and then placing the template. ScottishFinnishRadish (talk) 19:25, 4 November 2025 (UTC)[reply]
A dialog is great, but it can't give you all the information you need with such limited space. For example, Twinkle only reports if there was any past block and gives only one set of details (and even then there are bugs!). I suspect most of us want to see the full log if there were any previous blocks. Special:Block does this, and likewise for any active range blocks.
The templates, prefills, etc. are part of phab:T392857. That would indeed be a requirement if we were ever to retire Twinkle. Fortunately, virtually every wiki has this or a similar workflow, so it seems it's time to bring it to Core! :) Once we have that, I suspect you'll find Special:Block just as convenient, as it will be designed as a "one-stop shop" for all things blocking. Heck, we could even throw the contributions in there in another accordion.
Anyway, fret not, as this is a long way away. We'd need design and user research, a full product treatment, etc. Besides, Twinkle is a gadget which we as a community are free to continue to use and maintain. I just don't know if I will be able to maintain it, is all! The multiblocks project was a massive effort, and I didn't have it in me to replicate that in Twinkle. Especially after hearing the affirmation from @Novem Linguae, I figure engineering resources are better spent on Special:Block so that all wikis get the same benefits. — MusikAnimaltalk03:11, 5 November 2025 (UTC)[reply]
Yes, I almost framed it as that big for me also. Special:Block wouldn't be especially slow if it had prefill/template parity, but having to make the context switch sucks. Izno (talk) 19:27, 4 November 2025 (UTC)[reply]
What Izno and SFR said, essentially. Unfortunately for you, the software you wrote is good as hell and everybody finds it more usable than the other thing... jp×g🗯️22:24, 4 November 2025 (UTC)[reply]
Another Twinkle feature for sending a message to IP users is (was) the Shared IP notice. One could use the WHOIS link to fill out the name of the owning org. I guess this feature is no longer meaningful. But if a shared IP is blocked, how would an editor from that IP be able to know this in advance? David Brooks (talk) 22:42, 4 November 2025 (UTC)[reply]
The very, very similar temp account names make it a lot harder to spot the same editor in e.g. recent changes (not always easy with IPs, but at least often a lot easier than it is now). I didn't spot the pattern of edits from here in recent changes, would probably have been easier before today. Fram (talk) 16:15, 4 November 2025 (UTC)[reply]
This seems like a perfect opportunity for some add-on script which assigns colors to TA account names in the various listings. Perhaps as a hash of the account name, so they're stable across listings. I'm guessing it would be a no-brainer for somebody who is good at JavaScript. RoySmith(talk)16:20, 4 November 2025 (UTC)[reply]
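(The actual gadgets linked below are JavaScript user scripts; purely as an illustration of the "hash of the account name" idea, here is a minimal Python sketch showing how a deterministic hash gives each temporary account a stable colour. The function name stable_color is invented for this example.)

```python
import hashlib

def stable_color(username: str) -> str:
    """Map a temporary-account name to a stable CSS hex colour (illustration only)."""
    digest = hashlib.sha256(username.encode("utf-8")).digest()
    r, g, b = digest[0], digest[1], digest[2]
    return f"#{r:02x}{g:02x}{b:02x}"

# The same name always hashes to the same colour, so the account looks
# identical across watchlists, histories, and recent changes.
print(stable_color("~2025-32507-53"))
print(stable_color("~2025-30907-85"))  # a different, but equally stable, colour
```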
It's not like there's a requirement for only one script though. A person with normal color vision could use a color-based script, one with color blindness but good acuity could use an icon-based script, etc. Anomie⚔15:14, 9 November 2025 (UTC)[reply]
One lunch later: User:Aaron Liu/TemporaPaint.js currently would give a text color. Sometimes the differences are small, though, so I plan to later add an inverted background color (which should also help readability a little) and maybe use a different hash function. Aaron Liu (talk) 17:54, 4 November 2025 (UTC)[reply]
As one of the many colour-blind editors :) I've created my own text-based version that gives the temp user a replacement human-readable username. It's deterministic, so a user should always be given the same username each time they're seen (although there is a risk of collisions, such that two users might be given the same name, albeit quite unlikely): User:JeffUK/VerboseTemporaryAccounts.js. JeffUK 09:41, 11 November 2025 (UTC)[reply]
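(Again just as an illustration of the idea behind a script like VerboseTemporaryAccounts.js rather than its actual code: a deterministic mapping from a temporary-account name to a human-readable alias, with the same small collision risk mentioned above. The word lists and function name are invented.)

```python
import hashlib

# Invented word lists; a real script would want far larger ones to reduce collisions.
ADJECTIVES = ["Amber", "Brisk", "Clever", "Dusty", "Eager", "Fuzzy", "Gentle", "Hasty"]
ANIMALS = ["Falcon", "Otter", "Badger", "Heron", "Lynx", "Marmot", "Puffin", "Wombat"]

def readable_alias(temp_name: str) -> str:
    """Deterministically map a temp-account name to a human-readable alias."""
    h = int.from_bytes(hashlib.sha256(temp_name.encode("utf-8")).digest()[:4], "big")
    return f"{ADJECTIVES[h % len(ADJECTIVES)]}{ANIMALS[(h // len(ADJECTIVES)) % len(ANIMALS)]}"

print(readable_alias("~2025-32507-53"))  # same alias every time this account is seen
```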
I created another CSS-only version (see User:isaacl/style/temporary-account-names) that inserts a bi-colour square next to temporary account names on history pages, and next to links to contributions from temporary accounts. It uses a very simplistic heuristic to detect links so it may produce false positives or negatives. (The implementation is subject to change as I gain more experience with it.) isaacl (talk) 04:24, 14 November 2025 (UTC)[reply]
Yeah, I came back after a break to do a little recent changes patrolling and honestly thought someone had set a few thousand bots on us...! There needs to be a 'Temporary User' notification somewhere on the temp user pages, contribution pages, and talk pages, to make it abundantly clear that this (User contributions for ~2025-32507-53) is a temporary user, and that this (User contributions for -2025-40404-6O) is not. It should also be clear from the signature that the IP user is a temporary account, or I can easily see people getting very confused about who they're talking to; a load of temporary users with nearly identical names commenting on an article, for instance, is going to be very hard to parse. Requiring a user script to be manually installed to patch this is not acceptable, as it will confuse new (and returning, source: me) users who won't know they need to do that. JeffUK 10:39, 10 November 2025 (UTC)[reply]
Before temporary accounts were introduced, most IP blocks used to be anon. only, which means they don't affect registered users operating under that IP/range. However, if a temporary account gets blocked, it'll autoblock any registered user using the same IPs for 24 hours (unless the admin unchecks autoblock, which most aren't doing right now). I'm not sure if this'll cause significant problems, but I'm putting it out here so people can be aware. ChildrenWillListen (🐄 talk, 🫘 contribs) 04:44, 5 November 2025 (UTC)[reply]
The Special:ActiveUsers count (NUMBEROFACTIVEUSERS) which appears on the Main Page has sharply jumped in the last few days, from around 115 thousand on November 3 to 140 thousand on November 7, because temporary accounts are now included, but IP addresses have not been listed. This number will probably fluctuate a lot in the future, because of how easy it is to create a new temporary account, so it could be more consistent with the magic word's previous behaviour to exclude them from the count. Xeroctic (talk) 08:35, 7 November 2025 (UTC)[reply]
For IP users it's quick and easy to tell that you're looking at an IP user, for temporary accounts they're indistinguishable from a normal account, unless you happen to know that a ~2 in the name means temporary. Requiring this special knowledge is a developer hack not a robust user-friendly solution. We need "Temporary User" in big letters on the Temporary Users' User Page, Talk Page, Contributions, and default signature, for this to even be remotely workable. JeffUK10:44, 10 November 2025 (UTC)[reply]
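(On the "~2 in the name" point: a tiny sketch of the kind of heuristic involved, based on the name format visible in this thread, e.g. ~2025-32507-53. This is an illustration only; a gadget should really ask MediaWiki whether an account is temporary rather than pattern-match the name, and the regex below is an assumption about the current format.)

```python
import re

# Assumed format, based on the names visible in this thread (e.g. ~2025-32507-53).
TEMP_ACCOUNT_RE = re.compile(r"^~\d{4}-\d+-\d+$")

def looks_like_temp_account(username: str) -> bool:
    """Crude name-based heuristic; a gadget should really ask MediaWiki instead."""
    return bool(TEMP_ACCOUNT_RE.match(username))

print(looks_like_temp_account("~2025-32507-53"))  # True
print(looks_like_temp_account("-2025-40404-6O"))  # False: a lookalike regular username
```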
@NKohli (WMF): I know some people hate temporary accounts and the rollout has had some hiccups (which isn't surprising for such a huge technical change), but personally I think this is a change for the better, both from a privacy and legal compliance perspective. Despite the roll-out bugs, I think this feature could easily have been a total disaster if the Trust & Safety Product Team had not spent the past six years carefully planning its implementation, along with all the supporting tools that were needed for mitigating its impact. This is probably the biggest technical change to Wikipedia in the past decade and despite some predictions otherwise, the sky is not yet falling. So I would like to say thank you to the Trust & Safety Product Team for your hard work and collaboration with the community to roll out this monumental change. That is all. Nosferattus (talk) 08:10, 12 November 2025 (UTC)[reply]
The discussion above about temporary accounts is quite heated. It's clear that there is some opposition, but I'd like to know if it's a majority of editors or just a couple of loud critics. I don't think it will change anything, as WMF never asked our opinion and made it clear that it will introduce temporary accounts regardless of what we say, but I think it is important to register what our opinion is. Furthermore, I think both alternatives are undesirable, but the legal situation forces us to choose one of these paths, so I'm asking a simple yes/no question to focus the debate and facilitate counting. Tercer (talk) 08:57, 8 October 2025 (UTC)[reply]
How will asking the same question in the same forum, read by the same audience, enable you to determine whether it is "a majority of editors or just a couple of loud critics"? --Elmidae (talk · contribs) 09:49, 8 October 2025 (UTC)[reply]
The question hasn't been explicitly asked before. We have lots of back and forth above, about several different subjects. And yes, I'm interested in the opinion of the editors that participate in this forum, who haven't expressed it clearly or didn't participate in the above discussion (that includes you). If you want to advertise this topic in other forums to get a wider participation that would be great. Tercer (talk) 10:06, 8 October 2025 (UTC)[reply]
Like Elmidae, I have not contributed to the "Temporary accounts rollout" thread and do not plan to. WP:PETITION disfavors such yes/no questions "since they not only encourage the community to avoid meaningful discourse and engagement, but also limit their scope to only one initially-stated opinion or preference with little or no opportunity for discussing and reconciling competing or opposing points of view." Even if you had 60% of a whopping 1000 respondents to vote for banning IP edits rather than introducing temporary accounts, that would be insufficient for our consensus-based model. ViridianPenguin🐧 (💬) 17:09, 8 October 2025 (UTC)[reply]
What is definitely insufficient for our consensus-based model is WMF's imposition of temporary accounts without asking for our opinion or giving us any alternatives.
A cornerstone of our consensus-based model is WP:RfCs, which do involve asking editors what their opinion is. Perhaps I should have started one straight away instead of trying to gather opinions informally. Then at least I wouldn't have had two responses that are only talking about format instead of the subject matter. Tercer (talk) 17:38, 8 October 2025 (UTC)[reply]
I think we're asking this question too early. I think we should let temporary accounts roll out, then if it goes poorly, someone will likely propose an RFC to ban logged out editing at that time. However, WMF has rolled out temporary accounts to most other wikis now, and those wikis haven't imploded. I think that is good evidence that temporary accounts won't create a vandalism catastrophe, and that we can continue to permit logged out editing. –Novem Linguae (talk) 00:47, 9 October 2025 (UTC)[reply]
As a matter of fact I believe that editors should make the decisions about Wikipedia, not WMF. It is not too early, it is too late, because the decision has been made without our input and will be implemented regardless. As for the other wikis, we have seen feedback only from a single editor from a single wiki, and it was very negative. That's not good evidence, but it is evidence of the opposite of what you claim. Tercer (talk) 09:19, 9 October 2025 (UTC)[reply]
Reacting once an implosion has occurred is too late. How is a community going to recover from said implosion anyway?
In phab:T364073 the Lithuanian Wikipedia disabled Content Translate because it had so many edits that the admins could not keep up.
That said, temporary accounts are not that bad. If a temporary-account vandal IP-hops without invalidating the account, one block is enough, whereas one block per IP, or a rangeblock (if the IPs are close enough together without collateral damage), would be needed now. Temporary-account vandals become hard to handle when they invalidate their accounts. I checked the Norwegian Wikipedia in June/July and compared IPs in months of last year to temporary accounts in the same months of this year. I found that the ratio hovers around 1:1, with occasional spikes to 2 temp accounts per IP. That spike seems normal to me, because temp accounts only exist for 90 days at most. For an IP-hopping temporary-account invalidator, there is always IP auto-reveal mode, which shows admins IPs in recent changes (as they are shown now) for a limited time. Snævar (talk) 13:58, 9 October 2025 (UTC)[reply]
Please stop talking about banning IP edits until the new system has had a six-month trial. People have tried banning IPs in the past and it gets quickly shot down. Repeating the half-baked proposal is counterproductive because it biases people such that they reflexively vote no next time, rather than examine the issue yet again. Johnuniq (talk) 00:56, 9 October 2025 (UTC)[reply]
What about "Please stop talking about introducing temporary accounts until we do a six-month trial of banning IP edits"? Unlike temporary accounts, that would be very easy to implement, and very easy to reverse, so we could actually have a trial. You know very well that temporary accounts are not a trial; they will not be revisited after six months.
And frankly, there's nothing "half-baked" about banning IP edits. It's technically simple, and has been done already. What else do you need to consider it fully baked? Tercer (talk) 09:23, 9 October 2025 (UTC)[reply]
All discussion prior to implementation will be entirely overshadowed by what actually happens when they're implemented. Either they will work out OK, or they will make it too difficult to counter disruptive editing and will have to be limited somehow. Realistically, I don't believe there is enough evidence to say for certain what will happen, but given the feedback I've seen from the WMF and the experience of other smaller wikis, I think it should be OK. -- LCU ActivelyDisinterested«@» °∆t°19:52, 9 October 2025 (UTC)[reply]
I don't think the impact will be large, but I'm certain it will be unambiguously negative. And I think that when Wikipedia is under attack from very powerful people, including the US government, WMF should be making the life of malicious actors harder, not easier. Tercer (talk) 19:50, 10 October 2025 (UTC)[reply]
While I’m not a fan of temporary accounts for LTA/sock-tracking purposes (IP geolocation is a cornerstone of linking accts to sockmasters), banning IP edits altogether would be a horrible idea - for every two-bit vandal, there’s a productive contributor that just didn’t want to or hasn’t decided to create an account yet. Hell, half of the recent USHL season pages have been maintained by an IP who’s filling a valuable gap in WP:NHL. TheKip(contribs)02:13, 11 October 2025 (UTC)[reply]
Yes, I am in favor of banning IP edits and temporary accounts. All editors should be required to create an account connected to a verified email address or phone number. Anyone can still edit and vandalism practically ceases to exist. 216.126.35.228 (talk) 17:14, 14 October 2025 (UTC)[reply]
This just defeats the whole point of all Wikimedia projects, which anyone can edit. If votes were allowed here, I would be strongly opposed to banning all unregistered edits, and I strongly support the introduction of temporary accounts. Codename Noreste (discuss • contribs) 22:26, 14 October 2025 (UTC)[reply]
Let's see how the "temporary accounts" bit works out, first, once the dust settles. If it works okay and there's not a major increase in vandalism and abuse, then the temporary accounts thing is still not good, and it would've been better to keep IPs, but if it's not causing massive headaches, then it is what it is. If it is causing major headaches and we do see a noticeable spike in vandalism, socking, LTA activity, etc., then we'll have to decide on next steps at that point, which may involve restricting or disabling anonymous editing. But let's wait until we actually have the data, rather than just speculation. SeraphimbladeTalk to me22:58, 14 October 2025 (UTC)[reply]
Even if 99% of Wikipedia editors support disabling IP/temporary account editing, I believe the WMF will still have the final say on whether to implement this change (and I believe they've stated before that they will not get rid of IP/temporary account editing). Some1 (talk) 23:03, 14 October 2025 (UTC)[reply]
Some statistics [59] which might be helpful evaluating TA rollout on enwiki. I've checked those stats for my home wiki after TA deployment and everything seemed fine. Johannnes89 (talk) 21:36, 4 November 2025 (UTC)[reply]
Strongest possible oppose to either, frankly. I would not be here if it were not for IP editing, and I know many of our greatest editors started out as IP addresses. I think this whole temporary account thing is dumb, but that is outside the scope of this. We've been doing this for over 20 years and it's worked fine. That's all. Lynch4416:52, 2 November 2025 (UTC)[reply]
Not opposed to the idea, but I don't support it at the moment. If I had responded last week, I might have been more supportive of banning IP edits over the temporary account program, but Sohom posted a research article in the discussion above that does indicate, with evidence, that there are costs to prohibiting non-registered edits, so I do have concerns. --Super Goku V (talk) 15:09, 3 November 2025 (UTC)[reply]
I think this would be a bad idea: anonymous users are, on the whole, a net positive, and adding friction to the "Spot issue with article > Fix issue with article" process is going to reduce the quality of articles overall. I would not like "Spot issue with the article > Try to edit it > Get told you have to register > Lose interest". I would much rather see a more integrated, proactive solution to getting people registered, something like the 'submit' page for any anonymous edit giving you a 'Now select a username and password' page with a prominent 'skip' option ("Spot issue with the article > edit it > register or not"). If, having something like that in place, we find that the number of temporary users drops significantly (with an equivalent increase in registrations), then we can review. This is how a lot of e-commerce sites do it, as they too are motivated to get the value (fixing pages/buying stuff) without adding friction, but would also like you to register if you don't mind, thank you very much. JeffUK 11:38, 10 November 2025 (UTC)[reply]
I have long considered the idea of temporary accounts to be unnecessary technical debt that only adds confusion to anti-abuse mechanisms. It's long past time that we followed other websites' leads in requiring registration. The fact that it is fairly easy to get access to the IP information means the privacy gains are not great; cutting off access to it also assumes catching such leakers in the first place, which is not happening if said IP address information was leaked privately and can't be discovered by us.--Jasper Deng(talk)08:24, 15 November 2025 (UTC)[reply]
The fact that it is fairly easy to get access to the IP information means the privacy gains are not great Before temp accounts, everybody could see an unregistered editor's IP. Now, it's 1000 people for English Wikipedia globally. Regarding the ease of getting to the threshold, see {{Registered editors by edit count}}. SGrabarczuk (WMF) (talk) 11:17, 15 November 2025 (UTC)[reply]
It's better than what it was before. The barrier to seeing IPs used to be nothing. Everyone saw logged-out editors' IP addresses even if they didn't want to. There's zero reason to know the IP address of a logged-out constructive contributor; the main benefit of attributing logged-out edits to IPs in the first place was that it made tracking vandals easier.
If we instead required registration for everyone, good-faith logged-out contributors would likely lose interest if they saw that they had to make an account. The bad-faith editors could just as easily vandalize after taking the 10 seconds to register. So requiring registration would be a net negative to the project.
I’m writing because when I became an admin, and later an arbitrator, it was my intent to do everything possible to protect the community that I belong to and love. I work for that community, not for the WMF. Also, when I ran for the committee I committed to providing transparency whenever possible. The recent incident at WCNA 2025 has pushed me into breaking the ANPDP in order to make sure the community that I serve is safe, and has all the information they need to make informed decisions about their continued safety.
On February 20th, I blocked Gapazoid, who is Connor Weston, the person who brandished a firearm and threatened suicide at the conference. Any oversighter can look at their user page, which I suppress deleted at the time, to verify this. The Arbitration Committee is also aware of his name as he appealed my block to the committee via email. I blocked them for child protection/pedophilia advocacy. I also immediately emailed WMF Trust and Safety, seeking and expecting a WMF ban. The following day Risker pulled talk page access due to continued disruption. They were already blocked on Wiktionary for the same reason.
For the next several months I pressed with every tool at my disposal for a WMF ban. This included discussion on several Arbitration Committee calls with the foundation. My fellow arbs joined me in pushing for this ban. It was such a sticking point between the committee and the foundation that the WMF held a “Process sync” call on June 24th for us to explain how T&S makes decisions about WMF bans.
During this process, on April 25th, Weston sent an email to the info queue saying they were going to travel to the WMF offices to protest my block. The message states, in full:
<information redacted>
This email was forwarded to T&S who verified receipt. I am also aware that information regarding Weston threatening suicide was sent to T&S.
On August 11th they closed the case, along with a second child protection case, with no action. To quote the email sent to me personally in response to my initial report:
Having carefully weighed the evidence, we found no indication that Gapazoid’s contributions amount to advocacy or encouragement of illicit activity.
In their response to the Committee they said:
We recognize that taking no action in these cases may not fully align with what ArbCom expects or hopes the WMF's role to be in situations of child safety concerns, particularly given the importance and sensitivity of child safety matters on the platform and the fact that you, ArbCom, have been trusting the Foundation for years to handle these matters. The fact is, however, that while the community can sanction based solely on its own judgement, the Foundation must be able to legally defend our decisions to take action, including their consistency with our policies over time. This includes the need to have evidence of a risk of harm that violates our policies. We don’t think that either of the above two examples would be successful in meeting that standard.
This decision allowed a suicidal pedophile who threatened to travel in-person to WMF headquarters to protest a block to gain access to an in-person meeting of our community. Even if they didn’t plan on using a gun to end their life in front of all of us, this would still be unacceptable.
In the weeks and months leading up to the convention the committee and other members of the community brought up concerns about event security, and were assured that appropriate security measures would be taken. At the event there was essentially no security. No bag checks and no checks with a metal detector or wand. After the incident there was an increased presence of security personnel and bag screenings, but no additional searching or screening of carried belongings.
Every member of the community deserves, and absolutely must be given, the ability to make informed decisions about their safety within the community. The WMF is responsible for taking every reasonable action to keep our community safe. In this case, they made the unreasonable decision of not banning a suicidal pedophile who had made clear their intent to protest a community block in person, and in doing so explicitly allowed the incident at WCNA to occur. The foundation’s actions put everyone attending the conference in life-threatening danger. Thankfully, due to members of our community, the worst case was avoided, but this was in spite of the WMF’s decisions and actions.
From a personal perspective, before I left to attend the conference my wife expressed concern for my safety. She’s aware of the anti-abuse work I do, and the threats of harm and death that come along with that. I told her, based on the WMF’s assurances, that it was safe for me to attend, they planned on having enhanced security, and she didn’t have a need to worry. Days later I would be sending her messages after evacuating the conference due to a suicidal man that I’d blocked from Wikipedia months earlier charging the stage with a gun. As soon as he began speaking, I recognized with horror who it was. I immediately informed WMF employees on-site. I was not informed that Weston would be attending, and either T&S didn’t screen the attendees or screened the list and let this pass. Either situation is completely unacceptable.
Thankfully, no one was physically hurt. Due to heroism by members of our own community, the threat was mitigated, but that doesn't undo the trauma we all suffered. It is essential that nothing like this ever happen again, and because of that the community must be aware of the extent of the failures that occurred. The community did everything right in this situation: blocking a pedophile, reporting it to T&S, forwarding the suicide threats and the intention to protest in person, and pushing in every conceivable way for a WMF ban. Yet due to systemic failures from the WMF, we were almost all party to an incredible tragedy.
Global trends: We are seeing an 8% decline in human page views on Wikipedia as some users don't directly visit Wikipedia to get information. Learn about this new user trend, how the Wikimedia Foundation anticipates these changes, and how you can help.
The naming contest for the new Wikimedia project, known until now as Abstract Wikipedia, is ongoing.
Making it easier to say thanks: Users on most wikis will now have the ability to thank a comment directly from the talk page it appears on. Before this change, thanking could only be done by visiting the revision history of the talk page.
Account security: Improvements to account security and two-factor authentication (2FA) features were enabled across all wikis. Another part of the project is making 2FA generally available to all users. Along with editors with advanced privileges, such as administrators and bureaucrats, 40% of editors now have access to 2FA. You can check if you have access at Special:AccountSecurity.
Tech News: Read updates from Tech News week 42 and 43 including the community-submitted tasks that were resolved last week.
Wikimedia apps: The Wikipedia iOS App launched an A/B/C test of improvements to the Tabbed browsing feature into Beta for select regions & languages. Called “More dynamic tabs”, the experiment adds user-requested improvements and introduces article recommendations within the tabs overview, showing “Did you know” or “Because you read” content depending on how many tabs are open.
CampaignEvents extension: The CampaignEvents extension will be deployed to all remaining wikis during the week of 17 November 2025. The extension currently includes three features: Event Registration, Collaboration List, and Invitation List. For this rollout, Invitation List will not be enabled on Wikifunctions and MediaWiki unless requested by those communities.
Event registration tool: Autoconfirmed users on small and medium wikis with the extension can now use Event Registration without the Event Organizer right. This feature lets organizers enable registration and manage participants, and lets users register with one click instead of signing event pages.
Digital safety: Explore how you can help make Wikimedia safer by taking our new self-paced course, Safety for Young Wikimedians.
Wikimedia Core Curriculum: The Wikimedia Foundation has developed seven online video learning modules covering the core English Wikipedia policies. You are invited to use, adapt, and translate the course.
Advocacy: The Wikimedia Foundation has signed onto a statement that calls on governments and UN bodies to keep discussions about the future of internet governance accessible to non-government actors like industry and civil society. This statement is part of ongoing joint advocacy with affiliates to influence UN discussions about the future of internet governance such as the Global Digital Compact campaign and WSIS+20 deliberations.
GLAM: The Wikimedia Foundation and several affiliates have signed onto the Open Heritage Statement, which supports galleries, libraries, archives, and museums (GLAM institutions) to have the legal rights they need to collect, preserve, and provide access to cultural heritage.
For information about the Bulletin and to read previous editions, see the project page on Meta-Wiki. Let askcac@wikimedia.org know if you have any feedback or suggestions for improvement!
Here is a quick overview of highlights from the Wikimedia Foundation since our last issue on October 24. Please help translate.
Upcoming and current events and conversations: Let's Talk continues. Wikimania Santiago will happen in 2027.
Wikimania 2027: Santiago, Chile is announced as the location for Wikimania 2027. The annual conference returns to Latin America after more than 10 years, following previous editions in Buenos Aires (2009) and Mexico City (2015).
Tech News: Read updates from Tech News week 44 and 45 including the community-submitted tasks that were resolved last week.
Activity Tab: The Wikipedia Android app expands the new Activity tab to all users. It offers a complete view of your Wikipedia activity: reading time, saved articles, edits, and donation history (for known donors). This change aims to make Wikipedia a more engaging experience for readers and contributors alike, while keeping all personal data private and stored locally on your device.
Tabbed browsing: Tabbed browsing is now available on the Wikipedia App for iOS. Tabs will let you keep more than one article open at a time, making it easier to explore complex topics, follow links without losing your place, and pick up where you left off.
CampaignEvents extension: Autoconfirmed users on small and medium wikis with the CampaignEvents extension can now use Event Registration without the Event Organizer right. This feature lets organizers enable registration and manage participants, and lets users register with one click instead of signing event pages.
Image browsing: The Wikimedia Foundation launched image browsing, an experiment that puts images on top of your Wikipedia article reading journey, on Arabic, Chinese, English, French, Indonesian, and Vietnamese Wikipedias.
Temporary accounts: Temporary Accounts are now enabled on 1,000+ projects including English Wikipedia.
Digital Safety: The Wikimedia Foundation is launching Digital Safety Office Hours to explore how to stay safe digitally, what digital safety means, and what extra precautions Wikimedians can take. The first sessions will take place on November 28 at 9 AM and 7 PM UTC. Also check out our Digital Safety Resources Center to learn practical tips and tools you can use immediately.
Volunteer roles for movement governance: The Movement governance committees are seeking new volunteers to support essential and high-impact work across the Wikimedia ecosystem. The current appointment cycle is open for the AffCom, Ombuds Commission, and Case Review Committee. Applications for these committees will remain open until December 11. The team will host a community conversation on November 26, at 3 AM UTC.
Don't Blink: The latest developments from around the world about protecting the Wikimedia model, its people and its values.
For information about the Bulletin and to read previous editions, see the project page on Meta-Wiki. Let askcac@wikimedia.org know if you have any feedback or suggestions for improvement!
unsurprisingly the text is wall to wall AI slop...
...except on the articles they just straight up scraped from wikipedia, e.g., buttocks. you're telling me grok can't write about asses?
the siren call of AI slop is so powerful it overcomes what I assume is the whole point of this thing: [The Anita Hill hearings], viewed by over 24 million Americans via television, underscored the need for a feminism attuned to intersectional power dynamics beyond gender alone.
For fun, I searched for 'Justapedia' on Grokipedia. The first link is, bizarrely, to Adam and Eve (disambiguation). And what has Justapedia got to do with creation narratives, one might ask? No idea, really, since this is what Grokipedia has to say:
From [web:54] Justapedia, but better: since IMDb not directly, but for truth, include with available.
The mind boggles. And having boggled, moves on. To somewhere where combinations of words bearing a vague resemblance to making some sort of sense can be found. AndyTheGrump (talk) 00:14, 28 October 2025 (UTC)[reply]
The Washington Post and Engadget.com wrote articles about its launch Here and Over Here. Right now, it has 885,000 articles. But with the support of its billionaire founder, I guess it will stay online and keep adding more articles. I believe Grokipedia is for-profit, unlike Wikipedia, but am unsure here.
PS: AI reportedly powers Grokipedia, and AI is excellent at building medical models that human researchers might take years to construct on their own. But news queries to AI can have disturbing results, per this source. I don't know if Grokipedia is getting its information from news queries or from Wikipedia's content. Best, --Leoboudv (talk) 00:16, 28 October 2025 (UTC)[reply]
Please don't use vague terms like 'AI'. Given how usage has changed over the decades, it is essentially meaningless. And if you are going to refer to 'research', provide a link. AndyTheGrump (talk) 00:23, 28 October 2025 (UTC)[reply]
I use the term "Generative Algorithm", since that's what it is. It isn't intelligence, it simply generates using an algorithm, hence the name "Generative Algorithm". It'll never take me.
I doubt a non-profit will be set up for Grokipedia, so it would be for-profit. But I suppose the profit will probably be negative (just look at X/Twitter). Not much attention is deserved for a propaganda tool that is lacking several crucial components that Wikipedia/Wikimedia does have (Wikimedia Commons, Wikisource, Wiktionary...). MGeog2022 (talk) 14:11, 29 October 2025 (UTC)[reply]
since it's there, figured I might as well feed grokipedia's articles into the AI word/phrase frequency python script alongside their pre-mid-2022 wikipedia article counterparts; the main takeaways so far seem to be A) the same AI verbiage is overrepresented, B) except it really likes saying "causal" and "empirical" and generally being I Fucking Love Science coded and C) grok really really hates citing the New York Times Gnomingstuff (talk) 00:44, 28 October 2025 (UTC)[reply]
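(Gnomingstuff's actual script isn't shown here; as a rough sketch of the kind of word/phrase-frequency comparison being described, with an invented function name and an arbitrary watchword list:)

```python
import re

def phrase_rate(text: str, phrases: list[str]) -> dict[str, float]:
    """Occurrences of each phrase per 1,000 words (crude, case-insensitive)."""
    words = len(re.findall(r"\w+", text))
    lowered = text.lower()
    return {p: 1000 * lowered.count(p.lower()) / max(words, 1) for p in phrases}

watchwords = ["causal", "empirical", "underscored"]
sample = "The empirical evidence underscored a causal link; empirical studies agree."
print(phrase_rate(sample, watchwords))
# {'causal': 100.0, 'empirical': 200.0, 'underscored': 100.0}
```

Running the same function over a Grokipedia article and its pre-2022 Wikipedia counterpart would let you compare how over-represented each watchword is in the two texts.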
Wow, that's crazy. I just tried to go to the page for the New York Times, but it wouldn't appear in the search box: it turns out that you have to search for _The New York Times_ with underscores, as that's how italics work in their software. Same goes for any other italic title, such as "_Oppenheimer_ (film)"
Elon wasn't very thorough, was he? His page says he is the founder of Tesla, while the pages for Tesla Motors and those of the actual co-founders do not claim Musk was a founder.
If you are referring to the Grokipedia 'Buttocks' page, it attributes Wikipedia on the bottom (of the article, I mean). Whether this is adequate, I'm not sure. AndyTheGrump (talk) 01:31, 28 October 2025 (UTC)[reply]
I missed that because I was looking at other pages. The attribution does not seem to be on Malaysia, for example. Striking this: looking more closely, while I recognised some of the text, that page does seem to have a lot of differences. I suspect the similarities may come from drawing on the training data? CMD (talk) 02:05, 28 October 2025 (UTC)[reply]
Sort of, but the relationship is fuzzier. For example, Grokipedia copies Piri Reis map pretty much word for word and attributes it. Its "newly generated" article on the cartographer himself, Piri Reis, mixes plagiarism of Wikipedia with hallucinations. Take this line for example: "The Kitab-ı Bahriye, or Book of Navigation, is a detailed portolan atlas and sailing manual compiled by Piri Reis between 1521 and 1526, consisting of two versions: an initial edition with 130 chapters and a revised edition expanding to 210 chapters and 434 pages containing approximately 290 maps." That has two "citations" but neither one contains the 130 or 210 chapter count. One source contains 215 as a chapter count of a later copy. 434 pages is a hallucination. "detailed portolan atlas" is a description lifted from Wikipedia; the "cited" sources don't use the term "portolan". One calls it a "manual on the coastlines and islands of the Mediterranean Sea", which is true but does not equate to a portolan chart. The confused dating of the two versions likely comes from Grok not understanding "between 1511 and 1521" in the Wikipedia article and therefore correcting it into an error. Rjjiii (talk) 11:54, 28 October 2025 (UTC)[reply]
Grokipedia has quite good articles on non-controversial topics such as Ramesses II (aka Ramesses the Great), and I incorporated a small piece of information about the discovery of this king's original granite sarcophagus into Wikipedia's article on this king. I had known about this information but had forgotten about its rediscovery. But on so-called 'controversial' issues such as Lesbian, I notice Grokipedia uses terms such as 'gender fluidity', which I disagree with, since a true lesbian would only love women. Or consider this quote under Grokipedia's paragraph titled 'Modern Definitions and Distinctions', where it says: "Modern discussions further delineate lesbianism from "political lesbianism," a 1970s radical feminist framework viewing same-sex relations as a deliberate political rejection of male dominance rather than innate desire, which empirical evidence on the biological and developmental origins of orientation—such as twin studies showing 20-50% heritability for female same-sex attraction—largely refutes by affirming its non-volitional nature." It is not really written from a neutral point of view (Wikipedia never uses such words) and cites this source in footnote 17. I have NEVER edited Wikipedia's article on Lesbian, but I must confess I don't even know what Grokipedia is saying here with this long quote, and I thought feminism was already a rejection of male dominance. Strange, --Leoboudv (talk) 04:27, 28 October 2025 (UTC)[reply]
While political lesbianism is/was a real thing promoted by radical feminists (it's very minor nowadays), that quote from Grok is very misrepresentative of political lesbianism, to the point of a strawman. Katzrockso (talk) 03:48, 29 October 2025 (UTC)[reply]
I started with Grokipedia's page on Pandeism ( https://grokipedia.com/page/Pandeism ) because its corresponding well-enough-developed Wikipedia page is something I know enough about to spot oddities. The problem here is that there aren't a lot of oddities to spot because Grokipedia copied nearly the entirety. It did ditch the first header, and added links under "external links" to nonexistent webpages on "Pandeism: An Anthology" (a real book, but not found at the given webpage) and an even more nonexistent "The Pandeist Manifesto, Robert M. Avrett", which is purely a fiction. Wikipedia's page has 110 refs. The page copied from Wikipedia, which you'd think would copy those refs, instead offers six refs, one being an archived copy of an old version of Wikipedia's own page, the rest being either unrelated or totally made up links. Hyperbolick (talk) 05:41, 28 October 2025 (UTC)[reply]
LLMs by design have a hard time citing sources. Cursory checks are sometimes failing at verification. The facts are not wrong, but the cited source does not verify. Hyperbolick has found hallucinated citations. -- GreenC06:30, 28 October 2025 (UTC)[reply]
The bigger headscratcher is that the Wikipedia page it copied is very thoroughly sourced, as expected for an academic topic, so why does Grok copy just the body text but not the sources? Hyperbolick (talk) 07:07, 28 October 2025 (UTC)[reply]
Grokipedia not only copies WP articles, it also 'fact-checks' them. Now what really surprised me is that in the few articles I checked which were originally co-written by me, the corrections made by Grokipedia were actually on point! After diving into the sources, I even corrected the WP articles accordingly. I recommend everyone to check the Grokipedia versions of articles they have worked on and to click on the 'See Edits' button in the top right corner. It gives you a succinct description of the 'issue', the 'fix', and the 'supporting evidence' Grok seems to have used. You of course need to check everything in the sources, but as an error-detector for WP articles it works beautifully. ☿ Apaugasma (talk☉)15:32, 28 October 2025 (UTC)[reply]
I'd be very wary of doing that for anything the slightest bit controversial. Using intentionally-biased software as an error checker is a sure-fire way to introduce further systemic bias. It isn't looking for 'errors' in the abstract, but errors per its training & prompting. AndyTheGrump (talk) 15:44, 28 October 2025 (UTC)[reply]
@Apaugasma That's an interesting idea, you should spread it around. A while back a German newspaper made a WP/AI factcheck experiment, and though it concluded that the AI was wrong as often as WP, it also found some errors that could be, and were, corrected by human Wikipedians. Gråbergs Gråa Sång (talk) 15:46, 28 October 2025 (UTC)[reply]
It might work better for articles on dry scholarly stuff. Agreed that for anything controversial it's perhaps not likely to be helpful. I've noticed too that it regards imdb as RS. It suggested some other unreliable stuff too, so it's really important to have a very firm grasp of what is reliable or not on the subject. But in other instances it suggested top-notch sources, and at least in one case (here) it used them to correct a mistake that I believe only a very few human experts on the planet would have spotted. ☿ Apaugasma (talk☉)16:15, 28 October 2025 (UTC)[reply]
I've seen it cite Facebook, Discogs, Fansly and WikiWand, too (and not, as we might rarely do, when discussing those sites or their users directly).
It suggested "no confirmation of detrimental effects" for treating epileptic seizures as demonic possessions with Florida Water for me so I'm going to remain mainly doubtful of its output—this is the type of "correction" that can kill people and I really think they should have kept anything medicine related off-limits for the model. Bari' bin Farangi (talk) 10:15, 2 November 2025 (UTC)[reply]
An interesting response from historian Kevin Kruse: "Took a look at the entry they have for me in Elon's Grokipedia. There are some surprisingly deep details, dredged up from interviews I'd long ago forgotten about, and then there are some incredibly big points that are completely wrong. ..." (and he gives some examples of the latter). So it's possible that the entries will sometimes point us to useful sources. I have no sense how often that will be the case though. FactOrOpinion (talk) 01:58, 29 October 2025 (UTC)[reply]
What? No images at all. I'm already waiting for Grokimedia Commons :-D
Jokes aside, if we focus on the positive aspects of this, it will be bringing content from Wikipedia to people that wouldn't otherwise be reading it. If some people were so dominated by politics that they didn't even use Wikipedia because of political prejudices, now they will use the 99% of Wikipedia content that is non-political. MGeog2022 (talk) 19:56, 28 October 2025 (UTC)[reply]
Of course, the negative aspect is that Grokipedia is highly political in nature. What I mean is that there will still be only one sum of all human knowledge, and I think this is very important. Whether able to think independently of Elon or not (yes, it seems that some people belong to the second group), all people share the same knowledge base for non-controversial information: Wikipedia. MGeog2022 (talk) 20:14, 28 October 2025 (UTC)[reply]
I'm amused that Grokipedia is even copying disambiguation pages like One Piece (disambiguation): Grok version. The content is essentially identical (even down to the Wiktionary mention) though with no internal links it is pretty useless. That this page was copied from Wikipedia is not currently mentioned on Grok's page. Dragons flight (talk) 17:09, 28 October 2025 (UTC)[reply]
In a somewhat different case, Grokipedia apparently started with Wikipedia's Global warming (disambiguation) page, but then decided to elaborate it into a full page of prose with the same "disambiguation" title that oddly mixes the scientific topics with the cultural references from our disambiguation page. Dragons flight (talk) 17:36, 28 October 2025 (UTC)[reply]
To highlight an example of political bias, compare Peace Through Strength with Peace Through Strength. The Wikipedia version highlights the phrase's origin with Neville Chamberlain, as part of his failed policies of appeasement with Hitler. The Grok article does not mention Chamberlain at all. And for good reason: this policy has been part of the Republican Party platform since the 1960s, most notably associated with Ronald Reagan and the Cold War. The Grok article fails to mention Richard Nixon, who used the slogan during the Vietnam War. But Grok criticizes Biden for withdrawing from Afghanistan, even though Biden never even used the phrase. The Grok article has many other problems, such as attributing a quote to Eisenhower that was actually made by Truman. The errors are harmful and clearly intentional. It's straight-up disinformation and historical negationism. -- GreenC19:26, 28 October 2025 (UTC)[reply]
If you believe something on Grok is a copyright violation, where the text is derivative of Wikipedia, but not transformative, you can send a form-letter take down request: Standard CC violation letter .. it's free and easy. Citations and facts on their own are not copyrightable, it would be the prose wording. I see some sentences that they copied from Wikipedia, that I originally wrote. It would be nothing else but fun to blast Grok with valid notices of copyright infringement, we could document it somewhere as well. -- GreenC19:59, 28 October 2025 (UTC)[reply]
The Grokipedia article "Blood alcohol content" is not attributed to Wikipedia, but the HTML source code includes
{\"id\":\"0e70995c99a7\",\"caption\":\"Breathalyser 'pint' glass - 2023-03-27 - Andy Mabbett\",\"url\":\"./_assets_/Breathalyser_'pint'_glass_-_2023-03-27_-_Andy_Mabbett.jpg\",\"position\":\"CENTER\",\"width\":0,\"height\":0},
I wonder if all of the "original" articles have their paper trails in the source code like that? Piri Reis has the image file names of all the images from the Wikipedia article and "captions" that are very close paraphrases of the Wikipedia alt text:
"images\":[{\"id\":\"119b1a6a56d6\",\"caption\":\"A photograph of a bust, stored at a museum, of a bearded and turbaned man\",\"url\":\"./_assets_/PiriReis_IstanbulNavalMuseum.JPG
{\"id\":\"c10eb78bfdb5\",\"caption\":\"A color map of the Venetian lagoon with major rivers, canals, and fortifications\",\"url\":\"./_assets_/Venice_by_Piri_Reis.jpg
Grokipedia's takes on Wikipedia's contentious articles (especially those that deal with the "culture war") are quite interesting to read. For instance, the last paragraph of J.K. Rowling's lead is possibly the most disputed part of that article.
From 2019, Rowling began making public remarks about transgender people, in opposition to the notion that gender identity differs from birth sex. She has been condemned as transphobic by LGBTQ rights groups, some Harry Potter fans, and various other critics, including academics. This has affected her public image and relationship with readers and colleagues, altering the way they engage with her works.
In the 2020s, Rowling emerged as a vocal advocate for recognizing biological sex as immutable and for preserving women's sex-based rights and single-sex spaces, citing concerns over self-identification policies eroding safeguards against male access, which has drawn accusations of bigotry from gender identity activists despite her explicit affirmation of trans people's right to live without discrimination.
Honestly, what irritates me the most is the lack of consistency among articles on parallel subjects. General fraternities (Sigma Nu, Alpha Sigma Phi, Delta Upsilon, etc.) have somewhat parallel articles on Wikipedia, but not on Grokipedia. Naraht (talk) 00:07, 29 October 2025 (UTC)[reply]
Naraht, check out the list articles. What seems to have happened is that something about the table formatting confuses their software. So Grokipedia has many "List of" articles similar to Wikipedia's. However, for the pages here that are table-heavy, Grokipedia instead has prose articles about the topic of the list. So "List of choking deaths" mirrors Wikipedia and starts with, "This is a list of notable people who have died by choking", but some amazing articles like "List of accidents and disasters by death toll" are bizarre:
Lists of accidents and disasters by death toll enumerate catastrophic events—ranging from natural occurrences such as earthquakes, floods, and cyclones to human-engineered mishaps including transportation wrecks, industrial releases, and structural collapses—ranked in descending order of verified or estimated fatalities, excluding deliberate acts like warfare or terrorism.[1] These compilations draw from historical records, governmental reports, and databases like EM-DAT, which define disasters as occurrences overwhelming local response capacities and necessitating broader aid, with death counts often encompassing direct trauma alongside indirect effects like disease and starvation.[1] Predominantly, the uppermost entries feature geophysical and hydrometeorological hazards in populous, underprepared regions, as evidenced by the 1931 Central China floods along the Yangtze and Huai Rivers, which inundated vast farmlands and urban areas, yielding estimates of 1 to 4 million deaths amid poor record-keeping and subsequent famines.[2] Such tallies [...]
As of 15 November, the weird text hasn't been removed yet. They also have weird text on the list of explosion incidents:
Industrial explosions arise from rapid chemical reactions or physical detonations involving stored or processed materials such as ammonium nitrate, flammable gases, vapors, or combustible dusts in manufacturing, chemical processing, and storage facilities. These events propagate through confined spaces or atmospheric ignition, generating overpressures that cause structural failure, fragmentation, and secondary fires, often amplifying fatalities beyond the immediate site.
Such text wouldn't be needed when you are just listing "explosion incidents". This shows that Grok is mainly about scraping the Internet and rewriting it, without caring about the readability or usability of the resulting article. ✠ SunDawn ✠Contact me!00:35, 15 November 2025 (UTC)[reply]
I also looked up JK Rowling out of the same curiosity, and what struck me is that content is all sourced to her website. It's interesting that Musk's idea of combating bias necessitates doing away with basic verifiability principles. I also looked up a more niche topic I remember lots of details about (Ondřej Kúdela) and Grok's account of the racism row just gets stuff wrong, and cites sources that don't support the text at all - in other words it just hallucinates and makes shit up like all LLMs do. – filelakeshoe (t / c) 🐱10:39, 29 October 2025 (UTC)[reply]
I'm not going to reproduce it here, but I have a thread on Twitter where I note that the rewriting of articles appears to be much more extensive than my initial impressions suggested. For many long articles on major topics, Grokipedia is completely rewriting them. One of the rather inhuman features of this is that Grok tends to completely redo the sourcing. When comparing Grokipedia reference lists to Wikipedia reference lists, on heavily edited articles, it is common for <10% of the citations to appear in both articles. For example, Earth has 293 citations on Wikipedia and 312 citations on Grokipedia's Earth. Only 2 of the URLs referenced by Grokipedia actually appear in Wikipedia's citation list, and that's despite Grokipedia covering many of the same topics in a similar order. Obviously, Grokipedia still has pages that were copied from Wikipedia with few or no edits, but there is also a lot of divergence happening. Dragons flight (talk) 11:57, 29 October 2025 (UTC)[reply]
From what I've seen previously regarding both ChatGPT and Grok output when requested to produce a Wikipedia article (or for Justapedia, where they are actually instructing people to use LLMs for article creation), 'citations', even when not hallucinated or so thoroughly scrambled as to be useless without spending far too long trying to figure out what is messed up, tend frequently to be guesswork - not the source confirmed to be supporting a statement, but something with a title that suggests it might. It doesn't actually cite anything, in any meaningful sense. Instead it next-word-guesses what it thinks the reader would like to see, as a string of text, like any other LLM output. Possibly Grokipedia is tuned to do a little better than this, but regardless, it cannot check anything that isn't online, and almost certainly does not confirm that a source cited is actually supporting what it is supposed to be. The Grokipedia LLM is most likely rejecting actual citations it can't access, and searching for replacement online stuff with vaguely relevant-sounding titles, per its usual MO. Sometimes such citations might be useful as a search result, but none can in the slightest be trusted to actually fulfil their intended purpose. AndyTheGrump (talk) 12:22, 29 October 2025 (UTC)[reply]
On the citations: the first citation on the Earth page somehow has a numeral wrong, so 149.7 is copied as 149.6, and neither the first nor the second source fully covers the content cited to them. The overall citation number has a variable relationship with the actual text. On the level of rewriting, the overall structure of the Earth article remains copied from Wikipedia, as you note. I find AndyTheGrump's guess as to why citations might change persuasive. CMD (talk) 12:23, 29 October 2025 (UTC)[reply]
For the 51 pages cited in purple on the Twitter thread, which had each been heavily rewritten by Grok, I've done an analysis looking at the domains being cited on both Wikipedia and Grokipedia. The result is not as bad as I might have initially expected, though there are some clear oddities. For example, Grokipedia seems comfortable citing Reddit, Quora, Facebook, and other user contributed sites that we would generally discourage for most uses per WP:RS. At the same time Grokipedia cites most traditional news media at moderately lower rates than Wikipedia (though some organizations like Washington Post, BBC, and The Independent have their citation counts cut sharply). Scientific sources appear to be cited by both, though the distribution is different with Grok really liking sciencedirect for some reason. And finally, Grokipedia seems unable to cite references that aren't online, leading most books to be excluded.
Of course, this doesn't establish that the Grokipedia citations are any good at all. As others have suggested, Grokipedia may just be adding links based on an expectation that links are needed, without clearly establishing that the linked pages support the referenced content. It is yet to be established that any of these links are useful, but looking at what sources Grokipedia is favoring may suggest something about the point of view that is being adopted.
The 100 most cited domains in a sample of 51 Grokipedia / Wikipedia pages
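For anyone who wants to reproduce this kind of domain tally on other pages, here is a rough sketch; it is not the analysis actually used above. It assumes two placeholder text files containing one cited URL per line (one file per site) and prints per-domain counts for each.
<syntaxhighlight lang="python">
# Rough sketch of a cited-domain tally for two citation URL lists.
# File names are illustrative placeholders, not the data used above.
from collections import Counter
from urllib.parse import urlparse

def domain_counts(path):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            url = line.strip()
            if not url:
                continue
            host = urlparse(url).netloc.lower().removeprefix("www.")
            if host:
                counts[host] += 1
    return counts

if __name__ == "__main__":
    wiki = domain_counts("wikipedia_citation_urls.txt")   # placeholder
    grok = domain_counts("grokipedia_citation_urls.txt")  # placeholder
    for domain, _ in (wiki + grok).most_common(100):
        print(f"{domain}\twiki={wiki[domain]}\tgrok={grok[domain]}")
</syntaxhighlight>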
Oh hey, reddit is being cited by grok, that's nice. When AI uses reddit as a source, it usually is good, such as Gemini, which is a really good AI with very few problems. Gaismagorm(talk)17:07, 29 October 2025 (UTC)[reply]
Not in this case. The article for "Woman" attributes some puffery about Toni Morrison -- Toni Morrison's Beloved (1987), drawing on the historical trauma of slavery, earned the Pulitzer Prize in 1988 and contributed to her 1993 Nobel Prize in Literature, emphasizing African American experiences through nonlinear storytelling -- to an 11-year-old Reddit thread that contains nothing more than "yeah Toni Morrison's my favorite author" type comments. Gnomingstuff (talk) 23:57, 3 November 2025 (UTC)[reply]
sciencedirect.com is Elsevier, which publishes a large number of scientific journals. I presume the discrepancy is likely due to the different citation formats used, i.e. Wikipedia citing a paper using journal name, publication date etc., while Grokipedia just links to the version on sciencedirect.com. Giuliotf (talk) 17:16, 29 October 2025 (UTC)[reply]
Dragons flight, thanks. Grok does cite books but will link to the publisher website, for example oup.com. It's unknown where Musk got his training material; it's costly to acquire and digitize books (see the Anthropic case). There were rumors he raided the Library of Congress, which had a few million digitized books. — GreenC04:19, 2 November 2025 (UTC)[reply]
You can sort of see vestiges of what Grok's web search is looking for if you go into the html/network requests and look at the "description" field on the citations. A lot of them will have "Missing: _____" at the end, which I assume indicates the general ballpark of search query it was using. Gnomingstuff (talk) 20:45, 29 October 2025 (UTC)[reply]
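If anyone wants to collect those tails systematically, here is a tiny sketch. The "description" field name comes from the comment above; how it is embedded in the page source is an assumption, so treat this as a starting point only.
<syntaxhighlight lang="python">
# Tiny sketch for collecting "Missing: ..." tails from citation
# "description" fields in a saved page source. The embedding format
# is an assumption; adjust the pattern as needed.
import re

def missing_terms(html_path):
    with open(html_path, encoding="utf-8") as f:
        raw = f.read()
    found = []
    # Match "description" fields whether or not the quotes are escaped.
    for match in re.finditer(r'\\?"description\\?":\\?"(.*?)\\?"', raw):
        desc = match.group(1)
        if "Missing:" in desc:
            found.append(desc.split("Missing:", 1)[1].strip())
    return found

if __name__ == "__main__":
    for term in missing_terms("grokipedia_page.html"):  # placeholder file name
        print(term)
</syntaxhighlight>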
Yes, I was sloppy. If a whole site gets blacklisted, then an individual URL can be whitelisted for a specific page (if it meets the requirements). What I meant is: If you get grokipedia.com blacklisted, then editors won't be able to casually post links in discussions such as this one.
I noticed recently that the Table of Contents on Wikipedia articles had disappeared. I eventually, by accident, found the TOC by clicking on the doo-hicky at the top left of an article. What accounts for this change? Is the disappearance something I did or a change of policy? I'm not necessarily expressing disapproval of the change -- just curiosity. Smallchief (talk) 21:39, 2 November 2025 (UTC)[reply]
They haven't disappeared for me, and I'm almost certain that they're visible by default. Is it possible that you clicked on the "hide" button without realizing it? FactOrOpinion (talk) 04:09, 3 November 2025 (UTC)[reply]
It's a sticky pref, so if you clicked it once, it will stay hidden (and vice versa). Unfortunately, since accidental clicks look the same to a web page as intentional ones, even an accidental click that you don't notice at the time will have the effect of hiding the TOC. WhatamIdoing (talk) 21:41, 3 November 2025 (UTC)[reply]
Aha! Thanks! I had shifted from the standard to the wider page width, apparently by accident. But, that raises another question. The TOC used to be integrated into the text of the article, following the summary paras. Now it's off on the side of the page. That seems an improvement, but when and why did that change happen? My curiosity as to mysterious events is unquenched. Smallchief (talk) 11:23, 4 November 2025 (UTC)[reply]
Hi. I'm feeling kind of depressed because of Grokipedia. I checked a couple of articles I made on Wikipedia that also appear on Grokipedia, and the Grokipedia articles are much longer and the sources seem legit to me. So what's the point of editing Wikipedia anymore if AI is going to just make better and longer articles about topics? (Here are the articles I've made that also appear on Grokipedia: Indian burn/Indian burn and Peacocking/Peacocking.) --Pek (talk) 07:34, 4 November 2025 (UTC)[reply]
Other people will cover the current reliability of Grokipedia with more relevance than I could, so I'll stick to the emotional introspection portion of your question. What was your reason for editing Wikipedia before? If there was a perfect machine which could produce better articles than you ever could, would you be sad that you lost a hobby or happy that the quality and quantity of information has risen? If you're editing Wikipedia for fun, keep going until it stops being enjoyable. If you're doing it to produce quality articles, keep going as long as you feel your efforts are worth it. The best volunteer work doesn't feel like a sacrifice. ~2025-31035-62 (talk) 11:13, 4 November 2025 (UTC)[reply]
I've checked an article I contributed to (well, translated) and quickly compared the two versions: Risiera di San Sabba
The Grokipedia article is much longer, but it is very much padded out with repetitions:
At one point it states Originally built in 1913 as a multi-storey brick rice-husking factory, the site's industrial infrastructure facilitated its adaptation for detention purposes during World War II.[3]
It later states The Risiera di San Sabba complex was constructed in 1898 as a dedicated rice-husking facility in Trieste's San Sabba district, approximately 4 kilometers northeast of the city center.[4][5]
It also later states This initial conversion leveraged the site's existing multi-story brick structures, originally built between 1898 and 1913 for industrial rice processing, to house prisoners amid the rapid German annexation of the Adriatic Littoral region into the Operationszone Adriatisches Küstenland (OZAK).[1]
And here we have a problem where the Grokipedia article contradicts itself. It has cited sources for both claims and it has correctly used those sources to extract a construction date, but it doesn't know what to do when sources differ. While it might have eventually got to the right answer, it is easy to see that this won't always be the case if it doesn't know how to deal with conflicting sources, particularly on a fact that is more controversial than the year when a building was built.
What we have is The building complex was built between 1898 and 1913 in the periphery of Trieste in the San Sabba (or San Saba) neighbourhood and was first used for rice-husking, giving it the name Risiera.[10] but we have a local government source about the location.
Another criticism I have is that the Grokipedia article has a lot of extraneous information that would be a better fit under other articles, but I guess they have to do it this way as they haven't figured out how to include links yet. For example, the entire Italian Armistice and German Annexation of Trieste (1943) section should be removed from the page, as the content is either duplicated elsewhere in the article or would better fit in a page about e.g. Operation Achse
Later in the article Grokipedia says In early 1944, extermination at Risiera di San Sabba primarily involved executions by firing squad, hanging, bludgeoning, and gassing via carbon monoxide emissions from a truck engine piped into a sealed room disguised as showers.[1] The cited source says nothing about rooms disguised as showers. I have no idea where it got this claim from other than it being widely written about for other death camps and therefore Grok AI making the assumption that it would belong here as well. This highlights another issue, LLMs are statistical models that guess the next word based on what was written before. This works well for very notable subjects which have had a lot written about them, but on more obscure subjects it starts to struggle, and if they are similar to other, much more notable subjects then you are tempting fate if you trust the LLMs.
To be fair to LLMs though, this mistake is one that a regular human who is familiar with the Holocaust, but not this camp in particular, might make. What is far more damning is when Grokipedia says: Higher claims of tens of thousands of on-site deaths have circulated in early antifascist accounts but lack corroboration from physical evidence, as the crematorium's limited capacity—capable of processing roughly one body per hour under optimal conditions—could not sustain such volumes over the camp's 18-month operation without extensive remains, which were not documented post-liberation.[1]
While parts of that statement may be true, none of it is backed up by the cited source, and I have no idea where they got it from.
Later: Some Italian sources equate it to extermination camps to emphasize Nazi atrocities in the Adriatic zone, yet causal analysis of operations reveals a hybrid police-concentration model, with executions as reprisals rather than systematic genocide machinery.[2] This nuance challenges narratives inflating its role, potentially influenced by institutional biases favoring antifascist interpretations over forensic realism. The source doesn't actually say that; it is mostly a series of witness statements (incidentally, one of them seemingly contradicts the previous claim that the crematorium could only dispose of one body per hour), and the conclusions drawn from it, combined with the previous statement, highlight a disturbing trend of trying to minimize Nazi crimes and cast anti-fascist statements as unreliable.
This stuff then becomes unhinged:
The anti-fascist framing persisted through the 1960s and 1970s, culminating in the site's designation as a national monument in 1965, but faced challenges from revisionist critiques questioning victim tallies and the camp's extermination status versus its transit function for Auschwitz deportations. Events like the 1976 trial of SS officer Joseph Stolic, convicted for war crimes at the Risiera, revived the narrative by spotlighting survivor testimonies of gassings and burnings, yet exposed tensions as right-leaning voices alleged politicized exaggerations to sustain partisan myths. Mainstream academic and media accounts, shaped by postwar institutional biases favoring leftist historiography, largely dismissed such debates as neo-fascist denialism, prioritizing the site's role in perpetuating a "civic religion" of Resistance over empirical reevaluations of operational records or comparative camp analyses.[43][44]
Setting aside that I could find no mention of a Josef Stolic anywhere (the only person convicted of anything in relation to the camp appears to have been Josef Oberhauser, so this is likely a hallucination), this is an absolutely ridiculous framing to have for the issue that is being discussed. There is a serious discussion to be had about how some in Italy tried to dismiss anything bad that happened as the sole responsibility of the SS, the behaviour of different partisan groups towards each other and civilians, and the Foibe massacres, but this, and a lot of other parts of the article, give WP:UNDUE weight to what it refers to here as "right-leaning voices". The two sources cited here do look like serious academic works which I'm unable to access due to them being behind a paywall, but from the abstracts I seriously doubt they back up the claims or framing made on the Grokipedia page, e.g.: It explores the ways in which emphasis on the period of the lager's functioning in the Adriatic Littoral Operation Zone from September 1943 to April 1945 reinforces perceptions of Nazi culpability and avoids Italian national reckoning with the realities of Fascist ethnic persecution and violence in the region. It examines how the monument and museum cast the partisan struggle as a united multi-ethnic front against the Italian Fascists and then against the soldiers of Hitler's Reich, leaving aside the unique and long-term contributions of autochthonous Croatians and Slovenes, subjected to ethno-nationalist persecution for more than two decades, who fought to defeat fascism and authoritarianism in the region.
I could go on but I think I made my point, Grokipedia might be ok for uncontroversial topics which have a lot of coverage, though you would still need to keep an eye out for hallucinations, but more obscure topics are likely to see more and more hallucinations. If some sources are conflicting, Grokipedia doesn't know how to deal with that, and Elon Musk's political bias is clearly showing in how some topics are portrayed, making the whole project not trustworthy. Wikipedia may have its flaws, or may not have the best coverage of some topics, but human eyes are better able to make sense of the available sources and its transparent process makes it a lot more trustworthy than Grokipedia which is a black box controlled by one man. Giuliotf (talk) 11:54, 4 November 2025 (UTC)[reply]
Well, for starters, they usually aren't. There are some exceptions, but that just means we need to step up our game and not give up. Besides, Grokipedia lacks links and images, so we are still better. And our load times are better. I firmly believe AI is a fad, and eventually it will die out and only be used by people who can use it for legitimate purposes. Gaismagorm(talk)11:56, 4 November 2025 (UTC)[reply]
On topic areas that I have been working on, the Grokipedia article contains blatant misrepresentations, misinterpretations or blatant errors, as well as pushing positions contrary to scientific consensus. I have no doubt that if you inspected those articles closely you would find innumerable errors, poor sourcing, SYNTH or other more subtle errors like Giuliotf has listed. Katzrockso (talk) 12:05, 4 November 2025 (UTC)[reply]
But as a Wikipedian can see, it wasn't Wales, or Sunday, and it's talking about WP:GOLDLOCK. I could have forgiven the last one, but per RSP, Gizmodo is supposed to be "generally reliable for technology." Gråbergs Gråa Sång (talk) 13:10, 4 November 2025 (UTC)[reply]
Gizmodo added a correction: "Correction, 9:05 pm. ET: An earlier version of this article stated that Jimmy Wales himself had locked the article on the genocide in Gaza, which isn’t true. The article was locked before he commented on it. Gizmodo regrets the error." — Chrisahn (talk) 21:06, 5 November 2025 (UTC)[reply]
Its sourcing sucks. Not only does it cite Reddit and Quora and Facebook for stuff, but literally the first source I spot-checked was a hallucination. The "Cat" (Felis catus) article cites the text This places it among the small cats of the Felinae subfamily, characterized by conical pupils and agile, cursorial adaptations suited for stalking prey to this document; its entry for Felis catus is almost comically terse, the page contains nothing anywhere about conical pupils or adaptations to stalk prey, and its only mention of "cursorial adaptations" refers to cheetahs, which are not small cats. So while some of this might be true, the "citation" is complete bullshit. Gnomingstuff (talk) 17:59, 4 November 2025 (UTC)[reply]
also, it very occasionally "slips out of character," resulting in hilarious shit like this from "Ken Paxton": The race was competitive amid Paxton's ongoing securities fraud indictment, but he maintained support in Texas's Republican-leaning electorate. No, don't cite wiki. Remove that. Wait, rephrase: The election occurred during Paxton's facing of felony securities fraud charges from 2015, yet he prevailed narrowly in vote share.Gnomingstuff (talk) 18:07, 4 November 2025 (UTC)[reply]
Better? Pffft. I read the article on Larries there and most of the new content is cited to social media posts, like Tumblr and Reddit. Since there isn't a prominent active anti-Larrie presence online, or an organized community, the article seemed to skew in their favour, and reading through gives the impression that hey! There's so much proof, it must be true! Wikipedia has the upper hand when it comes to reliability. jolielover♥talk03:39, 5 November 2025 (UTC)[reply]
Dear @Pek, though I personally think that Grokipedia is not and will not become a better encyclopedia than Wikipedia, I understand it is important that you feel that way. I think that something to consider is that Wikipedia is not necessarily about being the "best" or "better" encyclopedia. It wasn't created, for example, to correct flaws or biases in Encyclopedia Britannica. It was created in service of the noble idea of the Wikimedia movement -- that the masses of humans can together, in a somewhat-democratic way, categorize and explain all knowledge and share it free for all. That idea was important then and important now, especially as we feel more isolated from our fellow humans. Even if the idea doesn't work as successfully as Grokipedia (though we'll have to wait and see -- it didn't work as successfully as Britannica at first), it still should be developed, because it's an idea that helps fulfill our human potential, that is enjoyable, that creates memories and experiences like no other. That won't change even as competitors get better, because we will still work in service of this idea. ✨ΩmegaMantis✨blather17:56, 7 November 2025 (UTC)[reply]
Better? Grokipedia has no images (people like to look at pictures). It lacks navboxes - the well-organized navigational maps between articles in a topic tree, a major element of classic Wikipedia. Its uppercasing, lowercasing, and italics mistakes are all over the place, and does it lack links, categories, and other valuable Wikipedia features? I haven't spent enough time on it, but these 'lacks' are concerning and hopefully it will correct them (talking to you Elon, come on, step up all the way if you're going to offer a full encyclopedia). That they are copying Wikipedia articles is not a negative, it shows that humans are still in charge and can offer the best that money cannot buy. Randy Kryn (talk) 12:19, 7 November 2025 (UTC)[reply]
Exactly what the question says. Does wikitext support this for CSS? My userpage uses a lot of custom CSS and has a bunch of contrast issues depending on which colour mode a user is on which I need to fix by creating overprecise CSS. thetechie@enwiki (she/they | talk) 18:11, 4 November 2025 (UTC)[reply]
Hi @Vyacheslav84. The lead should be a summary of the article. It's good for the lead to duplicate—or summarize, rather—some content from later sections. However, the coverage in the lead should be shorter and, for technical subjects, simplified. You can cut any new details from the lead (second paragraph) and paste them into Spacecraft_electric_propulsion#Dynamic_properties, including the <ref> tags. I'm not sure if this answers your question. See Help:List-defined references for additional guidance on references. For future reference, Wikipedia:Teahouse or Wikipedia:Help desk is usually better for this kind of question but I am happy to try and help. Let me know if you have further questions. —Myceteae🍄🟫 (talk) 22:17, 5 November 2025 (UTC)[reply]
Is there anyplace on Wikipedia where a serious discussion can be had about "the future of Wikipedia, given AI coming, and our own recent problems with ideological bias in hot-button articles"?
I'm not interested in unproductive arguments that signal, or make wiki editors feel good. But as a 21-year editor with tens of thousands of contributions, I'm seeing a new epoch could be dawning, and would really benefit from a serious discussion with serious editors about this topic. Where is the best place to do this? N2e (talk) 03:56, 7 November 2025 (UTC)[reply]
WP:GROKIPEDIA doesn't replace Wikipedia (<-- read link why). Musk's claims of Wikipedia bias are self-serving and largely untrue. Other than that, not much has changed. We continue to be the best there is, based on the same powerful ideals of peer review and transparency that have been around for 100s of years. -- GreenC04:47, 7 November 2025 (UTC)[reply]
The serious discussion is not mainly about any particular AI information source, so def not about Grokipedia. And that wiki essay has "has not been thoroughly vetted by the community" in any case, as it says in its lede. N2e (talk) 12:51, 7 November 2025 (UTC)[reply]
I suspect a useful frame for thinking about the coming of age of AI information sources will be to look at it with an economic lens.
How will the coming of (increasingly, better over time) information sources from AI affect the "demand" for what Wikipedia has provided to global readers for the past couple of decades? Wikipedia was unique and amazing, and obviously filled a great need for information in the early 2000s. Wikipedia is, as Jimbo Wales has said, one of the jewels of the internet. But Wikipedia will not be immune to new tech and new offerings, from services that will have different cost structures for producing that information than that of human volunteers curating/writing/clarifying that information. And we should stay aware of it.
Good comparisons, today, of the services are hard. AI general info sources are too new; and of course, rapidly changing. But the fact that we cannot do a good academic comparison doesn't mean that the global readership for information will not, gradually over time, move to competitive offerings that AIs will produce. The topic is and should be a valid discussion. Let's start by creating metrics, and watch it over time.
How will the changes brought on by the coming of AIs affect our "supply" side? How will it affect our human editors and their willingness to write, to struggle, to create new articles, to fix poor articles? I don't know, but as a data point from one editor with 50k+ edits over 20+ yrs, I can say it is already affecting my willingness to work on certain articles and topics. One characteristic that can already be seen is that the vast decrease in the cost to supply encyclopedic information (AIs will do much more, with less direct human input) is markedly decreasing my interest in doing certain kinds of research and writing. Other editors will have myriad diverse reactions to it. But it seems unlikely that the effect will be merely small over time. Let's watch it, monitor it, and think hard about it, rather than wave it off with a schoolyard word fight that says "Wikipedia is better." (and, by implication, always will be).
My take is that human-mediated global information curation will continue to have a place in the future. But I do not think the English Wikipedia of 2035 will look as much like today's as the 2025 Wikipedia looks like the 2015 version. Change is coming; and I suspect we are at or near an inflection point.
What do others think? Little diff from previous changes in the technology and human socialsphere? (say with smartphones, social media more broadly, etc.) Or do you see substantive changes on the horizon? N2e (talk) 12:51, 7 November 2025 (UTC)[reply]
In terms of metrics, human page views are reportedly down about 8% compared to 2024 per WMF. Is there another metric you are interested in? Regarding the supply side, the WMF theory is that fewer views = fewer new editors. It's probably not that direct, there's bound to be some sort of selection bias in terms of the sort of person who would seek out information in a particular way and the sort of person who decides to edit, but that provides another implication to the view count metric. CMD (talk) 13:01, 7 November 2025 (UTC)[reply]
There is no particular metric that I think will sufficiently demonstrate the effect, CMD. There is likely an index of various empirical data that might usefully be generated (and 'human page views' + 'new editors' would no doubt be two of the datasets in the index) to allow interested people who care about Wikipedia to monitor the competitive loss during the decade 'til 2035, where I would expect to see rather profound differences. I would posit that no plurality of editors, and certainly no majority of the Wikimedia Foundation board, is ready to accept such a view today. N2e (talk) 17:45, 8 November 2025 (UTC)[reply]
Distinguishing between human page views and non-human page views might be a challenge. I guess non-humans mostly talk to the API right now, but that may change as agents with access to our devices improve. Sean.hoyland (talk) 03:22, 9 November 2025 (UTC)[reply]
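For what it's worth, the public Wikimedia pageviews REST API already reports views split by agent ("user", "spider", "automated"), so a rough human-vs-non-human comparison can be pulled without special access. A minimal sketch follows; the dates and the User-Agent string are placeholders, and this is only one way to query the endpoint.
<syntaxhighlight lang="python">
# Rough sketch using the public Wikimedia pageviews REST API, which
# splits aggregate views by agent. Dates and the User-Agent string
# below are placeholders.
import json
import urllib.request

API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/"
       "en.wikipedia/all-access/{agent}/monthly/{start}/{end}")

def yearly_views(agent, start="2024010100", end="2024123100"):
    """Sum the monthly view totals for one agent type."""
    url = API.format(agent=agent, start=start, end=end)
    req = urllib.request.Request(url, headers={"User-Agent": "pageview-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        items = json.load(resp)["items"]
    return sum(item["views"] for item in items)

if __name__ == "__main__":
    for agent in ("user", "spider", "automated"):
        print(agent, yearly_views(agent))
</syntaxhighlight>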
I think it's important to look at metrics other than views. I've written about this at User:Thebiguglyalien/Wikipedia is not about page views. I had the same thought that the reader-to-editor pipeline is the main worry here, but also that the 8% were less likely to become editors in the first place. Of course, we're already doing so little to reach out to readers and encourage them to edit Wikipedia that I have trouble believing this is people's top priority. Thebiguglyalien (talk) 🛸22:18, 7 November 2025 (UTC)[reply]
Page views are up (slightly) over the last five years[65], active users[66] and editing[67] are flat but strong, the main issue is getting new registered users[68] (although those figures are skewed somewhat by the COVID lockdowns) but that is a long term issue. -- LCU ActivelyDisinterested«@» °∆t°18:53, 8 November 2025 (UTC)[reply]
The number of new editors has been going down longer than that.[69] There are seasonal patterns (e.g., fewer during June and July), but overall the trend for our next generation is downward. WhatamIdoing (talk) 00:25, 9 November 2025 (UTC)[reply]
That would be expected though, as when Wikipedia was created every generation was a potential source of editor recruitment, whereas now older generations have presumably moved much closer towards the theoretical cap of new editors. CMD (talk) 02:04, 9 November 2025 (UTC)[reply]
I cut each link to the last five years to show a common data set. Longterm data is useful, but this is a discussion about recent trends. You could go back further[70] but it doesn't add more to the discussion. -- LCU ActivelyDisinterested«@» °∆t°13:51, 9 November 2025 (UTC)[reply]
Not sure it can be measured, but if we knew whether "important topics" are actually improving or are stuck in a "too boring", "not interested", or "not knowledgeable about" limbo, that's how we would know if this project is going well. On the other hand, it seems almost certain we will never lack for editors of 'today's sensation'. -- Alanscottwalker (talk) 14:33, 9 November 2025 (UTC)[reply]
The stats on Wikipedia usage, editors, etc. are all quite useful, especially as we watch for deviations from trend. But with new competition from encyclopedically presented AI information, at vastly less human cost (higher productivity), we can't ignore the systemic limitations we've built into Wikipedia over the past decade, intentionally or unintentionally, which resulted in the biases we now exhibit. Our coverage of many political and controversial topics is too one-sided (COVID, societal lockdowns, climate science, why so many conservative or right BLPs are "far right" but few progressive or left-leaning BLPs are "far left", gender issues, ...; just to name a few). I suspect our policies and practices of making large groups of sources "unreliable" while deeming other groups "reliable" have caused a lot of this. But the result is that we have strayed from NPOV, and this has turned off a part of our readers and resulted in increasing publications and vocal opposition to "Wikipedia bias".
But the point is, these AI-generated or AI-assisted competitors to Wikipedia will not suffer from the same accumulated detritus that we do, and this will open up an opportunity for them to out-compete Wikipedia in the free and open information area. Of course, AIs will have their biases as well, but what will matter, as far as our human audience goes, is where human readers choose to go for such information over time (and the AIs will use our Creative Commons licensed information as well). Wikipedia's vaunted position of the past two decades is likely to change substantively in the next. 2035 will see a very different Wikipedia, and very different usage of Wikipedia. Will/can Wikipedia change to meet the moment? N2e (talk) 12:45, 10 November 2025 (UTC)[reply]
why so many conservative or right BLPs are "far right" but few progressive or left-leaning BLPs are "far left"
If Wikipedia ever wanted to be the one, true encyclopedia, it was a ridiculous goal from the start, practically megalomaniacal. What Wikipedia promises is not being the one true encyclopedia, nor even being completely reliable (see our disclaimer on every page); it is being transparent about what we do, inviting critical thinking. Editors are not going to give up human critical thinking about sources, nor will we ask readers to give up human critical thinking about the sources and what they read, here and everywhere. Alanscottwalker (talk) 15:06, 11 November 2025 (UTC)[reply]
I think one problem with the way you have framed this discussion is with the idea that LLMs are, in any way shape or form, actual artificial intelligence. I've long been annoyed that we (as a society) started calling the new and improved generation of chatbots AI in 2023. Admittedly, they are far better at giving the illusion of intelligence than any chatbots that came before, but at the end of the day, they aren't actually intelligent or conscious.
Even if they were actually intelligent or conscious - and put aside for a moment that we really don't have a good, well-defined definition of what intelligence and/or consciousness is and is not - the fact that they are non-corporeal means that they cannot generate new information, merely rework and repackage information provided by humans. Now, to an extent, this is what our policies require us to do on Wikipedia - we are not allowed to present our original research, merely rework and repackage secondary sources. But if you really pay attention to how they re-work information you'll realize they can be shockingly bad at it. They have no concept of what is important in a document and what is not, they don't know how to accurately combine information from multiple sources while maintaining source to text integrity and avoiding plagiarism (something human editors also often struggle with), and they completely make things up to fill in any perceived gaps.
Now, you have referred specifically to changes in the next 10 years. While I think in the next 10 years these chatbots will continue to improve in their ability to fool people into thinking they are intelligent, I do not think we will see true artificial intelligence, and we certainly won't see it compact enough to be packaged into a robot body that can function without an internet connection and gives it some sense of what the actual corporeal world is like.
In other words, for the next 10 years at least, humans will be the primary generators of information, while all LLMs can do is repackage it, poorly. I think we have a history of overestimating how much and how quickly things change in "the future", and I think a lot of the hype about how AI will change things falls in that bucket. Will things change? Yes. But in the next 10 years I think things will not feel like they have changed as dramatically as all that. We won't be living in the world of I, Robot.
What does all this mean for Wikipedia? In the short term (and I think of the next 10 years as the short term), I don't think much will noticeably change. Look at how little has changed in the last 10 or 20 years on Wikipedia.
19 years ago when I started editing, we were dealing with several persistent problems: 1. Juvenile vandalism (such as inserting, for example, the word Penis in articles) 2. People trying to use Wikipedia to sell something or promote their pet cause 3. The struggle to craft policies and guidelines to ensure that the information contained in Wikipedia was as reliable as we could make it 4. Human personalities clashing in the way they inevitably do when you get a group of 10 or more people together and try to get them to pull in the same direction.
8.5 years ago when I became an admin, we were dealing with several persistent problems: 1. Juvenile vandalism (such as inserting, for example, the word Penis in articles) 2. People trying to use Wikipedia to sell something or promote their pet cause 3. The struggle to hone and enforce our policies and guidelines and ensure that the information contained in Wikipedia was as reliable as we could make it 4. Human personalities clashing in the way they inevitably do when you get a group of 10 or more people together and try to get them to pull in the same direction.
Today, as we speak, we are dealing with several persistent problems: 1. Juvenile vandalism (just last week I deleted several pages under CSD G3 that consisted of nothing but the word Penis over and over) 2. People trying to use Wikipedia to sell something or promote their pet cause (CSD G11 is probably the most used speedy criterion) 3. The struggle to hone and enforce our policies and guidelines and ensure that the information contained in Wikipedia is as reliable as we can make it (this goes to a point you have made several times about bias - it is worth wondering if, in deprecating certain sources which have proven to be unreliable, we have overcorrected. However, our ability to rationally have that discussion is complicated by the fact that the people who are most vocal in objecting to this overcorrection tend to also come across as promoting their pet causes) 4. Human personalities clashing in the way they inevitably do when you get a group of 10 or more people together and try to get them to pull in the same direction.
The rise of these LLMs has complicated these problems, especially since our ability to distinguish LLM-generated garbage from human-generated garbage is as unreliable as the LLM-generated garbage itself. We've been struggling to figure out how to manage this problem for around 3 years now. As they continue to improve, we will find it even harder than it is today to distinguish LLM-generated garbage from human-generated garbage. We will need to continue to watch for people trying to use Wikipedia to sell something or promote their pet cause - just now they will be assisted by LLMs in doing so. We will need to continue to craft and hone and enforce our policies and guidelines in order to ensure that the information contained in Wikipedia is as reliable as we can make it - including by ensuring that the policies we craft around the use of LLMs are actually actionable, and not a knee-jerk reaction to what feels like an unmanageable increase in junk. Human personalities will continue to clash, but now some of them will get an LLM to do their arguing for them.
Here are what I see as the biggest challenges LLMs pose for Wikipedia over the next 10 years:
1. The massive amount of LLM slop on the internet will make it MUCH harder for editors to identify reliable sources - but this isn't exactly new. The fact that all our content is licensed under Creative Commons means that even back in 2006 when I started editing we had problems with circular referencing and citogenesis. The fact that Wikipedia is one of the largest freely-licensed sources available to train LLMs means that a lot of their source information comes from Wikipedia, making large portions of their output similar to the circular referencing and citogenesis issues we've been dealing with for 20 years. We'll just have to get better at training people to look for the needles of good sources in the haystack of crap.
2. The fact that LLMs do have some actual use in helping to edit and refine human input, and that schools have essentially given up on preventing students from using them in favor of attempting to teach students to use them "correctly", means that we will need to be flexible in allowing some limited use of LLMs in the writing process, and not shaming people who actually use them as tools rather than getting them to do their thinking for them. However, the trend I see here is people wanting to reflexively ban them entirely instead of experimenting and playing around with them in a spirit of curiosity, seeing the areas where they can do the maximum good with the minimum harm, and crafting rules for use around those.
3. Casual readers who, say, want the answer to a trivia question at the bar will take Gemini's AI-generated summary instead of clicking through to read the Wikipedia article - but again, that's nothing new. Even before Gemini's AI-generated summaries were available, Google would output the answer to a question like "How old is Hillary Clinton?" in a little box, so people weren't clicking through then. And even before that, when people would click through to the article, they'd skim to find the information they wanted instead of reading the whole thing.
4. No one is getting any younger. 10 years from now, today's 40 year olds will be 50, 50 year olds will be 60, and 60 year olds will be 70, and the list of Deceased Wikipedians will have grown. Meanwhile, today's 10 year olds will be 20 year olds. If you ask me, the two groups of people best positioned to be Wikipedia editors are retirees and university students. They have the time, the resources, and the education. Today's 10 year olds - the ones who are running around annoying everyone by yelling six seven, and last year kept saying skibidi toilet - will be prime age to begin editing Wikipedia. But if the schools don't start teaching them to think, to research and write and cite sources, if the schools let them get away with letting Chat GPT do their homework for them, we will have a really hard time recruiting editors and maintaining quality as today's retirees drop off and join the graveyard.
Change is gradual. We won't see a huge change in the next 10 years; just an acceleration and amplification of the problems we've faced for the last 20. But I worry about the next 20, 30, 40 or more if we continue on this course. ~ ONUnicorn (Talk|Contribs) problem solving 20:03, 10 November 2025 (UTC)[reply]
There is no way I can spend the time reading and digesting the above wallpost, so I dropped it into AI and asked for a summary. That only took 5 minutes to read and absorb, and I think you made numerous excellent points about AI amplifying existing problems, not solving them, and about the problem of a younger generation that gives up on traditional methods of research and writing as they lean on LLMs. -- GreenC 22:26, 10 November 2025 (UTC)[reply]
Thanks for taking the time to engage. AIs will progress as they progress (and here, I'm using "AI" in the descriptive-linguistics sense, as most people use the term), and I have no dog in that fight. I'm skeptical of your argument about the effect of AIs/AI technology on Wikipedia over a decade, User:ONUnicorn. In any case, AIs, broadly considered, will present a massive competitive option to our readership in the ongoing natural human search for information, and in this way they will greatly affect Wikipedia. Moreover, as they improve from the state in which we find them in 2025, two and a half years after the initial non-reasoning chatbot LLMs of 2023, we who toil in the mines of writing Wikipedia will find that the very competition that AIs offer our readers, by providing alternative options, will (at the margin) affect the return (satisfaction, sense of long-term value, etc.) that we editors get from writing on Wikipedia, and thus many of us will write less, at the margin. Some may write more, or do many other things to make up for this technology evolution, but change is coming nonetheless. N2e (talk) 02:37, 13 November 2025 (UTC)[reply]
This news doesn't seem to bode well for the future of the site - according to that PDF, the FBI gave Tucows a month. I think a month is enough for the person behind it to flee to somewhere FBI-proof, if they're not there already. sapphaline (talk) 12:59, 10 November 2025 (UTC)[reply]
(I'm not sure where the best place to put this is. Attempts at resolving this on his talk page have been unsuccessful, mainly because the user has not replied to either of my last two messages on his talk page, and an image message box at the top of Wikipedia:Categorization/Noticeboard told me to go to the village pump, but if there is a better place for this, feel free to move this there.)
I will admit that most of the chemical categories that I have created have names on the longer side, and a different user has expressed concern about this. However, I have never seen JWBE give any hint that this was his reason, and even if it was, Wikipedia:Categories for discussion would still be a better option (especially because deletion is not the only possible solution) than clearing my categories in order to get them speedy deleted, thereby bypassing consensus out of a sense of superiority fueled by the fact that most Wikipedians who participate in categorization do not have a PhD in organic chemistry. Also, in the discussion about merging Category:(cyano-(3-phenoxyphenyl)methyl) 3-methylbutanoates into its subcategory, no one mentioned the length of the category's title as a factor motivating the merge, even after the other reason (underpopulation) no longer applied. In fact, the nominator proposed merging it into its child category rather than merging the child into the parent, even though the child category had a longer name than the parent category. This suggests that most Wikipedia categorizers do not consider the length of my categories to be a problem (although the sample size is somewhat small).
Additionally, I shall mention that when two categories that JWBE had created (Category:Gamma-lactams and Category:Delta-lactams) were tagged for speedy deletion, JWBE reverted those edits while calling them vandalism (links for gamma and delta). However, looking at the edit history of their pages and subcategories reveals that no members were added to either of those categories until after the respective category was tagged for speedy deletion.[nb 1] The most generous interpretation that I can think of is that JWBE forgot that he hadn't populated those categories yet, therefore (incorrectly) thought that someone else must have cleared them, and meant to call the clearing of his categories vandalism. (This seems unlikely, because JWBE had given each of Category:Gamma-lactams and Category:Delta-lactams an additional parent category just a few hours before they were tagged for speedy deletion, so he likely would have noticed that they were empty then.) In that case, his insistence on clearing my categories (he reverted my edits to repopulate them and called them rubbish) would be hypocritical. Even in the more likely case, where JWBE knew that he hadn't populated those categories yet but referred to tagging them for speedy deletion as vandalism anyway, the only reason I can see for why he would feel justified in reacting so strongly to what is essentially another user's failure to read his mind (likely from thousands of miles away), yet have no qualms about clearing categories that had had multiple members and thereby getting them speedy deleted, is a sense of superiority (or even perceived infallibility, to the extent that anyone who disagrees with him or makes an edit he doesn't like must either be a vandal or be creating rubbish) due to being a professional chemist. This type of mindset, with its consequent reinforcement of double standards, would seem incompatible with following established conventions. For example, if he meant to refer to my category as rubbish, the fact that he seems to think that most people who participate in Wikipedia:Categories for discussion should not have a say in the categorization of chemical articles would explain why he would want to bypass consensus in order to get my categories deleted.
Seems fine, if a bit long. I don't really follow ANI all that much, but it seems more cogently written than 90% of the posts there. I would try to summarize more; there's no need to speculate on the user's possible motivations. That seems only to invite issues of WP:ASPERSIONS being cast at you.
The discussion style of The_Nth_User is in fact so extremely voluminous as to be unreadable. He should stop any contributions in chemistry and find better places of personal interest. JWBE (talk) 22:17, 12 November 2025 (UTC)[reply]
I have also seen a *ton* of edits from new accounts of this form. Is this a new way of referencing anonymous accounts? Is the mechanism to display names broken? Or is this some sort of weird scripting vandal attack? KNHaw (talk) 06:38, 11 November 2025 (UTC)[reply]
Yesterday, I published this post on Requested articles. I want someone to create a new article about the Munich German dialect. I tried to create it before, but the article was deleted because it wasn't professional enough. Can someone with more skill at creating good articles recreate it? Karamellpudding1999 (talk) 08:01, 12 November 2025 (UTC)[reply]
Hello, I'm a student at LUISS University in Rome and I'm working on a presentation about Wikipedia's crowdsourcing process. One part of the work is to put myself in the shoes of a Wikipedia contributor and find out what he feels when editing or writing pages. The questions I would like to receive answers to are the following:
What does the editor think and feel:
What does the editor say and do:
What does the editor hear and see (about his surroundings):
What are his pains (what type of frustration does the user feel when contributing):
What are his gains (what does make him feel good when contributing):
Active support is really needed, so thanks in advance and have a great day.
You have already been told to read WP:NOTALAB. In my opinion at least, your research is being conducted inappropriately. You have continued to spam multiple user talk pages, uninvited, for which you risk getting blocked. You are also asking (badly-worded) questions without regard for anonymity, which your university should almost certainly have warned you against. You would do well to rethink your research, and do it properly. AndyTheGrump (talk) 11:01, 12 November 2025 (UTC)[reply]
I grant anonymity, and the questions are the ones from an empathy map, which is a highly researched method. I'm trying my best to conduct good research, and more people than you think have responded in a gentle manner. Tartaluca (talk) 11:04, 12 November 2025 (UTC)[reply]
Yeah, I understand what you're saying. I'm sorry I posted something on my user page. If you have any tips on how to continue the research in the right way, please tell me. Tartaluca (talk) 11:16, 12 November 2025 (UTC)[reply]
Do you mind if I ask what subject you are studying at university? Don't answer if you don't feel happy to, but it might help us guide you if we had a better idea of what you are trying to achieve. AndyTheGrump (talk) 11:52, 12 November 2025 (UTC)[reply]
That might explain why you seem not to have been given proper guidance. What you are doing is engaging in social science research, where students are (hopefully) given a little more advice before conducting surveys etc. You say you are using Empathy map (on which we have an article, though not, in my opinion, a good one at all, so not helpful to this discussion), but you don't explain what you are intending to do with your results. As it stands, the answers you get are going to be a whole slew of very different answers to some ambiguous and open-ended questions (along possibly with a lecture or two on the gender-related aspects of these questions). How do you expect to condense that down and summarise it all? Research involves more than gathering data; you need to be able to do something with it at the end. AndyTheGrump (talk) 12:23, 12 November 2025 (UTC)[reply]
Technically, they could be said to have started a discussion at the village pump even if it wasn't the intention of this section. Alpha3031 (t • c) 13:01, 12 November 2025 (UTC)[reply]
That's to help guide student editing projects, not to help conduct research on Wikipedia itself. I agree with AndyTheGrump that the questions as given will not lead to much, but whatever the case, any researcher might benefit from gaining at least a little familiarity with the subject through the interviews posted on the Wikimedia TikTok channel. CMD (talk) 13:21, 12 November 2025 (UTC)[reply]
Hi, I was wondering this question myself when I was using the Vital Articles template, and wanted to reach out to the user @SethAllen623 to ask if he still works on his list, and if so, whether he would accept any help. If anyone can guide me, that would be much appreciated! ~2025-33093-42 (talk) 14:50, 12 November 2025 (UTC)[reply]
I was prepared to donate £15 today; then the prompts started: “would you like to add 60p to cover the transaction fee” - ok, fine. Then “would you consider making this an annual payment”. Then “would you switch this to £3 a month instead”, and then “can we please contact you”.
Good for you. I hope more people do the same and the WMF realises that you can't be ethical but then throw ethics out of the window when you are raising funds. Phil Bridger (talk) 22:54, 13 November 2025 (UTC)[reply]
Agree… we all understand the need for donations, but I too am getting very tired of the constant pop-ups. To now hear that the WMF do an additional “hard sell” when you do try to donate is discouraging. Blueboar (talk) 23:23, 13 November 2025 (UTC)[reply]
I would find it very annoying as well if there were 3 more questions after the initial donation attempt. With Grokipedia/Encyclopaedia Galactica coming, the WMF should be doing more to combat the threat. The WMF is winning by thousands of miles today, but we should not be complacent. And annoying donors is one of the things the WMF should not be doing. ✠ SunDawn ✠ Contact me! 06:47, 14 November 2025 (UTC)[reply]
Hi @GimliDotNet, I'm sorry you had a frustrating experience and it's very useful to get this feedback. It looks like you were giving in the UK or Europe, where we are required to ask for consent to send emails to donors. That, plus the additional suggested upgrades on your initial gift, introduced too much friction.
We have been running some extra, short tests this month in anticipation of the end of year push. This feedback is very actionable to us, and we can look for ways to streamline the options we put in front of donors like you. Thank you very much for considering a gift and for taking the time to share this input. SPatton (WMF) (talk) 20:48, 14 November 2025 (UTC)[reply]
Why do you have to be required to ask for consent? Surely you shouldn't dream of sending spam anyway? This is what I mean by my references to ethics above. Phil Bridger (talk) 16:46, 15 November 2025 (UTC)[reply]
Wikipedia's donation banners have become something of a meme among the general (online) public now... An r/interesting thread appeared on Reddit's front page yesterday (titled "Jimmy Wales, Co-Founder of Wikipedia, quits interview angrily after one question." -- not sure if I'm allowed to link the Reddit thread here) and has some funny comments, e.g.
Wikipedia is so dying, like we're so dead but it's for real this time. Please bro can you spare three fiddy?