Wikipedia talk:Protection policy
This page is not for proposing or discussing edits to protected pages. To request or propose a change to a page that you are not able to edit, place a message on its talk page. If the page is fully protected, you may attract the attention of an admin to make the change by placing the {{Edit fully-protected}} template on the talk page.
This is the talk page for discussing improvements to the Protection policy page.
Archives: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18. Auto-archiving period: 2 months.
This project page does not require a rating on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The project page associated with this talk page is an official policy on Wikipedia. Policies have wide acceptance among editors and are considered a standard for all users to follow. Please review policy editing recommendations before making any substantive change to this page. Always remember to keep cool when editing, and don't panic.
This page is written in American English, which has its own spelling conventions (center, color, defense, realize, traveled) and some terms may be different or absent from other varieties of English. According to the relevant style guide, this should not be changed without broad consensus.
Proposed layout changes
I propose applying the following changes to the page:
- Reorder the sections (template protection was created later, hence its section was added at the end even though it is less powerful than Full) from this:
- PP protection
- Semi-protection
- EC protection
- Full protection
- Template protection
- to this:
- PP protection
- Semi-protection
- EC protection
- Template protection
- Full protection
- Simplify the table by removing the colour names (which are a remnant of when the locks weren't labelled – some, such as "White", are also incorrect nowadays): This is an example of what it could look like, while this is what it looks like now.
FaviFake (talk) 09:01, 23 August 2025 (UTC)
- The first is trivial, done. Primefac (talk) 16:34, 24 August 2025 (UTC)
- Primefac Thanks; regarding this:
  "[...] there are no colour names in the table?"
  My bad, I meant the small table at the top, the one between the lede and the 1st section. This is my proposed design to simplify it. Since there wouldn't be the colour names, the cell borders are imo unnecessary. FaviFake (talk) 10:08, 25 August 2025 (UTC)
Proposal: New protection level in between ECP and full
I believe we need this after pages like John F. Kennedy and Taylor Swift, which both have ECP, got unambiguous vandalism from extended-confirmed accounts. The latter was very transparently a case of somebody using a sleeper account to bypass protection. There have been many other cases where I've seen socks of blocked users try to get past semi-protection. It's evident this unfortunately won't always be sufficient even for times when full protection would be over-the-top. With that said, my recommendation is the threshold for a new protection should be at least 60 days and 1,000 edits, preferably ones that aren't just sandbox changes or dummy edits. Not yet sure what to use for a title, and I welcome ideas for one. SNUGGUMS (talk / edits) 17:49, 20 September 2025 (UTC)
- We've got one, it's called Template-protection. --Redrose64 🌹 (talk) 18:28, 20 September 2025 (UTC)
- That doesn't apply to main space pages, though. Templates are a separate thing. SNUGGUMS (talk / edits) 21:40, 20 September 2025 (UTC)
- Although the word "template" occurs in its name, it's simply a prot level intermediate between EC protection and full prot. It can be applied to pages in any namespace, and it does occasionally happen (examples in mainspace; user space; project space - there are others). --Redrose64 🌹 (talk) 22:35, 20 September 2025 (UTC)
- Most of the examples outside of template space are bot configuration, pages used as templates, and other sensitive or at least semi-sensitive and very rarely edited pages. I don't think it would make sense to have the same permission group cover both high-profile articles and the pages that are currently template-protected. I'm not ready to say we need a group between ECP and template-protection (see below), but if we started needing to protect articles at a level above ECP, but below full protection, I don't think overloading template protection is the way to go. Daniel Quinlan (talk) 22:53, 20 September 2025 (UTC)
- Already under discussion at User talk:Ymblanter#Donald Trump, then Wikipedia:Edit filter noticeboard#Disallow filter at Donald Trump, others I'm not aware of, and now Wikipedia talk:Protection policy#Modifying extended confirmed permission grants. ―Mandruss ☎ IMO. 22:14, 20 September 2025 (UTC)
- Editing filters are one possible solution. Regarding what Redrose64 has said, I wish some criteria was given for which non-admins have template protection editing rights. It wasn't clear when I looked through what the page currently gives on that. SNUGGUMS (talk / edits) 00:08, 21 September 2025 (UTC)
- The partial list is here; to these may be added all admins, since the templateeditor right is part of the admin bundle. Admins get it when passing a WP:RFA successfully; others who meet the guidelines at WP:TPEGRANT may be granted the TE right upon successful application at WP:RFP/TE. --Redrose64 🌹 (talk) 08:49, 21 September 2025 (UTC)
- I've thought this for a long time. I suspect there's a good reason why it doesn't exist, but I haven't heard why yet so will share my opinion. Firstly I don't believe 60 days and 1,000 edits would be much better. I'd much prefer to see 1Y/10K edits, and even a super-restrictive 10Y/100K (admins exempt) if that fails. A doubling up on ECP would simply delay/reduce disruption, but wouldn't solve much in the long-term. But I do generally have a lot more confidence in an editor who has been here 1 year over 1 month, or with 10K edits over 500, and so on; even if this doesn't always match the standards you'd hope for, there is a strong correlation based on experience that's undeniable.
- As for template editor protection, this appears worthless for most ECP-failing main-space articles based on the granting criteria, lack of these editors (<200), and the fact it's not an automated right like ECP. So I'm baffled as to why there isn't an automatically granted protection level between ECP and Full, not just for vandalism, but also for edit warring, BLP violations, and other serious issues. Then we can start raising the bar of editor standards for any articles with higher protection than ECP (like losing such auto-granted rights for violating policy), knowing that enough experienced editors are 'just as ~~bad~~ good' as newly-granted ECP editors. That might sound harsh, but like with other privileges (page mover comes to mind), there should be auto-granted rights that are also privileges (because they can be removed). I referenced page mover due to WP:PMCRITERIA referencing "no behavioral blocks or 3RR violations in the 6 months prior", which to me implies that should a page mover receive a block or violate 3RR then they should have the right removed and have to re-apply (correct me if wrong here). And yes I'm aware ECP editors can lose ECP rights for gaming among other things, so I'm only suggesting similar proportions for any higher protections. And before anyone is insulted by the term privilege over rights, it's intentional to distinguish the main difference between such rights.
- It'd also be accurate to say that 1Y/10K sounds like a clique with a 'one/two mistake(s) and you're out' type vibe, but I'm all for it if it means full protection really is the last resort rather than the go-to once ECP fails. Going from a restriction of a 1 month-old account with 500 edits to administrator only is an enormous leap, making it more often than not an over-reach when applied (because something less restrictive, but more restrictive than ECP, would nearly always make a lot more sense, even if only as a stepping stone). The most controversial part of higher protections would be holding experienced editors to a higher standard, specifically for articles with higher protections, as most editors don't appear to want to hold experienced editors to a higher standard than less experienced editors because of "contributions".
- Ideally there would be a higher standard of editing for those who don't struggle with following policy, guidelines, and general civil etiquette. As for now, experienced editors remain free to wade through the trenches of ANI and return to disruptive activities without any long-term incentive to change ways. The fact is we still overly rely on TBAN/CBAN when for certain cases we could just be removing a 1Y/10K editing right instead. Consider also that ECP editors losing rights is rare as it serves very little (delays any potential future disruption only by 1 month). 1Y/10K would postpone automatically re-acquiring rights by one year instead, along with another 10K edits.
- I'd otherwise appreciate any links to discussions/RfCs on this in the absence of any real change. CNC (talk) 18:22, 26 September 2025 (UTC)
- I do see what you mean on delays, CommunityNotesContributor, and FYI there is an ongoing RFC below titled "Revised proposal to improve extended confirmed grants" that proposes modifying EC qualifications to reduce chances of others gaming the system to obtain ECP rights. Feel free to leave comments on that. SNUGGUMS (talk / edits) 21:21, 26 September 2025 (UTC)
- Thanks, I read that RfC before your topic (it's actually how I found it), but it does nothing to solve the issue of the enormous gap between ECP and Full Protection. Will leave a comment though I guess. CNC (talk) 21:37, 26 September 2025 (UTC)
- If we can improve the EC gaming situation through less blunt measures (than a new protection level), the vast majority of full protection actions on articles will be due to content disputes and edit warring. And too many of those disputes involve multiple editors with pretty high edit counts. A new protection level isn't a great solution for that. Also, when sockpuppetry is less of a factor, warnings and blocks become a better tool than preventing most people from editing a page. I'm not saying we'll never need a higher protection level, but I don't think we're there yet if EC gaming can be mitigated. Daniel Quinlan (talk) 22:27, 26 September 2025 (UTC)
- I hear what you're saying about EC gaming per the RfC below, no arguments there, and my suggestion is completely unrelated to that tbf. It is specifically because "too many of those disputes involve multiple editors with pretty high edit counts", regarding the use of full protection. This alone is confirmation that ECP is not enough and that Full Protection is overkill. So it's high time experienced editors had additional privileges that can be easily revoked. That might sound like reverse-engineered logic, as it's with a longer-term perspective. I.e. disrupting an article with a new 1Y/10K protection level could lead to having YCP (let's call it) revoked; this would by default impose a minimum one-year editing restriction on YCP articles for such editors (unless appealed). No TBAN, CBAN, or sanctions, just revoking privileges that can be automatically acquired again. That's the TLDR I think. It'd be between a hammer and a warning. CNC (talk) 23:01, 26 September 2025 (UTC)
  - Having the option of revoking a permission as another tool alongside short-duration blocks is an interesting idea although I need to think about it more. Perhaps it could be a permitted remedy for some contentious topics or community sanctions in the future. I'm also not sure we need a new protection level for that. It could just as easily apply to EC or a revised EC with slightly increased requirements. Daniel Quinlan (talk) 23:43, 26 September 2025 (UTC)
- I'm not convinced being able to dramatically increase EC would gain consensus, and I see EC as having its own purpose here (mainly for vandalism and inexperienced editors), distinct from any higher protections (needed for more serious disruption), but sure I see your point overall. Another protection level would otherwise certainly help for PIA and other CTOP as editors walking on eggshells in that topic area certainly wouldn't be a bad thing. Most of us do it already, there's just a minority that don't bother it seems, which is always frustrating. After a while I think the realisation is that you don't have to bother, as there are rarely repercussions, so might as well not be too cautious (unless you have some form of self-regulation). Ideally it'd also help fill the gap between a warning and a TBAN, the latter being quite an extreme resolution, whereas revoking auto-privileges for a higher protection level is much less serious. Ideally it'd become routine. Maybe I'm just dreaming of an editing-class of self-regulating editors who are forever in fear of losing editing rights for a higher protection level. I hope that doesn't make me a bad person. CNC (talk) 00:37, 27 September 2025 (UTC)
Modifying extended confirmed permission grants
I don't think creating a higher permission group above ECP will be as effective a solution as improving how the extended confirmed permission is granted.
- Vandals gaming ECP already shift to other pages whenever their target is fully protected or otherwise protected by edit filters. Recently, Taylor Swift was vandalized because Donald Trump is fully protected. And Donald Trump is only the most recent target; it was United States Senate last month. There is no shortage of high profile pages.
- The main issue is that it's trivial to run up edit counts to 500 edits very quickly. Even though most extreme gaming is detected relatively quickly in several different ways, just a few minutes is long enough to seriously disrupt ECP articles, and some accounts slip through even with the new measures we have in place.
- One possible improvement would be adding a significant delay between reaching the 30/500 requirements and granting the right (e.g., a 10 day review period), but I don't think we're quite at that point yet.
The proposal:
- Update the site configuration so the `autoconfirmed` group is required before an account is granted `extendedconfirmed`. It's a small modification to the `wmgAutopromoteOnceonEdit` setting (see the enwiki settings).
- We allow several high-accuracy edit filters to revoke autoconfirmed. This is already supported natively. The edit filter managers would also update MediaWiki:Abusefilter-degrouped to be more general and less accusatory. We could also direct affected users to a specific noticeboard, although most of them will be blocked not long after the rights are revoked.
I think this will help reduce the biggest problem we have right now: the 5-30 minute delay between ECP being granted and an administrator at AIV acting on an automated report from an edit filter. Daniel Quinlan (talk) 21:49, 20 September 2025 (UTC)
- P.S. Thanks Sohom Datta, ScottishFinnishRadish, jlwoodwa, HouseBlaster and several others who helped workshop the idea in a chat. Daniel Quinlan (talk) 21:49, 20 September 2025 (UTC)
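To make the first bullet of the proposal concrete, the sketch below shows roughly how an autopromote prerequisite is expressed in MediaWiki site configuration. This is an illustrative assumption, not the actual enwiki value of `wmgAutopromoteOnceonEdit`: the real array shape may differ, and the added `APCOND_INGROUPS` line is the hypothetical change being discussed.

```php
// Illustrative sketch only — not the actual enwiki configuration.
// MediaWiki autopromote conditions use APCOND_* constants; adding an
// APCOND_INGROUPS check would make autoconfirmed a prerequisite for the
// extendedconfirmed grant, so a filter that revokes autoconfirmed also
// blocks autopromotion to extendedconfirmed.
'wmgAutopromoteOnceonEdit' => [
    'enwiki' => [
        'extendedconfirmed' => [ '&',
            [ APCOND_EDITCOUNT, 500 ],            // at least 500 edits
            [ APCOND_AGE, 30 * 86400 ],           // account at least 30 days old
            [ APCOND_INGROUPS, 'autoconfirmed' ], // proposed new requirement
        ],
    ],
],
```

Under this shape, revoking autoconfirmed via an edit filter would immediately make the account fail the `APCOND_INGROUPS` condition on its next edit, rather than waiting for an administrator to act.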
- I strongly support the above changes as a minimum enhancement for ECP page security. Creating higher page protection groups, no matter if it's 1,000 or 3,000 or 10,000 edits, will simply have bad-faith actors game the system on draft articles and pages. Some editors also have habits of adding in single words or sentences at a time, meaning high-quality editors who add in more in a single edit are disfavored by increasing numerical requirements. The latter are the people we want, not the former. BootsED (talk) 00:41, 21 September 2025 (UTC)
- Oppose the second part (edited from full oppose to opposing the second part, for clarity, 17:57, 21 September 2025 (UTC)), though not necessarily a problem with the proposal itself, which is a good idea. The problem I have is that `abusefilter-modify-restricted` is required to modify filters using a restricted variable, and `blockautopromote`, the one you are proposing using, is a restricted variable. Edit filter managers who are not administrators do not have this right bundled, and thus would be locked out of the filter entirely (I would see no change due to global rights, but would ordinarily be unable to edit it). I see this as a security problem, since it's removing filter editing from a group that should have it, that group being the 17 non-admins who are filter managers. Without assignment of the right to non-admin EFMs, I cannot support the proposal. And before anyone says put in for adminship, I tried on my technical merits already. EggRoll97 (talk) 08:42, 21 September 2025 (UTC)
  - Per phab:T405999 my oppose is an enthusiastic support. EggRoll97 (talk) 07:51, 2 October 2025 (UTC)
  - Why is it a security problem to have fewer people able to edit it? The filter will take an action only trusted to admins, so it is reasonable that only admins will be able to edit a filter that takes that action. ScottishFinnishRadish (talk) 11:05, 21 September 2025 (UTC)
- @ScottishFinnishRadish: Because the action isn't trusted to admins only, see [1]. The /tools interface allows non-admin EFMs to restore autoconfirmed, even despite not being able to edit the filter itself. Also, it shouldn't be a matter of "trusted to admins" or not, considering non-admin EFMs pass a request to be assigned the right, and thus should be as trustworthy as an admin in that regard. There should be no difference in what an admin can do in AbuseFilter and what a non-admin EFM can do. EggRoll97 (talk) 13:04, 21 September 2025 (UTC)
  - @EggRoll97: I have posted a revised proposal below which I believe should address your previous concerns independently of whether EFM permissions are changed in the future. Daniel Quinlan (talk) 20:49, 22 September 2025 (UTC)
- It looks like `blockautopromote` being listed under `wgAbuseFilterActionRestrictions` is going to be a problem for more concrete reasons. Actions on this list are disabled when a filter is throttled. Given the high-volume nature of one or two of the filters we would use for this purpose, they would be throttled frequently. I think we will need to remove `blockautopromote` from the list.
- Just to be clear, I don't believe it's a security risk that non-administrator EFMs are unable to edit some filters. We have a large number of administrator EFMs who can manage filters using a restricted variable. Daniel Quinlan (talk) 16:48, 21 September 2025 (UTC)
- Comment: see also Wikipedia:Edit filter noticeboard#Add the abusefilter-modify-restricted right to EFM. Codename Noreste (talk) 19:50, 21 September 2025 (UTC)
Revised proposal to improve extended confirmed grants
Background:
- The main issue with extended confirmed protection (ECP) is that it's trivial to run up edit counts to 500 edits very quickly. Even though most extreme gaming is detected relatively quickly in several different ways, just a few minutes is long enough to seriously disrupt ECP articles, and some accounts slip through even with the new measures we have in place.
- Full protection is not a good solution for this because it prevents editing to important articles and vandals gaming ECP will shift to other pages when their target is fully protected (as happened while Donald Trump was fully protected).
Revised proposal:
- Update the site configuration so the `autoconfirmed` group is required before an account is granted `extendedconfirmed`. It's a small modification to the `wmgAutopromoteOnceonEdit` setting (see the enwiki settings).
- We allow several high-accuracy edit filters to revoke autoconfirmed. This is already supported natively. The edit filter managers would also update MediaWiki:Abusefilter-degrouped to be more general and less accusatory.
- Remove `blockautopromote` from `wgAbuseFilterActionRestrictions` so the `blockautopromote` action won't be disabled when the filters have a high rate of matches (which already happens because ECP gaming happens at a high rate). This will also allow non-administrator EFMs to edit these abuse filters (they can already restore autoconfirmed when a filter removes it, so this is not a big deal).
This will help address the biggest problem we have right now with ECP: the 5-30 minute delay between ECP being granted and an administrator at AIV acting on an automated report from one of several edit filters. Daniel Quinlan (talk) 20:47, 22 September 2025 (UTC)
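As a rough illustration of how the second and third bullets fit together, an edit filter condition paired with the `blockautopromote` action might look like the sketch below. The actual anti-gaming filters are private, so every threshold and condition here is a hypothetical example invented for illustration, not a real enwiki filter; only the variable names are standard AbuseFilter ones.

```
/* Hypothetical filter sketch in AbuseFilter rule syntax.
   Thresholds and conditions are invented for illustration only. */
user_editcount >= 450 &      /* approaching the 500-edit threshold */
user_age < 45 * 86400 &      /* young account (user_age is in seconds) */
page_namespace == 2 &        /* churning edits in own userspace */
edit_delta < 5               /* many tiny edits in a row */

/* Action configured on the filter: blockautopromote, which revokes
   autoconfirmed. With the proposed configuration change, losing
   autoconfirmed also prevents autopromotion to extendedconfirmed, and
   removing blockautopromote from wgAbuseFilterActionRestrictions keeps
   the action active even when a high match rate throttles the filter. */
```

The point of the sketch is the mechanism, not the conditions: any sufficiently high-accuracy filter could be paired with `blockautopromote` in this way.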
Survey (extended confirmed grants)
- Support, I think this would be an improvement in stopping some of the worst abuse and disruption. ScottishFinnishRadish (talk) 21:54, 22 September 2025 (UTC)
- Support, obviously. This is a no brainer.—S Marshall T/C 23:18, 22 September 2025 (UTC)
- Support, obviously per above. -- Sohom (talk) 23:46, 22 September 2025 (UTC)
- Support, as an EFM. – PharyngealImplosive7 (talk) 00:56, 23 September 2025 (UTC)
- Comment (Summoned by bot): I'm leaning support, but reserving my final !vote until I've mulled over implications of the technical changes, which are not as second nature to me as perhaps they are to the admins and functionaries who have responded already. That said, I do have some procedural concerns. First and foremost, is this really the appropriate place for this discussion? It is not concerned with making changes to the protection policy but rather user rights management. Of related concern, is this proposal being advertised in appropriate community fora? This would be package of non-trivial changes with implications to the threshold at which new users gain important indicia of initial community standing. I think it is very much deserving of a WP:CENT listing, and at the very least should be promoted via WP:VPP. SnowRise let's rap 05:41, 23 September 2025 (UTC)
- @Snow Rise: Immediately after posting this RFC, I posted notices on both WP:EFN and WP:VPP. As the changes are focused on the `extendedconfirmed` group, which is only used to allow edits to extended-confirmed protected pages (see Special:ListGroupRights), this page seemed like the most appropriate place. I also considered WP:EFN. This page is also watched by more people (2,447 compared to 351 for EFN). Daniel Quinlan (talk) 06:17, 23 September 2025 (UTC)
  - Thanks for the extra info, Daniel. I still think WP:UG was probably the right place for this discussion, but insofar as there is a listing at VPP, I think that is the more important piece. That said, I have some reservations in the discussion section below about the ambiguity concerning how the filters would be adapted; perhaps you could shed some light there? SnowRise let's rap 21:49, 23 September 2025 (UTC)
- I've posted a notice on WP:UG, and I've responded below. Daniel Quinlan (talk) 00:56, 24 September 2025 (UTC)
- Thank you again, Daniel; I appreciate the indulgence. I respect where this proposal is coming from (I've seen some flabbergasting displays of organized disruption of late that are almost off the charts compared to what I have historically observed in my time with the project), and I don't mean to be a spanner in the works just for the sake of it. I just want to make sure we are not moving so fast on this as a solution that we fail to create appropriate safeguards to prevent unintended consequences. SnowRise let's rap 01:31, 24 September 2025 (UTC)
- Support I have seen people who created several accounts, waited 30 days, then made 500 edits in a very short time. They then run wild. More tools are needed. Johnuniq (talk) 09:31, 23 September 2025 (UTC)
- Support given the nature of the LTA disruption this is intended to combat. I have a feeling this will only end up being one part of a more extensive solution, but it's a good start and the implementation costs are low. 184.152.65.118 (talk) 00:52, 23 September 2025 (UTC)
- Support, the background suggests a need for changes. --TenWhile6 21:37, 23 September 2025 (UTC)
- Oppose. I agree that the background suggests a need for change. Per SnowRise, however, I am unconvinced that the proposed change in its current form is the correct and proportionate response to that need. I'm concerned we've got a case of the politician's syllogism here. Thryduulf (talk) 23:41, 23 September 2025 (UTC)
- Support points 1 & 2. Only support point 3 if there's no other option. Removing the throttle on autoconfirmed revocations would remove a major failsafe. A single misconfigured filter could lead to up to 100% of autoconfirmed users having the permission revoked upon edit. Yes, we could fix that relatively quickly, but it seems like something to be avoided if at all possible. Otherwise this is an xkcd:2677 problem. I've talked to Daniel Q by email and it seems to me (I'm not 100% sure) that the throttling issues that justify point 3 could equally be fixed by splitting the AC-revocation logic from other functions of the filter in question. I think how we should approach it is this: consensus in this RfC should constitute community consensus to allow removing `blockautopromote` from `wgAbuseFilterActionRestrictions`, but whether we actually do this should be decided in a private Phabricator ticket where edit filter managers and sysadmins can discuss the use cases that might necessitate the change, and whether alternative options exist. -- Tamzin[cetacean needed] (they|xe|🤷) 01:44, 24 September 2025 (UTC)
  - I think that's reasonable, and the RfC should certainly not be invoked to hand-tie details if problems emerge or a better technical implementation becomes available that achieves the same objective. Authorizes if necessary, but does not require. 184.152.65.118 (talk) 20:52, 25 September 2025 (UTC)
- IP, for what it is worth, these RFCs are considered advisory; the sysadmins/deployers take the final decisions and if there is a strong technical reason to not do something, it will be brought up. Also, another thing to note is that the way the process works within the technical community is that RFCs are mandatory/heavily encouraged for these kinds of configuration changes. (See Requesting_wiki_configuration_changes) If a better technical implementation surfaces that is significantly different, consensus would typically be required if it is a long-term configuration option change. (There are obvious exemptions for security and WMF reasons but for the purposes of this RFC it is better to think of it as authorizing this specific way to be used if required rather than a "do what it takes" scenario). Sohom (talk) 02:50, 26 September 2025 (UTC)
- Support. I admit I don't fully understand what's going on here, but I understand enough to see that these changes could be beneficial and I trust that Daniel and the EFM team know what they're doing. Per Tamzin this should be seen as consensus allowing these changes, not requiring them. Toadspike [Talk] 11:20, 24 September 2025 (UTC)
- Support it's definitely better than what we have now, and I also recommend having someone review the edits before granting extended confirmation to users (partially to ensure no sleeping before activity begins). SNUGGUMS (talk / edits) 12:29, 24 September 2025 (UTC)
- Support I do agree with Toadspike, this shouldn't be seen as a requirement. If we can't make a filter that is sufficiently free from false positives, we shouldn't have one set to revoke autoconfirmed at all. At the same time, this is a pressing issue on a scale that isn't feasible for our current admin numbers, and frankly is overwhelming for anyone trying to slow down the disruption, so I'm not opposed to revoking autoconfirmed with very finely-tuned filters. EggRoll97 (talk) 23:41, 24 September 2025 (UTC)
- The question is, what does a 'finely tuned' filter look like? Part of the issue here is that a substantial portion of the general community has adopted an aggressive posture on "gaming ECP status", but the community at large has never bothered to work through what exactly that consists of, let alone define clear metrics for detecting it in a consistent and reasonably fair fashion. It's a very nebulously defined problem. We have a crystal clear, community-sanctioned standard for the thresholds that are meant to normally trigger ECP: 30 days, 500 edits. But the idea that some people are doing wrong by complying with the word of that standard, while actually engaged in block evasion (or otherwise planning disruption), while reasonable in the abstract, leads to some rather obvious issues since distinguishing these motives through edit data is no simple task, with few super reliable metrics. We have people saying that this can be done with well-calibrated filters, but there has always been a dearth of evidence as to why we should treat these tests as based on empirically valid analytics. Certainly there has been no public facing research (or even detailed reasoning) that I have ever seen in any of the discussions on this topic to prove (or even indicate) why the "evidence" relied upon to distinguish bad actors from run-of-the-mill new users is reliable. Worse, I'm not even certain that many of the most strident activists for relying on these tools even know precisely how they function. I'm more than a little concerned that every time I have asked for even a bare bones description of the filters, I have been met with utter silence.
There are plenty of innocent reasons why that may be happening, but I do worry it is entirely possible that many supporting this proposal (and others that have gotten us to this point) don't even really know how they operate, and can't give a cogent, detailed answer on why we can be confident in their reliability in catching bad actors or avoiding false positives. Or for that matter, some may be aware that the false positives are actually non-trivial (or highly unknown and subject to speculation), but are not eager to relay that fact, because they have decided themselves that the cost-benefit is worthwhile, in their personal views on the broader-level issues. This whole process is very insular and non-transparent so far, and that should be serious reason to consider pumping the brakes on going further whole-hog on letting an automated system slow down full authorization of new editors, given the implications for the project, ideological and practical. That's all the more a concern where ArbCom has independently expanded the implications/applicability of ECP massively in the last two years. I really think we need clearer answers on how all of this works before we just rubber-stamp a proposal like this. There's a rush to do something, because something is clearly needed (believe me, I've seen it; the manner in which huge swarms of sock- and meat-puppets can be readily organized to overwhelm regular good-faith editors has led to some astounding displays of disruption lately). But we could end up doing more harm than good if we don't base this particular decision on sound data. And meaning no disrespect to those who have supported this proposal already, but where is it? It really should come with or parallel to the proposal, and there should be no rush to greenlight this 'solution' until we have some more significant proof that it will do what is claimed, and that we have a clear idea of what the collateral consequences might be. 
SnowRise let's rap 17:40, 25 September 2025 (UTC)
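The detection problem described in the comment above can be made concrete: the only community-sanctioned ECP standard is mechanical (30 days, 500 edits), so any "gaming" detector has to layer behavioral signals on top of it, and each signal is ambiguous on its own. Below is a minimal sketch of what such signals might look like; every metric name and cutoff is invented purely for illustration and is not the logic of the actual private filters.

```python
# Hypothetical sketch only. Signal names and cutoffs are invented for
# illustration; they do not describe the real (private) edit filters.

def gaming_signals(edits, account_age_days):
    """edits: list of (unix_timestamp, namespace) tuples, oldest first.
    Returns illustrative signals for an ECP-gaming heuristic."""
    n = len(edits)
    meets_threshold = n >= 500 and account_age_days >= 30
    if not meets_threshold or n < 2:
        return {"meets_threshold": meets_threshold, "suspicious": False}
    span_seconds = edits[-1][0] - edits[0][0]
    edits_per_minute = n / max(span_seconds / 60, 1e-9)
    # Namespace 2 is userspace (e.g. a sandbox); a run-up done almost
    # entirely in one's own sandbox is only a weak signal by itself.
    userspace_fraction = sum(1 for _, ns in edits if ns == 2) / n
    return {
        "meets_threshold": True,
        "edits_per_minute": edits_per_minute,
        "userspace_fraction": userspace_fraction,
        # Each signal alone is ambiguous; only an extreme combination
        # plausibly separates bad actors from ordinary new editors.
        "suspicious": edits_per_minute > 10 and userspace_fraction > 0.9,
    }
```

The point of the objection above is precisely that the community has never vetted which combinations of such signals are empirically reliable, so the cutoffs here should be read as placeholders, not recommendations.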
- @Snow Rise, with respect, you're bringing some almost completely unrelated anxieties into this discussion. The EFMs are talking about ways to reduce bot-facilitated vandalism, not humans who might maybe be gaming to get into PIA. -- asilvering (talk) 18:39, 25 September 2025 (UTC)
- Well, I don't see how we could reasonably describe the concerns as unrelated; these filters (however they are calibrated) are going to impact all users who match a certain use profile. The rate of false positives will be a constant, relative to the coding of the filters, regardless of what the ultimate motives of the bad-faith actors are, and regardless of what our motives are for implementing or altering the system. If there's something in the detection algorithms that specifically looks for conduct highly indicative of machine-assisted gaming, that's great; I'd love to see that, as it would be one point in favour of a presumption of a system that is conservative in its flagging and well engineered to avoid false positives. The problem is, we haven't had confirmation of that here. It should really have come with the proposal, and it's not heartening that no one with that knowledge has stepped up to provide it since. And there's another concern here: once the community greenlights this change, it takes oversight substantially out of the hands of the general community, unless this proposal is augmented to include periodic review of the filters themselves. If not, the filter devs and maintainers have functionally unrestricted ability to change the criteria by which people get ECP status, and depending on just how drastic the changes are, it is unlikely that almost anyone in the community would even be aware, let alone someone inclined to question these changes. That's an awful lot of influence over the gates to full participation in the project consolidated in a few hands with essentially no community review. And even if we presumed that every one of those devs was so committed to community transparency that they would hold a meaningful community consult for every change (and come on, unlikely, right?), there would still be pretty significant potential for human error (per Tamzin's concerns raised above). 
So, regardless of the category of bad actor this proposal is meant to target, the implications about the potential side effects remain a serious concern, because we just don't know what the net itself looks like. I'm by no means per se opposed to this proposal: I've tried to make that clear. But we really should have some more answers about the current technical implementation of the filters before we rubber-stamp it. And very possibly some back-end protections so that, once this system becomes automated, technical decisions which further clamp down on when new users become full-rights users have some sort of transparent review process, be it automatic or periodic. SnowRise let's rap 19:04, 25 September 2025 (UTC)
- Non-admin EFM here. Without revealing too many details of private filters, the filters we are talking about enabling `blockautopromote` on are exceptionally accurate. Some of them have had no false positives for months at a time.
- The edit filter community already requires a very low rate of false positives, and consensus either at WP:EFN (public filters) or the mailing list (private), to enable a disallow filter, which stops the action (edit, move, etc.) without removing any permissions. In that way, we are already quite conservative. For anything removing rights, we would probably be even more cautious and ensure that the filter has had almost zero or no false positives.
- Besides, all admins and non-admin edit filter helpers have view access to private filters, so it is quite likely that if anything is being changed unfairly, someone will speak up.
- A transparent review process would be nice for these filters, I know, but the problem is that LTAs are also watching the filter logs and would use any opportunity to make the filters less effective, potentially leading to false positives.
- Hopefully this has clarified how these actions would work without revealing the fine machinery of the filters themselves. – PharyngealImplosive7 (talk) 01:00, 26 September 2025 (UTC)
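The standing practice PharyngealImplosive7 describes is essentially a graduated gate: a filter earns progressively more dangerous actions only as its manually reviewed false-positive record improves. A sketch of that gate follows; the numeric thresholds are invented for illustration, since the real standards live in EFM practice and mailing-list consensus, not in code.

```python
# Illustrative sketch of the graduated review gate described above.
# Thresholds are invented, not actual edit filter community standards.

def allowed_actions(reviewed_hits, false_positives):
    """reviewed_hits: manually checked filter matches.
    Returns the most severe action this record would support."""
    if reviewed_hits == 0:
        return "log_only"
    fp_rate = false_positives / reviewed_hits
    # "Almost zero or no false positives" before anything removes rights.
    if false_positives == 0 and reviewed_hits >= 200:
        return "blockautopromote"
    # A very low false-positive rate (plus consensus) before disallow.
    if fp_rate < 0.01 and reviewed_hits >= 50:
        return "disallow"
    return "log_only"
```

In this framing, the dispute in the thread is not about the shape of the gate but about whether the community outside the EFM group can ever audit the numbers feeding it.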
- Thank you, Pharyngeal--that is indeed helpful. Honestly, I still have misgivings. Combining a lack of transparency on the specific functionality of the edit filters themselves (even for reasonable WP:OPAQUE reasons) with the grant of a blank check to make those same filters automated in perpetuity (because let's be honest, once greenlit, this way of doing things is all but certain not to ever be rolled back) leaves a lot of uncertainty and more or less permanently commits the project to a more hardline approach to ECP enforcement without corresponding oversight checks from the general community. I have nothing but great faith in the EFM community's intentions to act in good faith in the project's best interests, but at the same time, you all might make decisions hardening access to EC status that the larger community would not support, but can't object to, because only the EFMs and admin corps would ever know about them. And I'd probably be a lot less nervous about that, if this were not coming so close on the heels of ArbCom essentially making every CTOP topic subject to ECP. For years a vocal minority has been trying to convince this project to adopt a "registered editors only" policy, which has been roundly rejected by the broader community. And yet, so much access to the project has been made into the exclusive purview of the veteran editor, bit-by-bit in ArbCom decisions and arguably overzealous proposals meant to protect against disruption, that the distinction has less and less meaning. The community nominally continues to be strongly committed to the "encyclopedia anyone can edit" ethos, and continues to generally affirm the rights of unregistered editors to contribute, but in practice it does not want to do anything to put the brakes on ArbCom's increasingly sprawling remit as it places every half-way-controversial subject matter under CTOP and ECP. 
And at the same time, our fears of the barbarians at the gates cause us to increasingly support more and more restrictive measures in general community proposals as well. And the most frustrating part? I can't even bring myself to oppose this proposal outright, because despite my reservations, I also can't convince myself it is unnecessary, in light of some of the disruption I have seen lately. All I know for certain is that I am deeply depressed that these are our options. I know it's not a 1:1 relationship by any means, but I'm distressed that the slow but steady roll-back of the open-participation and diversity-of-perspective principles on this project is such a mirror of what is happening in our societies at large. I just don't like this feeling. And while I don't feel that the perspectives of the support !votes are unreasonable, I am bothered that I seem to be part of such a small minority of community members who have such heavy reservations about this increasing consolidation of access to shaping our content and even participating in the consensus process. Each individual step we take on this road might seem entirely reasonable in the context in which it is taken, but I cannot shake the feeling that they are collectively slowly choking the life out of the future of the project as a whole. Anyway, that's my last word on the subject. I am officially, but ambivalently, Neutral. SnowRise let's rap 05:42, 26 September 2025 (UTC)
- Sometimes there is no good option. We want to be open and transparent. Trolls want to troll. Denying tools to combat the trolls ends up driving away good editors who are fed up with wasting their time on a project that can't defend itself. Johnuniq (talk) 05:56, 26 September 2025 (UTC)
- Support per nom FaviFake (talk) 18:47, 25 September 2025 (UTC)
- Support points 1 & 2 per nom. Neutral on point 3 as I don't think I'm knowledgable enough to comment. Graham11 (talk) 00:22, 1 October 2025 (UTC)
Discussion (extended confirmed grants)
- Are we sure that third bullet point is necessary? Is the match rate really so high it'd hit a throttle based on a percentage of all recent edits? -- Tamzin[cetacean needed] (they|xe|🤷) 21:58, 22 September 2025 (UTC)
- Yes, 100%. I'll send you some details via email. Daniel Quinlan (talk) 22:17, 22 September 2025 (UTC)
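Tamzin's question concerns a throttle keyed to the share of all recent edits a filter matches. MediaWiki's AbuseFilter does have an emergency safeguard roughly along these lines, suspending a filter's dangerous actions when it matches too large a fraction of recent actions; the sketch below shows the shape of that check, but the parameter names and numbers are illustrative, not the production configuration.

```python
# Illustrative sketch of a percentage-based emergency throttle.
# Numbers are invented; they are not enwiki's actual configuration.

def emergency_disable(matches, total_recent_actions,
                      threshold=0.05, min_matches=2):
    """True if the filter's dangerous actions should be suspended
    because it matched too large a share of recent actions."""
    if total_recent_actions == 0 or matches < min_matches:
        return False  # too little data to judge the filter
    return matches / total_recent_actions > threshold
```

The exchange above suggests the match volume for the filters in question is genuinely high enough to be relevant to such a throttle, which is why the third bullet point was included.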
"We allow several high accuracy edit filters to revoke autoconfirmed."
– This is the most vague part of the proposal; as a general idea, what would such edit filters look like? fifteen thousand two hundred twenty four (talk) 08:25, 23 September 2025 (UTC)
- I'm years behind current edit filters, but they would be standard filters that detect certain patterns of edits or behavior, and which have the ability to remove the autoconfirmed status of the editor as a response. The comment about accuracy would mean that manual checking of what the filter decided has shown that the filter is very accurate. Johnuniq (talk) 09:33, 23 September 2025 (UTC)
- Wouldn't it make more sense to have the filters trigger a notification which admins could then decide whether or not to act on? While I think there is a principled argument for tightening scrutiny of the EC confirmation process, I'm more than a little concerned about the degree of automation involved here, and the fact that respondents are being asked to provide blanket endorsement of the filters involved without the specifics of those filters being worked out for vetting in advance of authorizing the proposal. It seems to me that there is substantial possibility of false positives or just far too onerous a standard for qualification of EC status in a manner that could have broad implications for new user involvement and retention. Shouldn't we at least make sure that an actual human signs off on each denial of EC status to an account that otherwise meets the agreed community metrics for the granting of said status? Maybe I am just lacking the technical background on these filters and missing something obvious as a consequence, but taken as a whole, with the ambiguities in the current version of the proposal, this feels a little cavalier. SnowRise let's rap 21:43, 23 September 2025 (UTC)
- That's how the filters work now (with scant few false positives), and then we have to full protect major articles for days at a time. The issue is that they're gaming EC before anyone has a chance to review the filter reports. ScottishFinnishRadish (talk) 21:47, 23 September 2025 (UTC)
- So no amendments to the filters themselves are immediately contemplated? We're just talking about changing the process so that technical revocation happens before admin review, rather than the other way around? SnowRise let's rap 21:54, 23 September 2025 (UTC)
- The filters are constantly being adjusted, tuned, and created. It's insufficient. ScottishFinnishRadish (talk) 22:20, 23 September 2025 (UTC)
- You don't have concerns that asking for the admin approval of withholding a user right (the usual criteria for which are defined by substantial community consensus) to be moved to the end of the process (or possibly obviated altogether, if admins/functionaries just stop keeping up with the log) creates a situation where there is a complete deficit of oversight here? This would mean that a small number of editors maintaining the filters would be able to put a very heavy thumb on the scale of who gets extended confirmed status. This in turn would put them in a position of substantially restricting the access of new users to significant portions of the encyclopedia, since ECP has vastly expanded in scope of application under ArbCom rulings and other developments on the project over the last couple of years. If we are not going to couple this proposal with some sort of more robust vetting of the development of the filters, this strikes me as too much idiosyncratic decision-making vested in too few hands (that are not expressly authorized by the community to make these kinds of broad decisions about what looks like suspicious/gaming behaviour). At a minimum, I think we need to discuss what happens if no admin reviews the filter log on an automated withholding of the user group membership after x amount of time. I also think we need way more public discussion of what the current filters look like, what 'scant few false positives' looks like, and how we can even be confident of such an appraisal with limited insight into whether a given user's actions were good or bad faith, other than begging the question on the assumption that they were attempting to game the system. Understand that I have seen a lot of the type of abuse that this proposal is attempting to address, so I appreciate both the motivation and the need, and I'd like to support on that basis. 
But we have, between community discussions and decisions implemented unilaterally by ArbCom, already greatly restricted access to huge swaths of the project, including essentially the entirety of CTOP areas. This is yet another heavy step in the direction of locking down the project increasingly as the sole purview of veteran editors, with huge consequences for diversity of viewpoints, editor recruitment, editor retention, and the workload deficit. I'm a little concerned about how light all of the details are for this proposal, considering the breadth of likely implications for all of those practical concerns. SnowRise let's rap 23:03, 23 September 2025 (UTC)
- We wouldn't enable the `blockautopromote` action except for filters verified to be exceptionally accurate. The goal of this proposal is to make Wikipedia more welcoming and open, not less so. Right now, it's possible to game ECP and disrupt a high-profile article to the point where nobody can improve it. It's possible to game ECP so that you can keep harassing someone indefinitely on their talk page. And our only recourse is to cut even more editors off from editing those pages. And those issues have gotten much worse in the last year.
- Some people are already clamoring for an even higher level of protection than ECP, and I don't blame them because of how bad these issues are, but I think a better approach is to take aim at the small number of bad actors who are gaming ECP to be disruptive and abusive. These filters will only take action on accounts which are extreme outliers compared to typical new editors and not on ordinary contributions. When accounts trigger these rules, they are already posted to AIV for review, false positives can be reported to WP:EFFPR, users will also be able to appeal any revocation at a noticeboard, and administrators, edit filter managers, and edit filter helpers all regularly review filters and filter hits. All of these mechanisms have been in place for years to make sure filters are working properly and are being used properly, and I believe these mechanisms will continue to be effective. Daniel Quinlan (talk) 23:47, 23 September 2025 (UTC)
- I don't disagree that what you describe is a reasonable strategy for trying to serve the twin aims of short-circuiting disruption while simultaneously preserving access to as many editors as possible--at least in the broad strokes. But I'm still none the wiser on the specifics that I for one would say are crucial to making sure that the cure doesn't become more problematic than the disease. What are the criteria by which these filters assess contributors as bad actors, based on 'extreme outlier' metrics? Where should I be looking to understand the present analytics by which these filters operate? Or better yet, can you summarize the behaviours which would typically trigger the filters? I can't imagine I'm the only respondent who would have these reservations and is not a complete technical dope, but would nevertheless benefit from a more detailed explanation of the current criteria / rules by which the filters parse the data. Again, believe me, I see the need. And if the responses so far are any indication, you won't need my !vote for a consensus here. But personally, I can't rubber stamp this with my support without a more thorough understanding of under what conditions the user rights would be denied. You have spoken about protections on the back end, but let's bear in mind that when we are considering unintended negative consequences here, we are specifically talking about the community members who will have the lowest level of understanding of how to challenge an error. SnowRise let's rap 01:22, 24 September 2025 (UTC)
- @Snow Rise, currently we are dealing with a particular vandal who makes 500 edits in mere moments, 30 days after account creation, by adding numerals to their sandbox. They move so fast that in the time it takes a human admin to look at their edits, they've already reached extended confirmed. When I was unfamiliar with this particular vandal and someone reported an ongoing run-up to me, they got from 100ish edits to 400ish edits in the time it took me to check my Discord messages and open the "editor's" contributions history to show 500 edits. I assure you that this behaviour is not even remotely like any human editor who could plausibly be operating in good faith. -- asilvering (talk) 18:44, 25 September 2025 (UTC)
- That's great, and if the filters are substantially engineered towards sending up a flag in only those kinds of situations, that's the kind of detail which could reassure me that the system is well-calibrated towards catching obvious bad actors with minimal false positives. The problem is, no one who has the requisite familiarity with the technical implementation of the filters at present has yet spoken up in this discussion to give even a basic description of how they operate. And until they do, I for one do not feel comfortable supporting the proposal. And if they do, it may be enough to shift my !vote, but it won't exactly dissipate my concerns altogether, since there is no oversight function (that I am aware of, anyway) for reviewing changes to those filters by the general community. Meaning once we authorize full automation of the withholding of EC status, the rules by which that automation is conducted could be radically changed to not be consistent with the status quo we had in mind when we permitted that change. SnowRise let's rap 19:14, 25 September 2025 (UTC)
- The LTA that asilvering is referring to is Salebot1 (WP:LTA/SB1), in case you didn't already know. SuperPianoMan9167 (talk) 21:11, 25 September 2025 (UTC)
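The run-up pattern asilvering describes (hundreds of sandbox edits within moments, right at the 30-day mark) sits at the unambiguous end of the spectrum: sheer edit rate alone separates it from any plausible human editing. A hypothetical sliding-window rate check is sketched below; the window size and time cutoff are invented for illustration and are not the actual filter's parameters.

```python
# Hypothetical rate check for an automated EC run-up. Window and cutoff
# values are invented; no human editor plausibly sustains this rate.

def is_automated_runup(timestamps, window=50, max_seconds=60.0):
    """Flag if any `window` consecutive edits span under `max_seconds`."""
    if len(timestamps) < window:
        return False
    ts = sorted(timestamps)
    return any(ts[i + window - 1] - ts[i] < max_seconds
               for i in range(len(ts) - window + 1))
```

A check of this shape is robust to the account pausing between bursts, since it inspects every window of consecutive edits rather than only the overall average rate.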
- That's great, and if the filters are substantially engineered towards sending up a flag in only those kinds of situations, that's the kind of detail which could reassure me that the system is well-calibrated towards catching obvious bad actors with minimal false positives. The problem is, no one who has the requisite familiarity with the technical implementation of the filters at present has yet spoken up in this discussion to give even a basic description of how they operate. And until they do, I for one do not feel comfortable supporting the proposal. And if they do, it may be enough to shift my !vote, but it won't exactly dissipate my concerns altogether, since there is no oversight function (that I am aware of, anyway) for reviewing changes to those filters by the general community. Meaning once we authorize full automation of the withholding of EC status, the rules by which that automation is conducted could be radically changed to not be consistent with the status quo we had in mind when we permitted that change. SnowRise let's rap 19:14, 25 September 2025 (UTC)
- @Snow Rise, currently we are dealing with a particular vandal who makes 500 edits in mere moments, 30 days after account creation, by adding numerals to their sandbox. They move so fast that in the time it takes a human admin to look at their edits, they've already reached extended confirmed. When I was unfamiliar with this particular vandal and someone reported an ongoing run-up to me, they got from 100ish edits to 400ish edits in the time it took me to check my Discord messages and open the "editor's" contributions history to show 500 edits. I assure you that this behaviour is not even remotely like any human editor who could plausibly be operating in good faith. -- asilvering (talk) 18:44, 25 September 2025 (UTC)
- I don't disagree that what you describe is a reasonable strategy for trying to serve the twin aims of short-circuiting disruption while simultaneously preserving access to as many editors as possible--at least in the broad strokes. But I'm still none the wiser on the specifics that I for one would say are crucial to making sure that the cure doesn't become more problematic than the disease. What are the criteria by which these filters assess contributors as bad actors, based on 'extreme outlier' metrics? Where should I be looking to understand the present analytics by which these filters operate? Or better yet, can you summarize the behaviours which would typically trigger the filters? I can't imagine I'm the only respondent who would have these reservations and is not a complete technical dope, but would nevertheless benefit from a more detailed explanation of the current criteria / rules by which the filters parse the data. Again, believe me, I see the need. And if the responses so far are any indication, you won't need my !vote for a consensus here. But personally, I can't rubber stamp this with my support without a more thorough understanding of under what conditions the user rights would be denied. You have spoken about protections on the back end, but let's bear in mind that when we are considering unintended negative consequences here, we are specifically talking about the community members who will have the lowest level of understanding of how to challenge an error. SnowRise let's rap 01:22, 24 September 2025 (UTC)
- We wouldn't enable the
- You don't have concerns that asking for the admin approval of withholding a user right (the usual criteria for which is defined by substantial community consensus) to be moved to the end of the process (or possibly obviated altogether, if admins/functionaries just stop keeping up with the log), creates a situation where there is a complete deficit of oversight here? This would mean that a small number of editors maintaining the filters would be able to put a very heavy thumb on the scale of who gets extended confirmed status. This in turn would put them in a position of substantially restricting the access of new users to significant portions of the encyclopedia, since ECP has vastly expanded in scope of application under ArbCom rulings and other developments on the project over the last couple of years. If we are not going to couple this proposal with some sort of more robust vetting of the development of the filters, this strikes me as too much idiosyncratic decision making vested in too few hands (that are not expressly authorized by the community to make these kinds of broad decisions about what looks like suspicious/gaming behaviour). At a minimum, I think we need to discuss what happens if no admin reviews the filter log on an automated withholding of the user group membership, after x amount of time. I also think we need way more public discussion of what the current filters look like and what 'scant few false positives' look like, and how we can even be confident of such an appraisal with limited insight into whether a given user's actions were good or bad faith, other than begging the question on the assumption that they were attempting to game the system. Understand that I have seen a lot of the type of abuse that this proposal is attempting to address, so I appreciate both the motivation and the need, and I'd like to support on that basis.
But we have, between community discussions and decisions implemented unilaterally by ArbCom, already greatly restricted access to huge swaths of the project, including essentially the entirety of CTOP areas. This is yet another heavy step in the direction of locking down the project increasingly as the sole purview of veteran editors, with huge consequences to diversity of view points, editor recruitment, editor retention, and the workload deficit. I'm a little concerned about how light all of the details are for this proposal, considering the breadth of likely implications for all of those practical concerns. SnowRise let's rap 23:03, 23 September 2025 (UTC)
- The filters are constantly being adjusted, tuned, and created. It's insufficient. ScottishFinnishRadish (talk) 22:20, 23 September 2025 (UTC)
- So no amendments to the filters themselves are immediately contemplated? We're just talking about changing the process so that technical revocation happens before admin review, rather than the other way around? SnowRise let's rap 21:54, 23 September 2025 (UTC)
- That's how the filters work now (with scant few false positives), and then we have to full protect major articles for days at a time. The issue is that they're gaming EC before anyone has a chance to review the filter reports. ScottishFinnishRadish (talk) 21:47, 23 September 2025 (UTC)
- Wouldn't it make more sense to have the filters trigger a notification which admins could then decide whether or not to act on? While I think there is a principled argument for tightening scrutiny of the EC confirmation process, I'm more than a little concerned about the degree of automation involved here, and the fact that respondents are being asked to provide blanket endorsement of the filters involved without the specifics of those filters being worked out for vetting in advance of authorizing the proposal. It seems to me that there is substantial possibility of false positives or just far too onerous a standard for qualification of EC status in a manner that could have broad implications for new user involvement and retention. Shouldn't we at least make sure that an actual human signs off on each denial of EC status to an account that otherwise meets the agreed community metrics for the granting of said status? Maybe I am just lacking the technical background on these filters and missing something obvious as a consequence, but taken as a whole, with the ambiguities in the current version of the proposal, this feels a little cavalier. SnowRise let's rap 21:43, 23 September 2025 (UTC)
- I'm years behind current edit filters but they would be standard filters that detect certain patterns of edits or behavior, and which have the ability to remove the autoconfirmed status of the editor as a response. The comment about accuracy would mean that manual checking of what the filter decided has shown that the filter is very accurate. Johnuniq (talk) 09:33, 23 September 2025 (UTC)
- Question EC happens after 500 edits plus 30 days. How are editors getting EC not already autoconfirmed? I think I missed something. RudolfRed (talk) 00:24, 24 September 2025 (UTC)
- Currently, EC is granted solely based on edit count and account age. It doesn't matter whether the account is autoconfirmed. If we add autoconfirmed as a requirement for EC being granted, it would mean that we could automatically revoke AC from accounts in the process of gaming EC via rapid edits, and keep those accounts from gaining EC until they can be reviewed by an administrator. For good faith users, it will have no effect because they will still have autoconfirmed when they reach the standard EC requirements. Daniel Quinlan (talk) 00:51, 24 September 2025 (UTC)
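Daniel Quinlan's proposed gating could be sketched roughly as follows. This is a hypothetical Python sketch of the rule being described; the class and function names are invented for illustration and do not correspond to MediaWiki's actual autopromote implementation.

```python
from dataclasses import dataclass

@dataclass
class Account:
    edit_count: int
    age_days: int
    autoconfirmed: bool  # under the proposal, an edit filter may revoke this flag

def auto_grants_ec(acct: Account) -> bool:
    # Current rule: 500 edits and 30 days, regardless of autoconfirmed status.
    meets_threshold = acct.edit_count >= 500 and acct.age_days >= 30
    # Proposed addition: also require autoconfirmed, so revoking AC holds the
    # account short of EC until an administrator can review it.
    return meets_threshold and acct.autoconfirmed
```

A good-faith editor who reaches 500/30 still holds autoconfirmed and is unaffected; an account whose AC was stripped by a filter stays below EC pending manual review.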
- Thanks for clarifying it for me. RudolfRed (talk) 01:50, 24 September 2025 (UTC)
- @RudolfRed: Also, admins are able to grant various rights to users, even to newly-registered accts with no edits yet. There are about sixteen of these, and they include: confirmed user; extended confirmed user; template editor. --Redrose64 🌹 (talk) 20:24, 24 September 2025 (UTC)
- Question: What kind of edits do users typically make when they're gaming the system? If they are mostly userspace edits, would it be helpful to change the requirements for the EC user right to a minimum number of mainspace edits? If they are mainspace edits, are they constructive? Are the filters just catching them because they are made quickly, or are the edits vandalism or something repetitive like adding an extra space where it's not needed? I can envision a scenario in which someone who is really determined to vandalize a high-profile page could get around these proposed changes by, for example, making a small number of inconsequential grammar changes per day until they reach 500 edits. I think the only foolproof way to prevent this from happening would be to create a new protection level above ECP that requires users to have an administrator-granted permission (e.g. rollback, or create a new one for this purpose) to edit the highest-risk pages. I2Overcome talk 06:13, 24 September 2025 (UTC)
- Sometimes they edit main space articles with more innocent-looking contributions, and other changes are to sandboxes. I haven't seen user page edits as often prior to being granted auto-confirmed rights or extended confirmation. SNUGGUMS (talk / edits) 12:29, 24 September 2025 (UTC)
- Trolls can and do choose to attack any kind of page, not just high profile ones. Edit filters can detect many of the persistent trolls but they cannot currently stop a troll from gaming their way to EC. This proposal is for some simple changes which could be implemented quickly. The result would be that edit filters could stop a new editor from reaching EC. People who maintain those filters will check what they do and ensure that false positives are rare. The penalty of a false positive would be that someone reaches 500/30 but is not extended confirmed—they can still keep editing and become EC after a manual review. Other plans (such as altering the way EC works) could take over a year to implement and would have zero flexibility. Johnuniq (talk) 01:45, 25 September 2025 (UTC)
- I want to say that continuing to rely on humans to play King Canute against this incoming tide is not a viable option, and that doing this is not going to resolve the problem. It will suppress some of the ECP gaming but a significant chunk of it will just be displaced or delayed. At some point, we will have to give up and make EC a right to be granted or revoked by humans.—S Marshall T/C 07:28, 25 September 2025 (UTC)
- Slightly tangential, but it may be worthwhile to assess potential collateral if the max permissible non-XC rate were reduced to the max permissible non-AC rate, assuming something to that effect has not already been done. 184.152.65.118 (talk) 00:52, 23 September 2025 (UTC)
- Honestly, whatever makes admins' work easier based on tweaking configurations sounds good to me, even if the technicals go slightly over my head at the same time (hence I'll refrain from the survey for not being clued up enough). Overall I'm much more interested in filling the protection level gap between ECP and Full Protection, or even understanding why there is such an enormous gap in the first place, but that's a completely different topic. CNC (talk) 21:42, 26 September 2025 (UTC)
- I always felt that ECP should have a higher threshold to move it more midway between semi and full. ~Anachronist (who / me) (talk) 23:16, 26 September 2025 (UTC)
PROTECTION ERROR!!!
I am a new (but registered) editor, and this page is semi-protected, so I should NOT be able to edit it, right? But I am able to EDIT THE PAGE!!! I think there is some technical error or something else. Anyways, thanks for reading this. :| --[many citations needed] (User Talk:Linkeditz) 22:20, 27 September 2025 (UTC)
- Wait, no, I messed up. I actually am auto-confirmed, sorry for the confusion, but anyways thanks again. :] --[many citations needed] (User Talk:Linkeditz) 22:32, 27 September 2025 (UTC)
- @Linkeditz: Just for future reference, these types of questions are generally best asked at the Wikipedia:Help desk or the Wikipedia:Teahouse. Regards. Daniel Quinlan (talk) 23:31, 27 September 2025 (UTC)
- Thanks for telling me this, @Daniel Quinlan. I thought it was an error with this page specifically, but I turned out to be wrong about the error because I am autoconfirmed and just did not notice I got a higher user rank, so sorry about that. Also, regards. [many citations needed] (User Talk:Linkeditz) 19:31, 28 September 2025 (UTC)