The captchas used on Wikimedia sites are not working very well: they obstruct humans and do not keep out bots well enough, burdening volunteers doing anti-abuse work. Many tasks have been filed about it over the years; this is intended to be a tracking task and a high-level overview of the whole sorry situation.
=== tl;dr ===
| human failure rate | major accessibility issues | spambots kept out | spambots missed |
| --- | --- | --- | --- |
| 20-30% (estimated) | visual only; English only | >99% (estimated) | ~2,000-10,000 / month |
=== Our captchas are bad at letting in humans ===
There is no easy way to separate good (human) and bad (bot) captcha rejections, but per T152219#3405800 a human failure rate of 20-30% seems to be a reasonable estimate. (Mobile app data collected some years ago, which was at the high end of that range, reinforces this.) That's extremely high. Furthermore, our captchas assume you can read (somewhat obscured) English text. Users with visual impairments have no way of getting through them at all ({T6845}; arguably, this could cause legal compliance problems as well), nor do users who cannot read or type Latin scripts; and the characters are distorted enough that people who don't speak English are at a disadvantage recognizing them ({T7309}).
| example of current captcha |
| {F31501192} |
We are also fairly unsophisticated about how we use captchas ({T113700}), so in some common new-user workflows, such as adding external links, the user is challenged over and over.
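For context, ConfirmEdit decides when to show a challenge via its `$wgCaptchaTriggers` setting; a rough sketch of such a configuration is below (the values are illustrative, not our actual production settings). Because each trigger fires independently on every matching action, a new user who adds external links across several edits is challenged every time.

```lang=php
// Illustrative ConfirmEdit trigger configuration (not the production values).
// Each trigger fires independently on every matching action, which is why a
// new user adding external links in several edits gets challenged repeatedly.
$wgCaptchaTriggers['edit']          = false; // every edit
$wgCaptchaTriggers['create']        = false; // page creation
$wgCaptchaTriggers['addurl']        = true;  // edits that add a new external link
$wgCaptchaTriggers['createaccount'] = true;  // account registration
$wgCaptchaTriggers['badlogin']      = true;  // after repeated failed logins
```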
=== Our captchas are bad at keeping out bots (and volunteers pay the price) ===
The captchas do keep out the most naive spambots (which are the majority of spambots, of course; we see about 100 failed account creation attempts for every successful one, humans included); experimentally disabling them has caused instant spam floods. But they are ineffective against even slightly sophisticated spambots, including ones not targeted at Wikimedia: per the investigations in T141490 and T125132#4442590, the captchas can be broken with off-the-shelf OCR tools, without any training or fine-tuning. Empirically, stewards have to manually block and clean up after thousands of spambots every month (per T125132#3339987), which is a huge drag on volunteer productivity (and arguably it is unfair and somewhat abusive to rely on volunteers' manual effort for tasks like that). The people doing this are already exasperated; they regularly call for help (see e.g. T125132 or T174877; there are many more) but are mostly ignored.
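To illustrate how low the bar is, here is a minimal sketch of the kind of attack those investigations describe: stock Tesseract invoked from PHP, with no Wikimedia-specific training. (The file name and the character whitelist are assumptions for illustration; a real bot would fetch the challenge image from the signup form.)

```lang=php
<?php
// Minimal sketch: try to solve a FancyCaptcha-style image with stock
// Tesseract, no training or fine-tuning. Assumes the `tesseract` CLI is
// installed; captcha.png is a hypothetical saved challenge image.
$image = 'captcha.png';

// Treat the image as a single text line (--psm 7) and restrict the output
// to lowercase letters, which is roughly what our captchas contain.
$guess = shell_exec(
	'tesseract ' . escapeshellarg( $image ) . ' stdout' .
	' --psm 7 -c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyz 2>/dev/null'
);
$guess = strtolower( preg_replace( '/[^a-z]/i', '', (string)$guess ) );

echo "OCR guess: $guess\n";
// A spambot would simply submit $guess with the form; even a modest
// per-image success rate is enough when attempts cost nothing.
```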
Occasionally, a more intelligent spambot completely overwhelms our defences, and we just disable new user registration on that wiki and wait until the operator gets bored and stops (e.g. T230304, T212667). If someone did that with the intent of disrupting Wikipedia (as opposed to making money via spam), this is probably one of the easier attack vectors today.
=== Improvements are held back by technical debt ===
There have been many discussions about improving things, but they went nowhere, for three reasons:
# The captcha code (the [[https://www.mediawiki.org/wiki/Extension:ConfirmEdit|ConfirmEdit]] extension) is one of the older and gnarlier parts of the codebase, and hard to work with.
# The captcha infrastructure is essentially unowned (the [[https://www.mediawiki.org/wiki/Developers/Maintainers|maintainers page]] puts the extension under the Editing team, but that does not reflect reality, nor does it make much sense given that team's focus on editing interfaces and client-side code).
# We mostly lack the infrastructure for measuring captcha efficiency, so even though some of the proposed changes are relatively easy to do, we would be flying blind (see the sketch below for the kind of measurement that is missing).
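On the last point, the measurement itself would not be hard once the right events were recorded; a purely hypothetical sketch (the event log and its format are invented, nothing like it exists today):

```lang=php
<?php
// Hypothetical sketch of the missing measurement: tally captcha pass/fail
// events from a log and report the failure rate per trigger. The log format
// is invented for illustration; today no such events are recorded at all.
// Note it still cannot separate human from bot failures without more signal.
$counts = [];
foreach ( file( 'captcha-events.log' ) as $line ) {
	// Invented format: "<timestamp> <trigger> <pass|fail>"
	[ , $trigger, $result ] = explode( ' ', trim( $line ) );
	$counts[$trigger][$result] = ( $counts[$trigger][$result] ?? 0 ) + 1;
}
foreach ( $counts as $trigger => $c ) {
	$total = ( $c['pass'] ?? 0 ) + ( $c['fail'] ?? 0 );
	printf( "%s: %.1f%% failure (%d attempts)\n",
		$trigger, 100 * ( $c['fail'] ?? 0 ) / max( $total, 1 ), $total );
}
```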
=== Past proposals / efforts ===
(TBD)