
Deploy improved FancyCaptcha
Closed, Resolved · Public

Assigned To
None
Authored By
tstarling
Jul 27 2016, 10:13 PM

Description

In 2014, I investigated FancyCaptcha's resistance to OCR. I found that it had essentially no resistance, that it could be trivially broken by open source software without image preprocessing or OCR engine configuration.

In two changesets, I implemented modifications which were confirmed to defeat such naïve OCR attacks. Specifically, I tweaked the tunable parameters to improve distortion of the baseline, and added low-spatial-frequency noise and a gradient to defeat thresholding.
These changes were never deployed to WMF. I propose now doing so.

Here is some representative output:

Old: old.png (929×235 px, 74 KB)
New: six grafik.png examples (~4 KB each)

The procedure to regenerate the captcha image set is documented at https://wikitech.wikimedia.org/wiki/Generating_CAPTCHAs

Related Objects

Status · Subtype · Assigned · Task
Resolved (assigned: none)
Resolved · BUG REPORT (assigned: Ladsgroup)

Event Timeline

Reedy changed the task status from Open to Stalled. Nov 1 2019, 11:35 AM
Reedy triaged this task as Medium priority.
Tgr added a parent task: Restricted Task. Jan 8 2020, 8:38 AM

@Reedy: Hmm, what exactly is this task stalled on?

Lack of any consensus. Lack of decent enough metrics to even deploy the change and see what happens. It's been open 4 years at this point, and no clear path of moving forward for deploying any of the improvements proposed.

It'll probably get declined at some point

Mainly lack of metrics; lack of consensus is a consequence. So the path forward would be T255208: Catalog and evaluate methods of analysis for Wikimedia captcha performance or something similar - and eventually, some reporting mechanism that gives some reasonably reliable numbers on how well the captcha performs against humans vs. against bots. It's a few weeks of work, IMO, it just has been hotpotato'd around between various departments.

Aklapper changed the task status from Stalled to Open. Nov 3 2020, 9:56 AM
Aklapper lowered the priority of this task from Medium to Lowest.

Lack of any consensus. Lack of decent enough metrics to even deploy the change and see what happens. It's been open 4 years at this point, and no clear path of moving forward for deploying any of the improvements proposed.

That doesn't make it stalled by definition but low priority :)

I might be stating the obvious, but a change like this shouldn't take 10 years to be deployed (and the last comment is more than three years old). Is there anything I can do to get this off the ground? Should I just make the patches, get them deployed, and check some (*waves at the air*) metrics? Should I just get T255208 done first? Anyone willing to help?


Given the amount of time that has passed and the lack of conclusive data on how this might improve FancyCaptcha's efficacy while not further hindering accessibility, I'm not sure a simple code-cleanup and deploy would be the best option here. We definitely wouldn't want to introduce a worse captcha experience for project users at this point.

AIUI the fundamental blocker here is not being able to differentiate between an increased captcha failure rate for bots vs. an increased captcha failure rate for humans. So we'd need captcha success rate stats (which we sort of have, but not very good ones) plus some sort of bot detection.

Also IMO the patch proposed here is a non-starter because it makes the captcha a lot harder to read, while the security gains would be limited at best. @Bawolff had some ideas for captchas which at least at first glance don't look harder for a human (see comments starting at T125132#4432800).

I feel like we're overemphasizing perfect stats here. It's not like the original captcha had extensive testing. Surely we can test how readable any given new captcha proposal is by asking for a bunch of volunteers from the community to solve some captchas and seeing how they do.

I agree with Brian here; a captcha is by design hard to measure. If you have a way to detect bot request failure, then why are we relying on a captcha in the first place? Just use that method to block the bots.

Sure, user testing works too. I'm not sure it's less effort. You can look at the effect of the captcha on known-human users (e.g. IPs from some institutional range), split things by user agent, etc.

You can look at the effect of the captcha on known-human users (e.g. IPs from some institutional range)

Sounds difficult... what network can possibly be guaranteed not to have any bot on it?

I agree it _might_ be possible to pick some statistical proxy which would give us an idea of the impact. For example, what portion of the users pass the captcha and then get blocked, and perhaps more importantly how many captcha attempts were needed by users who did pass the captcha and did not get blocked after making an edit.

Examples of output of the patch I'm about to make (four grafik.png attachments).

Change 990694 had a related patch set uploaded (by Ladsgroup; author: Amir Sarabadani):

[mediawiki/extensions/ConfirmEdit@master] Add negative kerning and lines to captcha

https://gerrit.wikimedia.org/r/990694
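The "negative kerning" idea in this patch can be illustrated with a toy sketch. This is not the actual captcha.py code, just a demonstration of the concept on tiny hand-made bitmaps: letters are placed with a negative advance so adjacent glyphs overlap, which removes the blank columns an attacker would use to segment the word.

```python
# Toy illustration of negative kerning: compose letter bitmaps with a
# negative advance so adjacent letters overlap. NOT the real captcha.py
# logic, just a sketch of the concept.

def compose(letters, kerning):
    """letters: list of 2D 0/1 bitmaps of equal height; kerning: pixels
    added between letters (negative means overlap). Returns the merged
    bitmap, OR-ing overlapping glyph pixels together."""
    height = len(letters[0])
    widths = [len(l[0]) for l in letters]
    # Total width shrinks when kerning is negative.
    total = sum(widths) + kerning * (len(letters) - 1)
    canvas = [[0] * total for _ in range(height)]
    x = 0
    for bmp, w in zip(letters, widths):
        for row in range(height):
            for col in range(w):
                if bmp[row][col]:
                    canvas[row][x + col] = 1
        x += w + kerning
    return canvas

# Two 3x3 "letters"
a = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
b = [[1, 0, 0], [1, 0, 0], [1, 1, 1]]

merged = compose([a, b], kerning=-1)
print(len(merged[0]))  # 5 columns instead of 6: the glyphs overlap
```

With `kerning=-1` the two glyphs share a column, so there is no ink-free column between them for a segmenter to cut at.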

Change 990697 had a related patch set uploaded (by Ladsgroup; author: Amir Sarabadani):

[operations/puppet@production] mediawiki: Use the new captcha

https://gerrit.wikimedia.org/r/990697

Change 990694 merged by jenkins-bot:

[mediawiki/extensions/ConfirmEdit@master] Add negative kerning and lines to captcha

https://gerrit.wikimedia.org/r/990694

FWIW, this is deployed in beta cluster as of now.

Change 990726 had a related patch set uploaded (by Ladsgroup; author: Amir Sarabadani):

[mediawiki/extensions/ConfirmEdit@master] captcha.py: Increase number of random lines in the text

https://gerrit.wikimedia.org/r/990726

^ This increases the number of lines, feel free to accept or abandon it. Some examples:

(Five example images attached: grafik.png)

I haven't thrown this into an OCR to check its performance though.

The examples do seem like the lines are a bit low; maybe there should be some lines at the bottom and some near the top of the word.

oh indeed, made some changes:

(Six example images attached: grafik.png)

Is it better?

I think so, it looks more random to me anyhow (this is just gut feeling, I haven't done any testing).

I ran Tesseract Open Source OCR Engine v4.1.1 with Leptonica on 1000 generations; on only 71 did it produce any output at all, and that output was mostly garbage:

( CutsSauna-
Ceeuighv ews —
ashySears
U Serubpadgy —
we
newts julips
bongdoar_
<vadenkama
=
( pathghebe
wi
\sewersaez—
——
Ss
~ spagemary—
<= Gegn
“ Fenépunk—
-eedasonja-
—waveduery
=
(CupssnowY
~<prdnknodez——
a
a
Sakhaurged
c_ ahmeddears—
Cstudsrusts——

In only one case did it produce the exact value, and even if you strip out the garbage, it still doesn't go higher than five correct cases in total.
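The accuracy counting described here (exact matches, plus near-matches once non-letter garbage is stripped) can be sketched as follows. The helper and the sample pairs are hypothetical, not the script actually used; it only assumes the captcha answers are lowercase ASCII words.

```python
import re

def ocr_accuracy(pairs):
    """pairs: list of (ocr_output, expected_answer) tuples.
    Returns (exact_matches, matches_after_stripping_garbage)."""
    exact = 0
    stripped = 0
    for out, answer in pairs:
        if out.strip() == answer:
            exact += 1
        # Drop everything that cannot be part of the answer
        # (the captchas use only lowercase ascii letters).
        cleaned = re.sub(r'[^a-z]', '', out.lower())
        if cleaned == answer:
            stripped += 1
    return exact, stripped

pairs = [
    ('( CutsSauna-', 'cutssauna'),  # garbage wrapping a correct read
    ('bongdoar_', 'bongdoar'),      # trailing garbage only
    ('=', 'waveduery'),             # pure garbage
]
print(ocr_accuracy(pairs))  # (0, 2)
```

Stripping garbage this way is roughly what an attacker gets for free with `tessedit_char_whitelist`, which is why the whitelisted run below scores higher.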

Change 990726 merged by jenkins-bot:

[mediawiki/extensions/ConfirmEdit@master] captcha.py: Increase number and position of random lines in the text

https://gerrit.wikimedia.org/r/990726

In only one case did it produce the exact value, and even if you strip out the garbage, it still doesn't go higher than five correct cases in total.

For comparison, on the set of 20 mentioned above (F4313219), Tesseract correctly recognizes all twenty of them.

Edit: That wasn't a fair comparison. With the old version, the rate is 33% (https://bawolff.toolforge.org/captcha/setB/), and with the newer one probably higher.

The options you give tesseract can make a big difference; in my old test I used -psm 13 -c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyz

We might want to consider making them not real words, to prevent spell checking attacks.

Fair; with those options, you get 7.8% accuracy.


I did some testing, and posted the results at T125132#9468294

Change 990697 merged by Ladsgroup:

[operations/puppet@production] mediawiki: Use the new captcha

https://gerrit.wikimedia.org/r/990697

This is deployed.

Resolvable for now then?

From an accessibility perspective, the proposed solution will decrease usability for people with blurred vision and dyslexia, and potentially other users with visual impairments, but it does not drastically decrease overall accessibility. That being said, I believe this is a fine solution for now.

@Bawolff How much does it matter if the font is FreeMonoBoldOblique or DejaVuSans? I see https://gerrit.wikimedia.org/r/c/operations/puppet/+/990697 where it was switched because the former font isn't installed on mwmaint hosts.

Should we look more into how to get that installed or is DejaVuSans fine?

fonts-freefont might be the package.

I'm resolving this.

(Some more work is needed: changing the font if suitable, comms, measurements, etc., but that doesn't need to happen in this ticket.)

re: font choice. Visually, there is slightly more noise with FreeMonoBoldOblique because of the serifs. This might affect the ability to break it, if that noise helps make the CAPTCHA more difficult to break. From a readability perspective, DejaVuSans seems fine, as the lack of additional serif noise may increase readability when the characters sit close together amid the noise of the abstract lines. But again, does that affect the ability to break it? Have these two fonts been tested against one another?


FreeSans is more secure against more sophisticated attackers, at the cost of somewhat lower readability (due to lower inter-letter spacing). DejaVu Sans and FreeMonoBoldOblique are roughly the same level of security imo.

From an accessibility perspective, the proposed solution will decrease usability for people with blurred vision and dyslexia, and potentially other users with visual impairments

I think the example in the task description oversells the readability of the current captcha (there is a random factor in how "hairy" it is, and the hairier ones can be quite difficult to read, even for average readers). Brian made a great dashboard where you can check out a more realistic set of examples: old captcha, new captcha.

The old captcha is hard to read in a certain way (characters can get quite distorted); the new captcha is hard to read in a different way (the characters aren't distorted but get squished together). I'm not sure what the net effect is.

Fwiw, my intuition around different text captcha methods:

There are basically 3 steps in attacking a captcha - remove extranous elements, separate the word into individual letters, identify each letter. Thus those are the three knobs we can control to adjust difficulty.

  • Non-letter elements (lines etc.): about equally hard for a computer and a human. If they are obviously different from letters, then both humans and computers can discard them. If they look the same as letters, they are hard for both. The best thing about this method is that off-the-shelf OCR software can't handle it well, so the attacker is forced to program their own thing, even if the algorithms involved aren't super complex. Thus we can get some benefit for very little drop in readability.
  • Overlapping letters: generally hard for computers and of medium difficulty for humans (relatively speaking). This aims to prevent computers from separating words into letters, so the entire captcha has to be taken as a whole. The algorithms involved in segmenting tend to be more complex, forcing the attacker to spend much more effort programming countermeasures.
  • Distorting letter shapes: generally hard for humans and easier for computers. This usually doesn't prevent a computer from segmenting a word into letters, and once the computer is dealing with one letter at a time, things become much easier: it has only 26 choices and only needs to decide which of the 26 the letter is closest to, which computers can usually do with ease if the text is readable by humans at all. Additionally, this part overlaps with OCR, so there are excellent off-the-shelf solutions.
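The "separate the word into individual letters" step above is classically done by projecting ink onto columns and cutting at blank columns. A toy sketch (assumptions: a binary bitmap, not real OCR code) shows why overlapping letters defeat it:

```python
def segment_columns(bitmap):
    """Split a binary bitmap into letter spans at fully blank columns.
    Returns a list of (start, end) column ranges that contain ink."""
    width = len(bitmap[0])
    ink = [any(row[c] for row in bitmap) for c in range(width)]
    spans, start = [], None
    for c, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = c
        elif not has_ink and start is not None:
            spans.append((start, c))
            start = None
    if start is not None:
        spans.append((start, width))
    return spans

# Two letters separated by a blank column segment cleanly...
spaced = [[1, 1, 0, 1, 1],
          [1, 0, 0, 0, 1]]
print(segment_columns(spaced))       # [(0, 2), (3, 5)]

# ...but overlapping letters collapse into one span, so the attacker
# must classify the whole blob instead of one letter at a time.
overlapped = [[1, 1, 1, 1],
              [1, 0, 1, 1]]
print(segment_columns(overlapped))   # [(0, 4)]
```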

Ok, thanks Bawolff, won't worry about the package then.

Is this (new?) captcha-related issue on beta cluster related? T355962

It's unrelated; the new captcha has been deployed to the beta cluster for weeks now. I know the root cause of that bug.

FYI, this rebroke captcha.py for Pillow 10 by reintroducing getsize; see also T354099

Some stats (web only; events/minute, 3-hour rolling average, last 7 days compared with previous 7-day periods; for the last one, % of failed captcha attempts out of all captcha attempts, 6-hour rolling average):

successful registrations: Screenshot from 2024-01-26 10-44-01.png
successful captcha attempts: Screenshot from 2024-01-26 10-43-55.png
failed captcha attempts: Screenshot from 2024-01-26 14-01-50.png
captcha failure rate: Screenshot from 2024-01-26 10-46-31.png

Zero effect on successful captcha attempts, but captcha failures are almost cut in half, and the changes correlate well with the time of the deploy (16:30 on the 25th). What does that mean?

I can think of two explanations:

  1. The average user finds the new captcha easier. We are down from a ~25% human failure rate to ~15%.
  2. Some class of not very smart spambots that could OCR the captcha into a string in the past but with near-100% failure rate now has its OCR broken so hard that it doesn't even try to submit anything.

Either way, this seems like a positive change (although in the second case, not very consequential).

I did some measurements similar to what Tim did in T152219#3405800; I suggest reading that comment before continuing.
Note that before the deploy the measurement covers a week, but after the deploy I have data for only a day (I will repost once a full week passes).
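The per-wiki failure-ratio measurement behind these graphs can be sketched like this (the counts are hypothetical, the real data comes from the analytics pipeline): compute each wiki's failure ratio and sort ascending; the flat middle section of the sorted curve (the "plateau") approximates the human failure rate on mostly-human wikis, while the steep tail is dominated by spambot-heavy wikis.

```python
def failure_curve(counts):
    """counts: dict wiki -> (failed_attempts, total_attempts).
    Returns (wiki, failure_ratio) pairs sorted by ratio ascending."""
    ratios = {w: f / t for w, (f, t) in counts.items() if t > 0}
    return sorted(ratios.items(), key=lambda kv: kv[1])

# Hypothetical counts for illustration only.
counts = {
    'aawiki': (95, 100),    # tiny wiki, mostly spambot attempts
    'enwiki': (220, 1000),
    'dewiki': (160, 1000),
    'frwiki': (180, 800),
}
for wiki, ratio in failure_curve(counts):
    print(f'{wiki}: {ratio:.1%}')
```

On real data, the large wikis cluster into the plateau (here around 16-22%) while outliers like the spambot-heavy wiki sit far up the tail.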

Wikis failure ratio graph:

Before deploy (one week) vs. after deploy (one day): two grafik.png attachments.

The plateau goes down from ~20% to 15-16%, which matches Tim's prediction from seven years ago (T152219#3405800):

If we switched to a different CAPTCHA solution, we would want to see the height of the plateau remain the same, or be reduced.

(Why? The plateau basically represents large-enough wikis with mostly human activity.)
Which is exactly what happened here: you could roughly estimate that we had a 20% decrease in human failure (from roughly 20% to 16%). And that is also aligned with what Tgr said in T141490#9491802.
Note that this doesn't mean it's better for a11y; it's just easier to read for humans overall. It might make it easier for certain people while making it harder for others, but the net change seems to be an improvement.

We know it's very likely the human failure ratio going down, because the slope for wikis with mostly spambot activity didn't move in a negative direction. It is too early to give an exact value for the derivative of the slope above an 80% failure rate, but I'm not seeing that slope stopping below 100% or having a lower derivative. I'll wait a week for an exact estimate of the impact on the spambots.

Here is list of large wikis failure ratio and the change:

wiki | failure ratio after deploy (1 day) | failure ratio before deploy (1 week) | difference
ukwiki | 6.9% | 23.8% | -16.9
metawiki | 24.3% | 40.9% | -16.6
fawiki | 19.4% | 34.1% | -14.7
kowiki | 6.2% | 19.9% | -13.7
thwiki | 14.5% | 27.5% | -12.9
arwiki | 24.5% | 37.0% | -12.5
bnwiki | 22.0% | 33.8% | -11.8
enwiktionary | 25.3% | 36.9% | -11.6
svwiki | 14.3% | 25.0% | -10.7
hiwiki | 30.2% | 40.7% | -10.5
ruwiki | 13.7% | 22.9% | -9.2
mediawikiwiki | 19.1% | 27.6% | -8.5
viwiki | 16.5% | 24.3% | -7.8
eswiki | 21.0% | 28.7% | -7.7
cswiki | 12.0% | 18.9% | -6.9
jawiki | 14.9% | 21.1% | -6.3
enwiki | 16.0% | 22.2% | -6.2
frwiki | 16.4% | 22.5% | -6.0
itwiki | 12.5% | 18.0% | -5.5
commonswiki | 19.3% | 24.4% | -5.1
zhwiki | 16.6% | 21.4% | -4.8
plwiki | 12.7% | 17.4% | -4.8
nlwiki | 21.5% | 26.1% | -4.7
uzwiki | 23.9% | 27.9% | -3.9
trwiki | 18.4% | 22.2% | -3.8
hewiki | 27.4% | 31.1% | -3.6
dewiki | 15.4% | 17.9% | -2.5
ptwiki | 23.3% | 23.8% | -0.4
elwiki | 18.2% | 17.5% | 0.7
fiwiki | 13.8% | 13.0% | 0.8
wikidatawiki | 28.7% | 25.5% | 3.2
simplewiki | 21.0% | 17.8% | 3.2
idwiki | 28.8% | 22.9% | 5.9

I want to note that the wikis with the highest reduction in failure rate were mostly wikis in languages not written in Latin script (Ukrainian, Persian, Korean, Thai, Arabic, Bengali, Hindi, Russian). That would explain the massive reduction in human failure rate.

Mentioned in SAL (#wikimedia-operations) [2024-01-30T15:06:41Z] <claime> Manual run of mediawiki_job_generatecaptcha.service following timer failure - T141490

Results for a week are in:

Regarding impact on human failure rate

The plateau is at 16%, down from 22% with the old one: a 27% reduction in failure rate for humans.

Figure_1_new_week.png (1×3 px, 74 KB)

Regarding impact on non-latin wikis:

On wikis that had more than 100 captcha attempts in the week of 26 Jan - 2 Feb, here is the result:

wiki | lang code | new captcha failure rate | old captcha failure rate | difference (%)
urwiki | ur | 19.8% | 38.1% | -18.3
hiwiki | hi | 23.8% | 40.7% | -16.9
fawiki | fa | 17.4% | 34.1% | -16.8
rowiki | ro | 11.7% | 27.7% | -16.0
bnwiki | bn | 19.0% | 33.8% | -14.7
arwiki | ar | 22.3% | 37.0% | -14.6
mywiki | my | 17.6% | 31.5% | -13.9
etwiki | et | 5.5% | 19.2% | -13.7
kawiki | ka | 17.0% | 30.3% | -13.3
mlwiki | ml | 15.6% | 28.8% | -13.2
kkwiki | kk | 14.0% | 26.8% | -12.9
mnwiki | mn | 7.0% | 19.6% | -12.6
ukwiki | uk | 11.4% | 23.8% | -12.5
metawiki | meta | 28.5% | 40.9% | -12.4
thwiki | th | 15.3% | 27.5% | -12.2
foundationwiki | foundation | 56.4% | 68.0% | -11.6
hewiki | he | 19.8% | 31.1% | -11.3
eswiki | es | 18.8% | 28.7% | -9.9
svwiki | sv | 15.3% | 25.0% | -9.7
cswiki | cs | 9.7% | 18.9% | -9.1
hawiki | ha | 9.5% | 18.6% | -9.1
viwiki | vi | 15.6% | 24.3% | -8.7
mediawikiwiki | media | 19.0% | 27.6% | -8.6
kowiki | ko | 11.3% | 19.9% | -8.6
trwiki | tr | 13.7% | 22.2% | -8.5
uzwiki | uz | 19.4% | 27.9% | -8.5
ruwiki | ru | 15.0% | 22.9% | -7.9
nowiki | no | 10.7% | 18.0% | -7.3
jawiki | ja | 14.0% | 21.1% | -7.2
enwikiversity | en | 28.8% | 35.9% | -7.1
ptwiki | pt | 16.8% | 23.8% | -6.9
srwiki | sr | 11.6% | 18.5% | -6.9
bgwiki | bg | 13.8% | 20.3% | -6.5
enwiktionary | en | 30.5% | 36.9% | -6.5
hywiki | hy | 18.0% | 24.4% | -6.4
enwiki | en | 15.8% | 22.2% | -6.4
sourceswiki | sources | 41.5% | 47.9% | -6.4
azwiki | az | 16.3% | 22.5% | -6.2
commonswiki | commons | 18.2% | 24.4% | -6.2
zhwiki | zh | 15.3% | 21.4% | -6.1
enwikibooks | en | 28.2% | 33.9% | -5.7
hrwiki | hr | 14.3% | 19.7% | -5.4
jawiktionary | ja | 35.8% | 41.2% | -5.4
frwiki | fr | 18.1% | 22.5% | -4.4
elwiki | el | 13.1% | 17.5% | -4.4
sqwiki | sq | 17.1% | 21.3% | -4.2
enwikivoyage | en | 28.4% | 32.3% | -4.0
cawiki | ca | 11.4% | 15.2% | -3.8
huwiki | hu | 16.1% | 19.6% | -3.5
mswiki | ms | 17.6% | 21.1% | -3.5
plwiki | pl | 14.4% | 17.4% | -3.0
simplewiki | simple | 14.8% | 17.8% | -3.0
itwiki | it | 15.2% | 18.0% | -2.8
dewiki | de | 15.9% | 17.9% | -2.0
dawiki | da | 11.7% | 13.6% | -2.0
skwiki | sk | 13.2% | 15.0% | -1.9
frwiktionary | fr | 38.2% | 39.3% | -1.1
fiwiki | fi | 13.7% | 13.0% | 0.7
wikidatawiki | | 26.3% | 25.5% | 0.8
idwiktionary | id | 36.9% | 33.9% | 3.0
eswiktionary | es | 52.7% | 49.5% | 3.2
enwikiquote | en | 39.4% | 35.3% | 4.1
frwikisource | fr | 58.2% | 53.6% | 4.7
slwiki | sl | 18.2% | 13.4% | 4.7
enwikinews | en | 50.0% | 44.5% | 5.5
zh_yuewiki | zh_yue | 11.1% | 2.8% | 8.4
nlwiki | nl | 34.9% | 26.1% | 8.8
enwikisource | en | 45.1% | 35.6% | 9.5
idwiki | id | 32.9% | 22.9% | 10.0
jawikibooks | ja | 68.0% | 56.4% | 11.6
ptwiktionary | pt | 50.6% | 38.9% | 11.7

The difference for non-Latin wikis is a 9.3% reduction on average (unweighted), while for Latin wikis it's a 2.8% reduction in captcha failure rate (which could come from anything, from non-native speakers to general improvement).

Given what we have, the two-tailed p-value for the null hypothesis that Latin vs. non-Latin script has no impact on captcha failure rate is 0.000139, so the null hypothesis is rejected. (Warning: my math/statistics is rusty.)

Notes: I ignored multilingual wikis, counted sr and sh as Latin wikis, plus some other small adjustments.
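One way to run such a significance check without assuming normality is a permutation test on the per-wiki reductions. The sketch below uses an illustrative subset of the values from the table above (13 non-Latin-script wikis and 13 Latin-script wikis); it is not the method that produced the 0.000139 figure, just a stdlib-only alternative.

```python
import random

def perm_test(group_a, group_b, n_iter=10000, seed=0):
    """Two-tailed permutation test for a difference in group means:
    shuffle the pooled values and count how often a random split is at
    least as extreme as the observed difference."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a = pooled[:len(group_a)]
        b = pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_iter

# Failure-rate changes (percentage points) from the table above.
non_latin = [-18.3, -16.9, -16.8, -14.7, -14.6, -13.9, -13.3,
             -12.5, -11.3, -8.6, -7.9, -7.2, -6.1]
latin = [-9.9, -9.7, -9.1, -8.7, -8.5, -7.3, -6.4,
         -5.4, -4.4, -3.0, -2.8, -2.0, 0.7]

p = perm_test(non_latin, latin)
print(p)  # small: the two script groups differ significantly
```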

Regarding the impact on bots:

The slope at 80% and above has been cut to one third (from 0.03278 percent/captcha attempt to 0.00947 percent/captcha attempt), which is not what we expected. There can be many reasons, including that the improvement in human captcha solving is masking the impact on the bots (as it's quite large), or the fact that there have been fewer captchas in total (bots not attempting?), etc. I will need to investigate this more, but any ideas would be more than welcome.

I have some experience on the grey market, and I can assure you that these changes didn't make the captcha any stronger against machine OCR. There are a lot of weaknesses remaining; some of them follow.

  • The random noise lines should not be grey, otherwise they can be easily discarded by the simplest contrast adjustment using any graphics library.
  • The operator of the OCR can detect the font you use; the OCR then crops every letter and matches it against an existing library of letter images for that font. The font should differ for each letter to prevent that, and the letter size should also vary.
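The first point is easy to demonstrate: if the noise lines are a lighter grey than the text, a single threshold separates them. This toy sketch works on a hand-made greyscale grid; a real attack would do the same with a graphics library such as PIL.

```python
def threshold(img, cutoff):
    """Binarize a greyscale image (0 = black ink, 255 = white paper).
    Anything lighter than `cutoff` is treated as background."""
    return [[0 if px < cutoff else 255 for px in row] for row in img]

# 0 = black letter stroke, 128 = grey noise line, 255 = background
img = [[255, 0, 255, 128, 128],
       [0, 0, 0, 255, 128],
       [255, 0, 255, 128, 255]]

cleaned = threshold(img, cutoff=100)
# All grey (128) noise pixels become background; only strokes remain.
print(cleaned)
```

This is why noise elements need to share the foreground colour (or a gradient) with the text to survive preprocessing.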

In general, don't reinvent the wheel; use an open-source captcha solution with higher strength that is still easy for humans to solve.

And when you do so, some of the bot makers will simply forward these captchas to real humans somewhere in India or China, who will solve 1000 captchas for 10 cents.

The real difference could only be made with Google reCAPTCHA or a non-evil alternative that uses IP address matching and browser fingerprinting to match the user against a blacklist.

I understand and agree, but this is just about raising the bar a little so the captcha can't be broken with off-the-shelf OCR at default settings. No matter what we do, there will be some way around it; it's just improving one layer of defense out of many. Further improvements to captchas are being discussed at the moment, and we will keep the community updated on those.