
Explore alternatives to browser fingerprinting for anti-abuse efforts
Closed, ResolvedPublic

Description

Currently, MediaWiki software allows permissioned users to stop a person from performing actions (e.g. editing, creating pages, messaging other users) by blocking their username, IP address, or IP range. Evading these blocks is simple on Wikimedia wikis.

The Anti-Harassment Tools team at the Wikimedia Foundation has been tasked with modernizing our blocking tools to make blocks more effective and harder to evade. The impetus is long-term abuse, but this project may reach further. There will be no silver bullet, but it is the responsibility of the WMF to install safeguards for our users, our staff, and the content that we host.

Browser fingerprints are being investigated in T213351 but there are alternatives that we should explore. Please use this task to discuss alternatives.

Event Timeline

Rather than focusing on the bad actor, why not focus on the target, giving them tools to protect interactions? For example, they could protect their talk page such that, to post, you need to know a "secret" word (similar to a captcha); this way interactions are "whitelisted".

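As a rough illustration of the idea (not MediaWiki code; the class and method names here are hypothetical), the check could be as simple as comparing a would-be poster's answer against a word the page owner chose:

```python
class TalkPageGate:
    """Hypothetical sketch of a 'secret word' gate on a user talk page."""

    def __init__(self, challenge: str, expected: str):
        self.challenge = challenge                 # question shown to the poster
        self.expected = expected.strip().lower()   # accepted answer, normalized

    def may_post(self, answer: str) -> bool:
        # Allow the edit only if the answer matches the secret word,
        # ignoring case and surrounding whitespace.
        return answer.strip().lower() == self.expected


gate = TalkPageGate("Name the mascot of this wiki", "puzzle globe")
print(gate.may_post(" Puzzle Globe "))   # matching answer is accepted
print(gate.may_post("random text"))      # anything else is rejected
```

Like a captcha, this raises the cost of drive-by posts without requiring the target to pre-approve individual users.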

I don't think whitelisting can work on a wiki, as working there depends heavily on communicating with each other.

What about a blacklist? We do that for email and Echo notifications, and the feedback has been positive.

Something as simple as a user-enabled time-limited semi-protection of their own talk page would certainly throttle some amount of abuse. A blacklist doesn't really work with LTAs, since they're using multiple accounts.
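A minimal sketch of that self-serve semi-protection, again hypothetical rather than actual MediaWiki code, assuming the only inputs are an expiry time and whether the editor is autoconfirmed:

```python
from datetime import datetime, timedelta, timezone


class SelfProtection:
    """Hypothetical user-enabled, time-limited semi-protection of a talk page."""

    def __init__(self, duration: timedelta):
        # The protection expires automatically after the chosen duration.
        self.expires = datetime.now(timezone.utc) + duration

    def blocks(self, editor_is_autoconfirmed: bool) -> bool:
        # While active, only non-autoconfirmed editors are turned away,
        # mirroring how semi-protection works on Wikimedia wikis.
        active = datetime.now(timezone.utc) < self.expires
        return active and not editor_is_autoconfirmed


prot = SelfProtection(timedelta(hours=24))
print(prot.blocks(editor_is_autoconfirmed=False))  # new accounts are blocked
print(prot.blocks(editor_is_autoconfirmed=True))   # established editors are not
```

Because the protection lapses on its own, the target gets relief from a burst of sock accounts without permanently closing their talk page.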

I think any conversation that starts from a place of "let's stop all abuse" is not going to go anywhere. There are lots of different types of abuse on the wiki, involving different methods, motives, and levels of sophistication. Solutions are not going to be one-size-fits-all, and a solution will probably be a poor one if it doesn't start from use cases.

Anti-abuse will always require a defence-in-depth approach; limiting the scope to just the stickiness of the banhammer is a mistake. The flip side is making abuse easier to find, investigate, and take action against. Reverting and blocking faster, and detecting more of it, also has a deterrent effect. Making blocks stickier will run into privacy problems, but making abuse easier to detect and deal with involves only code and UI design. Here are some thoughts:

  • Patching obvious shortcomings e.g. T196575
  • Workflow improvements e.g. T176867, T189391
  • Capability improvements e.g. T146837
  • Machine readability to help volunteer tool development and detection algorithms e.g. T56328
  • Transitioning harassment vectors (email in particular) into privileged information
  • Penetration testing of anti-abuse measures
  • User-managed blacklists as mentioned above

Thank you @MER-C. I completely forgot about T196575 — what an obvious first step!