While thinning out old revisions with the thin-out script, some token ranges are failing. This might be caused by tombstone overload after many recent deletions.
The current workaround is to skip over those ranges (see the sketch after this list) by:
- decoding the pagestate with `Buffer.from('<pagestate>', 'hex').toString()` and looking for the `_domain` and `key`
- getting the token for that `_domain` and `key` with `select token('<domain>', '<key>')` in cqlsh
- skipping over that range by adding a `where token("_domain", "key") > <token + n>`
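As a convenience, here is a minimal Node.js sketch of the decode step. The script name, keyspace and table names are placeholders, and the exact pagestate layout may differ per table, so treat this as a starting point rather than the exact procedure:

```js
'use strict';

// decode-pagestate.js (hypothetical helper): decode a hex pagestate from a failed
// request and print it, so the `_domain` and `key` can be read off manually.
const hex = process.argv[2];
if (!hex) {
  console.error('usage: node decode-pagestate.js <pagestate-hex>');
  process.exit(1);
}

// Buffer.from(..., 'hex') is the modern equivalent of new Buffer(..., 'hex').
const decoded = Buffer.from(hex, 'hex').toString();
console.log('decoded pagestate:', JSON.stringify(decoded));

// With the decoded _domain and key, in cqlsh (keyspace/table are placeholders):
//   SELECT token("_domain", "key") FROM <keyspace>.<table>
//   WHERE "_domain" = '<domain>' AND "key" = '<key>';
// then restart the thin-out run with an added restriction such as:
//   ... WHERE token("_domain", "key") > <token + n>
```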
Since those failed ranges often correspond to extremely wide rows, it would be good to record them and revisit them later, to make sure those super-wide rows are also thinned out successfully.
To do so, let's record the failed ranges below:
## wikipedia data-parsoid
- `token("_domain", key) > token('en.wikipedia.org', 'User:OlEnglish/Dashboard')'`
- en.wikipedia.org, Wikipedia:WikiProject_Biography/Deletion_sorting
- hy.wikipedia.org, Վիքիպեդիա:Նախագիծ:Վիքիընդլայնում
## wikipedia html
- en.wikipedia.org, User_talk:77.65.63.46 ... -793006042568050703
- pl.wikipedia.org, Funkcja_Β; 314000445674974489 ... 314000545674974489
## wikimedia data-parsoid
- commons.wikimedia.org, Commons:Quality_images_candidates/candidate_list