While thinning out old revisions with the thin-out script, queries at some token offsets are failing. This might be caused by an overwhelming number of tombstones left behind by the many recent deletions.
The current workaround is to skip over those ranges:
- decode the pagestate with `Buffer.from('<pagestate>', 'hex').toString()` (the `new Buffer(...)` constructor is deprecated) and look for the `_domain` and `key` values
- get the token for that `_domain` and `key` with `select token('domain', 'key')` in cqlsh
- skip over the range by adding a `where token("_domain", 'key') > <token + n>` clause to the query
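The first step above can be sketched as a small Node.js helper. This is a minimal sketch, not part of the thin-out script: the exact pagestate layout is driver-specific binary data, so it simply decodes the hex and surfaces printable runs for manual inspection of the `_domain` and `key`. The example hex string is made up for illustration, not a real pagestate.

```javascript
// Decode a hex-encoded pagestate and extract readable substrings,
// so the _domain and key can be picked out by eye.
function decodePagestate(hex) {
  // Buffer.from() replaces the deprecated new Buffer(...) constructor.
  const decoded = Buffer.from(hex, 'hex').toString('utf8');
  // Keep only printable ASCII runs of 3+ chars to filter binary noise.
  return decoded.match(/[\x20-\x7e]{3,}/g) || [];
}

// Illustrative input: hex encoding of "en.wikipedia.org" (not a real pagestate).
console.log(decodePagestate('656e2e77696b6970656469612e6f7267'));
// -> [ 'en.wikipedia.org' ]
```

In a real failed batch, paste the pagestate reported by the script in place of the example string and look through the printable runs for the domain and title.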
Since those failed ranges often correspond to extremely wide rows, it would be good to record them and revisit them later, to make sure those super-wide rows are eventually thinned out as well.
To do so, let's record the failed pagestates below:
## wikipedia data-parsoid
-
## wikipedia html
-