We hit a token error linked to the fact that Hudi can use HBase as a backend for its primary-key index. In Spark it can be worked around by rerunning the command (the error happens only once), but in Hive the error makes the underlying MapReduce job fail.
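Since the token error only occurs on the first attempt in Spark, the manual workaround of rerunning the command can be captured in a small retry helper. This is a generic sketch, not Hudi or Spark API: the `action` callable and the `is_transient` predicate are placeholders for the actual Hudi write and for whatever test identifies the token error.

```python
def retry_once(action, is_transient=lambda exc: True):
    """Run `action`; if it raises a transient error, retry exactly once.

    Mirrors the manual workaround of rerunning the Spark command after
    the first (and only) token error. `action` and `is_transient` are
    illustrative placeholders, not real Hudi/Spark calls.
    """
    try:
        return action()
    except Exception as exc:
        if is_transient(exc):
            # The token error is observed only on the first attempt,
            # so a single retry is expected to succeed.
            return action()
        raise


# Usage sketch with a fake action that fails once, then succeeds:
attempts = []

def flaky_write():
    attempts.append(1)
    if len(attempts) == 1:
        raise RuntimeError("token error")
    return "ok"
```

For Hive there is no equivalent retry hook, which is why the failure there blocks the MapReduce job outright.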
|Status|Assignee|Task|
|Open|None|T258511 Data Lake incremental Data Updates|
|Open|None|T231938 Get "edits hourly" on a daily basis|
|Resolved|Milimetric|T258532 [SPIKE] Prototype of incremental updates for mediawiki history for simplewiki, including reverts using apache hudi|
|Open|None|T262260 Make hudi work with Hive|