When running Cargo (git master) with an external job runner script, cargoPopulateTable jobs in the job queue can assign conflicting _ID values to the rows they insert:
2018-11-15 06:27:55 cargoPopulateTable Intel64-haswell dbTableName=subarch replaceOldRows= requestId=034de88f415198e8124e94e3 (id=16904,timestamp=20181115062652) t=96 error =Wikimedia\Rdbms\DBQueryError: A database query error has occurred. Did you forget to run your application's database schema updater after upgrading? Query: INSERT INTO `cargo__subarch__NEXT` (`CFLAGS`,`CHOST`,`CPU_FLAGS`,`CPU_Family`,`Description`,`Release_Date`,`Subarch`,`Title`,`_pageName`,`_pageTitle`,`_pageNamespace`,`_pageID`,`_ID`,`Features__full`) VALUES ('-march=haswell -O2 -pipe','x86_64-pc-linux-gnu','aes avx avx2 fma3 mmx mmxext popcnt sse sse2 sse3 sse4_1 sse4_2 ssse3','64-bit Intel Processors','The intel64-haswell subarch specifically supports processors based on Intel\'s Haswell microarchitecture. Haswell desktop processors are branded as 4th Generation Intel Core i3, Core i5, and Core i7 Processors.',NULL,'intel64-haswell','4th Generation Intel Core (Haswell)','Intel64-haswell','Intel64-haswell','0','1793','21',NULL) Function: Wikimedia\Rdbms\Database::insert Error: 1062 Duplicate entry '21' for key '_ID' (localhost)
To reproduce this issue, run an external job runner script with a --cores option greater than 1 and regenerate a Cargo table. A successful workaround is to remove the --cores option so that the script executes jobs only one at a time.
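For illustration only, here is a minimal Python sketch of the kind of race that would produce this duplicate-key error, assuming each parallel job derives the next _ID from the rows already present at the time it reads the table. The table layout and function names are hypothetical and are not Cargo's actual code:

```python
# Hypothetical sketch of the suspected race: two parallel cargoPopulateTable
# jobs each compute the next _ID from the rows already written, so jobs that
# read the table at the same time pick the same value.

rows = [{"_ID": 20, "Title": "Existing row"}]

def next_id(table):
    # Non-atomic "read current maximum, add one": fine for a single job
    # runner, racy when two jobs run in parallel.
    return max(r["_ID"] for r in table) + 1

# Both jobs read the table before either one writes its batch...
id_for_job_a = next_id(rows)   # 21
id_for_job_b = next_id(rows)   # also 21

rows.append({"_ID": id_for_job_a, "Title": "Row from job A"})
rows.append({"_ID": id_for_job_b, "Title": "Row from job B"})

# A UNIQUE key on _ID would reject the second insert with
# "Duplicate entry '21' for key '_ID'".
print([r["_ID"] for r in rows])   # [20, 21, 21]
```

This matches the observed behavior: with --cores removed, only one job computes and inserts at a time, so the collision never happens.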
Desired behavior: Cargo should assign unique _ID values so that cargoPopulateTable jobs can be executed in parallel without duplicate-key errors.
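As a hedged sketch of one possible direction (not a claim about how Cargo is or should be implemented), the collision goes away if the database allocates the row ID atomically instead of each job computing MAX(_ID)+1 itself. The sqlite3 example below is purely illustrative and uses a made-up schema:

```python
import sqlite3

# Illustrative only: the database hands out _ID values serially, so
# concurrent jobs cannot pick the same one.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE subarch (_ID INTEGER PRIMARY KEY AUTOINCREMENT, Title TEXT)"
)

def insert_row(title):
    # No application-side _ID calculation; the insert itself is atomic.
    cur = conn.execute("INSERT INTO subarch (Title) VALUES (?)", (title,))
    return cur.lastrowid

print(insert_row("Row from job A"))   # 1
print(insert_row("Row from job B"))   # 2
```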