Allow mysql consumer to continue in case of duplicate key error
Now that EventLogging can run on Kafka, it will try to pick up
where it left off based on consumer offsets. This might cause
events that have been consumed to be reconsumed, which could
result in mysql duplicate key errors. This change catches
those errors and continues.
Note that this might cause missed messages in the case of batch
mysql insertion. However, this is not worse than it was before,
where the consumer would just die and start back up again from the
end of the 0mq stream. As a future enhancement, we could
sequentially insert the batch of events in the case of a duplicate
key error, so that events that have not actually yet been inserted
have a real chance of making it into mysql.