Enable more accurate smaps-based RSS checking
Training xgboost models on the hadoop cluster runs into issues where
YARN regularly kills some, but not all, of the containers. A review of
YARN's code suggests this is because we are using the default RSS
calculation, which is documented as less accurate: it includes pages
the kernel is free to evict, and double- (or triple-, etc.) counts
read-only memory shared by many processes.
A custom implementation of the smaps-based algorithm was injected into
a background task while training mlr models, and the more accurate
accounting showed constant memory usage. Enabling this will let us stop
over-allocating memory to compensate for the discrepancy, requiring
250 GB less memory for the 9 hour training process.