As discussed, it would be great to get rid of the Spark 1.6 Puppet configuration on the Analytics Hadoop cluster so we can concentrate our efforts on Spark 2.x only. In theory nobody should still be using it for regular jobs, but to be sure a more careful approach is needed.
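As one part of that careful approach (a sketch of an assumed check, not something from the task itself), lingering Spark 1 usage could be looked for on each host before removing the packages, e.g. by scanning user crontabs for references to the Spark binaries:

```shell
#!/bin/bash
# Hypothetical pre-removal check: grep user crontabs for spark-submit/pyspark
# references that would break once spark-core and spark-python are removed.
found=0
for tab in /var/spool/cron/crontabs/*; do
  # Skip if the glob did not match any readable file.
  [ -f "$tab" ] || continue
  if grep -q 'spark-submit\|pyspark' "$tab"; then
    echo "Spark reference in $tab"
    found=1
  fi
done
[ "$found" -eq 0 ] && echo "no crontab references to Spark found"
```

Jobs scheduled via systemd timers or submitted interactively would of course not show up here, so this only narrows the search rather than proving the packages are unused.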
sudo cumin '(R:Class = profile::analytics::cluster::client or R:Class = profile::hadoop::worker) and not an-coord1001.eqiad.wmnet' 'apt-get -y remove spark-core spark-python && mv -f /etc/spark /tmp/etc-spark1-to-delete'
Also removed /etc/spark on an-coord1001, analytics-tool1001, and analytics1030 (hue).
I should have also excluded analytics1030 from the cumin command, since it is the coordinator on the test cluster. Puppet fixed it.