Spark jobs need the Hadoop client configuration (core-site.xml/hdfs-site.xml) available in order to access HDFS.
It is possible to push a ConfigMap with this configuration for each job, but that leads to pushing the same config multiple times.
So we should add a common ConfigMap for it (one per Hadoop cluster), as sketched below.
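For illustration, a shared ConfigMap could look like the following minimal sketch; the name `hadoop-conf`, the namenode address, and the file contents are all placeholders, not part of this proposal:

```yaml
# Hypothetical shared ConfigMap holding the Hadoop client config,
# created once per Hadoop cluster and reused by every Spark job.
apiVersion: v1
kind: ConfigMap
metadata:
  name: hadoop-conf          # assumed name; one such ConfigMap per Hadoop cluster
data:
  core-site.xml: |
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode.example.com:8020</value>  <!-- placeholder address -->
      </property>
    </configuration>
  hdfs-site.xml: |
    <configuration/>
```

Each job would then mount this ConfigMap and point `HADOOP_CONF_DIR` at the mount path; with Spark on Kubernetes the `spark.kubernetes.hadoop.configMapName` property mounts the named ConfigMap on the driver and executors.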