In Spark, you specify memory as K/M/G or Kb/Mb/Gb (case doesn't matter), and this gets interpreted as kibibytes/mebibytes/gibibytes = 2^10/2^20/2^30 bytes.
In Skein, specifying memory the same way gives you kilobytes/megabytes/gigabytes = 10^3/10^6/10^9 bytes! To get binary (base-2) units you must write the "i" explicitly: KiB/MiB/GiB.
This is why our Spark-on-Skein jobs end up configured as, for instance:
Skein master -> 3815 MiB
Spark driver -> 4G
With these settings the container has only ~4*10^9 = 4,000,000,000 bytes available, while Spark can request up to 4*2^30 = 4,294,967,296 bytes. When the container comes under memory pressure, YARN kills it.
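The size of the gap can be checked directly; the arithmetic below uses the two interpretations of "4G" described above:

```python
# Spark parses "4g" as gibibytes; Skein parses "4G" as gigabytes.
spark_bytes = 4 * 2**30   # what Spark believes it may use: 4 GiB
skein_bytes = 4 * 10**9   # what the YARN container actually gets: 4 GB

shortfall = spark_bytes - skein_bytes
print(shortfall)          # 294967296 bytes, i.e. ~295 MB missing
```

So the driver can overshoot its container by almost 300 MB before any real leak is involved.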
Proposed solution: parse the memory values Spark passes to Skein and convert them to explicit binary units.
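A minimal sketch of that conversion, assuming Spark-style inputs like "4g" or "512Mb"; the helper name and the exact regex are hypothetical, and the rounding direction (ceiling) is chosen so the converted container is never smaller than what Spark expects:

```python
import re

# Spark interprets these suffixes as binary units (2^10, 2^20, ...).
_SPARK_UNITS = {"k": 2**10, "m": 2**20, "g": 2**30, "t": 2**40}

def spark_to_skein_memory(value: str) -> str:
    """Convert a Spark memory string (e.g. '4g' = 4 GiB) into an
    explicit Skein-style string with the 'i' suffix ('4096 MiB')."""
    match = re.fullmatch(r"(\d+)\s*([kmgt])b?", value.strip(), re.IGNORECASE)
    if not match:
        raise ValueError(f"Unrecognized Spark memory string: {value!r}")
    amount, unit = int(match.group(1)), match.group(2).lower()
    n_bytes = amount * _SPARK_UNITS[unit]
    # Round up to whole MiB so the container never undercuts Spark's request.
    mib = -(-n_bytes // 2**20)  # ceiling division
    return f"{mib} MiB"

print(spark_to_skein_memory("4g"))    # "4096 MiB"
print(spark_to_skein_memory("512Mb")) # "512 MiB"
```

With this in place, a Spark driver configured with "4G" would get a 4096 MiB container instead of a 3815 MiB one, removing the ~295 MB gap.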