See https://gist.github.com/generall/68fddb87ae1845d6f54c958ed3d0addb
and https://medium.com/@vasnetsov93/shrinking-fasttext-embeddings-so-that-it-fits-google-colab-cd59ab75959e
These describe how to shrink fastText-style vector embeddings into a much smaller RAM footprint (e.g. small enough to load in Google Colab). Let's experiment with this to see whether it can help us.
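As a first experiment, here is a minimal sketch of one of the ideas involved: scalar-quantizing a float32 embedding matrix down to uint8 codes (4x smaller), keeping per-row min/scale values so approximate vectors can be reconstructed. The matrix here is random stand-in data, not our real embeddings, and the linked posts cover further tricks (e.g. pruning the vocabulary/ngram matrix) beyond this.

```python
import numpy as np

# Stand-in for our real embeddings: 10k words x 300 dims of float32 ~= 12 MB.
rng = np.random.default_rng(42)
embeddings = rng.standard_normal((10_000, 300)).astype(np.float32)

def quantize_uint8(matrix):
    """Scalar-quantize each row to uint8, keeping per-row min and scale."""
    lo = matrix.min(axis=1, keepdims=True)
    hi = matrix.max(axis=1, keepdims=True)
    scale = (hi - lo) / 255.0
    codes = np.round((matrix - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct approximate float32 vectors from the uint8 codes."""
    return codes.astype(np.float32) * scale + lo

codes, lo, scale = quantize_uint8(embeddings)
approx = dequantize(codes, lo, scale)

print(f"original:  {embeddings.nbytes / 1e6:.1f} MB")  # 12.0 MB
print(f"quantized: {codes.nbytes / 1e6:.1f} MB")       # 3.0 MB
print(f"max abs reconstruction error: {np.abs(embeddings - approx).max():.4f}")
```

The per-row min/scale arrays add only a few KB of overhead, and the reconstruction error stays small relative to the value range, so nearest-neighbor lookups should be largely unaffected; worth verifying on our actual vectors.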