Right now the repo is extremely heavy and it's making deployments take a very long time. That's because of all the binary files we store there, and git does a terrible job storing things it can't delta-compress. One option would be to remove the git history, but we don't want to lose the history of our models, so I suggest we have two repos: one for R&D (!) and the other, with ten commits at most, for prod.
Thinking about whether these repos should share a common ancestor, I realized I don't understand the proposal. How would we update the shallow repo when model files change? Would we have to perform git history surgery each time?
We really want something like git-lfs that tracks the history but never forces you to download it.
Maybe we should instead focus on making git-lfs work in prod. If we have that, then we are done.
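For reference, a minimal sketch of what that would involve, assuming the model binaries live under a `models/` directory (the paths and patterns here are assumptions, not our actual layout). git-lfs records the tracked patterns in `.gitattributes`, roughly like:

```
# .gitattributes — written by `git lfs track "models/**"` / `git lfs track "*.bin"`
models/** filter=lfs diff=lfs merge=lfs -text
*.bin     filter=lfs diff=lfs merge=lfs -text
```

A clone made with `GIT_LFS_SKIP_SMUDGE=1 git clone <url>` then keeps the full history of the small pointer files without downloading any of the large objects, which is exactly the "tracks the history but never forces you to download it" behaviour described above.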
Alternatively, we could set up git-lfs outside of prod and add a secondary step to our prod deploys that allows us to copy stuff from our trusted git-lfs repo to a place that prod can grab it.
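That secondary step could be as simple as the sketch below: resolve the LFS pointers in a trusted checkout, then publish the model files to a directory prod can fetch from. The function name, paths, and `models/` layout are all assumptions for illustration.

```shell
# publish_models SRC DEST
# Copies model files out of a git-lfs checkout (SRC) into a publish
# directory (DEST) that prod can grab from. In real use you would run
#   (cd "$SRC" && git lfs pull)
# first, so the LFS pointers are resolved to actual file content.
publish_models() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    # Copy the contents of SRC/models into DEST.
    cp -R "$src"/models/. "$dest"/
}
```

The copy could equally be an rsync or an upload to object storage; the point is just that prod never talks to the git-lfs server directly.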
Noting that git-fat is already in production, but it's still not a ready-made fit for our use case. We need to be able to pull the model files for both production and labs deploys, and there's [something] wrong with the rsync server that prevents us from using it for labs.