Currently, during the first puppet run after instance creation, puppet runs the block-for-export script, which blocks until the NFS volumes that should be mounted are available. block-for-export uses showmount to determine this. showmount speaks the legacy MOUNT protocol, a separate RPC service, so it first contacts rpcbind on the NFS server machine to look up the port of the mount daemon, which it then queries to figure out the state of the NFS server. rpcbind runs on port 111, mountd is pinned to port 38466 through the RPCMOUNTDOPTS config, and NFS itself runs on port 2049.
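For reference, the port pinning and the lookup chain look roughly like this on a Debian-style NFS server (file location and hostname here are illustrative, not copied from our puppetization):

    # /etc/default/nfs-kernel-server (assumed location; ours is puppet-managed)
    # Pin rpc.mountd to a fixed port so it can be firewalled predictably.
    RPCMOUNTDOPTS="--manage-gids --port 38466"

    # Reproducing by hand what showmount does:
    rpcinfo -p nfs-server.example | grep mountd   # ask rpcbind (port 111) where mountd lives
    showmount -e nfs-server.example               # query mountd for the export list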
We have to block logins until the mount is available, before the instance starts allowing ssh logins, because otherwise there is a window in which /home isn't mounted yet: a user logs in, /home gets created on the local disk, and that directory is then shadowed when the NFS mount finally lands, with confusing results. The block-for-export script runs with a 180s timeout, so even though we don't notice this often, it can still happen today whenever the mount isn't available before the timeout expires.
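The current check boils down to something like the following loop (a minimal sketch with an illustrative hostname; the real block-for-export script differs in detail):

    #!/bin/bash
    # Wait up to 180s for the NFS server to answer a showmount query.
    # If the timeout expires we give up, which is exactly the window in
    # which the /home race described above can still bite.
    server="nfs-server.example"   # hypothetical hostname
    deadline=$((SECONDS + 180))
    until showmount -e "$server" >/dev/null 2>&1; do
        if (( SECONDS >= deadline )); then
            echo "timed out waiting for NFS exports on $server" >&2
            exit 1
        fi
        sleep 5
    done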
The only reason we have to use rpcbind at all, or allow instances to talk to rpcbind or rpc.mountd, is this showmount check, which relies on the MOUNT protocol from the NFSv2/v3 world (NFSv4 handles mounting in-band over port 2049 and doesn't need it). If we figured out an alternative way to block user logins until the NFS mounts are available, we could get rid of this complexity. Also, in the world where we are pushing the labstore boxes into the public VLAN, it would be good to expose only port 2049 to the instances.
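If all we need is NFSv4, an availability check that never touches rpcbind is simple, since everything happens over a single TCP port. A sketch, assuming the server exports /home and the instance can do a test mount:

    #!/bin/bash
    # NFSv4 does all mounting in-band, so port 2049 is the only port needed.
    server="nfs-server.example"   # hypothetical hostname
    until timeout 5 bash -c "exec 3<>/dev/tcp/$server/2049" 2>/dev/null; do
        sleep 5   # wait for something to listen on the NFS port
    done
    # A TCP connect only proves a listener exists; a short soft-mount
    # attempt is the stronger check that the export is actually served.
    mkdir -p /mnt/nfstest
    mount -t nfs4 -o soft,timeo=30,retrans=2 "$server:/home" /mnt/nfstest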
The current block mechanism also doesn't allow admins in to debug. The alternative I'm considering right now is dropping a nologin file that blocks non-root logins until it is removed, through some notification mechanism, after the mount succeeds; that would also let roots log in to the instance easily to debug things.
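Sketched out, this would lean on standard pam_nologin semantics: while /etc/nologin exists, non-root logins are refused and the file's contents are shown to the user, but root can still get in. Roughly (paths and wording illustrative):

    #!/bin/bash
    # At first boot / first puppet run, before NFS is expected to be up:
    echo "NFS /home is not mounted yet; only root may log in." > /etc/nologin

    # Later, run by whatever notification mechanism confirms the mount:
    if mountpoint -q /home; then
        rm -f /etc/nologin    # re-open the instance to normal users
    fi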