This hangs with large numbers of rows (a couple hundred thousand). The issue is that SK passes back to the server the list of ids the user has selected. When this list gets long, the server rejects the request.
I believe the relevant nginx limit is `client_max_body_size`, which I think is still at its default of 1MB. @Dwisehaupt, could we increase this?
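For reference, raising the limit would be a one-line change in the nginx config (the 50m value here is just an illustration, not a recommendation):

```nginx
# In the http, server, or location block that fronts the app.
# Requests whose body exceeds this return 413 Request Entity Too Large.
client_max_body_size 50m;
```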
But that's not an ideal fix. A better fix in core would be: when all rows are selected and we're doing a download, don't pass the `filter = [ids]` at all, and instead return the full results of the search. That could be a bit unexpected if we select all 10 rows and get back 11 because one was added in the interim, but it seems better than the current situation. If that's a concern, we could only skip the filter above some minimum selection count.
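A minimal sketch of that idea (function and parameter names are hypothetical, not existing SK code; the threshold is arbitrary):

```python
def build_download_filter(selected_ids, select_all=False,
                          min_unfiltered_count=1000):
    """Return the id filter for the download query, or None.

    If the user selected every row and the selection is large enough
    that POSTing the id list risks hitting the request-body limit,
    skip the id filter and let the download re-run the original search.
    """
    if select_all and len(selected_ids) >= min_unfiltered_count:
        return None  # no filter -> full search results
    return {"ids": selected_ids}
```

With the threshold, small select-all downloads keep the exact-ids behavior (no surprise extra rows), and only huge selections fall back to re-running the search.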