Until Parsoid removes metadata, large articles are going to consist of several megabytes of data (en:Barack_Obama = 3.4 MiB). As browsers don't provide any mechanism for compressing POST data, uploading this takes about 20s on a typical ADSL line with 1 Mbps upstream.
Using a JS implementation of deflate, we could achieve 80-90% compression in a few hundred ms on a decent machine: http://jsperf.com/js-deflate
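For illustration, a minimal sketch of what the client side could look like, assuming the pako deflate library is loaded as a global and assuming a hypothetical endpoint that accepts a deflate-compressed POST body (the server-side decompression of "Content-Encoding: deflate" requests is an assumption here, not something that exists today):

```js
// Sketch only: compress the serialized HTML client-side before POSTing it.
// Assumes pako (global) and a server configured to decompress the body.
function postCompressed(url, htmlString, onDone) {
    // Encode the serialized HTML to bytes, then deflate it in the browser.
    var bytes = new TextEncoder().encode(htmlString);
    var compressed = pako.deflate(bytes); // Uint8Array, typically 80-90% smaller

    var xhr = new XMLHttpRequest();
    xhr.open('POST', url);
    xhr.setRequestHeader('Content-Type', 'application/octet-stream');
    xhr.setRequestHeader('Content-Encoding', 'deflate'); // server must handle this
    xhr.onload = function () { onDone(null, xhr.responseText); };
    xhr.onerror = function () { onDone(new Error('upload failed')); };
    xhr.send(compressed);
}
```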
There are a couple of considerations when performing such an expensive computation in pure JS:
- Really slow JS engines or devices (old browsers, IE, mobile) may see too little overall speed benefit. We may want to detect these cases by user agent or by performance profiling (see the profiling sketch after this list).
- The compression function will be synchronous and will block browser interaction. On slower machines this may give the appearance of a crash, or, combined with memory leaks, may actually crash the browser. If this proves to be a significant problem, we could encode in chunks of 100 KiB at a time, which would at least let us report progress and might improve memory usage (possibly at the cost of overall compression); see the chunked sketch after this list.
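As a rough illustration of the profiling approach mentioned above, a sketch that times deflate on a small sample of the document and falls back to an uncompressed upload if the extrapolated time looks too slow (the 100 KiB sample size and the 2s budget are arbitrary assumptions):

```js
// Sketch: estimate whether client-side deflate is worth it on this machine.
// Assumes pako is available; sample size and time budget are arbitrary.
function shouldCompress(htmlString) {
    var SAMPLE_BYTES = 100 * 1024; // profile on the first ~100 KiB only
    var BUDGET_MS = 2000;          // skip compression if the full deflate would exceed ~2s

    var sample = htmlString.slice(0, SAMPLE_BYTES);
    var start = Date.now();
    pako.deflate(new TextEncoder().encode(sample));
    var elapsed = Date.now() - start;

    // Extrapolate linearly from the sample to the full document.
    var estimated = elapsed * (htmlString.length / Math.max(sample.length, 1));
    return estimated <= BUDGET_MS;
}
```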
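And a sketch of the chunked variant, assuming pako's streaming Deflate interface (the 100 KiB chunk size matches the figure above; onProgress/onDone are hypothetical callbacks):

```js
// Sketch: deflate in 100 KiB slices, yielding to the event loop between slices
// so the page stays responsive and we can report progress.
function deflateInChunks(htmlString, onProgress, onDone) {
    var CHUNK_BYTES = 100 * 1024;
    var bytes = new TextEncoder().encode(htmlString);
    var deflator = new pako.Deflate();
    var offset = 0;

    function step() {
        var end = Math.min(offset + CHUNK_BYTES, bytes.length);
        var isLast = end === bytes.length;
        deflator.push(bytes.subarray(offset, end), isLast);
        offset = end;
        onProgress(offset / bytes.length);
        if (isLast) {
            onDone(deflator.result); // Uint8Array with the complete deflate stream
        } else {
            setTimeout(step, 0);     // yield so the browser can repaint / handle input
        }
    }
    step();
}
```

Because a single deflate stream is maintained across the pushed chunks, this variant would not necessarily pay the compression-ratio cost mentioned above, though it still holds the whole input and output in memory at once.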
See Also: