Request for at least 2 additional machines for the "video scaler" job queue runners. Software configuration already exists and is in production. (Budget only for 2 if need to purchase new ones, but we can use more if they're available for repurposing and not needed elsewhere.)
Labs Project Tested: production!
Site/Location: EQIAD
Number of systems: 2 minimum; could use 4 or even 6 during a transition period if machines are available for repurposing
Service: videoscalers
Networking Requirements: internal
Processor Requirements: 20 cores / 40 threads or more
Memory: recommend 1-2 GB per thread (64 GB is a good fit for a 20-core/40-thread configuration)
Disks: Local disks should have room for at least 10-20 GB of temporary files. No need for especially large disks.
NIC(s): enough bandwidth to push ~1-4 GB output files to the Swift service reasonably quickly.
Partitioning Scheme: default app server-style layout
Other Requirements:
[FIXME: fill in the above with the standard app server config, and confirm that sounds right or if different requirements are needed.]
We need additional CPU capacity on the video scalers to migrate video transcodes from WebM's older VP8 codec to the newer VP9 codec (T63805). VP9's better compression will reduce bandwidth and storage requirements for video playback by about 40%, but encoding is 2-4x slower, so the additional CPU headroom would be very helpful, especially during the initial migration when we must re-encode the backlog while still handling new incoming files.
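As a rough sanity check on the numbers above, here is a back-of-envelope sketch using the figures quoted in this request (VP9 output ~40% smaller, encoding ~2-4x slower). The function names, the midpoint slowdown of 3x, and the 10 TiB example corpus are illustrative assumptions, not measurements of the actual transcode fleet.

```python
import math

VP9_SIZE_RATIO = 0.6   # VP9 output is ~40% smaller than VP8 (per this request)
VP9_SLOWDOWN = 3.0     # encoding is ~2-4x slower; midpoint used for estimates

def extra_boxes_needed(current_boxes, slowdown=VP9_SLOWDOWN):
    """Extra machines needed to keep the same transcode throughput
    once encodes take `slowdown` times as long."""
    return math.ceil(current_boxes * slowdown) - current_boxes

def storage_savings(total_vp8_bytes, ratio=VP9_SIZE_RATIO):
    """Bytes saved once the existing VP8 output is re-encoded to VP9."""
    return total_vp8_bytes * (1 - ratio)

# Holding throughput steady at a 3x slowdown triples the fleet:
print(extra_boxes_needed(2))                     # -> 4 extra boxes
# A hypothetical 10 TiB of VP8 output shrinks by ~4 TiB:
print(storage_savings(10 * 1024**4) / 1024**4)   # -> 4.0 (TiB saved)
```

The point of the sketch is that the 2-4x encode slowdown dominates the sizing, which is why the request asks for extra machines during the backlog re-encode rather than relying on the existing fleet.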
Note that past issues with VP9 and ffmpeg packaging are resolved: the migration to Debian stretch brought a newer ffmpeg, so no additional config or packaging work should be required.