
Migrate Wikimaps Warper VM from Debian Stretch to Debian Bullseye
Closed, ResolvedPublic


The Debian Stretch VM image which Wikimaps Warper uses is being deprecated; the VM must be updated before 2022-02-01 to avoid becoming eligible for suspension or shutdown.

See T211149 for the various steps needed.

Event Timeline

Okay, I can be assigned to this, unless a new ticket should be made?

I went through similar steps when migrating from maps-warper2 (Ubuntu) to maps-warper3 (Debian).

I also want to update the application a bit with some upstream bug fixes.

I think the first step would be to claim the current VM so it's not shut down soon.

Created the maps-warper4 instance with Debian Bullseye in the project. It has less RAM and CPU than before, as the project has used up its quota.

Next steps:
Test connections to the server
Configure the server
Get the software running locally on Debian Bullseye
Copy / migrate data across


Locally running and updated; the code has the upstream fixes.
Keyboard shortcuts for the map have been added, and there's now a link to OHM instead of the embedded editor.

Server instance appears to be configured correctly now: maps-warper4 with a temporary proxy set up for it.


  1. Copy files across.
  2. Update database again.

When ready to switch over, I need to contact some of the good folks at Wikimedia Cloud Services on IRC or Phabricator to update the proxy: the proxy was split off into something independent of the Horizon interface a little while ago, so I'm unable to change it myself like I did on previous migrations.

When the old instance (maps-warper3) has been deleted, we can upgrade the maps-warper4 instance's RAM and CPU to make it as fast as before.
After that, also:
increase the Redis cache memory allocation
increase the Passenger allocation
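For reference, those two allocations would most likely be bumped in the Redis and Passenger configs. A rough sketch, with assumed file paths and placeholder values (the actual numbers would be chosen after the resize):

```
# /etc/redis/redis.conf (path typical on Debian; value illustrative)
maxmemory 1gb
maxmemory-policy allkeys-lru

# Apache Passenger config, e.g. /etc/apache2/mods-available/passenger.conf
# Raises the max number of application processes Passenger keeps running.
PassengerMaxPoolSize 6
```

Both `maxmemory`/`maxmemory-policy` and `PassengerMaxPoolSize` are standard directives; the file locations and values here are assumptions about this particular instance.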

Today I attempted a resize of the new instance but broke things quite badly for half a day. So resizing is off the todo list; it's too dangerous. (The new instance should work OK, but if it suffers under traffic in the future we can look at it again.)

Additionally, I deployed a fix to enable larger JPEG images to be processed via GDAL, which might have been causing errors with the warper.
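I can't see the exact change from the ticket, but GDAL's JPEG driver does cap how much memory libjpeg may use, and GDAL config options can be set as environment variables for the process that shells out to the GDAL tools. One plausible shape for such a fix (whether this specific option is what was actually deployed is an assumption):

```shell
# Lift the GDAL JPEG driver's libjpeg memory cap so very large
# JPEGs can be decompressed instead of erroring out.
export GDAL_ALLOW_LARGE_LIBJPEG_MEM=YES
echo "$GDAL_ALLOW_LARGE_LIBJPEG_MEM"
```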

New todo, ETA Wednesday 1 June:
Copy files across
Update database
Ask the admins to switch the proxy so that it points to maps-warper4 instead of maps-warper3.
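The copy and database steps above amount to something like the following, run from the new instance. All hostnames and paths here are illustrative, not the actual layout:

```
# Sync the warper's files from the old instance to the new one
rsync -avz maps-warper3:/srv/mapwarper/files/ /srv/mapwarper/files/

# Dump the database on the old instance and restore it on the new one
ssh maps-warper3 'pg_dump -Fc mapwarper' > mapwarper.dump
pg_restore -d mapwarper --clean mapwarper.dump
```

This is a sketch of the general approach only; the real migration may use different directories, database names, or tooling.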

After that:
suspend the old instance

After that again:
if everything is fine, delete maps-warper3 (the warper is now running on the new instance, maps-warper4)

Proxy was updated (thanks taavi!)

It should work fine but might run a bit slower than before, as it has less CPU and RAM to play with. I expect this to be visible only when many users are using it at the same time, which is relatively uncommon.

So tasks now:

Monitor whether everything is OK: errors, memory usage, user reports, etc.
Keep maps-warper3 instance alive for the moment just in case.


Later on:
suspend maps-warper3

after that
delete maps-warper3

Things seem fine so far. Some tweaking was done to the Passenger and Redis configs to lower their RAM allocations.

I have suspended maps-warper3. Will wait a week or two again and then we can delete it.

The server crashed today with an out-of-memory error, possibly due to warping a really big map at the same time as other things were running.

I have reduced Redis memory usage to 700 MB and reduced Apache memory usage a little. Will keep monitoring.

But I might have to ask for the original specs back if this keeps happening, as the instance needs at least some RAM and CPU to process files and serve them at the same time.

It happened again this morning; syslog says puppet and identify invoked the oom-killer. So it was probably due to a large image being added, with ImageMagick using up the available RAM and causing the crash.
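For the record, confirming oom-killer involvement is a matter of searching syslog for the kernel's "invoked oom-killer" lines. A self-contained illustration against a fabricated sample excerpt (not the actual log from the instance):

```shell
# Write a small fabricated syslog excerpt, then count oom-killer invocations in it.
cat > /tmp/sample-syslog <<'EOF'
Jun 12 06:01:02 maps-warper4 kernel: identify invoked oom-killer: gfp_mask=0x100cca
Jun 12 06:01:02 maps-warper4 kernel: Out of memory: Killed process 1234 (identify)
EOF
grep -c 'invoked oom-killer' /tmp/sample-syslog
```

On the real instance the same grep would run against /var/log/syslog.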

maps-warper3 has been deleted!

However, the new, smaller instance is too low-spec and has many out-of-memory errors. So I'm now trying to resize the instance. If resizing doesn't work, I can make another instance and do everything again (it should be quicker this time!).

Okay, the server should be running at full power now, with as much RAM as it had before. Hopefully there will be no more resource-related crashes. I'll keep an eye on it; once the Redis cache fills up and gets some traffic, it can be tuned further.
(As a note to our future selves: we should add/mount a Cinder volume to the instance to hold at least the Postgres database (or a mirror of it), so that instance crashes and upgrades are easier to recover from. Instead of keeping the data on the instance, it would live outside it. Map images etc. already live outside.)
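A sketch of what that future setup could look like, assuming a Cinder volume attached as /dev/sdb and Debian Bullseye's default Postgres 13 layout (every path and device name here is an assumption):

```
# Format and mount the attached Cinder volume
mkfs.ext4 /dev/sdb
mkdir -p /srv/postgres
mount /dev/sdb /srv/postgres
echo '/dev/sdb /srv/postgres ext4 defaults 0 2' >> /etc/fstab

# Move the Postgres data directory onto the volume and point Postgres at it
systemctl stop postgresql
mv /var/lib/postgresql /srv/postgres/
# in postgresql.conf: data_directory = '/srv/postgres/postgresql/13/main'
systemctl start postgresql
```

The key point is only that the database files outlive the instance; the exact device, mount point, and Postgres version would need checking at the time.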

Marking as resolved.