chippy on wikitech
User Details
- User Since
- May 11 2015, 10:48 AM (551 w, 4 d)
- Availability
- Available
- IRC Nick
- chippy
- LDAP User
- Unknown
- MediaWiki User
- Chippyy [ Global Accounts ]
Nov 27 2024
Thanks for the report, and the note on my user page.
Dec 4 2023
The Grafana graphs seem to indicate a gradual increase in RAM usage before the OOM, which stops it:
https://grafana.wmcloud.org/d/0g9N-7pVz/cloud-vps-project-board?orgId=1&var-project=maps&var-instance=maps-warper4&from=now-20d&to=now-10d
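For reference, a quick way to confirm the OOM kills from the instance itself (generic commands, not tied to anything that was actually run here):

    # check the kernel ring buffer and the journal for oom-killer activity
    sudo dmesg -T | grep -i 'out of memory\|oom-killer'
    sudo journalctl -k --since '-2 days' | grep -i oom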
Hi, I have restarted mapwarper.
Dec 27 2022
@62mkv the server is running now. It occasionally bugs out; just give it a few hours to fix itself.
(For the future: this ticket/issue is not the correct place for messages about that service being down. You can use the wikimaps-warper tag for new tickets.)
Jun 27 2022
Okay, the server should be running at full power now, with as much RAM as it had before. Hopefully there will be no more crashes related to resources. I'll keep an eye on it; once the Redis cache fills up and gets some traffic, it can be tuned further.
(As a note for the future: we should add/mount a Cinder volume to the instance and keep at least the Postgres database there (or a mirror of it), so that instance crashes and upgrades are easier to recover from. Instead of keeping the data on the instance, it would live outside it. Map images etc. already live outside, though.)
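A rough sketch of what that could look like, assuming the standard OpenStack CLI and a Debian Postgres layout (volume name, size, and device are placeholders, not an actual setup):

    # create a Cinder volume and attach it to the instance (name/size hypothetical)
    openstack volume create --size 100 maps-warper-db
    openstack server add volume maps-warper4 maps-warper-db
    # on the instance: format, mount, and move the Postgres data onto it
    sudo mkfs.ext4 /dev/sdb                # device name is a placeholder
    sudo mkdir -p /srv/postgres && sudo mount /dev/sdb /srv/postgres
    sudo systemctl stop postgresql
    sudo rsync -a /var/lib/postgresql/ /srv/postgres/
    # point data_directory in postgresql.conf at /srv/postgres, then:
    sudo systemctl start postgresql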
maps-warper3 has been deleted!
Jun 14 2022
It happened again this morning; syslog says puppet and identify invoked the oom-killer. So it was probably due to a large image being added, with ImageMagick using up the available RAM and causing the crash.
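One possible mitigation (not something that was actually applied here) is to cap ImageMagick's resource usage so a single identify/convert run can't exhaust RAM; a sketch, assuming ImageMagick 6 on Debian:

    # /etc/ImageMagick-6/policy.xml supports per-process resource caps, e.g.
    #   <policy domain="resource" name="memory" value="1GiB"/>
    #   <policy domain="resource" name="disk" value="4GiB"/>
    # alternatively, set limits via environment variables for the app's processes:
    export MAGICK_MEMORY_LIMIT=1GiB
    export MAGICK_MAP_LIMIT=2GiB
    export MAGICK_DISK_LIMIT=4GiB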
Jun 13 2022
The server crashed today with an out-of-memory error. Possibly due to warping a really big map, https://warper.wmflabs.org/maps/4828, at the same time as other things.
Jun 7 2022
Things seem fine so far. Some tweaking was done to the Passenger and Redis configs to lower their RAM allocations.
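The exact values aren't recorded here, but the kind of settings involved looks roughly like this (numbers are illustrative only):

    # /etc/redis/redis.conf: cap Redis memory and evict old cache entries
    maxmemory 256mb
    maxmemory-policy allkeys-lru

    # Apache/Passenger config: keep fewer Rails processes around
    PassengerMaxPoolSize 3
    PassengerPoolIdleTime 300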
Jun 1 2022
Todo: ask someone to update the grafana graph to add the server: https://grafana-labs.wikimedia.org/d/000000059/cloud-vps-project-board?orgId=1&var-project=maps&var-server=maps-warper3&from=now-12h&to=now
warper.wmflabs.org is now running on new instance maps-warper4
May 30 2022
Today I attempted a resize of the new instance, but it broke things quite badly for half a day. So resizing is off the todo list; it's too dangerous. (The new instance should work okay, but if it struggles under traffic in the future, we can look at it again.)
May 26 2022
Running and updated locally; the code has the upstream fixes.
Keyboard shortcuts for the map have been added, and there is now a link to OHM instead of the embedded editor.
May 10 2022
Created the maps-warper4 instance with Debian Bullseye in the maps project. It has less RAM and CPU than before, as the project has used up its quota.
https://horizon.wikimedia.org/project/instances/d3521b71-1096-4fb5-8a34-33ae6191965c/
Jan 4 2022
Okay, marked the project as in use on the https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2021_Purge#maps page.
Okay, I can be assigned to this, unless a new ticket should be made?
Jun 9 2021
@Bstorm the maps-warper3 server should recover fine from a reboot and from having the NFS offline for a bit. Many thanks in advance for your work!
Jan 31 2020
Had a look. I think it's because of the colorspace in the image. There's a known bug with greyscale or paletted images.
May 31 2019
@Bstorm everything looks fine on maps-warper3, thanks, and warper.wmflabs.org is running okay.
May 29 2019
maps-warper3, which runs https://warper.wmflabs.org/, uses /mnt/nfs/labstore1003-maps/project/warper/uploads/ and /mnt/nfs/labstore1003-maps/home/warperdata/ (referred to as /home and /data/project in the application) for storing a fair bit of data too. I'll turn off the webserver during the move, thanks.
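For reference, the sort of instance-side steps involved (a sketch; the actual commands used during the maintenance window weren't recorded):

    # check how much data lives on the NFS paths before the move
    sudo du -sh /mnt/nfs/labstore1003-maps/project/warper/uploads/ \
                /mnt/nfs/labstore1003-maps/home/warperdata/
    # stop the webserver for the duration, then bring it back afterwards
    sudo systemctl stop apache2
    # ... NFS maintenance happens here ...
    sudo systemctl start apache2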
May 28 2019
@Bstorm I do: the maps-warper3 instance.
Jan 8 2019
Resolved. Have just deleted the old maps-warper2 instance as everything's been running okay for a while.
Dec 19 2018
I have shut down the maps-warper2 instance
I've removed the warper-old proxy and shut off the maps-warper2 instance. I'd like to keep it around for a week or so before deletion, though.
Dec 17 2018
maps-warper2 has been migrated to maps-warper3 and the web proxy (warper.wmflabs.org) switched too, with everything seeming to work okay, but I'd like at least a day before we turn off the old instance, just in case...
I think all the logrotate configs and crontabs have been copied across too. I wouldn't mind a day before we remove the warper-old proxy, and then we can turn off the old instance.
For the record: https://phabricator.wikimedia.org/T208406 is relevant to the maps-warper3 instance, as we store around 230G worth of images in /home/warperdata.
Instance provisioned: maps-warper3.
Dec 13 2018
I think it's inevitable that it will exceed the 300G space in less than a year as more maps and images get added, so maybe a custom one would be better.
Info about the maps-warper2 instance: it is soon to be replaced by maps-warper3, running Debian Stretch.
The maps project usage for this instance is: 1) about 230G of processed map images, 2) image thumbnails from the application (~600MB), 3) application database backups (~2G), and 4) the instance (and I imagine other maps instances?) has its /home directory mapped to /mnt/nfs/labstore1003-maps/home/.
In the old labs setup it had issues with running out of disk space and with hardware failures of the additional local disks. I think we switched to using the project store to keep the data after that, in 2015.
This runs the http://warper.wmflabs.org/ application, which started in 2014 and is separate from the old and new OSM work.
I think it would be okay for the warper to run from an attached disk, assuming it has enough space for all the existing and future maps (though having /home on NFS is bonkers). It would be nice to have an off-instance backup location.
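A minimal sketch of what an off-instance backup could look like (database name, paths, and the backup host are hypothetical, not an existing setup):

    # dump the application database and copy it off the instance
    pg_dump -Fc warper_production > /tmp/warper-$(date +%F).dump   # db name is a guess
    rsync -a /tmp/warper-$(date +%F).dump backup-host:/backups/warper/
    # the ~230G of map images could be rsynced incrementally the same way
    rsync -a /home/warperdata/ backup-host:/backups/warperdata/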
The code has been updated to work with Stretch (and runs okay on a local VM): https://github.com/wikimaps-dev/mapwarper/commit/151e6e8c10e27c5e40a14b5ee864b5b9571cce38
Dec 12 2018
The old instance is now in its new location (for Neutron) and the site is working as before.
Dec 4 2018
Watch out for downtime due to: https://phabricator.wikimedia.org/phame/post/view/120/neutron_is_here/
I've created T211149 to track the maps-warper2.maps.eqiad.wmflabs instance migration.
Okay, so after the 18th of December, the instance won't be deleted? But when the Ubuntu Trusty LTS EOL happens, it probably will be... and it's likely that the Puppet code will force that change to happen before then? So we don't have any firm timescales of when the switch is being pulled (before April 2019) but 18th Dec is to see which ones are not being used and can safely be switched off?
Nov 20 2018
maps-warper2.maps.eqiad.wmflabs is still actively in use. I can fit in some time to upgrade before the deadline, but I'm not sure how easy it would be to upgrade to Debian (if at all). It's also unknown whether the software, which has only been tested in an Ubuntu environment, would work on Debian without significant development time. What are my options?
Jun 11 2018
tracking this here: https://github.com/timwaters/mapwarper/issues/156
Hi, many thanks for this bug report.
Mar 25 2017
maps-warper has been deleted now.
Mar 22 2017
It's been a couple of days, so I have stopped the old maps-warper instance.
Mar 16 2017
Okay, I think the new instance should be working fine now. I'll announce it to the main users and see if any issues crop up. If all's good we can turn off the old instance in the next couple of days (16 March).
Mar 15 2017
For this phase:
This was installed via:
Having trouble installing libgdal-dev because the Wikimedia repository chooses the older 4.8.
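One generic workaround for this kind of repository conflict (not necessarily what was done here) is to pin the package so the distribution archive's version wins over the Wikimedia repository's:

    # /etc/apt/preferences.d/libgdal (illustrative pin)
    Package: libgdal-dev
    Pin: release o=Ubuntu
    Pin-Priority: 1001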
Mar 13 2017
@Andrew Thanks for keeping an eye out for this - I am planning on working on the majority of this task this week, and will be updating this task as I go along.
Mar 7 2017
Done: created the maps-warper2 instance (m1.large). This has a better and larger setup for its root partition, mainly ensuring the instance doesn't get hosed by full logs anymore!
Mar 31 2016
@Tgr Thanks, I have updated the secret key and was successfully able to call the API via OAuth! Seems all fixed now!
Mar 23 2016
Copying to GitHub and marking as resolved. (It's currently expected behaviour and not a bug, and I'm not sure this is a desired feature.)
Copying to GitHub and marking as resolved. Many thanks.
I believe this is fixed now, marking as resolved.
many thanks @Luke081515 !
Mar 22 2016
@Krenair yes, good idea, will change description. (@Luke081515 many thanks!)
Mar 21 2016
If this helps find the cause: on commons.wikimedia.beta.wmflabs.org I'm seeing this, or a similar error, in the response body when calling the API via OAuth: "Argument 3 to hash_hmac() must be of type ?string, bool given"
Jan 23 2016
I installed a new version of Passenger (the Apache module which runs Rails apps) and bumped up the timeout.
So it takes 1-2 minutes to start the application, and once the app is running it should be okay.
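The relevant directives look roughly like this (values are illustrative; the exact ones used weren't recorded):

    # Apache vhost / Passenger config (illustrative values)
    PassengerStartTimeout 180
    # optionally keep one application process warm so cold starts are rarer
    PassengerMinInstances 1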
Dec 25 2015
Yes, I think it might be better to create a new project in the long term. There have been times when end users have reported outages which have led to them being assigned and fixed by other phabricator users (e.g. disk failure issues, OAuth troubles etc) - so allowing users to add some critical things here is also a benefit.
Dec 24 2015
Folks - we are tracking tickets etc on GitHub currently
https://github.com/wikimaps-dev/mapwarper/issues
Dec 4 2015
Okay, it appears to be fixed. Closing again. Thanks.
Dec 2 2015
The new OAuth consumer registration has been approved, so it should all be working now. I will close this issue if others can log in again. Many thanks!
Nov 24 2015
I've set up http://graphite.wmflabs.org/dashboard/#maps_warper1 which may be the kind of persistent monitoring we are after; now we'd just need some alerts, if possible.
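Until proper alerting exists, a crude check against the Graphite render API could do (the metric name below is a guess, not the real one):

    # pull the last 10 minutes of a metric as JSON and threshold it in a cron job
    curl -s 'http://graphite.wmflabs.org/render?target=maps.maps-warper1.loadavg.01&format=json&from=-10min'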
Oct 20 2015
Okay, I'm assuming that the /mnt filesystem is gone, not actually needed and not important.
Sep 24 2015
Very Many Thanks!
Sep 22 2015
Yes, there do appear to be errors.
Marking as resolved, as removing old kernels, unused packages, and unused locales freed up space on the one partition that's working.
I tried re-adding labs::lvm::srv but nothing happened, even after a reboot. Should I open a ticket for that? However, as my comment on the blocking task T112641 says, I have freed up 1.5G of space by removing old kernels, unused packages, etc. I think we can mark this particular one as closed now.
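Roughly the kind of cleanup involved (the exact packages removed weren't recorded; the kernel version below is a placeholder):

    # see which kernels are installed, then purge the ones no longer needed
    dpkg -l 'linux-image-*' | grep ^ii
    sudo apt-get purge linux-image-3.13.0-xx-generic
    # drop packages nothing depends on any more, plus cached .debs
    sudo apt-get autoremove --purge
    sudo apt-get clean
    df -h /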

