
Cloud VPS "wikidocumentaries" project Stretch deprecation
Closed, ResolvedPublic

Description

Debian Stretch reaches end of life in 2022, and we need to move to Debian Bullseye (or Buster) before then.

All instances in the wikidocumentaries project need to be upgraded as soon as possible. Instances not upgraded by 2022-05-01 may be subject to deletion unless prior arrangements for an extended deadline have been approved by the Cloud VPS administration team.

More info on current project instances is available via the OpenStack browser.

Details

Due Date
Apr 30 2022, 11:59 PM

Event Timeline

StrikerBot triaged this task as Medium priority. Apr 13 2022, 5:00 PM
StrikerBot created this task.
Zabe changed the edit policy from "Custom Policy" to "All Users". Apr 13 2022, 5:29 PM

@TuukkaH: @Nikerabbit can act as moral support if you can get the project moved to the new environment.

@Andrew @komla please don't delete these instances just yet. I'm talking with the maintainers over email, and apparently the instances are still needed for a while to migrate to newer versions. Thanks!

Thank you @taavi, I can confirm. We have not yet managed to migrate the instances and would like to do so soon, if we are lucky enough to still have them. @TuukkaH, could you estimate the time we would need for it?

No step should take long if the order is correct. Something like this, perhaps (see the sketch of step 3 after the list):

  1. Create the new instances.
  2. Restart the old instances.
  3. Copy data from the old instances to the new instances.
  4. Shut down the old instances.
  5. Put the new instances into production and check that everything is working there.
  6. Delete the old instances.
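
For concreteness, here is a minimal sketch of how step 3 could be scripted, assuming SSH access from each new instance to its old counterpart, rsync installed on both ends, and project data living under the listed paths; the hostname and paths are placeholders for illustration, not values from this ticket.

  #!/usr/bin/env python3
  """Sketch for step 3 (copy data), intended to run on the new instance."""
  import subprocess

  OLD_HOST = "old-instance.wikidocumentaries.eqiad1.wikimedia.cloud"  # placeholder
  PATHS = ["/srv/", "/home/"]  # adjust to wherever the project data actually lives

  for path in PATHS:
      # --archive preserves ownership, permissions and timestamps;
      # --delete is deliberately left out so a partial run cannot remove
      # anything already copied onto the new instance.
      subprocess.run(
          ["rsync", "--archive", "--compress", "--progress",
           f"{OLD_HOST}:{path}", path],
          check=True,
      )

Running it from the new instance (pulling rather than pushing) avoids having to open SSH access from the deprecated Stretch host toward the new one.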

Who can do step 1? I suppose the new instances could be similar to the old ones but with the following changes:

  1. The OS version with the longest remaining support.
  2. Double the disk space, since it was tight before.

After that, if I have the required rights, I could do steps 2, 3 and 4 whenever I have a quiet day, and steps 5 and 6 on another one.

Is this an adequate plan?

We've adjusted quotas to support creation of the replacement VMs.

Anyone with the 'projectadmin' role can create new VMs and volumes; that means ernoma, jiemakel, mjrinne, nikerabbit, tuukka, zache-tool.
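
For anyone scripting this rather than clicking through Horizon, a minimal sketch of step 1 using the openstacksdk cloud layer might look like the following; the cloud entry, image, flavor, network, instance name, and volume name and size are all assumptions for illustration, not values taken from this project.

  #!/usr/bin/env python3
  """Sketch of step 1: create a replacement VM and an attached data volume."""
  import openstack

  # The 'clouds.yaml' entry name is an assumption.
  conn = openstack.connect(cloud="wikidocumentaries")

  image = conn.get_image("debian-11.0-bullseye")     # newest supported OS (assumed name)
  flavor = conn.get_flavor("g3.cores2.ram4.disk20")  # placeholder flavor name

  server = conn.create_server(
      name="hupu-bullseye",                          # placeholder instance name
      image=image.id,
      flavor=flavor.id,
      network="lan-flat-cloudinstances2b",           # assumed Cloud VPS network name
      wait=True,
      auto_ip=False,
  )

  # Roughly double the previous disk space, per the list above.
  volume = conn.create_volume(size=80, name="hupu-data")  # size in GiB, placeholder
  conn.attach_volume(server, volume)
  print(f"Created {server.name} ({server.id})")

Equivalent actions are available through the Horizon web interface, which is probably the simpler route for a one-off migration like this.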

Thank you all. We have not yet had the opportunity to conduct the operation. I wonder if there's a way to reach out beyond our own crew for help?

I have started to work on this migration. Who could restart the old instances? (step 2 above)

Hi, due to the long silence on this ticket I deleted those VMs as per our deprecation policy. Notes are here: https://sal.toolforge.org/wikidocumentaries

I /might/ be able to recover them but it's unlikely. I'll have a look if you are unable to recreate without the old ones.

I was able to semi-recover hupu, now stored as hupu-restored.wikidocumentaries.eqiad1.wikimedia.cloud. It's pretty broken but you may be able to log in and recover some data. Roope is proving harder to recover... how badly do you need data off that one?

OK, now roope-restored-3 seems to be reachable, so you should be able to get at whatever data you need from those old VMs.

@TuukkaH Please respond on ticket if you have any plans to use those restored VMs. I will delete them in a week if I do not hear any response -- this upgrade project is nearly a year past deadline.

Hi! If you want those revived VMs to survive into next week, please respond and include a plan and timeline for upgrade.

@Andrew This has turned into quite a different migration operation from what I had imagined - let's have a post-mortem off-Phabricator?

I have been able to restore hupu and have shut down hupu-restored for now. However, I haven't been able to find any non-OS files on roope-restored-3 so far. If there are none, I suppose it can be deleted again and we'll just have to live with it.

@TuukkaH as per your last comment, I'm going to delete hupu-restored and roope-restored-3 and leave the rest of this to you.