| Status | Assignee | Task |
| --- | --- | --- |
| Resolved | hashar | T60772 common gating job for mediawiki core and extensions |
| Open | None | T69216 Have unit tests of all wmf deployed extensions pass when installed together, in both PHP-Zend and HHVM (tracking) |
| Invalid | Ryasmeen | T90647 Create Jenkins builds for Editing across repositories (MobileFrontend, VisualEditor etc) |
| Declined | None | T50407 Jenkins: Setup Vagrant for some jobs (tracking) |
| Declined | None | T45266 Write and implement tests for Wikimedia's Apache configuration (redirects.conf, etc.) |
| Resolved | hashar | T47499 [EPIC] Run CI jobs in disposable VMs |
| Resolved | Cmjohnson | T86658 Phase out lanthanum.eqiad.wmnet |
| Resolved | Cmjohnson | T105901 wipe disks for lanthanum |
Mentioned In:
- T105901: wipe disks for lanthanum
- rODNS6327d14aa278: reclaim lanthanum: remove lanthanum.eqaid.wmnet
- rOPUPccb5add2b67c: Reclaim lanthanum: remove related puppet conf
- rOPUP631377f17b02: Remove Gerrit replication to lanthanum.eqiad.wmnet

Mentioned Here:
- T47499: [EPIC] Run CI jobs in disposable VMs
Removed the blockers that have been achieved for lanthanum.eqiad.wmnet
There will be some Puppet cleanup to do: some parts still reference lanthanum or its IP address (10.64.0.161).
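A sketch of how such lingering references could be located in a checkout of the Puppet repository. The grep pattern is the obvious one; the file tree built here is only a stand-in so the example runs anywhere, not WMF's actual operations/puppet layout.

```shell
# Build a tiny stand-in for a puppet checkout (demo only).
repo=$(mktemp -d)
printf 'node /^lanthanum/ { include role::ci::slave }\n' > "$repo/site.pp"
printf 'server: 10.64.0.161\n' > "$repo/config.yaml"

# The search one would run at the root of the real checkout:
# match either the hostname or the (dot-escaped) IP address.
grep -rn -e 'lanthanum' -e '10\.64\.0\.161' "$repo"
```

Each hit is printed with its file name and line number, giving a checklist of spots to clean up.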
@RobH you can reclaim lanthanum.eqiad.wmnet and wipe out all data there. It never held anything important besides what is in Puppet.
The following have been completed:
- merge @JohnLewis's DNS change and push it live
- decommission from palladium: puppet keys, salt keys, puppet storeddb
- shut down the system
- set the port to disabled in the switch stack and remove it from the private VLAN (the port currently has no VLAN, which seems better than the wrong one)
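The puppetmaster-side steps in the list above can be sketched roughly as follows. The exact invocations used at the time may have differed; the commands are shown through a dry-run wrapper so the sequence can be read without touching a real puppetmaster.

```shell
host=lanthanum.eqiad.wmnet

# Dry-run wrapper: echo instead of executing (drop it on the real host).
run() { echo "would run: $*"; }

run puppet node clean "$host"       # revoke the agent certificate and purge node data
run puppet node deactivate "$host"  # mark the node inactive in stored configs
run salt-key -d "$host" -y          # delete the minion's accepted salt key
```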
The following need to be done:
- wipe the disks (on-site)
- add back to the spares page
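For illustration, an on-site wipe might look like the sketch below. The device names are examples and the commands are echoed rather than executed; DC-ops' actual procedure (e.g. a dedicated wipe image) may differ.

```shell
# Dry-run wrapper: echo instead of executing (drop it for a real wipe).
wipe() { echo "would run: $*"; }

# /dev/sda and /dev/sdb are example device names, not lanthanum's real ones.
for disk in /dev/sda /dev/sdb; do
  wipe shred --verbose --iterations=3 "$disk"  # overwrite each disk 3 times
done
```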
I'll create the task for the disk wipe as a sub-task.
I'm assigning this task to @Cmjohnson to add back to the spares page when the sub-task for disk wipe is complete.