Marc Andre,
I have the disk shelf but we really need to rearrange everything.
If we can schedule some downtime (2 hours) for labstore1001 and labstore1002 to be adjusted in the cabinet and fit the new shelf that would be great.
Status | Assigned | Task
---|---|---
Restricted Task | |
Resolved | Cmjohnson | T88802 Rack Setup new diskshelf for labstore1001
That's going to be "fun". I'll have a talk with Andrew (Yuvi is going on vacation) and try to schedule something as swiftly as possible, but that needs lead time to get everyone aware and ready.
Is this waiting on Yuvi because we need his help with the expansion, or because without him we're just too busy for any additional self-inflicted breakage?
I don't think it's waiting on Yuvi; Chris may just have misunderstood my comment about him being on vacation. @Cmjohnson: feel free to give us plausible windows for this with a couple of days' advance notice.
I'd prefer it if we didn't do something like that on a Friday. Also, a Labs outage should be properly announced in advance, both to volunteers and staff members, and preferably coordinated with @greg as well. Less than 48h of notice is not nearly enough.
Ah, good point - I forgot that "at the end of this week" might have been agreeable early on, but is now more problematic as we near it.
Excellent point. How about Tuesday at 10am Eastern? It does not appear to interfere with anything that would affect Labs.
The impact will be VERY visible. All shared storage on Labs will stop working -- it will be an almost total Labs outage, with lots of processes angry about filesystem timeouts after the storage returns.
Chris, suppose you can do https://phabricator.wikimedia.org/T89266 during the same window? Obviously there's only one of you, but we may as well combine our outages into one big one.
Shouldn't be a problem. In fact, I will probably do that first since it's the simplest. I commented on the ticket T89266.