User Details
- User Since: Apr 16 2019, 9:00 PM
- Availability: Available
- LDAP User: Wpao
- MediaWiki User: WPao (WMF)
Thu, Jun 20
During my call with the Dell Account team today, I asked them to push on this a bit more. The Dell Tech Support engineer hasn't been able to replicate the issue on his end, but I asked the Account team what the ramifications would be for Dell if they were to just ship us the 100+ replacement disks for all 14x servers (ie: would they not be able to RMA it with the drive manufacturer, etc.). So, they're going to follow up and get back to me next week. Thanks, Willy
Wed, Jun 19
Hi @Papaul - can you add the Dell Support ticket that you created in this Phabricator task, and provide any updates/progress on how that's going? Thanks, Willy
Tue, Jun 18
Tue, Jun 11
Cool, thanks @RobH. Adding @VRiley-WMF and @Jhancock.wm for visibility as well, since I think they were working on this.
Mon, Jun 10
Thanks @Volans, will do on the remaining Netbox errors.
Thu, Jun 6
@Papaul & @Jhancock.wm - was this one completed already via a different task?
Valerie is on vacation, so assigning to John
Ok, got it. Thanks for the info @dcaro. And just to confirm, cloudcephosd1001-1020 have the same hardware configuration (only with different drive manufacturers), and don't have any of the same issues as cloudcephosd1021-1034? Let's see what the Dell team comes back with after escalating up, and hopefully we can make some more headway there.
During my sync-up call with Dell today, I asked our account team to see if they could push a bit more to get more hard drives RMA'd. The servers are still under warranty for a few more months, and they're going to escalate it up the chain a bit more, to see what they're going to do. In the meantime though, can we look into whether something else might've changed when all these drives started having bad sectors? It looks like we installed this batch of servers back in December 2021, then they were put in production in 2022. So it seems like they were running ok for a year, until the drive errors started popping up at the end of 2023.
Wed, Jun 5
Hi @dcaro - just following up on this. Can you provide the racking information for us, to start this install?
Wed, May 29
Removing the procurement project tag. We have spares from decom'd servers that we can use for this, instead of purchasing the 10g cards. @VRiley-WMF - can you work with @kamila on getting these hosts upgraded and moved to 10g switches?
May 24 2024
Thanks for the heads up @bking. I went ahead and checked Netbox, just to ensure all the servers were dispersed pretty evenly across the different racks...which they are (listed below is the rack and the quantity of servers in each rack). For reference, the bolded line items are the racks that are currently pulling a bit more on power. We could do a before and after snapshot using Grafana (https://grafana.wikimedia.org/d/f64mmDzMz/power-usage?orgId=1&from=now-30d&to=now), though I have a feeling we should still be ok with the increased power.
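As an aside, the per-rack tally described above can be scripted rather than counted by hand. A minimal sketch, assuming you have a list of (hostname, rack) pairs, e.g. parsed from a Netbox CSV export or fetched via the pynetbox API; all hostnames and rack names below are illustrative, not from the task:

```python
from collections import Counter

# Hypothetical (hostname, rack) pairs; in practice these would come
# from a Netbox device export or API query.
devices = [
    ("server1001", "B1"), ("server1002", "B1"),
    ("server1003", "B3"), ("server1004", "B5"),
    ("server1005", "B3"), ("server1006", "B1"),
]

# Count how many servers land in each rack to check the spread.
per_rack = Counter(rack for _host, rack in devices)
for rack, qty in sorted(per_rack.items()):
    print(f"{rack}: {qty}")
```

With real data, an uneven spread (one rack with far more devices than its neighbors) would flag the racks to watch for power draw.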
Apr 15 2024
Since the only thing remaining in this task is bringing up the Dell switches in racks E8 and F8 (which I believe the Network SRE team is working on), I'm going to go ahead and resolve the main tracking ticket. Thanks, Willy
Apr 3 2024
Sure, no prob @LSobanski. Here's the list of the 24 active devices that still reference RT tasks in Netbox, along with their purchase dates (network equipment usually EOLs every 8yrs):
Apr 2 2024
Thanks for checking @LSobanski. It's definitely rare that we need to refer back to RT. In the last 5 years, the 2-3 cases where we've had to reference RT were typically due to tracking down information about core routers that we had purchased back then. In Netbox, we only have 24 active devices left that still reference RT tasks. As long as we're able to access these in some way (ideally quickly and easily) on the rare occasions that it's needed, you should be able to move forward.
Mar 19 2024
Hi @elukey - do you want me to change the Lift Wing expansion requests for 16x servers in FY24-25 to 10g? Thanks, Willy
Mar 13 2024
++ @VRiley-WMF & @Jclark-ctr for troubleshooting the hardware. (host was installed a few quarters ago)
Mar 5 2024
Sounds good. @Jhancock.wm - I created a new sheet below, with the following fields. I entered the hostnames and asset tags, but can you fill in the remaining items for old S/N, new S/N, and Phabricator task?
Mar 4 2024
Thanks for confirming, @Volans. If everyone else is ok with making the correlation on the accounting spreadsheet, my vote is that we go with that route. Thanks, Willy
Mar 1 2024
Thanks @Volans, that makes sense. My preference would be to leave Netbox as is, and use the accounting spreadsheet to make the S/N connection to each other. Would we be adding a different tab on the accounting spreadsheet for that?
Feb 29 2024
If we change the serial number, I think it would create an error for an S/N / asset tag mismatch. (related to Riccardo's points earlier) We also reference the original chassis S/N when dealing with vendors for recycling servers (estimates, official documentation, etc) and purchasing replacement parts, so I'm still a bit hesitant about editing the S/N in Netbox as the solution. Since it doesn't sound like we receive any Netbox alerts when we replace a motherboard, is there something that we could tweak to replicate the same thing? (ie: change the status or something of the donor server) Or worst case, just suppress these alerts somehow, until the servers are eventually decommissioned?
Feb 28 2024
Hey @Volans - much appreciated for your feedback and the suggestions. I was wondering: since the physical serial number listed on the chassis doesn't change (it's only from a Puppet perspective that the serial number changes), is there anything on the Puppet side that could be modified to reflect the MB replacement? If there's something easy that could be done in Puppet to prevent the Netbox error from alerting, I kind of feel like it would be a more accurate representation.
++ @VRiley-WMF and @Jclark-ctr - can one of you pick up this request? We'll be repurposing one of the previously decommissioned cp servers to set up a temp server for Adam to use. Thanks, Willy
Sounds good @bking, thanks!
Hi @bking - thanks for coming up with the list. I have the following refreshes already on the CapEx doc, so you just have to fill in the missing columns for "Hardware Config", "Network Speed" and "Total Equipment Cost" (for custom configs).
Feb 27 2024
Thanks for picking this up @Jhancock.wm. @Marostegui - since this host looks like it's close to being refreshed in T355350, do you want to just wait for the refreshed server to be setup instead of fixing this one? Thanks, Willy
Feb 26 2024
Feb 23 2024
Hi @ssingh - the hardware should still be around, and we should be able to reallocate one of the servers for testing purposes. Can you open a new Phabricator task for us with all the necessary details (hostname, racking info, network setup, raid/partitioning, OS, and main poc)? Also, do you know how long Adam would need it for?
Feb 21 2024
++ @Jhancock.wm for visibility and in case any onsite support is needed
Feb 8 2024
++ @Jhancock.wm
Jan 10 2024
Thanks @VRiley-WMF. I have T354684 assigned over to you, so you can work with @fgiunchedi on coordinating downtime for the upgrades. Thanks, Willy
Jan 9 2024
Awesome, thanks @Jhancock.wm. Here's the codfw upgrade ticket for you to coordinate with @fgiunchedi on the downtime - T354685. Thanks, Willy
++ @Jclark-ctr & @VRiley-WMF
@Papaul / @Jhancock.wm and @Jclark-ctr / @VRiley-WMF - can you see if you have any spare memory onsite for Filippo? I think it's for prometheus100[5,6] and prometheus200[5,6]. (cc @RobH in case we have to order them)
Dec 15 2023
@Jclark-ctr or @VRiley-WMF - can one of you take a look at this one?
Dec 7 2023
Definitely. @Jclark-ctr & @VRiley-WMF - can you check if we have any spare drives from a decommissioned host? If not, we'll purchase one via @RobH. Thanks, Willy
Dec 1 2023
Nov 29 2023
++ @Jclark-ctr & @VRiley-WMF - can one of you two work on getting the drive RMA'd for this one? Thanks, Willy
Nov 23 2023
Nov 22 2023
Nov 10 2023
Thanks for working on this @bking. I'm mainly looking to see how much future growth you're looking at (a rough estimate is fine), if you have any requests for the type of servers we provide (ie: ARM, GPU, etc), or just have any feedback for us in general. We're getting pretty full at codfw, so when we purchase additional data center space, we want to ensure we're adding enough capacity for everyone's future needs over the next 3-5yrs. Thanks, Willy
Oct 30 2023
Awesome, thanks for working on this @VRiley-WMF. @nskaggs & @cmooney - since we have some discrepancies with the number of ports being used on these cloudvirts, should we come up with a plan/process to help us free up the second switchport on them? This will help us reclaim some switchports for new installs and server migrations. Thanks, Willy
Oct 25 2023
Oct 17 2023
@Jclark-ctr or @VRiley-WMF - can one of you follow up on Ben's question above on an-tool1010, along with Alex's comment on deploy1102? Thanks, Willy
Oct 3 2023
++ @Papaul , who's going to dig around a bit and provide some feedback
Aug 30 2023
Aug 11 2023
Aug 2 2023
It's not on the refresh list for this fiscal year; looks like it'll be due for a refresh in FY24-25. If the firmware upgrade on the iDrac doesn't work, we can try sourcing the fan if you want. (cc @RobH)
Jul 31 2023
Jul 19 2023
Cool, thanks for confirming @Papaul. Hopefully Iron Mountain will come back with the same confirmation as well.
Jul 18 2023
Jul 13 2023
Jul 12 2023
Jul 11 2023
Hi @Jclark-ctr - can you work with @aborrero on the timeframe and migration plan for these servers? Thanks, Willy