We've ordered a pair of 10G network switches (+optics) for esams to support the new hardware needed for T86663. However, given the final configuration of the system quotes now being reviewed/approved (20x systems @ 410W each, on top of the existing cp30xx power requirements), our power capacity limit in esams (4.6kW per rack) doesn't allow installing all of these systems in just two racks supported by just two switches. It's unknown at this time whether that power limit is merely due to our PDU configuration, or is a limit on the power feed or on thermal dissipation from the datacenter (and if it's one of the latter, whether we can easily get it upgraded without moving locations within the DC).
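As a rough sanity check of that constraint, here's a back-of-the-envelope budget in Python. The 20x 410W figure and the 4.6kW rack limit come from the paragraph above; the existing cp30xx per-rack draw is a placeholder assumption, not a measured number:

```python
import math

NEW_SYSTEMS = 20
WATTS_PER_SYSTEM = 410       # worst-case quoted draw per system
RACK_BUDGET_W = 4600         # esams per-rack power limit
EXISTING_CP30XX_W = 1500     # hypothetical existing cp30xx load per rack

new_load = NEW_SYSTEMS * WATTS_PER_SYSTEM              # 8200 W total
headroom_per_rack = RACK_BUDGET_W - EXISTING_CP30XX_W  # 3100 W
racks_needed = math.ceil(new_load / headroom_per_rack)

print(f"total new load:    {new_load} W")
print(f"headroom per rack: {headroom_per_rack} W")
print(f"racks needed:      {racks_needed}")  # > 2 under these assumptions
```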
Status | Subtype | Assigned | Task
---|---|---|---
Resolved | | BBlack | T86663 Expand HTTP frontend clusters with new hardware
Resolved | | Cmjohnson | T92514 Rack, cable, prepare cp3030-3049
Resolved | | mark | T90000 esams power capacity issues
Event Timeline
I racked 10 of the new servers today and temporarily cabled them up (power only) for testing. At POST, they consume up to 1700W so far.
Faidon previously confirmed a draw of approximately 350-400W (judging from the PDU measurements) when he ran stress on one box.

I just did the same and got 322W using the DRAC power measurements from the server itself.
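Projecting those single-box numbers out (a rough sketch; assuming 10 boxes sharing one feed, as in the racking described below):

```python
per_box_range = (322, 400)   # W, DRAC vs PDU stress measurements above
boxes_per_feed = 10          # assumed servers sharing one 16A breaker

low, high = (boxes_per_feed * w for w in per_box_range)
print(f"projected load on one feed: {low}-{high} W")  # 3220-4000 W
```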
After moving 10 servers out of OE10, with all 10 remaining servers still on one 16A breaker, I ran stress on all of them. Power usage got up to ~3300W, which is dangerously close to the limit of a single 16A breaker. So in rack OE10 as well, I've now split the power across multiple 16A breakers.
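For reference, a quick check of that headroom (a sketch assuming a 230V single-phase feed and the common 80% continuous-load derating guideline, neither of which is stated in this task):

```python
BREAKER_AMPS = 16
FEED_VOLTS = 230   # assumed single-phase feed voltage

hard_limit = BREAKER_AMPS * FEED_VOLTS   # 3680 W absolute breaker limit
continuous = hard_limit * 0.8            # ~2944 W continuous-load guideline
measured = 3300                          # peak draw observed under stress

print(f"hard limit: {hard_limit} W")
print(f"continuous: {continuous:.0f} W (80% guideline)")
print(f"measured:   {measured} W -> only {hard_limit - measured} W of hard headroom")
```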
That means the problem is now resolved.