
Estimate hardware requirements for ordering new servers for Elasticsearch
Closed, Resolved · Public

Description

We may need more servers in the future. This task is to estimate the hardware requirements so those figures can be passed on to Tech Ops.

Specs:
CPU: Dual Intel(R) Xeon(R) CPU E5-2640 v3
Disk: 800GB raw raided space (2x 800GB SSD RAID1 or similar, software RAID is fine)
RAM: 128GB

Number of servers:
eqiad: 36-31 = 5
codfw: 36-24 = 12
The formula is: [desired size of cluster] - [current size of cluster] = [number of servers to add] (a short sketch of this arithmetic follows below).
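
For reference, a minimal sketch of the arithmetic above in Python; the desired size of 36 and the current sizes of 31 and 24 are the figures from this task, and the script itself is illustrative only:

```python
# Sketch of the server-count arithmetic; figures taken from this task.
DESIRED_SIZE = 36  # target number of nodes per cluster

current_sizes = {
    "eqiad": 31,
    "codfw": 24,
}

for cluster, current in current_sizes.items():
    to_add = DESIRED_SIZE - current
    print(f"{cluster}: {DESIRED_SIZE} - {current} = {to_add} servers to add")
```

If needed, the current node count can be double-checked against the live cluster, e.g. `curl -s localhost:9200/_cat/nodes?h=name | wc -l` run on any Elasticsearch node (assuming the default HTTP port).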

Event Timeline

The goal is to increase the size of both the eqiad and codfw clusters to 36 nodes. We want to keep the specs as close as possible to the current servers to ensure uniform load.

@EBernhardson could you have a look and confirm this is the plan?

Seems right to me; the main idea was to balance out the clusters now that the eqiad hardware is similar to codfw's. The original budget request used an estimate of 36 servers per cluster as the final state.

Hardware estimation seems complete. The actual hardware request is tracked in T149089.

I believe that this task to estimate the hardware is complete on Discovery's end. However, I'm unsure of the process for the hardware-requests project; I'm leaving this task open for now so it appears in that project's backlog. Please let me know when we can close this. :-)


Based on the lack of response, I'm going to close this task. If that causes some difficulties for anyone, please feel free to reopen.