Get another array for dataset1001; it's been short of space for a while, and we want to house full sets of openzim dumps and other files there now too.
Description
| Status | Subtype | Assigned | Task |
|---|---|---|---|
| Resolved | | Cmjohnson | T99808 dataset1001: add new disk array |
| Resolved | | RobH | T93118 order new array for dataset1001 |
Event Timeline
The dataset1001-array1 is presently 12 * 2TB nearline SAS 7.2k rpm. I've requested a quote for another shelf.
While the hardware request is now handled in phabricator, the actual quotes are still in RT. (We're still working on our security settings and workflow in phabricator to handle those aspects of procurement.) The RT ticket is https://rt.wikimedia.org/Ticket/Display.html?id=9269
I'll update this task with order confirmation and shipping ETA. All price discussions must take place on the RT ticket at this time.
I'm assigning this task to @ArielGlenn, after IRC discussion about the space/disk requirements needed for this request.
We're going to need to document on this task what we plan to store on this shelf, along with its projected storage requirements. The current idea is a single MD1200 with 12 * #TB disks, but we need more info to make a better judgement.
Per IRC discussion, Ariel will work with the other folks involved with the dumps storage and determine these figures, update the task, and assign it back to me for quote gathering.
I did some back-of-the-napkin calculations. With a new array of 12 2TB disks we get, say, 18T usable after RAID. Regular dumps plus pagecounts plus misc grow about 4T a year, and we have about 4T in one-off requests right now, which are likely to grow much more slowly. Given this I think we're far better off with the 3TB disks, so RobH, could you see what that looks like?
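The napkin math above can be sketched roughly as follows. This is a hypothetical illustration, not anything from the ticket: it assumes RAID6 across all 12 disks plus about 10% filesystem/formatting overhead, which happens to reproduce the ~18T figure for the 2TB option, and it treats the ~4T of one-off requests as the starting occupancy with ~4T/year growth.

```python
# Assumption: RAID6 over all 12 disks (two disks lost to parity) and
# ~10% filesystem/formatting overhead. These are guesses for
# illustration; the actual RAID layout isn't stated in the task.
def usable_tb(disks: int, disk_tb: int) -> float:
    raw = disks * disk_tb
    after_parity = raw - 2 * disk_tb  # RAID6 parity cost
    return after_parity * 0.9         # ~10% overhead

def years_of_headroom(usable: float, initial_tb: float, growth_tb_per_year: float) -> float:
    # How long until the array fills, given current data and linear growth.
    return (usable - initial_tb) / growth_tb_per_year

for disk_tb in (2, 3):
    cap = usable_tb(12, disk_tb)
    yrs = years_of_headroom(cap, initial_tb=4, growth_tb_per_year=4)
    print(f"12 x {disk_tb}TB: ~{cap:.0f}T usable, ~{yrs:.1f} years of headroom")
```

Under these assumptions the 2TB option gives roughly 3.5 years of headroom versus nearly 6 for the 3TB option, which is the gist of the argument for the larger disks.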
This has been ordered and will arrive today/tomorrow. T99808 is for the installation, and receipt is tracked in RT https://rt.wikimedia.org/Ticket/Display.html?id=9269.
Resolving.