
[ceph] cloudcephosd1004-1015 think that their hard drives are HDD when they are SSD
Closed, Resolved · Public

Description

Initial Issue Report

It seems that the hardware is different from the one used for the POC hosts, and the current kernel/drivers are unable to properly detect the drives.

Kernel rotational setting:

root@cloudcephosd1012:~# cat /sys/block/sdb/queue/rotational 
1
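
For convenience, lsblk can report the kernel's rotational flag for every block device at once (this command is an illustrative addition, not part of the original report; ROTA=1 means the kernel thinks the device is a spinning disk):

root@cloudcephosd1012:~# lsblk -d -o NAME,ROTA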

smartctl is able to show more info, though not the complete picture:

# smartctl -a /dev/sdb                                                    
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-10-amd64] (local build)    
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org         
                                                                                                 
Smartctl open device: /dev/sdb failed: DELL or MegaRaid controller, please try adding '-d megaraid,N'
root@cloudcephosd1012:~# smartctl -a /dev/sdb -d megaraid,[                   
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-10-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
                                                                                                 
/dev/sdb: Unknown device type 'megaraid,['                                                       
=======> VALID ARGUMENTS ARE: ata, scsi, nvme[,NSID], sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,p][,x][,N], usbprolific, usbsunplus, intelliprop,N[+TYPE], marvell, areca,N/E, 3ware,N, hp
t,L/M/N, megaraid,N, aacraid,H,L,ID, cciss,N, auto, test <=======      
                                                                                                 
Use smartctl -h to get a usage summary                                                           
                                                
root@cloudcephosd1012:~# smartctl -a /dev/sdb -d megaraid,0                                      
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-10-amd64] (local build)                       
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
                                                
=== START OF INFORMATION SECTION ===            
Device Model:     MTFDDAK240TCB
Serial Number:    200225EE619E
LU WWN Device Id: 5 00a075 125ee619e
Add. Product Id:  DELL(tm)
Firmware Version: D0DE012
User Capacity:    240,057,409,536 bytes [240 GB] 
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Nov 25 13:33:49 2020 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...

hdparm shows only a very small subset of capabilities:

root@cloudcephosd1012:~# hdparm -I /dev/sdb 

/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

ATA device, with non-removable media
Standards:
        Likely used: 1
Configuration:
        Logical         max     current
        cylinders       0       0
        heads           0       0
        sectors/track   0       0
        --
        Logical/Physical Sector size:           512 bytes
        device size with M = 1024*1024:           0 MBytes
        device size with M = 1000*1000:           0 MBytes 
        cache/buffer size  = unknown
Capabilities:
        IORDY not likely
        Cannot perform double-word IO
        R/W multiple sector transfer: not supported
        DMA: not supported
        PIO: pio0

(for example, it is missing the TRIM capability that is available on cloudcephosd1003):

# hdparm -I /dev/sdc | grep TRIM
           *    Data Set Management TRIM supported (limit 4 blocks)
           *    Deterministic read ZEROs after TRIM

This considerably limits the ability to use the SSDs in a performant and long-lasting way.
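
A quick cross-check for whether the kernel exposes TRIM/discard at all is lsblk's discard columns (illustrative addition; all-zero DISC-GRAN/DISC-MAX values mean discard is not available on that device):

root@cloudcephosd1012:~# lsblk -D
root@cloudcephosd1012:~# lsblk -d -o NAME,ROTA,DISC-GRAN,DISC-MAX /dev/sdb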

Solution

The original hosts from a previous order were cloudcephosd100[1-3]. These were all set up with every disk in non-raid mode, presenting as JBOD, and with a software raid1 mirror written to the two smaller SSDs. It appears that if the SSDs are put into a hardware raid behind the controller, their advanced SSD TRIM functions are not accessible. The fix was to take cloudcephosd1015, convert all of its disks to non-raid, update netboot so that ALL eqiad cloudcephosd hosts use software raid, and reimage the host.
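
As a quick sketch of what the end state should look like from the OS (device names are assumptions for illustration, not taken from the task):

root@cloudcephosd1004:~# cat /proc/mdstat                      # should list the software raid1 mirror built on the two small SSDs
root@cloudcephosd1004:~# lsblk -d -o NAME,SIZE,ROTA,MODEL      # the OSD disks should appear as individual (JBOD) devices with ROTA=0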

cloudcephosd1004:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface (example tunnel commands are shown after this checklist). If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1004.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service
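
Example tunnel commands for the step above (taken from the cloudcephosd1014 checklist further down and adapted to this host; either a SOCKS proxy or a direct port forward to the mgmt interface works, in the latter case browse to https://localhost:8000/):

ssh -D 8080 cumin1001.eqiad.wmnet
ssh -L 8000:cloudcephosd1004.mgmt.eqiad.wmnet:443 cumin1001.eqiad.wmnet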

cloudcephosd1005:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1005.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1006:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1006.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1007:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1007.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1008:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1008.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1009:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1009.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1010:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1010.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1011:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1011.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1012:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1012.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1013:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1013.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1014:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks. (ssh -D 8080 cumin1001.eqiad.wmnet or ssh -L 8000:cloudcephosd1014.mgmt.eqiad.wmnet:443 cumin2001.codfw.wmnet)
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1014.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service

cloudcephosd1015:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks.
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1015.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script (an example invocation is shown after this checklist)
  • - cloud-services-team returns host to service
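
Example reimage invocation and post-reimage sanity check for the step above (the script is run from a cumin host, as in the log entries further down; the exact flags are an assumption and should be verified):

robh@cumin1001:~$ sudo wmf-auto-reimage-host -p T268746 cloudcephosd1015.eqiad.wmnet

root@cloudcephosd1015:~# cat /sys/block/sdb/queue/rotational    # expect 0 once the disks are non-raid
root@cloudcephosd1015:~# hdparm -I /dev/sdb | grep TRIM         # expect the TRIM capabilities to be listed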

Event Timeline

dcaro renamed this task from [ceph] cloud1004-1015 think that their hard drives are HDD when they are SSD to [ceph] cloudcephosd1004-1015 think that their hard drives are HDD when they are SSD. Nov 25 2020, 1:41 PM
dcaro removed dcaro as the assignee of this task. Nov 25 2020, 2:53 PM

related tasks for this hardware: T251619, T242133

<_dcaro> David Caro hmmm... the new servers for ceph (in codfw) have the same brand of disks (a bit smaller size), but they are detected correctly, I think it might be the RAID controller on the other ones that's messing things up

Mentioned in SAL (#wikimedia-cloud) [2020-11-30T18:12:18Z] <andrewbogott> removing all osds from cloudcephosd1015 in order to investigate T268746

Andrew added a subscriber: RobH.

I've moved the workload off of cloudcephosd1015.eqiad.wmnet so we can experiment. For starters @RobH is going to upgrade the firmware (including the raid controller), boot back to the OS, and then we'll see what it looks like. If we need to reinstall the OS for it to re-detect the drives that's also fine.

Troubleshooting:

  • updated iDRAC and BIOS to the newest firmware versions, 4.22.00.53 & 2.9.3
  • the RAID BIOS was already at the latest release, 25.5.6.0009
  • after all updates, the SSD still reports:
robh@cloudcephosd1015:~$ sudo hdparm -I /dev/sdb 

/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  • this is not correct, as the cloudcephosd hosts from the purchase before this one are identical in config, yet hdparm detects all the features of those SSDs correctly (example: cloudcephosd1002 versus cloudcephosd1015).
  • it turns out that the disks in cloudcephosd100[1-3] are all non-raid disks, while the new batch was set in raid mode. Converted all to non-raid and updated netboot to reimage with software raid instead of hardware raid, to see if the SSDs are then detected correctly (see the sketch below for checking this from the OS).
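
A quick way to check from the OS whether a disk is still being presented through the RAID controller or as a plain passthrough device (illustrative; the exact vendor/model strings depend on the controller):

root@cloudcephosd1015:~# lsblk -S -o NAME,TRAN,VENDOR,MODEL
# disks behind a PERC virtual disk typically show the controller's vendor/model strings,
# while non-raid passthrough disks show the actual drive model (e.g. the Micron MTFDDAK* devices seen above)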

Change 644578 had a related patch set uploaded (by RobH; owner: RobH):
[operations/puppet@production] swapping new cloudcephmon eqiad hosts to partition same as existing

https://gerrit.wikimedia.org/r/644578

Change 644578 merged by RobH:
[operations/puppet@production] swapping new cloudcephmon eqiad hosts to partition same as existing

https://gerrit.wikimedia.org/r/644578

Script wmf-auto-reimage was launched by robh on cumin1001.eqiad.wmnet for hosts:

cloudcephosd1015.eqiad.wmnet

The log can be found in /var/log/wmf-auto-reimage/202012011832_robh_7493_cloudcephosd1015_eqiad_wmnet.log.

Completed auto-reimage of hosts:

['cloudcephosd1015.eqiad.wmnet']

Of which those FAILED:

['cloudcephosd1015.eqiad.wmnet']

Script wmf-auto-reimage was launched by robh on cumin1001.eqiad.wmnet for hosts:

cloudcephosd1015.eqiad.wmnet

The log can be found in /var/log/wmf-auto-reimage/202012011838_robh_14418_cloudcephosd1015_eqiad_wmnet.log.

Completed auto-reimage of hosts:

['cloudcephosd1015.eqiad.wmnet']

Of which those FAILED:

['cloudcephosd1015.eqiad.wmnet']

Script wmf-auto-reimage was launched by robh on cumin1001.eqiad.wmnet for hosts:

cloudcephosd1015.eqiad.wmnet

The log can be found in /var/log/wmf-auto-reimage/202012011839_robh_15155_cloudcephosd1015_eqiad_wmnet.log.

Completed auto-reimage of hosts:

['cloudcephosd1015.eqiad.wmnet']

Of which those FAILED:

['cloudcephosd1015.eqiad.wmnet']

Change 644593 had a related patch set uploaded (by RobH; owner: RobH):
[operations/puppet@production] cloudcephosd update was not correct

https://gerrit.wikimedia.org/r/644593

Change 644593 merged by RobH:
[operations/puppet@production] cloudcephosd update was not correct

https://gerrit.wikimedia.org/r/644593

Script wmf-auto-reimage was launched by robh on cumin1001.eqiad.wmnet for hosts:

cloudcephosd1015.eqiad.wmnet

The log can be found in /var/log/wmf-auto-reimage/202012011857_robh_31173_cloudcephosd1015_eqiad_wmnet.log.

Completed auto-reimage of hosts:

['cloudcephosd1015.eqiad.wmnet']

Of which those FAILED:

['cloudcephosd1015.eqiad.wmnet']

Script wmf-auto-reimage was launched by robh on cumin1001.eqiad.wmnet for hosts:

cloudcephosd1015.eqiad.wmnet

The log can be found in /var/log/wmf-auto-reimage/202012011858_robh_31547_cloudcephosd1015_eqiad_wmnet.log.

Completed auto-reimage of hosts:

['cloudcephosd1015.eqiad.wmnet']

Of which those FAILED:

['cloudcephosd1015.eqiad.wmnet']

Script wmf-auto-reimage was launched by robh on cumin1001.eqiad.wmnet for hosts:

cloudcephosd1015.eqiad.wmnet

The log can be found in /var/log/wmf-auto-reimage/202012011919_robh_20552_cloudcephosd1015_eqiad_wmnet.log.

Completed auto-reimage of hosts:

['cloudcephosd1015.eqiad.wmnet']

and were ALL successful.

Detailing the fix in this comment; it will then be copied into the task description.

The original hosts from a previous order were cloudcephosd100[1-3]. These were all set up with every disk in non-raid mode, presenting as JBOD, and with a software raid1 mirror written to the two smaller SSDs. It appears that if the SSDs are put into a hardware raid behind the controller, their advanced SSD TRIM functions are not accessible. The fix was to take cloudcephosd1015, convert all of its disks to non-raid, update netboot so that ALL eqiad cloudcephosd hosts use software raid, and reimage the host.

So the checklist to do this is as follows, using cloudcephosd1004 as an example:

cloudcephosd1004:

  • - cloud-services-team depools host from service
  • - set up an ssh tunnel into cumin, so you can pull up the host's https mgmt interface. If you do this via ssh only, it takes a bit longer to convert all the disks (a racadm-based sketch of the ssh-only path follows this checklist).
  • - ensure the system is powered up, as the controller cannot access the disks otherwise. The host also must not be sitting in the BIOS, or the controller cannot make the changes.
  • - access https://cloudcephosd1004.mgmt.eqiad.wmnet and log in to do the following steps:
  • - Configuration > Storage Configuration > Controller Configuration > Reset Configuration
  • - Configuration > Storage Configuration > Physical Disk Configuration > Drop down next to each disk, convert to non-raid
  • - Commit all changes via Apply Now, then use the pop-up to watch the progress in the Job Queue. If you don't see progress, the host may be powered down or sitting in the BIOS; reboot it and it will apply the changes.
  • - reimage the host with the wmf-auto-reimage-host script
  • - cloud-services-team returns host to service
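
For the ssh-only path mentioned above, an illustrative racadm sketch of the same conversion (the FQDDs and exact options are assumptions that vary by controller and iDRAC firmware; the procedure above used the web UI, so verify before relying on this):

racadm storage get pdisks
racadm storage resetconfig:RAID.Integrated.1-1
racadm storage converttononraid:Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1
racadm jobqueue create RAID.Integrated.1-1
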
RobH triaged this task as Medium priority. Dec 1 2020, 7:58 PM
RobH updated the task description. (Show Details)
RobH updated the task description. (Show Details)

Mentioned in SAL (#wikimedia-cloud) [2020-12-01T20:06:51Z] <andrewbogott> removing all osds on cloudcephosd1014 for rebuild, T268746

Andrew is going to attempt all the steps to fix cloudcephosd1014, so I am reassigning this to him. If there are issues, I'm around to assist via IRC.

root@cloudcephosd1014:~# hdparm -I /dev/sdc

/dev/sdc:

ATA device, with non-removable media
Model Number: MTFDDAK1T9TDN
Serial Number: 19472511BD26
Firmware Revision: D1DF003
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
Used: unknown (minor revision code 0x006d)
Supported: 10 9 8 7 6 5
Likely used: 10
Configuration:
        Logical         max     current
        cylinders       16383   0
        heads           16      0
        sectors/track   63      0

LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 3750748848
Logical Sector size: 512 bytes
Physical Sector size: 4096 bytes
Logical Sector-0 offset: 0 bytes
device size with M = 1024*1024: 1831420 MBytes
device size with M = 1000*1000: 1920383 MBytes (1920 GB)
cache/buffer size = unknown
Form Factor: 2.5 inch
Nominal Media Rotation Rate: Solid State Device
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, with device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Advanced power management level: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6

	     Cycle time: min=120ns recommended=120ns

PIO: pio0 pio1 pio2 pio3 pio4

	     Cycle time: no flow control=120ns  IORDY flow control=120ns

Commands/features:
	Enabled	Supported:
	   *	SMART feature set
	   *	Power Management feature set
	   *	Write cache
	   *	Look-ahead
	   *	WRITE_BUFFER command
	   *	READ_BUFFER command
	   *	NOP cmd
	   *	DOWNLOAD_MICROCODE
	   *	Advanced Power Management feature set
	   *	48-bit Address feature set
	   *	Mandatory FLUSH_CACHE
	   *	FLUSH_CACHE_EXT
	   *	SMART error logging
	   *	SMART self-test
	   *	General Purpose Logging feature set
	   *	64-bit World wide name
	   *	IDLE_IMMEDIATE with UNLOAD
	    	Write-Read-Verify feature set
	   *	WRITE_UNCORRECTABLE_EXT command
	   *	{READ,WRITE}_DMA_EXT_GPL commands
	   *	Segmented DOWNLOAD_MICROCODE
	    	unknown 119[6]
	    	unknown 119[8]
	   *	Gen1 signaling speed (1.5Gb/s)
	   *	Gen2 signaling speed (3.0Gb/s)
	   *	Gen3 signaling speed (6.0Gb/s)
	   *	Native Command Queueing (NCQ)
	   *	Phy event counters
	   *	NCQ priority information
	   *	READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
	   *	DMA Setup Auto-Activate optimization
	   *	Software settings preservation
	   *	SMART Command Transport (SCT) feature set
	   *	SCT Write Same (AC2)
	   *	SCT Error Recovery Control (AC3)
	   *	SCT Features Control (AC4)
	   *	SCT Data Tables (AC5)
	   *	SANITIZE_ANTIFREEZE_LOCK_EXT command
	   *	SANITIZE feature set
	   *	CRYPTO_SCRAMBLE_EXT command
	   *	BLOCK_ERASE_EXT command
	   *	DOWNLOAD MICROCODE DMA command
	   *	WRITE BUFFER DMA command
	   *	READ BUFFER DMA command
	   *	Data Set Management TRIM supported (limit 8 blocks)
	   *	Deterministic read ZEROs after TRIM

Logical Unit WWN Device Identifier: 500a07512511bd26
NAA : 5
IEEE OUI : 00a075
Unique ID : 12511bd26
Checksum: correct

Mentioned in SAL (#wikimedia-cloud) [2020-12-02T15:08:42Z] <andrewbogott> removing all osds on cloudcephosd1012 for rebuild, T268746

Mentioned in SAL (#wikimedia-cloud) [2020-12-02T20:03:56Z] <andrewbogott> removing all osds on cloudcephosd1010 for rebuild, T268746

Mentioned in SAL (#wikimedia-cloud) [2020-12-03T02:55:22Z] <andrewbogott> removing all osds on cloudcephosd1009 for rebuild, T268746

Mentioned in SAL (#wikimedia-cloud) [2020-12-03T13:24:15Z] <andrewbogott> removing all osds on cloudcephosd1008 for rebuild, T268746

Mentioned in SAL (#wikimedia-cloud) [2020-12-03T19:51:46Z] <andrewbogott> removing all osds on cloudcephosd1006 for rebuild, T268746

Mentioned in SAL (#wikimedia-cloud) [2020-12-03T21:45:48Z] <andrewbogott> removing all osds on cloudcephosd1005 for rebuild, T268746

Mentioned in SAL (#wikimedia-cloud) [2020-12-03T23:21:32Z] <andrewbogott> removing all osds on cloudcephosd1004 for rebuild, T268746

Everything is rebuilt as JBOD and put back in service. Looks OK so far!