
Move away from nfs?
Closed, Duplicate · Public

Description

Following a deployment to k8s, it may be reasonable to move away from NFS and instead attach a cinder volume to the k8s cluster as a persistent volume (PV). This would need to be portable between clusters to facilitate upgrades.
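
As a rough sketch of that idea (assuming the upstream cinder CSI driver is deployed; the name, size, and volume UUID below are placeholders), an existing cinder volume could be referenced by a statically defined PV, which could be re-created in a replacement cluster to carry the data across an upgrade:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: quarry-results
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain    # keep the cinder volume even if the claim is deleted
  csi:
    driver: cinder.csi.openstack.org       # upstream cinder CSI driver name
    volumeHandle: 00000000-0000-0000-0000-000000000000    # the cinder volume's UUID (placeholder)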

Event Timeline

Would be good to consolidate discussion in T178520: Find somewhere else (not NFS) to store Quarry's resultsets - maybe we could switch directly to object storage.

Oh look at that, been a good idea for some time.

Some initial tinkering suggests this may not be within reach in WMCS at the moment:
Making a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: results
spec:
  storageClassName: manual   # statically provisioned; no dynamic provisioner involved
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany          # read-write, mountable by many nodes at once
  hostPath:
    path: "/results"         # a path on the node's local filesystem

and PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: results
spec:
  # No storageClassName is set, so the cluster's default storage class
  # (evidently the cinder CSI provisioner, given the error below) handles
  # this claim dynamically rather than binding it to the manual PV above.
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 1Gi

This results in the following on pods scheduled to nodes other than the first one to mount the volume:

Warning  FailedAttachVolume  45s (x8 over 113s)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-9c0aecc6-705b-42f4-a783-44c1ea8f5813" : rpc error: code = Internal desc = [ControllerPublishVolume] Attach Volume failed with error failed to attach a0026dab-171a-4f81-a8b0-0f656962aa88 volume to 5d82dc07-e241-44d4-8b46-f1c50297b1d4 compute: Bad request with: [POST https://openstack.eqiad1.wikimediacloud.org:28774/v2.1/servers/5d82dc07-e241-44d4-8b46-f1c50297b1d4/os-volume_attachments], error message: {"badRequest": {"code": 400, "message": "Invalid volume: volume a0026dab-171a-4f81-a8b0-0f656962aa88 is already attached to instances: cbabfe7c-a2a6-4e57-af2a-b868455bf57c"}}

https://docs.openstack.org/cinder/zed/configuration/block-storage/block-storage-overview.html further suggests that cinder may not be able to mount a volume to more than one node at a time: "With the Block Storage service, you can attach a device to only one instance."
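
If cinder were used within that constraint, the claim would have to be ReadWriteOnce and only one pod could mount it at a time. A sketch of what that might look like (the claim name, Deployment name, labels, and image are all placeholders): strategy Recreate makes the old pod release the volume before a replacement, possibly on another node, tries to attach it.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: results-rwo
spec:
  accessModes: [ReadWriteOnce]   # one node at a time, matching cinder's limitation
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: results-consumer
spec:
  replicas: 1
  strategy:
    type: Recreate               # stop the old pod before starting the new one
  selector:
    matchLabels: {app: results-consumer}
  template:
    metadata:
      labels: {app: results-consumer}
    spec:
      containers:
        - name: app
          image: example/app:latest    # placeholder image
          volumeMounts:
            - name: results
              mountPath: /results
      volumes:
        - name: results
          persistentVolumeClaim:
            claimName: results-rwo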

Some chatting in the IRC channel suggests we might be stuck with NFS for this use case for the moment, though hopefully not forever.