This is likely a problem of my own making, but rather than continue poking around in the dark...
I attempted to push out a newer version of Kask to k8s by:
* Updating `deploy1001:/srv/scap-helm/sessionstore/sessionstore-staging-values.yaml` to set `main_app.version` to `v1.0.0`
* Running `CLUSTER=staging scap-helm sessionstore upgrade staging -f sessionstore-staging-values.yaml stable/kask`
The resulting pod restarted repeatedly, eventually entering `CrashLoopBackOff` status, even though the Kask log output seemed to indicate an entirely normal startup.
Eventually, I reverted `deploy1001:/srv/scap-helm/sessionstore/sessionstore-staging-values.yaml` to its original state and re-ran the upgrade, but the resulting pod suffers the same fate:
```
eevans@deploy1001:~$ CLUSTER=staging scap-helm sessionstore status staging
### cluster staging
LAST DEPLOYED: Mon Jul 8 15:07:37 2019
NAMESPACE: sessionstore
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME          TYPE      CLUSTER-IP  EXTERNAL-IP  PORT(S)        AGE
kask-staging  NodePort  10.64.76.4  <none>       8080:8081/TCP  40d

==> v1/Deployment
NAME          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kask-staging  1        2        1           1          40d

==> v1/NetworkPolicy
NAME          POD-SELECTOR              AGE
kask-staging  app=kask,release=staging  40d

==> v1/Pod(related)
NAME                           READY  STATUS            RESTARTS  AGE
kask-staging-6fd45bc767-55bnp  0/1    CrashLoopBackOff  7         13m
kask-staging-7b797797cd-6rbmq  1/1    Running           0         10d

==> v1/ConfigMap
NAME                     DATA  AGE
config-staging           1     40d
cassandra-certs-staging  1     38d
kask-certs-staging       2     37d
NOTES:
Thank you for installing kask.
Your release is named kask-staging.
To learn more about the release, try:
$ helm status kask-staging
$ helm get kask-staging
eevans@deploy1001:~$
```
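For anyone digging into this, a few diagnostic commands that might narrow it down. These assume `kubectl` access to the staging cluster with the `sessionstore` namespace; the pod name is taken from the status output above, so adjust as needed:

```shell
# Describe the crash-looping pod; the Events section at the bottom often names
# the cause directly (OOMKilled, failed readiness/liveness probe, image pull
# error, etc.).
kubectl -n sessionstore describe pod kask-staging-6fd45bc767-55bnp

# Fetch logs from the *previous* (crashed) container instance, in case the
# failure only appears right before exit and the current attempt looks clean.
kubectl -n sessionstore logs kask-staging-6fd45bc767-55bnp --previous

# Show the revision history of the release, to confirm the revert actually
# rolled the chart values back rather than deploying a new broken revision.
helm history kask-staging
```

Note also that the Deployment above shows `DESIRED 1` but `CURRENT 2`, i.e. the old ReplicaSet's pod is still running while the new pod crash-loops, which is consistent with a rollout that never becomes healthy.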
What am I doing wrong?