Forked from discussion in T377420: [jobs-api,jobs-cli] Introduce a way to stop stuck cronjobs
In T377420#10241037, @aborrero wrote:

> In T377420#10239144, @bd808 wrote:
>> I wonder if adding support for declaring `concurrencyPolicy: Replace` for a scheduled job would also be helpful? Something like `toolforge jobs run --image foo --command bar --schedule '*/5 * * * *' --replace job-that-should-be-killed-if-still-running-when-the-next-schedule-fires` could set up a CronJob instance that will be force killed by Kubernetes if a stale copy of the job is still active when the next scheduled run is due to start. Toolhub uses this Kubernetes behavior as a workaround for a non-terminating sidecar container in a CronJob for its production deployment.
>
> I'm reading https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#concurrency-policy and yes, this seems interesting. We could actually support both things (the healthcheck and `concurrencyPolicy`). Maybe we should explore that in a different ticket?
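For reference, `concurrencyPolicy` is a standard field on the Kubernetes CronJob spec. A minimal sketch of the kind of manifest the proposed `--replace` flag might generate (the name, image, and command here are illustrative placeholders taken from the example above, not actual jobs-api output):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-that-should-be-killed-if-still-running
spec:
  schedule: "*/5 * * * *"
  # Replace: if the previous run is still active when the next
  # schedule fires, Kubernetes deletes it and starts a fresh run.
  # (Default is Allow; Forbid would instead skip the new run.)
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: job
              image: foo
              command: ["bar"]
```

With `Replace`, a stuck run never blocks the schedule indefinitely, which is the failure mode this task is about; the trade-off is that a legitimately slow run can be killed mid-flight.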