
[infra,k8s,kyverno] Toolforge Kyverno low policy resources tools
Closed, Resolved · Public

Description

After yesterday's "failed" upgrade, the "Toolforge Kyverno low policy resources tools" alert has been flapping overnight. Looking at the logs, I see:

1807 2024-09-04T02:53:23Z    ERROR   setup.runtime-checks    runtime/utils.go:101    failed to validate certificates {"error": "Get \"https://10.96.0.1:443/api/v1/namespaces/kyverno/secrets/kyverno-svc.kyverno.svc.kyverno-tls-ca\": context canceled"}

The error from helm history is also related to timeouts:

root@tools-k8s-control-7:~/toolforge-deploy/components/kyverno# helm history -n kyverno kyverno -o yaml
...
- app_version: v1.12.5
  chart: kyverno-3.2.6
  description: "Upgrade \"kyverno\" failed: post-upgrade hooks failed: 1 error occurred:\n\t*
    timed out waiting for the condition\n\n"
  revision: 9
  status: failed
  updated: "2024-09-03T15:22:51.66107678Z"

investigating

Event Timeline

Those errors happen more often than just during the troughs. I also see this around one of the troughs; let me cross-check a bit more carefully:

2005 2024-09-04T03:02:00Z    INFO    webhooks.server logging/log.go:184      2024/09/04 03:02:00 http: TLS handshake error from 192.168.57.64:62745: EOF

(Updated the graph with one where the data actually shows; there is a bug in Prometheus dark mode.)

image.png (1×3 px, 169 KB)

From kyverno-admission-controller-7cb7c68647-zwrvv only, first trough:

1764 2024-09-04T02:48:29Z    INFO    setup.leader-election   leaderelection/leaderelection.go:99     another instance has been elected as leader     {"id": "kyverno-admission-controller-7cb7c68647-zwrvv", "leader": "kyverno-admission-controller-7cb7c68647-btm8l"}

Second trough:

3951 2024-09-04T04:03:36Z    INFO    setup.leader-election   leaderelection/leaderelection.go:99     another instance has been elected as leader     {"id": "kyverno-admission-controller-7cb7c68647-zwrvv", "leader": "kyverno-admission-controller-7cb7c68647-8cmt5"}
3952 2024-09-04T04:08:36Z    INFO    setup.leader-election   leaderelection/leaderelection.go:99     another instance has been elected as leader     {"id": "kyverno-admission-controller-7cb7c68647-zwrvv", "leader": "kyverno-admission-controller-7cb7c68647-7lpr6"}

And third trough:

5664 2024-09-04T04:58:45Z    INFO    setup.leader-election   leaderelection/leaderelection.go:99     another instance has been elected as leader     {"id": "kyverno-admission-controller-7cb7c68647-zwrvv", "leader": "kyverno-admission-controller-7cb7c68647-mj8kc"}

It looks like the leadership is flapping; looking into it.
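
One way to confirm the flapping is to watch the leader election lease itself (the kyverno/kyverno lease name comes from the renew errors in the logs), a quick sketch:

kubectl -n kyverno get lease kyverno -o jsonpath='{.spec.holderIdentity}{"\n"}'
watch -n 5 "kubectl -n kyverno get lease kyverno -o jsonpath='{.spec.holderIdentity} {.spec.renewTime}'"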

The btm8l pod (the first one involved in the leader changes) restarted by itself:

root@tools-k8s-control-7:~/toolforge-deploy/components/kyverno# kubectl get pods -n kyverno kyverno-admission-controller-7cb7c68647-btm8l 
NAME                                            READY   STATUS    RESTARTS       AGE
kyverno-admission-controller-7cb7c68647-btm8l   1/1     Running   2 (167m ago)   16h
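
The reason for the restart can be pulled from the previous container run, e.g. (a sketch of the commands, not output from this session):

kubectl -n kyverno describe pod kyverno-admission-controller-7cb7c68647-btm8l
kubectl -n kyverno logs --previous kyverno-admission-controller-7cb7c68647-btm8l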

From the previous run:

2024-09-04T04:58:32Z    ERROR   setup.runtime-checks    runtime/utils.go:101    failed to validate certificates {"error": "Get \"https://10.96.0.1:443/api/v1/namespaces/kyverno/secrets/kyverno-svc.kyverno.svc.kyverno-tls-pair\": context canceled"}
2024-09-04T04:58:35Z    ERROR   klog    leaderelection/leaderelection.go:369    Failed to update lock: Put "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kyverno/leases/kyverno": context deadline exceeded
2024-09-04T04:58:35Z    INFO    klog    leaderelection/leaderelection.go:285    failed to renew lease kyverno/kyverno: timed out waiting for the condition
2024-09-04T04:58:35Z    INFO    setup.leader-election   leaderelection/leaderelection.go:90     leadership lost, stopped leading        {"id": "kyverno-admission-controller-7cb7c68647-btm8l"}
2024-09-04T04:58:35Z    ERROR   webhooks/server.go:224  failed to start server  {"error": "http: Server closed"}
2024-09-04T04:58:35Z    INFO    setup.shutdown  internal/setup.go:27    shutting down...
2024-09-04T04:58:35Z    INFO    setup.shutdown  internal/setup.go:27    shutting down...
2024-09-04T04:58:35Z    INFO    setup.shutdown  internal/setup.go:27    shutting down...
2024-09-04T04:58:35Z    INFO    setup.maxprocs  internal/maxprocs.go:16 maxprocs: Resetting GOMAXPROCS to 8
2024-09-04T04:58:35Z    INFO    metrics-config-controller       controller/run.go:88    waiting for workers to terminate ...
2024-09-04T04:58:35Z    INFO    dynamic-client.Poll     dclient/discovery.go:81 stopping registered resources sync
2024-09-04T04:58:35Z    INFO    certmanager-controller.routine  controller/run.go:84    routine stopped {"id": 0}
2024-09-04T04:58:35Z    INFO    certmanager-controller  controller/run.go:88    waiting for workers to terminate ...
2024-09-04T04:58:35Z    INFO    exception-webhook-controller    controller/run.go:88    waiting for workers to terminate ...
2024-09-04T04:58:35Z    INFO    config-controller       controller/run.go:88    waiting for workers to terminate ...
2024-09-04T04:58:35Z    INFO    global-context  controller/run.go:88    waiting for workers to terminate ...
2024-09-04T04:58:35Z    ERROR   webhook-controller.routine      webhook/controller.go:258       failed to update lease  {"id": 0, "error": "Put \"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kyverno/leases/kyverno-health\": context canceled"}

It seems to be getting timeouts when dealing with leases (and the cert error shows up there too, though it also appears in the logs when things are stable). Looking into it.

Similar errors from the other controller pods that restarted: they lost the leadership and restarted themselves:

root@tools-k8s-control-7:~/toolforge-deploy/components/kyverno# kubectl -n kyverno logs --previous kyverno-admission-controller-7cb7c68647-wkwrw | vim -
...
2024-09-04T04:38:37Z    ERROR   klog    leaderelection/leaderelection.go:332    error retrieving resource lock kyverno/kyverno: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kyverno/leases/kyverno": context deadline exceeded

kube-controller-manager seems to be restarting a fair amount too:

kube-apiserver-tools-k8s-control-7            1/1     Running   2 (44h ago)     44h  
kube-apiserver-tools-k8s-control-8            1/1     Running   2 (43h ago)     43h                                                                                        
kube-apiserver-tools-k8s-control-9            1/1     Running   2 (43h ago)     43h                                                                                        
kube-controller-manager-tools-k8s-control-7   1/1     Running   10 (109m ago)   44h                                                                                        
kube-controller-manager-tools-k8s-control-8   1/1     Running   8 (114m ago)    43h                                                                                        
kube-controller-manager-tools-k8s-control-9   1/1     Running   6 (124m ago)    43h

Interesting: same issue, timeout and leadership lost:

root@tools-k8s-control-7:~/toolforge-deploy/components/kyverno# kubectl -n kube-system logs --previous kube-controller-manager-tools-k8s-control-7 | vim -
...
E0904 06:08:44.251562       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://172.16.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": context deadline exceeded

Probably time to look into the api-server (though it did not restart recently).
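
The api-server health endpoints split the checks per component (including etcd), so they are a cheap first probe; for example:

kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez/etcd'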

There was a network switch problem overnight as well:

06:51 <+icinga-wm> PROBLEM - BGP status on cloudsw1-c8-eqiad.mgmt is CRITICAL: BGP CRITICAL - AS64605/IPv4: Active - Anycast https://wikitech.wikimedia.org/wiki/Network_monitoring%23BGP_status
06:53 <+icinga-wm> RECOVERY - BGP status on cloudsw1-c8-eqiad.mgmt is OK: BGP OK - up: 14, down: 0, shutdown: 0 https://wikitech.wikimedia.org/wiki/Network_monitoring%23BGP_status

What failed upgrade are we talking about, @dcaro? The upgrade to 1.26, or did we attempt something directly on tools yesterday while planning for the 1.27 upgrade?

oooh, this? https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-deploy/-/merge_requests/511
I thought the idea was to be done with toolsbeta before deploying anything on tools?

Yes, that is correct; the change you reference updated kyverno in both tools and toolsbeta.

But I suspect a datacenter network issue is the actual root problem here.

I think the current theory is that this is caused by the api-server being unreliable, which is caused by etcd being unreliable, which in turn may be caused by T373986: cloudsw1-c8-eqiad is unstable.

I just checked the etcd logs on server tools-k8s-etcd-22. There are a few leader elections around the time the switch failed:

Sep 04 06:08:35 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff is starting a new election at term 37801
Sep 04 06:08:35 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff became candidate at term 37802
Sep 04 06:08:35 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp from 4c44ec1035dadff at term 37802
Sep 04 06:08:35 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2292e41cf22b5539 at term 37802
Sep 04 06:08:35 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2c74926d875bb8d7 at term 37802
Sep 04 06:08:35 tools-k8s-etcd-22 etcd[502]: raft.node: 4c44ec1035dadff lost leader 2c74926d875bb8d7 at term 37802
Sep 04 06:08:37 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff is starting a new election at term 37802
Sep 04 06:08:37 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff became candidate at term 37803
Sep 04 06:08:37 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp from 4c44ec1035dadff at term 37803
Sep 04 06:08:37 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2292e41cf22b5539 at term 37803
Sep 04 06:08:37 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2c74926d875bb8d7 at term 37803
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: lost the TCP streaming connection with peer 2292e41cf22b5539 (stream Message reader)
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: failed to read 2292e41cf22b5539 on stream Message (read tcp 172.16.5.213:60570->172.16.2.200:2380: i/o timeout)
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: peer 2292e41cf22b5539 became inactive (message send to peer failed)
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: lost the TCP streaming connection with peer 2292e41cf22b5539 (stream MsgApp v2 reader)
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff is starting a new election at term 37803
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff became candidate at term 37804
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp from 4c44ec1035dadff at term 37804
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2292e41cf22b5539 at term 37804
Sep 04 06:08:38 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2c74926d875bb8d7 at term 37804
Sep 04 06:08:39 tools-k8s-etcd-22 ulogd[1070667]: [fw-in-drop] IN=ens3 OUT= MAC=ff:ff:ff:ff:ff:ff:fa:16:3e:c4:2f:0a:08:00 SRC=0.0.0.0 DST=255.255.255.255 LEN=339 TOS=00 PREC=0xC0 TTL=64 ID=0 PROTO=UDP SPT=68 DPT=67 LEN=319 MARK=0
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff is starting a new election at term 37804
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff became candidate at term 37805
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp from 4c44ec1035dadff at term 37805
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2292e41cf22b5539 at term 37805
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2c74926d875bb8d7 at term 37805
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000085952s) to execute
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (1.999938709s) to execute
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000028815s) to execute
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: read-only range request "key:\"/registry/secrets/kyverno/kyverno-svc.kyverno.svc.kyverno-tls-pair\" " with result "error:context canceled" took too long (1.720599875s) to execute
Sep 04 06:08:39 tools-k8s-etcd-22 etcd[502]: WARNING: 2024/09/04 06:08:39 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: peer 2292e41cf22b5539 became active
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: established a TCP streaming connection with peer 2292e41cf22b5539 (stream Message reader)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: established a TCP streaming connection with peer 2292e41cf22b5539 (stream MsgApp v2 reader)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff is starting a new election at term 37805
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff became candidate at term 37806
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp from 4c44ec1035dadff at term 37806
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2292e41cf22b5539 at term 37806
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2c74926d875bb8d7 at term 37806
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00010197s) to execute
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: closed an existing TCP streaming connection with peer 2292e41cf22b5539 (stream MsgApp v2 writer)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: established a TCP streaming connection with peer 2292e41cf22b5539 (stream MsgApp v2 writer)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: WARNING: 2024/09/04 06:08:40 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: closed an existing TCP streaming connection with peer 2c74926d875bb8d7 (stream Message writer)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: established a TCP streaming connection with peer 2c74926d875bb8d7 (stream Message writer)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: closed an existing TCP streaming connection with peer 2c74926d875bb8d7 (stream MsgApp v2 writer)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: established a TCP streaming connection with peer 2c74926d875bb8d7 (stream MsgApp v2 writer)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: closed an existing TCP streaming connection with peer 2292e41cf22b5539 (stream Message writer)
Sep 04 06:08:40 tools-k8s-etcd-22 etcd[502]: established a TCP streaming connection with peer 2292e41cf22b5539 (stream Message writer)
Sep 04 06:08:41 tools-k8s-etcd-22 etcd[502]: timed out waiting for read index response (local node might have slow network)
Sep 04 06:08:41 tools-k8s-etcd-22 etcd[502]: read-only range request "key:\"/registry/kyverno.io/policies/tool-wikitime/toolforge-kyverno-pod-policy\" " with result "error:etcdserver: request timed out" took too long (7.000732349s) to execute
Sep 04 06:08:41 tools-k8s-etcd-22 etcd[502]: health check for peer 2c74926d875bb8d7 could not connect: dial tcp 172.16.2.183:2380: i/o timeout (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Sep 04 06:08:41 tools-k8s-etcd-22 etcd[502]: health check for peer 2292e41cf22b5539 could not connect: dial tcp 172.16.2.200:2380: i/o timeout (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Sep 04 06:08:41 tools-k8s-etcd-22 etcd[502]: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.794974175s) to execute
Sep 04 06:08:41 tools-k8s-etcd-22 etcd[502]: WARNING: 2024/09/04 06:08:41 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff is starting a new election at term 37806
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff became candidate at term 37807
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp from 4c44ec1035dadff at term 37807
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2292e41cf22b5539 at term 37807
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [logterm: 37801, index: 2379241163] sent MsgVote request to 2c74926d875bb8d7 at term 37807
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp rejection from 2c74926d875bb8d7 at term 37807
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [quorum:2] has received 1 MsgVoteResp votes and 1 vote rejections
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff received MsgVoteResp rejection from 2292e41cf22b5539 at term 37807
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff [quorum:2] has received 1 MsgVoteResp votes and 2 vote rejections
Sep 04 06:08:42 tools-k8s-etcd-22 etcd[502]: 4c44ec1035dadff became follower at term 37807
[...]

further supporting the idea of a network problem affecting etcd, then the api-server, then everything else.
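
To get the etcd cluster view from the client side, something along these lines should work (endpoints taken from the logs above, assuming the default client port 2379; cert paths are a guess and will need adjusting for this cluster):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.5.213:2379,https://172.16.2.200:2379,https://172.16.2.183:2379 \
  --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/client.pem --key=/etc/etcd/ssl/client-key.pem \
  endpoint status -w table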

aborrero added a project: User-aborrero.

There are definitely network instabilities in etcd:

Sep 04 04:18:25 tools-k8s-etcd-23 etcd[503]: failed to read 4c44ec1035dadff on stream MsgApp v2 (read tcp 172.16.2.200:35018->172.16.5.213:2380: i/o timeout)
Sep 04 04:18:25 tools-k8s-etcd-23 etcd[503]: peer 4c44ec1035dadff became inactive (message send to peer failed)
Sep 04 04:18:26 tools-k8s-etcd-23 etcd[503]: lost the TCP streaming connection with peer 4c44ec1035dadff (stream Message reader)
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: lost the TCP streaming connection with peer 4c44ec1035dadff (stream MsgApp v2 writer)
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: peer 4c44ec1035dadff became active
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: established a TCP streaming connection with peer 4c44ec1035dadff (stream MsgApp v2 writer)
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: failed to write 4c44ec1035dadff on stream Message (write tcp 172.16.2.200:2380->172.16.5.213:37110: write: broken pipe)
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: peer 4c44ec1035dadff became inactive (message send to peer failed)
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: lost the TCP streaming connection with peer 4c44ec1035dadff (stream Message writer)
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: peer 4c44ec1035dadff became active
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: established a TCP streaming connection with peer 4c44ec1035dadff (stream Message writer)
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 [term: 37665] received a MsgAppResp message with higher term from 4c44ec1035dadff [term: 37669]
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 became follower at term 37669
Sep 04 04:18:27 tools-k8s-etcd-23 etcd[503]: raft.node: 2292e41cf22b5539 lost leader 2292e41cf22b5539 at term 37669
Sep 04 04:18:28 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 no leader at term 37669; dropping index reading msg
Sep 04 04:18:28 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 no leader at term 37669; dropping index reading msg
Sep 04 04:18:28 tools-k8s-etcd-23 etcd[503]: failed to dial 4c44ec1035dadff on stream Message (dial tcp: i/o timeout)
Sep 04 04:18:28 tools-k8s-etcd-23 etcd[503]: peer 4c44ec1035dadff became inactive (message send to peer failed)
Sep 04 04:18:28 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 [term: 37669] ignored a MsgVote message with lower term from 2c74926d875bb8d7 [term: 37666]
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 [term: 37669] received a MsgVote message with higher term from 4c44ec1035dadff [term: 37670]
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 became follower at term 37670
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 [logterm: 37665, index: 2378905928, vote: 0] rejected MsgVote from 4c44ec1035dadff [logterm: 37665, index: 2378905875] at term 37670
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: peer 4c44ec1035dadff became active
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: established a TCP streaming connection with peer 4c44ec1035dadff (stream Message reader)
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: established a TCP streaming connection with peer 4c44ec1035dadff (stream MsgApp v2 reader)
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 [term: 37670] received a MsgVote message with higher term from 2c74926d875bb8d7 [term: 37671]
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 became follower at term 37671
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: 2292e41cf22b5539 [logterm: 37665, index: 2378905928, vote: 0] cast MsgVote for 2c74926d875bb8d7 [logterm: 37665, index: 2378905928] at term 37671
Sep 04 04:18:29 tools-k8s-etcd-23 etcd[503]: raft.node: 2292e41cf22b5539 elected leader 2c74926d875bb8d7 at term 37671
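
Since etcd-22 (172.16.5.213) and etcd-23 (172.16.2.200) keep losing the peer stream to each other, a plain connectivity check between the peers over time would help separate a host-level problem from the switch; for example, from tools-k8s-etcd-23:

ping -c 60 -i 1 172.16.5.213
nc -vz 172.16.5.213 2380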
dcaro changed the task status from Open to In Progress.Sep 4 2024, 4:15 PM
dcaro moved this task from Next Up to In Progress on the Toolforge (Toolforge iteration 14) board.
dcaro moved this task from In Progress to Done on the Toolforge (Toolforge iteration 14) board.