Similar to what happened in T322039, but this time I don't see any errors in the logs. It started on Dec 25 at 18:00 UTC.
This replica instance has had other issues in the past and is probably not 100% consistent. We're currently planning a migration of both the primary and the replica in T301949, but in the meantime it would still be nice to get replication working again.
SHOW SLAVE STATUS shows both threads running with no error, but the SQL thread is stuck at relay log position 8722460 and Seconds_Behind_Master is growing steadily.
```
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
                Slave_IO_State: Waiting for master to send event
                   Master_Host: clouddb1001.clouddb-services.eqiad1.wikimedia.cloud
                   Master_User: repl
                   Master_Port: 3306
                 Connect_Retry: 60
               Master_Log_File: log.308899
           Read_Master_Log_Pos: 66943708
                Relay_Log_File: clouddb1002-relay-bin.005765
                 Relay_Log_Pos: 8722460
         Relay_Master_Log_File: log.307637
              Slave_IO_Running: Yes
             Slave_SQL_Running: Yes
               Replicate_Do_DB:
           Replicate_Ignore_DB:
            Replicate_Do_Table:
        Replicate_Ignore_Table:
       Replicate_Wild_Do_Table:
   Replicate_Wild_Ignore_Table: s51412\_\_data.%,s51071\_\_templatetiger\_p.%,s52721\_\_pagecount\_stats\_p.%,s51290\_\_dpl\_p.%,s54518\_\_mw.%
                    Last_Errno: 0
                    Last_Error:
                  Skip_Counter: 0
           Exec_Master_Log_Pos: 8722178
               Relay_Log_Space: 143508734485
               Until_Condition: None
                Until_Log_File:
                 Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File:
            Master_SSL_CA_Path:
               Master_SSL_Cert:
             Master_SSL_Cipher:
                Master_SSL_Key:
         Seconds_Behind_Master: 864619
 Master_SSL_Verify_Server_Cert: No
                 Last_IO_Errno: 0
                 Last_IO_Error:
                Last_SQL_Errno: 0
                Last_SQL_Error:
   Replicate_Ignore_Server_Ids:
              Master_Server_Id: 2886731673
                Master_SSL_Crl:
            Master_SSL_Crlpath:
                    Using_Gtid: Slave_Pos
                   Gtid_IO_Pos: 0-2886731673-33522724637,2886731673-2886731673-3277343457
       Replicate_Do_Domain_Ids:
   Replicate_Ignore_Domain_Ids:
                 Parallel_Mode: conservative
1 row in set (0.00 sec)
```
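For reference, a minimal diagnostic sketch (not something I've run yet, just the obvious next steps) to confirm whether the SQL thread is making any progress, given that Using_Gtid is Slave_Pos here:

```
-- Hedged sketch: verify the executed GTID position is actually frozen.
-- Run this twice, a minute or so apart; if the value doesn't change while
-- Seconds_Behind_Master keeps climbing, the SQL thread is genuinely stalled.
SELECT @@gtid_slave_pos;

-- Inspect what the replication threads ("system user") are doing right now.
SHOW PROCESSLIST;

-- If both threads report Running: Yes with no error but the position never
-- advances, a common first step is restarting the slave threads and re-checking:
STOP SLAVE;
START SLAVE;
SHOW SLAVE STATUS\G
```

If the position still doesn't advance after a restart, that would point at a long-running or blocked transaction on the SQL thread rather than a connectivity problem, since Slave_IO_State shows the IO thread is healthy.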