Paste P79194

Masterwork From Distant Lands

Authored by ProdPasteBot on Jul 16 2025, 8:26 AM.
-- Journal begins at Fri 2025-06-13 00:57:02 UTC, ends at Wed 2025-07-16 08:23:17 UTC. --
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task md2_raid1:668 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:md2_raid1 state:D stack: 0 pid: 668 ppid: 2 flags:0x00004000
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: md_super_wait+0x72/0xa0 [md_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: write_page+0x270/0x390 [md_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: md_update_sb.part.0+0x313/0x880 [md_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? md_bitmap_daemon_work+0x271/0x3a0 [md_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: md_check_recovery+0x4d4/0x590 [md_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: raid1d+0x4a/0x1680 [raid1]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __switch_to_asm+0x3a/0x60
Jul 11 14:12:32 cloudcephosd1013 kernel: ? lock_timer_base+0x61/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: ? timer_delete_sync+0x67/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? prepare_to_wait_event+0x76/0x160
Jul 11 14:12:32 cloudcephosd1013 kernel: md_thread+0xa8/0x160 [md_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: ? md_write_inc+0x50/0x50 [md_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: kthread+0x118/0x140
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __kthread_bind_mask+0x60/0x60
Jul 11 14:12:32 cloudcephosd1013 kernel: ret_from_fork+0x1f/0x30
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task tp_osd_tp:3811 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:tp_osd_tp state:D stack: 0 pid: 3811 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_get_tag+0x11d/0x280
Jul 11 14:12:32 cloudcephosd1013 kernel: ? linear_map+0x50/0xa0 [dm_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: __blk_mq_alloc_request+0x79/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_submit_bio+0x13d/0x530
Jul 11 14:12:32 cloudcephosd1013 kernel: submit_bio_noacct+0x2f2/0x420
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_direct_IO+0x3de/0x4a0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? aio_fsync_work+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: generic_file_direct_write+0x98/0x1c0
Jul 11 14:12:32 cloudcephosd1013 kernel: __generic_file_write_iter+0xb7/0x1d0
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_write_iter+0xab/0x150
Jul 11 14:12:32 cloudcephosd1013 kernel: ? apparmor_file_permission+0x69/0x160
Jul 11 14:12:32 cloudcephosd1013 kernel: aio_write+0xf4/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? update_load_avg+0x7a/0x5d0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? task_numa_fault+0x2a3/0xb70
Jul 11 14:12:32 cloudcephosd1013 kernel: ? io_submit_one+0x6d/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? kmem_cache_alloc+0xed/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_submit_one+0x195/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __seccomp_filter+0x7c/0x6b0
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_io_submit+0x82/0x180
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f53631e8fd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f53444e55b8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00007f53444e6ba0 RCX: 00007f53631e8fd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 00007f53444e55f0 RSI: 0000000000000005 RDI: 00007f535efa8000
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 00007f535efa8000 R08: 00007f53444e56ac R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 000055788a3105a8 R11: 0000000000000246 R12: 0000000000000005
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000000 R14: 00007f53444e55f0 R15: 00005577fba4b000
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task bstore_kv_sync:3621 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:bstore_kv_sync state:D stack: 0 pid: 3621 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: wait_on_page_bit_common+0x116/0x3b0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? trace_event_raw_event_file_check_and_advance_wb_err+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: wait_on_page_writeback+0x25/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: __filemap_fdatawait_range+0x81/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: file_fdatawait_range+0x15/0x20
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_sync_file_range+0x3f/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f3d755da288
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f3d640e6a50 EFLAGS: 00000293 ORIG_RAX: 0000000000000115
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 000055dee5b500f0 RCX: 00007f3d755da288
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 0000000000002000 RSI: 000000d9241b2000 RDI: 000000000000002e
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 0000000000000007 R11: 0000000000000293 R12: 0000000000000001
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000001 R14: 000000d9241b2000 R15: 000055dde25ecc00
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task tp_osd_tp:3982 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:tp_osd_tp state:D stack: 0 pid: 3982 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_get_tag+0x11d/0x280
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: __blk_mq_alloc_request+0x79/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_submit_bio+0x13d/0x530
Jul 11 14:12:32 cloudcephosd1013 kernel: submit_bio_noacct+0x2f2/0x420
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_direct_IO+0x3de/0x4a0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? aio_fsync_work+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: generic_file_direct_write+0x98/0x1c0
Jul 11 14:12:32 cloudcephosd1013 kernel: __generic_file_write_iter+0xb7/0x1d0
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_write_iter+0xab/0x150
Jul 11 14:12:32 cloudcephosd1013 kernel: ? apparmor_file_permission+0x69/0x160
Jul 11 14:12:32 cloudcephosd1013 kernel: aio_write+0xf4/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __alloc_pages_nodemask+0x161/0x310
Jul 11 14:12:32 cloudcephosd1013 kernel: ? io_submit_one+0x6d/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? kmem_cache_alloc+0xed/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_submit_one+0x195/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __seccomp_filter+0x7c/0x6b0
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_io_submit+0x82/0x180
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f3d558d9568 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00007f3d558daba0 RCX: 00007f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 00007f3d558d95a0 RSI: 0000000000000010 RDI: 00007f3d7139e000
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 00007f3d7139e000 R08: 00007f3d558d96ac R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 000055df40c2be28 R11: 0000000000000246 R12: 0000000000000010
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000000 R14: 00007f3d558d95a0 R15: 000055dde253b040
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task tp_osd_tp:3984 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:tp_osd_tp state:D stack: 0 pid: 3984 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_get_tag+0x11d/0x280
Jul 11 14:12:32 cloudcephosd1013 kernel: ? linear_map+0x50/0xa0 [dm_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: __blk_mq_alloc_request+0x79/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_submit_bio+0x13d/0x530
Jul 11 14:12:32 cloudcephosd1013 kernel: submit_bio_noacct+0x2f2/0x420
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_direct_IO+0x3de/0x4a0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? aio_fsync_work+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: generic_file_direct_write+0x98/0x1c0
Jul 11 14:12:32 cloudcephosd1013 kernel: __generic_file_write_iter+0xb7/0x1d0
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_write_iter+0xab/0x150
Jul 11 14:12:32 cloudcephosd1013 kernel: aio_write+0xf4/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? io_submit_one+0x6d/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? kmem_cache_alloc+0xed/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_submit_one+0x195/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __seccomp_filter+0x7c/0x6b0
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_io_submit+0x82/0x180
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f3d548d75d8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00007f3d548d8ba0 RCX: 00007f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 00007f3d548d7610 RSI: 0000000000000001 RDI: 00007f3d7139e000
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 00007f3d7139e000 R08: 00007f3d548d76ac R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 000055de565cb3a8 R11: 0000000000000246 R12: 0000000000000001
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000000 R14: 00007f3d548d7610 R15: 000055dde253b040
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task tp_osd_tp:3987 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:tp_osd_tp state:D stack: 0 pid: 3987 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_get_tag+0x11d/0x280
Jul 11 14:12:32 cloudcephosd1013 kernel: ? linear_map+0x50/0xa0 [dm_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: __blk_mq_alloc_request+0x79/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_submit_bio+0x13d/0x530
Jul 11 14:12:32 cloudcephosd1013 kernel: submit_bio_noacct+0x2f2/0x420
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_direct_IO+0x3de/0x4a0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? aio_fsync_work+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: generic_file_direct_write+0x98/0x1c0
Jul 11 14:12:32 cloudcephosd1013 kernel: __generic_file_write_iter+0xb7/0x1d0
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_write_iter+0xab/0x150
Jul 11 14:12:32 cloudcephosd1013 kernel: aio_write+0xf4/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? io_submit_one+0x6d/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? kmem_cache_alloc+0xed/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_submit_one+0x195/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __seccomp_filter+0x7c/0x6b0
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_io_submit+0x82/0x180
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f3d530d45d8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00007f3d530d5ba0 RCX: 00007f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 00007f3d530d4610 RSI: 0000000000000001 RDI: 00007f3d7139e000
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 00007f3d7139e000 R08: 00007f3d530d46ac R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 000055de0fb25e28 R11: 0000000000000246 R12: 0000000000000001
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000000 R14: 00007f3d530d4610 R15: 000055dde253b040
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task tp_osd_tp:3988 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:tp_osd_tp state:D stack: 0 pid: 3988 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_get_tag+0x11d/0x280
Jul 11 14:12:32 cloudcephosd1013 kernel: ? linear_map+0x50/0xa0 [dm_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: __blk_mq_alloc_request+0x79/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_submit_bio+0x13d/0x530
Jul 11 14:12:32 cloudcephosd1013 kernel: submit_bio_noacct+0x2f2/0x420
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_direct_IO+0x3de/0x4a0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? aio_fsync_work+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: generic_file_direct_write+0x98/0x1c0
Jul 11 14:12:32 cloudcephosd1013 kernel: __generic_file_write_iter+0xb7/0x1d0
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_write_iter+0xab/0x150
Jul 11 14:12:32 cloudcephosd1013 kernel: aio_write+0xf4/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? file_update_time+0xfa/0x140
Jul 11 14:12:32 cloudcephosd1013 kernel: ? task_numa_fault+0x2a3/0xb70
Jul 11 14:12:32 cloudcephosd1013 kernel: ? io_submit_one+0x6d/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? kmem_cache_alloc+0xed/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_submit_one+0x195/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __seccomp_filter+0x7c/0x6b0
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_io_submit+0x82/0x180
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f3d528d2ef8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00007f3d528d4ba0 RCX: 00007f3d755defd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 00007f3d528d2f30 RSI: 0000000000000002 RDI: 00007f3d7139e000
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 00007f3d7139e000 R08: 00007f3d528d2fcc R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 000055df17d9be28 R11: 0000000000000246 R12: 0000000000000002
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000000 R14: 00007f3d528d2f30 R15: 000055dde253b040
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task bstore_kv_sync:3328 blocked for more than 120 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:bstore_kv_sync state:D stack: 0 pid: 3328 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: wait_on_page_bit_common+0x116/0x3b0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? trace_event_raw_event_file_check_and_advance_wb_err+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: wait_on_page_writeback+0x25/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: __filemap_fdatawait_range+0x81/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __filemap_fdatawrite_range+0xd8/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: file_fdatawait_range+0x15/0x20
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_sync_file_range+0x3f/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f1b5aeb4288
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f1b499c0a50 EFLAGS: 00000293 ORIG_RAX: 0000000000000115
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00005646b50b8460 RCX: 00007f1b5aeb4288
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 0000000000001000 RSI: 000000e17808c000 RDI: 000000000000002e
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 0000000000000007 R11: 0000000000000293 R12: 0000000000000001
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000001 R14: 000000e17808c000 R15: 0000564622e9cc00
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task tp_osd_tp:3589 blocked for more than 121 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:tp_osd_tp state:D stack: 0 pid: 3589 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_get_tag+0x11d/0x280
Jul 11 14:12:32 cloudcephosd1013 kernel: ? linear_map+0x50/0xa0 [dm_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: __blk_mq_alloc_request+0x79/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_submit_bio+0x13d/0x530
Jul 11 14:12:32 cloudcephosd1013 kernel: submit_bio_noacct+0x2f2/0x420
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_direct_IO+0x3de/0x4a0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? aio_fsync_work+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: generic_file_direct_write+0x98/0x1c0
Jul 11 14:12:32 cloudcephosd1013 kernel: __generic_file_write_iter+0xb7/0x1d0
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_write_iter+0xab/0x150
Jul 11 14:12:32 cloudcephosd1013 kernel: aio_write+0xf4/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? update_load_avg+0x7a/0x5d0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? io_submit_one+0x6d/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? kmem_cache_alloc+0xed/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_submit_one+0x195/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __seccomp_filter+0x7c/0x6b0
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_io_submit+0x82/0x180
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f1b5aeb8fd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f1b3e9b9ef8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00007f1b3e9bbba0 RCX: 00007f1b5aeb8fd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 00007f1b3e9b9f30 RSI: 0000000000000001 RDI: 00007f1b56c78000
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 00007f1b56c78000 R08: 00007f1b3e9b9fcc R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 00005646700ac228 R11: 0000000000000246 R12: 0000000000000001
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000000 R14: 00007f1b3e9b9f30 R15: 0000564622dfb000
Jul 11 14:12:32 cloudcephosd1013 kernel: INFO: task tp_osd_tp:3591 blocked for more than 121 seconds.
Jul 11 14:12:32 cloudcephosd1013 kernel: Not tainted 5.10.0-35-amd64 #1 Debian 5.10.237-1
Jul 11 14:12:32 cloudcephosd1013 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 11 14:12:32 cloudcephosd1013 kernel: task:tp_osd_tp state:D stack: 0 pid: 3591 ppid: 1 flags:0x00000320
Jul 11 14:12:32 cloudcephosd1013 kernel: Call Trace:
Jul 11 14:12:32 cloudcephosd1013 kernel: __schedule+0x282/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: schedule+0x46/0xb0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_schedule+0x42/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_get_tag+0x11d/0x280
Jul 11 14:12:32 cloudcephosd1013 kernel: ? linear_map+0x50/0xa0 [dm_mod]
Jul 11 14:12:32 cloudcephosd1013 kernel: ? add_wait_queue_exclusive+0x70/0x70
Jul 11 14:12:32 cloudcephosd1013 kernel: __blk_mq_alloc_request+0x79/0x110
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_mq_submit_bio+0x13d/0x530
Jul 11 14:12:32 cloudcephosd1013 kernel: submit_bio_noacct+0x2f2/0x420
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_direct_IO+0x3de/0x4a0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? aio_fsync_work+0xf0/0xf0
Jul 11 14:12:32 cloudcephosd1013 kernel: generic_file_direct_write+0x98/0x1c0
Jul 11 14:12:32 cloudcephosd1013 kernel: __generic_file_write_iter+0xb7/0x1d0
Jul 11 14:12:32 cloudcephosd1013 kernel: blkdev_write_iter+0xab/0x150
Jul 11 14:12:32 cloudcephosd1013 kernel: aio_write+0xf4/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: ? io_submit_one+0x6d/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? kmem_cache_alloc+0xed/0x1f0
Jul 11 14:12:32 cloudcephosd1013 kernel: io_submit_one+0x195/0x870
Jul 11 14:12:32 cloudcephosd1013 kernel: ? __seccomp_filter+0x7c/0x6b0
Jul 11 14:12:32 cloudcephosd1013 kernel: __x64_sys_io_submit+0x82/0x180
Jul 11 14:12:32 cloudcephosd1013 kernel: do_syscall_64+0x30/0x80
Jul 11 14:12:32 cloudcephosd1013 kernel: entry_SYSCALL_64_after_hwframe+0x67/0xd1
Jul 11 14:12:32 cloudcephosd1013 kernel: RIP: 0033:0x7f1b5aeb8fd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RSP: 002b:00007f1b3d9b85d8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Jul 11 14:12:32 cloudcephosd1013 kernel: RAX: ffffffffffffffda RBX: 00007f1b3d9b9ba0 RCX: 00007f1b5aeb8fd9
Jul 11 14:12:32 cloudcephosd1013 kernel: RDX: 00007f1b3d9b8610 RSI: 0000000000000002 RDI: 00007f1b56c78000
Jul 11 14:12:32 cloudcephosd1013 kernel: RBP: 00007f1b56c78000 R08: 00007f1b3d9b86ac R09: 0000000000000000
Jul 11 14:12:32 cloudcephosd1013 kernel: R10: 00005646bdf1c5a8 R11: 0000000000000246 R12: 0000000000000002
Jul 11 14:12:32 cloudcephosd1013 kernel: R13: 0000000000000000 R14: 00007f1b3d9b8610 R15: 0000564622dfb000
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#803 CDB: Write(10) 2a 00 19 d8 35 70 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#794 CDB: Read(10) 28 00 14 02 fb f0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#478 CDB: Write(10) 2a 00 19 d9 15 88 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#476 CDB: Write(10) 2a 00 19 d8 87 d0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#475 CDB: Write(10) 2a 00 19 d8 36 a0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#382 CDB: Write(10) 2a 00 19 d7 dd f0 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#381 CDB: Write(10) 2a 00 19 d7 d6 b8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#380 CDB: Write(10) 2a 00 19 d7 d5 68 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#379 CDB: Write(10) 2a 00 19 d7 d4 a0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#376 CDB: Write(10) 2a 00 19 71 fa 50 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#375 CDB: Write(10) 2a 00 19 6c 60 18 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#338 CDB: Write(10) 2a 00 19 d9 23 d8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#336 CDB: Write(10) 2a 00 19 d9 23 c8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#335 CDB: Write(10) 2a 00 19 d9 23 b0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#327 CDB: Write(10) 2a 00 19 d8 35 68 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#326 CDB: Write(10) 2a 00 19 d8 2a 00 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#325 CDB: Write(10) 2a 00 19 d8 29 f0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#323 CDB: Write(10) 2a 00 19 d8 27 60 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#149 CDB: Write(10) 2a 00 19 d9 23 98 00 00 18 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#147 CDB: Write(10) 2a 00 19 d9 16 50 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#138 CDB: Write(10) 2a 00 19 d7 c3 78 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#137 CDB: Write(10) 2a 00 19 d7 bd 60 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#136 CDB: Write(10) 2a 00 19 89 1f 38 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#135 CDB: Write(10) 2a 00 19 7c 3a 00 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: [sdj] tag#134 CDB: Write(10) 2a 00 19 75 1c 98 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#818 CDB: Write(10) 2a 00 38 7f b8 48 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#783 CDB: Write(10) 2a 00 38 8c 81 f0 00 00 30 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#782 CDB: Write(10) 2a 00 38 8c 81 e0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#781 CDB: Write(10) 2a 00 38 8c 81 d8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#703 CDB: Write(10) 2a 00 38 8c 81 d0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#622 CDB: Write(10) 2a 00 38 81 f2 88 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#762 CDB: Write(10) 2a 00 46 e8 41 b0 00 00 28 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#754 CDB: Read(10) 28 00 70 65 61 90 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#621 CDB: Write(10) 2a 00 38 7f d1 b8 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#758 CDB: Write(10) 2a 00 12 7b b5 60 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#761 CDB: Write(10) 2a 00 46 e8 41 a8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#704 CDB: Write(10) 2a 00 14 82 2c e8 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#620 CDB: Write(10) 2a 00 38 7f b8 78 00 00 28 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#757 CDB: Write(10) 2a 00 12 7b b4 98 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#759 CDB: Write(10) 2a 00 46 e8 41 a0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#702 CDB: Write(10) 2a 00 14 82 2d 10 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#515 CDB: Write(10) 2a 00 38 7f b8 68 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#756 CDB: Write(10) 2a 00 12 7b b4 68 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#753 CDB: Write(10) 2a 00 79 38 cc 78 00 00 68 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#651 CDB: Read(10) 28 00 49 56 e5 78 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#500 CDB: Write(10) 2a 00 6d a5 3e 20 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#650 CDB: Write(10) 2a 00 12 7b b4 30 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#752 CDB: Write(10) 2a 00 70 bc 0c 60 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#641 CDB: Write(10) 2a 00 16 71 e5 68 00 00 30 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#310 CDB: Write(10) 2a 00 b0 56 11 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#649 CDB: Write(10) 2a 00 12 7b 78 60 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#640 CDB: Write(10) 2a 00 14 82 2d 18 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#644 CDB: Write(10) 2a 00 79 38 cc 60 00 00 18 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#309 CDB: Write(10) 2a 00 b0 56 0f 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#648 CDB: Write(10) 2a 00 12 7b 78 50 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#535 CDB: Write(10) 2a 00 14 82 2d 08 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#643 CDB: Write(10) 2a 00 79 38 cb e0 00 00 80 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#308 CDB: Write(10) 2a 00 b0 56 0d 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#647 CDB: Write(10) 2a 00 12 7b 78 30 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#256 CDB: Write(10) 2a 00 16 71 f1 b0 00 00 50 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#642 CDB: Read(10) 28 00 02 2b 51 30 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#307 CDB: Write(10) 2a 00 b0 56 0b 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#556 CDB: Read(10) 28 00 73 07 83 50 00 01 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#254 CDB: Write(10) 2a 00 16 71 e5 f8 00 00 18 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#550 CDB: Write(10) 2a 00 46 e8 41 98 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#306 CDB: Write(10) 2a 00 b0 56 09 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#502 CDB: Write(10) 2a 00 6b b4 dd 58 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#251 CDB: Write(10) 2a 00 16 71 e5 f0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#549 CDB: Write(10) 2a 00 46 e8 41 90 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#305 CDB: Write(10) 2a 00 b0 56 07 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#284 CDB: Read(10) 28 00 06 71 7e c0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#250 CDB: Write(10) 2a 00 16 71 e5 c8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#547 CDB: Write(10) 2a 00 46 e8 41 88 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#304 CDB: Write(10) 2a 00 b0 56 05 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#574 CDB: Write(10) 2a 00 0b 16 3d c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#639 CDB: Write(10) 2a 00 78 82 21 a0 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#283 CDB: Read(10) 28 00 06 71 7d 88 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#249 CDB: Write(10) 2a 00 16 71 e5 c0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#546 CDB: Write(10) 2a 00 46 e8 41 80 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#303 CDB: Write(10) 2a 00 b0 56 03 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#573 CDB: Write(10) 2a 00 0b 16 3b c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#609 CDB: Write(10) 2a 00 21 0e 55 00 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#282 CDB: Read(10) 28 00 05 e1 d8 18 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#248 CDB: Write(10) 2a 00 16 71 e5 b8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#544 CDB: Write(10) 2a 00 46 e8 41 78 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#302 CDB: Write(10) 2a 00 b0 56 01 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#572 CDB: Write(10) 2a 00 0b 16 39 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#608 CDB: Write(10) 2a 00 21 0e 54 f8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#281 CDB: Read(10) 28 00 04 02 bd 48 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#247 CDB: Write(10) 2a 00 16 71 e5 98 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#543 CDB: Write(10) 2a 00 46 e8 41 70 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#301 CDB: Write(10) 2a 00 b0 55 ff 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#571 CDB: Write(10) 2a 00 0b 16 37 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#607 CDB: Write(10) 2a 00 21 0e 54 b0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#260 CDB: Write(10) 2a 00 12 7b b4 60 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#75 CDB: Write(10) 2a 00 16 71 e5 e0 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#542 CDB: Write(10) 2a 00 46 e8 41 48 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#300 CDB: Write(10) 2a 00 b0 55 fd 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#570 CDB: Write(10) 2a 00 0b 16 35 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#606 CDB: Write(10) 2a 00 21 0d e4 a8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#259 CDB: Write(10) 2a 00 12 7b b4 58 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#74 CDB: Write(10) 2a 00 16 71 e5 d8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#536 CDB: Write(10) 2a 00 46 e8 41 40 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#299 CDB: Write(10) 2a 00 b0 55 fb 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#569 CDB: Write(10) 2a 00 0b 16 33 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#605 CDB: Write(10) 2a 00 21 0d c7 b0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#258 CDB: Write(10) 2a 00 12 7b b4 40 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#73 CDB: Write(10) 2a 00 16 71 e5 d0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#275 CDB: Write(10) 2a 00 45 ed cf 78 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#298 CDB: Write(10) 2a 00 b0 55 f9 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#568 CDB: Write(10) 2a 00 0b 16 31 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#604 CDB: Write(10) 2a 00 21 0d 9d 20 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#77 CDB: Write(10) 2a 00 12 7b b5 70 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#252 CDB: Read(10) 28 00 09 e2 a1 f0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#297 CDB: Write(10) 2a 00 b0 55 f7 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#567 CDB: Write(10) 2a 00 0b 16 2f c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#603 CDB: Write(10) 2a 00 21 0d 9a b0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#76 CDB: Write(10) 2a 00 12 7b b4 38 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#114 CDB: Write(10) 2a 00 79 38 cb 80 00 00 18 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#296 CDB: Write(10) 2a 00 b0 55 f5 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#566 CDB: Write(10) 2a 00 0b 16 2d c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#602 CDB: Write(10) 2a 00 21 0d 87 d8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#13 CDB: Write(10) 2a 00 12 79 56 c8 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#113 CDB: Write(10) 2a 00 79 38 cb 60 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#295 CDB: Write(10) 2a 00 b0 55 f3 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#565 CDB: Write(10) 2a 00 0b 16 2b c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#601 CDB: Write(10) 2a 00 21 0d 29 f0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#15 CDB: Write(10) 2a 00 79 38 cb c0 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#294 CDB: Write(10) 2a 00 b0 55 f1 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#564 CDB: Write(10) 2a 00 0b 16 29 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#600 CDB: Write(10) 2a 00 21 0c ee a8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#14 CDB: Write(10) 2a 00 79 38 cb 98 00 00 28 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#293 CDB: Write(10) 2a 00 b0 55 ef 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#563 CDB: Write(10) 2a 00 0b 16 27 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#598 CDB: Write(10) 2a 00 21 0c e3 f8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#292 CDB: Write(10) 2a 00 b0 55 ed 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#562 CDB: Write(10) 2a 00 0b 16 25 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#593 CDB: Write(10) 2a 00 21 0c 6f b8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#291 CDB: Write(10) 2a 00 b0 55 eb 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#561 CDB: Write(10) 2a 00 0b 16 23 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#580 CDB: Write(10) 2a 00 78 82 29 a0 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#290 CDB: Write(10) 2a 00 b0 55 e9 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#560 CDB: Write(10) 2a 00 0b 16 21 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#579 CDB: Write(10) 2a 00 78 82 27 a0 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#289 CDB: Write(10) 2a 00 b0 55 e7 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#559 CDB: Write(10) 2a 00 0b 16 1f c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#578 CDB: Write(10) 2a 00 78 82 25 a0 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#288 CDB: Write(10) 2a 00 b0 55 e5 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#558 CDB: Write(10) 2a 00 0b 16 1d c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#576 CDB: Write(10) 2a 00 78 82 23 a0 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#287 CDB: Write(10) 2a 00 b0 55 e3 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#557 CDB: Write(10) 2a 00 0b 16 1b c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#516 CDB: Write(10) 2a 00 26 e3 74 68 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#286 CDB: Write(10) 2a 00 b0 55 e1 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#511 CDB: Write(10) 2a 00 0b 16 19 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#501 CDB: Read(10) 28 00 6b 79 3c 60 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#285 CDB: Write(10) 2a 00 b0 55 df 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#510 CDB: Write(10) 2a 00 0b 16 17 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#245 CDB: Write(10) 2a 00 21 0e 5f 90 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#280 CDB: Write(10) 2a 00 78 3f 66 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#509 CDB: Write(10) 2a 00 0b 16 15 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#244 CDB: Write(10) 2a 00 21 0e 5f 88 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#279 CDB: Write(10) 2a 00 78 3f 64 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#508 CDB: Write(10) 2a 00 0b 16 13 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#243 CDB: Write(10) 2a 00 21 0e 54 e8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#278 CDB: Write(10) 2a 00 78 3f 62 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#507 CDB: Write(10) 2a 00 0b 16 11 c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#242 CDB: Write(10) 2a 00 21 0c 5c 30 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#277 CDB: Write(10) 2a 00 78 3f 60 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#506 CDB: Write(10) 2a 00 0b 16 0f c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#241 CDB: Write(10) 2a 00 21 0c 55 e8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:9:0: [sdi] tag#276 CDB: Write(10) 2a 00 78 3f 5e 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#505 CDB: Write(10) 2a 00 0b 16 0d c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#227 CDB: Write(10) 2a 00 20 fa e4 d8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#504 CDB: Write(10) 2a 00 0b 16 0b c8 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#71 CDB: Write(10) 2a 00 21 0e 23 a0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#503 CDB: Write(10) 2a 00 0b 16 0a 38 00 01 90 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#70 CDB: Write(10) 2a 00 21 0d e4 b0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#253 CDB: Write(10) 2a 00 11 a5 15 d8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#69 CDB: Write(10) 2a 00 21 0d a9 88 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#226 CDB: Write(10) 2a 00 02 a7 a4 88 00 01 f0 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#68 CDB: Write(10) 2a 00 21 0d a9 78 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#225 CDB: Write(10) 2a 00 02 a7 a2 88 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#67 CDB: Write(10) 2a 00 21 0d 9d 30 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#224 CDB: Write(10) 2a 00 02 a7 a0 88 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#66 CDB: Write(10) 2a 00 21 0d 9d 28 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:7:0: [sde] tag#209 CDB: Write(10) 2a 00 02 a7 9e 88 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:4:0: [sdg] tag#65 CDB: Write(10) 2a 00 21 0c b1 98 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#767 CDB: Write(10) 2a 00 46 e8 42 48 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#766 CDB: Write(10) 2a 00 46 e8 42 38 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#765 CDB: Write(10) 2a 00 46 e8 42 28 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#764 CDB: Write(10) 2a 00 46 e8 42 08 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#763 CDB: Write(10) 2a 00 46 e8 41 f0 00 00 18 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#82 CDB: Write(10) 2a 00 46 e8 42 40 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#81 CDB: Write(10) 2a 00 46 e8 42 30 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#80 CDB: Write(10) 2a 00 46 e8 42 18 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#79 CDB: Write(10) 2a 00 46 e8 41 e8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:5:0: [sdh] tag#78 CDB: Write(10) 2a 00 46 e8 41 d8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#786 CDB: Write(10) 2a 00 38 8c 82 20 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#787 CDB: Write(10) 2a 00 75 e4 61 20 00 00 70 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#788 CDB: Write(10) 2a 00 12 7b b5 80 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#817 CDB: Write(10) 2a 00 38 90 1a 50 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#823 CDB: Write(10) 2a 00 6c 92 15 90 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#852 CDB: Write(10) 2a 00 38 90 1a 58 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#854 CDB: Write(10) 2a 00 38 90 1a 68 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#860 CDB: Write(10) 2a 00 38 93 33 48 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#859 CDB: Write(10) 2a 00 38 91 5c 98 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#858 CDB: Write(10) 2a 00 38 90 1a 88 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#857 CDB: Write(10) 2a 00 38 90 1a 80 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#856 CDB: Write(10) 2a 00 38 90 1a 78 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#855 CDB: Write(10) 2a 00 38 90 1a 70 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#867 CDB: Write(10) 2a 00 38 aa c9 38 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#866 CDB: Write(10) 2a 00 38 aa c9 28 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#865 CDB: Write(10) 2a 00 38 aa bd b8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#864 CDB: Write(10) 2a 00 38 aa bd 98 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#863 CDB: Write(10) 2a 00 38 aa bc 68 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#862 CDB: Write(10) 2a 00 38 96 43 08 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#873 CDB: Write(10) 2a 00 16 71 f2 00 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#861 CDB: Read(10) 28 00 8d af e7 28 00 00 20 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#878 CDB: Write(10) 2a 00 38 b0 59 c0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#877 CDB: Write(10) 2a 00 38 ae 72 78 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#876 CDB: Write(10) 2a 00 38 ad ff 00 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#875 CDB: Write(10) 2a 00 38 ad bf b8 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:3:0: [sdd] tag#874 CDB: Write(10) 2a 00 38 ac 02 70 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#87 CDB: Write(10) 2a 00 16 71 f2 08 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#890 CDB: Write(10) 2a 00 74 c4 06 a0 00 01 48 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#29 CDB: Write(10) 2a 00 14 ef 2e 08 00 00 40 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#28 CDB: Write(10) 2a 00 14 ef 2d 88 00 00 80 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#43 CDB: Write(10) 2a 00 12 7b b5 90 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#42 CDB: Write(10) 2a 00 74 c4 14 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#41 CDB: Write(10) 2a 00 74 c4 12 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#40 CDB: Write(10) 2a 00 74 c4 10 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#39 CDB: Write(10) 2a 00 74 c4 0e 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#38 CDB: Write(10) 2a 00 74 c4 0c 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#37 CDB: Write(10) 2a 00 74 c4 0a 80 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:6:0: [sdf] tag#36 CDB: Write(10) 2a 00 74 c4 08 88 00 01 f8 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#90 CDB: Write(10) 2a 00 11 08 90 50 00 00 50 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#89 CDB: Write(10) 2a 00 11 08 8e 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#88 CDB: Write(10) 2a 00 11 08 8c 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#12 CDB: Write(10) 2a 00 11 08 8a 50 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#11 CDB: Write(10) 2a 00 11 08 88 b0 00 01 a0 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#127 CDB: Write(10) 2a 00 04 ef 3e d0 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#94 CDB: Write(10) 2a 00 04 ef 3c d0 00 02 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:2:0: [sdc] tag#93 CDB: Write(10) 2a 00 04 ef 3b 20 00 01 b0 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:0:0: [sda] tag#45 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:1:0: [sdb] tag#44 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:1:0: [sdb] tag#44 OCR is requested due to IO timeout!!
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:1:0: [sdb] tag#44 SCSI host state: 5 SCSI host busy: 246 FW outstanding: 245
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:1:0: [sdb] tag#44 scmd: (0x0000000095a9ed26) retries: 0x0 allowed: 0x5
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:1:0: [sdb] tag#44 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:1:0: [sdb] tag#44 Request descriptor details:
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:1:0: [sdb] tag#44 RequestFlags:0xc MSIxIndex:0x2e SMID:0x2d LMID:0x0 DevHandle:0xb
Jul 11 14:12:32 cloudcephosd1013 kernel: IO request frame:
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000000: 0000000b 00000000 00000000 4fd61080 00600002 00000020 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000020: 00000000 0000400a 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000040: 00000035 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000060: 00b50000 00010000 00000000 00000000 00000000 00000000 00000010 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000080: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000000a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000000c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000000e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: Chain frame:
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000000: 610df000 00000005 00001000 00000000 b6b67000 00000006 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000020: 9913c000 00000007 00001000 00000000 7fba3000 00000006 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000040: 25a8f000 00000008 00001000 00000000 26cc1000 00000007 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000060: ff979000 00000006 00001000 00000000 43c26000 00000007 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000080: 486ef000 00000003 00001000 00000000 cac9c000 00000005 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000000a0: 28e2c000 00000007 00001000 00000000 8531b000 00000005 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000000c0: 85318000 00000005 00001000 00000000 e0725000 00000004 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000000e0: 2474c000 00000008 00001000 00000000 75860000 00000007 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000100: 0d3f8000 00000008 00001000 00000000 0b1dc000 00000003 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000120: 0b1e5000 00000003 00001000 00000000 0e16f000 00000004 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000140: 0e0d6000 00000004 00001000 00000000 24488000 00000008 00001000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000160: 3c3cc000 00000007 00001000 40000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000180: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000001a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000001c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000001e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000200: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000220: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000240: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000260: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000280: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000002a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000002c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000002e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000300: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000320: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000340: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000360: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 00000380: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000003a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000003c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: 000003e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [ 0]waiting for 245 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [ 5]waiting for 245 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [10]waiting for 245 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [15]waiting for 245 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [20]waiting for 245 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Starting Update NIC firmware stats exported by node_exporter...
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [25]waiting for 244 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [30]waiting for 244 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [35]waiting for 244 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [40]waiting for 244 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [45]waiting for 243 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [50]waiting for 243 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [55]waiting for 243 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: State 'stop-watchdog' timed out. Killing.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Killing process 803 (systemd-journal) with signal SIGKILL.
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [60]waiting for 243 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [65]waiting for 242 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [70]waiting for 242 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [75]waiting for 242 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [80]waiting for 242 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [85]waiting for 241 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [90]waiting for 241 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [95]waiting for 241 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [100]waiting for 241 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [105]waiting for 241 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [110]waiting for 240 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [115]waiting for 240 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [120]waiting for 240 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [125]waiting for 240 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [130]waiting for 239 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [135]waiting for 239 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [140]waiting for 239 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [145]waiting for 239 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Processes still around after SIGKILL. Ignoring.
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [150]waiting for 238 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [155]waiting for 238 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [160]waiting for 238 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [165]waiting for 238 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [170]waiting for 237 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: [175]waiting for 237 commands to complete for scsi0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: pending commands remain after waiting, will reset adapter scsi0.
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: resetting fusion adapter scsi0.
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Outstanding fastpath IOs: 237
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Waiting for FW to come to ready state
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Starting Update dpkg status exported by node_exporter...
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: State 'final-sigterm' timed out. Killing.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Killing process 803 (systemd-journal) with signal SIGKILL.
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: FW now in Ready state
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: FW now in Ready state
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Current firmware supports maximum commands: 928 LDIO threshold: 0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Performance mode :Latency
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: FW supports sync cache : No
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: FW provided supportMaxExtLDs: 1 max_lds: 64
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: controller type : MR(2048MB)
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Online Controller Reset(OCR) : Enabled
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Secure JBOD support : No
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: NVMe passthru support : No
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: FW provided TM TaskAbort/Reset timeout : 0 secs/0 secs
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: JBOD sequence map support : No
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: PCI Lane Margining support : No
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: JBOD sequence map is disabled megasas_setup_jbod_map 5746
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: megasas_enable_intr_fusion is called outbound_intr_mask:0x40000000
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Adapter is OPERATIONAL for scsi:0
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: Reset successful for scsi0.
Jul 11 14:12:32 cloudcephosd1013 kernel: megaraid_sas 0000:18:00.0: 3781 (805558250s/0x0020/CRIT) - Controller encountered a fatal error and was reset
Jul 11 14:12:32 cloudcephosd1013 kernel: sd 0:0:8:0: SCSI device is removed
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#134 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#134 CDB: Write(10) 2a 00 19 75 1c 98 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 427105432 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#135 FAILED Result: hostbyte=DID_REQUEUE driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 433661088 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#135 CDB: Write(10) 2a 00 19 7c 3a 00 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 427571712 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#136 FAILED Result: hostbyte=DID_REQUEUE driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 433596896 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#136 CDB: Write(10) 2a 00 19 89 1f 38 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 433660880 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 428416824 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#137 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 433570696 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#137 CDB: Write(10) 2a 00 19 d7 bd 60 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 433570808 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 433569120 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: blk_update_request: I/O error, dev sdj, sector 433659280 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#138 FAILED Result: hostbyte=DID_REQUEUE driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#138 CDB: Write(10) 2a 00 19 d7 c3 78 00 00 10 00
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#147 FAILED Result: hostbyte=DID_REQUEUE driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#147 CDB: Write(10) 2a 00 19 d9 16 50 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#149 FAILED Result: hostbyte=DID_REQUEUE driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#149 CDB: Write(10) 2a 00 19 d9 23 98 00 00 18 00
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#323 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#323 CDB: Write(10) 2a 00 19 d8 27 60 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#325 FAILED Result: hostbyte=DID_REQUEUE driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#325 CDB: Write(10) 2a 00 19 d8 29 f0 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#326 FAILED Result: hostbyte=DID_REQUEUE driverbyte=DRIVER_OK cmd_age=475s
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: [sdj] tag#326 CDB: Write(10) 2a 00 19 d8 2a 00 00 00 08 00
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
Jul 11 14:12:32 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 226257573, async page read
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
[... preceding line "scsi 0:0:8:0: rejecting I/O to dead device" repeated 118 more times, all at 14:12:32 ...]
Jul 11 14:12:32 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 227155363, async page read
Jul 11 14:12:32 cloudcephosd1013 kernel: scsi 0:0:8:0: rejecting I/O to dead device
[... preceding line "scsi 0:0:8:0: rejecting I/O to dead device" repeated 141 more times, all at 14:12:32 ...]
Jul 11 14:12:32 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 227178142, lost async page write
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Stopping LVM event activation on device 8:144...
Jul 11 14:12:32 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 227178143, lost async page write
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Main process exited, code=killed, status=9/KILL
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Consumed 54.087s CPU time.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: confd_prometheus_metrics.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Export confd Prometheus metrics.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: prometheus-node-kernel-messages.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Generate prometheus stats about kernel messages.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: prometheus-debian-version-textfile.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Update Debian version stat exported by node_exporter.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: prometheus-nic-firmware-textfile.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Update NIC firmware stats exported by node_exporter.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: prometheus-dpkg-success-textfile.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Update dpkg status exported by node_exporter.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: lvm2-pvscan@8:144.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Stopped LVM event activation on device 8:144.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Starting Export confd Prometheus metrics...
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Starting Generate prometheus stats about kernel messages...
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Stopping Flush Journal to Persistent Storage...
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journal-flush.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Stopped Flush Journal to Persistent Storage.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Stopped Journal Service.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: systemd-journald.service: Consumed 54.087s CPU time.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Starting Journal Service...
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: prometheus-node-kernel-messages.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Generate prometheus stats about kernel messages.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: confd_prometheus_metrics.service: Succeeded.
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Export confd Prometheus metrics.
Jul 11 14:12:32 cloudcephosd1013 systemd-journald[975165]: Journal started
Jul 11 14:12:32 cloudcephosd1013 systemd-journald[975165]: System Journal (/var/log/journal/7450d9254f8c4164b3088a399cb2cb51) is 3.9G, max 4.0G, 31.8M free.
Jul 11 14:05:00 cloudcephosd1013 systemd[1]: Starting Export confd Prometheus metrics...
Jul 11 14:04:52 cloudcephosd1013 ceph-osd[2055]: 2025-07-11T14:04:52.965+0000 7f34a2792700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f34844bc700' had timed out after 15.000000954s
Jul 11 14:05:00 cloudcephosd1013 systemd[1]: Starting Generate prometheus stats about kernel messages...
Jul 11 14:04:52 cloudcephosd1013 ceph-osd[2055]: 2025-07-11T14:04:52.965+0000 7f34a2792700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f34884c4700' had timed out after 15.000000954s
Jul 11 14:05:00 cloudcephosd1013 systemd[1]: Starting Generate prometheus network latency metrics with pings...
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2014]: 2025-07-11T14:04:53.013+0000 7f05acb2c700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f058e856700' had timed out after 15.000000954s
Jul 11 14:05:10 cloudcephosd1013 systemd[1]: Starting Update Debian version stat exported by node_exporter...
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2056]: 2025-07-11T14:04:53.321+0000 7f7ceb59c700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f7cd02cc700' had timed out after 15.000000954s
Jul 11 14:07:13 cloudcephosd1013 systemd[1]: systemd-journald.service: Watchdog timeout (limit 3min)!
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.417+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:07:13 cloudcephosd1013 systemd[1]: systemd-journald.service: Killing process 803 (systemd-journal) with signal SIGABRT.
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.417+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.445+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Starting Flush Journal to Persistent Storage...
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.445+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.485+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.485+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.489+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.489+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.489+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.489+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.501+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.501+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.533+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.533+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.541+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.541+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.593+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.593+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Started Journal Service.
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.625+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.625+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.665+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.665+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.701+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.701+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.709+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.709+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.805+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.809+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.837+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.837+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.857+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.857+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2013]: 2025-07-11T14:04:53.905+0000 7f1b58c8c700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1b381b1700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.917+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.917+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.933+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:53 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:53.933+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.025+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.025+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.057+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.057+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.057+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.057+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.117+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.117+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.117+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.117+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.117+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.117+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.121+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.121+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.129+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.129+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.129+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.129+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.137+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.137+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.137+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.137+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.149+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.149+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.149+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.149+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.157+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.157+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.157+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.157+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.177+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.177+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.177+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.177+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.201+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.201+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.201+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.201+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.209+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.209+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.209+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.209+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.321+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.321+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.321+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.321+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.333+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.333+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.333+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.333+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.361+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.361+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.361+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.361+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.377+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.377+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.377+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.377+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.449+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.449+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.449+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.449+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.461+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.461+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.461+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.461+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.489+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.489+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.489+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.489+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.493+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.493+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.493+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.493+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.561+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 systemd-journald[975165]: System Journal (/var/log/journal/7450d9254f8c4164b3088a399cb2cb51) is 3.9G, max 4.0G, 31.8M free.
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.561+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.561+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.561+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.585+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.585+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.585+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.585+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.589+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.589+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.589+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.589+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.613+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.613+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.613+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.613+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.625+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.625+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.625+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.625+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.669+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.669+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.669+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.669+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.693+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.693+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.693+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.693+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.693+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.693+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 systemd[1]: Finished Flush Journal to Persistent Storage.
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2067]: 2025-07-11T14:04:54.753+0000 7fc791a86700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fc771fad700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2067]: 2025-07-11T14:04:54.753+0000 7fc791a86700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fc775fb5700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2067]: 2025-07-11T14:04:54.753+0000 7fc792287700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fc771fad700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2067]: 2025-07-11T14:04:54.753+0000 7fc792287700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fc775fb5700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.825+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.829+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.829+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.829+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.845+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.861+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.869+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.909+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.921+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.929+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.929+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.929+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.957+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.961+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.961+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.961+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.961+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.961+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.961+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.969+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.969+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.969+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:54.969+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.969+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.969+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:54 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:54.969+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.009+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.009+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.009+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.013+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.029+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.029+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.029+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.029+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.029+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.029+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.081+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.085+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.121+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.121+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.121+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.121+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.121+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.121+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.185+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.185+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.185+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.185+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.185+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.185+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.189+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.189+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.189+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.189+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.189+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.189+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.229+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.229+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.229+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.229+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.229+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.229+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5342ce6700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5342ce6700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.229+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.241+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.241+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.241+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.241+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.241+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.241+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.337+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.337+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.337+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.337+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.337+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.337+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.361+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.361+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.361+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.361+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.361+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.361+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.421+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.421+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.421+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.421+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.421+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.421+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.457+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.457+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.457+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.457+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.457+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.457+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.593+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.605+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.657+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.709+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.725+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.757+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.821+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.837+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.897+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.897+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.897+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.897+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.897+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.897+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.897+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.901+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.901+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.901+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.901+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.901+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.901+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.901+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.905+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.921+0000 7f3d733b2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:55 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:55.949+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d528d7700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d568df700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d73bb3700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d528d7700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d538d9700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d540da700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d550dc700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d560de700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d568df700' had timed out after 15.000000954s
Jul 11 14:04:56 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d578e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d580e2700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 sshd[974780]: Connection from 208.80.154.78 port 52102 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974781]: Connection from 208.80.153.42 port 54112 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974787]: Connection from 208.80.154.78 port 37508 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974788]: Connection from 208.80.153.42 port 52400 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974797]: Connection from 208.80.154.78 port 55558 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974798]: Connection from 208.80.153.42 port 53038 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974804]: Connection from 208.80.154.78 port 48768 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974805]: Connection from 208.80.153.42 port 50156 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974810]: Connection from 208.80.154.78 port 53780 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974811]: Connection from 208.80.153.42 port 48260 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974811]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 sshd[974817]: Connection from 208.80.153.42 port 45874 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974820]: Connection from 208.80.154.78 port 54048 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974821]: Connection from 208.80.153.42 port 52812 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974820]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 CRON[974779]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jul 11 14:12:32 cloudcephosd1013 sshd[974821]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 nrpe[974838]: Error: (!log_opts) Could not complete SSL handshake with 208.80.153.42: 5
Jul 11 14:12:32 cloudcephosd1013 sudo[974833]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-ee251f7b05565659893f35260e228b45
Jul 11 14:12:32 cloudcephosd1013 sudo[974835]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-93c171d66bc322703b8184396e406a7f
Jul 11 14:12:32 cloudcephosd1013 nrpe[974841]: Error: (!log_opts) Could not complete SSL handshake with 208.80.154.78: 5
Jul 11 14:12:32 cloudcephosd1013 nrpe[974846]: Error: (!log_opts) Could not complete SSL handshake with 208.80.154.78: 5
Jul 11 14:12:32 cloudcephosd1013 nrpe[974858]: Error: (!log_opts) Could not complete SSL handshake with 208.80.153.42: 5
Jul 11 14:12:32 cloudcephosd1013 nrpe[974867]: Error: (!log_opts) Could not complete SSL handshake with 208.80.154.78: 5
Jul 11 14:12:32 cloudcephosd1013 sudo[974835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974865]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-761057ceff7ce381568f23f14a06245c
Jul 11 14:12:32 cloudcephosd1013 sudo[974868]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-eb10a93bd01d0d6906b6952cb81d9ebf
Jul 11 14:12:32 cloudcephosd1013 sudo[974866]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-61c3c58cd211ea8e8bffb506e923d1e7
Jul 11 14:12:32 cloudcephosd1013 sudo[974973]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-517d939cfa60d1712293d80053581893
Jul 11 14:12:32 cloudcephosd1013 sudo[974865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974978]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-62c890ea28ce0bf25c15223abf64b7ca
Jul 11 14:12:32 cloudcephosd1013 sudo[974979]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-1608b9a379daeaddd03d76d9e03534b7
Jul 11 14:12:32 cloudcephosd1013 sudo[974974]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-10d0dc4843cec8d331607a8e879ca0ed
Jul 11 14:12:32 cloudcephosd1013 sudo[974981]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-37183c24f262396d0eb17a3d501dfc03
Jul 11 14:12:32 cloudcephosd1013 sudo[974990]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-4f00441f55f4fd60199b0b2d221438ae
Jul 11 14:12:32 cloudcephosd1013 sudo[974989]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-7586010f1b43e4445825a74f06ca20cf
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2012]: 2025-07-11T14:04:56.009+0000 7f3d743b4700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f3d590e4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 sshd[974780]: banner exchange: Connection from 208.80.154.78 port 52102: Connection timed out
Jul 11 14:12:32 cloudcephosd1013 sshd[974781]: banner exchange: Connection from 208.80.153.42 port 54112: Connection timed out
Jul 11 14:12:32 cloudcephosd1013 sshd[974787]: banner exchange: Connection from 208.80.154.78 port 37508: Connection timed out
Jul 11 14:12:32 cloudcephosd1013 sshd[974788]: banner exchange: Connection from 208.80.153.42 port 52400: Connection timed out
Jul 11 14:12:32 cloudcephosd1013 sshd[974797]: banner exchange: Connection from 208.80.154.78 port 55558: Connection timed out
Jul 11 14:12:32 cloudcephosd1013 sshd[974798]: banner exchange: Connection from 208.80.153.42 port 53038: Connection timed out
Jul 11 14:12:32 cloudcephosd1013 sshd[974804]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 sshd[974805]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 sshd[974810]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 sshd[974811]: Connection closed by 208.80.153.42 port 48260
Jul 11 14:12:32 cloudcephosd1013 sshd[974816]: Connection from 208.80.154.78 port 46802 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[974817]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 sshd[1563]: error: beginning MaxStartups throttling
Jul 11 14:12:32 cloudcephosd1013 sshd[974820]: Connection closed by 208.80.154.78 port 54048
Jul 11 14:12:32 cloudcephosd1013 sshd[974821]: Connection closed by 208.80.153.42 port 52812
Jul 11 14:12:32 cloudcephosd1013 nrpe[974836]: Error: (!log_opts) Could not complete SSL handshake with 208.80.154.78: 5
Jul 11 14:12:32 cloudcephosd1013 sudo[974833]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974854]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-7fde8960ace9abb03d875d2d2e8f4a40
Jul 11 14:12:32 cloudcephosd1013 sudo[974972]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-35655f278c9767169e1811b1c0e0fac9
Jul 11 14:12:32 cloudcephosd1013 sudo[974977]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-0ede2a3d67a6a57d40d734e8ec847957
Jul 11 14:12:32 cloudcephosd1013 sudo[974866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 CRON[975172]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jul 11 14:12:32 cloudcephosd1013 sudo[974973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sshd[974804]: Connection closed by 208.80.154.78 port 48768
Jul 11 14:12:32 cloudcephosd1013 sshd[974805]: Connection closed by 208.80.153.42 port 50156
Jul 11 14:12:32 cloudcephosd1013 sshd[974810]: Connection closed by 208.80.154.78 port 53780
Jul 11 14:12:32 cloudcephosd1013 sshd[974816]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 sshd[974817]: Connection closed by 208.80.153.42 port 45874
Jul 11 14:12:32 cloudcephosd1013 sudo[974854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sudo[974977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:32 cloudcephosd1013 sshd[974816]: Connection closed by 208.80.154.78 port 46802
Jul 11 14:12:32 cloudcephosd1013 sshd[1563]: drop connection #14 from [208.80.154.78]:39390 on [10.64.20.64]:22 past MaxStartups
Jul 11 14:12:32 cloudcephosd1013 sshd[1563]: exited MaxStartups throttling after 00:00:23, 1 connections dropped
Jul 11 14:12:32 cloudcephosd1013 CRON[974779]: pam_unix(cron:session): session closed for user root
Jul 11 14:12:32 cloudcephosd1013 sshd[975188]: Connection from 208.80.153.42 port 58528 on 10.64.20.64 port 22 rdomain ""
Jul 11 14:12:32 cloudcephosd1013 sshd[975188]: error: kex_exchange_identification: Connection closed by remote host
Jul 11 14:12:32 cloudcephosd1013 sshd[975188]: Connection closed by 208.80.153.42 port 58528
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5342ce6700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5342ce6700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.241+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5342ce6700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5342ce6700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.249+0000 7f5360fbc700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5340ce2700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5341ce4700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5342ce6700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5344cea700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f5345cec700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f53617bd700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53464ed700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 ceph-osd[2010]: 2025-07-11T14:04:55.297+0000 7f5361fbe700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f53404e1700' had timed out after 15.000000954s
Jul 11 14:12:32 cloudcephosd1013 puppet-agent[974559]: Caching catalog for cloudcephosd1013.eqiad.wmnet
Jul 11 14:12:33 cloudcephosd1013 puppet-agent[974559]: Applying configuration version '(c39baea9ee) Manuel Arostegui - db2242: Migrate to MariaDB 10.11'
Jul 11 14:12:33 cloudcephosd1013 systemd[1]: ceph-osd@323.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:33 cloudcephosd1013 systemd[1]: ceph-osd@323.service: Failed with result 'signal'.
Jul 11 14:12:33 cloudcephosd1013 systemd[1]: ceph-osd@323.service: Consumed 3h 51min 26.303s CPU time.
Jul 11 14:12:33 cloudcephosd1013 systemd[1]: ceph-osd@326.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:33 cloudcephosd1013 systemd[1]: ceph-osd@326.service: Failed with result 'signal'.
Jul 11 14:12:33 cloudcephosd1013 systemd[1]: ceph-osd@326.service: Consumed 4h 27min 21.221s CPU time.
Jul 11 14:12:33 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 468842480, async page read
Jul 11 14:12:33 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 468842480, async page read
Jul 11 14:12:33 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 468842480, async page read
Jul 11 14:12:33 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 468842480, async page read
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@327.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@327.service: Failed with result 'signal'.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@327.service: Consumed 4h 11min 42.807s CPU time.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@324.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@324.service: Failed with result 'signal'.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@324.service: Consumed 5h 29min 41.801s CPU time.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@325.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@325.service: Failed with result 'signal'.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@325.service: Consumed 4h 19min 33.159s CPU time.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@322.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@322.service: Failed with result 'signal'.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@322.service: Consumed 4h 43min 38.007s CPU time.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@320.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@320.service: Failed with result 'signal'.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@320.service: Consumed 4h 25min 7.812s CPU time.
Jul 11 14:12:34 cloudcephosd1013 sudo[974835]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974868]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974833]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[975239]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-2e9c7b2d345edcc22498d1d5547ed17a
Jul 11 14:12:34 cloudcephosd1013 sudo[975237]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-e0776f0b5a6fee9b58bf4907d6a36e8c
Jul 11 14:12:34 cloudcephosd1013 sudo[975238]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-1609dd4c8fa6a0624fede742083674ef
Jul 11 14:12:34 cloudcephosd1013 sudo[975238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@321.service: Main process exited, code=killed, status=6/ABRT
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@321.service: Failed with result 'signal'.
Jul 11 14:12:34 cloudcephosd1013 systemd[1]: ceph-osd@321.service: Consumed 5h 3min 52.757s CPU time.
Jul 11 14:12:34 cloudcephosd1013 sudo[974866]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974974]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974972]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974978]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974854]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974973]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974977]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974981]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974979]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974865]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974990]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[974989]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:34 cloudcephosd1013 sudo[975245]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-fdb44e931e502e50a07341d9d5de0583
Jul 11 14:12:34 cloudcephosd1013 sudo[975245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975247]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-0fbf2d0c3a5342f3df6f8a97cc4813f7
Jul 11 14:12:34 cloudcephosd1013 sudo[975247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975252]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-14800999d68682cbcc8cf2faa979327d
Jul 11 14:12:34 cloudcephosd1013 sudo[975248]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-275f29dc507684a85791e5351ff85308
Jul 11 14:12:34 cloudcephosd1013 sudo[975253]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-d5bc161c5858016251acb71d91762fea
Jul 11 14:12:34 cloudcephosd1013 sudo[975250]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-c5afb4ee13d185b0e233f449fe91c2ad
Jul 11 14:12:34 cloudcephosd1013 sudo[975254]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-b0c9cc7dfe11d7282648d33e0d03ea63
Jul 11 14:12:34 cloudcephosd1013 sudo[975252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975251]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-f8d4a4edc68a8dae2a2a1287a51fb070
Jul 11 14:12:34 cloudcephosd1013 sudo[975254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975255]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-ae8a9c4039b101800ec09f8ce230de33
Jul 11 14:12:34 cloudcephosd1013 sudo[975255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975256]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-2f09f5782857ca0c69023e9dea696b05
Jul 11 14:12:34 cloudcephosd1013 sudo[975257]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-06aa05f38fbc97b9e47205bcac79b09a
Jul 11 14:12:34 cloudcephosd1013 sudo[975258]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-0a0c900be33ce136c444f83a13675077
Jul 11 14:12:34 cloudcephosd1013 sudo[975257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:34 cloudcephosd1013 sudo[975258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:35 cloudcephosd1013 sudo[975270]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/bmc-info --config-file /tmp/ipmi_exporter-0d84f6726bb1ed055b1c4ce904729d6a
Jul 11 14:12:35 cloudcephosd1013 sudo[975270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:35 cloudcephosd1013 systemd[1]: prometheus-node-pinger.service: Succeeded.
Jul 11 14:12:35 cloudcephosd1013 systemd[1]: Finished Generate prometheus network latency metrics with pings.
Jul 11 14:12:35 cloudcephosd1013 systemd[1]: Starting Generate prometheus network latency metrics with pings...
Jul 11 14:12:36 cloudcephosd1013 sudo[975270]: pam_unix(sudo:session): session closed for user root
Jul 11 14:12:36 cloudcephosd1013 sudo[975528]: prometheus : PWD=/ ; USER=root ; COMMAND=/usr/sbin/ipmimonitoring -Q --ignore-unrecognized-events --comma-separated-output --no-header-output --sdr-cache-recreate --output-event-bitmask --output-sensor-state --config-file /tmp/ipmi_exporter-1caf7a5c4db5de2ae11ec02cb63d1976
Jul 11 14:12:36 cloudcephosd1013 sudo[975528]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=110)
Jul 11 14:12:39 cloudcephosd1013 puppet-agent[974559]: (/Stage[main]/Profile::Cloudceph::Osd/Exec[Disable write cache on device /dev/sdj]/returns) /dev/sdj: No such file or directory
Jul 11 14:12:39 cloudcephosd1013 puppet-agent[974559]: 'hdparm -W 0 /dev/sdj' returned 2 instead of one of [0]
Jul 11 14:12:39 cloudcephosd1013 puppet-agent[974559]: (/Stage[main]/Profile::Cloudceph::Osd/Exec[Disable write cache on device /dev/sdj]/returns) change from 'notrun' to ['0'] failed: 'hdparm -W 0 /dev/sdj' returned 2 instead of one of [0] (corrective)
Jul 11 14:12:39 cloudcephosd1013 puppet-agent[974559]: (/Stage[main]/Profile::Cloudceph::Osd/Exec[Set IO scheduler on device /dev/sdj to none]/returns) sh: 1: cannot create /sys/block/sdj/queue/scheduler: Directory nonexistent
Jul 11 14:12:39 cloudcephosd1013 puppet-agent[974559]: 'echo none > /sys/block/sdj/queue/scheduler' returned 2 instead of one of [0]
Jul 11 14:12:39 cloudcephosd1013 puppet-agent[974559]: (/Stage[main]/Profile::Cloudceph::Osd/Exec[Set IO scheduler on device /dev/sdj to none]/returns) change from 'notrun' to ['0'] failed: 'echo none > /sys/block/sdj/queue/scheduler' returned 2 instead of one of [0] (corrective)
Jul 11 14:12:41 cloudcephosd1013 systemd[1]: prometheus-node-pinger.service: Succeeded.
Jul 11 14:12:41 cloudcephosd1013 systemd[1]: Finished Generate prometheus network latency metrics with pings.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: ceph-osd@323.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.323.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: ceph-osd@323.service: Consumed 3h 51min 26.303s CPU time.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.323...
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.323.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: ceph-osd@326.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.326.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: ceph-osd@326.service: Consumed 4h 27min 21.221s CPU time.
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.326...
Jul 11 14:12:43 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.326.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@327.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.327.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@327.service: Consumed 4h 11min 42.807s CPU time.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.327...
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.327.
Jul 11 14:12:44 cloudcephosd1013 ceph-osd[976829]: 2025-07-11T14:12:44.180+0000 7fdc55797200 0 set uid:gid to 64045:64045 (ceph:ceph)
Jul 11 14:12:44 cloudcephosd1013 ceph-osd[976829]: 2025-07-11T14:12:44.180+0000 7fdc55797200 0 ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable), process ceph-osd, pid 976829
Jul 11 14:12:44 cloudcephosd1013 ceph-osd[976795]: 2025-07-11T14:12:44.204+0000 7f43ae715200 0 pidfile_write: ignore empty --pid-file
Jul 11 14:12:44 cloudcephosd1013 ceph-osd[976795]: 2025-07-11T14:12:44.212+0000 7f43ae715200 -1 bluestore(/var/lib/ceph/osd/ceph-326/block) _read_bdev_label failed to read from /var/lib/ceph/osd/ceph-326/block: (5) Input/output error
Jul 11 14:12:44 cloudcephosd1013 ceph-osd[976795]: 2025-07-11T14:12:44.212+0000 7f43ae715200 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-326: (2) No such file or directory
Jul 11 14:12:44 cloudcephosd1013 ceph-osd[976795]: 2025-07-11T14:12:44.212+0000 7f43ae715200 -1 bluestore(/var/lib/ceph/osd/ceph-326/block) _read_bdev_label failed to read from /var/lib/ceph/osd/ceph-326/block: (5) Input/output error
Jul 11 14:12:44 cloudcephosd1013 ceph-osd[976795]: 2025-07-11T14:12:44.212+0000 7f43ae715200 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-326: (2) No such file or directory
Jul 11 14:12:44 cloudcephosd1013 kernel: Buffer I/O error on dev dm-4, logical block 0, async page read
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@326.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@326.service: Failed with result 'exit-code'.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@324.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.324.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@324.service: Consumed 5h 29min 41.801s CPU time.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.324...
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.324.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@325.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.325.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@325.service: Consumed 4h 19min 33.159s CPU time.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.325...
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.325.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@322.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.322.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@322.service: Consumed 4h 43min 38.007s CPU time.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.322...
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.322.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@320.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.320.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@320.service: Consumed 4h 25min 7.812s CPU time.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.320...
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.320.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@321.service: Scheduled restart job, restart counter is at 1.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Stopped Ceph object storage daemon osd.321.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: ceph-osd@321.service: Consumed 5h 3min 52.757s CPU time.
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Starting Ceph object storage daemon osd.321...
Jul 11 14:12:44 cloudcephosd1013 systemd[1]: Started Ceph object storage daemon osd.321.