Marostegui (Manuel Aróstegui)
Staff Database Administrator


User Details

User Since
Sep 1 2016, 6:48 AM (326 w, 6 d)
Availability
Available
IRC Nick
marostegui
LDAP User
Marostegui
MediaWiki User
MArostegui (WMF) [ Global Accounts ]

TZ: UTC +1/+2

Recent Activity

Today

Marostegui committed rSCHCHfc0fd9e37d88: change_echo_unread_wikis_T255174.py: New schema change (authored by Marostegui).
Wed, Dec 7, 11:12 AM
Marostegui created P42442 (An Untitled Masterwork).
Wed, Dec 7, 10:16 AM
Marostegui added a comment to T319383: Mydumper incompatibility with MariaDB 10.6 (was: Logical recoveries (myloader) to db2098:s7 are failing with "Lock wait timeout exceeded; try restarting transaction").

This is an interesting finding; maybe we should pass this info along to Marko and see if they can guess why it happens (and whether it is expected).
Also interesting to see that the script is faster than this last run of myloader - good job! :)

Wed, Dec 7, 9:03 AM · Patch-For-Review, Data-Persistence-Backup, DBA, database-backups
Marostegui updated the task description for T255174: Extend echo_unread_wikis.euw_wiki.
Wed, Dec 7, 8:55 AM · Patch-For-Review, DBA
Marostegui added a comment to T255174: Extend echo_unread_wikis.euw_wiki.

metawiki, mediawikiwiki, labswiki, and officewiki no longer have the echo_unread_wikis table.
So this is only needed in x1 (wikishared).

Wed, Dec 7, 8:54 AM · Patch-For-Review, DBA
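For context, a change like T255174 ultimately boils down to an ALTER TABLE on the sections that still carry the table; a minimal sketch, with the target column definition assumed (the real definition lives in the schema-change script, not here):

```sql
-- Hypothetical sketch only: the actual type/width comes from
-- change_echo_unread_wikis_T255174.py, not from this snippet.
ALTER TABLE echo_unread_wikis
  MODIFY euw_wiki VARBINARY(64) NOT NULL;
```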
Marostegui moved T255174: Extend echo_unread_wikis.euw_wiki from Blocked to Ready on the DBA board.
Wed, Dec 7, 8:46 AM · Patch-For-Review, DBA
Marostegui claimed T255174: Extend echo_unread_wikis.euw_wiki.

I am going to start working on this in January. It will require x1 to switch to SBR (statement-based replication) for a few days, though.

Wed, Dec 7, 8:45 AM · Patch-For-Review, DBA
Marostegui reopened T323941: Add Kelton Hurd to wmf ldap group as "Open".

@KHurd-WMF this is not done yet - I was just verifying that it is now fine, and I have also added you to the Phabricator group wmf-nda.
This can now proceed and can be picked up by this week's clinic duty person!

Wed, Dec 7, 7:51 AM · SecTeam-Processed, LDAP-Access-Requests, SRE, Security-Team
Marostegui placed T323941: Add Kelton Hurd to wmf ldap group up for grabs.
Wed, Dec 7, 7:51 AM · SecTeam-Processed, LDAP-Access-Requests, SRE, Security-Team
Marostegui reopened T323941: Add Kelton Hurd to wmf ldap group, a subtask of T318841: Onboard Kelton to Security Team, as Open.
Wed, Dec 7, 7:50 AM · Patch-For-Review, user-sbassett, Security-Team
Marostegui added a comment to T323941: Add Kelton Hurd to wmf ldap group.

So, check_user looks good, and KHurd1 is now associated with the khurd WMF email account.

Wed, Dec 7, 7:42 AM · SecTeam-Processed, LDAP-Access-Requests, SRE, Security-Team
Marostegui added a comment to T323941: Add Kelton Hurd to wmf ldap group.

@KHurd-WMF Thanks for the explanation. It is probably easier if you keep KHurd1 then, as it is already associated with your wmf email account. Could you edit the task to reflect that this is the user that needs to go into the wmf group?

Wed, Dec 7, 7:36 AM · SecTeam-Processed, LDAP-Access-Requests, SRE, Security-Team
Marostegui added a comment to T323941: Add Kelton Hurd to wmf ldap group.

@sbassett I am not sure KHurd is the right user name. From what I can see there are two users, KHurd and KHurd1, both created in 2022: one in Nov 2022 and the other yesterday (6th Dec).
Only KHurd1 has a Wikimedia email associated with it.

Wed, Dec 7, 7:03 AM · SecTeam-Processed, LDAP-Access-Requests, SRE, Security-Team
Marostegui added a comment to T280604: Post-deployment: (partly) ramp parser cache retention back up.

@Krinkle can this be closed?

Wed, Dec 7, 6:58 AM · DBA, Performance-Team (Radar), Editing-team, DiscussionTools
Marostegui added a comment to T324181: Test new PERC 755 controller on DB hosts.

This host is now back serving traffic.

Wed, Dec 7, 6:41 AM · Infrastructure-Foundations, DBA
Marostegui edited projects for T323418: decommission phab1001.eqiad.wmnet, added: Data-Persistence (work done); removed DBA.
Wed, Dec 7, 6:23 AM · Data-Persistence (work done), Patch-For-Review, SRE, ops-eqiad, Phabricator, serviceops-collab, decommission-hardware
Marostegui edited projects for T324556: vote.wikimedia.org's Special:Securepoll/list/1402 takes considerably longer in codfw than in eqiad, leading to timeouts, added: Data-Persistence; removed DBA.
Wed, Dec 7, 6:12 AM · Data-Persistence, MW-1.40-notes (1.40.0-wmf.12; 2022-11-28), Patch-For-Review, Performance Issue, MediaWiki-extensions-SecurePoll, SRE
Marostegui added a comment to T323502: Request increased quota for mix-n-match Toolforge tool.

For the DB connections request, adding @Marostegui for review.

Wed, Dec 7, 6:09 AM · Toolforge (Quota-requests)
Marostegui triaged T324142: Database grants for readonly VRTS user as Medium priority.

@Arnoldokoth let's discuss what you need here.
Do you need a new user with just a SELECT grant?

Wed, Dec 7, 6:03 AM · DBA, vrts, Znuny, serviceops-collab
Marostegui moved T323418: decommission phab1001.eqiad.wmnet from Triage to Done on the DBA board.

All done from the DBA side.

Wed, Dec 7, 6:00 AM · Data-Persistence (work done), Patch-For-Review, SRE, ops-eqiad, Phabricator, serviceops-collab, decommission-hardware
Marostegui added a comment to T323418: decommission phab1001.eqiad.wmnet.
root@db1159.eqiad.wmnet[(none)]> select user,host from mysql.user where host like '10.64.16.8';
+----------------+------------+
| User           | Host       |
+----------------+------------+
| phabricatorphd | 10.64.16.8 |
| phadmin        | 10.64.16.8 |
| phmanifest     | 10.64.16.8 |
| phstats        | 10.64.16.8 |
| phuser         | 10.64.16.8 |
+----------------+------------+
Wed, Dec 7, 5:57 AM · Data-Persistence (work done), Patch-For-Review, SRE, ops-eqiad, Phabricator, serviceops-collab, decommission-hardware
Marostegui added a comment to T323418: decommission phab1001.eqiad.wmnet.

I will merge that change and then proceed to remove the grants live.

Wed, Dec 7, 5:49 AM · Data-Persistence (work done), Patch-For-Review, SRE, ops-eqiad, Phabricator, serviceops-collab, decommission-hardware
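For reference, "removing the grants live" for a decommissioned client host usually amounts to dropping its per-host accounts on the master; a sketch, assuming the users/host listed on this task:

```sql
-- Sketch only: the real account list should come from mysql.user
-- for the decommissioned host's IP (10.64.16.8 in this task).
DROP USER 'phuser'@'10.64.16.8';
DROP USER 'phadmin'@'10.64.16.8';
-- ...and one DROP USER per remaining phabricator account on that host.
```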
Marostegui added a project to T323418: decommission phab1001.eqiad.wmnet: DBA.

That's ok Daniel, I will take care of it on this task.

Wed, Dec 7, 5:49 AM · Data-Persistence (work done), Patch-For-Review, SRE, ops-eqiad, Phabricator, serviceops-collab, decommission-hardware

Mon, Dec 5

Marostegui added a comment to T324142: Database grants for readonly VRTS user.

We can give it only the SELECT grant.

Mon, Dec 5, 4:20 PM · DBA, vrts, Znuny, serviceops-collab
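A read-only account of that kind is a two-statement affair; a hedged sketch (user name, host pattern, and database name are all placeholders, not the actual VRTS values):

```sql
-- Illustrative only: every identifier here is an assumption.
CREATE USER 'vrts_ro'@'10.%' IDENTIFIED BY '<password>';
GRANT SELECT ON vrts.* TO 'vrts_ro'@'10.%';
```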
Marostegui added a comment to T324466: VictorOps 'escalator' did not work on 2022-12-03.

Is there a way to monitor this so we can get an alert if it happens again?

Mon, Dec 5, 3:32 PM · Observability-Alerting
Marostegui added a comment to T324181: Test new PERC 755 controller on DB hosts.

It is now pooled with the normal weight. I will leave it in during working hours to see how it goes, and depool it again later.

Mon, Dec 5, 11:38 AM · Infrastructure-Foundations, DBA
Marostegui added a comment to T319383: Mydumper incompatibility with MariaDB 10.6 (was: Logical recoveries (myloader) to db2098:s7 are failing with "Lock wait timeout exceeded; try restarting transaction").

Ah cool, so the times are pretty similar, and there's not a big difference either way.

Mon, Dec 5, 10:14 AM · Patch-For-Review, Data-Persistence-Backup, DBA, database-backups
Marostegui added a comment to T319383: Mydumper incompatibility with MariaDB 10.6 (was: Logical recoveries (myloader) to db2098:s7 are failing with "Lock wait timeout exceeded; try restarting transaction").

s5 (a best-case scenario: our smallest wiki section, with a balanced number of tables) took 10h30 with the one-liners:

Mon, Dec 5, 8:51 AM · Patch-For-Review, Data-Persistence-Backup, DBA, database-backups
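For readers outside the task, the one-liners refer to the mydumper/myloader pair; a generic sketch of the shape of such a run (host, paths, and thread count are placeholders, not the values actually used):

```
# Dump a section logically, then reload it elsewhere.
mydumper --host db2098.codfw.wmnet --threads 8 --outputdir /srv/dump-s5
myloader --host db2098.codfw.wmnet --threads 8 --directory /srv/dump-s5 --overwrite-tables
```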
Marostegui added a comment to T324181: Test new PERC 755 controller on DB hosts.

The RAID controller itself looks good.
The default options we care about are all present:

  • WriteBack
  • Learning cycles disabled
  • RAID10 was configured by default
  • Strip size 256k
  • No read ahead
Mon, Dec 5, 8:23 AM · Infrastructure-Foundations, DBA
Marostegui added a comment to T324181: Test new PERC 755 controller on DB hosts.

After replicating fine during the last few days, I have pooled this host with just 1% weight to see how the controller does in terms of performance. I will slowly increase its weight during the day.

Mon, Dec 5, 6:38 AM · Infrastructure-Foundations, DBA
Marostegui added a comment to T322988: db2173 HW errors.

The host is being repooled automatically.

Mon, Dec 5, 6:14 AM · DBA, SRE, ops-codfw

Fri, Dec 2

Marostegui closed T324058: Requesting access to Turnilo for USER:Damilare Adedoyin as Resolved.

Excellent, closing this then!

Fri, Dec 2, 2:34 PM · SRE-Access-Requests, SRE
Marostegui closed T322591: Requesting access to analytics-privatedata-users for Dasm as Resolved.

Added to the nda group and created the kerberos principal. @dasm you should've received an email with further instructions. Also, please allow 30-60 minutes for puppet to run everywhere.
Please reopen if you have trouble accessing.

Fri, Dec 2, 2:33 PM · SRE, SRE-Access-Requests
Marostegui added a comment to T324058: Requesting access to Turnilo for USER:Damilare Adedoyin.

@Damilare T319057 was closed, so I would assume so. Can you please test and let us know if you get any errors?

Fri, Dec 2, 2:29 PM · SRE-Access-Requests, SRE
Marostegui added a comment to T319383: Mydumper incompatibility with MariaDB 10.6 (was: Logical recoveries (myloader) to db2098:s7 are failing with "Lock wait timeout exceeded; try restarting transaction").

Thanks for the update! Looking forward to seeing that next comparison.

Fri, Dec 2, 12:13 PM · Patch-For-Review, Data-Persistence-Backup, DBA, database-backups
Marostegui claimed T324181: Test new PERC 755 controller on DB hosts.
Fri, Dec 2, 6:42 AM · Infrastructure-Foundations, DBA
Marostegui moved T324180: Switchover s3 master (db2127 -> db2105) from Triage to Ready on the DBA board.
Fri, Dec 2, 6:41 AM · DBA
Marostegui renamed T324181: Test new PERC 755 controller on DB hosts from Test new PERC 755 controller to Test new PERC 755 controller on DB hosts.
Fri, Dec 2, 6:19 AM · Infrastructure-Foundations, DBA
Marostegui updated the title for P42205 db1134 -> db1206 from Masterwork From Distant Lands to db1134 -> db1206.
Fri, Dec 2, 6:14 AM
Marostegui added a comment to T313978: Q1:rack/setup/install db1204, db1205.

@jcrespo do you have (or want) a tracking task to productionize these hosts?

Fri, Dec 2, 6:05 AM · SRE, Data-Persistence-Backup, ops-eqiad, DC-Ops
Marostegui updated subscribers of T324057: Requesting access to Turnilo for USER:wfan.

@Ottomata can you confirm if this also needs analytics-privatedata-users group membership without ssh and kerberos?
We need @XenoRyet to approve as well.

Fri, Dec 2, 5:58 AM · Patch-For-Review, SRE-Access-Requests, SRE

Thu, Dec 1

Marostegui closed T324101: Request for access to analytics-platform-eng-admins for mlitn as Resolved.

I have merged your patch. Also, you should've gotten an email about your kerberos principal.
Please also allow 30-60 minutes for puppet to spread across the fleet.

Thu, Dec 1, 3:59 PM · SRE, SRE-Access-Requests
Marostegui updated the task description for T324101: Request for access to analytics-platform-eng-admins for mlitn.
Thu, Dec 1, 3:57 PM · SRE, SRE-Access-Requests
Marostegui closed T324205: mariadb: grant user 'phstats' additional select on phabricator_search db as Resolved.

Merged and applied the grants. Please test it and reopen if it is not working!
Thanks!

Thu, Dec 1, 3:36 PM · SRE, DBA, SRE-Access-Requests
Marostegui added a comment to T324101: Request for access to analytics-platform-eng-admins for mlitn.

@matthiasmullie do you want to also add yourself to analytics-privatedata-users in the gerrit patch? Once done I can merge and add the kerberos principal too.

Thu, Dec 1, 3:34 PM · SRE, SRE-Access-Requests
Marostegui closed T324197: nahidunlimited with same SSH password for WMCS and production as Resolved.

SSH key verified and replaced. Please allow 30-60 minutes for it to fully propagate across the fleet.

Thu, Dec 1, 12:15 PM · SRE-Access-Requests, SRE
Marostegui claimed T324197: nahidunlimited with same SSH password for WMCS and production.
Thu, Dec 1, 12:09 PM · SRE-Access-Requests, SRE
Marostegui moved T324197: nahidunlimited with same SSH password for WMCS and production from Untriaged to Awaiting User Input on the SRE-Access-Requests board.
Thu, Dec 1, 11:50 AM · SRE-Access-Requests, SRE
Marostegui updated the task description for T324197: nahidunlimited with same SSH password for WMCS and production.
Thu, Dec 1, 11:50 AM · SRE-Access-Requests, SRE
Marostegui added a comment to T324057: Requesting access to Turnilo for USER:wfan.

Thanks Greg! I will wait for the correct template and then proceed.

Thu, Dec 1, 10:48 AM · Patch-For-Review, SRE-Access-Requests, SRE
Marostegui triaged T324197: nahidunlimited with same SSH password for WMCS and production as High priority.
Thu, Dec 1, 10:45 AM · SRE-Access-Requests, SRE
Marostegui created T324197: nahidunlimited with same SSH password for WMCS and production.
Thu, Dec 1, 10:45 AM · SRE-Access-Requests, SRE
Marostegui added a comment to T324181: Test new PERC 755 controller on DB hosts.

@MoritzMuehlenhoff pointed out that this host might need https://wikitech.wikimedia.org/wiki/PERCCli - which indeed seems to work a lot better.

Thu, Dec 1, 7:27 AM · Infrastructure-Foundations, DBA
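For anyone hitting the same controller, the usual health checks with the perccli utility look roughly like this (a sketch; exact binary name and path may vary per install):

```
perccli64 /c0 show all       # controller properties, cache policy, BBU state
perccli64 /c0/vall show      # virtual drives: RAID level, strip size, state
```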
Marostegui added a comment to T324181: Test new PERC 755 controller on DB hosts.

[ 244.573876] INFO: task kworker/u97:19:289 blocked for more than 120 seconds.
[ 244.580940] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 244.587133] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 244.594969] task:kworker/u97:19 state:D stack: 0 pid: 289 ppid: 2 flags:0x00004000
[ 244.594983] Workqueue: writeback wb_workfn (flush-8:0)
[ 244.594987] Call Trace:
[ 244.594999] __schedule+0x282/0x880
[ 244.595007] ? wbt_rqw_done+0xf0/0xf0
[ 244.595011] schedule+0x46/0xb0
[ 244.595014] io_schedule+0x42/0x70
[ 244.595019] rq_qos_wait+0xc1/0x150
[ 244.595023] ? karma_partition+0x1e0/0x1e0
[ 244.595026] ? wbt_cleanup_cb+0x20/0x20
[ 244.595030] wbt_wait+0x9d/0x100
[ 244.595040] __rq_qos_throttle+0x20/0x40
[ 244.595050] blk_mq_submit_bio+0x128/0x530
[ 244.595066] submit_bio_noacct+0x3ad/0x420
[ 244.595124] ext4_io_submit+0x49/0x60 [ext4]
[ 244.595155] ext4_writepages+0x569/0xfd0 [ext4]
[ 244.595165] ? enqueue_entity+0x163/0x760
[ 244.595172] do_writepages+0x31/0xc0
[ 244.595176] __writeback_single_inode+0x39/0x2a0
[ 244.595179] writeback_sb_inodes+0x20d/0x4a0
[ 244.595184] __writeback_inodes_wb+0x4c/0xe0
[ 244.595190] wb_writeback+0x1d8/0x2a0
[ 244.595193] wb_workfn+0x296/0x4e0
[ 244.595199] ? __switch_to_asm+0x3a/0x60
[ 244.595207] process_one_work+0x1b3/0x350
[ 244.595216] worker_thread+0x53/0x3e0
[ 244.595225] ? process_one_work+0x350/0x350
[ 244.595233] kthread+0x118/0x140
[ 244.595242] ? __kthread_bind_mask+0x60/0x60
[ 244.595251] ret_from_fork+0x1f/0x30
[ 244.595258] INFO: task kworker/u98:20:297 blocked for more than 120 seconds.
[ 244.602309] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 244.608494] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 244.616318] task:kworker/u98:20 state:D stack: 0 pid: 297 ppid: 2 flags:0x00004000
[ 244.616323] Workqueue: writeback wb_workfn (flush-8:0)
[ 244.616324] Call Trace:
[ 244.616327] __schedule+0x282/0x880
[ 244.616330] ? wbt_rqw_done+0xf0/0xf0
[ 244.616331] schedule+0x46/0xb0
[ 244.616332] io_schedule+0x42/0x70
[ 244.616334] rq_qos_wait+0xc1/0x150
[ 244.616336] ? karma_partition+0x1e0/0x1e0
[ 244.616338] ? wbt_cleanup_cb+0x20/0x20
[ 244.616340] wbt_wait+0x9d/0x100
[ 244.616341] __rq_qos_throttle+0x20/0x40
[ 244.616343] blk_mq_submit_bio+0x128/0x530
[ 244.616346] submit_bio_noacct+0x3ad/0x420
[ 244.616360] ext4_io_submit+0x49/0x60 [ext4]
[ 244.616371] ext4_writepages+0x22e/0xfd0 [ext4]
[ 244.616375] ? update_sd_lb_stats.constprop.0+0xfa/0x8a0
[ 244.616377] do_writepages+0x31/0xc0
[ 244.616379] ? find_busiest_group+0x41/0x320
[ 244.616381] __writeback_single_inode+0x39/0x2a0
[ 244.616384] writeback_sb_inodes+0x20d/0x4a0
[ 244.616391] __writeback_inodes_wb+0x4c/0xe0
[ 244.616399] wb_writeback+0x1d8/0x2a0
[ 244.616407] wb_workfn+0x296/0x4e0
[ 244.616415] ? __switch_to_asm+0x3a/0x60
[ 244.616423] process_one_work+0x1b3/0x350
[ 244.616435] worker_thread+0x53/0x3e0
[ 244.616437] ? process_one_work+0x350/0x350
[ 244.616438] kthread+0x118/0x140
[ 244.616439] ? __kthread_bind_mask+0x60/0x60
[ 244.616440] ret_from_fork+0x1f/0x30
[ 244.616525] INFO: task megacli.real:2175 blocked for more than 120 seconds.
[ 244.623486] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 244.629665] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 244.637494] task:megacli.real state:D stack: 0 pid: 2175 ppid: 2174 flags:0x00000004
[ 244.637496] Call Trace:
[ 244.637498] __schedule+0x282/0x880
[ 244.637500] schedule+0x46/0xb0
[ 244.637506] megasas_issue_blocked_cmd+0xc9/0x190 [megaraid_sas]
[ 244.637510] ? add_wait_queue_exclusive+0x70/0x70
[ 244.637513] megasas_mgmt_fw_ioctl+0x465/0x6b0 [megaraid_sas]
[ 244.637516] megasas_mgmt_ioctl_fw.constprop.0+0x11d/0x180 [megaraid_sas]
[ 244.637518] megasas_mgmt_ioctl+0x24/0x40 [megaraid_sas]
[ 244.637522] __x64_sys_ioctl+0x88/0xc0
[ 244.637525] do_syscall_64+0x30/0x80
[ 244.637528] entry_SYSCALL_64_after_hwframe+0x61/0xc6
[ 244.637530] RIP: 0033:0x7f723d59f5f7
[ 244.637531] RSP: 002b:00007ffe06def618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 244.637533] RAX: ffffffffffffffda RBX: 0000000000febe70 RCX: 00007f723d59f5f7
[ 244.637534] RDX: 0000000000fe6dd0 RSI: 00000000c1944d01 RDI: 0000000000000003
[ 244.637534] RBP: 00007ffe06def650 R08: 0000000000fe6dd0 R09: 00007f723d67cbe0
[ 244.637535] R10: 000000000000006e R11: 0000000000000246 R12: 00000000004028a0
[ 244.637536] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 295.786301] sd 0:3:111:0: [sda] tag#4496 CDB: Read(16) 88 00 00 00 00 00 01 44 95 b8 00 00 00 08 00 00
[ 295.826282] sd 0:3:111:0: [sda] tag#3284 CDB: Write(16) 8a 00 00 00 00 00 02 c1 0c 90 00 00 00 08 00 00
[ 295.826292] sd 0:3:111:0: [sda] tag#3283 CDB: Write(16) 8a 00 00 00 00 00 02 81 11 f8 00 00 00 08 00 00
[ 295.826296] sd 0:3:111:0: [sda] tag#3282 CDB: Write(16) 8a 00 00 00 00 00 02 81 09 b0 00 00 00 08 00 00
[ 295.826301] sd 0:3:111:0: [sda] tag#3281 CDB: Write(16) 8a 00 00 00 00 00 02 80 10 50 00 00 00 08 00 00
[ 295.826305] sd 0:3:111:0: [sda] tag#3280 CDB: Write(16) 8a 00 00 00 00 00 02 80 0c 40 00 00 00 08 00 00
[ 295.826308] sd 0:3:111:0: [sda] tag#3279 CDB: Write(16) 8a 00 00 00 00 00 02 80 0c 28 00 00 00 08 00 00
[ 295.826312] sd 0:3:111:0: [sda] tag#3278 CDB: Write(16) 8a 00 00 00 00 00 02 80 09 00 00 00 00 08 00 00
[ 295.826315] sd 0:3:111:0: [sda] tag#3277 CDB: Write(16) 8a 00 00 00 00 00 01 80 0f 00 00 00 00 08 00 00
[ 295.826319] sd 0:3:111:0: [sda] tag#3276 CDB: Write(16) 8a 00 00 00 00 00 01 80 08 80 00 00 00 08 00 00
[ 295.826323] sd 0:3:111:0: [sda] tag#3275 CDB: Write(16) 8a 00 00 00 00 00 00 40 0b d8 00 00 00 08 00 00
[ 295.826326] sd 0:3:111:0: [sda] tag#3274 CDB: Write(16) 8a 00 00 00 00 00 00 00 08 10 00 00 00 08 00 00
[ 299.370317] sd 0:3:111:0: [sda] tag#4498 CDB: Read(16) 88 00 00 00 00 00 01 c6 34 90 00 00 00 30 00 00
[ 322.410434] sd 0:3:111:0: [sda] tag#2523 CDB: Read(16) 88 00 00 00 00 00 01 05 7f 68 00 00 00 10 00 00
[ 351.082576] sd 0:3:111:0: [sda] tag#835 CDB: Read(16) 88 00 00 00 00 00 00 09 de 20 00 00 00 f0 00 00
[ 354.922592] sd 0:3:111:0: [sda] tag#838 CDB: Read(16) 88 00 00 00 00 00 00 09 dd 20 00 00 00 88 00 00
[ 354.946596] sd 0:3:111:0: [sda] tag#838 OCR is requested due to IO timeout!!
[ 354.946607] sd 0:3:111:0: [sda] tag#838 SCSI host state: 5 SCSI host busy: 16 FW outstanding: 27
[ 354.946613] sd 0:3:111:0: [sda] tag#838 scmd: (0x0000000012556baa) retries: 0x0 allowed: 0x5
[ 354.946618] sd 0:3:111:0: [sda] tag#838 CDB: Read(16) 88 00 00 00 00 00 00 09 dd 20 00 00 00 88 00 00
[ 354.946622] sd 0:3:111:0: [sda] tag#838 Request descriptor details:
[ 354.946626] sd 0:3:111:0: [sda] tag#838 RequestFlags:0xe MSIxIndex:0x2a SMID:0x347 LMID:0x0 DevHandle:0x0
[ 354.946628] IO request frame:
[ 354.946631] 00000000: f10f00ef 00000000 00000000 0b113a40 00600002 00000020 00000000 00011000
[ 354.946645] 00000020: 00000000 00000010 00000000 00000000 00000000 00000000 00000000 02000000
[ 354.946657] 00000040: 00000088 09000000 000020dd 00008800 00000000 00000000 00000000 00000000
[ 354.946668] 00000060: 00140012 00ef0010 0001f920 00000000 00000088 00000000 00020400 00008011
[ 354.946680] 00000080: d06b5000 00000002 00001000 00000000 b1ef6000 00000001 00001000 00000000
[ 354.946691] 000000a0: 23dc1000 00000001 00001000 00000000 6b734000 00000001 00001000 00000000
[ 354.946703] 000000c0: 6b74c000 00000001 00001000 00000000 0c031000 00000001 00001000 00000000
[ 354.946714] 000000e0: b21a0000 00000001 00001000 00000000 b14c0000 00000001 000000a0 80000000
[ 354.946726] Chain frame:
[ 354.946727] 00000000: 25ee8000 00000001 00001000 00000000 b2963000 00000001 00001000 00000000
[ 354.946739] 00000020: 0e883000 00000001 00001000 00000000 6d25e000 00000001 00001000 00000000
[ 354.946750] 00000040: f9263000 00000001 00001000 00000000 0e869000 00000001 00001000 00000000
[ 354.946762] 00000060: 0e86d000 00000001 00001000 00000000 420b5000 00000002 00001000 00000000
[ 354.946773] 00000080: 0aa64000 00000001 00001000 00000000 0be2b000 00000001 00001000 40000000
[ 354.946784] 000000a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946796] 000000c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946807] 000000e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946818] 00000100: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946829] 00000120: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946840] 00000140: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946851] 00000160: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946862] 00000180: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946873] 000001a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946884] 000001c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946895] 000001e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946906] 00000200: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946917] 00000220: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946928] 00000240: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946939] 00000260: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946950] 00000280: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946961] 000002a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946972] 000002c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946983] 000002e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.946994] 00000300: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947005] 00000320: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947016] 00000340: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947026] 00000360: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947037] 00000380: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947048] 000003a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947059] 000003c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947070] 000003e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947081] 00000400: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947092] 00000420: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947103] 00000440: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947114] 00000460: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947125] 00000480: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947136] 000004a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947147] 000004c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947157] 000004e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947168] 00000500: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947179] 00000520: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947190] 00000540: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947201] 00000560: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947212] 00000580: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947223] 000005a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947234] 000005c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947245] 000005e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947256] 00000600: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947267] 00000620: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947278] 00000640: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947289] 00000660: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947300] 00000680: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947311] 000006a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947322] 000006c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947333] 000006e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947344] 00000700: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947355] 00000720: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947366] 00000740: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947377] 00000760: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947388] 00000780: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947399] 000007a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947410] 000007c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947421] 000007e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947432] 00000800: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947443] 00000820: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947454] 00000840: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947465] 00000860: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947476] 00000880: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947487] 000008a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947498] 000008c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947509] 000008e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947520] 00000900: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947531] 00000920: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947542] 00000940: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947553] 00000960: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947565] 00000980: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947576] 000009a0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947587] 000009c0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947598] 000009e0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947609] 00000a00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947620] 00000a20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947631] 00000a40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947642] 00000a60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947653] 00000a80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947664] 00000aa0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947675] 00000ac0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947686] 00000ae0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947697] 00000b00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947708] 00000b20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947719] 00000b40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947730] 00000b60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947741] 00000b80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947752] 00000ba0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947763] 00000bc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947774] 00000be0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947785] 00000c00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947796] 00000c20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947807] 00000c40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947818] 00000c60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947829] 00000c80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947840] 00000ca0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947851] 00000cc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947862] 00000ce0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947873] 00000d00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947884] 00000d20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947895] 00000d40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947906] 00000d60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947917] 00000d80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947929] 00000da0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947940] 00000dc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947951] 00000de0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947962] 00000e00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947972] 00000e20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947984] 00000e40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.947994] 00000e60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948005] 00000e80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948016] 00000ea0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948027] 00000ec0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948039] 00000ee0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948050] 00000f00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948061] 00000f20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948072] 00000f40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948083] 00000f60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948094] 00000f80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948105] 00000fa0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
248[ 354.948116] 00000fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
249[ 354.948128] 00000fe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 354.948144] megaraid_sas 0000:65:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
[ 354.948210] megaraid_sas 0000:65:00.0: [ 0]waiting for 27 commands to complete for scsi0
[ 360.062534] megaraid_sas 0000:65:00.0: [ 5]waiting for 27 commands to complete for scsi0
[ 365.182553] megaraid_sas 0000:65:00.0: [10]waiting for 27 commands to complete for scsi0
[ 365.406556] INFO: task kworker/u97:0:8 blocked for more than 120 seconds.
[ 365.413356] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 365.419542] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 365.427375] task:kworker/u97:0 state:D stack: 0 pid: 8 ppid: 2 flags:0x00004000
[ 365.427389] Workqueue: writeback wb_workfn (flush-8:0)
[ 365.427393] Call Trace:
[ 365.427405] __schedule+0x282/0x880
[ 365.427412] ? wbt_rqw_done+0xf0/0xf0
[ 365.427416] schedule+0x46/0xb0
[ 365.427419] io_schedule+0x42/0x70
[ 365.427424] rq_qos_wait+0xc1/0x150
[ 365.427428] ? karma_partition+0x1e0/0x1e0
[ 365.427432] ? wbt_cleanup_cb+0x20/0x20
[ 365.427436] wbt_wait+0x9d/0x100
[ 365.427440] __rq_qos_throttle+0x20/0x40
[ 365.427445] blk_mq_submit_bio+0x128/0x530
[ 365.427452] submit_bio_noacct+0x3ad/0x420
[ 365.427516] ext4_io_submit+0x49/0x60 [ext4]
[ 365.427552] ext4_writepages+0x22e/0xfd0 [ext4]
[ 365.427567] ? update_load_avg+0x7a/0x5d0
[ 365.427580] ? update_load_avg+0x7a/0x5d0
[ 365.427592] ? enqueue_entity+0x163/0x760
[ 365.427598] do_writepages+0x31/0xc0
[ 365.427602] __writeback_single_inode+0x39/0x2a0
[ 365.427606] writeback_sb_inodes+0x20d/0x4a0
[ 365.427610] __writeback_inodes_wb+0x4c/0xe0
[ 365.427613] wb_writeback+0x1d8/0x2a0
[ 365.427617] wb_workfn+0x296/0x4e0
[ 365.427622] ? __switch_to_asm+0x3a/0x60
[ 365.427627] process_one_work+0x1b3/0x350
[ 365.427631] worker_thread+0x53/0x3e0
[ 365.427634] ? process_one_work+0x350/0x350
[ 365.427637] kthread+0x118/0x140
[ 365.427641] ? __kthread_bind_mask+0x60/0x60
[ 365.427645] ret_from_fork+0x1f/0x30
[ 365.427702] INFO: task kworker/u97:8:267 blocked for more than 120 seconds.
[ 365.434669] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 365.440856] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 365.448683] task:kworker/u97:8 state:D stack: 0 pid: 267 ppid: 2 flags:0x00004000
[ 365.448686] Workqueue: writeback wb_workfn (flush-8:0)
[ 365.448687] Call Trace:
[ 365.448690] __schedule+0x282/0x880
[ 365.448692] ? wbt_rqw_done+0xf0/0xf0
[ 365.448693] schedule+0x46/0xb0
[ 365.448694] io_schedule+0x42/0x70
[ 365.448696] rq_qos_wait+0xc1/0x150
[ 365.448698] ? karma_partition+0x1e0/0x1e0
[ 365.448699] ? wbt_cleanup_cb+0x20/0x20
[ 365.448701] wbt_wait+0x9d/0x100
[ 365.448702] __rq_qos_throttle+0x20/0x40
[ 365.448704] blk_mq_submit_bio+0x128/0x530
[ 365.448705] submit_bio_noacct+0x3ad/0x420
[ 365.448721] ext4_io_submit+0x49/0x60 [ext4]
[ 365.448732] ext4_writepages+0x22e/0xfd0 [ext4]
[ 365.448734] ? update_load_avg+0x7a/0x5d0
[ 365.448736] ? update_load_avg+0x7a/0x5d0
[ 365.448737] ? enqueue_entity+0x163/0x760
[ 365.448739] do_writepages+0x31/0xc0
[ 365.448740] __writeback_single_inode+0x39/0x2a0
[ 365.448742] writeback_sb_inodes+0x20d/0x4a0
[ 365.448744] __writeback_inodes_wb+0x4c/0xe0
[ 365.448745] wb_writeback+0x1d8/0x2a0
[ 365.448747] wb_workfn+0x296/0x4e0
[ 365.448748] ? __switch_to_asm+0x3a/0x60
[ 365.448750] process_one_work+0x1b3/0x350
[ 365.448752] worker_thread+0x53/0x3e0
[ 365.448753] ? process_one_work+0x350/0x350
[ 365.448755] kthread+0x118/0x140
[ 365.448756] ? __kthread_bind_mask+0x60/0x60
[ 365.448757] ret_from_fork+0x1f/0x30
[ 365.448759] INFO: task kworker/u97:15:281 blocked for more than 120 seconds.
[ 365.455805] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 365.461985] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 365.469816] task:kworker/u97:15 state:D stack: 0 pid: 281 ppid: 2 flags:0x00004000
[ 365.469819] Workqueue: writeback wb_workfn (flush-8:0)
[ 365.469820] Call Trace:
[ 365.469821] __schedule+0x282/0x880
[ 365.469823] ? wbt_rqw_done+0xf0/0xf0
[ 365.469824] schedule+0x46/0xb0
[ 365.469826] io_schedule+0x42/0x70
[ 365.469827] rq_qos_wait+0xc1/0x150
[ 365.469828] ? karma_partition+0x1e0/0x1e0
[ 365.469829] ? wbt_cleanup_cb+0x20/0x20
[ 365.469831] wbt_wait+0x9d/0x100
[ 365.469832] __rq_qos_throttle+0x20/0x40
[ 365.469833] blk_mq_submit_bio+0x128/0x530
[ 365.469835] submit_bio_noacct+0x3ad/0x420
[ 365.469844] ext4_io_submit+0x49/0x60 [ext4]
[ 365.469852] ext4_writepages+0x22e/0xfd0 [ext4]
[ 365.469853] ? update_load_avg+0x7a/0x5d0
[ 365.469856] ? update_load_avg+0x7a/0x5d0
[ 365.469857] ? enqueue_entity+0x163/0x760
[ 365.469858] do_writepages+0x31/0xc0
[ 365.469861] ? fprop_reflect_period_percpu.isra.0+0x7b/0xc0
[ 365.469862] __writeback_single_inode+0x39/0x2a0
[ 365.469864] writeback_sb_inodes+0x20d/0x4a0
[ 365.469865] __writeback_inodes_wb+0x4c/0xe0
[ 365.469866] wb_writeback+0x1d8/0x2a0
[ 365.469868] wb_workfn+0x296/0x4e0
[ 365.469869] ? __switch_to_asm+0x3a/0x60
[ 365.469871] process_one_work+0x1b3/0x350
[ 365.469872] worker_thread+0x53/0x3e0
[ 365.469873] ? process_one_work+0x350/0x350
[ 365.469874] kthread+0x118/0x140
[ 365.469875] ? __kthread_bind_mask+0x60/0x60
[ 365.469876] ret_from_fork+0x1f/0x30
[ 365.469878] INFO: task kworker/u97:19:289 blocked for more than 241 seconds.
[ 365.476929] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 365.483116] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 365.490942] task:kworker/u97:19 state:D stack: 0 pid: 289 ppid: 2 flags:0x00004000
[ 365.490945] Workqueue: writeback wb_workfn (flush-8:0)
[ 365.490946] Call Trace:
[ 365.490948] __schedule+0x282/0x880
[ 365.490950] ? wbt_rqw_done+0xf0/0xf0
[ 365.490951] schedule+0x46/0xb0
[ 365.490952] io_schedule+0x42/0x70
[ 365.490953] rq_qos_wait+0xc1/0x150
[ 365.490954] ? karma_partition+0x1e0/0x1e0
[ 365.490955] ? wbt_cleanup_cb+0x20/0x20
[ 365.490957] wbt_wait+0x9d/0x100
[ 365.490958] __rq_qos_throttle+0x20/0x40
[ 365.490959] blk_mq_submit_bio+0x128/0x530
[ 365.490960] submit_bio_noacct+0x3ad/0x420
[ 365.490968] ext4_io_submit+0x49/0x60 [ext4]
[ 365.490976] ext4_writepages+0x569/0xfd0 [ext4]
[ 365.490978] ? enqueue_entity+0x163/0x760
[ 365.490979] do_writepages+0x31/0xc0
[ 365.490981] __writeback_single_inode+0x39/0x2a0
[ 365.490982] writeback_sb_inodes+0x20d/0x4a0
[ 365.490983] __writeback_inodes_wb+0x4c/0xe0
[ 365.490985] wb_writeback+0x1d8/0x2a0
[ 365.490986] wb_workfn+0x296/0x4e0
[ 365.490987] ? __switch_to_asm+0x3a/0x60
[ 365.490989] process_one_work+0x1b3/0x350
[ 365.490990] worker_thread+0x53/0x3e0
[ 365.490991] ? process_one_work+0x350/0x350
[ 365.490992] kthread+0x118/0x140
[ 365.490993] ? __kthread_bind_mask+0x60/0x60
[ 365.490995] ret_from_fork+0x1f/0x30
[ 365.490997] INFO: task kworker/u98:20:297 blocked for more than 241 seconds.
[ 365.498050] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 365.504227] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 365.512053] task:kworker/u98:20 state:D stack: 0 pid: 297 ppid: 2 flags:0x00004000
[ 365.512056] Workqueue: writeback wb_workfn (flush-8:0)
[ 365.512057] Call Trace:
[ 365.512060] __schedule+0x282/0x880
[ 365.512062] ? wbt_rqw_done+0xf0/0xf0
[ 365.512063] schedule+0x46/0xb0
[ 365.512064] io_schedule+0x42/0x70
[ 365.512066] rq_qos_wait+0xc1/0x150
[ 365.512067] ? karma_partition+0x1e0/0x1e0
[ 365.512069] ? wbt_cleanup_cb+0x20/0x20
[ 365.512070] wbt_wait+0x9d/0x100
[ 365.512071] __rq_qos_throttle+0x20/0x40
[ 365.512072] blk_mq_submit_bio+0x128/0x530
[ 365.512074] submit_bio_noacct+0x3ad/0x420
[ 365.512082] ext4_io_submit+0x49/0x60 [ext4]
[ 365.512090] ext4_writepages+0x22e/0xfd0 [ext4]
[ 365.512092] ? update_sd_lb_stats.constprop.0+0xfa/0x8a0
[ 365.512094] do_writepages+0x31/0xc0
[ 365.512095] ? find_busiest_group+0x41/0x320
[ 365.512096] __writeback_single_inode+0x39/0x2a0
[ 365.512097] writeback_sb_inodes+0x20d/0x4a0
[ 365.512099] __writeback_inodes_wb+0x4c/0xe0
[ 365.512100] wb_writeback+0x1d8/0x2a0
[ 365.512101] wb_workfn+0x296/0x4e0
[ 365.512103] ? __switch_to_asm+0x3a/0x60
[ 365.512104] process_one_work+0x1b3/0x350
[ 365.512105] worker_thread+0x53/0x3e0
[ 365.512107] ? process_one_work+0x350/0x350
[ 365.512108] kthread+0x118/0x140
[ 365.512109] ? __kthread_bind_mask+0x60/0x60
[ 365.512110] ret_from_fork+0x1f/0x30
[ 365.512112] INFO: task kworker/u98:21:298 blocked for more than 120 seconds.
[ 365.519160] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 365.525340] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 365.533180] task:kworker/u98:21 state:D stack: 0 pid: 298 ppid: 2 flags:0x00004000
[ 365.533184] Workqueue: writeback wb_workfn (flush-8:0)
[ 365.533185] Call Trace:
[ 365.533187] __schedule+0x282/0x880
[ 365.533189] ? wbt_rqw_done+0xf0/0xf0
[ 365.533190] schedule+0x46/0xb0
[ 365.533191] io_schedule+0x42/0x70
[ 365.533193] rq_qos_wait+0xc1/0x150
[ 365.533194] ? karma_partition+0x1e0/0x1e0
[ 365.533196] ? wbt_cleanup_cb+0x20/0x20
[ 365.533198] wbt_wait+0x9d/0x100
[ 365.533199] __rq_qos_throttle+0x20/0x40
[ 365.533200] blk_mq_submit_bio+0x128/0x530
[ 365.533201] submit_bio_noacct+0x3ad/0x420
[ 365.533209] ext4_bio_write_page+0x30c/0x590 [ext4]
[ 365.533219] mpage_submit_page+0x4b/0x80 [ext4]
[ 365.533227] mpage_process_page_bufs+0x11a/0x130 [ext4]
[ 365.533235] mpage_prepare_extent_to_map+0x1ce/0x2e0 [ext4]
[ 365.533243] ext4_writepages+0x210/0xfd0 [ext4]
[ 365.533246] ? update_load_avg+0x4f6/0x5d0
[ 365.533248] ? enqueue_entity+0x3ea/0x760
[ 365.533259] ? select_task_rq_fair+0x14e/0x11c0
[ 365.533269] do_writepages+0x31/0xc0
[ 365.533280] ? fprop_reflect_period_percpu.isra.0+0x7b/0xc0
[ 365.533293] ? sched_clock+0x5/0x10
[ 365.533303] ? sched_clock_cpu+0xc/0xb0
[ 365.533314] __writeback_single_inode+0x39/0x2a0
[ 365.533324] writeback_sb_inodes+0x20d/0x4a0
[ 365.533333] __writeback_inodes_wb+0x4c/0xe0
[ 365.533343] wb_writeback+0x1d8/0x2a0
[ 365.533353] wb_workfn+0x296/0x4e0
[ 365.533364] ? __switch_to_asm+0x3a/0x60
[ 365.533366] process_one_work+0x1b3/0x350
[ 365.533368] worker_thread+0x53/0x3e0
[ 365.533369] ? process_one_work+0x350/0x350
[ 365.533370] kthread+0x118/0x140
[ 365.533371] ? __kthread_bind_mask+0x60/0x60
[ 365.533373] ret_from_fork+0x1f/0x30
[ 365.533400] INFO: task jbd2/sda1-8:628 blocked for more than 120 seconds.
[ 365.540186] Not tainted 5.10.0-19-amd64 #1 Debian 5.10.149-2
[ 365.546364] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 365.554191] task:jbd2/sda1-8 state:D stack: 0 pid: 628 ppid: 2 flags:0x00004000
[ 365.554193] Call Trace:
[ 365.554195] __schedule+0x282/0x880
[ 365.554197] ? wbt_rqw_done+0xf0/0xf0
[ 365.554198] schedule+0x46/0xb0
[ 365.554199] io_schedule+0x42/0x70
[ 365.554200] rq_qos_wait+0xc1/0x150
[ 365.554201] ? karma_partition+0x1e0/0x1e0
[ 365.554203] ? wbt_cleanup_cb+0x20/0x20
[ 365.554204] wbt_wait+0x9d/0x100
[ 365.554205] __rq_qos_throttle+0x20/0x40
[ 365.554206] blk_mq_submit_bio+0x128/0x530
[ 365.554208] submit_bio_noacct+0x3ad/0x420
[ 365.554212] ? bio_add_page+0x62/0x90
[ 365.554215] submit_bh_wbc+0x16a/0x1a0
[ 365.554226] jbd2_journal_commit_transaction+0x60a/0x1ad0 [jbd2]
[ 365.554232] kjournald2+0xab/0x270 [jbd2]
[ 365.554234] ? add_wait_queue_exclusive+0x70/0x70
[ 365.554237] ? load_superblock.part.0+0xb0/0xb0 [jbd2]
[ 365.554238] kthread+0x118/0x140
[ 365.554239] ? __kthread_bind_mask+0x60/0x60
[ 365.554240] ret_from_fork+0x1f/0x30
[ 370.302569] megaraid_sas 0000:65:00.0: [15]waiting for 27 commands to complete for scsi0
[ 375.422586] megaraid_sas 0000:65:00.0: [20]waiting for 27 commands to complete for scsi0
[ 380.542603] megaraid_sas 0000:65:00.0: [25]waiting for 27 commands to complete for scsi0
[ 385.662619] megaraid_sas 0000:65:00.0: [30]waiting for 27 commands to complete for scsi0
[ 390.782639] megaraid_sas 0000:65:00.0: [35]waiting for 27 commands to complete for scsi0
[ 395.902649] megaraid_sas 0000:65:00.0: [40]waiting for 27 commands to complete for scsi0
[ 401.022667] megaraid_sas 0000:65:00.0: [45]waiting for 27 commands to complete for scsi0
[ 406.142693] megaraid_sas 0000:65:00.0: [50]waiting for 27 commands to complete for scsi0
[ 411.262695] megaraid_sas 0000:65:00.0: [55]waiting for 27 commands to complete for scsi0
[ 416.382703] megaraid_sas 0000:65:00.0: [60]waiting for 27 commands to complete for scsi0
[ 421.502719] megaraid_sas 0000:65:00.0: [65]waiting for 27 commands to complete for scsi0
[ 426.622737] megaraid_sas 0000:65:00.0: [70]waiting for 27 commands to complete for scsi0
[ 431.742746] megaraid_sas 0000:65:00.0: [75]waiting for 27 commands to complete for scsi0
[ 436.862759] megaraid_sas 0000:65:00.0: [80]waiting for 27 commands to complete for scsi0
[ 441.982770] megaraid_sas 0000:65:00.0: [85]waiting for 27 commands to complete for scsi0
[ 447.102785] megaraid_sas 0000:65:00.0: [90]waiting for 27 commands to complete for scsi0
[ 452.222796] megaraid_sas 0000:65:00.0: [95]waiting for 27 commands to complete for scsi0
[ 457.342805] megaraid_sas 0000:65:00.0: [100]waiting for 27 commands to complete for scsi0
[ 462.462823] megaraid_sas 0000:65:00.0: [105]waiting for 27 commands to complete for scsi0
[ 467.582835] megaraid_sas 0000:65:00.0: [110]waiting for 27 commands to complete for scsi0
[ 472.702845] megaraid_sas 0000:65:00.0: [115]waiting for 27 commands to complete for scsi0
[ 475.737647] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 477.822854] megaraid_sas 0000:65:00.0: [120]waiting for 27 commands to complete for scsi0
[ 480.862861] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 482.942863] megaraid_sas 0000:65:00.0: [125]waiting for 27 commands to complete for scsi0
[ 485.982871] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 488.062877] megaraid_sas 0000:65:00.0: [130]waiting for 27 commands to complete for scsi0
[ 491.102883] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 493.182888] megaraid_sas 0000:65:00.0: [135]waiting for 27 commands to complete for scsi0
[ 496.222893] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 498.302898] megaraid_sas 0000:65:00.0: [140]waiting for 27 commands to complete for scsi0
[ 501.342920] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 503.422912] megaraid_sas 0000:65:00.0: [145]waiting for 27 commands to complete for scsi0
[ 506.462935] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 508.542941] megaraid_sas 0000:65:00.0: [150]waiting for 27 commands to complete for scsi0
[ 511.582971] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 513.662988] megaraid_sas 0000:65:00.0: [155]waiting for 27 commands to complete for scsi0
[ 516.703025] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 518.783047] megaraid_sas 0000:65:00.0: [160]waiting for 27 commands to complete for scsi0
[ 521.823063] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 523.903074] megaraid_sas 0000:65:00.0: Trigger snap dump
[ 526.943112] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 532.063143] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 537.183191] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 539.263201] megaraid_sas 0000:65:00.0: resetting fusion adapter scsi0.
[ 539.264295] megaraid_sas 0000:65:00.0: Outstanding fastpath IOs: 13
[ 542.303230] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 546.947269] megaraid_sas 0000:65:00.0: Waiting for FW to come to ready state
[ 547.423269] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 552.543312] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 557.663348] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 562.783382] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 566.246389] systemd[1]: systemd-journald.service: State 'stop-watchdog' timed out. Killing.
[ 566.246464] systemd[1]: systemd-journald.service: Killing process 680 (systemd-journal) with signal SIGKILL.
[ 566.246599] systemd[1]: systemd-journald.service: Killing process 2370 (journal-offline) with signal SIGKILL.
[ 566.246629] systemd[1]: systemd-journald.service: Killing process 2371 (journal-offline) with signal SIGKILL.
[ 567.903416] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 568.227417] megaraid_sas 0000:65:00.0: FW now in Ready state
[ 568.227425] megaraid_sas 0000:65:00.0: FW now in Ready state
[ 568.228551] megaraid_sas 0000:65:00.0: Current firmware supports maximum commands: 5101 LDIO threshold: 0
[ 568.228556] megaraid_sas 0000:65:00.0: Performance mode :Balanced
[ 568.228559] megaraid_sas 0000:65:00.0: FW supports sync cache : Yes
[ 568.228567] megaraid_sas 0000:65:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
[ 571.083436] megaraid_sas 0000:65:00.0: FW supports atomic descriptor : Yes
[ 571.139437] megaraid_sas 0000:65:00.0: FW provided supportMaxExtLDs: 1 max_lds: 240
[ 571.139442] megaraid_sas 0000:65:00.0: controller type : MR(8192MB)
[ 571.139445] megaraid_sas 0000:65:00.0: Online Controller Reset(OCR) : Enabled
[ 571.139447] megaraid_sas 0000:65:00.0: Secure JBOD support : No
[ 571.139449] megaraid_sas 0000:65:00.0: NVMe passthru support : Yes
[ 571.139452] megaraid_sas 0000:65:00.0: FW provided TM TaskAbort/Reset timeout : 6 secs/60 secs
[ 571.139454] megaraid_sas 0000:65:00.0: JBOD sequence map support : Yes
[ 571.139456] megaraid_sas 0000:65:00.0: PCI Lane Margining support : Yes
[ 573.023454] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 578.143494] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 583.263521] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 588.383554] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 593.503586] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 598.623617] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 599.175616] megaraid_sas 0000:65:00.0: megasas_get_ld_map_info DCMD timed out, RAID map is disabled
[ 603.743646] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 606.851659] megaraid_sas 0000:65:00.0: Waiting for FW to come to ready state
[ 608.863673] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 613.983704] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 619.103733] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 624.223760] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 628.243781] megaraid_sas 0000:65:00.0: FW now in Ready state
[ 628.243788] megaraid_sas 0000:65:00.0: FW now in Ready state
[ 628.244932] megaraid_sas 0000:65:00.0: Current firmware supports maximum commands: 5101 LDIO threshold: 0
[ 628.244937] megaraid_sas 0000:65:00.0: Performance mode :Balanced
[ 628.244939] megaraid_sas 0000:65:00.0: FW supports sync cache : Yes
[ 628.244947] megaraid_sas 0000:65:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
[ 628.775783] megaraid_sas 0000:65:00.0: FW supports atomic descriptor : Yes
[ 628.859782] megaraid_sas 0000:65:00.0: FW provided supportMaxExtLDs: 1 max_lds: 240
[ 628.859787] megaraid_sas 0000:65:00.0: controller type : MR(8192MB)
[ 628.859790] megaraid_sas 0000:65:00.0: Online Controller Reset(OCR) : Enabled
[ 628.859792] megaraid_sas 0000:65:00.0: Secure JBOD support : No
[ 628.859794] megaraid_sas 0000:65:00.0: NVMe passthru support : Yes
[ 628.859797] megaraid_sas 0000:65:00.0: FW provided TM TaskAbort/Reset timeout : 6 secs/60 secs
[ 628.859799] megaraid_sas 0000:65:00.0: JBOD sequence map support : Yes
[ 628.859801] megaraid_sas 0000:65:00.0: PCI Lane Margining support : Yes
[ 629.343796] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 634.463823] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 639.583839] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 644.703861] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 649.823892] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 654.943916] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 656.496882] systemd[1]: systemd-journald.service: Processes still around after SIGKILL. Ignoring.
[ 656.887924] megaraid_sas 0000:65:00.0: megasas_get_ld_map_info DCMD timed out, RAID map is disabled
[ 660.063942] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 662.481290] systemd[1]: session-4.scope: Succeeded.
[ 664.599960] megaraid_sas 0000:65:00.0: Waiting for FW to come to ready state
[ 665.183966] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 670.303990] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 675.424014] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 680.544034] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 685.044053] megaraid_sas 0000:65:00.0: FW now in Ready state
[ 685.044061] megaraid_sas 0000:65:00.0: FW now in Ready state
[ 685.045181] megaraid_sas 0000:65:00.0: Current firmware supports maximum commands: 5101 LDIO threshold: 0
[ 685.045186] megaraid_sas 0000:65:00.0: Performance mode :Balanced
[ 685.045189] megaraid_sas 0000:65:00.0: FW supports sync cache : Yes
[ 685.045197] megaraid_sas 0000:65:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
[ 685.576052] megaraid_sas 0000:65:00.0: FW supports atomic descriptor : Yes
[ 685.632053] megaraid_sas 0000:65:00.0: FW provided supportMaxExtLDs: 1 max_lds: 240
[ 685.632058] megaraid_sas 0000:65:00.0: controller type : MR(8192MB)
[ 685.632061] megaraid_sas 0000:65:00.0: Online Controller Reset(OCR) : Enabled
[ 685.632063] megaraid_sas 0000:65:00.0: Secure JBOD support : No
[ 685.632066] megaraid_sas 0000:65:00.0: NVMe passthru support : Yes
[ 685.632069] megaraid_sas 0000:65:00.0: FW provided TM TaskAbort/Reset timeout : 6 secs/60 secs
[ 685.632071] megaraid_sas 0000:65:00.0: JBOD sequence map support : Yes
[ 685.632073] megaraid_sas 0000:65:00.0: PCI Lane Margining support : Yes
[ 685.632079] megaraid_sas 0000:65:00.0: return -EBUSY from megasas_refire_mgmt_cmd 4249 cmd 0x5 opcode 0x10b0100
[ 685.642287] megaraid_sas 0000:65:00.0: return -EBUSY from megasas_mgmt_fw_ioctl 8325 cmd 0x5 opcode 0x10b0100 cmd->cmd_status_drv 0x3
[ 685.664064] megaraid_sas 0000:65:00.0: waiting for controller reset to finish
[ 685.696101] megaraid_sas 0000:65:00.0: megasas_enable_intr_fusion is called outbound_intr_mask:0x40000000
[ 685.696793] megaraid_sas 0000:65:00.0: Adapter is OPERATIONAL for scsi:0
[ 685.698506] megaraid_sas 0000:65:00.0: Snap dump wait time : 15
[ 685.698511] megaraid_sas 0000:65:00.0: Reset successful for scsi0.
[ 685.698588] megaraid_sas 0000:65:00.0: megasas_disable_intr_fusion is called outbound_intr_mask:0x40000009
[ 685.698653] megaraid_sas 0000:65:00.0: megasas_enable_intr_fusion is called outbound_intr_mask:0x40000000
[ 685.699221] megaraid_sas 0000:65:00.0: 1695 (723194424s/0x0020/CRIT) - Controller encountered an error and was reset
[ 685.711348] megaraid_sas 0000:65:00.0: scanning for scsi0...
[ 685.711585] megaraid_sas 0000:65:00.0: 1740 (723194469s/0x0020/DEAD) - Fatal firmware error: Line 171 in fw\raid\utils.c

[ 685.711760] megaraid_sas 0000:65:00.0: 1743 (723194477s/0x0020/CRIT) - Controller encountered an error and was reset
[ 685.714974] megaraid_sas 0000:65:00.0: scanning for scsi0...
[ 685.715224] megaraid_sas 0000:65:00.0: 1787 (723194520s/0x0020/DEAD) - Fatal firmware error: Line 171 in fw\raid\utils.c

[ 685.715330] megaraid_sas 0000:65:00.0: 1790 (723194528s/0x0020/CRIT) - Controller encountered an error and was reset
[ 685.717602] megaraid_sas 0000:65:00.0: scanning for scsi0...
[ 695.971152] systemd[1]: prometheus-debian-version-textfile.service: Succeeded.
[ 695.977540] systemd[1]: systemd-journald.service: Main process exited, code=killed, status=9/KILL
[ 695.977552] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[ 695.980251] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
[ 695.982889] systemd[1]: Stopping Flush Journal to Persistent Storage...
[ 695.991723] systemd[1]: systemd-journal-flush.service: Succeeded.
[ 695.992044] systemd[1]: Stopped Flush Journal to Persistent Storage.
[ 695.992450] systemd[1]: Stopped Journal Service.
[ 695.996153] systemd[1]: Starting Journal Service...
[ 695.996769] systemd[1]: prometheus_puppet_agent_stats.service: Succeeded.
[ 696.000451] systemd[1]: Started Regular job to collect puppet agent stats.
[ 696.016412] systemd[1]: prometheus-nic-firmware-textfile.service: Succeeded.
[ 696.032953] systemd-journald[2721]: File /var/log/journal/ba95df37d8d64c4583ad80128a0ad521/system.journal corrupted or uncleanly shut down, renaming and replacing.
[ 696.066583] systemd[1]: Started Journal Service.
[ 696.069139] systemd-journald[2721]: File /var/log/journal/ba95df37d8d64c4583ad80128a0ad521/user-15343.journal corrupted or uncleanly shut down, renaming and replacing.
[ 696.078876] systemd-journald[2721]: Received client request to flush runtime journal.

Thu, Dec 1, 7:25 AM · Infrastructure-Foundations, DBA
Marostegui created P42112 (An Untitled Masterwork).
Thu, Dec 1, 7:24 AM
Marostegui moved T324181: Test new PERC 755 controller on DB hosts from Triage to In progress on the DBA board.

db1206 got installed correctly and worked out of the box.
However, playing around with megacli revealed that the host freezes when running some of the commands to get the controller's info.

Thu, Dec 1, 7:21 AM · Infrastructure-Foundations, DBA
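For context, a minimal sketch of the kind of read-only controller info queries involved. This is hypothetical: the exact megacli invocations run on db1206 are not recorded in the task, and depending on the package the binary may be named `MegaCli64` instead of `megacli`.

```shell
perc_info_check() {
    # Common read-only MegaCLI queries: adapter info, logical drives, physical drives.
    for args in "-AdpAllInfo -aALL" "-LDInfo -LAll -aAll" "-PDList -aAll"; do
        if command -v megacli >/dev/null 2>&1; then
            # timeout(1) keeps a hung firmware DCMD from blocking the shell forever
            timeout 30 megacli $args || echo "megacli $args failed or timed out"
        else
            echo "would run: megacli $args"
        fi
    done
}
perc_info_check
```

Wrapping each query in `timeout` only protects the calling shell; if the controller firmware wedges while servicing the DCMD, the host can still hang, which matches the dmesg trace in the paste above.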
Marostegui triaged T324181: Test new PERC 755 controller on DB hosts as High priority.
Thu, Dec 1, 7:16 AM · Infrastructure-Foundations, DBA
Marostegui created T324181: Test new PERC 755 controller on DB hosts.
Thu, Dec 1, 7:16 AM · Infrastructure-Foundations, DBA
Marostegui reassigned T322591: Requesting access to analytics-privatedata-users for Dasm from Jcross to andrea.denisse.
Thu, Dec 1, 6:42 AM · SRE, SRE-Access-Requests
Marostegui added a comment to T322591: Requesting access to analytics-privatedata-users for Dasm.

@andrea.denisse - can you merge and submit your patch so we can create the kerberos principal and close this task?
Thanks!

Thu, Dec 1, 6:42 AM · SRE, SRE-Access-Requests
Marostegui closed T314676: Requesting access to analytics-privatedata-users for vpoundstone - WMF as Resolved.

I am going to close this; please reopen if adding you to the WMF group wasn't enough.

Thu, Dec 1, 6:29 AM · SRE, SRE-Access-Requests
Marostegui added a comment to T324058: Requesting access to Turnilo for USER:Damilare Adedoyin.

Looks like @Damilare is already part of the analytics-privatedata-users: T319057

Thu, Dec 1, 6:27 AM · SRE-Access-Requests, SRE
Marostegui updated the task description for T324058: Requesting access to Turnilo for USER:Damilare Adedoyin.
Thu, Dec 1, 6:22 AM · SRE-Access-Requests, SRE
Marostegui updated the task description for T324058: Requesting access to Turnilo for USER:Damilare Adedoyin.
Thu, Dec 1, 6:21 AM · SRE-Access-Requests, SRE
Marostegui added a comment to T321130: Add column cuc_private to cu_changes on wmf wikis.

Applied to db2173 which was down as part of T322988

Thu, Dec 1, 6:20 AM · DBA, Schema-change-in-production
Marostegui added a comment to T322988: db2173 HW errors.

Thank you Papaul!

Thu, Dec 1, 6:12 AM · DBA, SRE, ops-codfw
Marostegui added a comment to T321126: Add column 'cul_actor' and index cul_actor_time to cu_log on wmf wikis.

Applied to db2173 which was down as part of T322988

Thu, Dec 1, 6:11 AM · DBA, Schema-change-in-production

Wed, Nov 30

Marostegui added a comment to T324058: Requesting access to Turnilo for USER:Damilare Adedoyin.

Thanks @Damilare - which access do you need? https://wikitech.wikimedia.org/wiki/Analytics/Data_access#What_access_should_I_request?
Do you just need Turnilo, without access to PII? It is not clear what you'd need :)

Wed, Nov 30, 5:04 PM · SRE-Access-Requests, SRE
Marostegui added a comment to T324101: Request for access to analytics-platform-eng-admins for mlitn.

I will hold until the group needed is sorted :)

Wed, Nov 30, 3:57 PM · SRE, SRE-Access-Requests
Marostegui added a comment to T319383: Mydumper incompatibility with MariaDB 10.6 (was: Logical recoveries (myloader) to db2098:s7 are failing with "Lock wait timeout exceeded; try restarting transaction").

It would be interesting to see the difference in length, in a sX section between that script and myloader. Just to have some rough idea...

Length in time? As in, performance optimized? Once parallelization is done we can compare (though I don't expect a bash script to be faster than a proper C native-connector application).

Wed, Nov 30, 3:22 PM · Patch-For-Review, Data-Persistence-Backup, DBA, database-backups
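Comparing "length in time" between the recovery script and myloader comes down to timing each run on the same sX section. A minimal wall-clock helper, as a sketch — the commented-out commands are hypothetical placeholders, not the real recovery invocations:

```python
import subprocess
import time

def timed_run(cmd):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

# Hypothetical usage: time both tools against the same section dump.
# elapsed_script = timed_run(["recovery-script.sh", "--section", "s7"])
# elapsed_myloader = timed_run(["myloader", "--directory", "/srv/dump/s7"])
duration = timed_run(["sleep", "0.2"])  # placeholder command for illustration
print(f"{duration:.2f}s")
```

Wall-clock (rather than CPU) time is the right metric here, since both tools spend most of their run waiting on the database server.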
Marostegui added a comment to T319383: Mydumper incompatibility with MariaDB 10.6 (was: Logical recoveries (myloader) to db2098:s7 are failing with "Lock wait timeout exceeded; try restarting transaction").

It would be interesting to see the difference in length, in a sX section between that script and myloader. Just to have some rough idea...

Wed, Nov 30, 3:11 PM · Patch-For-Review, Data-Persistence-Backup, DBA, database-backups
Marostegui moved T324101: Request for access to analytics-platform-eng-admins for mlitn from Untriaged to Manager/NDA Approval/Confirmation on the SRE-Access-Requests board.
Wed, Nov 30, 1:16 PM · SRE, SRE-Access-Requests
Marostegui updated subscribers of T324014: New Keyholder identity for RelEng Jenkins service.
Wed, Nov 30, 1:16 PM · serviceops-collab, SRE-Access-Requests, SRE, Continuous-Integration-Infrastructure, Jenkins
Marostegui triaged T324101: Request for access to analytics-platform-eng-admins for mlitn as Medium priority.
Wed, Nov 30, 1:15 PM · SRE, SRE-Access-Requests
Marostegui added a comment to T324101: Request for access to analytics-platform-eng-admins for mlitn.

Confirmed L3 is signed.
@MarkTraceur we need your approval for this.

Wed, Nov 30, 12:48 PM · SRE, SRE-Access-Requests
Marostegui updated the task description for T324101: Request for access to analytics-platform-eng-admins for mlitn.
Wed, Nov 30, 12:45 PM · SRE, SRE-Access-Requests
Marostegui added a comment to T314676: Requesting access to analytics-privatedata-users for vpoundstone - WMF.

@VirginiaPoundstone you were not in the WMF LDAP group. I just added you. Can you retry?

Wed, Nov 30, 11:14 AM · SRE, SRE-Access-Requests
Marostegui closed T323911: Grant Access to wmf for abartov as Resolved.

This is done - please allow 30-60 minutes for the change to propagate everywhere.

Wed, Nov 30, 9:55 AM · SRE, LDAP-Access-Requests
Marostegui moved T324057: Requesting access to Turnilo for USER:wfan from Untriaged to Awaiting User Input on the SRE-Access-Requests board.
Wed, Nov 30, 8:31 AM · Patch-For-Review, SRE-Access-Requests, SRE
Marostegui moved T324058: Requesting access to Turnilo for USER:Damilare Adedoyin from Untriaged to Awaiting User Input on the SRE-Access-Requests board.
Wed, Nov 30, 8:31 AM · SRE-Access-Requests, SRE
Marostegui triaged T324057: Requesting access to Turnilo for USER:wfan as Medium priority.
Wed, Nov 30, 7:10 AM · Patch-For-Review, SRE-Access-Requests, SRE
Marostegui triaged T324058: Requesting access to Turnilo for USER:Damilare Adedoyin as Medium priority.
Wed, Nov 30, 7:10 AM · SRE-Access-Requests, SRE
Marostegui added a comment to T324058: Requesting access to Turnilo for USER:Damilare Adedoyin.

Hello @Damilare, can you please follow the proper template at: https://phabricator.wikimedia.org/maniphest/task/edit/form/8/

Wed, Nov 30, 7:10 AM · SRE-Access-Requests, SRE
Marostegui added a comment to T324057: Requesting access to Turnilo for USER:wfan.

Hello @AnnWF, can you please follow the proper template at: https://phabricator.wikimedia.org/maniphest/task/edit/form/8/

Wed, Nov 30, 7:10 AM · Patch-For-Review, SRE-Access-Requests, SRE
Marostegui moved T323911: Grant Access to wmf for abartov from Manager Approval Pending to Code Review Pending on the LDAP-Access-Requests board.
Wed, Nov 30, 7:08 AM · SRE, LDAP-Access-Requests
Marostegui closed T321126: Add column 'cul_actor' and index cul_actor_time to cu_log on wmf wikis, a subtask of T233004: Update CheckUser for actor and comment table, as Resolved.
Wed, Nov 30, 6:51 AM · MW-1.39-notes (1.39.0-wmf.23; 2022-08-01), MW-1.38-notes (1.38.0-wmf.26; 2022-03-14), Data-Engineering, Platform Team Workboards (Clinic Duty Team), Patch-For-Review, Schema-change, CheckUser
Marostegui added a comment to T321126: Add column 'cul_actor' and index cul_actor_time to cu_log on wmf wikis.

All done

Wed, Nov 30, 6:51 AM · DBA, Schema-change-in-production
Marostegui closed T321126: Add column 'cul_actor' and index cul_actor_time to cu_log on wmf wikis, a subtask of T321063: Fix CheckUser database schema drifts in production, as Resolved.
Wed, Nov 30, 6:51 AM · CheckUser
Marostegui closed T321126: Add column 'cul_actor' and index cul_actor_time to cu_log on wmf wikis as Resolved.
Wed, Nov 30, 6:51 AM · DBA, Schema-change-in-production

Tue, Nov 29

Marostegui updated subscribers of T322256: Q3:rack/setup/install db1206.

Is this something @Papaul can finish?

Tue, Nov 29, 3:59 PM · SRE, DBA, ops-eqiad, DC-Ops
Marostegui edited projects for T324020: Load IP ranges in reverse-proxy.php from Netbox/Puppet network module, added: Infrastructure-Foundations, serviceops; removed SRE.

I think this is more specific to these two teams.

Tue, Nov 29, 3:19 PM · serviceops, Infrastructure-Foundations
Marostegui moved T323911: Grant Access to wmf for abartov from Awaiting User Input to Manager Approval Pending on the LDAP-Access-Requests board.
Tue, Nov 29, 11:51 AM · SRE, LDAP-Access-Requests
Marostegui moved T323911: Grant Access to wmf for abartov from Backlog to Awaiting User Input on the LDAP-Access-Requests board.
Tue, Nov 29, 11:51 AM · SRE, LDAP-Access-Requests
Marostegui moved T323941: Add Kelton Hurd to wmf ldap group from Backlog to Awaiting User Input on the LDAP-Access-Requests board.
Tue, Nov 29, 11:51 AM · SecTeam-Processed, LDAP-Access-Requests, SRE, Security-Team
Marostegui moved T323943: Add Kelton Hurd to deployment and analytics-privatedata-users groups from Untriaged to Awaiting User Input on the SRE-Access-Requests board.
Tue, Nov 29, 11:51 AM · SecTeam-Processed, SRE-Access-Requests, SRE, Security-Team
Marostegui updated the task description for T243037: Shutdown scholarships.wikimedia.org and archive project.
Tue, Nov 29, 11:48 AM · Patch-For-Review, Wikimedia-GitHub, Diffusion-Repository-Administrators, Projects-Cleanup, Wikimedia-Wikimania-Scholarships
Marostegui added a comment to T243037: Shutdown scholarships.wikimedia.org and archive project.

Thank you Jaime!

Tue, Nov 29, 11:48 AM · Patch-For-Review, Wikimedia-GitHub, Diffusion-Repository-Administrators, Projects-Cleanup, Wikimedia-Wikimania-Scholarships
Marostegui closed T323928: Downgrade from 10.4.27 to 10.4.26, a subtask of T322620: Compile and package MariaDB 10.4.27 and 10.6.11, as Resolved.
Tue, Nov 29, 9:16 AM · DBA
Marostegui closed T323928: Downgrade from 10.4.27 to 10.4.26 as Resolved.

All done

Tue, Nov 29, 9:16 AM · Patch-For-Review, DBA
Marostegui updated the task description for T323928: Downgrade from 10.4.27 to 10.4.26.
Tue, Nov 29, 9:15 AM · Patch-For-Review, DBA