Paste P43154

asw-b2-codfw crash IR

Authored by RhinosF1 on Jan 14 2023, 11:16 AM.
{{irdoc|status=draft}}
== Summary ==
{{Incident scorecard
| task = T327001
| paged-num = batphone?
| responders-num = 4 SRE, 2 Volunteers
| coordinators = ?
| start = 08:18 UTC
| end = ?
}}
asw-b2-codfw crashed, causing 18 hosts to go down. There was no user impact.
<!-- Reminder: No private information on this page! --><mark>Summary of what happened, in one or two paragraphs. Avoid assuming deep knowledge of the systems here, and try to differentiate between proximate causes and root causes.</mark>
{{TOC|align=right}}
==Timeline==
<mark>Write a step by step outline of what happened to cause the incident, and how it was remedied. Include the lead-up to the incident, and any epilogue.</mark>
<mark>Consider including a graph of the error rate or other surrogate.</mark>
<mark>Link to a specific offset in SAL using the SAL tool at https://sal.toolforge.org/ ([https://sal.toolforge.org/production?q=synchronized&d=2012-01-01 example])</mark>
''All times in UTC.''
*08:18 batch of host down alerts fire '''INCIDENT BEGINS'''
*08:22 juniper virtual chassis for asw-b2-codfw goes critical
*08:34 RhinosF1 notices on IRC and pings an SRE who appeared to be recently active
*08:43 RhinosF1 files task
*08:59 taavi pages SRE via Klaxon
*09:09 godog (first SRE) responds
*09:19 Emperor depools ms-fe20(10|02) (see the depool sketch below)
*09:46 godog attempts to reboot the switch
*10:23 XioNoX confirms the console is dead and the switch needs an RMA
*10:50 immediate incident stood down
*XX:XX switch replaced/fixed (TODO)
<!-- Reminder: No private information on this page! -->
<mark>TODO: Clearly indicate when the user-visible outage began and ended.</mark>
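The ms-fe hosts were depooled at 09:19. As a hedged illustration only (not the exact commands used during the incident), such a depool could be scripted along the following lines with conftool's confctl; the hostnames and selector syntax are assumptions based on standard conftool usage, and in practice one may simply run confctl or <code>depool</code> by hand.

<syntaxhighlight lang="python">
# Illustrative sketch: depool the two Swift frontends behind asw-b2-codfw via
# conftool's confctl CLI. Hostnames and selectors are assumptions; this must
# run from a host with conftool configured and sufficient privileges.
import subprocess

HOSTS = ["ms-fe2010.codfw.wmnet", "ms-fe2002.codfw.wmnet"]

def set_pooled(host: str, pooled: str) -> None:
    """Set the pooled state for every service registered for `host`."""
    subprocess.run(
        ["confctl", "select", f"name={host}", f"set/pooled={pooled}"],
        check=True,
    )

if __name__ == "__main__":
    for host in HOSTS:
        set_pooled(host, "no")  # repool later with set_pooled(host, "yes")
</syntaxhighlight>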
==Detection==
<mark>Write how the issue was first detected. Was automated monitoring first to detect it? Or a human reporting an error?</mark>
<mark>Copy the relevant alerts that fired in this section.</mark>
<mark>Did the appropriate alert(s) fire? Was the alert volume manageable? Did they point to the problem with as much accuracy as possible?</mark>
<mark>TODO: If human only, an actionable should probably be to "add alerting".</mark>
* PROBLEM - Host XXXX is DOWN: PING CRITICAL - Packet loss = 100%
* PROBLEM - BGP status on cr(1|2)-codfw is CRITICAL: BGP CRITICAL - AS64600/IPv4: Connect - PyBal https://wikitech.wikimedia.org/wiki/Network_monitoring%23BGP_status
* PROBLEM - Juniper virtual chassis ports on asw-b-codfw is CRITICAL: CRIT: Down: 7 Unknown: 0 https://wikitech.wikimedia.org/wiki/Network_monitoring%23VCP_status
* (virtual-chassis crash) firing: Alert for device asw-b-codfw.mgmt.codfw.wmnet - virtual-chassis crash  - https://alerts.wikimedia.org/?q=alertname%3Dvirtual-chassis+crash
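As a rough sketch (the API base URL is an assumption; alerts.wikimedia.org fronts Alertmanager, but the exact endpoint reachable from a responder's host may differ), the firing "virtual-chassis crash" alerts could also be confirmed against the standard Alertmanager v2 API instead of reading the IRC alert spam:

<syntaxhighlight lang="python">
# Sketch: list the label sets of currently-firing "virtual-chassis crash"
# alerts via the Alertmanager v2 API. The base URL is an assumption.
import requests

ALERTMANAGER_API = "https://alerts.wikimedia.org/api/v2/alerts"

def firing_virtual_chassis_alerts() -> list[dict]:
    """Return the labels of active alerts named 'virtual-chassis crash'."""
    resp = requests.get(
        ALERTMANAGER_API,
        params={"filter": ['alertname="virtual-chassis crash"'], "active": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    return [alert["labels"] for alert in resp.json()]

if __name__ == "__main__":
    for labels in firing_virtual_chassis_alerts():
        print(labels)
</syntaxhighlight>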
==Conclusions==
<mark>OPTIONAL: General conclusions (bullet points or narrative)</mark>
===What went well?===
* No user impact; all systems failed over or were redundant as expected
<mark>OPTIONAL: (Use bullet points) for example: automated monitoring detected the incident, outage was root-caused quickly, etc</mark>
===What went poorly?===
* Lack of knowledge about how to handle a broken switch or how to contact remote hands
* The page was manual, which could have delayed the response had a worse set of servers been affected
* It was the weekend, so few SREs were online
<mark>OPTIONAL: (Use bullet points) for example: documentation on the affected service was unhelpful, communication difficulties, etc</mark>
===Where did we get lucky?===
* No databases or appservers and only two cache proxies were affected, so user-facing impact was minimal to none
<mark>OPTIONAL: (Use bullet points) for example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc</mark>
==Links to relevant documentation==
* …
<mark>Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.</mark>
==Actionables==
* Add documentation on how to handle a virtual chassis crash (see the sketch at the end of this section)
* Decide if alerting was appropriate
* Be clearer on when to depool: MediaWiki deploys could not happen, so codfw got depooled on Monday
<mark>Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.</mark>
<mark>Add the [[phab:project/view/4758/|#Sustainability (Incident Followup)]] and the [[phab:project/profile/4626/|#SRE-OnFIRE (Pending Review & Scorecard)]] Phabricator tag to these tasks.</mark>
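For the first actionable, a future runbook could start from something like the sketch below, which pulls virtual-chassis state over NETCONF with junos-eznc (PyEZ). The hostname, credentials and reachability are assumptions; in this incident the member's console was dead, so out-of-band access and the DC-ops/RMA steps would still need to be documented separately.

<syntaxhighlight lang="python">
# Runbook sketch (assumptions: junos-eznc installed, NETCONF enabled on the
# switch, SSH credentials available). Fetches virtual-chassis status from the
# management address of the affected stack.
from jnpr.junos import Device

def virtual_chassis_status(host: str, user: str) -> str:
    """Return the text output of 'show virtual-chassis status'."""
    with Device(host=host, user=user) as dev:
        return dev.cli("show virtual-chassis status", warning=False)

if __name__ == "__main__":
    print(virtual_chassis_status("asw-b-codfw.mgmt.codfw.wmnet", "example-user"))
</syntaxhighlight>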
==Scorecard==
{| class="wikitable"
|+[[Incident Scorecard|Incident Engagement ScoreCard]]
!
!Question
!Answer
(yes/no)
!Notes
|-
! rowspan="5" |People
|Were the people responding to this incident sufficiently different than the previous five incidents?
| yes
|
|-
|Were the people who responded prepared enough to respond effectively?
| no
| initial responders were unable to determine how to deal with the switch
|-
|Were fewer than five people paged?
| no? batphone?
|
|-
|Were pages routed to the correct sub-team(s)?
| ?
|
|-
|Were pages routed to online (business hours) engineers?  ''Answer “no” if engineers were paged after business hours.''
| no
| out of hours
|-
! rowspan="5" |Process
|Was the "Incident status" section atop the Google Doc kept up-to-date during the incident?
| N/A
| no doc
|-
| Was a public wikimediastatus.net entry created?
| no
| no user impact
|-
|Is there a phabricator task for the incident?
| yes
|
|-
|Are the documented action items assigned?
| no
|
|-
|Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence?
| yes
|
|-
! rowspan="5" |Tooling
|To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? ''Answer “no” if there are''
''open tasks that would prevent this incident or make mitigation easier if implemented.''
| yes
|
|-
| Were the people responding able to communicate effectively during the incident with the existing tooling?
| yes
|
|-
|Did existing monitoring notify the initial responders?
| yes
|
|-
|Were the engineering tools that were to be used during the incident, available and in service?
| yes
|
|-
|Were the steps taken to mitigate guided by an existing runbook?
| no
|
|-
! colspan="2" align="right" |Total score (count of all “yes” answers above)
| 7/14
| scorecard guessed
|}