
Lost connection to MariaDB server during query
Closed, DeclinedPublic

Description

Wikidata query (WDQ) is running on wikidata-wdq-mm. It uses the wikidatawiki_p replica on 10.64.37.4 to update.

Since (about-ish) last night, updates fail. WDQ connects to the database, but then soon fails with "MySQL server has gone away".

Whatever changed last night, please fix it! I can't maintain tools if the infrastructure keeps changing under my feet.

Event Timeline

Magnus created this task.Dec 4 2014, 9:33 AM
Magnus raised the priority of this task from to Unbreak Now!.
Magnus updated the task description. (Show Details)
Magnus added a project: Toolforge.
Magnus changed Security from none to None.
Magnus added a subscriber: Magnus.
Husky added a subscriber: Husky.Dec 4 2014, 9:49 AM

Update: I think I have coded around the issue for now. Time will tell if it works properly.

Still scary that such changes just happen and break things that worked perfectly well before.

This seems related to T76956

Steinsplitter renamed this task from Wikidata query breaking after DB change (?) to Lost connection to MariaDB server during query.Dec 10 2014, 10:30 AM
Steinsplitter assigned this task to coren.
Steinsplitter lowered the priority of this task from Unbreak Now! to High.
Steinsplitter added a subscriber: Danilo.
MariaDB [commonswiki_p]> SELECT * FROM user_daily_contribs LIMIT 3;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    1943900
Current database: commonswiki_p

+---------+------------+----------+
| user_id | day        | contribs |
+---------+------------+----------+
|      -1 | 2009-04-11 |        1 |
|      -1 | 2009-06-14 |        1 |
|      -1 | 2009-06-20 |        1 |
+---------+------------+----------+
3 rows in set (0.14 sec)
coren reassigned this task from coren to Springle.Dec 23 2014, 7:35 PM
coren moved this task from Triage to Waiting for information on the Toolforge board.
coren added a subscriber: Springle.

I see the problem occurring intermittently, but I am unable to reproduce it myself.

@Springle: do you have insight on what could be going on / has changed?

Lydia_Pintscher moved this task from incoming to monitoring on the Wikidata board.Dec 27 2014, 8:19 PM

No smoking gun yet.

As discussed a few times on labs-l, in order to fight abuse and replag we kill things explicitly when:

  1. A query runs for more than 28800 seconds
  2. One or more queries are about to collectively cause an OOM (rare)
  3. A client holds a connection open and idle for more than 60 seconds
  4. The CATSCAN stuff runs slow writes for more than 300 seconds

I wonder if #3 is the culprit here.

60s was chosen because some users leave transactions open for long periods and/or consume many concurrent connections for no good reason. If this is the problem, we can either bump up the time limit or encourage apps to auto-reconnect, use keep-alive, or just close them.
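The reconnect-on-error approach suggested above can be sketched in a driver-agnostic way. Everything here is illustrative: the `connect` callable and the exception type are injected, and in a real tool they would come from a MySQL driver (e.g. `pymysql.connect` and `pymysql.err.OperationalError`):

```python
def run_with_reconnect(connect, gone_away_exc, conn, query):
    """Run `query`, reconnecting once if the server dropped the idle
    connection. Returns (conn, result); `conn` may be replaced by a
    fresh connection, so callers must keep the returned one."""
    try:
        return conn, conn.execute(query)
    except gone_away_exc:
        # The old connection was killed (e.g. by the 60s idle limit);
        # reopen and retry the query exactly once.
        conn = connect()
        return conn, conn.execute(query)
```

The retry is deliberately limited to one attempt: if the fresh connection also fails, something other than the idle timeout is wrong and the error should propagate.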

#4 is moot, as I changed CatScan to not use temporary tables anymore.

If you still see this happening, it's probably this copy:
http://tools.wmflabs.org/catscan3/catscan2.php
which was done by someone else, and has no maintainer...

I noticed quite a few of my bots stopped working somewhere around the start of December last year. All crashed with "MySQL server has gone away". I'm pretty sure it's the 60 seconds idle limit.

The bot uses two connections:

  1. With the commonswiki_p database to do queries
  2. One with a database on labs to store the results

Do a query on 1, wait wait wait, insert the result in 2, maybe a bit more waiting, do a query on 1. So if either step takes longer than 60 seconds, the other connection will have been dropped. Could you please drop the 60 seconds limit, or just remove it for the heritage account?

Multichill raised the priority of this task from High to Unbreak Now!.Jan 18 2015, 3:18 PM
Springle added a comment.EditedJan 23 2015, 10:30 PM

Unsure if your #1 and #2 are both on labsdb replicas, or if #2 is some other db like tools-db or a custom instance.

If #1 and #2 are both connections to the labsdb production replicas, then yes, either connection going idle for 60s will result in its disconnection.

If #2 is tools-db or a custom instance, then only your "insert result in 2" taking over 60s will cause a #1 labsdb replica disconnect.

In both cases you're still free to run queries over 60s on labsdb replicas without disconnection. Tools that experience disconnection for any reason, whether due to idle timeout or some other issue, need to be able to reconnect; relying on persistent connections is simply never going to work in all cases.

As for raising the 60s idle limit, we need some value. @coren?

It's pretty clear that any tool that aims for reliability necessarily must have the ability to reconnect if the connection is dropped between transactions. It's not clear that values much over 60s are useful (any limit will end up being a problem if the code doesn't reconnect) but I could see 120s as a reasonable compromise to cover cases where a bit of unplanned sluggishness causes intermittent issues.

All of that said, a tool that breaks because it does not have error handling to recover from a connection to the DB being disconnected can never expect to be stable or reliable in the long term and needs to be fixed.

So how to proceed here (asking as this has Priority "Unbreak now", if that urgency is realistic)?

coren lowered the priority of this task from Unbreak Now! to High.Feb 16 2015, 1:50 PM

It's a serious pain point for some tool maintainers, but it can also be fixed tool-side. I'm lowering to "high" because we realistically want to do some tweaking server side sooner rather than later.

scfc closed this task as Declined.Apr 6 2015, 1:29 AM
scfc added a subscriber: scfc.

I understand this task to be "bring back uninterrupted connections", and this seems to be off the table. Increasing the timeout from one minute to two is, as already mentioned :-), a "compromise" that will not free tool developers from designing their applications with the possibility in mind that the servers will punish you if you idle.

Depending on the individual use case, there can be work-arounds like implementing some automatic reconnect if you don't need transactions, or doing data crunching in a user database on the replica servers, and then transferring the results en bloc to somewhere else.

Multichill reopened this task as Open.Apr 6 2015, 6:56 AM

Just declining (lack of) usability bugs is not the way to go forward.

scfc added a comment.Apr 6 2015, 8:51 AM

Oh, if I had only known that me summarizing the comments in this task and thinking about the different implications of different approaches could be shortened to "just declining", I would have indeed just declined.

valhallasw closed this task as Declined.EditedJul 2 2015, 8:18 PM
valhallasw added a subscriber: valhallasw.

I don't see what we can reasonably do here. An infinite connection limit is off the table, and a minute does not seem too unreasonable [1]. If there is a specific other time limit you think would work, please re-open the task.

Solving it client-side should be possible with MYSQL_OPT_RECONNECT with mysql_ping(): https://dev.mysql.com/doc/refman/5.0/en/auto-reconnect.html
or by just catching 'server has gone away' and reconnecting.

[1] Although the configuration seems to specify wait_timeout = 3600 for labsdb, so it should be an hour rather than a minute.
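For the "just catching 'server has gone away'" route, the first step is recognizing which driver errors actually mean the connection was dropped. A small sketch, assuming the driver's exceptions carry `(errno, message)` in `.args` as pymysql's `OperationalError` does; the errno values come from the MySQL client library:

```python
# 2006 is CR_SERVER_GONE_ERROR ("MySQL server has gone away");
# 2013 is CR_SERVER_LOST ("Lost connection to MySQL server during query").
RECONNECTABLE = {2006, 2013}

def is_gone_away(exc):
    """True if a driver exception indicates a dropped connection
    (worth reconnecting and retrying) rather than a query error."""
    return bool(exc.args) and exc.args[0] in RECONNECTABLE
```

Query errors like syntax mistakes should still propagate; only the two connection-loss codes are worth a reconnect-and-retry.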

Restricted Application added a project: Cloud-Services.Jul 2 2015, 8:18 PM