Suddenly these alerts popped up:
```
18:30 < icinga-wm> PROBLEM - MariaDB Slave IO: m3 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:30 < icinga-wm> PROBLEM - MariaDB Slave IO: s1 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave SQL: s4 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave IO: s3 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave SQL: s5 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave SQL: s6 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave IO: s6 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave SQL: m2 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave IO: s7 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave SQL: x1 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave SQL: m3 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:31 < icinga-wm> PROBLEM - MariaDB Slave IO: x1 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:32 < icinga-wm> PROBLEM - MariaDB Slave IO: m2 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:32 < icinga-wm> PROBLEM - MariaDB Slave SQL: s1 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:32 < icinga-wm> PROBLEM - MariaDB Slave SQL: s2 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:32 < icinga-wm> PROBLEM - MariaDB Slave SQL: s3 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:32 < icinga-wm> PROBLEM - MariaDB Slave IO: s2 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:33 < icinga-wm> PROBLEM - MariaDB Slave SQL: s7 on dbstore2002 is CRITICAL: CRITICAL slave_sql_state could not connect
18:33 < icinga-wm> PROBLEM - MariaDB Slave IO: s4 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
18:33 < icinga-wm> PROBLEM - MariaDB Slave IO: s5 on dbstore2002 is CRITICAL: CRITICAL slave_io_state could not connect
```
But dbstore2002 was up and running, though with **--skip-slave-start**.
```
18:33 < mutante> that may look scary but it's all the same server, dbstore2002 and that is up with mysql running and it has --skip-slave-start
18:33 < mutante> i guess that can explain it. but doesnt mean i know why
18:33 < mutante> jynus:
18:37 < mutante> i would call if it was the actual db servers, but with just one of the dbstores i think not
```
```
root@dbstore2002:~# ps aux | grep maria
root 3427 0.0 0.0 4440 656 ? S Mar18 0:00 /bin/sh /opt/wmf-mariadb10/bin/mysqld_safe --datadir=/srv/sqldata --pid-file=/srv/sqldata/dbstore2002.pid --skip-slave-start
root 4220 0.0 0.0 11868 924 pts/2 S+ 01:35 0:00 grep maria
mysql 40350 212 51.0 146328192 67414708 ? Sl Mar18 429639:04 /opt/wmf-mariadb10/bin/mysqld --basedir=/opt/wmf-mariadb10 --datadir=/srv/sqldata --plugin-dir=/opt/wmf-mariadb10/lib/plugin --user=mysql --skip-slave-start --log-error=/srv/sqldata/dbstore2002.err --open-files-limit=400000 --pid-file=/srv/sqldata/dbstore2002.pid --socket=/tmp/mysql.sock --port=3306
```
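As a sketch, the flag can be confirmed from a shell by inspecting the running mysqld's command line (on the live host this would come from `ps -o args= -C mysqld`; here the command line captured above is inlined so the snippet is self-contained):

```shell
#!/bin/sh
# Sketch: check whether mysqld was started with --skip-slave-start.
# On the live host you would capture the real arguments, e.g.:
#   args=$(ps -o args= -C mysqld)
# For illustration we reuse the command line from the ps output above.
args='/opt/wmf-mariadb10/bin/mysqld --basedir=/opt/wmf-mariadb10 --datadir=/srv/sqldata --user=mysql --skip-slave-start'

case "$args" in
  *--skip-slave-start*)
    echo "replication threads will NOT start automatically" ;;
  *)
    echo "replication threads start with the server" ;;
esac
```

With that flag set, replication stays stopped until someone resumes it by hand (on a multi-source MariaDB host like this one, `START ALL SLAVES;`), which is why the Slave IO/SQL checks remain critical in the meantime.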
Yea, so it runs with `--skip-slave-start`, and that explains that Icinga can't check for slave lag? But is it normal, and why now? Monitoring stopped collecting information at that time too:

{F4340678}

Edit: (`--skip-slave-start` is normal, it is just like that when the server is manually started)