
Graphite1001 disk usage at 96%
Closed, Resolved, Public

Description

While graphite1004 still needs to be put in service (T196484), graphite1001 has reached 96% disk utilization.

Growth during the last ~20 days looks like it has been driven mostly by ores and zuul. Per-prefix counts of metrics created (a sketch for reproducing this tally follows the list):

220856 ores
 34543 zuul
  8889 servers
  2943 MediaWiki
  2612 frontend
  2518 webpagetest
  1092 eventstreams
   633 daily
   562 restbase
   523 librenms
   372 varnish
   308 aqs
   286 test_joal
   108 nodepool
    78 mw
    48 parsoid
    46 graphoid
    36 tilerator
    28 labstore
    24 logstash
    18 wikibase
    18 swift
    12 thumbor
    12 mobileapps
    12 changeprop
     6 proton
     6 eventlogging
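
For reference, a minimal sketch of how a per-prefix tally like the one above could be produced. It assumes whisper files live under /var/lib/carbon/whisper (the common default, not verified for this host) and uses file ctime as a rough proxy for when a metric was created:

#!/usr/bin/env python3
# Sketch: count whisper files created in the last ~20 days, grouped by
# top-level metric prefix. Assumes the default whisper root; adjust as needed.
import os
import time
from collections import Counter

WHISPER_ROOT = "/var/lib/carbon/whisper"  # assumption: default carbon path
CUTOFF = time.time() - 20 * 86400         # "created" within the last ~20 days

counts = Counter()
for dirpath, _dirnames, filenames in os.walk(WHISPER_ROOT):
    for name in filenames:
        if not name.endswith(".wsp"):
            continue
        path = os.path.join(dirpath, name)
        # st_ctime is only an approximation of creation time on Linux.
        if os.stat(path).st_ctime >= CUTOFF:
            prefix = os.path.relpath(path, WHISPER_ROOT).split(os.sep)[0]
            counts[prefix] += 1

for prefix, count in counts.most_common():
    print(f"{count:7d} {prefix}")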

Event Timeline

  1. ores appears to be capturing worker-specific metrics at ores.<server_name>.uwsgi.worker.<worker id>.(...). The <worker id> field appears variable and unpredictable; depending on the implementation, this could be the source of the ballooning usage and would produce sparse data (e.g. if <worker id> is a thread number with a short lifespan). Total in uwsgi alone: ~183,000 metrics. (A sketch of a matching drop rule follows this list.)
  2. zuul might also be a concern, though it is worth evaluating how useful these metrics are. Each extension name in zuul.pipeline.(postmerge|gate-and-submit|test).mediawiki.extensions.<extension name> gets 18 metrics and appears to follow the files. Those metrics likely become useless if an extension is removed or renamed. Total: ~39,000 metrics.
  3. There are 637 servers no longer reporting data to servers.<hostname>.(...), and 577 of them do not appear in monitoring. It's possible that the nodes which are still in monitoring but not reporting are affected by T183454. Total: ~151,000 metrics.
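
Regarding point 1, one option for stopping the per-worker ores metrics on the Graphite side (independent of fixing collection in ores itself) is carbon's blacklist support (a regex per line in blacklist.conf, with USE_WHITELIST enabled in carbon.conf). The pattern and sample metric names below are assumptions meant to illustrate the shape described above, not verified ores paths:

import re

# Candidate blacklist.conf entry (one regex per line in that file):
#   ^ores\.[^.]+\.uwsgi\.worker\.
# The sample names are hypothetical illustrations of the
# ores.<server_name>.uwsgi.worker.<worker id>.(...) hierarchy.
PER_WORKER = re.compile(r"^ores\.[^.]+\.uwsgi\.worker\.")

samples = [
    "ores.ores1001.uwsgi.worker.17.requests",   # would be dropped
    "ores.ores1001.uwsgi.core.busy_workers",    # would be kept
]
for name in samples:
    action = "drop" if PER_WORKER.search(name) else "keep"
    print(f"{action}  {name}")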

I suggest two things:

  1. Disable the ores uwsgi metrics collection.
  2. Remove the hosts in servers.<hostname> that are dead or no longer reporting metrics (a sketch for identifying them follows).
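
A minimal sketch of how stale servers.<hostname> trees could be identified before removal. It assumes the default whisper path and treats a host as stale when none of its whisper files have been written in the last 90 days; both are assumptions, not values established in this task:

import os
import time

WHISPER_ROOT = "/var/lib/carbon/whisper"        # assumption: default carbon path
SERVERS_ROOT = os.path.join(WHISPER_ROOT, "servers")
CUTOFF = time.time() - 90 * 86400               # assumed staleness threshold

for host in sorted(os.listdir(SERVERS_ROOT)):
    host_dir = os.path.join(SERVERS_ROOT, host)
    if not os.path.isdir(host_dir):
        continue
    newest = 0.0
    for dirpath, _dirs, files in os.walk(host_dir):
        for name in files:
            if name.endswith(".wsp"):
                newest = max(newest, os.stat(os.path.join(dirpath, name)).st_mtime)
    if newest and newest < CUTOFF:
        # Candidate for removal; actual deletion is left as a manual step.
        print(f"stale: servers.{host} (last write {time.ctime(newest)})")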

I estimate that, at the current rate of utilization, the disk will be full in less than 15 days (or sooner if there is a surge in metric creates).
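
For illustration, a back-of-the-envelope projection behind that kind of estimate; the mount point and daily growth figure here are hypothetical placeholders, not measured values from graphite1001:

import shutil

usage = shutil.disk_usage("/var/lib/carbon")    # assumption: whisper volume mount point
daily_growth_bytes = 3 * 1024**3                # placeholder: ~3 GiB/day of new whisper data

days_until_full = usage.free / daily_growth_bytes
print(f"{usage.used / usage.total:.0%} used, ~{days_until_full:.1f} days until full")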

jijiki triaged this task as Medium priority.

Resolving; we're now on new graphite hardware with more resources.