MediaWiki splits caches per version. Will there be performance impacts due to rolling out many trains in a week?
@hashar had some thoughts here.
I asked @akosiaris for ServiceOps thoughts on Tuesday; he said he'd check in with others and get back with any specific concerns.
I see @dpifke has added this to the performance team's radar, and I've also DM'd a few questions.
Are there any places we should monitor closely throughout the week? Any specific concerns or things you expect to break?
ParserCache (for one) is *not* split by version. This is actually sometimes an issue: if a ParserCache-affecting change rolls out, there's no way to roll back the potentially-wrong content already pushed into the ParserCache. "Best practice" is to first roll out, on one train, compatibility code which knows how to deal with the "new content", then wait a full "train cycle" before pushing out the code which actually changes the content stuffed into the ParserCache... but of course, it's when the unexpected happens that trouble arises.
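A minimal sketch of that two-step pattern, with made-up function names and cache layouts rather than actual MediaWiki code:

```python
# Hypothetical sketch of the "compatibility first" pattern: train N ships a
# reader that understands both cache layouts; the writer change waits for N+1.

def read_cached_output(entry: dict) -> str:
    """Reader shipped on train N: accepts old and new ParserCache layouts."""
    if "html" in entry:   # new layout, only written starting with train N+1
        return entry["html"]
    return entry["text"]  # old layout still sitting in the ParserCache

def write_cached_output(html: str) -> dict:
    """Writer switched to the new layout on train N+1, a full cycle later."""
    return {"html": html}
```

If both changes ship on the same train and it gets rolled back, the old reader is left facing new-layout entries it can't parse; the one-cycle wait is what avoids that.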
So perhaps this is saying that ParserCache *should* be split by version -- maybe not fully, but at least with cached content tagged with the MediaWiki version which generated it, so that it can be efficiently/routinely purged when a rollback is done.
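A minimal sketch of what that tagging could look like, assuming hypothetical names and version strings rather than the real ParserCache API: each entry records the MediaWiki version that wrote it, and a rollback adds that version to a deny-list so its entries behave like cache misses.

```python
# Hypothetical sketch: tag each cache entry with the writing MediaWiki version
# so a rollback can cheaply invalidate everything a bad version produced.

CURRENT_VERSION = "1.43.0-wmf.5"    # made-up version string
BAD_VERSIONS: set[str] = set()      # populated when a train is rolled back

cache: dict[str, dict] = {}

def cache_set(key: str, html: str) -> None:
    cache[key] = {"mw_version": CURRENT_VERSION, "html": html}

def cache_get(key: str) -> str | None:
    entry = cache.get(key)
    if entry is None or entry["mw_version"] in BAD_VERSIONS:
        return None                 # bad-version content reads as a miss
    return entry["html"]

# After rolling back a bad train:
# BAD_VERSIONS.add("1.43.0-wmf.5")
```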
Talked a bit with the team; opcache-wise, we don't see any big risk in going from 3 to 4 trains.