User Details
- User Since
- Jun 7 2021, 7:25 AM (119 w, 4 d)
- Availability
- Available
- LDAP User
- Jelto
- MediaWiki User
- JWodstrcil (WMF)
Yesterday
Thanks for spotting and applying the change. This is resolved now, thanks.
Thu, Sep 21
Per discussion with @LSobanski we'll skip the switchover for planet; it's already running in codfw.
I'll close this task; please re-open if you still have problems importing from GitLab.com.
Wed, Sep 20
Planet is running in codfw already (planet2002) and was not switched back. See https://gerrit.wikimedia.org/r/plugins/gitiles/operations/dns/+/refs/heads/master/templates/wmnet#908
Tue, Sep 19
Mon, Sep 18
A bit more context: We had this issue some time ago and added SPF records in T328642.
Service is deployed to all wikikube clusters:
Thu, Sep 14
Tue, Sep 12
Thanks @aborrero for fixing all WMCS runners!
Similar to last time (T343646#9074005), a puppet run happened during that window:
Mon, Sep 11
Host reimaged and post-installation steps done.
I can take care of that today.
Probably similar to T343646.
Thu, Aug 31
Onboarding to the new deployment workflow and repos has happened. @fkaelin feel free to close the task if you don't have any more questions.
Usage of local gems is enabled on the test instance now. Two additional files were created /opt/gitlab/embedded/service/gitlab-rails/Gemfile.local and /opt/gitlab/embedded/service/gitlab-rails/Gemfile.local.lock.
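For reference, a Gemfile.local of this kind typically just declares the extra gems to load from a local path; a minimal sketch (the gem name and path below are hypothetical, not the ones actually used on the test instance):

```ruby
# /opt/gitlab/embedded/service/gitlab-rails/Gemfile.local
# Hypothetical example: load a locally vendored gem for testing.
gem 'my-local-plugin', path: '/opt/gitlab/local-gems/my-local-plugin'
```

Bundler merges this with the main Gemfile when local gem usage is enabled, and Gemfile.local.lock pins the resolved versions separately from the packaged Gemfile.lock.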
SRE can take care of refactoring the puppet code. Thanks for bringing that up!
Tue, Aug 29
Thank you! Yes, sure, that works. Let's do a short session tomorrow.
@sbassett feel free to remove the security ACL from this task, as the issue is fixed and the task can be public now.
In https://gitlab.wikimedia.org/repos/sre/miscweb/bugzilla/-/commit/1d0b575e5119c293305c62185615331ca7848e38 I removed all unavailable resources such as CSS, JavaScript and images. I used the following sed commands:
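The actual commands aren't shown in this feed; a hypothetical sketch of the kind of sed cleanup described, stripping stylesheet links, inline script tags and images from archived HTML pages (patterns and paths are illustrative only):

```shell
# Hypothetical sketch: remove references to unavailable resources
# (CSS links, script tags, images) from every archived HTML page.
find . -name '*.html' -print0 | xargs -0 sed -i -E \
  -e 's#<link[^>]*>##g' \
  -e 's#<script[^>]*>[^<]*</script>##g' \
  -e 's#<img[^>]*>##g'
```

Note this only handles tags that open and close on one line; multi-line script blocks would need a different tool.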
Mon, Aug 28
Some observations from using gzipped content again:
Fri, Aug 25
I deployed the new miscweb bugzilla image from GitLab, which includes the refactored storage and serving of uncompressed HTML files. It seems the problem has just changed: using curl causes little CPU usage, while visiting Bugzilla in a browser causes significant throttling. This is mostly the opposite of the initially suspected problem.
Thu, Aug 24
As mentioned in T300171#9117180 I refactored the future Bugzilla Docker image, which is built on GitLab (see https://gitlab.wikimedia.org/repos/sre/miscweb/bugzilla).
Aug 23 2023
Reassigning to @fkaelin
Aug 21 2023
I'm closing the task for now. I'll do more troubleshooting in case this happens again.
Adjusting tags for DC-Ops (they need ops-codfw tag instead of team tag to proceed).
Aug 18 2023
Picking up the discussion from last week's meeting and your mail, @MatthewVernon:
Aug 17 2023
On the infrastructure side everything is prepared for switching both services to GitLab and Kubernetes/wikikube.
Maybe it's possible to add a new GitLab flavored markdown (GLFM) extension which understands Phabricator task ids:
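The core of such an extension would be a filter that recognizes Phabricator task ids and rewrites them as links. A minimal sketch of that idea in Ruby (the method name is made up, and this ignores GitLab's actual filter/extension API; only the regex and the target URL pattern are the point):

```ruby
# Hypothetical sketch: turn bare Phabricator task ids like T328642
# into markdown links pointing at phabricator.wikimedia.org.
PHAB_TASK = /\bT(\d+)\b/

def link_phab_tasks(text)
  text.gsub(PHAB_TASK) do
    "[T#{$1}](https://phabricator.wikimedia.org/T#{$1})"
  end
end
```

A real GLFM extension would additionally have to avoid rewriting ids inside code spans and existing links.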
Note: the increased number of replicas was reverted with the last deployment of miscweb in codfw. I left it at the default (two replicas) for now.
Aug 16 2023
Cleanup of the puppet code is done and most CAS references have been removed.
Aug 15 2023
Which remotes are configured in your local repo? You can check with git remote show origin.
The warning "redirecting to https://gitlab.wikimedia.org/repos/data-engineering/airflow-dags.git/" looks a bit suspicious.
Both services have been migrated to GitLab (https://gitlab.wikimedia.org/repos/sre/miscweb/wikiworkshop and https://gitlab.wikimedia.org/repos/sre/miscweb/research-landing-page) and deployed to wikikube staging. Everything looks good so far.
Aug 14 2023
Your screenshot shows you are using https://gitlab-replica.wikimedia.org. Is this intentional?
Aug 11 2023
Images should be available now. I'm closing the task. Please re-open if you have any problems with using the images in CI.
Aug 10 2023
Currently all pages in Bugzilla are stored in around 150k separate gz archives. They have a total size of ~1.2GB on disk and ~1.5GB in the Docker images. For a test I extracted all archives; uncompressed, they need 3.6GB.
3.6GB should be a reasonable size if we decide to serve all content uncompressed.
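The counts above can be reproduced with standard tools; a sketch, assuming the archives live under a docroot directory (the path is hypothetical):

```shell
# Hypothetical path; the idea is: count archives, compare sizes.
find /srv/bugzilla-static -name '*.gz' | wc -l   # number of gz archives
du -sh /srv/bugzilla-static                      # total compressed size
# Decompress a scratch copy to measure the uncompressed footprint:
cp -r /srv/bugzilla-static /tmp/bugzilla-uncompressed
gunzip -r /tmp/bugzilla-uncompressed
du -sh /tmp/bugzilla-uncompressed                # uncompressed size
```

Working on a copy keeps the original compressed tree intact for comparison.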
Thanks for opening the request!
Aug 9 2023
Thanks for the feedback. I'll prepare an access request for you, @fkaelin, with your shell user fab for the deployment group (so you are allowed to log in to the deployment server).