Jul 20 2021
Following up on a previous discussion: noting some concerns that the current implementation (copying the latest backup over to a separate "latest" file) may cause. It's not a major problem, but definitely something that can be optimized.
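To illustrate the concern, a minimal sketch; the script shape, paths, and file names below are assumptions, not the playbook's actual code:

```
# Assumed current shape: a full copy doubles disk usage and I/O for the
# duration of the copy (hypothetical paths and file names).
cp /srv/gitlab-backup/1626739200_2021_07_20_gitlab_backup.tar \
   /srv/gitlab-backup/latest.tar

# Possible optimization: a hard link on the same filesystem costs no extra
# space or copy time, while still exposing a stable "latest" name.
ln -f /srv/gitlab-backup/1626739200_2021_07_20_gitlab_backup.tar \
      /srv/gitlab-backup/latest.tar
```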
Jun 9 2021
A bit of context: the plan was to hold 7 days' worth of backups locally, but in order to test local retention and its interaction with Bacula, this number was to be reduced to 3 days for the duration of the early deployment stage. @wkandek correct me if I'm wrong, please.
Jun 4 2021
It may contain sensitive data, particularly /etc/gitlab/gitlab-secrets.json. If you guys believe a single backup set is safe enough for both general backup and configuration/secrets, then it's fine with us.
On a different note, I'd like to bring up again the separate backup set for the GitLab configuration backup, which we discussed earlier (if memory doesn't fail me): /etc/gitlab/config_backup
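If I remember correctly, Omnibus even ships a helper that writes to exactly that location; a minimal sketch:

```
# Creates a timestamped archive of /etc/gitlab (including gitlab-secrets.json)
# under /etc/gitlab/config_backup/ by default.
sudo gitlab-ctl backup-etc
```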
Also, guys, can you verify that the manual UI configurations were done after running Ansible?
Jun 1 2021
Yes, this is reasonable. These two variables need to be updated accordingly (and Ansible redeployed):
May 28 2021
Thanks! I confirm we can now log in to Gitlab.
May 27 2021
Looking at gitlab1001, I only see a root volume:
Is this something you guys will be changing (and deploying)? Just to verify that we are on the same page here.
I believe this is going to be finished today; you should not get more of these. Sorry about that.
The installation has not been completed yet; the server has been brought down intentionally, so backup failures are expected at this time.
May 26 2021
Considering that the installation is in an unfinished state right now (especially the default password reset part, and then the UI configuration), the safe way would be to shut GitLab down with gitlab-ctl until it can be finished. Alternatively, the password can be reset with the CLI (sudo gitlab-rake "gitlab:password:reset", see https://docs.gitlab.com/ee/security/reset_user_password.html) and the installation left alone until tomorrow.
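In other words, one of these two (the rake task is the one from the linked documentation):

```
# Option 1: stop the whole instance until the installation can be finished.
sudo gitlab-ctl stop

# Option 2: reset the root password from the CLI and leave it running.
sudo gitlab-rake "gitlab:password:reset"
```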
Yes, it seems exactly like it; everything else seems fine so far.
May 25 2021
On closer inspection, this setting is not directly monitoring-related in the context of this ticket. It configures the Prometheus server for GitLab's UI Prometheus integration: https://docs.gitlab.com/ee/user/project/integrations/prometheus.html
May 21 2021
May 19 2021
From now on, this is going to be the default:
```
# Monitoring configuration
gitlab_prometheus_enable: "false"
gitlab_grafana_enable: "false"
gitlab_alertmanager_enable: "false"
gitlab_gitlab_exporter_enable: "false"
gitlab_node_exporter_enable: "false"
gitlab_postgres_exporter_enable: "false"
gitlab_redis_exporter_enable: "false"
```
May 18 2021
Along with the other settings for an external Prometheus server.
May 17 2021
Here's the LICENSE file, finally, in the root of the repo; please review: https://gerrit.wikimedia.org/r/plugins/gitiles/operations/gitlab-ansible/+/refs/heads/master/LICENSE
May 14 2021
As soon as we have a running production Gitlab instance I will post an update here with the list of Prometheus exporter endpoints to pull.
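As a quick sanity check once the instance is up, each endpoint can be pulled directly; a sketch, assuming the exporters' default ports (e.g. 9100 for node_exporter):

```
# Hypothetical example; the definitive list of endpoints will follow.
curl -s http://gitlab1001:9100/metrics | head
```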
May 13 2021
This is weird; I vividly remember we added a LICENSE file to the repo. The original GitLab-suggested playbook we started with was licensed under MIT. We'll verify and re-add the license to the repo.
May 10 2021
More good news: inspired by this issue (https://gitlab.com/gitlab-org/gitlab/-/issues/24510), @Sfigor was able to make logout work on the GitLab side.
Requesting some more information on the current LDAP schema and which attributes can and should be used for key mapping (https://github.com/tduehr/omniauth-cas3), specifically these:
Anyway, given that the decision to stick with SSO has been made, we're going with CAS, keeping the following limitations in mind:
- No SSH keys import/sync at the moment (can probably be implemented at later stages with fetch_raw_info)
- No group membership (turns out that was never a requirement)
- No git remote passwords from SSO
- Logout in its current state
These were all addressed (please review). Please note that some options are overridden in hostvars for production.
May 8 2021
Just to confirm that we are on the same page here: Logstash agents are installed and configured by Puppet, and we're only providing the list of logs to be ingested and configuring the output format if needed, right?
May 7 2021
For the moment, I'll just put it here for the record: the documentation for the third-party OmniAuth CAS strategy module (https://github.com/tduehr/omniauth-cas3), which sheds light on its configuration options and features.
One thing that should be made explicit: we took into account the amount of disk and resources needed to store backups in our long-term storage (Bacula). Any resources needed to generate the tarballs and store them short-term until they are picked up by Bacula (similar to what we do for databases, where we keep a couple of recent exports in case something goes wrong with the export process) will have to be accounted for on your side and budget (e.g. extra disk space on the local GitLab hosts for one or several exports) and added to the annual plan on your own; basically, the pure service needs outside of remote backups.
We wanted to make this explicit to prevent confusion.
Ack, we'll make sure we have this covered on our side.
If by "shipped" you guys mean ingested by Logstash into an Elasticsearch instance, then yes, this was the plan.
May 3 2021
Gitlab outbound mail configuration is done:
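As a smoke test (per the GitLab SMTP documentation; the destination address below is a placeholder), outbound mail can be verified from the Rails console:

```
sudo gitlab-rails console
# then, inside the console:
Notify.test_email('someone@example.com', 'Test subject', 'Test body').deliver_now
```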
Apr 30 2021
Here it is, requesting settings review:
Apr 14 2021
Using Gerrit backups as a baseline makes sense. What components are currently included in the hourly Gerrit backups? What is the retention policy for build artifacts and build data (logs, etc.)? Any information would be valuable here. Thank you.
The overall plan is to utilize GitLab's built-in backup, which (to a point) takes care of backup consistency, including the database (PostgreSQL) and repositories. It backs up components separately, as large tar.gz files:
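A typical invocation looks like this (GitLab 12.2+; older versions use `gitlab-rake gitlab:backup:create` instead):

```
# Produces a timestamped archive of per-component tarballs (db, repositories,
# uploads, artifacts, ...) under the configured backup path.
sudo gitlab-backup create
```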
Apr 13 2021
It is possible to have multiple backup sets with different components included: for example, a weekly full backup of all components, and a daily backup that excludes artifacts and CI builds, saving backup space and reducing backup runtime (see the sketch below). Let us know if this is a strategy we should proceed with.
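A hedged sketch of what that could look like in cron, using the documented SKIP variable; the schedule and skip list here are illustrative only:

```
# Weekly full backup on Sunday; daily partial backup skipping the heavy parts.
0 2 * * 0   /opt/gitlab/bin/gitlab-backup create CRON=1
0 2 * * 1-6 /opt/gitlab/bin/gitlab-backup create SKIP=artifacts,builds CRON=1
```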
Apr 12 2021
As per Gitlab suggestion (https://docs.gitlab.com/omnibus/settings/backups.html):
It is recommended to keep a copy of /etc/gitlab, or at least of /etc/gitlab/gitlab-secrets.json, in a safe place. If you ever need to restore a GitLab application backup you need to also restore gitlab-secrets.json. If you do not, GitLab users who are using two-factor authentication will lose access to your GitLab server and 'secure variables' stored in GitLab CI will be lost. It is not recommended to store your configuration backup in the same place as your application data backup, see below.
Apr 8 2021
@jbond Also, any feedback is welcome and expected, please let us know. Thanks!
Apr 7 2021
Apr 6 2021
Sounds like a plan to me then.
Mar 23 2021
Mar 19 2021
Something missing from the docs?
Ahh yes, I have placed the LDAP cn=admin password in idp01.sso.eqiad1.wikimedia.cloud:/root/ldap
Mar 11 2021
Mar 10 2021
The service still has to be notified somehow, at least to reload certificates. It can be implemented as simply as a hook (a predefined script, for example) that is called on a change. We will manage the script within the installation.
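Something as simple as this would do for the hook; a sketch, using the Omnibus way of reloading nginx without a full restart:

```
#!/bin/bash
# Called by the certificate deployment tooling whenever the cert changes.
set -e
/opt/gitlab/bin/gitlab-ctl hup nginx
```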
Mar 9 2021
Mar 8 2021
Mar 5 2021
From our perspective, GitLab-managed, auto-renewed Let's Encrypt certificates are the most straightforward and preferable solution, and this is what we're starting with unless you guys find otherwise.
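For the record, Omnibus schedules the renewals itself; a renewal check can also be forced manually (sketch):

```
sudo gitlab-ctl renew-le-certs
```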
Mar 3 2021
Thanks, let's keep it this way!
Mar 2 2021
@jbond It's an outcome of me trying to separate personal and S&F accounts here, sorry about that. I updated the ticket with the correct _shell username_ (strofimovsky01), hope this helps.
Allow me to stress that the SSH port for GitLab is a long-term choice. Whatever is decided here will have to be carried for a long time; it will stay in numerous remote origins forever. My +1 to a standard port here.
GitLab takes a bit of the opposite approach here. The GitLab server manages its own user key database, and it can also sync user keys in from LDAP. They do indeed support AuthorizedKeysCommand, but in their own way: it is used to quickly look up a key in the local database, alleviating issues with huge authorized_keys files. I think this has even become the default for some configurations lately.
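Per GitLab's fast-lookup documentation, the sshd side looks roughly like this (Omnibus paths):

```
# /etc/ssh/sshd_config excerpt: key lookups for the git user go through
# gitlab-shell, which queries the local GitLab database.
Match User git
  AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
  AuthorizedKeysCommandUser git
```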
Mar 1 2021
Thank you, guys! That should be enough to start working on load tests; I'm sure we'll come up with more follow-up questions along the way.
Given the ratio between the number of people who will be using git over SSH and the number of people who will log in to manage the system, I would spend an extra minute here trying to stick with the default SSH port for git. Here are a few options that I believe may balance convenience and security concerns:
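For illustration only (one common arrangement, not necessarily one of the options referenced above): keep port 22 for git and move the administrative sshd to a non-standard port:

```
# /etc/ssh/sshd_config excerpt (hypothetical): the admin sshd moves off 22,
# leaving 22 free for a dedicated git-facing sshd instance.
Port 2222
```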
This is the confirmation that the L3 document is signed. You signed this document on Fri, Feb 26, 7:18 PM.
Feb 26 2021
Trying to untangle things by registering with the S&F email now; here's my info: