User Details
- User Since
- Nov 2 2014, 11:35 PM (498 w, 1 d)
- Availability
- Available
- IRC Nick
- andrewbogott
- LDAP User
- Unknown
- MediaWiki User
- Andrewbogott
Wed, May 15
That's correct. Of course the owner of the db project can manage project access, so ideally they keep things in sync manually.
Mon, May 13
<snip>
I'm still thinking about the use cases
This task is specifically tackling this one:
- As a tool, I want to be able to access the s3 buckets I created (from horizon) from within toolforge
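For concreteness, access from inside Toolforge would look roughly like the boto3 sketch below; the endpoint URL and credentials are placeholders, not the real object-storage service.

```
# Hedged sketch of a tool reading its own S3 buckets from inside Toolforge.
# The endpoint and credentials are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.org",  # placeholder S3-compatible endpoint
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
)

# List the buckets this credential can see.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```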
I've now learned that the new prod puppetservers also use the 'gitpuppet' user, so eliminating that user would increase the diff with prod rather than shrink it. That's not the right path forward; probably I should just figure out a fix for case 1.
Thu, May 9
Rivers change course, civilizations rise and fall, and I have finally done some work on this task.
Reimaging cloudcontrol2006-dev works now, thanks!
I'm closing this as invalid since those hosts have come and gone :)
Wed, May 8
I'm hitting a roadblock with the service user plan -- because of keystone's belt-and-suspenders approach to security, I can override the policy to allow an admin user to create app creds for another user (e.g. novaadmin creating creds for tool.mytool), but there's an explicit check in the code that compares the request context's user ID to the credential's user ID and errors out. IMO this is a keystone bug (https://launchpad.net/bugs/2065212), but it's unlikely to be changed upstream anytime soon.
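For reference, a rough sketch of the blocked call, assuming a placeholder keystone endpoint, an admin token for novaadmin, and a made-up user ID for tool.mytool:

```
# Hypothetical illustration: even with the policy overridden, keystone's
# hard-coded user-ID check rejects an admin creating an application
# credential on behalf of a different user.
import requests

KEYSTONE = "https://keystone.example.org:25000/v3"  # placeholder endpoint
ADMIN_TOKEN = "gAAAA-placeholder"                   # placeholder token for novaadmin
TOOL_USER_ID = "0123456789abcdef"                   # placeholder user ID for tool.mytool

resp = requests.post(
    f"{KEYSTONE}/users/{TOOL_USER_ID}/application_credentials",
    headers={"X-Auth-Token": ADMIN_TOKEN},
    json={"application_credential": {"name": "mytool-object-storage"}},
)

# Expected result: an error, because the token's user ID does not match
# TOOL_USER_ID, regardless of what the policy override allows.
print(resp.status_code, resp.text)
```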
Tue, May 7
So many questions!
Mon, May 6
My favorite option is 'Automatic creation of per-tool keystone project'. Since that's a simple extension of 'On-demand creation of per-tool keystone project', I'm going to start with the on-demand version (with a CLI tool rather than an API endpoint for now).
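As a sketch of what that CLI tool might look like (assuming openstacksdk; the cloud name, project naming scheme, and role name below are made up, not the final design):

```
#!/usr/bin/env python3
# Rough sketch of an on-demand per-tool project CLI -- not the actual tool.
import argparse
import openstack


def ensure_tool_project(tool_name: str) -> None:
    conn = openstack.connect(cloud="codfw1dev")  # placeholder cloud name
    project_name = f"tool-{tool_name}"           # hypothetical naming scheme

    # Create the project if it doesn't exist yet.
    project = conn.identity.find_project(project_name)
    if project is None:
        project = conn.identity.create_project(
            name=project_name,
            description=f"Per-tool project for {tool_name}",
        )

    # Grant the tool's service user access to its project (names are assumptions).
    user = conn.identity.find_user(f"tool.{tool_name}")
    role = conn.identity.find_role("member")
    conn.identity.assign_project_role_to_user(project, user, role)
    print(f"Project {project.name} ready ({project.id})")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Create a per-tool keystone project")
    parser.add_argument("tool_name")
    ensure_tool_project(parser.parse_args().tool_name)
```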
I'm striking out the 'keystone projects in ldap' option because keystone doesn't really support that one.
Wed, May 1
I think this is now cleaned up and resolved. In the future, I suspect that deleting canary VMs before deleting hypervisors will prevent them from showing up here, but 'openstack resource provider delete' might still be needed.
Ok, I think I found them! These deleted hosts can be cleaned up with
Removing hardware records from the DB seems a bit dangerous, as that could leave dangling references elsewhere (for instance in the action log, which keeps track of any previous actions a VM took, including a reference to where the VM was at the time).
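To illustrate the placement side of that cleanup, a hedged sketch (assuming openstacksdk with placement support and a placeholder cloud name; not the exact commands that were run) that lists resource providers with no matching hypervisor:

```
# Hedged sketch: find placement resource providers that no longer
# correspond to an existing hypervisor. Cloud name is a placeholder.
import openstack

conn = openstack.connect(cloud="eqiad1")  # placeholder cloud name

hypervisors = {h.name for h in conn.compute.hypervisors()}
for rp in conn.placement.resource_providers():
    if rp.name not in hypervisors:
        print(f"orphaned resource provider: {rp.name} ({rp.id})")
        # conn.placement.delete_resource_provider(rp)  # uncomment to actually delete
```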
Inasmuch as Trove works for this, the integration is also working.
Thu, Apr 25
I (Andrew) am accepting this task to investigate deprecation warnings in these services and (probably) take over maintenance of python-flask-keystone.
From a meeting about these services today:
The Toolforge exim server is using an experimental feature to support forwarding to Gmail. That build is here: https://gitlab.wikimedia.org/repos/sre/exim4-arc -- it will likely become part of the main exim build soon.
Tue, Apr 23
I've built a new puppetserver in this project, wdqspuppetserver-1. Nothing was using the old one, so this effort was probably in vain.
It is safe to reimage cloudbackup1003 on April 30.
Everything seems happy now. Thanks!
This is taavi messing with ovs
Apr 19 2024
...I just checked and Bobcat is still using greenlet 2.0.2, so this is likely not fixed in Bobcat :(
I think this is the same issue (but different log message) as T352635
I've been seeing this crash periodically since we upgraded to A -- if this is the same failure, then I believe this is a bug in the Python threading library that we're using, and the full queue is a symptom of a stuck listener.
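A toy illustration of that symptom (not the actual service code): if the thread draining a bounded queue gets stuck, producers start seeing "queue full" errors downstream, so the full queue is an effect rather than the root cause.

```
# Toy reproduction of "full queue as a symptom of a stuck listener".
import queue
import threading
import time

q = queue.Queue(maxsize=5)

def stuck_listener():
    q.get()           # take one item...
    time.sleep(3600)  # ...then hang, never draining the queue again

threading.Thread(target=stuck_listener, daemon=True).start()

for i in range(10):
    try:
        q.put(f"message-{i}", timeout=1)
    except queue.Full:
        print(f"queue full while enqueueing message-{i}")
```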
Apr 18 2024
I'm late to this, but I also agree with B5. Do we need another rule about what to do with existing code when applying new checks or standards that would result in a reformat?
Apr 17 2024
codfw1dev is now running Bobcat. The only (minor) issue I'm aware of so far is T350807
Coincidentally, I just did a dist-upgrade that pulled in this new package. The 0.14 package installs its binary here:
These are now in service and working fine.
Apr 16 2024
puppetserver is upgraded, but everything in this project is Buster, so puppet 7 will be unhappy until that's fixed.
This project was managed by jbond -- for now I will do this upgrade.
10:28 AM <andrewbogott> taavi, jhathaway, moritzm, is the puppet-dev project effectively defunct now that jbond has departed? It's unmarked on the purge page and also has https://phabricator.wikimedia.org/T361593 with no response
10:29 AM <moritzm> let me have a look
10:31 AM <moritzm> I haven't used it for ages and I think it was mostly used to stage/test puppet 7. From my PoV it can be phased out unless Jesse or Taavi still use it
10:32 AM <andrewbogott> ok, thanks moritzm, let's see if anyone else has an opinion :)
10:34 AM <jhathaway> I agree with moritzm, I would like to keep the project around, but the instances can be removed
10:35 AM <andrewbogott> great, shall I delete things right now?
10:37 AM <jhathaway> fine by me, unless taavi objects
10:37 AM <jbond> fwiw also good with me
10:37 AM <taavi> no objections from me
Apr 15 2024
The latest puppetserver code is prone to gobbling RAM; I'd check for OOM messages and see about using profile::puppetserver::java_max_mem.
Apr 12 2024
It's not really a Buster thing -- the puppet code for geoip is entirely different in the puppetserver manifests vs. the old puppetmaster manifests.