For k8s auth passwords and perhaps SSL certs in the future.
Probably doable with a puppetmaster for the project. Must verify that non-root users can't get access to the secrets even with the labs puppetmaster roles.
Status | Subtype | Assigned | Task
---|---|---|---
Resolved | | yuvipanda | T111885 Initial Deployment of Kubernetes to Tool Labs
Resolved | | yuvipanda | T112005 Setup a way to store secrets and access them from puppet inside the Tool Labs project
Should basically track production branch as closely as possible (faster git pulls maybe?)
Can we solve this using just ssh auth? We already have host-based auth infra in place, after all.
Otoh, having a generic secure storage also sounds like a good plan.
What's the purpose of this? If it's just a one-time job when setting up a new Kubernetes host, doing it manually seems acceptable to me (and much less work than setting up a non-leaking Puppet master); we do that already for the proxy servers as well.
So the webproxy secret is a single thing that doesn't change and only has to live in two places, while this is a bit more dynamic.
As I wrote in T107993#1567549:
You don't need to involve NFS (directly). You can set up the certificate generator on any machine as a HTTP/whatever service, do the auth* via is_in_tools_project($IP) && identd($IP, $port) and then let the "client" write it to the local disk (which will probably be NFS :-)).
But a certificate generator certainly requires a process maintaining a certificate revocation list and distributing it to where it needs to be.
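For illustration, a rough sketch of the identd-based check quoted above, assuming the certificate generator runs as a plain HTTP service and that the client hosts run an ident (RFC 1413) daemon; the subnet, service port and helper names below are made-up placeholders, not existing Tool Labs code:

```python
# Hypothetical auth check for a certificate-generator service:
# accept a request only if the caller's IP is inside the project
# and its ident daemon reports a username for the connection.
import ipaddress
import socket

TOOLS_SUBNET = ipaddress.ip_network("10.68.16.0/21")  # placeholder range
SERVICE_PORT = 8443                                    # placeholder port


def is_in_tools_project(ip):
    """Very rough stand-in for a real project-membership check."""
    return ipaddress.ip_address(ip) in TOOLS_SUBNET


def identd(ip, client_port, timeout=5.0):
    """Ask the client's ident (RFC 1413) daemon which user owns the connection."""
    try:
        with socket.create_connection((ip, 113), timeout=timeout) as sock:
            sock.sendall("{}, {}\r\n".format(client_port, SERVICE_PORT).encode("ascii"))
            reply = sock.recv(1024).decode("ascii", "replace")
    except OSError:
        return None
    # A successful reply looks like: "port, port : USERID : UNIX : username"
    parts = [p.strip() for p in reply.split(":")]
    if len(parts) == 4 and parts[1] == "USERID":
        return parts[3]
    return None


def authorize(ip, client_port):
    """Return the authenticated username, or None to reject the request."""
    if not is_in_tools_project(ip):
        return None
    return identd(ip, client_port)
```

Even with something along those lines in place, the revocation-list maintenance mentioned above still has to live somewhere.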
That is a lot more complexity than just setting up a private puppetmaster :) We also shouldn't be adding any extra, non-upstreamable pieces to the infrastructure if we can help it, and I definitely think an authenticating proxy falls in that category.
We're also not doing TLS client certificates right now - I'm not sure how well kubernetes supports revocation checking. We'll move to that in the next couple of months though :)
I'm not seeing a less bad way of doing it offhand. It might perhaps be possible to actually do it the replica.my.cnf way by having labstore* do it asynchronously? This wouldn't require k8s to have NFS mounts itself.
How would we be able to do it in labstore without k8s having NFS? The credentials will have to be generated somewhere (k8s master or labstore) and then securely transferred to the other... Not sure if that is simpler.
I should think it fairly easy to transfer credentials to the k8s master from labstore (rsync, or something else); but that's an extra piece of home-spun software to maintain.
(It might actually be worthwhile to see if the mysql system could be generalized, however, given that we now have at least three use cases for "need to generate credentials per-uid in labs and distribute to the user and the service")
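To make that "generalize the mysql system" idea a bit more concrete, here is a hypothetical sketch of such a per-uid credential job; the filename, uid cutoff and register_with_service() are invented for illustration, and the real replica.my.cnf machinery may well work differently:

```python
# Hypothetical per-uid credential job in the replica.my.cnf style: run as
# root on the generating host (e.g. labstore), create a credential for any
# tool account that lacks one, drop it into the tool's home with owner-only
# permissions, and tell the service about it.
import os
import pwd
import secrets

CRED_FILENAME = ".kube-token"  # made-up filename
TOOL_UID_MIN = 50000           # made-up cutoff for tool accounts


def register_with_service(username, token):
    """Placeholder: hand the credential to the service (e.g. the k8s master)."""
    raise NotImplementedError


def ensure_credentials():
    for entry in pwd.getpwall():
        if entry.pw_uid < TOOL_UID_MIN:
            continue
        path = os.path.join(entry.pw_dir, CRED_FILENAME)
        if os.path.exists(path):
            continue
        token = secrets.token_hex(32)
        # Create the file owner-readable only, like replica.my.cnf.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o400)
        with os.fdopen(fd, "w") as handle:
            handle.write(token + "\n")
        os.chown(path, entry.pw_uid, entry.pw_gid)
        register_with_service(entry.pw_name, token)
```

The point would be the same as with the database credentials: the generating host owns the secret and distributes it to both the user and the service, so the k8s nodes themselves would not need NFS mounts, as noted above.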
@Joe suggested we just have a private repo on labs puppetmaster, which I highly approve of!
Change 243800 had a related patch set uploaded (by Yuvipanda):
tools: Add puppetmaster/client roles
Ok, there's now tools-puppetmaster-01 set up to serve as puppetmaster for the k8s master node, two workers and the dynamicproxies!
There are two ways of doing this:
I vastly prefer #1 in this case.
We should also replicate the private puppetmaster setup in palladium, which has:
Change 244827 had a related patch set uploaded (by Yuvipanda):
puppet: Have a 'secret' repository for self hosted puppetmasters
Change 244827 merged by Yuvipanda:
puppet: Have a 'secret' repository for self hosted puppetmasters
Done! (ish!) I still need, I think, to figure out how to make hiera also look at the secret repo for files...
Change 244841 had a related patch set uploaded (by Yuvipanda):
hiera: Add support for 'secret' datadir