May 13 2021
As promised, here are the prototypes for dumping and loading Server Configuration Profiles for the previous design. The idea would be to implement these in the redfish module in spicerack, but I have not had an opportunity to complete that.
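A rough sketch of what the dump/load side could look like, assuming Dell iDRAC's OEM Redfish actions for Server Configuration Profiles (`ExportSystemConfiguration` / `ImportSystemConfiguration`). This is not the actual prototype code: the endpoint paths follow Dell's documented OEM action URIs, and the payload shapes are my best-effort reconstruction, not the spicerack implementation.

```python
import json
import urllib.request

# Dell iDRAC OEM Redfish action endpoints for Server Configuration Profiles
# (paths per Dell's Redfish documentation; adjust for other vendors).
EXPORT_ACTION = ("/redfish/v1/Managers/iDRAC.Embedded.1/Actions/Oem/"
                 "EID_674_Manager.ExportSystemConfiguration")
IMPORT_ACTION = ("/redfish/v1/Managers/iDRAC.Embedded.1/Actions/Oem/"
                 "EID_674_Manager.ImportSystemConfiguration")

def scp_export_payload(target="ALL", fmt="JSON"):
    """Build the request body for an SCP export (dump)."""
    return {"ExportFormat": fmt,
            "ExportUse": "Default",
            "ShareParameters": {"Target": target}}

def scp_import_payload(profile, shutdown="Graceful"):
    """Build the request body for an SCP import (load); the profile is
    sent inline via ImportBuffer rather than a network share."""
    return {"ImportBuffer": json.dumps(profile),
            "ShutdownType": shutdown,
            "ShareParameters": {"Target": "ALL"}}

def post_action(base_url, action, payload, auth_header):
    """POST a Redfish action; Redfish returns the spawned task's URI
    in the Location header, which can then be polled for completion."""
    req = urllib.request.Request(
        base_url + action,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": auth_header},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Location")
```

Usage would be something like `post_action("https://idrac-host", EXPORT_ACTION, scp_export_payload(), auth)`, then polling the returned task URI until the export job finishes.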
Apr 13 2021
I have discussed moving forward on the production side with @ayounsi, and we've talked about the requirements for production from a network perspective. Basically, we want to ensure nothing bad will happen from having the mgmt->dhcp holes open to the install servers, and to keep track of that to some extent. It would also be good to monitor this, if possible.
Mar 31 2021
- added my full working todo list for this project
Since the maps servers are being replaced (I think?), perhaps we can cross them off for this project. Am I right that this is happening?
@fgiunchedi Is there any process we should follow to test/make sure everything is okay if we add ipv6 DNS for ms-be and ms-fe?
- Moved dbproxy to T271138
Mar 30 2021
A thing we discovered today that should also be imported from Netbox to puppet is the PDU list, which is used to produce monitoring and is stored in modules/facilities/manifests/init.pp.
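As a sketch of what that import might look like: a small transformation from Netbox REST API device records to a site-keyed PDU list that puppet could consume. The field names (`device_role.slug`, `status.value`, `site.slug`) follow the Netbox API's device serialization; the role slug `"pdu"` and the exact puppet-side structure are assumptions.

```python
def pdu_list(devices):
    """Filter Netbox device dicts down to active PDUs, keyed by site slug.

    `devices` is a list of device objects as returned by Netbox's
    /api/dcim/devices/ endpoint (already deserialized from JSON).
    """
    pdus = {}
    for d in devices:
        if d["device_role"]["slug"] != "pdu":  # assumed role slug
            continue
        if d["status"]["value"] != "active":
            continue
        pdus.setdefault(d["site"]["slug"], []).append(d["name"])
    return pdus
```

The output dict could then be rendered into the facilities module's monitoring definitions instead of hand-maintaining the list in init.pp.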
Mar 29 2021
Here's a quick survey of the hosts listed above and some potential problems with just adding AAAA DNS records to these clusters:
Thanks for following up.
A quick survey of the clusters above:
It appears the kafka-main2* cluster is indeed listening on IPv6; it just needs DNS (especially given that the eqiad hosts already have this DNS). Is there any particular care needed here?
Mar 27 2021
Thank you for the extra explanation. Given that these will have the same problems as the db hosts, we will mark them as skipped for now / on the back burner.
Mar 26 2021
Yes, apologies for the ambiguity; this is specifically about AAAA records. The assumption we made back when we imported DNS into Netbox was that if there was no AAAA record for a box, we should not add one in Netbox, for risk of something unexpected happening (such as firewall rules not applying to, or being open on, IPv6, or services not listening on the IPv6 address). Basically: will anything bad happen if we add the DNS entries?
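One cheap pre-flight check for the "services not listening on IPv6" risk would be to attempt a TCP connection to the host's IPv6 address(es) before publishing the AAAA record. A minimal sketch (the function name and the idea of gating AAAA publication on it are mine, not existing tooling):

```python
import socket

def listens_on_ipv6(host, port, timeout=3.0):
    """Return True if at least one IPv6 address of `host` accepts a TCP
    connection on `port` -- a cheap sanity check before adding an AAAA
    record for a service. Does not check firewall asymmetries for UDP."""
    try:
        infos = socket.getaddrinfo(host, port,
                                   socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no IPv6 address resolvable at all
    for _family, _type, _proto, _canon, sockaddr in infos:
        try:
            with socket.create_connection((sockaddr[0], port),
                                          timeout=timeout):
                return True
        except OSError:
            continue  # refused / timed out on this address, try the next
    return False
```

This only proves reachability of one port; the firewall-rule concern (rules open on IPv6 that are restricted on IPv4) would still need a per-cluster review.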
I guess given that the ganeti clusters will wait for Buster, the Kafka cluster is the only one remaining. What needs to be done for this?
Thanks! I'm closing this ticket and updating the parent to reflect these changes.
The point of the project is to get as many hosts as we can to have an IPv6 address (and, obviously, to be functional on that address), and in general for IPv6 addresses in DNS to be the default. If that's not appropriate for a particular cluster, that's a valid outcome.
Mar 23 2021
So the idea is that, overall, we'd like all of our clusters to have IPv6 reachability. This isn't terribly urgent; it's just a state that has persisted for a long time and that we'd like to rectify.
Mar 16 2021
So we discussed this at the automation meeting, and we all agreed that the current code and patches need to be thrown out entirely and the project redone with the django-cas-ng solution, because of the Logout Problem. Briefly: this is a previously unrecognized problem caused by mod_cas keeping its sessions separately from CAS and Django. Each layer keeps its own session, and even if the Django or CAS sessions are invalidated, the mod_cas session lives on and re-creates the Django session. Rather than kludging things to invalidate that session as well, removing the Apache layer seems like a better fix.
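For reference, roughly what the django-cas-ng wiring looks like once the Apache/mod_cas layer is removed, so that Django owns the only session and a CAS logout actually ends it. This is a config sketch following django-cas-ng's documented settings and views; the CAS server URL is a placeholder.

```python
# settings.py -- sketch, assuming django-cas-ng as the CAS client
INSTALLED_APPS += ["django_cas_ng"]
AUTHENTICATION_BACKENDS = [
    "django.contrib.auth.backends.ModelBackend",
    "django_cas_ng.backends.CASBackend",
]
CAS_SERVER_URL = "https://cas.example.org/"  # placeholder IdP URL
CAS_VERSION = "3"

# urls.py -- login/logout handled by Django itself, no mod_cas in front,
# so there is no third session store to out-live a logout.
from django.urls import path
import django_cas_ng.views

urlpatterns += [
    path("login/", django_cas_ng.views.LoginView.as_view(),
         name="cas_ng_login"),
    path("logout/", django_cas_ng.views.LogoutView.as_view(),
         name="cas_ng_logout"),
]
```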
Mar 10 2021
Has this been followed up with an NDA ticket? @MNadrofsky
Mar 3 2021
Update on progress: I discussed the possibilities and the situation with @jbond; adapting RemoteUserBackend was the general consensus of the discussion above.
Mar 1 2021
I've been experimenting with SSO on netbox-next and reading a lot of code; this is an update on all of that.
Feb 23 2021
There is also https://djangocas.dev