User Details
- User Since
- Mar 6 2020, 9:03 PM (195 w, 5 d)
- Availability
- Available
- IRC Nick
- Raymond_Ndibe
- LDAP User
- Raymond Ndibe
- MediaWiki User
- Raymond Ndibe
Oct 24 2023
Did a little research on this, and from what I was able to find there is currently no way to delete an image in a project with an immutable policy set without first disabling the immutable policy (and re-enabling it afterwards).
Given that this is the only way right now, I propose we do this as part of recurring Harbor maintenance (maybe once every one to three months).
During the maintenance window, we:
- send out some kind of notification that Harbor will be down for, say, 10 minutes
- disable write access
- disable immutable rule
- then, for our target projects (i.e. toolforge), delete every image and chart except maybe the 10-20 most recent (see the sketch after this list)
- enable immutable rule
- enable write access
- notify users that they can push to Harbor again
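Roughly, the middle steps could be scripted like this. This is only a sketch assuming Harbor's v2 REST API with basic auth; the URL, credentials, project name and KEEP count are placeholders, the endpoints and field names should be double-checked against the Harbor version we run, and the read-only toggle and notifications are not covered here.

```
#!/usr/bin/env python3
"""Sketch of the per-maintenance cleanup: toggle the project's immutable
tag rules off, prune old artifacts, toggle the rules back on."""
from urllib.parse import quote

import requests

HARBOR_API = "https://harbor.example.org/api/v2.0"  # placeholder
AUTH = ("admin", "REDACTED")  # placeholder credentials
PROJECT = "toolforge"
KEEP = 20  # number of most recently pushed artifacts to keep per repository


def set_immutable_rules(session: requests.Session, disabled: bool) -> None:
    """Disable (or re-enable) every immutable tag rule in the project."""
    rules = session.get(f"{HARBOR_API}/projects/{PROJECT}/immutabletagrules").json()
    for rule in rules:
        rule["disabled"] = disabled
        session.put(
            f"{HARBOR_API}/projects/{PROJECT}/immutabletagrules/{rule['id']}",
            json=rule,
        ).raise_for_status()


def prune_old_artifacts(session: requests.Session) -> None:
    """Delete everything except the KEEP most recently pushed artifacts in
    each repository (pagination omitted for brevity)."""
    repos = session.get(
        f"{HARBOR_API}/projects/{PROJECT}/repositories",
        params={"page_size": 100},
    ).json()
    for repo in repos:
        # repo["name"] is "<project>/<repository>"; the API docs say nested
        # repository names need their slashes double-URL-encoded in the path.
        repo_path = quote(quote(repo["name"].split("/", 1)[1], safe=""), safe="")
        artifacts = session.get(
            f"{HARBOR_API}/projects/{PROJECT}/repositories/{repo_path}/artifacts",
            params={"page_size": 100},
        ).json()
        # push_time is an ISO 8601 string, so lexicographic sort works.
        artifacts.sort(key=lambda a: a["push_time"], reverse=True)
        for artifact in artifacts[KEEP:]:
            session.delete(
                f"{HARBOR_API}/projects/{PROJECT}/repositories/{repo_path}"
                f"/artifacts/{artifact['digest']}"
            ).raise_for_status()


if __name__ == "__main__":
    with requests.Session() as session:
        session.auth = AUTH
        set_immutable_rules(session, disabled=True)
        prune_old_artifacts(session)
        set_immutable_rules(session, disabled=False)
```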
@dcaro we should mark this as resolved no?
Oct 23 2023
In my opinion we should go with Option 1 in the short term and Option 3 in the long term. Option 2 is out of the question for me because it offers no real benefit over Option 1: its one advantage, speed, is not really a bottleneck for the kind of application we are building, so it doesn't buy us much, yet we would still have to pay for the time it will take to complete.
Option 1 in the short term because we want to start enjoying the benefits of the merge as soon as possible, and with Option 3 it will likely take a while before we can come up with a complete OpenAPI spec.
Option 3 in the long term because it helps us standardize the way we develop things for Toolforge. I can see the spec created for Option 3 being reused in the future for something like a Toolforge UI.
Oct 6 2023
Ooh, thanks so much @dcaro, this looks promising.
Sep 29 2023
Thanks for working on this @dcaro, there really are too many moving parts involved. I was wondering: if we were to switch from kind to minikube in lima-kilo, would that solve many of our problems? Taking a glance at the things you discussed here, it seems like the way minikube does things is closer to our production setup than kind is.
I think we should either move away from minikube (that is, rewrite any minikube-specific READMEs for kind, verify that all the build service stuff works on kind, and fix it if it doesn't), or switch to using minikube in toolforge-jobs and lima-kilo.
I think the issue is more about having a unified way to do Toolforge stuff (at least I think that's what lima-kilo was trying to achieve). Currently that idea is being defeated by the fact that we still work on different parts of Toolforge by switching between two different local clusters (kind and minikube).
@dcaro I'm having a hard time deciding how to configure a new Prometheus job for this. Nothing we have running for other tools at the moment is quite like this (pingthing looks like both the right and the wrong answer, because pulling from the metrics endpoint will at least require Kubernetes authentication, and I'm not sure we can do that with pingthing).
Do you have any idea on how to proceed with this?
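For context, this is roughly the request a scraper would have to make, and it is exactly the part pingthing's plain GET can't do. A sketch only, assuming the scraper runs in-cluster with a service account that is allowed to read the endpoint; the metrics URL is a placeholder.

```
"""Sketch of the authenticated pull a scraper would need to make against the
metrics endpoint, using the standard in-cluster service account credentials."""
import requests

# Paths where Kubernetes mounts the service account credentials in a pod.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

METRICS_URL = "https://example-service.tool-namespace.svc:8443/metrics"  # placeholder

with open(TOKEN_PATH) as token_file:
    token = token_file.read().strip()

response = requests.get(
    METRICS_URL,
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_PATH,  # trust the cluster CA for the service's TLS certificate
    timeout=10,
)
response.raise_for_status()
print(response.text)  # Prometheus exposition format
```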
acked