Why?
Some CLI arguments' default values are generated on the backend, and there is currently no non-brittle way for the frontend to infer what those values are.
Example: the default value of filelog-stdout, /data/project/<tool-name>/<jobname>.out, is generated on the backend.
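For illustration only, a minimal sketch of how such a default could be derived on the backend; the function name is made up and this is not the actual backend code:

```python
# Illustrative guess at how the backend derives the filelog-stdout default.
# The real backend implementation may look nothing like this.
def default_filelog_stdout(tool_name: str, job_name: str) -> str:
    """Default stdout log path, following the pattern described above."""
    return f"/data/project/{tool_name}/{job_name}.out"
```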
If you want to know what these default values are, there are three possible ways to go about it:
- Try to infer on the frontend what this value might be. This means storing the default prefix /data/project somewhere on the frontend and later combining it with the tool and job names to reconstruct the likely default value. The problem is that we would then have two sources of truth: if we ever tweak the default on the backend and forget to update the frontend, we get a regression.
- Create an API endpoint that returns a job's default values from the backend (see the sketch after this list).
- The preferred option: move everything about the load feature to the backend.
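As a rough sketch of what the second option could look like, assuming a Flask-style backend; the route, payload shape, and the cpu/memory defaults shown are illustrative assumptions, not the actual API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical endpoint; the route and response fields are for illustration only.
@app.route("/api/v1/tool/<tool_name>/jobs/<job_name>/defaults")
def job_defaults(tool_name: str, job_name: str):
    # The backend already knows how it fills in unspecified arguments,
    # so it can simply report them instead of the frontend guessing.
    return jsonify({
        "filelog-stdout": f"/data/project/{tool_name}/{job_name}.out",
        "filelog-stderr": f"/data/project/{tool_name}/{job_name}.err",  # assumed .err suffix
        "cpu": "500m",      # assumed backend default, for illustration
        "memory": "512Mi",  # assumed backend default, for illustration
    })
```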
Advantages:
- Simplified frontend code: we won't have to run relatively complex logic on the frontend just to figure out something as simple as a job's default values; all of that will be done on the backend.
- A more accurate load operation (a sketch of the idea follows this list). For example:
  - we will be able to handle filelog-stderr and filelog-stdout (the load operation doesn't currently handle these, so not explicitly providing filelog-stdout/stderr in the loads YAML for a job that gets these values from the backend causes the job to be deleted and recreated)
  - better handling of cpu and memory changes (right now, explicitly setting the default cpu/memory values in the loads YAML counts as a difference, but it shouldn't)
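A minimal sketch of how backend-side diffing could avoid the spurious differences described above; the field names and default values here are assumptions, not the actual jobs schema:

```python
# Hypothetical sketch: fill in the defaults the backend would generate anyway,
# then compare the normalized specs. Field names and defaults are assumptions.
ASSUMED_DEFAULTS = {"cpu": "500m", "memory": "512Mi"}

def normalize(spec: dict, tool_name: str) -> dict:
    job_name = spec["name"]
    filled = dict(spec)
    filled.setdefault("filelog-stdout", f"/data/project/{tool_name}/{job_name}.out")
    filled.setdefault("filelog-stderr", f"/data/project/{tool_name}/{job_name}.err")
    for key, value in ASSUMED_DEFAULTS.items():
        filled.setdefault(key, value)
    return filled

def specs_differ(loaded_spec: dict, running_spec: dict, tool_name: str) -> bool:
    # A job whose loads YAML omits filelog-stdout/stderr, or explicitly sets the
    # default cpu/memory, normalizes to the same spec and is left alone instead
    # of being deleted and recreated.
    return normalize(loaded_spec, tool_name) != normalize(running_spec, tool_name)
```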