I have the LSST Science Pipelines dockerized in a Kubernetes cluster. I would prefer that the big computing loads be processed by the SLURM workload manager we have installed at the site, so that they use our dedicated compute nodes with GPUs and the like.
Has anyone else used a workload manager with the LSST pipelines, particularly from Kubernetes?
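To make the question concrete, the sketch below (Python, untested) is roughly what I have in mind: a small service in the Kubernetes cluster builds a SLURM batch script that runs the containerized stack on a GPU node via pipetask, and submits it with sbatch. Every path, partition name, container image, collection, and data query in it is a placeholder, not a real site configuration.

```python
"""Sketch: hand the heavy pipetask step to SLURM from a service running
in the Kubernetes cluster.  All paths, partition names, images,
collections, and queries are placeholders."""

import subprocess
import textwrap

BUTLER_REPO = "/datasets/butler/repo"                # placeholder Butler repo
PIPELINE_YAML = "/opt/lsst/pipelines/example.yaml"   # placeholder pipeline definition
CONTAINER_IMAGE = "/containers/lsst_scipipe.sif"     # placeholder Apptainer image of the stack
OUTPUT_COLLECTION = "u/someuser/k8s-offload-run"     # placeholder output collection

batch_script = textwrap.dedent(f"""\\
    #!/bin/bash
    #SBATCH --job-name=lsst-pipetask
    #SBATCH --partition=gpu
    #SBATCH --gres=gpu:1
    #SBATCH --cpus-per-task=16
    #SBATCH --time=12:00:00
    #SBATCH --output=pipetask-%j.log

    # Run the containerized Science Pipelines on the compute node;
    # --nv exposes the node's NVIDIA GPUs inside the container.
    apptainer exec --nv {{CONTAINER_IMAGE}} /bin/bash <<'EOF'
    # Stack setup path as found in the Rubin containers; adjust if your image differs.
    source /opt/lsst/software/stack/loadLSST.bash
    setup lsst_distrib
    # Input collection, output collection, and data query are placeholders.
    pipetask run \\\\
        -b {{BUTLER_REPO}} \\\\
        -p {{PIPELINE_YAML}} \\\\
        -i HSC/defaults \\\\
        -o {{OUTPUT_COLLECTION}} \\\\
        -d "exposure = 12345" \\\\
        -j 16
    EOF
    """)

# Hand the script to SLURM; sbatch accepts the job script on stdin.
result = subprocess.run(["sbatch"], input=batch_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())    # e.g. "Submitted batch job 123456"
```

Would something like this be reasonable, or is there a more standard way to bridge the two?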
We do provide containers of our own if you want to use them.
We run all our batch processing using SLURM. We have also run the DP0.2 processing at Google using Kubernetes and PanDA. See, for example, Adding Workflow Management Flexibility to LSST Pipelines Execution (ADS).
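If you want to go the batch route, the usual entry point is the Batch Production Service (ctrl_bps): you describe the payload in a submit YAML and `bps submit` hands the generated quantum graph to a WMS plugin. Below is a minimal sketch, assuming the ctrl_bps_parsl plugin as the bridge to SLURM; the plugin's class and option names here are my reading of its documentation rather than a verified site configuration, so check them against the current docs, and every path, collection, and data query is a placeholder.

```python
"""Minimal sketch of a BPS submission driven through SLURM, assuming the
ctrl_bps_parsl plugin.  Class/option names should be checked against the
current plugin documentation; all paths, collections, and queries are
placeholders."""

import subprocess
import textwrap
from pathlib import Path

submit_yaml = textwrap.dedent("""\
    pipelineYaml: /opt/lsst/pipelines/example.yaml   # placeholder pipeline definition

    payload:
      payloadName: k8s-offload-test
      butlerConfig: /datasets/butler/repo            # placeholder Butler repo
      inCollection: HSC/defaults                     # placeholder input collection
      dataQuery: "exposure = 12345"                  # placeholder data query

    # Hand the quantum graph to Parsl, which submits the jobs to SLURM.
    wmsServiceClass: lsst.ctrl.bps.parsl.ParslService
    computeSite: slurm
    site:
      slurm:
        class: lsst.ctrl.bps.parsl.sites.Slurm
        nodes: 2
        cores_per_node: 32
        walltime: "12:00:00"
    """)

Path("submit.yaml").write_text(submit_yaml)

# bps builds the quantum graph and submits the jobs to the workload manager.
subprocess.run(["bps", "submit", "submit.yaml"], check=True)
```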