@josh I’m planning on making a push on lsst_ci next, to formalize and standardize the datasets that we would like to ensure pass before changes are merged to master. Right now this is done somewhat informally by including lsst_ci as a build dependency in Jenkins (in some configurations), but that only builds its dependencies, including the obs_* packages. I would then like to run some actual processing under the control of lsst_ci. To start with, this would run the example driver scripts currently stored in the examples directory of the validate_drp repo. It may grow, or need options to run at different scales depending on the configuration options passed to it.
How should running these scripts be incorporated into the build system?
If there is a failure, how should we provide information to the developer about what failed and what to inspect?
I think there should be a ci_* package for each active user group, capturing the kinds of operations done by the HSC survey, LSST Camera, Sims, QA, and DESC groups (and anyone else who wants to contribute). lsst_ci should then be a top-level meta-package that simply depends on each of those. It’s rather unfortunate that, at present, lsst_ci does not depend on ci_hsc, so to get decent integration testing one has to explicitly include ci_hsc in Jenkins.
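To make that concrete, here’s a rough sketch of what the lsst_ci EUPS table file could look like under that scheme. Everything except ci_hsc is a hypothetical package name:

```
# ups/lsst_ci.table -- lsst_ci as a pure meta-package: no code of its
# own, just setupRequired lines pulling in each group's ci_* package.
setupRequired(ci_hsc)
setupRequired(ci_lsstcam)   # hypothetical
setupRequired(ci_sims)      # hypothetical
setupRequired(ci_qa)        # hypothetical
setupRequired(ci_desc)      # hypothetical
```

Building lsst_ci in Jenkins would then pull in and exercise every group’s package without anyone having to remember to list ci_hsc explicitly.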
I suggest not storing good stuff in the examples dir. Everywhere else in the stack, “examples” is a code word for “bitrotted” (unfortunate but true). If you’ve got a useful script, put it in the bin dir, and put together some basic mechanism to exercise it.
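As one possible “basic mechanism”, even a smoke test in tests/ that just invokes the script can catch the grossest bitrot (import errors, argument-parser drift). A minimal sketch, assuming the usual EUPS <PRODUCT>_DIR environment variable and a hypothetical bin/runExample.py script:

```python
# tests/test_run_example.py -- smoke test for a hypothetical bin/ script.
import os
import subprocess
import unittest


class RunExampleTestCase(unittest.TestCase):
    def test_help_runs(self):
        # VALIDATE_DRP_DIR is set by EUPS when the package is set up.
        script = os.path.join(os.environ["VALIDATE_DRP_DIR"],
                              "bin", "runExample.py")
        # Invoking --help is enough to fail loudly if the script no
        # longer imports or its command-line interface has broken.
        result = subprocess.run([script, "--help"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        self.assertEqual(result.returncode, 0)


if __name__ == "__main__":
    unittest.main()
```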
As for how to run them: I decided to use SCons for ci_hsc because it’s a Python way of running a dependency tree in parallel, but there’s no reason you couldn’t use make or a plain bash script. I think it’s more important to get the scripts in there and being exercised in the first place; once that’s proven useful, you can come back and make it better based on the lessons you’ve learned. At least, that was the idea for ci_hsc. A sketch of the SCons approach follows.
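For anyone who hasn’t looked at ci_hsc, the approach boils down to an SConstruct along these lines. The steps and command lines here are illustrative, not the actual ci_hsc contents:

```python
# SConstruct -- sketch of a processing chain as an SCons dependency tree.
import os

# Inherit the shell environment so the EUPS-setup stack is visible.
env = Environment(ENV=os.environ)

# Each step declares its outputs (targets) and inputs (sources); SCons
# derives the execution order and can run independent branches in parallel.
ingest = env.Command("DATA/registry.sqlite3", "raw",
                     "ingestImages.py DATA raw/*.fits")
calexp = env.Command("DATA/calexp.done", ingest,
                     "processCcd.py DATA --id visit=1 ccd=0 && touch $TARGET")
coadd = env.Command("DATA/coadd.done", calexp,
                    "makeCoadd.py DATA && touch $TARGET")

env.Alias("all", coadd)
Default("all")
```

Running `scons -j4` then executes independent branches of the tree concurrently, and a failed step stops anything downstream of it, which makes it reasonably clear to the developer where to start looking.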