Understanding how to test an lsst_sims installation is becoming more urgent now that square is starting to handle our releases, particularly the conda binary releases. (sidenote: this is a good thing and I’m very happy square is extending their work to us).
DM has a separate "demo" that is used to test the DM software stack after installation. This, along with the validate_drp package, also provides the opportunity to do things like regression testing, to see the impact of changes to the code.
Things are a little more complicated, conceptually, for sims, as we have a very diverse set of packages which are not necessarily related. For example, sims_maf and sims_catUtils have dependencies in common, but they do not actually operate on the same data: one runs metrics on opsim (or other) outputs, while the other generates catalogs of objects.
Each package has unit tests that exercise the package and its dependencies, and when the release is built from source these tests are run.
If a user installs from source, they get these unit tests run on their own machine.
If a user installs from conda binaries, however, the unit tests are not run on the user's computer, so problems could slip through undetected. In particular, it's hard for square to know whether the conda binary build was a success.
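Even an import-level smoke test run right after the conda install would at least tell square whether the binaries are loadable. A minimal sketch of what I mean (the module list below is purely illustrative and would need checking against what we actually ship):

```python
# Minimal post-install smoke test: confirm the top-level sims modules import.
# Module names are illustrative; adjust to whatever the conda packages provide.
import importlib
import sys

MODULES = [
    "lsst.sims.maf",
    "lsst.sims.catUtils",
    "lsst.sims.photUtils",
]

failed = []
for name in MODULES:
    try:
        importlib.import_module(name)
    except Exception as exc:
        failed.append((name, exc))

if failed:
    for name, exc in failed:
        print("FAILED to import {}: {}".format(name, exc))
    sys.exit(1)
print("All sims imports succeeded.")
```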
So – long-term, I think we need to build a “demo” package for sims that will exercise various pieces of the sims software.
In the meantime, to make @frossie's and @jmatt's lives easier and happier, what should we do?
One possible option I wondered about is whether the unit tests themselves could be used to do a simple test of the binaries after they are built.
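If the tests ship alongside the installed packages, something like the sketch below could run them against the binaries. This assumes (a) the conda packages actually include each package's tests/ directory and (b) the per-package environment variables set by the stack environment point at the installed locations; neither is guaranteed today.

```python
# Hypothetical: run the packaged unit tests against an installed binary stack.
# Assumes the tests/ directories are included in the conda packages and that
# SIMS_MAF_DIR etc. are set by the environment setup.
import os
import sys
import pytest

PACKAGE_DIR_VARS = ["SIMS_MAF_DIR", "SIMS_CATUTILS_DIR"]

status = 0
for var in PACKAGE_DIR_VARS:
    pkg_dir = os.environ.get(var)
    if pkg_dir is None:
        print("{} not set; skipping".format(var))
        continue
    test_dir = os.path.join(pkg_dir, "tests")
    if os.path.isdir(test_dir):
        status |= pytest.main(["-q", test_dir])

sys.exit(status)
```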
Another option is for us to hurry and put a set of scripts together ourselves. I have added building a proper demo to the list for next year's planning (we could probably get to it in the fall), but if we need something sooner, we could probably pull the guts out of a few IPython notebooks? The rushed version would be quite non-comprehensive, though; see the sketch below for the flavor of thing I have in mind.
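For that notebook-derived version, I'm imagining something on the order of the following: run one trivial MAF metric over a small opsim database and confirm it completes. The database path and output directory are placeholders, and the exact API calls should be checked against the installed sims_maf version, so treat this as a sketch rather than a working script.

```python
# Rough sketch of a notebook-derived demo: count r-band visits per healpix
# pixel on a small opsim run. Paths are placeholders; API details should be
# verified against the installed sims_maf version.
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.metricBundles as metricBundles

opsdb = db.OpsimDatabase("small_opsim_sqlite.db")   # placeholder opsim output
metric = metrics.CountMetric(col="expMJD")
slicer = slicers.HealpixSlicer(nside=16)            # low resolution keeps it quick
bundle = metricBundles.MetricBundle(metric, slicer, sqlconstraint='filter = "r"')

group = metricBundles.MetricBundleGroup({"rCount": bundle}, opsdb, outDir="demo_output")
group.runAll()
print("Demo metric ran; results written to demo_output/")
```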
Thoughts?