Metric summaries and comparisons between runs are a hot topic of discussion with the SCOC, in science collaboration and task force meetings, as well as on the LSSTC Slack.
I thought I'd collect some resources here that can help support that conversation, too.
The metrics under consideration are themselves a work in progress, and the survey strategy team and the SCOC are very interested in understanding the community's viewpoint.
A slightly different approach is to start from a particular simulation family and then look for metrics that might be relevant to that change:
For further information on a particular metric (for example, how it was configured when we ran it), please see the configuration files we use; most of the metrics we're talking about come from the "ScienceRadarBatch":