Run comparison updates (v2.0/v2.1)

Metric summaries and comparisons between runs are a hot topic of discussion with the SCOC, in science collaboration and task force meetings, and on the LSSTC Slack.
I thought I'd collect some of the resources that can help support that conversation here, too.

The metrics under consideration are themselves a work in progress, and the survey strategy team and the SCOC are very interested in understanding the community's viewpoint.

These first links are notebooks oriented toward particular science goals, looking at how the metrics change across simulation families (a work in progress):

A slightly different approach is to start from a particular simulation family and then look for metrics which might be relevant to that change:
Rolling Cadence
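As a rough sketch of that family-comparison workflow (using pandas with made-up run names and metric values, not the actual rubin_sim archive API): metric summary statistics for the runs in a family can be collected into a runs × metrics dataframe and normalized against the baseline run, so that values above 1 mean "larger than baseline" for that metric.

```python
import pandas as pd

# Hypothetical summary values: rows are runs in an illustrative
# rolling-cadence family, columns are metric summary statistics.
# Real values would come from the archive of run summaries.
summaries = pd.DataFrame(
    {
        "Nvisits All": [2086000, 2090000, 2093000],
        "Median SNR r band": [26.5, 27.1, 26.9],
    },
    index=[
        "baseline_v2.0_10yrs",
        "rolling_family_run_a",
        "rolling_family_run_b",
    ],
)

# Normalize each run's metric values against the baseline run,
# dividing column-by-column.
baseline = "baseline_v2.0_10yrs"
normed = summaries.div(summaries.loc[baseline], axis="columns")

print(normed.round(3))
```

Whether "bigger is better" depends on the metric, so in practice some metrics need to be inverted before a plot like this is meaningful; the summary-plotting utilities handle that bookkeeping via the metric set definitions.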

An introductory tutorial for the `archive` and `summary_plots` modules:
And a notebook demonstrating how I would extend the `metric_sets` dataframe:
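To give the flavor of that extension (a minimal pandas sketch, assuming a simplified schema: the real `metric_sets` dataframe in rubin_sim carries more columns, and the set and metric names below are invented): new metric sets can be built with the same (metric set, metric) MultiIndex and concatenated onto the existing dataframe.

```python
import pandas as pd

# A simplified stand-in for the metric_sets dataframe: a MultiIndex of
# (metric set, metric), with display-oriented columns per metric.
metric_sets = pd.DataFrame(
    {
        "short_name": ["Nvis", "fO Area"],
        "invert": [False, False],
    },
    index=pd.MultiIndex.from_tuples(
        [("SRD", "Nvisits All"), ("SRD", "fO Area")],
        names=["metric set", "metric"],
    ),
)

# Extend it: define a new set with the same index structure and concat.
my_set = pd.DataFrame(
    {
        "short_name": ["Median SNR r"],
        "invert": [False],
    },
    index=pd.MultiIndex.from_tuples(
        [("my transients", "Median SNR r band")],
        names=["metric set", "metric"],
    ),
)
metric_sets = pd.concat([metric_sets, my_set])

# All metrics belonging to the new set can then be selected by label.
print(metric_sets.loc["my transients"])
```

Keeping the index names identical to the original dataframe is what lets the extended version drop straight into the same plotting helpers.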
Identifying metrics

For further information on a particular metric (how it was configured when we ran it, for example), please look at the configuration files we use; most of the metrics under discussion come from the "ScienceRadarBatch":
ScienceRadarBatch v0.9.0