June 2019 update

Hello!

We have received the SAC report on the survey strategy white papers, along with guidelines on the families of simulations to investigate.

We have already done a bit of preliminary investigation into these families – you can see the in-progress outputs at https://lsst-web.ncsa.illinois.edu/sim-data/sims_featureScheduler_runs/ (see the README.md there for information on what’s what). However, we are still working on some code updates before re-running a more ‘official’ baseline; once that is done, we will quickly re-run all of the relevant simulations on the same codebase, produce better-looking metric outputs, and put them all up online.

We’ve also been working on pulling in new metrics – Peter has been in contact with many of you about transient and variable or DESC-related metrics, and I’ve been working on solar system metric updates.

In preparation for writing up the final summary report, we’ve also been thinking about how to actually evaluate each of these families of simulations, given the diverse metrics possible for each run. There are still plenty of wrinkles to work out, which we’ll tackle as we can during meetings with each of the science collaborations over the summer, at the PWC workshop in August, and right here on community.lsst.org. However, one plan that seems useful is to ask for thresholds for key metrics, which could be set by “this is the bar LSST must meet in order to meaningfully make progress for our science” … or, potentially, “below this point, LSST is not doing the science we need it to do”. These thresholds would not be the bar for the final report to the SCOC – indeed, we could avoid passing these threshold values along to the SCOC at all. However, they would be extremely useful in our investigation of what variations on the survey strategy are possible – if we break some science, we need to stop pushing in that direction.

Once we have our series of final candidate runs, we will work on ranking those runs with the science community – so to some extent, it will be up to you to decide how to combine metrics within your community. My suspicion is that, once you come up with a good set of metrics, most runs will be fairly comparable, with perhaps one or two standouts.
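
To make the threshold idea a bit more concrete, here is a minimal sketch of what a threshold check plus a naive combined ranking could look like across a family of runs. Everything in it – the run names, metric names, threshold values, and the directions of “better” – is invented for illustration; in practice these values would come from the MAF summary statistics for each simulation.

```python
# Hypothetical sketch only: check science-driven thresholds and build a simple
# combined ranking across a family of simulated runs. All run names, metric
# names, and numbers are made up; real values would come from MAF summary
# statistics for each run.

# "Below this value, LSST is not doing the science we need it to do."
thresholds = {
    "Nvisits_WFD": 825,        # e.g. median number of visits per WFD field
    "ParallaxError_mas": 1.0,  # e.g. parallax uncertainty; smaller is better
}

# Which direction counts as "better" for each metric.
higher_is_better = {"Nvisits_WFD": True, "ParallaxError_mas": False}

# Example summary values per run (normally read from MAF output).
summaries = {
    "baseline": {"Nvisits_WFD": 890, "ParallaxError_mas": 0.8},
    "big_wfd": {"Nvisits_WFD": 810, "ParallaxError_mas": 0.9},
    "rolling_basic": {"Nvisits_WFD": 900, "ParallaxError_mas": 0.7},
}


def passes_thresholds(values):
    """True if every metric sits on the acceptable side of its threshold."""
    for name, limit in thresholds.items():
        if higher_is_better[name]:
            if values[name] < limit:
                return False
        elif values[name] > limit:
            return False
    return True


def combined_score(values):
    """Naive combined score: metric values normalized by their thresholds,
    inverted for 'smaller is better' metrics, then averaged."""
    scores = []
    for name, limit in thresholds.items():
        ratio = values[name] / limit
        scores.append(ratio if higher_is_better[name] else 1.0 / ratio)
    return sum(scores) / len(scores)


for run, values in summaries.items():
    print(f"{run}: passes={passes_thresholds(values)}, "
          f"score={combined_score(values):.2f}")
```

The point here is only the structure: the pass/fail gate tells us when a survey strategy variation has broken some science (so we stop pushing in that direction), while the combined score is just one of many possible ways a collaboration might choose to rank the surviving candidate runs.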

In mid-July, look for an updated set of runs with the new scheduler covering:

  • current baseline footprint (likely with some tweaks to u band scheduling)
  • pairs in the same filter, mixed filters, and (potentially) the “presto” transient triplet
  • DD cadence à la the DESC & AGN survey strategy requests
  • a ‘big’ (N/S extended) WFD footprint vs. the smaller baseline WFD footprint
  • some basic variation on the rolling cadence
  • a run more similar to the alt-scheduler cadence (by making the survey filter choice less dependent on sky brightness)

These will have an updated set of m5 values; the expected throughputs have been updated with some additional as-measured values, which have changed the zenith/dark-sky m5 values slightly. They will also use updated versions of the various weather and telescope code packages – the same values, but a different API that better conforms to how we expect these modules to be used during operations.

Cookies for reading this far -
Lynne
