Survey Strategy Session at the Rubin Observatory PCW 2020

Hi everybody!
There is a survey strategy session at the Rubin Observatory Project and Community Workshop (PCW) 2020 - Wednesday morning, Aug 12. See the Session agenda for connection details.

The plan for the session is to make materials available ahead of time for previewing – these resources include:

Our report to the SCOC on the survey strategy experiments is now publicly available at pstn-051.lsst.io!
A topic dedicated to the report is available here: Report for the Survey Cadence Optimization Committee (SCOC)

The session will start with a very brief recap of the videos above - more of a reminder than a substitute for the videos themselves. Then we are planning to have short presentations from each of the science collaborations, commenting on their status as we head into the next phase of survey strategy development (community feedback to the SCOC):

  • Are there any issues, missing simulations or missing metrics for evaluating the survey strategy?
  • How is your collaboration placed in terms of this next phase - what resources are available or lacking?

Slides from the session: Community Evaluation of Rubin Survey Strategies

Thank you to @dscolnic, @pmmcgehee, @willclarkson, @gtrichards, @hcferguson, @fed, @rstreet, and @mschwamb for the updates from your science collaborations!

Recording of the session: https://youtu.be/wKzPcT4WmOo


I know that is a lot of material! If you are time-limited, I would suggest the following:

If you are new to the survey strategy question, watch:
  • an overview of the scheduler, how it works and how it balances different drivers
  • a quick introductory overview of some of the survey strategy basics (what the different survey regions are, etc.)

If you have been following along with the survey strategy and scheduler, you may want to skip to:
  • a brief description of changes in the scheduler (2019-2020)

Both groups probably want to watch at least parts of:
  • an in-depth description of each of the families of simulation experiments and investigations created for the report to the SCOC – see the timestamps for particular subjects within that longer video in the next post (or perhaps in the YouTube caption) … I would expect the section from 3:12 to 9:50 to be useful orientation, and then either all or subsets of the more in-depth descriptions later in the video.

The FBS 1.6 families video will be useful as well, but most likely only after gaining some knowledge of the FBS 1.5 families.

Regarding the longer FBS 1.5 experiment families video:

In case this doesn't make it into the YouTube caption, here are timestamps for the video with the in-depth description of each of the families of simulation experiments and investigations created for the report to the SCOC (since the video is long).

Also, if you're looking for it and it hasn't rendered on the LSST channel yet, you can see it here: https://youtu.be/TZol55IjBy8

1:53 List of general SAC questions
3:12 Brief overview of simulation families and groups
9:50 Start of in-depth coverage of each family
10:12 u_pairs
11:46 wfd_depth
12:42 baselines
14:27 u60
16:06 var_exp
17:50 third_obs
21:22 footprints
23:10 bulges
24:10 filter_dist
25:29 alt_roll_dust
27:38 rolling_fpo
29:45 DDF
31:21 good_seeing
32:50 twilight_neo
37:38 short_exp
39:08 DCR
40:57 even_filters
42:19 spiders
44:55 Go to community.lsst.org for more!

It would be immensely helpful for us to hear how this material helped you follow what is going on in the survey strategy process, or what is still missing. There is a really wide range in where people are coming from – from knowing nothing about the general plan for the LSST to having followed the evolution of this planning for many years. It's easy for us to miss areas where people have questions.

Hi Lynne and Peter, thanks for posting the report.

A question for you that came up as I was checking the metrics - page 14, on the SRD metrics. Understandably, the high-level SRD metrics of area, number of visits, and errors on parallax/proper motion are considered. But you also have an SRD metric of “number of rapid revisits (on timescales of between a few to 40 sec) per point on sky”. Is that a typo? Do you actually mean minutes?

If it is seconds, then it is difficult to understand how this interacts with snaps and visits (for moving object identification).

We don’t usually talk much about the rapid revisit requirement. It typically gets met because neighboring pointings have an overlap area: as long as we keep most of the slews short, points in those overlap regions get revisited quickly enough, and we meet the requirement.

We haven’t experimented much with how dense/sparse the sky tessellation is, mainly because fiddling with it too much would probably make us fail the rapid revisit requirement.
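To make that concrete, here is a minimal sketch of how a rapid-revisit count could be checked at a single point on the sky from a simulated visit list. This is not the actual MAF metric or the SRD definition: the function name, the `visit_times` input, and the 40 s / 30 min window defaults are placeholder assumptions for illustration only.

```python
import numpy as np

def rapid_revisit_count(visit_times, window_min=40.0, window_max=30 * 60.0):
    """Count gaps between consecutive visits that fall inside a time window.

    visit_times : times (in seconds) of all visits covering one point on the
        sky, accumulated over every pointing that overlaps it.
    window_min, window_max : placeholder window bounds (not taken from the
        report or the SRD) defining what counts as a "rapid" revisit.
    """
    t = np.sort(np.asarray(visit_times, dtype=float))
    dt = np.diff(t)  # gaps between consecutive visits to this point
    return int(np.sum((dt >= window_min) & (dt <= window_max)))

# Toy example: a point in the overlap of two neighboring pointings observed
# back-to-back picks up a ~2-minute gap, which falls inside the window.
times = np.array([0.0, 120.0, 3600.0, 3634.0])  # hypothetical visit times (s)
print(rapid_revisit_count(times))  # -> 1
```

Evaluated over every point in the sky tessellation, a count like this is what the overlap-plus-short-slews behaviour described above keeps high enough to satisfy the requirement.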

Now addressed in the “SCOC report” topic thread: