Draft Phase 3 recommendation available and open for feedback

The Survey Cadence Optimization Committee (SCOC) is slated to release its Phase 3 recommendation by 9/30/2024.

A draft of the recommendation has been live since 9/7/2024 at https://pstn-056.lsst.io. The SCOC invites the community to review the draft and share feedback via: the SCOC liaisons to the Science Collaborations (see the SCOC webpage to find your liaison), this community topic (reply below), or by reaching out to the committee chair (Federica Bianco) on Slack or by email.

The SCOC wishes to remind the scientific community that the LSST survey strategy is designed to evolve in response to changes in scientific priorities and/or the Rubin system. The community is always welcome to share feedback with the SCOC, and the survey strategy will continue to be monitored and evaluated during LSST operations: the SCOC will be active throughout the entirety of LSST. PSTN-056 is the last planned recommendation for the survey as a whole before LSST starts; however, the SCOC will refine the plan for Y1 in particular, and the LSST in general, in light of commissioning outcomes.

Please note: the current draft remains under development up to 9/30/2024 and the content may change. Draft versions are marked by a revision date on the cover page and, if you wish, you may refer to the document's GitHub repo for details on the content of changes (link in https://pstn-056.lsst.io).


Congrats to the SCOC. Is there an expected date when the cadence simulation will be available with the metric information? Most figures that include metrics say "draft" on them, so it is hard to know whether those figures are representative of the baseline that incorporates the Phase 3 recommendations. Thanks!

Yes, in fact we were expecting to release them jointly with this post (you probably got that sense when reading the document), but we realized there was a small bug and we are rerunning them. However, we did not want to delay the post further. They should be available this week.

Given that we’re already at Wednesday of this week and next week is LSST@Europe, I ask the SCOC to please consider pushing back their deadline for finalizing the report by one week. Because of the delay, the community has had less time to digest the metrics associated with the proposed baseline, and an extension would give the SCs time to review the metrics and provide feedback. This would mean the report being finalized by October 4th.

Hello, I understand your concern, but unfortunately this is a hard deadline. For this reason, we have socialized the elements of the recommendation over time in as many venues as we could (e.g., the Rubin CW, the SCOC workshop, and any other meeting where we were invited to speak or were able to participate), shared multiple simulations that explored each of the parameters under deliberation, and provided regular updates here on Community covering all of the deliberations on which we had converged and/or voted over the course of the past two years. While feedback before the publication deadline is particularly appreciated, as always we will continue to receive feedback and consider community input past the date of publication of this recommendation and, indeed, throughout operations.

@mschwamb and others -
The updated v3.6 simulations themselves are already available for download at
https://s3df.slac.stanford.edu/data/rubin/sim-data/sims_featureScheduler_runs3.6/

The metrics are still updating, unfortunately, and a notebook with this information should be available before the end of the week.
If you are interested in the metrics from the “pre-draft draft” v3.5 runs that helped the SCOC confirm their decisions, those are at https://s3df.slac.stanford.edu/data/rubin/sim-data/sims_featureScheduler_runs3.5/ and their high-level metrics are available at https://github.com/lsst-pst/survey_strategy/blob/main/fbs_3.5/v3.5_Update.ipynb
These v3.5 runs are also on astro-lsst-01.
(heads up that astro-lsst-01 will be retiring imminently and a new link will be posted).
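
If you would like to compute metrics on these runs yourself in the meantime, here is a minimal sketch of running a single MAF metric on a downloaded database. It assumes rubin_sim is installed and that a run database (file name hypothetical) has been downloaded from the link above; exact class and keyword names may differ between rubin_sim versions:

```python
# Minimal sketch: count visits per sky position for one downloaded run.
# "baseline_v3.6_10yrs.db" is a hypothetical local file name; check the
# directory listing above for the actual run names.
import rubin_sim.maf as maf

run_db = "baseline_v3.6_10yrs.db"

metric = maf.CountMetric(col="observationStartMJD", metric_name="NVisits")
slicer = maf.HealpixSlicer(nside=64)  # HEALpix grid over the sky
bundle = maf.MetricBundle(metric, slicer, constraint="")

group = maf.MetricBundleGroup({"nvisits": bundle}, run_db, out_dir="maf_out")
group.run_all()   # computes the metric over the slicer
group.plot_all()  # writes standard sky map / histogram plots to out_dir
```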


I understand that some of these decisions have been socialized, but, for example, all the presentations showed only three solar system metrics when we have many key ones. The last simulation output that was publicized on community.lsst.org and on the survey strategy website was v3.4, in May. At least for the Solar System community, we would have greater trust and confidence in the cadence if we had time to digest it. We effectively have next week to review this if we do have feedback for the SCOC. The SCOC and the Observatory being late in delivering the baseline cadence should not translate into the community having less time to digest the baseline simulation by reviewing the metrics. Perhaps it would make more sense to me if you could explain the operations/construction reasons why giving the community an additional week to review the final metrics is not possible.

I realize the SCOC minutes don’t contain all of the details that you may want in order to see the impacts of various changes, but one takeaway is that much of the discussion has been about questions that were heavily influenced by task forces tied to the science collaborations affected by those questions (and that don’t necessarily have a large impact beyond those collaborations).

For example, we simulated ToOs to check whether they would have larger impacts on the remainder of the survey than expected and whether the program laid out at the ToO workshop was realistic. We found that it was feasible, that its impact was at or below the level previously expected, and that the prior decision by the SCOC to devote on the order of 3% of survey time to ToOs could remain as a useful limit. And yet, in another sense, this has no impact on the survey strategy choices, as the 3% decision had already been made.

Changes to the Galactic plane, Magellanic Clouds, and South Celestial Pole coverage were likewise intended to be overall neutral to the remainder of the survey, but were impactful within those regions. These were addressed hand-in-hand with a task force intended to provide more feedback on the details of those changes.

Decisions on rolling cadence (3 cycles vs. 4 cycles) in the low-dust WFD are indeed more impactful in general, and these are probably places where you would like more time to evaluate the impact. However, this is also a place where the SCOC have decided they will make a recommendation but, as it doesn't change on-sky behavior within the first year (and likely longer), it can be revisited, and it is indicated as such in the recommendations.

The other widespread impact would be the decision to adapt to updates in the mirror coating (increased sensitivity in grizy, decreased sensitivity in u) by increasing the u-band exposure time to return single-image depths to approximately their pre-update values. The impact of these changes has been discussed for quite a while and can be seen in this u-band comparison notebook from v3.4.
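
For a rough sense of the scale involved (a back-of-the-envelope estimate, not a number from the document): in the sky-background-limited regime the 5σ depth gains roughly 1.25 log10(t_new/t_old) magnitudes, so lengthening u visits from 30s (2x15s) to 38s recovers on the order of a tenth of a magnitude per visit:

```python
import math

# Back-of-the-envelope only: sky-limited SNR ~ sqrt(t), so the 5-sigma
# depth changes by ~1.25 * log10(t_new / t_old) mag. u band is not
# perfectly sky-limited, so treat this as an approximation.
t_old, t_new = 30.0, 38.0  # seconds: old 2x15s u visits vs. new 38s visits
delta_m5 = 1.25 * math.log10(t_new / t_old)
print(f"~{delta_m5:.2f} mag deeper per u-band visit")  # ~0.13 mag
```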

The really major impact you will see in the v3.6 simulations is that we have added more downtime within year one, in an attempt to capture a picture of the observatory coming up to speed and dealing with engineering issues after an extremely short commissioning period. We decided to add this downtime to all of the v3.6 simulations; in v3.5 it is only present in the “telescope_jerk_downtime_v3.5” simulation. If you want to look only at cadence choices, the v3.5 simulations are likely the most useful place to investigate; I probably should have made that clearer when I posted the link to the comparison notebook.

I do apologize for the short time period to look through these new simulations; it's not ideal. I do feel that the changes present in these Phase 3 recommendations are less sweeping than those of the Phase 1 or Phase 2 recommendations, where the footprint was significantly changed and rolling cadence was introduced for the first time, which is really good news, I think!

One other thing I would urge everyone to keep in mind is that we are still learning about the capabilities of the observatory, and we will continue to learn more and improve as operations start. Between these recommendations and the start of operations there will be adjustments to the survey strategy; hopefully one of those adjustments is the adoption of single-snap visits, but there could also be adjustments to accommodate template building or further updates to the system sensitivity. The SCOC will still be active to guide these choices.


The v3.6 comparison notebook is available now as well: https://github.com/lsst-pst/survey_strategy/blob/main/fbs_3.6/v3.6_Update.ipynb

The runs themselves are still available at https://s3df.slac.stanford.edu/data/rubin/sim-data/sims_featureScheduler_runs3.6/

The complete MAF metric results pages are delayed. In the worst of timings, it looks like our astro-lsst-01 server finally gave up the ghost. I am hopeful it will be up at USDF tomorrow. The data that we would put up for these metrics is available at https://s3df.slac.stanford.edu/data/rubin/sim-data/sims_featureScheduler_runs3.6_maf/ … if you download this directory (about 19 GB), you could stand up your own “show_maf” server, or you could look at individual plots.
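
For the “stand up your own show_maf” route, a rough sketch, assuming rubin_sim is installed and the directory above has been downloaded; the tool's flags and defaults may differ by version, so check show_maf --help:

```python
# Launch the show_maf results browser from the downloaded MAF directory.
# show_maf is a rubin_sim command-line tool; see "show_maf --help" for
# the options available in your installed version.
import subprocess

subprocess.run(["show_maf"], cwd="sims_featureScheduler_runs3.6_maf")
```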


And thanks to some hardworking people at USDF, and especially a huge huge massive thank you to @ktl - we have a new home for MAF metrics!

Please see https://usdf-maf.slac.stanford.edu

I realize that there is probably some confusion about the recent opsim runs, especially because a lot changed at v3.6. So, in response to the question “which runs should I look at to send feedback to the SCOC”, here is a modified version of my response:

  • The v3.6 runs all have more downtime in year 1. If you want to know what the final survey might more realistically look like, or to compare between the current options, explore v3.6.

  • If you want to know how the change in strategy (only) impacted metrics, look at v3.5.

Not all of the v3.5 runs have ToOs. All of the v3.6 runs have ToOs, but they also have more downtime.

So which runs you should use depends on the question you're trying to answer.

If you want to ask “is uniform rolling or four-cycle rolling better?”, I would compare baseline_v3.6 to four_cycle_v3.6.

If you want to ask “how much better will single snaps be than two snaps?”, compare baseline_v3.6 to one_snap_v3.6.

If you want to ask “how much change do I see with the SCOC recommendations, compared to previous simulations?”, you might look at all of the baselines from v3.0 to v3.5 and also too_v3.5_10yrs (baseline_v3.5_10yrs does not have ToOs, but ToOs are something you should consider).
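
If it helps, the mechanics of such a comparison look roughly like the sketch below (my own illustration, assuming rubin_sim is installed and both databases are downloaded locally; the file names and details are assumptions to adapt):

```python
# Sketch: compute one summary number for two runs and compare.
# NVisits in i band is only to show the mechanics; substitute the
# metric that matters for your science case.
import rubin_sim.maf as maf

# Assumed local file names; check the run directory for the real ones.
runs = ["baseline_v3.6_10yrs.db", "four_cycle_v3.6_10yrs.db"]

for run_db in runs:
    metric = maf.CountMetric(col="night", metric_name="NVisits")
    slicer = maf.HealpixSlicer(nside=32)
    bundle = maf.MetricBundle(metric, slicer, constraint="filter = 'i'",
                              summary_metrics=[maf.MedianMetric()])
    group = maf.MetricBundleGroup({"nv": bundle}, run_db, out_dir="maf_tmp")
    group.run_all()
    print(run_db, bundle.summary_values)  # median NVisits across the sky
```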

A quick recap of changes from v3.0 to v3.6:

  • v3.0 swapped u and z through the lunar cycle, had 3 cycles of rolling cadence, and used the old Al-Ag-Al throughputs.
  • v3.2 swapped u and y through the lunar cycle, had 4 cycles of rolling cadence, and used the old Al-Ag-Al throughputs.
  • v3.3 swapped u and y, had 4 cycles of rolling cadence, and used the new 3xAg throughputs.
  • v3.4 swapped u and y, had 4 cycles of rolling cadence, and used the new 3xAg throughputs (it should be very close to v3.3).
  • v3.5 swapped u and y, had 3 cycles of rolling cadence spaced as “uniform rolling”, used the new 3xAg throughputs, and also changed the filter balance: u = 38s exposures, grizy = 29.2s exposures, and 10% more u-band visits in the WFD.
    There are also other changes following the SCOC recommendations, such as changes to the GP WFD footprint and the filter balance within the GP WFD and LMC-SMC region. baseline_v3.5_10yrs does not include ToOs, but too_v3.5_10yrs does.
  • v3.6 is like too_v3.5_10yrs (i.e., the SCOC recommendations) BUT adds additional downtime focused in year 1.

Thanks for the detailed report! One small nomenclature question: in the one-snap version of the simulation, grizy exposures are set at 29.2s, while in the two-snap realisation it’s written as ‘2x15s’ exposures. Is this actually 2x14.6s exposures, to add up to 29.2s, or still really 2x15s, adding up to 30s exposures again? Thanks!

We mean 2x14.6s exposures. It’s been 2x15s for so long that it’ll take a while for everyone to update.

On behalf of the TVS Fast Transients subgroup: we know that this recommendation has the potential to impact vulnerable transient science, and we are committed to quantifying the effects of Phase 3. We understand that kilonovae (KNe) are being used by the SCOC as the only metric for fast transients, but we want to emphasize how rich in important and physically diverse phenomena this region of parameter space is, far beyond kilonovae. We are putting together metrics for GRB afterglows, SNe IIb with shock-cooling emission peaks, stellar flares, and FBOTs, if not more. We expect that the loss of a cycle of rolling will impact these science cases, and we appreciate your patience as we quantify the extent.

We do also have some “generic fast transient” metrics (color_slope and 2-day color_slope), which simply count the number of times we managed to measure both a color and a slope for a spot on the sky, either within a single night or over two nights. The difference between 3 and 4 rolling cycles looks pretty small with these metrics.
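
For anyone who wants to experiment along these lines, here is a toy, self-contained illustration of the kind of bookkeeping a “color plus slope within a night” counter does; it is my sketch, not the actual color_slope implementation in MAF:

```python
# Toy version of a "color + slope in one night" counter: a night counts
# if it has visits in >= 2 filters (a color) and >= 2 visits in the same
# filter separated in time (a slope). Not the real MAF metric.
from collections import defaultdict

def nights_with_color_and_slope(visits, min_gap_hours=0.5):
    """visits: list of (night, filter, mjd) tuples for one sky position."""
    by_night = defaultdict(list)
    for night, band, mjd in visits:
        by_night[night].append((band, mjd))
    count = 0
    for obs in by_night.values():
        has_color = len({band for band, _ in obs}) >= 2
        has_slope = any(
            b1 == b2 and abs(m2 - m1) >= min_gap_hours / 24.0
            for i, (b1, m1) in enumerate(obs)
            for b2, m2 in obs[i + 1:]
        )
        if has_color and has_slope:
            count += 1
    return count

# Two g visits ~1.2 hours apart plus an r visit: one qualifying night.
print(nights_with_color_and_slope(
    [(100, "g", 60000.10), (100, "r", 60000.12), (100, "g", 60000.15)]))
```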