Baseline v3.3 Run released!

I would like to draw your attention to a new simulation of the baseline survey – baseline_v3.3_10yrs
This illustrates the baseline v3.3 survey strategy, which is pretty much the same as v3.2 (a few minor bug fixes), but most importantly – it contains an update to the throughput curves!

The LSST PST (project science team) recently reviewed the question of whether to coat the mirrors with “triple silver” (Ag coatings for M1, M2 and M3) or to stick with the previously simulated “aluminum - silver - aluminum” (Al-Ag-Al for M1, M2 and M3 respectively) coatings. The project moved ahead with a change request to adopt 3Ag, based on the evaluation showing higher overall survey efficiency. The major thing to know about this mirror coating change is that the sensitivity in u band is lower (by about 0.2 magnitudes with 3Ag compared to Al-Ag-Al), but the sensitivity in all other bands (grizy) is higher by about 0.1 - 0.15 magnitudes. Because the survey includes more visits in grizy bands than in u band, the overall survey efficiency is increased by approximately 15-20% by this coating change.

The SCOC will be evaluating possible responses to the u band drop in sensitivity, such as potentially having longer visits in u band, which could be accommodated by slightly shorter visits in other bandpasses, for example.

A high-level overview of the changes from v3.2 to v3.3, including the effect of the throughput updates, is presented in this series of slides – Survey Simulation v3.2 -> v3.3 (Google Slides)

The baseline_v3.3_10yrs simulation can be downloaded from https://s3df.slac.stanford.edu/data/rubin/sim-data/sims_featureScheduler_runs3.3/baseline/baseline_v3.3_10yrs.db
MAF analysis outputs are available at http://astro-lsst-01.astro.washington.edu:8080
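
For anyone who wants to poke at the visits directly, here is a minimal sketch of reading the downloaded database with sqlite3 and pandas. The table name “observations” is what I’d expect for recent FBS outputs; check the schema of your copy if that query fails.

```python
import sqlite3
import pandas as pd

# Read all simulated visits from the downloaded baseline database.
# "observations" is the visit table name expected in recent FBS outputs.
con = sqlite3.connect("baseline_v3.3_10yrs.db")
visits = pd.read_sql("SELECT * FROM observations", con)
con.close()

print(f"{len(visits)} visits")
print(visits.columns.tolist())
```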

A summary H5 file (readable with MAF’s get_metric_summaries if you set summary_source to the summary.h5 file) is available at https://s3df.slac.stanford.edu/data/rubin/sim-data/sims_featureScheduler_runs3.3/maf/summary.h5
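
As a sketch of what that looks like in practice (only summary_source comes from the note above; how you slice the resulting table afterwards is up to you):

```python
import rubin_sim.maf as maf

# Load the pre-computed metric summary statistics from the downloaded h5 file.
summaries = maf.get_metric_summaries(summary_source="summary.h5")

# summaries should come back as a pandas DataFrame (runs x metric summary values);
# adjust the indexing below if your run names are formatted differently.
print(summaries.loc["baseline_v3.3_10yrs"].head())
```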

The notebook referred to in the slides above, comparing metrics across baseline strategies from v1.x to v3.3, is available here: https://github.com/lsst-pst/survey_strategy/blob/main/fbs_3.3/v3.3_Update.ipynb

In writing a notebook for a different purpose, I realized that I had forgotten to mention that the NearSun twilight visits were called “twilight neo” in v3.2 (and maybe “twi neo” in earlier simulations) – but they are labelled as “twilight_near_sun” in v3.3 (which better matches their requested survey name).
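
If you are comparing near-sun twilight visits across versions, something like the following hypothetical helper papers over the rename. It assumes the visits DataFrame from the earlier sketch; the label strings are the ones mentioned above (the exact string in the earliest runs is a guess), and “note” is the column that carries them in these databases.

```python
import numpy as np

# Labels used for the near-sun twilight visits across baseline versions.
NEAR_SUN_LABELS = ("twilight_near_sun", "twilight neo", "twi neo")

def near_sun_mask(visits, note_col="note"):
    """Boolean mask selecting near-sun twilight visits under any of the labels."""
    notes = visits[note_col].fillna("").str.lower()
    mask = np.zeros(len(visits), dtype=bool)
    for label in NEAR_SUN_LABELS:
        mask |= notes.str.startswith(label).to_numpy()
    return mask

# Example: count the near-sun twilight visits in a run.
# print(near_sun_mask(visits).sum())
```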

Some of the ‘note’ information is straightforward – “DD:ELAIS1”, for example, is a DD field, and visits labelled like this were taken in a sequence specifically for that DDF (note that there will also be “typical WFD-style” visits at that point on the sky too). Some is not immediately obvious, but is clear if you know some extra information … “blob_long” and “long” visits are part of the triplet sets (see the SCOC recommendation to occasionally take triplets of visits in a night, for better sub-night time sampling): “blob_long” labels the first pair, while “long” labels the more widely separated third visit. “twilight_near_sun” visits are part of the near-sun twilight visits, taken at high airmass to find near-sun objects.
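
As an illustration, pulling out the triplet-night visits described above could look like this (again assuming the visits DataFrame from the earlier sketch, that the ‘note’ strings start with “blob_long”/“long” exactly as written, and that the standard ‘night’ column is present):

```python
# Select the visits that were scheduled as part of the triplet sets.
notes = visits["note"].fillna("")
triplets = visits[notes.str.startswith("blob_long") | notes.str.startswith("long")]

# How many triplet-program visits per night?
print(triplets.groupby("night").size().describe())
```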

“greedy” are the simplest twilight visits, taken basically as a time-filler … these are scheduled as ‘the next best visit to take’ only, instead of as blocks/blobs. They are single visits, not pairs.

“pair_15” are standard survey visits taken during twilight … the interval is 15 minutes instead of the standard 33, because we need to finish the twilight visit block before twilight is done.

“pair_33” are the standard survey pair visits. You can see which filters are in each pair from the name (you will see iz pairs, as well as zy … and even yy). “a” or “b” indicates whether the visit was taken in the first or second pass of the pair.
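
Putting the label descriptions together, here is a quick way to see which ‘note’ values are present and how the pair visits break down by filter combination. The “pair_33, iz, a” formatting of the note strings is my assumption here – check a few rows of your own copy first.

```python
# Tally visits by their scheduler 'note' label.
notes = visits["note"].fillna("")
print(notes.value_counts())

# For the pair visits, split the note into program, filter pair, and a/b pass.
parts = notes[notes.str.startswith("pair")].str.split(",", expand=True)
parts = parts.apply(lambda col: col.str.strip())
parts.columns = ["program", "filters", "pass"][: parts.shape[1]]
print(parts.value_counts().head(10))
```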