Survey Simulations v1.7 release (January 2021)


We are pleased to announce the release of v1.7 of the survey strategy simulations. For general information on the survey simulations, see our PSTN-051 document. PSTN-051 will be updated shortly to include more in-depth analysis of these new runs.

The major change for this release is that we have switched the default observing behavior to visits of 2x15s exposures rather than 1x30s exposures. The extra shutter motion and readout time is a significant increase in overhead compared to our previous default simulations. The run baseline_nexp2_v1.7_10yrs.db is the simulation against which the other runs in this release should be compared.
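As a rough illustration of where the overhead comes from, the per-visit time can be sketched as below. The readout and shutter times used here are assumed round numbers for illustration only, not official camera specifications.

```python
# Rough per-visit time comparison for 1x30s vs 2x15s visits.
# READOUT_S and SHUTTER_S are assumed illustrative values, not
# official camera specifications.
READOUT_S = 2.0   # assumed readout time per exposure (s)
SHUTTER_S = 1.0   # assumed shutter open/close time per exposure (s)

def visit_time(n_exp, exp_time_s):
    """Open-shutter time plus per-exposure overheads for one visit (s)."""
    return n_exp * (exp_time_s + READOUT_S + SHUTTER_S)

extra = visit_time(2, 15.0) - visit_time(1, 30.0)
print(visit_time(1, 30.0), visit_time(2, 15.0), extra)  # 33.0 36.0 3.0
```

Even a few extra seconds per visit, multiplied over the order of a couple of million visits in ten years, adds up to a meaningful loss of open-shutter time.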

Other updates in this release:

• Updated telescope kinematic model, which fixes problems in previous releases where observations could be taken slightly below the altitude limit of the telescope and the camera rotator could be put in unphysical positions.
• Improved rolling cadence that maintains the proper weighting for regions outside the wide-fast-deep survey and supports properly variable rolling strength.

MAF outputs for these (and previous releases) can be found at:
The sqlite files for these simulations are also at NCSA:

Descriptions of runs


We run two baseline simulations, baseline_nexp1_v1.7_10yrs.db and baseline_nexp2_v1.7_10yrs.db. Again, the rest of the runs in this release use 2x15s visits and should be compared to baseline_nexp2_v1.7_10yrs.db.


We try a variety of different survey footprints based on increasing the low-extinction area of the wide-fast-deep footprint. We also test including coverage of the bulge and outer Milky Way disk as part of the WFD area.


We test rolling cadence by dividing the sky in half (nslice2) or thirds (nslice3). We also vary the weight of the rolling from 20% (scale0.2) to 100% (scale1.0). Because the scheduler tries to keep observations at low airmass, the rolling, especially for the nslice3 runs, is not as strong as it could be.

These runs also include a basis function which modulates the emphasis of the survey so the northern half of the sky is observed on even days and the southern half on odd days.
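As a toy illustration of what the nslice and scale parameters mean (this is a hedged sketch of the weighting idea, not the scheduler's actual basis-function code):

```python
# Toy sketch of rolling-cadence weights: with nslice sky slices and
# rolling strength `scale`, the active slice is boosted and the others
# suppressed so the total stays fixed. NOT the scheduler's real code.
def slice_weights(nslice, scale, active):
    return [
        1.0 + scale * (nslice - 1) if i == active else 1.0 - scale
        for i in range(nslice)
    ]

print(slice_weights(2, 1.0, 0))   # [2.0, 0.0]  -- full-strength, half sky
print(slice_weights(3, 0.2, 1))   # [0.8, 1.4, 0.8]  -- mild rolling, thirds
```

In this picture, the daily north/south modulation described above is simply one more on/off factor choosing which half of the sky is emphasized on a given night.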


These are rolling simulations similar to the above, but do not include the daily north/south modulation.


These runs vary the size of the spatial dithering of the deep drilling fields from 0 to 2 degrees.


These runs test using a custom dither pattern for the Euclid deep drilling field to better match the Euclid field of view.


In the baseline, observations are typically taken in pairs separated by ~22 minutes. In these runs, we vary that pair time from 11 minutes to 55 minutes.
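To check what pair separations a run actually delivered, you can compute same-night gaps from the visit times. A small sketch (the MJD values below are made up for illustration):

```python
# Gaps (in minutes) between consecutive visits, given their times in MJD.
# Useful for checking realized pair separations against the requested one.
def gaps_minutes(mjds):
    mjds = sorted(mjds)
    return [(later - earlier) * 24.0 * 60.0
            for earlier, later in zip(mjds, mjds[1:])]

# Two visits ~22 minutes apart (illustrative MJDs):
print(gaps_minutes([59853.1000, 59853.1000 + 22.0 / 1440.0]))
```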


The baseline simulations do not attempt to pair twilight visits. In these runs, twilight observations are paired in either the same filter or a different filter (mixed). The twilight observations are also set to attempt to re-observe areas of the sky that have already been observed earlier in the night (repeat).


An update of previous experiments that use twilight time to perform an NEO survey. These simulations include a large number of 1s observations. We still need to verify that the camera and network could handle taking so many short exposures.


Observations in the u filter are taken as single snaps, and we test increasing u-band exposure times. Note, DDF u-band observations are still the default 2x15s exposures.


An experiment where long gaps in g-band exposures are avoided, even if that means observing g in bright time. We test different limits on how many g-observations are taken and if the blob scheduler tries to maintain contiguous areas.


Some additional notes:

The u-long family varies the exposure time in u-band, but attempts to keep the number of u-band visits the same. This means decreasing the number of visits in other filters! The decrease should be spread evenly among the other filters, which typically receive many more visits than u band (so have a smaller percentage drop in total visits).
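The bookkeeping behind that trade-off can be sketched with back-of-envelope numbers (the u-band time fraction below is an assumed placeholder, not a measured value from the runs):

```python
# Back-of-envelope: if u-band currently uses a fraction `u_time_frac` of
# total survey time and its exposure time is scaled by `factor` while the
# u visit count is held fixed, roughly this fraction of total time must
# come out of the other filters. Overheads are ignored, so this is only
# approximate.
def time_taken_from_other_filters(u_time_frac, factor):
    return u_time_frac * (factor - 1.0)

# Assumed example: u holds ~6% of survey time and u exposures double.
print(time_taken_from_other_filters(0.06, 2.0))  # 0.06
```

Spread across five other filters that each have many more visits than u band, the per-filter percentage drop stays small, consistent with the note above.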

You may also find the following CSV files extremely useful (you can read them into whatever your favorite data processing tool is).
To read one into Python via pandas and get a DataFrame indexed by run name, I suggest:
import pandas as pd
summary = pd.read_csv('big_1.7.csv', index_col=0)

The following CSV files contain the summary statistics for every metric we run, for all of the 1.5, 1.6 and 1.7 runs (respectively).

(Note that I have not included a link to FBS 1.4 run summary metrics … this is available, but not recommended. The FBS 1.4 runs have been superseded by runs in 1.5, 1.6 or 1.7, or may be part of a strategy question we are not actively considering at this point.)

As always, please be careful with your normalizations; with that caveat, these CSV files can be very useful for comparing results across different simulations.
The complete MAF outputs (including plots) are available at – this link contains a comprehensive set of simulations which are actively under consideration (i.e. the 1.5 and 1.6 rolling cadence runs are not included, and neither are the 1.5 twilight neo runs).
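One way to handle the normalization caveat is to divide each run's summary statistics by the baseline row before comparing, so family members read as fractional changes. A sketch using the DataFrame layout from the pandas suggestion above (the baseline run name is from this release, but check it against the index of your downloaded CSV):

```python
import pandas as pd

def normalize_to_baseline(summary, baseline_run):
    """Divide every run's metric values by the baseline run's values,
    so runs can be compared as fractional changes from the baseline."""
    return summary / summary.loc[baseline_run]

# Usage against the real file:
# summary = pd.read_csv("big_1.7.csv", index_col=0)
# norm = normalize_to_baseline(summary, "baseline_nexp2_v1.7_10yrs")
```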

It’s mentioned above, but I wanted to emphasize again:

All of the runs here use 2x15s visits with the notable exception of baseline_nexp1_v1.7_10yrs, which uses 1x30s visits.

When comparing the impact of survey strategy variations in each family here, compare within the family and compare the family as a whole to baseline_nexp2_v1.7_10yrs when relevant (i.e. when the footprint is generally the same … which is true for all of these families except the footprint_tune family, which still can be compared with the baseline but with a different intent).

When you want to check if v1.7 (in general) changed anything relevant to your science, compare baseline_nexp1_v1.7_10yrs to baseline_v1.6_10yrs and baseline_v1.5_10yrs – the purpose of the baseline_nexp1_v1.7_10yrs run is to serve as a link to the previous releases.

Hi Lynne, I’m computing a metric around the Galactic Plane, and I imported the Dust Model into the MAF framework, as suggested by Peter Yoachim. Could you please tell me some details about this model? Is it based on the Schlegel et al. 1998 dust maps?

Yes, that’s right. I’m sorry we don’t have that in the docs anywhere. But the dust maps we are using are just the Schlegel dust maps (SFD_dust_4096 north and south), resampled onto healpix grids.