Release of V3.4 Simulations

This announcement summarizes a set of simulations available in a “v3.4” release, which includes minor code updates. Baseline_v3.4 should be used as the current reference run for these simulations.

With our last baseline release (baseline_v3.3), we announced the update of the Rubin throughput curves to as-measured versions, as well as the move to 3xAg (“triple silver”) mirror coatings.
(and if you haven’t seen the mirror after coating, it looks beautiful - Rubin Observatory Achieves Another Major Milestone: Reflective Coating of the 8.4-Meter Primary/Tertiary Mirror | Rubin Observatory)
The SCOC is currently evaluating the effect of extending the time spent in u band to rebalance the final coadded depths in each bandpass, accommodating this change in throughputs. These simulations include extending the u-band exposure time to 38 or 45 seconds, as well as potentially increasing the number of u-band visits by 0, 10, or 20%. Simulations including these changes are available here:
u band 30 seconds x 1.1 visits
u band 30 seconds x 1.2 visits
u band 38 seconds x 1.0 visits
u band 38 seconds x 1.1 visits
u band 38 seconds x 1.2 visits
u band 45 seconds x 1.0 visits
u band 45 seconds x 1.1 visits
Additional analysis of these effects on our standard metrics is available in a Jupyter notebook, and a more in-depth analysis of the effects on photo-z is summarized in these slides.

Next up will be evaluating the effects of varying rolling cadence to improve uniformity at specific data releases. Simulations implementing these effects are available here - Index of /sim-data/sims_featureScheduler_runs3.4/roll_uniform_early_half/
with roll_uniform_early_half_mjdp0_v3.4_10yrs being the primary run (with the same start date as the baseline).
Additional simulations that aid in evaluating the impact of these rolling cadence variations include:
noroll_mjdp0_v3.4_10yrs (and other non-rolling variations at Index of /sim-data/sims_featureScheduler_runs3.4/noroll/) as well as a 3-cycle rolling cadence roll_3_v3.4_10yrs.

Simulations adding weather variances (the “weather_*” runs) are available here: Index of /sim-data/sims_featureScheduler_runs3.4/weather/
And simulations which add start-date variances throughout the year (evaluating impacts of starting the survey at different times during the year - the “start_date_*” runs) are available here:
Index of /sim-data/sims_featureScheduler_runs3.4/start_date/

In addition, task forces to evaluate the Milky Way / Galactic Plane coverage and the Deep Drilling Field coverage are continuing their work, comparing various survey strategy choices and simulations. Please contact the task force leaders (via the LSSTC Slack – #scoc-community-ddf-discuss and #scoc-community-milkyway-discuss are preferred, or email if necessary) if you would like more information or to become involved.
Galactic Plane Coverage - Jay Strader and Rachel Street
DDF - Saurabh Jha

As always, more detailed individual simulation metric results are available online at
http://astro-lsst-01.astro.washington.edu:8080


I’d also like to take this opportunity to note the release of a new (and growing) survey strategy website, https://survey-strategy.lsst.io, which includes a brief summary of the changes in the baseline survey strategy over time (Updates to the baseline — Observing Strategy).

As always, the SCOC welcomes input on these simulations from the community and is accessible in a number of ways: if you are in a Science Collaboration, the most effective way to share input is via your SC liaison (see The Survey Cadence Optimization Information | Rubin Observatory to find out who liaisons with your science collaboration). Additionally, if you are not in a Science Collaboration, feedback can be shared as replies to this post, by contacting the SCOC by email (see The Survey Cadence Optimization Information | Rubin Observatory for details), or by coming to the monthly SCOC office hour on the last Monday of the month at 7 AM Pacific (see SCOC office hour - #8 by fed). The SCOC will continue to deliberate through August 2024 to deliver its Phase 3 recommendation in September 2024, the last recommendation on survey strategy before the beginning of LSST. Therefore, prompt feedback is preferred. Feedback received leading up to and at the Rubin Community Workshop, where the SCOC will discuss its recommendations, and through July 2024 will be incorporated; later feedback may be incorporated in our deliberations on a best-effort basis. The SCOC plans to review the survey strategy annually, so additional feedback, even if it is not incorporated in the Phase 3 recommendation, will be considered throughout the 10-year survey.


To clear up some confusion: there are also additional simulations with “rolling_uniform” in the name, but these are early experiments and not considered ‘release’ runs. When looking at the impact of rolling cadence variations (uniform rolling, i.e. with pauses; no rolling; 3 cycles of rolling cadence; or 4 cycles, the standard baseline), please do not consider these additional sets of simulations.

We are also working on some “accordion WFD” versions (the “roll_uniform_accordian_” sets of simulations), but these are also early experiments and not formally released.

Within the boundaries of “rolling cadence variations” this means:

  • roll_uniform_early_half: four simulations, with start dates scattered throughout the year (the “pXXX” in the run name indicates how many days after the standard start time the simulation was started; this moves the start date from May to August, to November, and to February of the next year).
    What changes in these simulations? There are three cycles of rolling cadence, with occasional pauses throughout the simulation so that intermediate data releases at years 4 and 7 are uniform.
  • noroll: consider the four simulations with “pXXX” in the names, which describe offsets to the start of the survey simulation, as above.
  • start_date: the baseline (with 4 cycles of rolling cadence), with offsets throughout the year.
  • roll_3 : the baseline (no pauses) but with 3 cycles of rolling cadence. There is only one of these.
  • baseline: the baseline, with 4 cycles of rolling cadence. There is only one of these.

You may also find some of the slides in RollingCadences_2024 (Google Slides) helpful to illustrate the uniform rolling strategy.

At LINCC Frameworks we are working on a new catalog-level light-curve simulation tool for the LSST community. OpSim files are extremely useful for that purpose, but they are missing a significant piece of information: the zero point for each observation. Could I ask for a new OpSim column with the zero point?

I’m not sure if this is the right place to ask; please redirect me to the right place if not.

My apologies, I missed your request.
We don’t currently add a zero point for each observation, as this isn’t needed for most uses, given that we provide the 5-sigma limiting magnitude.
We can add this to future outputs, as it’s likely also useful for LSST data management, but for the simulations that are already released, it’s likely better for us to just provide you with some code that could be called to calculate this on the fly.
Because we don’t have atmosphere absorption curves for every airmass, only at steps of 0.1, the zero points will be quantized in a way our m5 values are not (the m5s are scaled using the airmass, seeing, skybrightness, etc.; the zero points would be calculated from scratch for various versions of the throughputs, using a variety of different atmosphere throughput curves, but only at 0.1 increments of the airmass). Note that these variations on atmosphere do not include variation in the atmospheric absorption components (although neither do our m5 values). The value would be the (AB) magnitude that produces one electron in a one-second exposure. Does this sound useful?


Thank you, @ljones!

Could you please clarify this part?

… the zero points will be quantized in a way our m5 values are not (the m5s are scaled using the airmass …

Does this mean that the zp would use a quantized airmass, while the m5s do not? Do the m5 calculations use the zp as an input?

it’s likely better for us to just provide you with some code that could be called to calculate this on the fly.

That would also work, but if the zp were used for the m5s, recomputing it looks like a bit of reverse processing of the simulation pipeline; to me it sounds more reasonable to just use the original value.
Would this code look like the one from the OpSim Summary project?

@ljones this is a gentle reminder about this discussion

Thanks for the reminder.

No, we do not calculate zeropoints (by which I’m assuming you mean instrumental zero points for each visit) in order to calculate m5 depths.
We instead have a sensitivity measurement for the system (equivalent to, but not represented as, a zero point for the system under a defined set of conditions) that results in an m5 value for those same conditions. We then modify this analytically for varying airmass and seeing, resulting in an updated m5 value that represents the value for a given observation.
There are more details in SMTN-002 at
https://smtn-002.lsst.io/#calculating-m5-values-in-the-lsst-operations-simulator
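
For reference, that analytic scaling follows the standard form in SMTN-002. Here is a minimal sketch of it, with the per-band Cm and km coefficients left as inputs rather than hard-coded, since the as-measured values are tabulated in SMTN-002:

import numpy as np

def scale_m5(cm, m_sky, fwhm_eff, exptime, km, airmass):
    """Analytic m5 scaling, following the standard formula
    (see SMTN-002):
    m5 = Cm + 0.50 (m_sky - 21) + 2.5 log10(0.7 / FWHM_eff)
         + 1.25 log10(t_vis / 30) - km (X - 1)

    Inputs are the per-band Cm coefficient, sky brightness
    (mag/arcsec^2), effective seeing FWHM (arcsec), visit exposure
    time (s), atmospheric extinction coefficient km, and airmass X.
    """
    return (
        cm
        + 0.50 * (m_sky - 21.0)
        + 2.5 * np.log10(0.7 / fwhm_eff)
        + 1.25 * np.log10(exptime / 30.0)
        - km * (airmass - 1.0)
    )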

To calculate an instrumental zero point, I could either take that m5 value and turn it back into “the magnitude that would result in 1 electron in the camera, for a 1 second exposure” (aka the instrumental zero point), or I could start from the throughput components, including an airmass-dependent atmosphere throughput curve, and calculate the instrumental zero point that way (like the code in rubin_sim phot_utils here). The only problem with the second approach is that I only have atmosphere curves at airmass steps of 1.0, 1.1, 1.2, 1.3, … 2.5, which is why I said it would be quantized. Or I could write a new equivalent of our m5 calculation that deals with the zero point instead, or write the conversion from m5 to zero point.

If you’re using the zero point in order to calculate SNR for a source in a given visit, none of this is actually necessary**. You can go from the magnitude of a source, plus the m5 for the visit, to the SNR of that source in the visit via rubin_sim.m52snr. Of course, that may not be what you’re looking for here, so I don’t know if that’s helpful.

** I would want to just re-verify this for SNR across all regimes (sky noise vs. shot noise dominated), but it should work for most sources … I remember checking it previously, so I think it’s reasonable. Gamma would change for different observations, but the amount of change in gamma is very small, and I think you’d find something very reasonable by just using a given gamma, or even a gamma value per filter.
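
For concreteness, here is a minimal sketch of that magnitude + m5 to SNR conversion; it mirrors the simple background-dominated relation that the m52snr helper implements (SNR = 5 at m = m5, scaling as a power law in flux):

def m5_to_snr(mag, m5):
    # SNR of a source of magnitude `mag` in a visit with 5-sigma
    # limiting magnitude `m5`, assuming background-dominated noise:
    # SNR = 5 at mag = m5.
    return 5.0 * 10.0 ** (-0.4 * (mag - m5))

# e.g. m5_to_snr(24.0, 24.5) is about 7.9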

Thank you, @ljones, this is very, very helpful!

I think I should give a bit more context on what we are doing. We are simulating F_ν(t, ν) for transients and variable objects. They are mostly point sources, but for some of them, for example SNe Ia, we are going to simulate the host galaxy F_ν(ν, α, δ), which is an extended source. Moreover, we aim to provide data in a “forced photometry” regime, so the simulations would include zero fluxes for some objects at some times.

This creates some problems with the approach you proposed: the S/N equation isn’t designed for extended sources or zero fluxes.

When I started this conversation, my naive approach was that we could derive output fluxes and their uncertainties using the instrumental zero point. You are right: by zero point I meant the value of a flat F_ν (or an AB mag, as a different representation of that) which produces a single electron per second, or per exposure. This zero point would allow us to get a “perfect” electron count for all the components of the model we are interested in: sky, transient, host, etc. The individual noise of each component would be Poisson, so we could get “observed” counts and errors, which we could convert back to “observed” passband fluxes (AB mags) and errors using the same zero point.
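
For illustration, a minimal sketch of this scheme (all names are hypothetical, and the sky is treated here as a single known background component):

import numpy as np

rng = np.random.default_rng(42)

def mag_to_electrons(mag, zp, exptime):
    # expected electron count for an AB magnitude `mag`, given a
    # zero point zp = AB mag producing 1 electron in a 1 s exposure
    return exptime * 10.0 ** (-0.4 * (mag - zp))

def forced_phot(mag_source, sky_electrons, zp, exptime):
    # Poisson-draw the source + sky electrons, subtract the known
    # mean sky, and convert back to an "observed" flux in e-/s.
    # mag_source = np.inf gives zero source flux, as in forced
    # photometry of an object below the detection limit.
    source_e = mag_to_electrons(mag_source, zp, exptime)
    observed_e = rng.poisson(source_e + sky_electrons) - sky_electrons
    flux = observed_e / exptime
    flux_err = np.sqrt(source_e + sky_electrons) / exptime
    return flux, flux_err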

I see now that OpSim does this type of simulation in a more sophisticated way. We would like to use the opsim DB as a single source of all the parameters needed to produce the simulations. I don’t see a big problem with the quantized atmospheric throughput curves, because I don’t think we need that level of precision, at least today. Maybe we could just interpolate the resulting zp on the airmass grid?
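
Something like this sketch, where zp_grid holds zero points precomputed at each tabulated airmass (names hypothetical):

import numpy as np

airmass_grid = np.arange(1.0, 2.51, 0.1)  # tabulated atmosphere steps

def zp_at_airmass(airmass, zp_grid):
    # linearly interpolate the precomputed zero points between
    # the 0.1-step airmass grid points
    return np.interp(airmass, airmass_grid, zp_grid)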

Summarizing, I still think that the zero point value would be very useful, because it provides a straightforward, though not precise, way of producing catalog-level simulations.

Sorry, @ljones, this is another gentle reminder. We can talk somewhere else if you prefer: Slack, email, Zoom?

Hi @malanchev - I’m sorry about the pace for responding to your request.
Unfortunately, our resources are rather tied up with responding to simulation requests and evaluating those simulations for the upcoming SCOC Phase 3 recommendation deadline.
I’m happy to work on including the zero point in the opsim outputs, but it’s just not likely to be available before middle to late September given our other priorities.

However, here is a short function to calculate the photometric zero point for each of our bandpasses, which you could apply to the opsim outputs –

import numpy as np


def scaled_zeropoint(filtername: str, airmass: float, exptime: float = 1) -> float:
    """Photometric zeropoint (magnitude that produces 1 electron
    in a 1 second exposure) for LSST bandpasses (v1.9), using
    a standard atmosphere scaled for different airmasses and potentially
    scaled for other exposure times.

    Parameters
    ----------
    filtername : `str`
        The filter for which to return the photometric zeropoint.
    airmass : `float`
        The airmass at which to return the photometric zeropoint.
    exptime : `float`, optional
        The exposure time for which to return the photometric zeropoint.
        Canonically this should be 1 second, however some uses may
        find other exposure times useful.

    Notes
    -----
    Typically, zeropoints are defined as the magnitude of a source
    which would produce 1 count in a 1 second exposure -
    here we use *electron* counts, not ADU counts.
    """
    # calculated with syseng_throughputs v1.9
    extinction_coeff = {
        "u": -0.458,
        "g": -0.208,
        "r": -0.122,
        "i": -0.074,
        "z": -0.057,
        "y": -0.095,
    }
    zeropoint_X1 = {
        "u": 26.524,
        "g": 28.508,
        "r": 28.361,
        "i": 28.171,
        "z": 27.782,
        "y": 26.818,
    }
    return zeropoint_X1[filtername] + extinction_coeff[filtername] * (airmass - 1) + 2.5 * np.log10(exptime)

This will be available in a future release of rubin_scheduler, but is provided here for your use if desired.
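
For example, applying it to an opsim output database might look like the sketch below (the file name is illustrative; the table and column names, observations, filter, airmass, and visitExposureTime, follow the recent opsim output schema, but check your particular file):

import sqlite3

import pandas as pd

# read the visits from an opsim output database
with sqlite3.connect("baseline_v3.4_10yrs.db") as con:
    visits = pd.read_sql("SELECT * FROM observations", con)

# add a per-visit photometric zero point column, via the function above
visits["zeropoint"] = [
    scaled_zeropoint(f, x, t)
    for f, x, t in zip(visits["filter"], visits["airmass"], visits["visitExposureTime"])
]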


Thank you so much, @ljones. The code is very useful, and sorry for bothering you so intensely; I understand that it is low priority compared to the SCOC work. The end of September is a reasonable timeline for us, and it would still be very helpful to have the zero points directly in the opsim db.