Scientific impact of moving from 2 snaps to a single exposure

There are technical motivations that may favor moving away from the 2x15 s snaps per visit to a single 30 s exposure: it would reduce the readout-noise contribution and increase overall survey efficiency by a few percent. However, there are scientific drawbacks that should be considered.

I started this thread to formulate and discuss the drawbacks and advantages of different snap strategies.

I want to start with two obvious drawbacks:

The saturation limit would become even brighter (a rough estimate follows below these two points). Time-dependent phenomena like planetary transits (including in WFD), microlensing (really only in special programs, but I assume special programs could adopt different snap strategies), stellar variability, and anything that brightens quickly would lose targets or valuable data. Stellar studies, both variable and static, would suffer as well, and the magnitude range overlapping with other surveys would shrink.

We would lose sub-minute timescales. Right now the two snaps can be used to investigate sub-minute variability, which is relevant for a number of transients, for example CVs. Those timescales would be lost (in WFD), and the shortest timescales that could be studied would be of order one minute, in the overlapping regions of consecutive visits.
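For scale, a back-of-the-envelope estimate (assuming saturation is set by a fixed full-well depth and ignoring seeing-dependent peak-pixel effects): the saturation limit shifts by

$$\Delta m = 2.5\,\log_{10}\!\left(\frac{t_{\rm single}}{t_{\rm snap}}\right) = 2.5\,\log_{10}\!\left(\frac{30}{15}\right) \approx 0.75~{\rm mag},$$

so a single 30 s exposure saturates on stars ~0.75 mag fainter than a 15 s snap does (e.g. r ~ 16 moving to r ~ 16.75), and the stars in between are lost.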


Saturation at r=16 with 15 s snaps is already a severe limit for our science case (blazars), besides the cases you have already mentioned. We would lose many flaring/outburst phases, the most interesting ones, which could be the targets of follow-up observations, especially at high energies.
Instead, we were considering proposing different exposure times for the two snaps per visit, one shorter than 15 s and one longer, so that the shorter snap would push the saturation limit to brighter magnitudes. Are there technical motivations that would prevent this choice?
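For a rough sense of what an asymmetric split could buy (a sketch with a made-up 5 s + 25 s split, not a proposed configuration), the same full-well scaling as above gives:

```python
import math

# Illustrative only: bright-end headroom from an asymmetric snap pair.
# The 5 s short snap is an assumed example value.
t_short, t_ref = 5.0, 15.0  # seconds
delta_m = 2.5 * math.log10(t_ref / t_short)
print(f"a {t_short:.0f} s snap saturates ~{delta_m:.2f} mag brighter than a 15 s snap")
# -> ~1.19 mag, e.g. r ~ 16 would move to roughly r ~ 14.8 for the short snap
```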

Incidentally, a long time ago we had a similar proposal with a randomized split; see the document below:

At the time it was shot down, but I still think it is a good idea.

anze

Is it possible to gather here some more information, or pointers to the technical motivations currently being discussed on this matter?

Shot down with which rationale, among the various cons you are listing?

Even with snaps there will be some effort involved in getting short-timescale variability information. According to the DPDD, snaps are user-accessible as raw images, but all* of the downstream data products LSST produces use the Processed Visit Image (PVI) with the snaps combined. The one exception that I’m aware of is the diffFlux/diffFluxErr field of DIASource, which reports the PSF flux at the DIASource position measured on the difference of the two snaps. That is presumably enough to detect some short-timescale variation, but my reading is that any deeper analysis would likely require user-generated processing.
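As a toy illustration of that last point, here is a minimal sketch of the kind of user-level screen the diffFlux/diffFluxErr fields would allow; the arrays stand in for a DIASource table, and the values and the 5-sigma cut are synthetic, not LSST data:

```python
import numpy as np

# Synthetic stand-in for DIASource rows: diffFlux/diffFluxErr are the DPDD
# fields described above (PSF flux on the difference of the two snaps).
rng = np.random.default_rng(42)
diff_flux = rng.normal(0.0, 100.0, size=1000)  # nJy, synthetic values
diff_flux_err = np.full(1000, 100.0)           # nJy, synthetic values

snr = diff_flux / diff_flux_err
candidates = np.flatnonzero(np.abs(snr) > 5.0)
print(f"{candidates.size} sources vary between snaps at >5 sigma")
```

Anything beyond such a single-number screen (e.g. actual two-point light curves) would indeed need the raw snaps.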

The original motivation for the snaps as I understand it was artifact rejection; it’s clear that there are classes of false positives that can be more easily rejected with snaps. I think it’s difficult to be quantitative at this point about how much of an effect that creates.

My personal view is that the ~10% loss of survey time imposed by the extra readout is a steep price to pay, and we should explore innovative observing strategies (e.g., twilight observing, short exposure sequences) that might use that time to enable these science goals.

I found some discussion on handling saturation problems here:
Getting photometry of very bright stars
but it stopped more than a year ago. Is there anything new on that topic?

I am not aware of much additional work on the issue of bright-star photometry. Probably the most relevant related material is the work on halo photometry in Kepler/K2 by Tim White at Aarhus (https://academic.oup.com/mnras/article/471/3/2882/4081952) and work by Ben Pope at NYU (https://academic.oup.com/mnrasl/article/455/1/L36/2589594). However, the utility of those methods for LSST will depend heavily on the ultimate LSST detector characteristics, including blending, bleeding, flux conservation (or lack thereof), nonlinearity, and other saturation behavior. Optimistically, I suspect we will be able to obtain reasonably precise light curves of arbitrarily bright stars from halo photometry, but the questions are how close to the photon limit we can get and what the dominant systematics will be.

Pretty much anything less than a total of 30 s on-sky per visit (with or without snaps) will significantly impact Solar System science: we won't get the numbers of Solar System detections. For example, a single 15 s snap in r, g, and i would decimate the number of Solar System detections.

Two snaps per visit don't really help Solar System science. For the very fastest near-Earth asteroids, the two snaps tell you which way an object was moving; from a single snap you still have the streak, which narrows the motion down to a line, just not its direction. For all other Solar System science cases we're agnostic, but going to a 1x30 s exposure frees up ~7% of observing time, which we would rather see used to ensure coverage of the northern ecliptic spur (NES) needed for outer Solar System science and for confirming/disproving the Planet 9 hypothesis. Gaining that ~7% would allow the NES and additional mini-surveys from the community, so we will likely advocate for a 1x30 s observation at each visit.

Aha! This is awesome. I had mentioned this to David several times - mostly people looked at me like I had grown a third eye when I suggested randomized LSST exposures, but apparently someone was taking me seriously! @slosar are you here? Any thoughts on writing this up as a white paper? I think we should have a white paper collecting the pros and cons of different snap strategies for the Nov 30 submission deadline.

First, I think everybody is overly excited about the 7% extra. Even if it happens, it will be distributed among science cases, so your particular case might only get some 4% of it (even with sharing).

@fed @johannct Yes, I'm happy to help write a white paper. What happened last time is that I got a response that was basically no, without too many convincing arguments, but then Lupton privately told me that all these things were in the air, so my reading was that in reality they thought it was premature to think about these details; now it clearly isn't. One possibility would be to take a single snap with probability x% and two randomized snaps with probability (100-x)%, so you get a tunable trade-off between increased total time and increased variability/dynamic-range/systematics coverage, without disturbing the standard 30 s/visit strategy. Let's catch up on Slack to discuss this?
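To make the tunable trade-off concrete, a minimal sketch (x, the 30 s total, and the 5 s minimum snap are placeholders, not proposed values):

```python
import random

def choose_visit_exposures(x=0.3, total=30.0, min_snap=5.0, rng=random):
    """With probability x, take a single exposure; otherwise take two
    snaps with a randomized split of the same total on-sky time."""
    if rng.random() < x:
        return [total]
    t1 = rng.uniform(min_snap, total - min_snap)
    return [t1, total - t1]

print(choose_visit_exposures())  # e.g. [30.0] or [11.7, 18.3]
```

Tuning x then trades the recovered readout time directly against the fraction of visits that retain intra-visit variability information and bright-end dynamic range.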


Going to a 30 s exposure will improve the readout noise, but this, if anything, will only matter in u. So I'll rephrase the initial question: does the transient science interested in 2x15 s exposures care about all filters, including u?

Hi, I am Roberto Silvotti from Torino Observatory. I have just been at an LSST national meeting in Palermo to discuss white papers.
Given that I am interested in white-dwarf planetary transits, and given that a WD transit has a typical duration of 1-2 min, keeping the present strategy of 2 snaps of 15 s each would be very useful for this program: it doubles the number of points in or near the transit and allows us to better distinguish a real transit from other phenomena like stellar pulsations or dark spots and outbursts (rarer, but not impossible, in WDs). Note that for this program the filters do not matter; we just need to catch as many points as possible in the transit.
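A quick sanity check of the sampling argument (all numbers assumed for illustration; overheads taken from the timing discussion later in this thread):

```python
# Can both snaps of one visit land inside a single WD transit?
transit_s = 90.0               # assumed mid-range 1-2 min transit duration
gap_s = 15.0 + 1.0 + 2.0       # exposure + shutter + readout between snap starts
both_in_transit = gap_s + 15.0 <= transit_s
print(both_in_transit)  # True: a visit starting in-transit gives two in-transit points
```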

@ebellm @mschwamb or anyone else, could you clarify where the 10%/7% numbers come from? Is the 7% from 2 s/30 s, rounded up?
The effective exposure time of 15 s in the key numbers is 16 s - 0.5*2 s, accounting for the 2 s of shutter open/close time, no? A single exposure of 31 s would give the same effective exposure time, so I suppose the 10% number is 34 s/31 s - 1?
I recall hearing some discussion about how a longer readout time could reduce read artifacts. Given that the median slew time is 4.8 s, if there's no need to do a second readout, could the read time be increased to ~4 s with a single exposure?

Here is the current baseline for one pointing with 2 exposures of 15 s:
15 s [exposure] + 1 s [shutter] + 2 s [readout] + 15 s [exposure] + 1 s [shutter] + ~5 s [new pointing + readout]
Total = 34 s + new pointing time ~ 38 s

So for one pointing with 1 exposure of 30 s:
30 s [exposure] + 1 s [shutter] + ~5 s [new pointing + readout]
Total = 31 s + new pointing time ~ 36 s
And yes, the readout time can then be stretched to fill the new pointing time (up to ~5 s) without extra overhead.

So the 3 s win is more on the 6% side, and it can be even less if the pointing time is greater than 5 s (this is highly dependent on the observation plan, for which there is a white-paper call at the moment).
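To make that accounting concrete (the pointing times are illustrative; the real distribution comes from opsim):

```python
def visit_time(n_snaps, pointing):
    """Wall-clock time per visit in seconds; the final readout hides in the slew."""
    shutter, readout = 1.0, 2.0
    if n_snaps == 2:
        return 15.0 + shutter + readout + 15.0 + shutter + pointing  # 34 + pointing
    return 30.0 + shutter + pointing                                 # 31 + pointing

for pointing in (4.0, 5.0, 8.0):
    gain = visit_time(2, pointing) / visit_time(1, pointing) - 1.0
    print(f"pointing {pointing:.0f} s: extra visits ~ {gain:.1%}")
# ~8.6%, ~8.3%, ~7.7%: the naive win shrinks as the pointing time grows, and the
# full opsim simulations (with the real slew distribution) land nearer 6-7%.
```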

Note that, from the current sensor tests, some devices show a good win in readout noise with a 3 or 4 s readout; the improvement more or less saturates for readout times > 5 s.
Note also that the readout-noise improvement matters only in u, but in all filters a reduction in crosstalk is also expected. Still, crosstalk is expected not to be a limiting factor (we should be able to correct for it).


The issues for CVs center on picking up the shortest variability timescales: the shortest-period systems (AM CVns have periods down to a few minutes), eclipses that last minutes, and pulsations that last minutes. In addition, the saturation level matters for the tie-in to past survey data. We also prefer the u filter, as CVs are very blue. The degree of variability that can be picked up will be better known from the ZTF data, especially from the fast cadence in the plane that will be carried out. This will likely not be known in time for the white papers, but it should be known prior to the start of LSST, in time to inform future LSST cadences during the survey.

The 7% comes from @ljones comparing the time from opsim runs that have 1 snap versus 2 - at least that's what she came up with at the Flatiron Cadence Hackathon. Lynne, can you comment more?

@antilogus’s post above explains the efficiency gain clearly (@dtaranu)

Yes, @antilogus spells it out pretty clearly, and it agrees well with the ~7% that came from comparing a simulation with 1 snap against a simulation with 2 (there will be some randomization between simulations with different configurations).

As to whether this amount of time is something to be excited about or not: consider that the entire DD program (with the current survey configuration) uses about 5-6% of the total survey time. It's certainly a good point that this additional time isn't likely to be added to one specific survey, but having it available gives the other proposed surveys much more breathing room.

That said, of course, the final survey strategy will be chosen to maximize science - so with an understanding of what science is gained by doing short or mixed-length snaps within a visit (and mixed snaps are technically feasible), versus what science is gained by doing additional observations or adding a particular mini-survey, the Observing Strategy Committee will be better informed.

Let me add a comment that I made on Slack today on this topic:

For overall quality and science: the one-shot option with one readout per pointing sounds best anyway for u, with or without the noise issue of the ITL sensors (in u the sky is low, so a short exposure collects even less sky, and electronic noise will matter anyway).

For the other filters (or some of them: g, r, i?), considering different exposure times for the two exposures of one pointing is certainly a topic to investigate. I think that for these filters the noise/readout-time issue can be handled, since you can always take the long exposure of the pair first with a short readout time (long exposure = high sky level, so electronic noise doesn't dominate), followed by the short exposure with a long readout during the slew (there you can even go down to ~2-3 e- readout noise with the e2v devices, and 4-8 e- with the ITL).
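To illustrate why the readout noise only bites in u, a toy per-pixel noise budget (every number below is an assumed placeholder, not a measured LSST value; real sky rates depend on moon, airmass, and the final throughputs):

```python
import math

read_noise = 8.0                   # e-, ITL-like upper value quoted above
sky_rates = {"u": 0.5, "r": 30.0}  # e-/pixel/s, assumed dark-sky rates

for band, rate in sky_rates.items():
    for t in (15.0, 30.0):
        sky_noise = math.sqrt(rate * t)  # Poisson noise of the sky alone
        print(f"{band} {t:4.0f} s: sky noise {sky_noise:5.1f} e-, read noise {read_noise} e-")
# In u the read noise rivals or exceeds the per-snap sky noise, so one readout
# per pointing helps; in r and redder bands the sky dominates either way.
```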