Elahe is investigating potential scheduling algorithms (that is, which kinds of algorithms are useful, from an operations-research point of view, for developing scheduler 'controllers'). As part of that development, she is building prototype cost functions and evaluating the resulting observations. She is currently investigating 'approximate optimal control' solutions. The scheduling algorithms focus on optimizing observations within a single night.
One of the areas we would like to investigate is what the 'features' used to weight the desired next field should be (see slide 17 of Elahe's slides on the confluence page).
For example: would it be useful to combine skybrightness (in each field, in each filter), seeing (in each field, in each filter), and cloud transparency (in each field) into a single SNR or m5 depth feature in each field/filter?
If different features drive different scheduler behavior, they should presumably not be combined. For example, suppose we have a cloud mask (0 = no clouds, 1 = clouds) and a separate skybrightness estimate. With the cloud information kept separate, the scheduler might be more strongly driven to move to a different field; with everything combined into a single value, it might be more likely to change the filter but stay in the same part of the sky.
I advocate using the combination of sky brightness and attenuation due to clouds to establish a map of m5, the five-sigma point-source sensitivity, across the sky. If we know the dependence of the merit function on m5, then we can readily re-compute the optimal sequence in response to changing conditions.
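Concretely, something like the standard LSST m5 scaling could be used here. The sketch below treats cloud attenuation as a grey extinction term (an assumption) and uses placeholder per-filter constants; the real Cm and km values would come from the throughput tables:

```python
import numpy as np

# Per-filter constants: Cm sets the zero point, km is the atmospheric
# extinction coefficient. Values below are illustrative placeholders,
# not authoritative numbers.
CM = {"u": 22.9, "g": 24.3, "r": 24.3, "i": 24.2, "z": 24.1, "y": 23.7}
KM = {"u": 0.45, "g": 0.16, "r": 0.09, "i": 0.07, "z": 0.05, "y": 0.14}

def m5_map(filt, m_sky, fwhm_eff, airmass, cloud_extinction, t_vis=30.0):
    """Five-sigma point-source depth across a set of fields.

    m_sky: sky brightness map (mag/arcsec^2)
    fwhm_eff: effective seeing map (arcsec)
    airmass: airmass of each field
    cloud_extinction: grey cloud attenuation (mag); 0 = photometric
    t_vis: visit exposure time (s)
    """
    m5 = (CM[filt]
          + 0.50 * (m_sky - 21.0)
          + 2.5 * np.log10(0.7 / fwhm_eff)
          + 1.25 * np.log10(t_vis / 30.0)
          - KM[filt] * (airmass - 1.0))
    # Grey clouds attenuate the source flux directly, so they subtract
    # straight off the depth.
    return m5 - cloud_extinction
```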
My own thinking (from working on the ZTF simulator) is that this sort of scalar objective function is exactly what one wants. An SNR metric could fold in sky brightness, seeing (as a function of altitude!), atmospheric extinction (again, altitude dependent), and transparency, and avoid explicit moon and airmass cuts entirely.
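A minimal sketch of how the altitude dependence could enter, reusing numpy and m5_map from the block above. The airmass^0.6 seeing scaling is the standard Kolmogorov result, and the plane-parallel airmass is an approximation that degrades near the horizon:

```python
def airmass(alt_deg):
    """Plane-parallel airmass; adequate away from the horizon."""
    return 1.0 / np.sin(np.radians(alt_deg))

def field_m5(filt, alt_deg, m_sky, fwhm_zenith, cloud_extinction):
    """m5 per field with the altitude dependence made explicit:
    Kolmogorov seeing degrades as X^0.6 and extinction grows as
    km * (X - 1), so low-altitude fields are penalized smoothly
    rather than by a hard airmass cut."""
    X = airmass(alt_deg)
    fwhm_eff = fwhm_zenith * X ** 0.6
    return m5_map(filt, m_sky, fwhm_eff, X, cloud_extinction)
```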
The challenge the scheduler then faces is to thread those SNR values (which change in time!) through the cadence windows and maximize the throughput.
While the SNR at a given RA,Dec changes with time, for the most part it should change slowly. I think it would make sense to schedule in ~30 min blocks to make the problem more tractable.
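A greedy per-block sketch of that idea, reusing field_m5 from above. The `conditions(t)` callable and its attributes are hypothetical stand-ins for whatever telemetry/model interface the scheduler ends up with:

```python
def schedule_night(t_start, t_end, fields, conditions, block_minutes=30):
    """Greedy block-scheduler sketch: score every field/filter once per
    block (conditions change slowly on ~30 min timescales) and observe
    the best-scoring combination in each block."""
    block_sec = block_minutes * 60.0
    n_blocks = int((t_end - t_start) / block_sec)
    plan = []
    for b in range(n_blocks):
        t = t_start + b * block_sec
        c = conditions(t)  # hypothetical per-field condition maps at time t
        best = max(
            ((f, filt) for f in fields for filt in "ugrizy"),
            key=lambda ff: field_m5(ff[1], c.alt_deg[ff[0]],
                                    c.m_sky[ff[0], ff[1]],
                                    c.fwhm_zenith, c.cloud_ext[ff[0]]),
        )
        plan.append((t, *best))
    return plan
```

A look-ahead algorithm would replace the greedy max with a search over future blocks, but the point is that the discretization keeps the branching factor manageable.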
Also, normalizing the SNR metric by the "best expected" SNR for a given pointing and filter could let the scheduler naturally balance the filter selection.
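For instance (a sketch; it assumes the "best expected" m5 per field/filter has been precomputed, e.g. for dark-sky zenith conditions):

```python
def normalized_score(m5_now, m5_best):
    """Fraction of the best achievable SNR for this field/filter.
    SNR at fixed target magnitude scales as 10**(0.4 * m5), so this
    ratio is 1.0 when conditions are as good as they ever get for
    that field/filter."""
    return 10.0 ** (0.4 * (m5_now - m5_best))
```

Because each filter is compared against its own optimum rather than against the others in absolute depth, a filter that is only competitive in dark time can still win when its own conditions are near-optimal.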
I agree that the hard moon and airmass limits are showing up in strange ways in the current sims, and that we need an algorithm that seeks out good regions rather than one that just avoids really bad regions.
Agreed on the potential utility of scheduling blocks. I expect (hope) that discretization will also make look-ahead scheduling algorithms more straightforward.