Are there different kinds of nights that the scheduler should consider?

Please read the background information in the slides at
https://confluence.lsstcorp.org/display/SIM/13+April%2C+2016?focusedCommentId=44861025#comment-44861025

Elahe is investigating potential scheduling algorithms (that is, which kinds of algorithms are useful, from an operations research point of view, for developing scheduler ‘controllers’), and as part of that development she is building prototype cost functions and evaluating the resulting observations. Currently she is investigating ‘approximate optimal control’ solutions. The scheduling algorithms are focused on optimizing observations within a single night.
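To make the single-night framing concrete, here is a toy sketch (definitely not Elahe's actual prototype): a greedy loop that repeatedly asks a cost function for the cheapest next visit, where the "cost" is nothing more than slew distance between made-up field positions.

```python
# Toy sketch only: everything here (the field list, the slew-distance-only
# cost) is an invented stand-in for the real prototype cost functions.
import numpy as np

def plan_night(field_positions_deg, n_visits, start_field=0):
    """Greedily order visits by angular offset from the current field."""
    current = start_field
    plan = [current]
    remaining = set(range(len(field_positions_deg))) - {current}
    for _ in range(n_visits - 1):
        if not remaining:
            break
        # Cost of each candidate = offset from current pointing (toy: Euclidean in deg).
        costs = {f: np.hypot(*(field_positions_deg[f] - field_positions_deg[current]))
                 for f in remaining}
        current = min(costs, key=costs.get)
        plan.append(current)
        remaining.remove(current)
    return plan

# e.g. four fields on a small patch of sky, plan 4 visits:
fields = np.array([[0.0, 0.0], [1.0, 0.5], [5.0, 5.0], [1.2, 0.4]])
print(plan_night(fields, n_visits=4))  # -> [0, 1, 3, 2]
```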

One of the questions in developing a scheduler controller is, should there be different controllers for different kinds of nights? Or are nights similar enough that one controller could be trained to operate on all nights?

Differences in nights might be:

  • length of the night (winter/summer)
  • cloudiness or weather prediction for the night (mostly cloudy/big clouds, slightly cloudy/small clouds, no clouds)
  • lunar phase
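For concreteness, the differences above could be packed into a small "night descriptor" that either a single controller takes as an input, or that is used to bucket nights into classes; the field names below are my own invention, not an agreed interface.

```python
from dataclasses import dataclass

@dataclass
class NightConditions:
    night_length_hours: float   # twilight to twilight (winter vs. summer)
    cloud_fraction: float       # 0.0 = clear, 1.0 = overcast (forecast for the night)
    cloud_scale: str            # "large", "small", or "none": rough cloud morphology
    lunar_phase: float          # 0.0 = new moon, 1.0 = full moon

# e.g. a short, moonlit, partly cloudy summer night:
summer_night = NightConditions(night_length_hours=6.5, cloud_fraction=0.3,
                               cloud_scale="small", lunar_phase=0.9)
```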

A related question would be, if there are different kinds of nights, what is the level of difference between the extremes and what kind of granularity would be expected between the different levels?

I would guess the main difference would be the (expected) length of the night. Sometimes you get a clear winter night, sometimes you project that weather will close things down for all but an hour.

My guess would be that you’d want a single controller, at least for now, since there aren’t any hard boundaries between the various states. As Peter says, most of these things (winter/summer, weather) come down to the expected length of the night. Perhaps you’d want the controller to assess how likely it is to be able to complete a cadence sequence; for example, it might not start on a Deep Drilling field if the weather looks marginal.
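Something like the following toy check is what I have in mind for the Deep Drilling case (the margin and the way "expected usable time" is estimated are pure guesses):

```python
def should_start_deep_drilling(sequence_length_hours,
                               hours_until_twilight,
                               predicted_clear_fraction,
                               margin=1.2):
    """Start the sequence only if the expected usable time comfortably
    exceeds the time the sequence needs (margin > 1 adds head room)."""
    expected_usable_hours = hours_until_twilight * predicted_clear_fraction
    return expected_usable_hours >= margin * sequence_length_hours

# e.g. a 1.5 h sequence, 2 h of night left, 70% predicted clear time:
# 2 * 0.7 = 1.4 h < 1.2 * 1.5 = 1.8 h, so the controller would skip it.
```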

I would be surprised if there was any algorithmic difference between nights, provided all the constraints are captured in that one algorithm: moon avoidance, cloud avoidance, length of time available (because of time of year or engineering constraints), airmass preference, minimising filter changes, etc. I mean there are other constraints, but I assume they don’t apply to a fixed-programme optical survey telescope.
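In code terms, I imagine all of those constraints living in one scoring function, with the hard ones (moon, clouds) applied as masks and the preferences as weighted penalties; the thresholds and weights below are invented purely for illustration.

```python
import numpy as np

def score_targets(airmass, moon_sep_deg, cloud_extinction_mag,
                  filter_change_needed, w_airmass=1.0, w_filter=0.5):
    """All inputs are per-field (or per-healpixel) float arrays; higher score = better."""
    score = -(w_airmass * airmass + w_filter * filter_change_needed)
    # Hard constraints knock a target out entirely rather than just penalizing it.
    score[moon_sep_deg < 30.0] = -np.inf          # moon avoidance
    score[cloud_extinction_mag > 0.5] = -np.inf   # cloud avoidance
    return score

# The controller then simply takes np.argmax(score) at each decision point.
```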

The more interesting question is whether optimizing within only one night is sufficient, but I assume you’ve already thought that one through. The main example of this in PI-type telescopes is when you are seeking to bias observation selection based on past history (e.g. to favour getting a 90%-complete science programme to 100% rather than a 0% one to 10%); an example of this for a survey would be whether you value being contiguous with an area you have already started mapping, or preferring a previously underused filter, or something like that.
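As a concrete (and entirely made-up) version of that completeness bias: a bonus term that grows steeply near completion would do it, for example.

```python
def completeness_bonus(fraction_complete, steepness=3.0):
    """Monotonic bonus in [0, 1] that grows faster as a programme nears completion."""
    return fraction_complete ** steepness

# e.g. 0.9 ** 3 = 0.73 vs 0.1 ** 3 = 0.001: the nearly finished programme
# wins the tie-break by a wide margin.
```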

My inner IR astronomer still quails at the thought of a whole-night schedule, but I keep trying to remind myself that things are different in the optical :slight_smile:

I think the idea is that you optimize the night, but have the compute power to re-optimize when the schedule gets too far out of whack…things go bad in the optical plenty.

Also, the optimization takes into account the past history of the telescope. (This is another area of consideration: how do we summarize the past history in a way that captures the features important for deciding “which field to observe next”, without becoming too complex?)

Having thought about this before, here’s the list of things we probably need to track as part of the history. As usual, I would lobby for these to be healpixel maps rather than stats per field:

  • Total number of visits (per filter)
  • Co-added depth (per filter)
  • Most recent visit mjd (per filter)
  • Number of visits in the current night (per filter)

Some other possibilities:

  • Best seeing map (per filter). Handy to make sure we get good templates everywhere.
  • 2nd most recent visit? (per filter).
  • Maybe a boolean map that records if a visit pair was taken in an optimal time window on a night.

That comes out to 42 maps. Even at ComCam resolution, I don’t think that should be too much of a memory burden. If we want enough information to optimize for SNe light-curves, that could potentially get expensive and complicated.
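To sketch what that might look like in memory (using healpy and an arbitrary nside; none of these names or the resolution are a fixed interface):

```python
import healpy as hp
import numpy as np

NSIDE = 64                      # assumption; the real resolution is TBD
NPIX = hp.nside2npix(NSIDE)
FILTERS = ["u", "g", "r", "i", "z", "y"]

history = {}
for band in FILTERS:
    history[("n_visits", band)] = np.zeros(NPIX, dtype=np.int32)
    history[("coadded_depth", band)] = np.zeros(NPIX)
    history[("last_visit_mjd", band)] = np.full(NPIX, np.nan)
    history[("n_visits_tonight", band)] = np.zeros(NPIX, dtype=np.int16)
    history[("best_seeing_arcsec", band)] = np.full(NPIX, np.inf)
    history[("second_last_visit_mjd", band)] = np.full(NPIX, np.nan)
    history[("pair_in_window_tonight", band)] = np.zeros(NPIX, dtype=bool)

# 7 maps x 6 filters = 42 maps, matching the count above; at nside=64 each
# map is ~49k pixels, so the memory footprint is indeed modest.
```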