SMTN-005: Cloud Statistics via All-Sky Camera

Announcing Sims tech note, SMTN-005:

This is a writeup of an analysis of data collected by the all-sky camera at the LSST site. The main results:

  • The camera is not sensitive enough to detect enough stars at high enough SNR to construct full transparency maps (maps of the extinction magnitude over the visible sky).
  • Difference imaging of consecutive exposures does look like a potential way forward for constructing cloud masks (i.e., which pixels probably have clouds, and which are probably clear).
  • The difference images should also provide a useful way of predicting where clouds will move to in future exposures.
  • The majority of time is either very clear or very cloudy. It looks like only 5-10% of the time could be improved by using dynamic cloud avoidance.
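To make the cloud-mask idea concrete, here's a minimal sketch of what difference imaging two consecutive all-sky exposures might look like. The `n_sigma` threshold and the MAD-based noise estimate are my own illustrative choices, not details from the tech note:

```python
import numpy as np

def cloud_mask(frame_now, frame_prev, n_sigma=3.0):
    """Flag pixels whose brightness changed between consecutive
    exposures, on the theory that moving clouds change the sky
    background while clear sky stays stable frame to frame."""
    diff = frame_now.astype(float) - frame_prev.astype(float)
    # Robust noise estimate of the difference image via the
    # median absolute deviation (MAD), scaled to a Gaussian sigma.
    mad = np.median(np.abs(diff - np.median(diff)))
    sigma = 1.4826 * mad
    if sigma == 0:
        return np.zeros_like(diff, dtype=bool)
    return np.abs(diff) > n_sigma * sigma
```

Shifting the resulting mask by a cloud-motion vector estimated from a pair of difference images would give the "predict where clouds will move" piece.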

Tech notes are living documents, so feel free to make comments or request edits (or clone the repo and fix things on your own…).


Cool work!

Have you thought about forward-modeling the images? I’m guessing that part of the reason that you aren’t reaching the Poisson limit is that you have large pixels and you aren’t accounting for all of the light coming from the fainter stars in the pixels. Once you have a decent WCS from the ~2000 stars you could more accurately model the entire image using a slightly deeper stellar catalog and “smooth” image of the background unresolved light OR just a higher-resolution image of the entire sky that you resample (and “bin”) onto your WCS. The Mellinger all-sky map might work well for the latter option. You could then compare your model of the sky to the observed one and figure out the transparency. Of course, there might be other problems with this method that I’m not thinking of. Just thought I’d mention it.
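A toy version of the resample-and-compare idea, to make it concrete. A real implementation would resample the high-resolution sky map (e.g. Mellinger) through the fitted WCS; simple integer block-binning stands in for that step here, and the function names are mine:

```python
import numpy as np

def bin_model(highres, factor):
    """'Bin' a high-resolution model of the sky onto the coarse
    all-sky camera grid by summing factor x factor pixel blocks.
    Stand-in for a proper WCS-based resampling."""
    h, w = highres.shape
    return highres.reshape(h // factor, factor,
                           w // factor, factor).sum(axis=(1, 3))

def transparency_map(observed, model):
    """Per-pixel transparency estimate: ratio of observed flux to
    the modeled clear-sky flux (1.0 = clear, < 1 = extinction)."""
    return observed / np.clip(model, 1e-9, None)
```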


I really like @nidever's suggestion.

Hi @nidever,

I think faint stars are more likely to be an issue in the galactic plane, and I spot-checked some isolated stars and saw the same scatter. It's been a while since photometry 101, but I think stars below the detection threshold should be an issue for all detectors at all depths? And we don't expect to have to forward-model LSST images to get pretty close to the Poisson limit for aperture photometry?
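For reference, the Poisson limit being discussed is just the standard CCD signal-to-noise equation for aperture photometry; unresolved faint stars would effectively inflate the per-pixel sky term. A quick sketch (ignoring dark current; all quantities in electrons):

```python
import numpy as np

def aperture_snr(star_counts, sky_counts_per_pix, npix, read_noise=0.0):
    """Poisson-limit SNR for simple aperture photometry: source
    counts over the quadrature sum of source shot noise, sky shot
    noise across the aperture, and read noise."""
    noise = np.sqrt(star_counts
                    + npix * (sky_counts_per_pix + read_noise ** 2))
    return star_counts / noise
```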

There are lots of suspects for the extra noise: I don't do any flat-fielding, and I don't know the details of the CMOS + Bayer filter array (are there any gaps between the pixels? that would do it).

The main point is that in photometric dark-time conditions, we can detect 2,000 stars. If there's a layer of clouds with ~1 mag of extinction, there are not going to be anywhere near enough detectable sources to make a cloud extinction map, because the luminosity function is sharply rising even among the brightest stars. If we decide we need that, the first thing we probably need is more aperture. Then we can start hunting noise sources; otherwise it's premature optimization.
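The rising-luminosity-function point can be quantified with a back-of-the-envelope scaling. Assuming cumulative star counts go roughly as N(<m) ∝ 10^(k·m), 1 mag of extinction shifts the effective limiting magnitude down by 1 and cuts the detectable star count by a factor 10^k. The slope k = 0.5 dex/mag below is an illustrative value, not a number from the tech note:

```python
def stars_after_extinction(n_clear, extinction_mag, k=0.5):
    """Rough count of detectable stars after uniform cloud
    extinction, for a power-law cumulative luminosity function
    N(<m) ~ 10**(k*m)."""
    return n_clear * 10 ** (-k * extinction_mag)
```

With k = 0.5, the ~2,000 clear-sky stars would drop to roughly 600 under 1 mag of cloud, spread over the whole sky.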