CameraGeom vs. WCS for Optical Distortions

@jmeyers314 and I were discussing today how to best get various coordinate transforms for use in PSF modeling, and it occurred to me that with the composable Transform objects we have now, we could create a per-Visit CameraGeom system that uses the optical distortion fit as part of the WCS (which should be much more appropriate for that Visit). PSF modeling code needs to know about optical distortions, but it should not have to know about actual positions on the sky, so it’d be very nice if it could use CameraGeom APIs only and not have to worry about the WCS directly.
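To make the composition idea concrete, here is a rough sketch of what a per-visit pixels → field-angle chain could look like. The `Transform` class, the coefficients, and the mapping functions below are all hypothetical stand-ins for illustration, not the actual afw API:

```python
# Hypothetical sketch: compose a static pixels -> focal-plane transform with a
# per-visit optical distortion (focal plane -> field angle). Not the afw API.

class Transform:
    """A 2-D point transform defined by a callable."""
    def __init__(self, func):
        self._func = func

    def __call__(self, point):
        return self._func(point)

    def then(self, other):
        """Compose: apply self first, then other."""
        return Transform(lambda p: other(self(p)))

# Static per-chip mapping: pixels -> focal plane (e.g. mm), fixed across visits.
pixels_to_focal = Transform(lambda p: (0.01 * p[0], 0.01 * p[1]))

def make_distortion(r_coeff):
    """Per-visit optical distortion: a simple radial polynomial stand-in."""
    def distort(p):
        x, y = p
        scale = 1.0 + r_coeff * (x * x + y * y)
        return (x * scale, y * scale)
    return Transform(distort)

# A per-visit CameraGeom would swap in the distortion fitted for this visit:
pixels_to_field_angle = pixels_to_focal.then(make_distortion(1e-5))
```

The point is that PSF modeling code would only ever see `pixels_to_field_angle`; it never needs to know where the boresight landed on the sky.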

I’m interested in getting thoughts both on the structural challenges that would be involved in making this happen (@rowen, @KSK?) and whether it’s algorithmically advisable (seems like it is to me; @rhl?).

I think the main structural challenges are:

  • We need a distortion transform that covers the full focal plane, not a single chip. meas_mosaic produces this, and I believe jointcal will too, but for at least the former it would take some work to extract it, as all we save are per-chip WCSs. (Note that I do not think it is necessary to obtain the WCS via a model that fits explicitly for the optical distortion, even though we plan to do that eventually, as we can isolate it from the full WCS by assuming pixels<->focal plane is static and identifying the origin on the sky by using the WCS to transform the focal plane origin).
  • We currently have one static Camera object for each mapper. We’ve all long planned to at least version these in the way master calibrations are versioned, but updating the optical distortions would involve having a version for every visit. We are currently planning to persist Detector objects in Exposure objects in Gen3 (and we could implement this in Gen2 before Gen3 rolls out); do these currently store any transform information themselves that could be updated when we write calexps?
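On the first point, the isolation trick can be sketched in a few lines. All names below are hypothetical, the WCS is a toy stand-in, and the field-angle offsets use a flat-sky approximation where a real implementation would use a tangent-plane projection:

```python
# Toy sketch of isolating the optical distortion from a fitted full WCS,
# assuming pixels <-> focal plane is static. Names and math are illustrative.

def focal_to_pixels(p):
    """Inverse of the (static) pixels -> focal mapping; here a simple scale."""
    return (p[0] / 0.01, p[1] / 0.01)

def wcs_pixels_to_sky(p):
    """Stand-in for a per-visit fitted WCS: pixels -> (ra, dec) in degrees."""
    x, y = 0.01 * p[0], 0.01 * p[1]
    scale = 1.0 + 1e-5 * (x * x + y * y)   # the distortion we want to recover
    return (30.0 + x * scale, 10.0 + y * scale)

def make_focal_to_field_angle(wcs, focal_to_pix):
    """Anchor at the sky position of the focal-plane origin; the residual
    mapping relative to that origin is the optical distortion model."""
    ra0, dec0 = wcs(focal_to_pix((0.0, 0.0)))
    def focal_to_field(p):
        ra, dec = wcs(focal_to_pix(p))
        return (ra - ra0, dec - dec0)      # flat-sky offsets, for illustration
    return focal_to_field

focal_to_field_angle = make_focal_to_field_angle(wcs_pixels_to_sky,
                                                 focal_to_pixels)
```

Under the static pixels ↔ focal assumption, this recovers the distortion without the fitter ever parameterizing it explicitly.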

Jointcal’s persisted model (saved per-chip) includes both PIXEL->FOCAL and FOCAL->IWC components, so one could simply query each chip for its PIXEL->FOCAL transform to build the model you want.
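Assembling the focal-plane-wide model from those per-chip pieces could look roughly like this. The chip names, offsets, and accessors are invented for the sketch, not jointcal's actual persistence format:

```python
# Hypothetical sketch: build a focal-plane-wide mapping from per-chip
# PIXEL->FOCAL transforms, as one might from a jointcal-style persisted model.

# Stand-in per-chip placements (focal-plane offsets, e.g. in mm).
chip_offsets = {"S1": (0.0, 0.0), "S2": (40.96, 0.0)}

def make_pixel_to_focal(offset):
    """PIXEL->FOCAL for one chip: a plate scale plus the chip's offset."""
    dx, dy = offset
    return lambda p: (0.01 * p[0] + dx, 0.01 * p[1] + dy)

pixel_to_focal = {name: make_pixel_to_focal(off)
                  for name, off in chip_offsets.items()}

def to_focal(chip, point):
    """Map a pixel position on a named chip into focal-plane coordinates."""
    return pixel_to_focal[chip](point)
```

A single per-visit FOCAL->IWC (or FOCAL->FIELD_ANGLE) transform can then be composed on top of `to_focal` for every chip at once.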

Note that jointcal’s PIXEL->FOCAL transform is fixed across all visits in that tract, under the assumption that the chips don’t move around. We can look into relaxing that in the future.

For DECam (and I assume LSST) the chips move around every time there’s a camera warm-up (they have to be allowed to move or you get cracked CCDs). See, e.g., Figure 13 of Bernstein et al. (2017). So this is something that will have to be relaxed, though I have no statement on the priority of making that change.

> For DECam (and I assume LSST) the chips move around every time there’s a camera warm-up (they have to be allowed to move or you get cracked CCDs).

This is the kind of behavior that versioning the geometry as a master calibration should handle without Jointcal having to do anything.

Well, anything besides providing the fits that the new geometry model is derived from.

Detector objects do store a TransformMap that transforms from pixels to focal plane, but not focal plane to field angle. That could be added, but it means a lot of duplication with the transform that is in lsst.afw.cameraGeom.Camera's TransformMap. It would also require a bit of work in how TransformMap is constructed, but in theory that should be easy.
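For what it's worth, the shape of the addition might look like the following. This is a minimal pure-Python stand-in for a transform map, with the coordinate-system names written as plain strings; it is not the afw TransformMap API:

```python
# Sketch of a per-detector transform map gaining a FOCAL_PLANE -> FIELD_ANGLE
# entry. The map structure and system names are illustrative, not afw's.

PIXELS, FOCAL_PLANE, FIELD_ANGLE = "PIXELS", "FOCAL_PLANE", "FIELD_ANGLE"

class TransformMap:
    """Registry of point transforms keyed by (from_system, to_system)."""
    def __init__(self):
        self._transforms = {}

    def add(self, from_sys, to_sys, func):
        self._transforms[(from_sys, to_sys)] = func

    def transform(self, point, from_sys, to_sys):
        return self._transforms[(from_sys, to_sys)](point)

detector_map = TransformMap()
detector_map.add(PIXELS, FOCAL_PLANE, lambda p: (0.01 * p[0], 0.01 * p[1]))

# The proposed new entry: per-visit optical distortion, updatable at calexp
# write time. Today this information lives only in the Camera-level map.
def focal_to_field(p):
    scale = 1.0 + 1e-5 * (p[0] ** 2 + p[1] ** 2)
    return (p[0] * scale, p[1] * scale)

detector_map.add(FOCAL_PLANE, FIELD_ANGLE, focal_to_field)
```

The duplication concern is visible here: every Detector carrying its own FOCAL_PLANE->FIELD_ANGLE entry repeats what the Camera's map already holds, so some ownership convention would be needed.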