@jmeyers314 and I were discussing today how best to get various coordinate transforms for use in PSF modeling, and it occurred to me that with the composable `Transform` objects we have now, we could create a per-Visit `CameraGeom` system that uses the optical distortion fit as part of the WCS (which should be much more appropriate for that Visit). PSF modeling code needs to know about optical distortions, but it should not have to know about actual positions on the sky, so it would be very nice if it could use `CameraGeom` APIs only and not have to worry about the WCS directly.
I’m interested in getting thoughts both on the structural challenges that would be involved in making this happen (@rowen, @KSK?) and whether it’s algorithmically advisable (seems like it is to me; @rhl?).
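To make the idea concrete, here is a minimal sketch (plain Python, not the real afw API; all names are hypothetical) of what PSF modeling code would consume: a static pixels-to-focal-plane transform composed with a per-visit optical-distortion transform, with no sky coordinates anywhere in the chain.

```python
# Hypothetical sketch of a composable per-visit transform chain.
# None of these names come from the real CameraGeom API.

def compose(f, g):
    """Return the transform xy -> g(f(xy))."""
    return lambda xy: g(f(xy))

def make_pixels_to_focal_plane(offset):
    """Static per-chip transform: pixel coords -> focal-plane mm."""
    dx, dy = offset
    return lambda xy: (xy[0] * 0.01 + dx, xy[1] * 0.01 + dy)  # toy 10 um pixels

def make_visit_distortion(r2_coeff):
    """Per-visit radial optical distortion, fit as part of that visit's WCS."""
    def distort(xy):
        x, y = xy
        scale = 1.0 + r2_coeff * (x * x + y * y)
        return (x * scale, y * scale)
    return distort

# PSF modeling code only ever sees pixels -> distorted focal plane;
# the sky half of the WCS never appears in this chain.
pix_to_fp = make_pixels_to_focal_plane(offset=(-30.0, 10.0))
visit_distortion = make_visit_distortion(r2_coeff=1e-6)
pix_to_field = compose(pix_to_fp, visit_distortion)

print(pix_to_field((2000.0, 1500.0)))
```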
I think the main structural challenges are:
- We need a distortion transform that covers the full focal plane, not a single chip. meas_mosaic produces this, and I believe jointcal will too, but at least for the former it would take some work to extract, since all we save are per-chip WCSs. (Note that I do not think the WCS needs to come from a model that fits explicitly for the optical distortion, even though we plan to do that eventually: we can isolate the distortion from the full WCS by assuming the pixels<->focal plane mapping is static and identifying the origin on the sky by using the WCS to transform the focal-plane origin.)
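The isolation trick in that parenthetical can be sketched as follows. This is a toy flat-sky model, not real WCS code, and every name here is illustrative: treating pixels<->focal plane as static lets us peel the distortion out of a fitted WCS by anchoring the sky origin at the image of the focal-plane origin.

```python
# Hedged sketch: recover field-angle offsets (distortion included) from a
# full WCS, without an explicit distortion model. Flat-sky approximation;
# all names and constants are hypothetical.

PLATE_SCALE = 5.0e-4    # toy "degrees per mm"
RA0, DEC0 = 150.0, 2.0  # toy boresight

def pix_to_fp(xy):
    """Static pixel -> focal-plane (mm) transform for one chip."""
    return (0.01 * xy[0] - 30.0, 0.01 * xy[1] + 10.0)

def fp_to_pix(uv):
    """Inverse of pix_to_fp (known because the mapping is static)."""
    return (100.0 * (uv[0] + 30.0), 100.0 * (uv[1] - 10.0))

def optical_distortion(uv):
    """The per-visit distortion we would like to recover."""
    s = 1.0 + 1e-6 * (uv[0] ** 2 + uv[1] ** 2)
    return (uv[0] * s, uv[1] * s)

def full_wcs(xy):
    """Fitted pixels -> sky WCS; the distortion is baked inside it."""
    u, v = optical_distortion(pix_to_fp(xy))
    return (RA0 + PLATE_SCALE * u, DEC0 + PLATE_SCALE * v)

def fp_to_field(uv):
    """Focal plane -> field-angle offsets, isolated from the full WCS.

    Map focal plane -> pixels -> sky through the WCS, then subtract the
    sky position of the focal-plane origin.
    """
    ra, dec = full_wcs(fp_to_pix(uv))
    ra0, dec0 = full_wcs(fp_to_pix((0.0, 0.0)))
    return (ra - ra0, dec - dec0)

# The recovered field angles match the distorted focal-plane position
# up to the plate scale, with no explicit distortion fit needed.
print(fp_to_field((-10.0, 25.0)))
```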
- We currently have one static `Camera` object for each mapper. We have all long planned to at least version these the way master calibrations are versioned, but updating the optical distortions would mean having a version for every visit. We are currently planning to persist `Exposure` objects in Gen3 (and we could implement this in Gen2 before Gen3 rolls out); do these currently store any transform information themselves that could be updated when we write