Pursuant to DM-4153, this is a request for information about people’s expectations for future World Coordinate System (WCS) needs in the stack. In particular, I would like feedback from @rhl, @price, @jbosch, @timj, @PierreAstier, @boutigny, @rowen, and anyone else who has opinions on what sort of WCS transformations will be needed or desired to help us achieve our science goals.
Note that these requirements are wholly independent of any current WCS implementation (the stack’s, wcslib, GWCS, AST, etc.), and also independent of any given representation (e.g. FITS). We need to determine what we want in the future separately from what is available now. I am preparing a document describing the current WCS options.
The WCS requirements are also independent of our representation of celestial coordinate systems (Coord objects in the stack). We will very likely do all of our celestial coordinate work in ICRS internally, which should vastly simplify that problem.
For reference, DMTN-005 summarizes the current WCS usage in the stack. I will continue fleshing it out with more details, but I believe it describes everything in the stack that currently uses WCS.
Please include in your requirements your thoughts on how transformations should be composed, whether we should include color terms or pixel distortion effects, and anything else you deem relevant. This information will be used in the crafting of a WCS requirements document to guide us going forward.
Which transformations are required is a separate question from the requirement that we be able to combine transformations in serial and in parallel.
wcslib, FITS-WCS, and the current LSST WCS code are not set up to handle arbitrary combinations of transformations. AST (and GWCS) were designed from scratch to allow transformations to be combined.
What I’m really saying is that there are requirements on the architecture of the WCS implementation and there are requirements for individual transformations that need to be included.
Anything that affects how we go from pixel coordinates to sky coordinates should be available as a pluggable transformation. Is there a requirement to be able to attach a WCS to an untransformed mosaic of CCDs from the focal plane? (This would require distinct changes in the transformation solution across the image.)
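As a very rough illustration of what that would imply, here is a sketch (in Python, with made-up names, and plain dictionaries standing in for real camera-geometry classes) of a mosaic pixel-to-sky mapping that dispatches to a per-CCD solution, so the transformation changes discontinuously at CCD boundaries:

```python
# Sketch only: 'ccd_boxes' and 'ccd_wcs' are hypothetical stand-ins for
# whatever camera-geometry and per-CCD WCS objects the stack provides.
def make_mosaic_pixel_to_sky(ccd_boxes, ccd_wcs):
    """ccd_boxes: {ccd_id: (x0, y0, x1, y1)} in mosaic pixel coordinates.
    ccd_wcs: {ccd_id: callable (x, y) -> (ra, dec)} in CCD-local pixels."""
    def pixel_to_sky(x, y):
        for ccd_id, (x0, y0, x1, y1) in ccd_boxes.items():
            if x0 <= x < x1 and y0 <= y < y1:
                # Shift into CCD-local pixels, then use that CCD's solution;
                # the mapping is deliberately discontinuous across chip gaps.
                return ccd_wcs[ccd_id](x - x0, y - y0)
        raise ValueError(f"({x}, {y}) falls in a gap between CCDs")
    return pixel_to_sky
```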
I assume everyone who cares has already read it, but here is the AST description paper, which discusses the general problem of combining mappings, inverting mappings, and simplifying mappings.
Thanks, Tim. Those are all good points. I should have been clearer that this is a request both for the specific transformations that are needed and for requirements on the architecture.
This will probably be the first of several thoughts on this, as I’m sure I’ll forget some things now. And some may be design opinions that I’ve not yet realized aren’t actually requirements, so feel free to push back if it sounds like I’m being unreasonable.
- I think our WCS system does need to include pixel distortion effects, but only those that are frozen into the chip for at least the duration of an exposure (i.e. not brighter-fatter). I expect these to be our only really exotic transformations; everything else we need is likely to be just a polynomial, a spline, or some sort of standard trigonometric projection. Well, some variety of fancy full-sky pixelization is a possibility as well.
- The transforms we use to define mappings between coordinate systems on the camera must be the same kind of transforms we use when mapping images to the sky, or be so interoperable that there is no overhead in using them together.
- Compositions of transformations must be possible, and should only be simplified mathematically when this can be done exactly (well, up to machine precision). Fast approximations to composed transformations should also be available, and should provide some guarantees as to their accuracy.
- I think we will at least sometimes want a transform class that knows its coordinate-system endpoints (either by owning an object that describes that coordinate system, or maybe just a string or enum label), along with some way to guarantee that a chain of composed transforms is valid given those coordinate-system labels (see the sketch after this list). I don't think this is necessary for all transform use cases, and if it's heavyweight, we may also want versions of the transforms that don't carry these labels.
- I think we want to distinguish between spherical coordinate systems and Cartesian coordinate systems, at least so we can use the appropriate (spherical vs. Cartesian) geometry types as inputs and outputs.
- I do not think our WCS system needs to be aware of color or wavelength. We'll have to consider chromatic effects in the PSF, but if we define our PSFs with an offset centroid, I'm quite convinced we won't also need a chromatic WCS, and I think that's a huge win for simplicity.
- The interface for transforms must provide a way to invert them, but I think it's acceptable if some specific transforms are not invertible (if we don't have a use case for the particular inverse transform). It could also be very useful to allow a pair of one-way transforms determined empirically to be inverses to be combined into a single bidirectional transform.
- We need to be able to efficiently obtain the exact transform at a given point (in either input or output coordinates), mostly to support warping. I can't imagine doing this efficiently enough (especially in a multithreaded context) while going through any layers of Python, even if there's a C++ front-end over them. However, this requirement may only apply to certain frequently-used transforms, so having some pure-Python transform implementations for other use cases may be fine.
- We need to be able to efficiently obtain a local linear approximation to the transform at a given point (in either input or output coordinates). This will almost certainly need to be done in a multithreaded context, so I'm also skeptical of having Python-implemented transforms here, but it's not out of the question.
- We need to be able to persist WCS objects as components of other arbitrary objects (not just Exposure, but certainly Psf, and probably some other things too).
- We need to be able to persist groups of related composed transforms efficiently (e.g. WCSs of the same CCD on different visits, which share the same pixel distortion field), which I think means persisting (and unpersisting) heavyweight shared components only once. I think we also need to be able to persist non-WCS objects that hold some shared WCS components in a similar manner (by saving at least some heavyweight WCS components only once, and having only one in-memory representation).
- Some transform objects need to expose some sort of parametrization, and provide the ability to compute at least first derivatives with respect to those parameters at a given set of points. I actually don't think this is the same transform class we want to use everywhere else, but we want them to be closely related and highly interoperable.
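To make the composition and endpoint-labelling points above concrete, here is a minimal sketch; the class and method names (Transform, then, from_sys, to_sys) are assumptions for illustration, not a proposal for the actual API:

```python
# Minimal sketch: composable transforms that carry coordinate-system labels
# and refuse to chain when the endpoints do not match.
from dataclasses import dataclass
from typing import Callable, Tuple

Point = Tuple[float, float]

@dataclass
class Transform:
    from_sys: str                      # e.g. "PIXELS"
    to_sys: str                        # e.g. "FOCAL_PLANE" or "ICRS"
    forward: Callable[[Point], Point]  # the actual mapping

    def apply(self, point: Point) -> Point:
        return self.forward(point)

    def then(self, other: "Transform") -> "Transform":
        """Compose self followed by other, checking endpoint compatibility."""
        if self.to_sys != other.from_sys:
            raise ValueError(
                f"cannot chain {self.from_sys}->{self.to_sys} "
                f"with {other.from_sys}->{other.to_sys}")
        return Transform(self.from_sys, other.to_sys,
                         lambda p: other.forward(self.forward(p)))

# Example: pixels -> focal plane -> sky composes; the reverse order is rejected.
pix_to_fp = Transform("PIXELS", "FOCAL_PLANE", lambda p: (p[0] * 0.01, p[1] * 0.01))
fp_to_sky = Transform("FOCAL_PLANE", "ICRS", lambda p: (150.0 + p[0], 2.0 + p[1]))
pix_to_sky = pix_to_fp.then(fp_to_sky)
# fp_to_sky.then(pix_to_fp) would raise ValueError
```

A heavier-weight variant could own real coordinate-system objects instead of strings, and a label-free lightweight variant would simply skip the check.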
Jim has apparently adopted a broader scope than just WCS transforms by describing cases that are not directly related to any sidereal coordinate system. I would tend to follow a similar route, advocating that the needed framework embed transforms in a general sense, with WCSs representing some sort of specialization, and that it provide persistence capabilities for all of them.
Composition should be allowed, and the “collapse” of a composition into a single transformation should only occur when it is both exact and requested.
Regarding inverses, we cannot require that all transforms have exact inverses, but they should provide an inverse whenever possible. Approximate mapped inverses should become part of the toolbox at some point.
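As one illustration of what such an approximate inverse could look like, here is a purely numerical sketch (not a proposed interface) that inverts an arbitrary 2-D forward transform near a starting guess by Newton iteration with a finite-difference Jacobian:

```python
import numpy as np

def approximate_inverse(forward, start, target, tol=1e-10, max_iter=20):
    """Find (x, y) such that forward(x, y) ~= target, starting from `start`.
    `forward` is any callable (x, y) -> (u, v); illustrative only."""
    p = np.asarray(start, dtype=float)
    t = np.asarray(target, dtype=float)
    for _ in range(max_iter):
        f = np.asarray(forward(*p))
        if np.max(np.abs(f - t)) < tol:
            break
        # Finite-difference Jacobian of the forward transform at p.
        eps = 1e-6
        jac = np.empty((2, 2))
        for j in range(2):
            dp = np.zeros(2)
            dp[j] = eps
            jac[:, j] = (np.asarray(forward(*(p + dp))) - f) / eps
        # Newton step toward the target output position.
        p = p - np.linalg.solve(jac, f - t)
    return tuple(p)
```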
As Jim points out, some parametrized transforms are needed and should expose their derivatives w.r.t. their parameters. “meas_simastrom” is not the only place where we need that.
My guess is that reaching a consensus on what the interfaces of the top abstract classes should look like will be relatively easy. I anticipate that persistence (designed so it can be easily extended) will not be straightforward, nor will designing the mutable/immutable twin classes that Jim alludes to in his last bullet point.
Warping has the requirement that we can quickly transform a pixel position in one image to the pixel pointing to the same sky coordinate in another image (e.g. pixel_im1 -> sky -> pixel_im2). It would be great if we could somehow combine the two WCSs of the two images to make an efficient pixel-to-pixel transform that omits the round trip through sky coordinates.
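A rough sketch of that chain (function names are placeholders, not the stack API): compose the two per-image mappings exactly, then optionally fit a fast affine approximation over the region being warped, with the worst-case residual serving as the accuracy guarantee mentioned earlier:

```python
import numpy as np

def make_pixel_to_pixel(pixel_to_sky_1, sky_to_pixel_2):
    """Exact composition: image-1 pixels -> sky -> image-2 pixels."""
    def pixel1_to_pixel2(x, y):
        ra, dec = pixel_to_sky_1(x, y)
        return sky_to_pixel_2(ra, dec)
    return pixel1_to_pixel2

def fit_affine_approximation(pixel1_to_pixel2, bbox, n=5):
    """Least-squares affine fit to the composed transform over a bounding
    box (x0, y0, x1, y1), returning the 3x2 affine matrix and the maximum
    absolute residual in output pixels."""
    x0, y0, x1, y1 = bbox
    xs, ys = np.meshgrid(np.linspace(x0, x1, n), np.linspace(y0, y1, n))
    src = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    dst = np.array([pixel1_to_pixel2(x, y) for x, y in src[:, :2]])
    coeffs, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return coeffs, np.max(np.abs(src @ coeffs - dst))
```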
ISR has the requirement to easily combine the pupil->focal plane transform in the camera geometry and the TAN WCS provided in the raw data to produce the WCS of the post-ISR CCD: our best guess as to the true WCS, before refining that with our astrometric solver.
Pragmatically I think we want the following methods:
- isDistorted: return False if this is a pure TAN WCS. We could probably manage without this, but it has proven handy so far.
- getTanWcs(point_or_coord): return the local TAN WCS at a given point, specified as a pixel position or sky position, whichever you prefer. I doubt we need both, since the conversion between these is trivial (see the sketch below).
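To show the kind of computation getTanWcs implies, here is an illustrative numerical sketch (the function name and signature are assumptions): estimate the local affine (TAN-like) approximation at a pixel position by central finite differences on whatever pixel-to-sky mapping we end up with:

```python
import numpy as np

def local_tan_approximation(pixel_to_sky, x, y, delta=1.0):
    """Return ((ra0, dec0), cd) where cd is the 2x2 matrix of
    d(ra*cos(dec), dec)/d(x, y) in degrees per pixel at pixel (x, y).
    `pixel_to_sky` is any callable (x, y) -> (ra, dec) in degrees.
    RA wrap-around at 0/360 is ignored for simplicity."""
    ra0, dec0 = pixel_to_sky(x, y)
    cosdec = np.cos(np.radians(dec0))
    cd = np.empty((2, 2))
    for j, (dx, dy) in enumerate([(delta, 0.0), (0.0, delta)]):
        ra_p, dec_p = pixel_to_sky(x + dx, y + dy)
        ra_m, dec_m = pixel_to_sky(x - dx, y - dy)
        cd[0, j] = (ra_p - ra_m) * cosdec / (2.0 * delta)
        cd[1, j] = (dec_p - dec_m) / (2.0 * delta)
    return (ra0, dec0), cd
```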
I would also like some nice methods or factory functions for easily creating simple WCSs. Right now we have a number of such functions scattered about, including some buried in unit tests.
I suggest we try to clean up the many variants of skyToPixel and pixelToSky. The current API is cluttered.
I am not convinced we need to be able to shift a WCS; that feature seems to be a holdover. It may be used, at present, when we return a subimage, but I don’t think it is necessary to do so.
Thank you for all the replies so far. A further question would be what analytic form future distortion models and transformations might take.
Are Nth order polynomials and/or Chebyshev polynomials enough, or will we likely want more complicated models? If so, what form of models do people foresee needing?
Polynomials and Chebyshevs will definitely not be enough for sensor effects.
I expect some fairly complex (read: arbitrary) functional forms for edge roll-offs and potentially tree-rings, and perhaps something pixelized with bilinear or spline interpolation for pixel area variations.
I think we’ll also need to support fairly arbitrary (but mostly radial) functions for optical distortion.
I guess the bottom line is that it’s a requirement that we be able to add new functional forms in the future, rather than spec out all possible transforms right now.
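One way to meet that extensibility requirement would be a registry of named transform factories, so a persisted transform records a form name plus parameters instead of drawing from a fixed enumeration; this is only a sketch with made-up names:

```python
# Sketch of a pluggable registry of functional forms; all names hypothetical.
TRANSFORM_FACTORIES = {}

def register_transform(name):
    def decorator(factory):
        TRANSFORM_FACTORIES[name] = factory
        return factory
    return decorator

@register_transform("radial_polynomial")
def make_radial_polynomial(coeffs):
    """Simple radial optical-distortion model: r -> r * (1 + c1*r^2 + c2*r^4 + ...)."""
    def apply(x, y):
        r2 = x * x + y * y
        scale = 1.0 + sum(c * r2 ** (i + 1) for i, c in enumerate(coeffs))
        return x * scale, y * scale
    return apply

# New forms (tree rings, edge roll-off, pixel-area maps, ...) can be registered
# later without touching existing code or the persistence format.
distort = TRANSFORM_FACTORIES["radial_polynomial"]([1e-9, 0.0])
print(distort(1000.0, 0.0))  # -> (1001.0, 0.0)
```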
I withdraw this suggestion unless it is trivial to implement. If you want a locally flat approximation (e.g. in order to plot or measure the amount of distortion) then call getTanWcs.