Debugging astrometry: what tools do you use? What tools would you like?

For my entire career in astronomy, the registration of images to a reference coordinate system, or even just to each other, has consumed far more time than I would like to admit.

Astrometric fitting continues to be a challenge in the modern era for reasons both stupid and interesting.

I thus ask the collective wisdom of this community to contribute thoughts, suggestions, and requests.

When you are debugging a failed astrometric solution:

  1. How did you know it failed?
  2. What approaches do you take?
  3. What do you look at?
  4. How do you visualize what’s going wrong?
  5. How do you verify later that you’ve fixed things?
  6. What additional challenges do multi-CCD focal planes add, both to the debugging and to the complexity of the solutions?

What tools do you wish you had to make any of these steps easier?

Thanks for bringing this up. I don’t have a ton of experience doing this myself, but here are some thoughts, in no particular order:

  • Once a problem has been identified on a small number of frames, visual inspection of the image with sources and reference objects overlaid should almost always be the first thing we look at (a minimal overlay sketch follows this list). While we have tools for this (I use ds9+xpa+ssh tunnels), the overhead in using them for the first time is considerable.

  • We also lack tools for general interactive analysis of joined tables. That’s not just a lack of an N-way spatial matcher, though that’s a big missing piece; we also don’t have good interfaces for dealing with the matches we’ve already determined (i.e. the afw.table.Match objects are extremely clunky).

  • Looking at RMS scatter is useful, but outliers often matter, so more robust statistics and explicit outlier fractions are important too. Histogram plots that overlay a Gaussian corresponding to a robust RMS, while somehow highlighting the points excluded from that RMS, are very useful (a histogram sketch follows this list).

  • It’s also important to visualize the problem spatially, in CCD, focal-plane, or sky-patch plots that show the average scatter in a region via a colormap and/or whiskers that demonstrate bulk offsets (a whisker-plot sketch follows this list).

  • It’s often very hard to distinguish problems in our code from problems in the reference catalog, so it can be very useful to have multiple independent reference catalogs to test with, even if we expect one to be generally best.
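
To make the first bullet concrete, here is a minimal sketch of doing the overlay with matplotlib and astropy instead of ds9. The file names and column names are hypothetical; it assumes an image with a valid WCS in its header and two tables with ra/dec columns in degrees:

```python
# Minimal sketch: overlay detected sources and reference stars on an image.
# Assumes "image.fits" has a valid WCS, and "sources.fits" / "refs.fits" are
# tables with "ra"/"dec" columns in degrees (all names hypothetical).
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.table import Table
from astropy.wcs import WCS
from astropy.visualization import ZScaleInterval

hdu = fits.open("image.fits")[0]
wcs = WCS(hdu.header)
sources = Table.read("sources.fits")
refs = Table.read("refs.fits")

vmin, vmax = ZScaleInterval().get_limits(hdu.data)
ax = plt.subplot(projection=wcs)
ax.imshow(hdu.data, vmin=vmin, vmax=vmax, cmap="gray", origin="lower")

# Plot both catalogs in sky coordinates; a good solution shows the two
# marker sets landing on top of each other.
ax.scatter(sources["ra"], sources["dec"], transform=ax.get_transform("icrs"),
           s=80, facecolors="none", edgecolors="lime", label="detected")
ax.scatter(refs["ra"], refs["dec"], transform=ax.get_transform("icrs"),
           s=120, marker="s", facecolors="none", edgecolors="red",
           label="reference")
ax.legend()
plt.show()
```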
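
For the robust-RMS histogram, here is one possible sketch using astropy's sigma clipping; the residuals array is synthetic data standing in for per-source astrometric offsets:

```python
# Sketch of the histogram described above: residuals with a Gaussian drawn
# from a sigma-clipped (robust) RMS, and the clipped outliers highlighted.
# "residuals" is fake data standing in for measured - reference offsets.
import numpy as np
import matplotlib.pyplot as plt
from astropy.stats import sigma_clip

rng = np.random.default_rng(42)
residuals = np.concatenate([rng.normal(0, 0.02, 900),   # well-behaved core
                            rng.normal(0, 0.3, 100)])   # contaminating outliers

clipped = sigma_clip(residuals, sigma=3, maxiters=5)
robust_rms = np.std(clipped.compressed())
outlier_fraction = clipped.mask.mean()

bins = np.linspace(-0.2, 0.2, 81)
plt.hist(residuals[~clipped.mask], bins=bins, color="C0", label="kept")
plt.hist(residuals[clipped.mask], bins=bins, color="C3",
         label="clipped outliers")

# Gaussian with the robust RMS, scaled to match the kept-point histogram.
x = np.linspace(bins[0], bins[-1], 400)
norm = (~clipped.mask).sum() * (bins[1] - bins[0])
plt.plot(x, norm * np.exp(-0.5 * (x / robust_rms) ** 2) /
         (robust_rms * np.sqrt(2 * np.pi)), "k-",
         label=f"robust RMS = {robust_rms:.3f} arcsec")
plt.xlabel("residual (arcsec)")
plt.title(f"outlier fraction = {outlier_fraction:.1%}")
plt.legend()
plt.show()
```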
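
And for the spatial whisker plots, a sketch that bins the offsets on a grid across a CCD and draws them with matplotlib's quiver; the positions and offsets here are again synthetic stand-ins:

```python
# Sketch of a whisker (quiver) plot: mean astrometric offsets binned on a
# spatial grid across a CCD. x/y/dx/dy are synthetic stand-ins for per-source
# pixel positions and measured - reference offsets.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binned_statistic_2d

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 4096, 5000), rng.uniform(0, 4096, 5000)
dx = 0.05 * (x / 4096 - 0.5) + rng.normal(0, 0.01, 5000)  # fake distortion
dy = 0.05 * (y / 4096 - 0.5) + rng.normal(0, 0.01, 5000)

# Average the offsets in 16x16 spatial bins.
mean_dx, xe, ye, _ = binned_statistic_2d(x, y, dx, statistic="mean", bins=16)
mean_dy, _, _, _ = binned_statistic_2d(x, y, dy, statistic="mean", bins=16)
xc, yc = np.meshgrid(0.5 * (xe[:-1] + xe[1:]), 0.5 * (ye[:-1] + ye[1:]),
                     indexing="ij")

mag = np.hypot(mean_dx, mean_dy)
plt.quiver(xc, yc, mean_dx, mean_dy, mag, cmap="viridis")
plt.colorbar(label="mean offset (arcsec)")
plt.xlabel("x (pixels)")
plt.ylabel("y (pixels)")
plt.show()
```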

@KSK Could you capture the tools you found useful, or the tools you wished you had, when you were debugging astrometry on Feb 19, and report them here?

Feel free to delegate this to @rowen or @ctslater, etc.

I’ll toss out a few of the general strategies I needed to pursue in this recent effort:

  • Cross-matching on RA/Dec between various odd combinations of catalogs (icSrc vs a new data type in a branch, icMatch vs a different src table, etc.), and in particular between two different repositories. I’ve been dropping into astropy for this (a sketch follows this list).

  • After cross-matching, focusing on objects that are unique to one of the catalogs: it is the objects that changed that I’m most interested in. I used Python sets for this, and it took some non-trivial wrangling to create the entire SourceTable -> crossmatch -> set difference -> new SourceTable data path (a set-difference sketch follows this list).

  • Adding various bits of instrumentation into the processing path. For instance, we save the output of the matcher/WCS fitter, but not necessarily the immediate input to that process (or at least I wasn’t 100% positive it was the same as what we saved). This wasn’t very difficult, and I don’t think we want to clutter all code with every possible output point anyway. Once tasks are able to create their own data types, it might be nice to be able to write these with the butler rather than to random paths on disk.
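
For the astropy cross-matching in the first bullet, the core of what I do looks roughly like this. The file and column names are hypothetical, and real pipeline tables may store coordinates in radians, so adjust the units accordingly:

```python
# Sketch of a nearest-neighbor crossmatch between two arbitrary catalogs
# (e.g. an icSrc table against a src table from another repo) on RA/Dec.
# "icSrc.fits"/"src_other.fits" and the "ra"/"dec" columns are hypothetical.
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

cat_a = Table.read("icSrc.fits")
cat_b = Table.read("src_other.fits")

coords_a = SkyCoord(cat_a["ra"], cat_a["dec"], unit="deg")
coords_b = SkyCoord(cat_b["ra"], cat_b["dec"], unit="deg")

# For each source in A, find the nearest source in B, then cut on separation.
idx, d2d, _ = coords_a.match_to_catalog_sky(coords_b)
matched = d2d < 1.0 * u.arcsec

print(f"{matched.sum()} of {len(cat_a)} sources matched within 1 arcsec")
pairs_a = cat_a[matched]        # matched rows of A...
pairs_b = cat_b[idx[matched]]   # ...aligned row-by-row with their B partners
```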
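
And the set-difference step from the second bullet, continuing from the variables in the sketch above (cat_a, cat_b, idx, matched):

```python
# Pull out the sources that appear in only one catalog and carry them
# forward as ordinary tables, so the SourceTable -> crossmatch ->
# set difference -> new table path stays in one place.
import numpy as np

# Rows of A with no counterpart in B within the match radius.
only_in_a = cat_a[~matched]

# Rows of B that were never the nearest neighbor of a matched A source.
unmatched_b = np.setdiff1d(np.arange(len(cat_b)), idx[matched])
only_in_b = cat_b[unmatched_b]

print(f"{len(only_in_a)} sources unique to A, {len(only_in_b)} unique to B")
```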

I don’t know if I have any specific tool recommendations. In general, I think most of what I needed was general support for debugging stack outputs rather than something specific to astrometry.