reran processCcdDecam on the COSMOS data using PSFEx instead of PCA, ran my ubercal program on the output, and then measured the scatter for objects across multiple visits
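For concreteness, the scatter measurement is essentially a per-object RMS across visits; a minimal sketch of that step (not my actual ubercal code, and the column names here are made up) would be something like:

```python
# Sketch only: group matched multi-visit detections by object and compute the
# RMS of the calibrated magnitudes across visits. Column names are illustrative.
import pandas as pd

def per_object_scatter(matched, mag_col="mag"):
    """Per-object RMS magnitude scatter across visits.

    `matched` is assumed to be a DataFrame with one row per (object, visit)
    detection and an `objectId` column identifying repeat observations.
    """
    grouped = matched.groupby("objectId")[mag_col]
    scatter = grouped.std(ddof=1)   # RMS about the per-object mean
    nvisit = grouped.size()
    return scatter[nvisit >= 3]     # need a few visits for a meaningful RMS

# e.g. report the median scatter per band:
# for band, df in matched.groupby("filter"):
#     print(band, per_object_scatter(df).median())
```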
the results are mixed: the photometric scatter is better for g- and r-band, similar for i- and u-band (and for the astrometry), but worse for z-band
not sure why z-band is worse
Colin: why is z-band worse? Maybe galaxies are coming in and contaminating things.
[CORRECTION! After the meeting I double-checked the un-ubercaled photometric scatter plots and found that the scatter was substantially better for the z-band, for both PCA and PSFEx (see the Confluence page). For some reason my ubercal routine got messed up for the z-band. I double-checked the other four bands and the ubercaled version was always better than the un-ubercaled version (especially for z-band). So the conclusion is that PSFEx is always better than, or of similar quality to, PCA, and I recommend PSFEx be made the default.]
What I think will help is to do a rather cleverer job of handling saturated stars. Experiments suggest that using a GP interpolator with rather broader support than the current ±2 pixels will enable us to get good-enough centroids to solve the astrometry crudely (which is all we need – it's easy to peak it up)
I added a Gaussian process interpolator, borrowed from scikit-learn for the moment. This does a nice job of generating peaks along the rows, which prevents the spurious peaks at the start and end of the interpolation. However, because it’s interpolating each row individually, the row in the center of the star has more missing pixels than the others, so that row doesn’t peak up as much. The result is a horizontal trench through bright stars.
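For reference, the row-by-row scheme looks roughly like the sketch below (a sketch rather than the code on my branch; the kernel and its length scale are placeholders, and the real version should be told more about the PSF):

```python
# Rough sketch of row-by-row GP interpolation over saturated pixels with
# scikit-learn. Each row is conditioned only on the good pixels in that
# same row, which is why the central row of a bright star under-peaks.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def interpolate_rows(image, bad_mask, length_scale=3.0):
    """Fill masked pixels one row at a time with a GP conditioned on the
    good pixels in that row only."""
    kernel = ConstantKernel(1.0) * RBF(length_scale=length_scale)
    out = image.copy()
    for y in range(image.shape[0]):
        bad = bad_mask[y]
        if not bad.any() or bad.all():
            continue
        x_good = np.flatnonzero(~bad)[:, None]
        x_bad = np.flatnonzero(bad)[:, None]
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(x_good, image[y, ~bad])
        out[y, bad] = gp.predict(x_bad)
    return out
```

The trench falls directly out of this structure: the row through the centre of the saturated column has the fewest good pixels to condition on, so its prediction is the least peaked.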
I will poke around the interpolation parameters a bit more (I’m not giving it as much information about the PSF as I probably could), but any thoughts or suggestions @rhl?
I think you need to do a 2D interpolation, but that’s going to be slow.
I would have thought that attempting to get overlap by tossing reference sources brighter than the saturation point was the way to go, but it sounds like that didn’t work. Do you know why it didn’t work?
With the old interpolation, a good fraction of the brightest sources were these spurious detections on the edges of bright stars
With the high density of sources, the brightest n stars span a small range in magnitude, so I have to set that saturation limit very accurately (and I only have a rough estimate from a few manual cross-matches I was able to do).
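To make that concrete, the cut being discussed is just a magnitude threshold on the reference catalogue, and the problem is how sensitive the surviving sample is to that threshold; a toy sketch (all names illustrative, `sat_mag` only roughly known):

```python
# Illustrative only: drop reference sources brighter than an estimated
# saturation magnitude before matching. In a dense field the brightest N
# stars span a narrow magnitude range, so a small error in `sat_mag`
# removes (or keeps) a large fraction of the matchable stars.
import numpy as np

def cut_reference(ref_mags, sat_mag, margin=0.2):
    """Keep reference sources fainter than the (uncertain) saturation limit,
    padded by a safety margin in magnitudes."""
    return ref_mags > (sat_mag + margin)

# e.g. with sat_mag known only to ~0.5 mag, compare how many sources survive:
# for trial in (sat_mag - 0.5, sat_mag, sat_mag + 0.5):
#     print(trial, cut_reference(ref_mags, trial).sum())
```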
My next step is probably to look at the two catalogs in detail to see how much overlap I can get with some manual tuning. Hopefully that will show whether these suspicions are actually causing the problem or if there’s something else going on.
For 1, try setting detection.doFootprintBackground=True. Not much can be done about 2; I think it’s a fundamental problem that we’re going to have to overcome.
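In a config override that would look something like the following (the exact attribute path depends on which task's detection subtask is being configured):

```python
# e.g. in a processCcdDecam config override file; the path to the detection
# subtask may differ depending on the task being run.
config.detection.doFootprintBackground = True
```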
PS1 did OK in the Galactic Plane (though it doesn’t go as deep as HSC), so I wonder if Gene’s matching algorithm might be a bit more robust:
If the two sets of coordinates are not known to agree well, but the relative scale and approximate relative rotation is known, then a much faster match can be found using pair-pair displacements. In such a case, the two lists can be considered as having the same coordinate system, with an unknown relative displacement. In this algorithm, all possible pair-wise differences between the source positions in the two lists are constructed and accumulated in a grid of possible offset values. The resulting grid is searched for a cluster representing the offset between the two input lists. This algorithm can only tolerate a small error in the relative scale or the relative rotation of the two coordinate lists. However, this process is naturally O(N²), and is thus advantageous over triangle matching in some circumstances. This process can be extended to allow a larger uncertainty in the relative rotation by allowing the procedure to scan over a range of rotations.
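A toy numpy version of that offset-histogram idea, just to illustrate the mechanics (not Gene's implementation; the bin size and search range here are placeholders):

```python
# Sketch of pair-wise displacement matching: histogram all offsets between
# the two source lists and take the peak bin as the relative shift.
# Assumes the scale and rotation are already approximately correct.
import numpy as np

def offset_match(xy1, xy2, max_offset=500.0, bin_size=2.0):
    """Estimate the (dx, dy) offset between source lists of shape (N1, 2) and (N2, 2)."""
    # All pair-wise displacements: shape (N1 * N2, 2). O(N1*N2) in time and memory.
    d = (xy2[None, :, :] - xy1[:, None, :]).reshape(-1, 2)
    # Accumulate them on a grid of candidate offsets.
    edges = np.arange(-max_offset, max_offset + bin_size, bin_size)
    hist, xe, ye = np.histogram2d(d[:, 0], d[:, 1], bins=[edges, edges])
    # The cluster of true matches shows up as the most populated cell.
    i, j = np.unravel_index(np.argmax(hist), hist.shape)
    dx = 0.5 * (xe[i] + xe[i + 1])
    dy = 0.5 * (ye[j] + ye[j + 1])
    return dx, dy
```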
The current interpolator is doing a poor man’s GP interpolation. I was (as Paul indicated) going to do this in 2-D, using all the pixels around the bad pixels.
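For what it's worth, the 2-D version would look something like the sketch below (again using the scikit-learn GP as a stand-in; the cubic cost of the fit is exactly why it will be slow, so this is only sensible on small cutouts around each saturated star):

```python
# Sketch of 2-D GP interpolation: condition a single GP on all good pixels
# in a cutout around the saturated region and predict the bad pixels.
# Kernel and length scale are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def interpolate_2d(image, bad_mask, length_scale=3.0):
    """Fill masked pixels in a small 2-D cutout using all surrounding good pixels."""
    yy, xx = np.indices(image.shape)
    coords = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    good = ~bad_mask.ravel()
    kernel = ConstantKernel(1.0) * RBF(length_scale=length_scale)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(coords[good], image.ravel()[good])   # O(n_good^3): keep the cutout small
    out = image.copy().ravel()
    out[~good] = gp.predict(coords[~good])
    return out.reshape(image.shape)
```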