Generating per-object coadds of bright objects for image subtraction


At the risk of making both @jbosch and @ebellm despair, I have a semi-crazy idea to explore:

Should we use per-object templates for bright objects in concert with full-image templates for image subtraction and DIA source identification?

The motivation is to ensure that bright objects in the template are built from combinations of consistent PSFs (i.e., free of bad columns, saturation spikes, etc.). Getting the convolution wrong in the wings of the PSFs around bright objects will lead to significant numbers of false detections.

  1. Use a “standard” good-PSF coadd template for the subtractions.
  2. Then re-subtract the regions around bright objects using per-object templates for those bright objects.
  3. Run DIA source detection and measurement on the subtraction from Step 1, masking the regions around bright objects, and then add in the detections from Step 2 (see the sketch after this list).
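
To make the three steps concrete, here is a minimal numpy sketch of how they might compose, assuming per-object templates and bright-object footprint masks already exist and have been PSF-matched to the science image; all function and variable names here are hypothetical:

```python
import numpy as np

def hybrid_difference(science, global_template, per_object_templates,
                      bright_masks):
    """Step 1: standard subtraction; Step 2: per-object re-subtraction.

    per_object_templates / bright_masks: one template image and one
    boolean footprint mask per bright object (hypothetical inputs).
    """
    diff = science - global_template  # Step 1: full-image subtraction
    for tmpl, mask in zip(per_object_templates, bright_masks):
        # Step 2: inside each bright-object footprint, use the
        # per-object template instead of the global one.
        diff[mask] = science[mask] - tmpl[mask]
    # Step 3 would run detection on `diff`, or equivalently detect on
    # the two pieces separately and merge the catalogs.
    return diff
```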

Notes:

  • Simply masking objects brighter than, e.g., r < 17 mag is reasonable. I’m thinking more of the next two magnitudes (roughly 17 < r < 19).
  • This presumes that we’re really good at brighter-fatter (BF) corrections, but that assumption is necessary for the baseline as well.
  • I dislike having sharp boundaries in detection, but we may implicitly have them anyway in templates, due to the different effective number of pixels contributing to each template pixel and a different mix of input images across a patch.
  • We may naturally transition to such a regime. In the first 6 months we won’t have gaps through most bright objects. As we accumulate more images, the chance of a masked column/region being overlaid onto a given bright object approaches unity (see the back-of-envelope sketch after this list). At that point it may be worth implementing this proposed (semi-crazy) scheme.
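
A back-of-envelope for that last point, assuming the masked fraction f of each input image (bad columns, edges, etc.) is spatially random and independent between visits; f = 0.01 is purely illustrative:

```python
# If a fraction f of each input is masked at random, the probability
# that at least one of N overlapping inputs is masked at a given
# bright-object position is 1 - (1 - f)**N.
f = 0.01
for n_images in (5, 50, 200, 500):
    p = 1.0 - (1.0 - f) ** n_images
    print(f"N = {n_images:4d}: P(some input masked here) = {p:.3f}")
```

With f = 1%, that probability passes 50% by N ≈ 70 and reaches ~99% by N = 500, i.e., essentially every bright object eventually sits under a masked region in some input.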

To the extent that problems around bright stars are due to our difference kernels (like our PSFs) just not going out far enough in radius, I think that has to be solved by explicitly modeling and subtracting or deconvolving the PSF wings. I’m, perhaps naively, imagining doing that separately on the input images to the template and on the to-be-differenced science image, but I’m by no means set on that.
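
As a rough illustration of what "explicitly modeling and subtracting the PSF wings" could look like, here is a toy that fits an assumed power-law wing profile in an annulus around a bright star and subtracts it out to some radius; the power-law form, the radii, and the function name are all illustrative assumptions, not a measured wing model:

```python
import numpy as np

def subtract_psf_wings(image, x0, y0, r_fit=(50.0, 150.0), r_max=300.0):
    # Hypothetical helper: fit I(r) = A * r**(-beta) in an annulus
    # outside the saturated core, then subtract the model out to r_max.
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    sel = (r > r_fit[0]) & (r < r_fit[1]) & (image > 0)
    # Linear fit of log(I) vs log(r) gives the power-law slope/amplitude.
    slope, intercept = np.polyfit(np.log(r[sel]), np.log(image[sel]), 1)
    out = image.copy()
    wings = (r > r_fit[0]) & (r < r_max)
    out[wings] -= np.exp(intercept) * r[wings] ** slope
    return out
```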

The problems we have with discontinuous PSFs due to edges or missing pixels in our current coadds are a feature of making direct coadds with no PSF matching. I think difference-imaging templates simply have to use PSF matching, even though that causes different problems, e.g., working out how to make a difference kernel for science images with excellent seeing.
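
For concreteness, PSF matching here means convolving each input to a common (typically broader) target PSF before coaddition; a sketch using photutils’ matching-kernel utilities rather than the LSST pipelines’ own machinery, with an illustrative window choice:

```python
from astropy.convolution import convolve_fft
from photutils.psf.matching import CosineBellWindow, create_matching_kernel

def match_to_target_psf(image, image_psf, target_psf):
    # image_psf and target_psf are same-shaped, normalized PSF images.
    # The cosine-bell window suppresses high-frequency ringing in the
    # Fourier-space kernel ratio; alpha = 0.35 is an illustrative value.
    kernel = create_matching_kernel(image_psf, target_psf,
                                    window=CosineBellWindow(alpha=0.35))
    return convolve_fft(image, kernel, normalize_kernel=True)
```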

Are there other problems in differences around bright stars your proposal might also help with? Or were you thinking of it more as a potential alternative to PSF-matched templates?

I think I don’t fully understand how PSF-matched templates work in the presence of masking.

I think that has to be solved by explicitly modeling and subtracting or deconvolving the PSF wings.

And I agree with this in principle, but I have never seen an implementation do this successfully. I would love to learn more about how to do this well in practice, particularly for both stars and galaxies.

There will be galaxies with cores bright enough that we have to understand the PSF wings. But it may be a bit of a mess to figure out what is PSF versus extended flux. That’s a whole sub-field of AGN variability analysis.

There’s both good and bad: PSF-matching explodes the set of pixels you have to throw away on the input image (if you’re conservative, by the matching-kernel width), but it means you can just ignore those bad pixels when you build the coadd, and the coadd will still have contiguous pixels and, if you have at least some good pixels everywhere, no bad pixels. It will have a messy noise distribution, which may matter early in the survey, when the templates aren’t much deeper than the to-be-differenced science image, if we want to decorrelate the noise or otherwise properly account for pixel uncertainties in detection.
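
A sketch of that trade-off, assuming the inputs are already PSF-matched onto a common pixel grid: each input’s bad-pixel mask is conservatively grown by the matching-kernel radius, and the weighted mean then simply ignores masked pixels, so the coadd stays contiguous while its effective depth (and hence its noise) varies from pixel to pixel. All names are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def masked_psf_matched_coadd(images, bad_masks, weights, kernel_radius):
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    # PSF matching spreads each bad pixel's influence by the kernel
    # width, so grow the masks by that radius before coadding.
    grow = np.ones((2 * kernel_radius + 1,) * 2, dtype=bool)
    for img, bad, w in zip(images, bad_masks, weights):
        good = ~binary_dilation(bad, structure=grow)
        num[good] += w * img[good]
        den[good] += w
    # den varies across the image: fewer contributing inputs means a
    # noisier pixel; this is the "messy noise distribution" noted above.
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), np.nan)
```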

Agreed, but I think per-object coadds can’t help you with this; on the spatial scales of those extended wings, you can’t get away with just throwing away any input images with bad pixels or edges in the region of interest, because you’ll just end up throwing away almost everything.


Thanks, I understand. I appreciate you helping me think through this.