Advanced methods for background subtraction

Dear all,

I’m interested in collaborating on background modelling and subtraction. I know that subtractBackground.py in lsst.meas.algorithms contains the functions and classes for background subtraction. I read through the files and was happy to find that several of the standard recipes are already implemented.

Over the last couple of years I’ve been using Gaussian processes and radial basis functions to model the sky in WIRCam and DECam images. Some may consider these methods overkill, but I have found that they work well when dealing with stray light, pupil ghosts, and diffuse light. Is LSST DM interested in implementing more recipes for modelling and subtracting the background? Could I play around and add thin plate splines (TPS) and Kriging? Is there any plan to add GPU support to the image processing pipeline?
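
For concreteness, here is a minimal sketch of the kind of thin plate spline sky model I have in mind, built on scipy’s RBFInterpolator. The box size, clipping, smoothing value, and the assumption that sources are already masked to NaN are all illustrative choices, not a finished recipe:

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from scipy.interpolate import RBFInterpolator


def tps_background(image, box_size=128, smoothing=1e3):
    """Thin plate spline sky model from sigma-clipped medians of coarse boxes.

    `image` is a 2-D numpy array with detected sources already masked to NaN.
    """
    ny, nx = image.shape
    centers, values = [], []
    for y0 in range(0, ny, box_size):
        for x0 in range(0, nx, box_size):
            box = image[y0:y0 + box_size, x0:x0 + box_size]
            good = box[np.isfinite(box)]
            if good.size < 10:          # skip boxes dominated by masked pixels
                continue
            _, median, _ = sigma_clipped_stats(good, sigma=3.0)
            centers.append((y0 + box_size / 2, x0 + box_size / 2))
            values.append(median)
    # Thin plate spline through the box medians; nonzero smoothing keeps the
    # surface from chasing noise in individual boxes.
    spline = RBFInterpolator(np.array(centers), np.array(values),
                             kernel='thin_plate_spline', smoothing=smoothing)
    # For large images it is cheaper to evaluate on a coarser grid and
    # upsample, but full resolution keeps the sketch short.
    yy, xx = np.mgrid[0:ny, 0:nx]
    return spline(np.column_stack([yy.ravel(), xx.ravel()])).reshape(ny, nx)
```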

I’ve had good success with Shepard’s method (inverse-distance weighting to a power), which has recently been implemented in photutils: photutils.readthedocs.io/en/latest/api/photutils.utils.ShepardIDWInterpolator.html
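
For reference, usage amounts to interpolating superpixel sky estimates back onto the pixel grid. This is only a toy sketch following the linked documentation; the coordinates, values, and the n_neighbors/power settings are made up for illustration:

```python
import numpy as np
from photutils.utils import ShepardIDWInterpolator

# (y, x) centres of coarse sky boxes and their sigma-clipped median sky values
coords = np.array([[64.0, 64.0], [64.0, 192.0], [192.0, 64.0], [192.0, 192.0]])
sky = np.array([101.2, 100.8, 102.5, 101.9])

interp = ShepardIDWInterpolator(coords, sky)

# Evaluate the inverse-distance-weighted sky model on a 256x256 pixel grid
yy, xx = np.mgrid[0:256, 0:256]
positions = np.column_stack([yy.ravel(), xx.ravel()])
background = interp(positions, n_neighbors=4, power=2.0).reshape(256, 256)
```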

I’ve posted an example with a fairly pathological background at https://gist.github.com/hcferguson/3489a6404fe268d6f8a0b90a172d24f5 .

We’d certainly love to have new algorithms contributed for things like background estimation. The only potential problem is that we haven’t really defined the interface for background estimation in a way that makes it easy to swap in other algorithms (or at least qualitatively different ones), as we have for source measurement algorithms, PSF estimation, and a few other things. It is possible to swap in different background estimation code; it’s just a little clunky, and there are likely places in the code that make assumptions they shouldn’t about how some of the background code works. We’ve got some open issues for cleaning this up here.

If you’re eager to get started, I suggest taking a look at our SubtractBackgroundTask to see whether what you have in mind can fit into the same interface. If this is more of a long-term proposal, it may be better to watch the issue I linked above to see when we’ll be in better shape to support pluggable background algorithms.
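
To give a rough picture of what “fitting into the same interface” means: SubtractBackgroundTask is a configurable Task whose run() method takes an exposure, subtracts the model in place, and returns the model in a Struct. A sketch of an alternative estimator in that shape might look like the following. The config fields and the placeholder fit are purely illustrative, and a real drop-in would need to return the stack’s background objects rather than a bare numpy array:

```python
import numpy as np
import lsst.pex.config as pexConfig
import lsst.pipe.base as pipeBase


class TpsBackgroundConfig(pexConfig.Config):
    boxSize = pexConfig.Field(dtype=int, default=128,
                              doc="Superpixel size for the coarse sky grid (pixels)")
    smoothing = pexConfig.Field(dtype=float, default=1e3,
                                doc="Thin plate spline smoothing parameter")


def fitTpsModel(array, boxSize, smoothing):
    """Placeholder for the actual TPS/Kriging fit; a constant sky level here,
    just so the sketch runs end to end."""
    return np.full_like(array, np.median(array))


class TpsBackgroundTask(pipeBase.Task):
    """Sketch of a background estimator shaped like SubtractBackgroundTask."""

    ConfigClass = TpsBackgroundConfig
    _DefaultName = "tpsBackground"

    def run(self, exposure):
        image = exposure.getMaskedImage().getImage()
        array = image.getArray()          # numpy view into the pixel data
        model = fitTpsModel(array, self.config.boxSize, self.config.smoothing)
        array -= model                    # subtract in place, as the stack expects
        # A real replacement should wrap the model in the stack's background
        # classes so downstream code can restore or re-evaluate it.
        return pipeBase.Struct(background=model)
```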

I should add that our plans for background estimation also involve what we call background matching, in which we model much of the background from differences between pairs of images, to ensure a consistent definition of what is not background across all of the images that cover a patch of sky. We think this will be particularly important in dealing with stray light, ghosts, and other background features that differ between observations. We’ve gotten this working in our codebase on SDSS data, but it’s not yet up and running on other cameras.
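
The core idea is simple enough to show in a few lines of plain numpy. This is only a conceptual illustration, not our implementation (which also has to handle warping, masks, and weights): fit a smooth, low-order model to the difference of two aligned images and apply it as an offset, so only one image in the patch needs an absolute background model.

```python
import numpy as np


def match_background(image, reference, box_size=256):
    """Adjust `image` so its background matches `reference` (aligned 2-D arrays)."""
    diff = reference - image
    ny, nx = diff.shape
    ys, xs, meds = [], [], []
    for y0 in range(0, ny, box_size):
        for x0 in range(0, nx, box_size):
            box = diff[y0:y0 + box_size, x0:x0 + box_size]
            meds.append(np.nanmedian(box))   # robust to sources in either image
            ys.append(y0 + box_size / 2)
            xs.append(x0 + box_size / 2)
    # Fit a plane to the binned difference; a real matcher would use a more
    # flexible (but still smooth) model.
    A = np.column_stack([np.ones(len(meds)), xs, ys])
    coeffs, *_ = np.linalg.lstsq(A, np.array(meds), rcond=None)
    yy, xx = np.mgrid[0:ny, 0:nx]
    offset = coeffs[0] + coeffs[1] * xx + coeffs[2] * yy
    return image + offset
```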

At this point we’re not planning to add GPU support. We did some prototype work on GPU implementations of a few low-level algorithms a while back, but we’re not currently planning to have a GPU cluster for our processing.