Extreme memory usage during source deblending / measurement

Hi,

I have recently encountered very high memory usage while running ProcessCcdTask, which is most severe during the calibrate.deblend and calibrate.measurement subtasks. The memory usage seems to scale consistently with the number of sources. In short, a final catalogue of ~20,000 sources (from a 20-megapixel image) requires around 16.6 GB of RAM.

This is surprisingly high and makes it impossible to process more than a few images at once. Can anyone suggest what is going wrong here?

The logs look like this, where memory usage peaked at the end of calibrate.deblend:

characterizeImage.detection INFO: Detected 11499 positive peaks in 6883 footprints and 8 negative peaks in 3 footprints to 5 sigma
characterizeImage.detection INFO: Resubtracting the background after object detection
characterizeImage.measurement INFO: Measuring 6883 sources (6883 parents, 0 children)
characterizeImage.measurePsf INFO: Measuring PSF
characterizeImage.measurePsf INFO: PSF star selector found 406 candidates
characterizeImage.measurePsf.reserve INFO: Reserved 0/406 sources
characterizeImage.measurePsf INFO: Sending 406 candidates to PSF determiner
characterizeImage.measurePsf.psfDeterminer WARNING: NOT scaling kernelSize by stellar quadrupole moment, but using absolute value
characterizeImage.measurePsf INFO: PSF determination using 391/406 stars.
characterizeImage INFO: iter 2; PSF sigma=2.48, dimensions=(41, 41); median background=547.78
characterizeImage.measurement INFO: Measuring 6883 sources (6883 parents, 0 children)
characterizeImage.measureApCorr INFO: Measuring aperture corrections for 2 flux fields
characterizeImage.measureApCorr INFO: Aperture correction for base_GaussianFlux: RMS 0.417713 from 327
characterizeImage.measureApCorr INFO: Aperture correction for base_PsfFlux: RMS 0.286895 from 342
characterizeImage.applyApCorr INFO: Applying aperture corrections to 2 instFlux fields
ctrl.mpexec.singleQuantumExecutor INFO: Execution of task 'characterizeImage' on quantum {instrument: 'Huntsman', detector: 9, visit: 210304183232676, ...} took 68.255 seconds
ctrl.mpexec.mpGraphExecutor INFO: Executed 2 quanta, 1 remain out of total 3 quanta.
calibrate.detection INFO: Detected 11287 positive peaks in 6252 footprints and 8 negative peaks in 2 footprints to 5 sigma
calibrate.detection INFO: Resubtracting the background after object detection
calibrate.skySources INFO: Added 100 of 100 requested sky sources (100%)
calibrate.deblend INFO: Deblending 6352 sources
meas_deblender.baseline WARNING: Skipping peak at (1495.0, 926.0): no unmasked pixels nearby
meas_deblender.baseline WARNING: Skipping peak at (1516.0, 919.0): no unmasked pixels nearby
meas_deblender.baseline WARNING: Skipping peak at (1512.0, 923.0): no unmasked pixels nearby
meas_deblender.baseline WARNING: Skipping peak at (2453.0, 1474.0): no unmasked pixels nearby
meas_deblender.baseline WARNING: Skipping peak at (2481.0, 1453.0): no unmasked pixels nearby
meas_deblender.baseline WARNING: Skipping peak at (2478.0, 1458.0): no unmasked pixels nearby
meas_deblender.baseline WARNING: Skipping peak at (2466.0, 1462.0): no unmasked pixels nearby
meas_deblender.baseline WARNING: Skipping peak at (2453.0, 1442.0): no unmasked pixels nearby
calibrate.deblend INFO: Deblended: of 6352 sources, 1846 were deblended, creating 6873 children, total 13225 sources
calibrate.measurement INFO: Measuring 13225 sources (6352 parents, 6873 children)
calibrate.applyApCorr INFO: Applying aperture corrections to 2 instFlux fields
calibrate INFO: Copying flags from icSourceCat to sourceCat for 6130 sources
calibrate.photoCal.match.sourceSelection INFO: Selected 1322/13225 sources
calibrate INFO: Loading reference objects from region bounded by [199.65715831, 202.90882873], [-44.13322063, -41.75304918] RA Dec
calibrate INFO: Loaded 1348 reference objects
calibrate.photoCal.match.referenceSelection INFO: Selected 1348/1348 references
calibrate.photoCal.match INFO: Matched 59 from 1322/13225 input and 1348/1348 reference sources
calibrate.photoCal.reserve INFO: Reserved 0/59 sources
calibrate.photoCal INFO: Not applying color terms because config.applyColorTerms is None and data is not available and photoRefCat is provided
calibrate.photoCal INFO: Magnitude zero point: 25.837880 +/- 0.001690 from 57 stars

I note that I am using the Gen3 middleware and the latest weekly Docker build. I have not found a similar topic on the Community Forum.


That’s an awful lot of sources. We don’t get that many on deep extragalactic coadds. So what is special about your image that you’re getting so many on a single CCD? Perhaps you could share an image?

Hi @price, thanks for getting back to me.

We are using images from the Huntsman telescope, which have quite a wide FOV per CCD at ~2.5 deg^2. This particular image was a 5 min exposure from a single CCD, and is not an especially dense field. The number of detected sources does not seem excessive:

While 10,000 sources may be a lot compared to a typical LSST exposure, I am still surprised by the memory consumption. Programs like SExtractor are able to handle similar images without much of a problem.

We could, for example, raise the detection threshold, but since we are interested in low-surface-brightness science that is not desirable. The other option would be to override some of the detection subtasks for Huntsman, but I would like to avoid that if possible.

Programs like Source Extractor don’t try to do full deblending. The problem is that you have a lot of sources that are blended together, and deblending them is expensive, both in terms of memory and execution time.

Is that a flat-fielded sky-subtracted image (a calexp)? There seems to be more light in the middle than on the edges. If you’re not subtracting that off, then you’re going to artificially blend a lot of sources. You may be able to do that by tweaking some parameters. I’m happy to help you out with that once I know what we’re dealing with.

@price

The image is properly calibrated (bias, defects, dark and flat) using IsrTask before detection is run. The image I showed above is just the raw exposure, so the extra light in the middle is just the flat-field response.

What do you mean by “full” deblending? I am familiar with how SExtractor does it but do not know much about the LSST deblender.

I was also wondering about the sky subtraction in this image and the possibility that you may be getting huge (and artificial) blends. There is a config parameter called maxFootprintArea in the deblender tasks which sets a

Maximum area for footprints before they are ignored as large; non-positive means no threshold applied

It defaults to 100000 here, but even for normal processing of HSC data we override this to 10000 in single frame processing because of the memory cost of trying to deblend such large parent footprints (e.g. overrides are here in charImage and here in calibrate in obs_subaru). Skipping blends won't help with your desire to keep and process all the sources, but if you add this override you will easily see just how many large footprints you have with your current processing parameters, as lines like the following will appear in the logs (a sketch of the override itself follows the example log line):

30605 WARN  2021-06-25T08:56:22.420-0500 singleFrameDriver.processCcd.calibrate.deblend: Parent 527486408458531: skipping large footprint (area: 14894)
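For concreteness, the override itself is a single line in a calibrate config file. This sketch mirrors the obs_subaru value mentioned above; the exact file location depends on your obs package:

# e.g. config/calibrate.py in your obs package
config.deblend.maxFootprintArea = 10000

With the Gen3 middleware you should also be able to pass the same value on the command line with something like -c calibrate:deblend.maxFootprintArea=10000.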

Given multiple blended objects (multiple peaks within an area above threshold), Source Extractor (please let’s not continue to propagate the name “SExtractor”) simply measures all objects. The LSST pipeline attempts to disentangle the two objects so we can measure each without being affected by the others. This doesn’t always work well, and it is expensive when there are lots of blends.

Can you show the calexp with sources overlaid? I’d like to see the full field and a zoom on clusters of sources.

@laurenam Thanks for the tip. I will apply this override and get back to you with the output.

@price I was under the impression that SExtractor (Source Extractor) does a multi-component Gaussian fit to the footprint in order to determine which pixels get assigned to which deblended source. I think it can also provide a corrected flux based on this, but I would have to check. Is that similar to the LSST algorithm? I will get back to you with the calexp image.

On an unrelated note, I don’t think there is a problem with “SExtractor”. That is how it was presented in the original article, and is often used in the community / literature.

EDIT: It has come to my attention that some in the community may find “SExtractor” an offensive term. I therefore will adopt “Source Extractor” in future. I apologise for any offence caused by this.

Hi @price

Here is the calexp with sources overlaid:

And a zoom in:

Looking at the footprints, I wouldn’t say any are excessively large. Let me know if there is anything else that might help diagnose the problem. Thanks!

Well, that doesn’t look much like what I was expecting. As you say, most of the footprints are relatively small and don’t appear to contain multiple peaks. Maybe it’s just that one large footprint in the center-right.

Can you send me the calexp FITS image and the src catalog so I can poke around?

Hi @price, sure thing. Here is a Dropbox link to the pipeline outputs:

Below is the calexp with the sources plotted. As you can see, the main problem is CenA (formerly known as “that one large footprint in the center-right”). Just don’t observe any bright galaxies, and you should be in good shape!

Seriously, the CenA parent is about 250k pixels and contains about 7k children, probably mostly noise spikes (the individual peaks are above the image threshold, but not above a local threshold). I think @laurenam’s idea of setting maxFootprintArea in the deblender is the thing to do, as I suspect you don’t particularly care about deblending and measuring CenA on every input image — certainly on the coadd you do, but what would be the point of doing it for every individual image? Looking at the distribution of deblend_nChild and base_FootprintArea_value, I suggest maxFootprintArea=3000, which would remove CenA and the haloes around the bright stars, and make the processing go smoother.
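If it helps when choosing a threshold, here is a rough sketch of one way to look at that footprint-area distribution (not the exact code I used; the catalog path is a placeholder):

>>> import numpy as np
>>> from lsst.afw.table import SourceCatalog
>>> cat = SourceCatalog.readFits("src.fits")  # placeholder path to your src catalog
>>> parents = cat["parent"] == 0  # count each blend once, via its parent
>>> np.percentile(cat["base_FootprintArea_value"][parents], [50, 95, 99, 100])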


While I was poking around, I looked at the photometry, and the usual star/galaxy separation plot (PsfMag - GaussianMag as a function of PsfMag) looks a bit strange. Most of the large features (the blobs in the top-right, center-right and middle-top, and the horizontal feature at 2.5) are due to CenA, so we can ignore them. But there’s more than just that, and if I plot PsfMag-GaussianMag as a function of position, there’s a strange feature in the lower-right. The aperture correction is ramping up there (the image quality is significantly worse than in the rest of the image), so my guess is you need to increase the order of the PSF spatial variation and/or the order of the aperture correction.


Here’s the code for plotting the sources:

>>> from lsst.afw.image import MaskedImageF
>>> image = MaskedImageF("/home/pfs/data/huntsman/calexp_Huntsman_g_g_band_210304160733323_2d194b0013090900_pipeline_outputs_20210826T231952Z.fits")
>>> from lsst.afw.table import SourceCatalog
>>> cat = SourceCatalog.readFits("/home/pfs/data/huntsman/src_Huntsman_g_g_band_210304160733323_2d194b0013090900_pipeline_outputs_20210826T231952Z.fits")
>>> from lsst.afw.display import Display
>>> display = Display(backend="ds9", frame=1)
>>> display.mtv(image)
>>> with display.Buffering():        
...   for src in cat:
...     display.dot("+", src.get("base_SdssCentroid_x"), src.get("base_SdssCentroid_y"))
... 
>>> max(src.getFootprint().getArea() for src in cat)
246074
>>> max(src.get("deblend_nChild") for src in cat)
7217

(Note: ordinarily, you should use the butler to read the data, but I am just reading the FITS files directly. I would use ExposureF instead of MaskedImageF, except that I’m using an old version of afw which doesn’t support version=1 Exposure files.)
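For reference, here is a rough sketch of how the PsfMag - GaussianMag map could be reproduced from the same catalog (again, not my exact plotting code; the colour limits are arbitrary, and the photometric zero point cancels in the difference):

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> psf = cat["base_PsfFlux_instFlux"]
>>> gauss = cat["base_GaussianFlux_instFlux"]
>>> good = (psf > 0) & (gauss > 0)  # drop non-detections and NaNs
>>> dMag = -2.5*np.log10(psf[good]/gauss[good])  # PsfMag - GaussianMag
>>> plt.scatter(cat["base_SdssCentroid_x"][good], cat["base_SdssCentroid_y"][good],
...             c=dMag, s=2, vmin=-0.5, vmax=0.5, cmap="coolwarm")
>>> plt.colorbar(label="PsfMag - GaussianMag")
>>> plt.show()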


Oh, another idea to play around with is to reduce the detection.tempLocalBackground.binSize from its default of 64. This is the scale of a background model (in pixels) that is temporarily removed from the image in order to suppress noise spikes. You don’t have a lot of stellar wings, so a smaller scale may help to follow the background a bit more closely.
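The override would look something like this (32 is purely illustrative; it's worth experimenting with the value):

# e.g. in the same calibrate config file
config.detection.tempLocalBackground.binSize = 32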


Hi @price

Thanks for your highly detailed response. Very nice to get help from the experts!

You are right that there is not much point (that I can see at the moment, anyway) in deblending the large footprints on a per-CCD basis. So I will adopt your and @laurenam's suggestion about limiting the maximum footprint size for the deblender when producing calexp datasets.

Interesting point about the PSF. We are indeed aware of some large variations in image quality across the FOV with our current setup. Your method of plotting the aperture correction will be very helpful for diagnosing our problem - thanks!

I will mark your answer as the solution and get back to you here if the memory problem persists. Otherwise, thank you for your help on this.
