How to adjust Detection and Measurement threshold in the AP pipeline to measure fainter sources

Hello, I’ve been trying to run the LSST Alert Production (AP) pipeline on DECam images, especially the DES SN observations, to compare the output to the DM-processed data set. To do this, I match the LSST AP pipeline output to known DES SNe and examine their light curves.

One thing I’ve noticed is that the photometry from the LSST AP pipeline is missing measurements/detections for the fainter parts of the DES light curves. In particular, the LSST AP output is missing the measurements with DES magnitude errors larger than 0.2 mag; see the attached plot.

I talked to Shenming Fu @sfu about this, and he pointed out that the default detection threshold of the AP pipeline is set at 5.0 (config.detection.thresholdValue=5.0 in detectAndMeasure_config_DECam_runs.py). That explains the 0.2 mag magnitude-error cutoff (see the quick check below). I tried lowering thresholdValue, but at 2.0 the LSST AP output starts to miss the bright parts of the DES light curves. We suspect the detection threshold needs to be adjusted together with some other parameters to optimize the detections/measurements.
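(The arithmetic behind that cutoff: a flux signal-to-noise ratio SNR corresponds to a magnitude error of roughly 2.5/ln(10)/SNR, so a 5-sigma detection floor maps to about 0.22 mag:)

```python
import math

# Magnitude error implied by a given flux signal-to-noise ratio:
# sigma_m ~ 2.5 / ln(10) / SNR ~ 1.0857 / SNR
for snr in (5.0, 2.0):
    print(f"SNR = {snr}: sigma_m ~ {2.5 / math.log(10) / snr:.3f} mag")
# SNR = 5.0 -> ~0.217 mag, matching the ~0.2 mag cutoff in the plot
```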

I’ve been looking at the other parameters in the AP pipeline configuration file detectAndMeasure_config_DECam_runs.py, but I don’t know which other parameters obviously need adjusting alongside the threshold. Does anyone have suggestions for which parameters to look into to improve the detection and measurement of sources below a threshold of 5.0?
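For context, the threshold-related fields I’ve been looking at in the override file are sketched below; as far as I can tell these come from lsst.meas.algorithms.SourceDetectionConfig, and the values shown are illustrative, not recommendations:

```python
# Sketch of a detectAndMeasure detection-config override (illustrative values).
# Field names are from lsst.meas.algorithms.SourceDetectionConfig.
config.detection.thresholdValue = 5.0               # significance threshold for detection
config.detection.includeThresholdMultiplier = 1.0   # multiplier applied when growing footprints
config.detection.thresholdType = "pixel_stdev"      # how thresholdValue is interpreted
```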

(I am using pipelines version 26.0.0, if it matters.)


Lowering the threshold substantially below an SNR of 5.0 will not be productive. If you’re only interested in known sources, I’d suggest using ForcedMeasurementTask, which allows you to measure sub-threshold events.
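To sketch the idea rather than the exact API: forced measurement evaluates the PSF at a known position and fits only the flux. Below is a hand-rolled illustration, assuming `exposure` is the difference image (an lsst.afw.image.Exposure) and `coord` is a known SN position (an lsst.geom.SpherePoint); the real ForcedMeasurementTask additionally handles reference-catalog transforms, edge effects, and schema bookkeeping:

```python
import numpy as np

def forced_psf_flux(exposure, coord):
    """Inverse-variance-weighted PSF flux at a fixed sky position (sketch only)."""
    point = exposure.getWcs().skyToPixel(coord)        # sky -> pixel coordinates
    psf_image = exposure.getPsf().computeImage(point)  # PSF realized at that position
    bbox = psf_image.getBBox()                         # NB: may run off the image edge
    data = exposure.image[bbox].array
    var = exposure.variance[bbox].array
    phi = psf_image.array
    # Maximum-likelihood flux for a known profile phi:
    #   F = sum(phi * data / var) / sum(phi**2 / var)
    weight = phi / var
    flux = np.sum(weight * data) / np.sum(weight * phi)
    flux_err = np.sqrt(1.0 / np.sum(weight * phi))
    return flux, flux_err
```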

(I’d also suggest running with a newer pipelines version: v28 is current, and v29 is coming very soon.)

Following up on Eric’s comment: if you lower the detection threshold, you’ll end up finding many more noise/junk sources, which will be blended with the real sources (because, e.g., a 2-sigma detection threshold covers a large fraction of the image). That will result in effectively detecting fewer sources, because everything will be blended.
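To put a number on “a large fraction of the image”: for Gaussian noise, the fraction of pixels exceeding a one-sided threshold is just the normal tail probability. A quick check (assuming scipy is available):

```python
from scipy.stats import norm

# Fraction of pure-noise pixels above a one-sided threshold of t sigma.
for t in (5.0, 3.0, 2.0):
    print(f"{t} sigma: {norm.sf(t):.1e}")
# 5 sigma -> 2.9e-07, 3 sigma -> 1.3e-03, 2 sigma -> 2.3e-02
# At 2 sigma roughly 1 pixel in 44 fires on noise alone, and that is before
# detection footprints are grown, so spurious footprints blanket the image.
```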

In Alert Production, we will do forced photometry at the locations of detections, so sources that were previously below 5 sigma will get measurements from the earlier images once they rise above that S/N.

Perhaps also worth noting: I’ve experimented a bit with modeling the measurement (characterization) of faint sources (as opposed to detection), and found that 5 sigma is something of a magic threshold: below that level, flux uncertainties quickly become strongly non-Gaussian, and the dependence between flux and position becomes strong and complicated. So the usual best-fit ± error-bar photometry/astrometry summaries really don’t adequately convey what the data say about source properties for such faint candidates. And that’s just for point sources.

Hi @ynzhang - As the forum watcher this week, I wanted to check whether @ebellm’s response provided the answer you were looking for.

Thank you for checking Sarah, and thank you Eric, John and Tom!

I checked out the forced measurements, and indeed, the comparisons are much better! I will go with these going forward. Thank you so much for your help! 🙂

Great, I’ve marked Eric’s response as the solution. Thanks for asking your question.