I am jointly processing HSC PDR2 data alongside shallower VISTA data. On the deepest VIDEO fields mergeCoaddDetections seems to work and generates a comparable number of sources to the individual per-band detections. On a recent run over the shallower VHS data, however, the mergeCoaddDetections.py task reads between roughly 250 and 9000 sources per band but merges them down to very low numbers, e.g. 2 to 4 sources. It is even culling all the sky sources. For example:
root INFO: Loading config overrride file '/home/ir-shir1/rds/rds-iris-ip005/ras81/lsst_stack/stack/miniconda3-py37_4.8.2-cb4e2dc/Linux64/obs_vista/21.0.0-1/config/mergeCoaddDetections.py'
CameraMapper INFO: Loading exposure registry from /rds/project/rds-rPTGgs6He74/ras81/lsst-ir-fusion/dmu4/dmu4_VHS/data/registry.sqlite3
root INFO: Running: /home/ir-shir1/rds/rds-iris-ip005/ras81/lsst_stack/stack/miniconda3-py37_4.8.2-cb4e2dc/Linux64/pipe_tasks/21.0.0+44ca056b81/bin/mergeCoaddDetections.py ../data --rerun coaddPhot --id filter=VISTA-Z^VISTA-Y^VISTA-J^VISTA-H^VISTA-Ks^HSC-G^HSC-R^HSC-I^HSC-Z^HSC-Y tract=8524 patch=3,5
conda.common.io INFO: overtaking stderr and stdout
conda.common.io INFO: stderr and stdout yielding back
mergeCoaddDetections INFO: Read 440 sources for filter VISTA-J: DataId(initialdata={'filter': 'VISTA-J', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections INFO: Read 265 sources for filter VISTA-H: DataId(initialdata={'filter': 'VISTA-H', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections INFO: Read 269 sources for filter VISTA-Ks: DataId(initialdata={'filter': 'VISTA-Ks', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections INFO: Read 7291 sources for filter HSC-G: DataId(initialdata={'filter': 'HSC-G', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections INFO: Read 5463 sources for filter HSC-R: DataId(initialdata={'filter': 'HSC-R', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections INFO: Read 8845 sources for filter HSC-I: DataId(initialdata={'filter': 'HSC-I', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections INFO: Read 5326 sources for filter HSC-Z: DataId(initialdata={'filter': 'HSC-Z', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections INFO: Read 4396 sources for filter HSC-Y: DataId(initialdata={'filter': 'HSC-Y', 'tract': 8524, 'patch': '3,5'}, tag=set())
mergeCoaddDetections.skyObjects INFO: Added 0 of 100 requested sky sources (0%)
mergeCoaddDetections INFO: Merged to 4 sources
mergeCoaddDetections INFO: Culled 78639 of 90880 peaks
mergeCoaddDetections INFO: Wrote merged catalog: DataId(initialdata={'tract': 8524, 'patch': '3,5'}, tag=set())
The only config parameter I have set is the priority list:
config.priorityList = ["HSC-I","HSC-R","HSC-Z","HSC-Y","HSC-G","VISTA-Z","VISTA-Y","VISTA-J","VISTA-H","VISTA-Ks" ]
This is the HSC order with the new VISTA bands appended, loosely in order of signal-to-noise ratio. Is there an obvious reason why the sources would be culled so drastically? This results in a final catalogue of 4 rows for the measurement and forced catalogues, where I expected them to be at least as large as the original HSC catalogues.
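In case it is relevant, my understanding is that the peak culling is controlled by the cullPeaks sub-config of MergeDetectionsConfig. As an experiment I was considering relaxing it with an override like the sketch below (parameter names taken from my reading of pipe_tasks 21.0 — please correct me if I have them wrong):

```python
# Sketch of a mergeCoaddDetections.py config override to relax peak culling.
# Parameter names assumed from pipe_tasks MergeDetectionsConfig/CullPeaksConfig.

# Keep a peak if it was detected in at least one band (default is 2),
# which should effectively disable the multi-band culling criterion.
config.cullPeaks.nBandsSufficient = 1
```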
Is the definition of a source a contiguous set of detected pixels? Could it be that the regions of detected pixels in one of the bands are all connected by adjacent pixels, due to a possible issue with detection? Judging by the output above there should still be large numbers of peaks, but these don't seem to correspond to rows in the final measurement catalogues, which instead seem to correspond to sources.
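To make my mental model of this concrete: if a source (footprint) really is a connected component of the detection mask, then a thin bridge of spuriously detected pixels would collapse two blobs into one source while leaving both peaks in place. A toy illustration, using scipy.ndimage.label as a stand-in for the stack's Footprint machinery:

```python
import numpy as np
from scipy import ndimage

# Two separate blobs of detected pixels -> two labelled regions.
mask = np.zeros((5, 9), dtype=bool)
mask[2, 1:3] = True   # blob A
mask[2, 6:8] = True   # blob B
_, n = ndimage.label(mask)
print(n)  # 2 separate footprints

# A thin bridge of detected pixels connects them -> one region,
# even though both peaks are still present inside it.
mask[2, 3:6] = True
_, n = ndimage.label(mask)
print(n)  # 1 merged footprint containing both peaks
```

If that picture is right, it would explain how tens of thousands of peaks can survive while the number of sources (rows) drops to a handful.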
Many thanks for any interest,
Raphael.