Reference image’s WCS compares unequal to the current image’s WCS

Hello,

I am getting an error about the reference image’s WCS not being equal to the current image’s WCS, and I was told it may be a long-standing bug.

I am running multiBandDriver on pdr2_wide HSC data into which I added fake sources.
I am using LSST pipelines 23_0_0 and running with doPropagateFlags=False in the measureCoaddSources task.
It fails at the forcedPhotCoadd task with the attached traceback.
traceback (3.4 KB)
Is this really a bug, or something on my side?

Thanks,
Maxime

The traceback doesn’t have a file extension, so it has to be downloaded before it can be seen. Here it is inline for others to comment on:

Traceback (most recent call last):
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/ctrl_pool/23.0.0+1611dd45d2/python/lsst/ctrl/pool/pool.py", line 112, in wrapper
    return func(*args, **kwargs)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/ctrl_pool/23.0.0+1611dd45d2/python/lsst/ctrl/pool/pool.py", line 1069, in run
    while not menu[command]():
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/ctrl_pool/23.0.0+1611dd45d2/python/lsst/ctrl/pool/pool.py", line 239, in wrapper
    return func(*args, **kwargs)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/ctrl_pool/23.0.0+1611dd45d2/python/lsst/ctrl/pool/pool.py", line 1087, in reduce
    result = self._processQueue(context, func, [(index, data)], *args, **kwargs)[0]
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/ctrl_pool/23.0.0+1611dd45d2/python/lsst/ctrl/pool/pool.py", line 546, in _processQueue
    return self._reduceQueue(context, None, func, queue, *args, **kwargs)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/ctrl_pool/23.0.0+1611dd45d2/python/lsst/ctrl/pool/pool.py", line 572, in _reduceQueue
    resultList = [func(self._getCache(context, i), data, *args, **kwargs) for i, data in queue]
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/ctrl_pool/23.0.0+1611dd45d2/python/lsst/ctrl/pool/pool.py", line 572, in <listcomp>
    resultList = [func(self._getCache(context, i), data, *args, **kwargs) for i, data in queue]
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/pipe_drivers/23.0.0+476726adcc/python/lsst/pipe/drivers/multiBandDriver.py", line 464, in runForcedPhot
    self.forcedPhotCoadd.runDataRef(dataRef)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/meas_base/23.0.0+1c9213783a/python/lsst/meas/base/forcedPhotCoadd.py", line 297, in runDataRef
    forcedPhotResult = self.run(measCat, exposure, refCat, refWcs, exposureId=exposureId)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/meas_base/23.0.0+1c9213783a/python/lsst/meas/base/forcedPhotCoadd.py", line 328, in run
    self.measurement.run(measCat, exposure, refCat, refWcs, exposureId=exposureId)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/meas_base/23.0.0+1c9213783a/python/lsst/meas/base/forcedMeasurement.py", line 369, in run
    self.callMeasure(measChildRecord, exposure, refChildRecord, refWcs,
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/meas_base/23.0.0+1c9213783a/python/lsst/meas/base/baseMeasurement.py", line 337, in callMeasure
    self.doMeasurement(plugin, measRecord, *args, **kwds)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/meas_base/23.0.0+1c9213783a/python/lsst/meas/base/baseMeasurement.py", line 367, in doMeasurement
    plugin.measure(measRecord, *args, **kwds)
  File "/work/mpaillassa/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/meas_modelfit/23.0.0+2ea42a5d58/python/lsst/meas/modelfit/cmodel/cmodelContinued.py", line 104, in measure
    raise lsst.meas.base.FatalAlgorithmError(
lsst.pex.exceptions.wrappers.FatalAlgorithmError: CModel forced measurement currently requires the measurement image to have the same Wcs as the reference catalog (this is a temporary limitation).
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 2
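
For anyone who wants to poke at this outside the driver, here is a minimal Gen2 sketch of the comparison that fails. The repository path and data ID are placeholders, and (if I recall correctly) forcedPhotCoadd takes its reference WCS from the skymap tract:

from lsst.daf.persistence import Butler

butler = Butler("/path/to/rerun")  # placeholder repository path
dataId = dict(tract=9813, patch="4,4", filter="HSC-I")  # placeholder data ID

# WCS attached to the coadd exposure being measured
exposure = butler.get("deepCoadd_calexp", dataId)
expWcs = exposure.getWcs()

# Reference WCS: the tract WCS from the skymap
skyMap = butler.get("deepCoadd_skyMap")
refWcs = skyMap[dataId["tract"]].getWcs()

# CModel forced measurement requires these two to compare equal
print("Strictly equal:", expWcs == refWcs)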

Yes, this is a known issue: DM-15181, though it also seems like something similar happened earlier and was fixed on DM-10105. We haven’t seen it at all recently, so it fell off everyone’s radar.

There is a commit on the ticket branch for the former (DM-15181) that you may be able to check out, rebase, and build to work around the problem.

Oh, I now recall that this gets triggered when the forced measurement is run on a different machine (different architecture?) than the one that originally created the reference WCS.
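
If that is what is happening here, the two WCS should still agree on the sky even though the strict equality check CModel applies (per the error message) fails. A quick way to test that, using the expWcs/refWcs pair from the sketch above; compareWcs is just a hypothetical helper name:

import lsst.geom
from lsst.afw.geom.utils import wcsAlmostEqualOverBBox

def compareWcs(expWcs, refWcs, bbox):
    # Strict comparison, as CModel does per the error message
    print("Strictly equal:", expWcs == refWcs)
    # Tolerant comparison over the bounding box: True here supports
    # the "same WCS on the sky, different bits" explanation
    print("Agree on the sky:",
          wcsAlmostEqualOverBBox(expWcs, refWcs, bbox,
                                 maxDiffSky=0.001*lsst.geom.arcseconds,
                                 maxDiffPix=0.001))

Call it as compareWcs(expWcs, refWcs, exposure.getBBox()) with the objects from the earlier snippet.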