"Ambiguous calibration lookup for bias" in single frame processing using v26.0


I was using v26.0 to build master bias/flat frames and process some single frame data.
The master bias/flat were built nightly:

pipetask --long-log run \
    -b $BUTLER_REPO \
    -i WFST/raw/all,WFST/calib \
    -o DATA/bias_$current_date \
    -p $CP_PIPE_DIR/pipelines/_ingredients/cpBias.yaml \
    -d "instrument='WFST' AND exposure.observation_type='bias' AND exposure.day_obs>=$current_date AND exposure.day_obs<$next_date AND exposure NOT IN (19642, 21646)" \
    --register-dataset-types -c isr:doDefect=False -j 8

and certified for the corresponding timespan:

butler certify-calibrations $BUTLER_REPO DATA/bias_$current_date WFST/calib --begin-date $current_date --end-date $next_date bias

Then, when I continued to the single frame step, I got an error complaining about an “ambiguous lookup for bias in timespan [2024-02-12T23:59:28.000000, 2024-02-13T00:02:28.000000)”:

lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/liubinyang/lsst_stack_v26_0_0/conda/envs/lsst-scipipe-7.0.1/share/eups/Linux64/ctrl_mpexec/g218a3a8f53+ca4789321c/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 199, in run
    if (qgraph := script.qgraph(pipelineObj=pipeline, **kwargs, show=show)) is None:
  File "/home/liubinyang/lsst_stack_v26_0_0/conda/envs/lsst-scipipe-7.0.1/share/eups/Linux64/ctrl_mpexec/g218a3a8f53+ca4789321c/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 210, in qgraph
    qgraph = f.makeGraph(pipelineObj, args)
  File "/home/liubinyang/lsst_stack_v26_0_0/conda/envs/lsst-scipipe-7.0.1/share/eups/Linux64/ctrl_mpexec/g218a3a8f53+ca4789321c/python/lsst/ctrl/mpexec/cmdLineFwk.py", line 622, in makeGraph
    qgraph = graphBuilder.makeGraph(
  File "/home/liubinyang/lsst_stack_v26_0_0/conda/envs/lsst-scipipe-7.0.1/share/eups/Linux64/pipe_base/g8798d61f7d+6612571a14/python/lsst/pipe/base/graphBuilder.py", line 1831, in makeGraph
  File "/home/liubinyang/lsst_stack_v26_0_0/conda/envs/lsst-scipipe-7.0.1/share/eups/Linux64/pipe_base/g8798d61f7d+6612571a14/python/lsst/pipe/base/graphBuilder.py", line 1458, in resolveDatasetRefs
    prereq_ref = registry.findDataset(
  File "/home/liubinyang/lsst_stack_v26_0_0/conda/envs/lsst-scipipe-7.0.1/share/eups/Linux64/daf_butler/gaa4f23791d+8ca47f5a75/python/lsst/daf/butler/_registry_shim.py", line 179, in findDataset
    return self._registry.findDataset(
  File "/home/liubinyang/lsst_stack_v26_0_0/conda/envs/lsst-scipipe-7.0.1/share/eups/Linux64/daf_butler/gaa4f23791d+8ca47f5a75/python/lsst/daf/butler/registries/sql.py", line 532, in findDataset
    raise LookupError(
LookupError: Ambiguous calibration lookup for bias in collections ('WFST/raw/all', 'WFST/calib', 'refcats', 'skymaps') with timespan [2024-02-12T23:59:28.000000, 2024-02-13T00:02:28.000000).

I have checked the continuity of the certified bias timespans:

from lsst.daf.butler import Butler, Timespan

butler = Butler('/home/pubdata/WFST_coadd_test/butler_2402cosmos_t2', writeable=True)

# List every bias certification for detector 1 in the calib collection
qda = butler.registry.queryDatasetAssociations
coll = "WFST/calib"
biases = [x for x in qda("bias", collections=coll) if x.ref.dataId["detector"] == 1]

and it seems that the timespans are contiguous from 2024-02-12 00:00:00 to 2024-02-16 00:00:00:

... ...
DatasetAssociation(ref=DatasetRef(DatasetType('bias', {instrument, detector}, ExposureF, isCalibration=True), {instrument: 'WFST', detector: 1}, run='DATA/bias_20240212/20240628T162413Z', id=feba0086-1388-4a2b-b544-4b46d6413dc0), collection='WFST/calib', timespan=Timespan(begin=astropy.time.Time('2024-02-12 00:00:00.000000', scale='tai', format='iso'), end=astropy.time.Time('2024-02-13 00:00:00.000000', scale='tai', format='iso'))), 

DatasetAssociation(ref=DatasetRef(DatasetType('bias', {instrument, detector}, ExposureF, isCalibration=True), {instrument: 'WFST', detector: 1}, run='DATA/bias_20240213/20240628T164443Z', id=471e1b27-18c9-4d79-a531-3125b7809b87), collection='WFST/calib', timespan=Timespan(begin=astropy.time.Time('2024-02-13 00:00:00.000000', scale='tai', format='iso'), end=astropy.time.Time('2024-02-16 00:00:00.000000', scale='tai', format='iso'))),
... ...
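As a quick sanity check independent of the stack, validity ranges like the ones printed above can be scanned for gaps and overlaps once reduced to plain `(begin, end)` pairs. This is a minimal sketch with hand-copied ranges mirroring the output (the `check_continuity` helper is mine, not a butler API):

```python
from datetime import datetime

# (begin, end) validity ranges as printed above (TAI, half-open intervals)
ranges = [
    (datetime(2024, 2, 12, 0, 0, 0), datetime(2024, 2, 13, 0, 0, 0)),
    (datetime(2024, 2, 13, 0, 0, 0), datetime(2024, 2, 16, 0, 0, 0)),
]

def check_continuity(ranges):
    """Return ('gap'|'overlap', boundary_start, boundary_end) for each defect."""
    problems = []
    ordered = sorted(ranges)
    for (b0, e0), (b1, e1) in zip(ordered, ordered[1:]):
        if e0 < b1:
            problems.append(("gap", e0, b1))
        elif e0 > b1:
            problems.append(("overlap", b1, e0))
    return problems

print(check_continuity(ranges))  # [] -> contiguous: no gaps, no overlaps
```

An empty result confirms what the query output shows: the certified ranges themselves are contiguous, so the problem is not a gap or overlap between biases.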

So, I do not understand why it is saying the master bias has an issue from 2024-02-12T23:59:28 to 2024-02-13T00:02:28.
Before this run, I generated monthly master bias/flat and that worked well. Could someone please give me some hints about which step I missed, or a possible workaround? Thanks in advance!

If I’m interpreting this message correctly, the exposure to be processed in your single frame step is a 180 s exposure starting at 2024-02-12T23:59:28. This exposure therefore starts within the validity range of your first bias (which expires at 2024-02-13T00:00:00, 32 seconds after the start of the exposure) and ends after the validity start of the second bias. That overlap with two certified biases is what makes the lookup ambiguous.
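The ambiguity can be seen with plain interval arithmetic, using the times from the error message and the query output above (a stack-free sketch; `overlaps` is my helper, not a butler function):

```python
from datetime import datetime

def overlaps(a, b):
    """Half-open intervals [begin, end) overlap iff each begins before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

# 180 s exposure that straddles the bias transition (from the error message)
exposure = (datetime(2024, 2, 12, 23, 59, 28), datetime(2024, 2, 13, 0, 2, 28))

# Certified validity ranges of the two biases (from the query output)
bias_1 = (datetime(2024, 2, 12, 0, 0, 0), datetime(2024, 2, 13, 0, 0, 0))
bias_2 = (datetime(2024, 2, 13, 0, 0, 0), datetime(2024, 2, 16, 0, 0, 0))

matches = [overlaps(exposure, b) for b in (bias_1, bias_2)]
print(matches)  # [True, True] -> two candidate biases, hence the ambiguous lookup
```

Because the exposure timespan intersects both validity ranges, the registry cannot pick a unique bias by find-first lookup.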
I don’t believe we have a command-line way to decertify a calibration yet. If I had to fix this situation in my own calibration collections, I’d abandon this problematic one and start again, recertifying the two biases such that the transition from the first to the second happens when no exposure is active: either 2024-02-12T23:59:27 or 2024-02-13T00:02:29 for this situation, depending on whether this exposure should be processed with the first or the second bias.
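The two candidate boundaries above follow directly from the exposure's timespan, so they can be computed rather than eyeballed. A sketch (the shifted times would then feed the `--begin-date`/`--end-date` arguments of `butler certify-calibrations` when recertifying):

```python
from datetime import datetime, timedelta

# Exposure that straddles the old 00:00:00 bias transition
exp_begin = datetime(2024, 2, 12, 23, 59, 28)
exp_end = datetime(2024, 2, 13, 0, 2, 28)

# Move the transition just outside the exposure's timespan, one second clear:
boundary_if_second_bias = exp_begin - timedelta(seconds=1)  # exposure falls wholly after -> uses second bias
boundary_if_first_bias = exp_end + timedelta(seconds=1)     # exposure falls wholly before -> uses first bias

print(boundary_if_second_bias.isoformat())  # 2024-02-12T23:59:27
print(boundary_if_first_bias.isoformat())   # 2024-02-13T00:02:29
```

Either choice leaves every exposure fully inside exactly one validity range, so the find-first lookup becomes unambiguous again.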


Thanks Chris!
You are correct about the issue: I should avoid a master bias expiring at 00:00:00 while an exposure is still in progress. I will decertify and re-certify that timespan; I think that should solve the problem. :smiley: