Error in single frame processing

Hi, I am following @lskelvin’s notebook to process DECam data via the LSST pipeline (link here: Merian Data Processing Using the LSST Science Pipelines - HackMD). It has been super useful so far, and things were pretty smooth until step 5.1, single frame processing. I am stuck trying to run these lines:

LOGFILE=$LOGDIR/demi_step1.log; \
date | tee $LOGFILE; \
pipetask --long-log run --register-dataset-types -j 12 \
-b $REPO --instrument lsst.obs.decam.DarkEnergyCamera \
-i DECam/raw/all,refcats,DECam/calib/demi \
-o $OUTPUT \
-p $DRP_PIPE_DIR/pipelines/DECam/DRP-Merian.yaml \
-d "instrument='DECam' AND $DATAQUERY" \
2>&1 | tee -a $LOGFILE; \
date | tee -a $LOGFILE

and I get the error message below. I googled it and searched the Community, but I haven’t found anything related to it. Any help will be much appreciated.
The paths here are basically the same as Lee’s, with the “merian” directory replaced by “demi”; my DATAQUERY selects science exposures between 2012/11/18 and 2012/11/29.

ERROR 2022-07-26T16:28:37.644-04:00 lsst.daf.butler.cli.utils ()( - Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/ctrl_mpexec/g269e72b56f+15f56307d2/python/lsst/ctrl/mpexec/cli/cmd/", line 130, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/ctrl_mpexec/g269e72b56f+15f56307d2/python/lsst/ctrl/mpexec/cli/script/", line 187, in qgraph
    qgraph = f.makeGraph(pipelineObj, args)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/ctrl_mpexec/g269e72b56f+15f56307d2/python/lsst/ctrl/mpexec/", line 603, in makeGraph
    qgraph = graphBuilder.makeGraph(
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 1199, in makeGraph
    scaffolding = _PipelineScaffolding(pipeline, registry=self.registry)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 538, in __init__
    datasetTypes = PipelineDatasetTypes.fromPipeline(pipeline, registry=registry)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 1172, in fromPipeline
    raise ValueError(
ValueError: {'overscanRaw'} marked as both prerequisites and outputs
Overriding default configuration file with /home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/dustmaps_cachedata/g41a3ec361e+ac198e9f13/config/.dustmapsrc
forced_src inputs
 {'scarletModels', 'refWcs', 'refCat', 'exposure', 'refCatInBand'}
mar 26 lug 2022, 16.28.38, EDT

I am pretty sure the issue here is that you cannot just run the entire DRP-Merian.yaml at once. Instead, it needs to be done in stages, or steps. This is actually documented in the main pipeline that your pipeline imports, it’s just a bit hard to find :smile: see lines 5-7 in the DECam/DRP.yaml “ingredients” file on GitHub.

You can fix this by running each “step” one at a time, e.g., -p $DRP_PIPE_DIR/pipelines/DECam/DRP-Merian.yaml#step0, for starters.
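For example, adapting your command above, the first invocation would look something like this (everything else unchanged, and then repeated with #step1, #step2, etc., reusing the same -o $OUTPUT collection):

pipetask --long-log run --register-dataset-types -j 12 \
-b $REPO --instrument lsst.obs.decam.DarkEnergyCamera \
-i DECam/raw/all,refcats,DECam/calib/demi \
-o $OUTPUT \
-p $DRP_PIPE_DIR/pipelines/DECam/DRP-Merian.yaml#step0 \
-d "instrument='DECam' AND $DATAQUERY"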


Hi Demetra, lovely to hear from you again! Yes, @mrawls is correct: taking a fresh look at your error here, it looks like you need to run the step0 subset before being able to run step1. Step 0 is a DECam-specific step that runs the isrForCrosstalkSources task. This task takes your raw exposures and information about the camera and produces the overscanRaw dataset type.

In my notes file linked above, this step is discussed in Section 3.4 (after construction of the master biases, and before construction of the master flats). If you did already run the step0 subset, then perhaps the fix here is simply to include the collection containing these overscanRaw dataset types as an input in your command above. Assuming that you followed a similar setup to the notes file, these could live in a collection named something like DECam/calib/demi/crosstalk.
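If you’re unsure whether those step0 outputs exist, you could query the butler directly, for example (assuming the collection name above):

butler query-datasets $REPO overscanRaw --collections DECam/calib/demi/crosstalk

which should list the overscanRaw datasets if step0 completed successfully.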


PS - I can’t help but notice that your list of input collections may also be missing some other required inputs for onward data reductions, such as information about defects or sky maps. These live in collections established when DECam is first set up in your repo, such as DECam/calib/curated/XXX and DECam/calib/unbounded. You could add all of these as comma-separated inputs when you attempt to run the step 1 subset, for example:

-i DECam/raw/all,refcats,DECam/calib/demi,DECam/calib/demi/crosstalk,\

Alternatively, an easier approach might be to group all of these collections together into a parent CHAINED collection, which then lets you pass that single parent collection as the input thereafter. To construct a CHAINED collection (section 4 in my notes file), you can run something like this:

butler collection-chain $REPO $INPUT $CHILDREN

This then allows you to replace the -i section in all subsequent data processing with this:

-i DECam/defaults/demi

The science pipelines should then be able to find all the inputs they need for step 1+ data reductions.
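As a concrete sketch, with $INPUT set to the new parent collection name and the child names assumed to follow the layout above (adjust to your repo):

butler collection-chain $REPO DECam/defaults/demi \
DECam/raw/all refcats DECam/calib/demi DECam/calib/demi/crosstalk \
DECam/calib/unbounded skymaps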


Thank you both… but I still get the same error message.
I had run step 0, and I have all the collections that @lskelvin mentioned. I did not include some of them because I misinterpreted something I read, but including them didn’t change the error message.
And I know I could have used a CHAINED collection, but I wanted to list all of them for the sake of clarity. The notebook is indeed very clear, so following the various steps has been straightforward.
I am quite puzzled, as everything seems to be where it is supposed to be, but apparently something is wrong.

Ah, that’s a shame! And did you try only running the step 1 subset as Meredith suggested above? I.e.:

-p $DRP_PIPE_DIR/pipelines/DECam/DRP-Merian.yaml#step1

From your command above it looks like you’re trying to run the entire DRP-Merian.yaml pipeline file, which will likely also lead to an error such as this.


Hi, I did it and I get a different error message:

ERROR 2022-07-27T15:21:29.527-04:00 lsst.daf.butler.cli.utils ()( - Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/ctrl_mpexec/g269e72b56f+15f56307d2/python/lsst/ctrl/mpexec/cli/cmd/", line 130, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/ctrl_mpexec/g269e72b56f+15f56307d2/python/lsst/ctrl/mpexec/cli/script/", line 187, in qgraph
    qgraph = f.makeGraph(pipelineObj, args)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/ctrl_mpexec/g269e72b56f+15f56307d2/python/lsst/ctrl/mpexec/", line 603, in makeGraph
    qgraph = graphBuilder.makeGraph(
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 1227, in makeGraph
    return scaffolding.makeQuantumGraph(metadata=metadata, datastore=self.datastore)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 1088, in makeQuantumGraph
    qset = task.makeQuantumSet(unresolvedRefs=self.unfoundRefs, datastore_records=datastore_records)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 484, in makeQuantumSet
    raise exc
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 462, in makeQuantumSet
    tmpQuanta = q.makeQuantum(datastore_records)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 332, in makeQuantum
    helper.adjust_in_place(self.task.taskDef.connections, self.task.taskDef.label, self.dataId)
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 750, in adjust_in_place
    adjusted_inputs_by_connection, adjusted_outputs_by_connection = connections.adjustQuantum(
  File "/home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/pipe_base/ge25eb00a14+a59d6d83de/python/lsst/pipe/base/", line 634, in adjustQuantum
    raise FileNotFoundError(
FileNotFoundError: Not enough datasets (0) found for non-optional connection calibrate.photoRefCat (ps1_pv3_3pi_20170110) with minimum=1 for quantum data ID {instrument: 'DECam', detector: 1, visit: 155556, ...}.
Overriding default configuration file with /home/ddecicco/lsstw_v22_26/lsst-w.2022.26/scripts/stack/miniconda3-py38_4.9.2-4.0.1/Linux64/dustmaps_cachedata/g41a3ec361e+ac198e9f13/config/.dustmapsrc

I found the same error mentioned here: Gen3 ProcessCcd with DECam - #4 by Paula, so I guess my Pan-STARRS reference catalog covers too small a region of sky?
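In case it helps anyone else hitting this: the coverage of the reference catalog can be checked by listing the shards actually ingested in the repo, e.g.:

butler query-datasets $REPO ps1_pv3_3pi_20170110 --collections refcats

and comparing their data IDs against the sky positions of the visits being processed.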
I’ll try to download a larger region and post updates here, so that other people can hopefully benefit from them.
