Issues with ingestCalib.py using DECam images

I think K-T is correct. v21 is too old, unfortunately, and you will need to set up a recent weekly or wait for v22 (soon!). You may be able to get past this particular roadblock by setting up a local clone of ap_pipe along with cp_pipe so it finds the correct pipeline files to import, but I can pretty much guarantee you’ll hit some other middleware (Butler-related) problem if you try that route.
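For reference, installing and setting up a recent weekly with eups typically looks something like this (a sketch only: the loadLSST.bash path and the weekly tag are assumptions, so adjust them to your installation):

source $LSST_HOME/loadLSST.bash   # assumes a newinstall.sh-based installation; use your install directory
eups distrib install -t w_2021_27 lsst_distrib
setup lsst_distrib -t w_2021_27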


Okay :slight_smile: thank you both for your input. I was able to install the newest weekly version of the LSST Science Pipelines :D, and while following the draft guide I ran into a FileNotFoundError:

(lsst-scipipe-0.6.0) jahumada@leftraru1:/mnt/flock/jahumada/data_hits$ pipetask run -b . -i DECam/raw/all
-o DECam/raw/crosstalk-sourcelss -p CP_PIPE_DIR/pipelines/DarkEnergyCamera/RunIsrForCrosstalkSources.yaml
Traceback (most recent call last):
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/bin/pipetask", line 29, in <module>
sys.exit(main())
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/pipetask.py", line 43, in main
return cli()
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 102, in run
qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 145, in qgraph
qgraph = f.makeGraph(pipelineObj, args)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cmdLineFwk.py", line 550, in makeGraph
qgraph = graphBuilder.makeGraph(pipeline, collections, run, args.data_query, metadata=metadata)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 907, in makeGraph
return scaffolding.makeQuantumGraph(metadata=metadata)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 805, in makeQuantumGraph
graph = QuantumGraph({task.taskDef: task.makeQuantumSet() for task in self.tasks}, metadata=metadata)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 805, in <dictcomp>
graph = QuantumGraph({task.taskDef: task.makeQuantumSet() for task in self.tasks}, metadata=metadata)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 359, in makeQuantumSet
return set(q.makeQuantum() for q in self.quanta.values())
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 359, in <genexpr>
return set(q.makeQuantum() for q in self.quanta.values())
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 260, in makeQuantum
helper.adjust_in_place(self.task.taskDef.connections, self.task.taskDef.label, self.dataId)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/connections.py", line 699, in adjust_in_place
adjusted_inputs_by_connection, adjusted_outputs_by_connection = connections.adjustQuantum(
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/connections.py", line 582, in adjustQuantum
raise FileNotFoundError(
FileNotFoundError: Not enough datasets (0) found for non-optional connection overscan.camera (camera) with minimum=1 for quantum data ID {instrument: 'DECam', detector: 1, exposure: 288849, ...}.

I was able to run the ingest-raws of the calib and raw images with no problem, so I don’t know why this error is raised. I also wasn’t aware of the quantum data ID concept :0.

I would advise against working inside your butler repository. You don’t want to delete that gen3.sqlite3 file by mistake, and you will have to remind yourself which files you wrote and which files the butler wrote.

Looks like your copy and paste missed off some important parts of the command line.

This is caused by you not providing a collection that includes a camera definition. Did you run butler write-curated-calibrations REPO DECam? Your input collection only seems to include raw files, but other datasets are required to run the pipeline. Can you show us what butler query-collections REPO gives you?

Okay! I will work outside the butler repository from now on :). And yes, the -p Traceback was a copy-and-paste typo when I pasted the error together with the command line; I have edited it, sorry for that [the command line above is now complete]. I did run butler write-curated-calibrations . lsst.obs.decam.DarkEnergyCamera inside the REPO directory.
The butler query-collections REPO gives:

(lsst-scipipe-0.6.0) jahumada@leftraru3:/mnt/flock/jahumada$ butler query-collections /mnt/flock/jahumada/data_hits/
                Name                     Type   
------------------------------------ -----------
DECam/calib                          CALIBRATION
DECam/calib/unbounded                RUN        
DECam/calib/curated/19700101T000000Z RUN        
DECam/calib/curated/20130115T013000Z RUN        
DECam/calib/curated/20130916T092600Z RUN        
DECam/calib/curated/20140117T012900Z RUN        
DECam/calib/curated/20141020T103000Z RUN        
DECam/calib/curated/20150105T011500Z RUN        
DECam/calib/curated/20131130T000000Z RUN        
DECam/raw/all                        RUN

As long as you update your input collections (-i DECam/raw/all,DECam/calib), the pipeline should run.

Use that as a second input collection for your pipetask run. The graph builder needs to know where to find the inputs so you need to tell it what raws to use and what calibrations to use. Your previous command only specified the raw location.

Ok :slight_smile: I used DECam/calib as a second input to the pipetask run and got the following:

(lsst-scipipe-0.6.0) [jahumada@leftraru4 jahumada]$ pipetask run -b /mnt/flock/jahumada/data_hits -i DECam/raw/all,DECam/calib -o DECam/raw/crosstalk-sources -p $CP_PIPE_DIR/pipelines/DarkEnergyCamera/RunIsrForCrosstalkSources.yaml
numexpr.utils INFO: Note: NumExpr detected 20 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
numexpr.utils INFO: NumExpr defaulting to 8 threads.
ctrl.mpexec.cmdLineFwk INFO: QuantumGraph contains 155682 quanta for 1 tasks, graph ID: '1625765004.8131242-14655'
Traceback (most recent call last):
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/bin/pipetask", line 29, in <module>
    sys.exit(main())
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/pipetask.py", line 43, in main
    return cli()
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 103, in run
    script.run(qgraphObj=qgraph, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/script/run.py", line 168, in run
    f.runPipeline(qgraphObj, taskFactory, args)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cmdLineFwk.py", line 624, in runPipeline
    preExecInit.initialize(graph,
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/preExecInit.py", line 90, in initialize
    self.initializeDatasetTypes(graph, registerDatasetTypes)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/preExecInit.py", line 137, in initializeDatasetTypes
    expected = self.butler.registry.getDatasetType(datasetType.name)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/registries/sql.py", line 391, in getDatasetType
    return self._managers.datasets[name].datasetType
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/registry/interfaces/_datasets.py", line 491, in __getitem__
    raise KeyError(f"Dataset type with name '{name}' not found.")
KeyError: "Dataset type with name 'packages' not found."

Perhaps there is a package I should declare before running pipetask? Thanks for your time :slight_smile:.

I think you just need to append --register-dataset-types to the end of your pipetask run command :crossed_fingers: This tells the Butler you intend to create (“register”) a new kind of output dataset it has not seen before.


Ok, thanks :slight_smile: adding the --register-dataset-types flag at the end of the pipetask command did avoid the error I was encountering :D. I have now been running pipetask run -b /mnt/flock/jahumada/data_hits -i DECam/raw/all,DECam/calib -o DECam/raw/crosstalk-sources -p $CP_PIPE_DIR/pipelines/DarkEnergyCamera/RunIsrForCrosstalkSources.yaml --register-dataset-types for 19 hours and unfortunately got an error (P.S. I am processing approximately 2,300 raw images):

Traceback (most recent call last):
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/datastores/fileDatastore.py", line 1021, in _write_in_memory_to_artifact
    formatter.write(inMemoryDataset)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/obs_base/21.0.0-60-g2baedfa+3512c75b65/python/lsst/obs/base/formatters/fitsExposure.py", line 266, in write
    inMemoryDataset.writeFitsWithOptions(outputPath, options=ps)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/afw/21.0.0-46-g27369044f+532d44eec1/python/lsst/afw/image/image/_fitsIoWithOptions.py", line 132, in exposureWriteFitsWithOptions
    self.writeFits(dest, **writeOptionDict)
lsst.pex.exceptions.wrappers.FitsError:
  File "src/fits.cc", line 1351, in void lsst::afw::fits::Fits::writeImage(const lsst::afw::image::ImageBase<PixelT>&, const lsst::afw::fits::ImageWriteOptions&, std::shared_ptr<const lsst::daf::base::PropertySet>, std::shared_ptr<const lsst::afw::image::Mask<int> >) [with T = float]
    cfitsio error (/mnt/flock/jahumada/data_hits/DECam/raw/crosstalk-sources/20210712T224845Z/overscanRaw/20150226/ct4m20150226t215115/overscanRaw_DECam_g_DECam_SDSS_c0001_4720_0_1520_0_ct4m20150226t215115_N15_DECam_raw_crosstalk-sources_20210712T224845Z.fits): error writing to FITS f$
cfitsio error stack:
  HIERARCH ARCHIVE_ID_PHOTOCALIB = 1 / archive ID for generic component 'PHOTOCALI
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH SKYWCS_ID =         3 / archive ID for generic component 'SKYWCS'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH ARCHIVE_ID_SKYWCS = 3 / archive ID for generic component 'SKYWCS'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH DETECTOR_ID =       4 / archive ID for generic component 'DETECTOR'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH ARCHIVE_ID_DETECTOR = 4 / archive ID for generic component 'DETECTOR'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH FILTER_ID =       131 / archive ID for generic component 'FILTER'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH ARCHIVE_ID_FILTER = 131 / archive ID for generic component 'FILTER'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH BORE-AIRMASS =   1.58
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH BORE-ROTANG =     90.
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH INSTRUMENT = 'DECam   '
 Error writing elements 1 thru 3374 of input data array (ffpclb).
 {0}
lsst::afw::fits::FitsError: 'cfitsio error (/mnt/flock/jahumada/data_hits/DECam/raw/crosstalk-sources/20210712T224845Z/overscanRaw/20150226/ct4m20150226t215115/overscanRaw_DECam_g_DECam_SDSS_c0001_4720_0_1520_0_ct4m20150226t215115_N15_DECam_raw_crosstalk-sources_20210712T224845Z.fits): error writing to FITS file (106) : Writing image
cfitsio error stack:
  HIERARCH ARCHIVE_ID_PHOTOCALIB = 1 / archive ID for generic component 'PHOTOCALI
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH SKYWCS_ID =         3 / archive ID for generic component 'SKYWCS'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH ARCHIVE_ID_SKYWCS = 3 / archive ID for generic component 'SKYWCS'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH DETECTOR_ID =       4 / archive ID for generic component 'DETECTOR'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH ARCHIVE_ID_DETECTOR = 4 / archive ID for generic component 'DETECTOR'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH FILTER_ID =       131 / archive ID for generic component 'FILTER'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH ARCHIVE_ID_FILTER = 131 / archive ID for generic component 'FILTER'
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH BORE-AIRMASS =   1.58
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH BORE-ROTANG =     90.
  Warning: the following keyword does not conform to the HIERARCH convention
  HIERARCH INSTRUMENT = 'DECam   '
  Error writing data buffer to file:
  /mnt/flock/jahumada/data_hits/DECam/raw/crosstalk-sources/20210712T224845Z/overs
  canRaw/20150226/ct4m20150226t215115/overscanRaw_DECam_g_DECam_SDSS_c0001_4720_0_
  1520_0_ct4m20150226t215115_N15_DECam_raw_crosstalk-sources_20210712T224845Z.fits
  Error writing elements 1 thru 3374 of input data array (ffpclb).
'


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/bin/pipetask", line 29, in <module>
    sys.exit(main())
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/pipetask.py", line 43, in main
    return cli()
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 103, in run
    script.run(qgraphObj=qgraph, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/script/run.py", line 168, in run
    f.runPipeline(qgraphObj, taskFactory, args)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cmdLineFwk.py", line 643, in runPipeline
    executor.execute(graph, butler)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/mpGraphExecutor.py", line 301, in execute
    self._executeQuantaInProcess(graph, butler)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/mpGraphExecutor.py", line 351, in _executeQuantaInProcess
    self.quantumExecutor.execute(qnode.taskDef, qnode.quantum, butler)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/singleQuantumExecutor.py", line 137, in execute
    self.runQuantum(task, quantum, taskDef, butler)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/singleQuantumExecutor.py", line 345, in runQuantum
    task.runQuantum(butlerQC, inputRefs, outputRefs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ip_isr/21.0.0-21-gc8894c7+b39e0bca51/python/lsst/ip/isr/isrTask.py", line 1053, in runQuantum
    butlerQC.put(outputs, outputRefs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/butlerQuantumContext.py", line 198, in put
    self._put(valuesAttribute, refs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/butlerQuantumContext.py", line 98, in _put
    butler.put(value, ref)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/core/utils.py", line 265, in inner
    return func(self, *args, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/_butler.py", line 939, in put
    self.datastore.put(obj, ref)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/core/utils.py", line 265, in inner
    return func(self, *args, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/datastores/fileDatastore.py", line 1638, in put
    storedInfo = self._write_in_memory_to_artifact(inMemoryDataset, ref)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/daf_butler/21.0.0-109-g75b85349+7e5b4c34a6/python/lsst/daf/butler/datastores/fileDatastore.py", line 1023, in _write_in_memory_to_artifact
    raise RuntimeError(f"Failed to serialize dataset {ref} of type {type(inMemoryDataset)} "
RuntimeError: Failed to serialize dataset overscanRaw@{instrument: 'DECam', detector: 46, exposure: 415219, ...}, sc=Exposure] (id=7b67bb3b-ba23-430d-ba7e-cbf5f99794ed) of type <class 'lsst.afw.image.exposure.ExposureF'> to location file:///mnt/flock/jahumada/data_hits/DECam/raw/crosstalk-sources/20210712T224845Z/overscanRaw/20150226/ct4m20150226t215115/overscanRaw_DECam_g_DECam_SDSS_c0001_4720_0_1520_0_ct4m20150226t215115_N15_DECam_raw_crosstalk-sources_20210712T224845Z.fits

Do you know why I get this? Also, I’m not sure what this pipetask command does, or what crosstalkSources means. If you know, please let me know :smiley:
I appreciate your time and answers :slight_smile:

We sometimes see errors like this if a disk fills up. Could that be the case here?
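A quick way to check is to look at the free space on the filesystem that holds the repo, for example (a plain shell check, using the mount point from your paths):

df -h /mnt/flock/jahumada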


The goal of this pipeline is to preprocess the set of input raw frames so they can be used to correct for inter-chip crosstalk (where a bright source on detector X shows up as a faint source on detector Y). This step applies the overscan correction to all the raw frames so that they can be supplied as an input to the full ISR processing. When the ISR processing is run for a given {exposure, detector} pair, it is given the overscanRaw frames from all detectors of the same exposure as the crosstalkSources input.
In any case, this effectively doubles the disk usage for the raw data (which now exist as both the raw and overscanRaw dataset types), which makes K-T’s suggestion fairly likely.
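If you want to see what has already been written, you can query for the new dataset type, for example (a hedged example: this is standard butler query-datasets usage, with the collection name and exposure ID taken from the traceback above):

butler query-datasets /mnt/flock/jahumada/data_hits overscanRaw --collections DECam/raw/crosstalk-sources --where "instrument='DECam' AND exposure=415219"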


Indeed, K-T’s suggestion is right! My disk got full, so I will delete some stuff :sweat_smile: Thanks!

Hi Christopher :slight_smile:, your description was really useful for understanding what I was doing! I was wondering if there is any documentation that goes deeper into this pre-processing. I understand that gen3 is still very new and there is probably not a lot of documentation available yet, but maybe it follows the same logic as the gen2 approach, so looking at the latter could be useful to a degree.

Unfortunately, there isn’t much documentation on this, as it’s a DECam-specific issue. The gen2 approach was to have the crosstalk code identify which additional detectors could be crosstalk sources, fetch them from the butler, and then apply the overscan correction as part of the crosstalk processing. That isn’t allowed in gen3 (all butler calls happen outside of the task’s run() method), so this approach was chosen to solve the issue.


Thanks for the response :slight_smile:.
I was able to run pipetask run to correct for crosstalk sources (it took 6 days!!).
The next step is to build nightly (or similar) bias and flat frames and certify them into a calib collection, but I’m confused as to where I should “separate” the calib images within the REPO directory. Here is the current structure of my calib directory:

/mnt/flock/jahumada/data_hits/DECam/calib # calib directory from REPO
|-- curated
|   |-- 19700101T000000Z # HIERARCH CALIBDATE
|   |   |-- crosstalk
|   |   |   |-- crosstalk_DECam_N10_DECam_calib_curated_19700101T000000Z.fits
|   |   |   |-- crosstalk_DECam_N11_DECam_calib_curated_19700101T000000Z.fits 
|   |   |   `-- ... # more images for all ccds
|   |   |-- defects
|   |   |   |-- defects_DECam_N10_DECam_calib_curated_19700101T000000Z.fits 
|   |   |   |-- defects_DECam_N11_DECam_calib_curated_19700101T000000Z.fits
|   |   |   `-- ... # more images for all ccds
|   |   `-- linearizer
|   |       |--linearizer_DECam_N10_DECam_calib_curated_19700101T000000Z.fits 
|   |       |--linearizer_DECam_N11_DECam_calib_curated_19700101T000000Z.fits 
|   |       `-- ... # more images for all ccds
|   |-- 20130115T013000Z
|   |   `-- defects -> # defect images
|   |-- 20130916T092600Z
|   |   `-- defects -> # defect images
|   |-- 20131130T000000Z
|   |   `-- defects -> # defect images
|   |-- 20140117T012900Z
|   |   `-- defects -> # defect images
|   |-- 20141020T103000Z
|   |   `-- defects -> # defect images
|   `-- 20150105T011500Z
|       `-- defects -> # defect images
`-- unbounded
    `-- camera 
        `-- camera_DECam_DECam_calib_unbounded.fits

Apparently, the calib images in the curated folder are separated into HIERARCH CALIBDATE folders, but I’m not sure why, or how I should rearrange the images for them to be built nightly (or similar). Additionally, I have some questions about what curated, unbounded, defects, and linearizer mean. If you know of any references that could help me understand, I would be very happy to check them!
Thanks for your time. :smiley:

PS: I had posted this issue in another topic (now flagged), but I figured it would be better to reply under this one!

What’s nice about the Butler is that it doesn’t care about the directory structure. Don’t move any files around! It’s entirely up to you which visits you choose to use for building each “master calibration” bias and flat frame. I find it easiest to go through my images manually and make visit lists for each night (or few nights). You can do this with queries such as

butler query-dimension-records REPO exposure  --where "instrument='DECam' AND exposure.observation_type='bias' AND exposure.exposure_time=0.0 AND exposure.day_obs > 20210101"
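A similar query finds the flats (a hedged example: it assumes the DECam flats are translated with observation_type='flat'; you can also add a filter constraint if you build per-filter lists):

butler query-dimension-records REPO exposure --where "instrument='DECam' AND exposure.observation_type='flat' AND exposure.day_obs > 20210101"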

Once you have your exposure/visit lists, you can construct a master bias, e.g.,

pipetask run -b REPO -p $CP_PIPE_DIR/pipelines/cpBias.yaml -i DECam/raw/all,DECam/calib -o u/USERNAME/biasGen1  -d "instrument='DECam' AND exposure IN (1, 2, 3, 4)" -c CONFIG-OPTIONS-HERE

Note that having collections named u/USERNAME/stuff is a convention, not a requirement.
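If you would rather not enumerate exposure IDs, the same kind of constraint used in the query above can go directly into the -d data query (a hedged sketch of the bias run; drop the -c option if you have no config overrides to apply):

pipetask run -b REPO -p $CP_PIPE_DIR/pipelines/cpBias.yaml -i DECam/raw/all,DECam/calib -o u/USERNAME/biasGen1 -d "instrument='DECam' AND exposure.observation_type='bias' AND exposure.day_obs=20150313" --register-dataset-types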

You need to do a similar pipeline run for each master calib you want to build. Do biases first and then flats, and be sure to add the output collection from building biases as an input into the runs for building flats. The final step is certifying each of the calibs you built to be valid for the timeframe you intend. It’s best to use a collection within the calib collection for this, e.g.,

butler certify-calibrations REPO u/USERNAME/biasGen1 DECam/calib/July2021Calibs --begin-date 2020-01-01 --end-date 2020-01-31 bias
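For completeness, the flat-building run mentioned above might look something like this (a hedged sketch: it assumes cp_pipe ships a cpFlat.yaml pipeline alongside cpBias.yaml, the exposure IDs are placeholders, and note that the bias output collection is added as an input); the result would then be certified with a certify-calibrations call like the one above, using flat as the dataset type:

pipetask run -b REPO -p $CP_PIPE_DIR/pipelines/cpFlat.yaml -i DECam/raw/all,DECam/calib,u/USERNAME/biasGen1 -o u/USERNAME/flatGen1 -d "instrument='DECam' AND exposure IN (5, 6, 7, 8)" --register-dataset-types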

Hopefully that’s enough to get you started! @czw is working hard on complete documentation for this process, so do stay tuned.


Thanks a lot for your response, Meredith! I was wondering if there are any options for pipetask run that could “continue” a run that was stopped due to computer-related issues. I’ve looked at the pipetask run options, and it’s not clear to me which one could do that, or whether any of them can. A pipetask run often stops (due to computer issues), and I find it a little inefficient to erase the crosstalk-sources folder and start again.
I’ve tried adding --skip-existing --extend-run, but it looks like they are not intended for that purpose :confused:
Thanks!

I think --skip-existing --extend-run should do what you describe. What is happening instead?

One idea: try not specifying input collections the second time you call pipetask run; once input collections are specified, they can’t be changed for subsequent reruns to the same output collection.


When I added --skip-existing --extend-run to the command line, I got:

  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/jahumada/.conda/envs/lsst-scipipe-0.6.0/lib/python3.8/site-packages/click/decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 102, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 145, in qgraph
    qgraph = f.makeGraph(pipelineObj, args)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/ctrl_mpexec/21.0.0-34-g2cbe45c+f487817123/python/lsst/ctrl/mpexec/cmdLineFwk.py", line 550, in makeGraph
    qgraph = graphBuilder.makeGraph(pipeline, collections, run, args.data_query, metadata=metadata)
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 904, in makeGraph
    scaffolding.resolveDatasetRefs(self.registry, collections, run, commonDataIds,
  File "/mnt/flock/jahumada/lsstw_v21_27/lsst-w.2021.27/scripts/stack/miniconda3-py38_4.9.2-0.6.0/Linux64/pipe_base/21.0.0-29-gf27b79e+8c09b75fa7/python/lsst/pipe/base/graphBuilder.py", line 718, in resolveDatasetRefs
    raise OutputExistsError(
lsst.pipe.base.graphBuilder.OutputExistsError: Quantum {instrument: 'DECam', detector: 56, exposure: 411752, ...} of task with label 'overscan' has some outputs that exist ([DatasetRef(DatasetType('overscanRaw', {band, instrument, detector, physical_filter, exposure}, Exposure), {instrument: 'DECam', detector: 56, exposure: 411752, ...}, id=b2d0c20c-fb4c-4e57-b1bb-cd360763cc01, run='DECam/raw/crosstalk-sources/20210811T065927Z')]) and others that don't ([DatasetRef(DatasetType('overscan_metadata', {band, instrument, detector, physical_filter, exposure}, PropertySet), {instrument: 'DECam', detector: 56, exposure: 411752, ...})]), with no metadata output, and clobbering outputs was not enabled.

I will try not specifying the input collections next time, and will give you updates once I try it out. For now, I was able to complete the pipetask run and was wondering about your nightly builds:

In my case I ran:

butler query-dimension-records /mnt/flock/jahumada/data_hits exposure --where "instrument='DECam' AND exposure.observation_type='bias' AND exposure.exposure_time=0.0 AND exposure.day_obs=20150313"

What could it mean that this command yields no results? Do you know of a command option that could show me the available bias exposures without building anything yet? Something like --show --where "instrument='DECam' AND exposure.observation_type='bias' AND exposure.exposure_time=0.0"

Thanks for your time :slight_smile:

and clobbering outputs was not enabled.

The error message tells you how to fix it. Looking at the help you will find --clobber-outputs.
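Putting the suggestions in this thread together, a resumed run might look something like the following (a hedged sketch: it omits -i because, per the advice above, the input collections are already attached to the existing output collection):

pipetask run -b /mnt/flock/jahumada/data_hits -o DECam/raw/crosstalk-sources -p $CP_PIPE_DIR/pipelines/DarkEnergyCamera/RunIsrForCrosstalkSources.yaml --extend-run --skip-existing --clobber-outputs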

That is the right query fragment to find bias observations. You shouldn’t also need to specify the exposure time.

Are you sure that you ingested some raw bias exposures? Maybe the biases were taken on different days?
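One way to check is to drop the day_obs and exposure_time constraints and list every bias exposure the registry knows about, e.g.:

butler query-dimension-records /mnt/flock/jahumada/data_hits exposure --where "instrument='DECam' AND exposure.observation_type='bias'"

If that also returns nothing, the bias frames were most likely never ingested.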