How to run ProcessCcdTask via runDataRef

Hello,
I would like to run ProcessCcdTask for a single dataId obtained through a Butler instance. I do not want to use parseAndRun or parse any arguments as if I were using the command line. A naive attempt looks like this:

# butler is an lsst.daf.persistence.Butler instance; task is a ProcessCcdTask instance
ref = butler.dataRef(datasetType="raw", dataId=dataId)
task.runDataRef(ref)

but this does not work as the code is unable to find the required calibs (note that it works fine using the command line task parseAndRun approach). Can anyone advise?

Many thanks,
Dan Prole,
Postdoctoral Researcher, Macquarie University

You need to instantiate the butler with the path to the calibrations repo as a parameter. E.g.:

butler = Butler("/path/to/data", calibRoot="/path/to/calibs")
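
A slightly fuller version with the import and the dataRef lookup from the original post (the dataId keys are illustrative):

from lsst.daf.persistence import Butler

butler = Butler("/path/to/data", calibRoot="/path/to/calibs")
ref = butler.dataRef("raw", dataId=dict(visit=1, ccd=0))  # dataId keys are illustrative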

Hi Paul,

Thanks for your response. This has led me to another error:

from lsst.pipe.tasks.processCcd import ProcessCcdTask
task = ProcessCcdTask()
task.runDataRef(ref)

which results in this error:

AttributeError: 'HuntsmanMapper' object has no attribute 'map_defects'

This leads me to suspect that the config has not been loaded properly, since the ISR config file sets doDefect = False. This confuses me, since it works fine when using parseAndRun without explicitly providing the config file.

I could explicitly load the config from a file and pass it as an argument when initialising the task, but I would expect the task to be able to do that itself. Am I missing something?
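
Concretely, I mean something like this (the override path is hypothetical and assumes obs_huntsman ships a processCcd.py override):

from lsst.pipe.tasks.processCcd import ProcessCcdTask

# Load the obs package override explicitly, then pass the config to the task
config = ProcessCcdTask.ConfigClass()
config.load("/path/to/obs_huntsman/config/processCcd.py")  # hypothetical override path
task = ProcessCcdTask(config=config, butler=butler)  # butler from earlier
task.runDataRef(ref)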

Thanks.

Hi again,

I believe I have answered my own question, although I do not particularly like the answer!

It seems the obs package config overrides are applied by the ArgumentParser (in pipe_base), not by the task itself.

This basically means one is forced either to apply the config overrides manually or to use the CmdLineTask paradigm, which makes it quite awkward to use the underlying task functionality directly. Is there an alternative?
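
For anyone else hitting this, here is a rough sketch of doing the overrides by hand, mirroring (as far as I can tell) what the ArgumentParser does; the obs package, camera, and task names are illustrative:

import os
from lsst.utils import getPackageDir

def applyObsOverrides(config, obsPackage="obs_huntsman", camera="huntsman", taskName="processCcd"):
    # Load config/<taskName>.py and then config/<camera>/<taskName>.py from the
    # obs package, if they exist
    configDir = os.path.join(getPackageDir(obsPackage), "config")
    for path in (os.path.join(configDir, taskName + ".py"),
                 os.path.join(configDir, camera, taskName + ".py")):
        if os.path.exists(path):
            config.load(path)

The resulting config can then be passed to the task constructor as above.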

You are correct that the soon-to-be-deprecated Gen2 middleware implements obs_ package overrides in the ArgumentParser, so calling the underlying task directly requires manual synthesis of the appropriate configuration.

I’ll let someone else describe how this changes (or doesn’t) in Gen3.


In gen3, instrument config overrides are applied by the instrument class and can be overridden by subclasses.
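
Roughly, from Python it looks something like this (using HSC as a stand-in instrument; treat the exact class and method names as illustrative):

from lsst.ip.isr import IsrTask
from lsst.obs.subaru import HyperSuprimeCam

# Ask the instrument class to apply its per-task config overrides to this config
config = IsrTask.ConfigClass()
HyperSuprimeCam().applyConfigOverrides(IsrTask._DefaultName, config)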

Gen3 changes pipeline execution completely: pipelines are now defined in YAML, there are standard executors, and there are no longer command line tasks. Gen3 also has a completely rewritten butler, and obs packages are now more standardized. New instruments are very easy to add; the main task is to write a metadata translator (as defined by the lsst/astro_metadata_translator package on GitHub).
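
For context, fetching a single raw with the rewritten Gen3 butler looks something like this (the repo path, collection, and dataId values are illustrative):

from lsst.daf.butler import Butler

butler = Butler("/path/to/gen3/repo", collections="HSC/raw/all")
raw = butler.get("raw", dataId={"instrument": "HSC", "exposure": 903334, "detector": 16})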

Hi @timj, thanks for the information. I look forward to using gen3! I have two questions:

  • Where can I find more information / documentation on the gen3 changes?
  • When will gen3 be officially rolled out?

Thanks!

It depends on what you mean by “official”. The v22 release coming out this week or next is going to be pretty solid, but if you are using gen3 for real, you are better off using whatever the newest weekly happens to be (w_2021_26 at the moment).

We are working on the pipelines.lsst.io tutorials and may even have something out next week. The data preview documentation will also cover the gen3 system extensively and should be published next week.

There are also community documents such as the one here.
