Problems while trying to create a new LSST obs package

Thank you for your suggestions. I have used setup.cfg and pyproject.toml to package the obs_package.

And it seems the pipeline function getPackageDir doesn't work with a package built by setup.cfg and pyproject.toml? So I used the Python standard library instead: path = Path(wfst.__path__[0]).parents[1].
But I wonder whether the error below indicates something serious that I don't know about, just like before, when I was using sys.path.append. (The traceback shows that getPackageDir expects the package name as a string and looks it up through an EUPS-style environment variable, whereas I passed the module object itself.)

Input In [47], in <cell line: 1>()
----> 1 getPackageDir(wfst)

File ~/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/utils/g336def89a8+7bff505259/python/lsst/utils/_packaging.py:45, in getPackageDir(package_name)
     20 """Find the file system location of the EUPS package.
     21
     22 Parameters
   (...)
     42 Does not use EUPS directly. Uses the environment.
     43 """
     44 if not package_name or not isinstance(package_name, str):
---> 45     raise ValueError(f"EUPS package name '{package_name}' is not of a suitable form.")
     47 envvar = f"{package_name.upper()}_DIR"
     49 path = os.environ.get(envvar)

ValueError: EUPS package name '<module 'wfst' from '/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/wfst/src/wfst/__init__.py'>' is not of a suitable form.

The instrument is still under construction. The CCDs we use are 9k x 9k, with 16 amplifiers per CCD, and we plan to store all amplifiers in one extension. Are there any cautions for these two choices? Or are there performance differences, such as processing speed?
I still wonder about the official obs packages (CFHT, DECam, HSC, LSST): their detectors are 4k x 4k or 2k x 4k. I don't know whether the pipeline works well for a bigger 9k x 9k CCD, especially the astrometry part I focus on, for example whether FitTanSipWcsTask fits well, or other parts of the pipeline. I suppose I may need to process data to answer that question.
And since I don't yet fully know the later steps of the pipeline, I haven't processed the simulated data until now. As for the curated calibrations: HSC writes bfKernel and transmissionCurve in the official writeAdditionalCuratedCalibrations function. At the least we will do the transmission correction, and we will choose other calibrations from the official calibration classes you supply.

Thank you!

Manually messing with sys.path seems like the wrong approach.

Where is the getPackageDir coming from? I assume it's coming from the curated calibrations handling, but I'm not sure. If you are using EUPS with sconsUtils to set up the package then getPackageDir will work. If you are not using that but instead want to use something like Python package resources, you will have to implement your own Instrument.getObsDataPackageDir method. The base class assumes you have a separate obs_x_data package that contains the defects etc. If you change getObsDataPackageDir you can make it point wherever you like.
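For illustration, a minimal hedged sketch of that override, assuming a hypothetical pip-installed wfst_data package that plays the role of the obs data package:

import os

import wfst_data  # hypothetical companion package holding defects, etc.
from lsst.obs.base import Instrument


class WfstInstrument(Instrument):
    # The other required Instrument attributes and methods are omitted here.

    @classmethod
    def getObsDataPackageDir(cls):
        # Resolve the data directory from the installed python package
        # instead of an EUPS-style $WFST_DATA_DIR environment variable.
        return os.path.dirname(wfst_data.__file__)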

Those are some large detectors. @yusra do we have any reason to believe there will be problems with detectors that large?


Recently I tried to ingest the raws.
In my Jupyter notebook every to_ method works normally, whether it's a constant map, a trivial map, or a method I wrote myself.
But I can't ingest the files into the database; it seems they were sorted out as bad files.

(lsst-scipipe-0.8.1) [yu@localhost wfst_data]$ butler ingest-raws testbutler_wfst/ ~/wfst_data/data/raw
py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/databases/sqlite.py:444: SAWarning: Class _Ensure will not make use of SQL compilation caching as it does not set the 'inherit_cache' attribute to ``True``.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this object can make use of the cache key generated by the superclass.  Alternatively, this attribute may be set to False which will disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  return connection.execute(_Ensure(table), rows).rowcount
ingest INFO: Successfully extracted metadata from 0 files with 2 failures
ingest WARNING: Could not extract observation metadata from the following:
ingest WARNING: - file:///home/yu/wfst_data/data/raw/data_modified.fits
ingest WARNING: - file:///home/yu/wfst_data/data/raw/wfst00000010.fits
ingest INFO: Successfully processed data from 0 exposures with 0 failures from exposure registration and 0 failures from file ingest.
ingest INFO: Ingested 0 distinct Butler datasets
lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/obs_base/gb0fc2ca601+a0cf348625/python/lsst/obs/base/cli/cmd/commands.py", line 118, in ingest_raws
    script.ingestRaws(*args, **kwargs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/obs_base/gb0fc2ca601+a0cf348625/python/lsst/obs/base/script/ingestRaws.py", line 71, in ingestRaws
    ingester.run(locations, run=output_run, processes=processes, file_filter=regex)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/pipe_base/g5c83ca0194+970dd35637/python/lsst/pipe/base/timer.py", line 181, in wrapper
    res = func(self, *args, **keyArgs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/obs_base/gb0fc2ca601+a0cf348625/python/lsst/obs/base/ingest.py", line 1137, in run
    raise RuntimeError("Some failures encountered during ingestion")
RuntimeError: Some failures encountered during ingestion

So, what may have caused it? What is the criterion that separates good files from bad files?
Thank you!

If you turn on DEBUG logging for the lsst.obs.base logger (butler --log-level lsst.obs.base=DEBUG ingest-raws) then it will tell you the metadata problem.

For ingest to work you must be able to run metadata translation. You can see if it works with

$ astrometadata -p my.translator.module translate data_modified.fits

where my.translator.module is the name of the package that will provide your MetadataTranslator subclass.
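For illustration, a minimal hedged sketch of such a subclass living in your own package (all names are placeholders); MetadataTranslator subclasses register themselves when their module is imported, which is what the -p option triggers:

import astropy.units as u
from astro_metadata_translator import FitsTranslator


class WfstTranslator(FitsTranslator):
    """Hypothetical translator for WFST headers."""

    name = "WFST"
    supported_instrument = "WFST"

    # Properties that take the same value for every file.
    _const_map = {"boresight_rotation_coord": "sky"}

    # Properties copied directly from a single header keyword.
    _trivial_map = {
        "exposure_time": ("EXPTIME", dict(unit=u.s)),
        "observation_id": "EXP-ID",
    }

    @classmethod
    def can_translate(cls, header, filename=None):
        return "INSTRUME" in header and header["INSTRUME"] == "WFST"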


Thank you!
After reading the DEBUG log, I figured out the problem.
I hadn't put my translator in the astro_metadata_translator.translators folder, nor added from .wfst import * to the __init__.py of astro_metadata_translator.translators, so only the official translators (DECam, SDSS, HSC…) were available, and I also got errors such as circular imports.

Then, how do I create a DRP.yaml file that describes the pipeline, like HSC's DRP.yaml? The tutorial Configuring a Butler is too abstract for me :frowning:

Looking forward to your Boot Camp in a few hours!

That is not a pipelines tutorial. It’s an overview of how butler configuration works.
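For the DRP.yaml question: a pipeline file is its own YAML format, essentially a list of labeled tasks. A minimal hedged sketch (the instrument class name is a hypothetical placeholder; the task class paths are as used around v23):

description: Minimal single-frame pipeline sketch
instrument: lsst.obs.wfst.Wfst  # hypothetical instrument class
tasks:
  isr:
    class: lsst.ip.isr.IsrTask
  characterizeImage:
    class: lsst.pipe.tasks.characterizeImage.CharacterizeImageTask
  calibrate:
    class: lsst.pipe.tasks.calibrate.CalibrateTask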

Now that you have some raw files in a butler, you need to think about things like biases and darks and defect masks. Later you need to think about skymaps for how to break the sky into reasonable pieces for coadds.

As for next steps you should take a look at:

And also this discussion about calibrations:

Hello,
I want to process my raw data, but I have come across an error:

cfitsio error (/home/yu/wfst_data/wfst_try/WFST/raw/all/raw/20220627/WFST00000010/raw_WFST_WFST-G_WFST00000010_4_WFST_raw_all.fits) : Incompatible type for FITS image: on disk is uint32 (HDU 0), in-memory is uint16. Read with allowUnsafe=True to permit conversions that may overflow.

What is the meaning of the above error? I guess there may be something wrong with the simulated data.
And the topic Error while trying to create master bias frames doesn't seem to help in my case.

And I met this error when I wanted to process HSC data using the Gaia catalog; it seems I need to modify some filter-related settings if I want to use the Gaia catalog.

ctrl.mpexec.singleQuantumExecutor ERROR: Execution of task 'calibrate' on quantum {instrument: 'HSC', detector: 73, visit: 238096, ...} failed. Exception RuntimeError: Unknown reference filter r_flux
ctrl.mpexec.mpGraphExecutor ERROR: Task <TaskDef(CalibrateTask, label=calibrate) dataId={instrument: 'HSC', detector: 73, visit: 238096, ...}> failed; processing will continue for remaining tasks.
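For reference, the filter setting in question is likely the reference-object loader's filterMap. A hedged sketch for a calibrate config override, assuming a gaia_dr2-style refcat that stores fluxes under phot_g_mean (verify the column name and config paths for your catalog):

# config/calibrate.py override; map camera filters to refcat flux columns.
config.astromRefObjLoader.filterMap = {"r": "phot_g_mean"}
config.photoRefObjLoader.filterMap = {"r": "phot_g_mean"}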

Thank you!

You would need to show me the raw formatter class that you are using to read these files. Some of our formatters read the native on-disk type, others try to force the in-memory python object to be a different type. In this case it looks like you are trying to read 32-bit integer FITS as 16-bit.

Hi!
Thank you for your advice; it told me the error was related to the raw formatter. Following the direction you pointed out, I read the raw formatters of DECam and HSC and learned that we should use a class like lsst.afw.image.ImageF to control the type used to read the data.

And I have some questions about the raw formatter. This is DECam's class: it does not try to read extension 0, it only reads the extension related to each CCD.

class DarkEnergyCameraRawFormatter(FitsRawFormatterBase):
    translatorClass = astro_metadata_translator.DecamTranslator
    filterDefinitions = DarkEnergyCamera.filterDefinitions

    # DECam has a coordinate system flipped on X with respect to our
    # VisitInfo definition of the field angle orientation.
    # We have to specify the flip explicitly until DM-20746 is implemented.
    wcsFlipX = True

    def getDetector(self, id):
        return DarkEnergyCamera().getCamera()[id]

    def _scanHdus(self, filename, detectorId):
        """Scan through a file for the HDU containing data from one detector.

        Parameters
        ----------
        filename : `str`
            The file to search through.
        detectorId : `int`
            The detector id to search for.

        Returns
        -------
        index : `int`
            The index of the HDU with the requested data.
        metadata: `lsst.daf.base.PropertyList`
            The metadata read from the header for that detector id.

        Raises
        ------
        ValueError
            Raised if detectorId is not found in any of the file HDUs
        """
        log = logging.getLogger("DarkEnergyCameraRawFormatter")
        log.debug("Did not find detector=%s at expected HDU=%s in %s: scanning through all HDUs.",
                  detectorId, detector_to_hdu[detectorId], filename)

        fitsData = lsst.afw.fits.Fits(filename, 'r')
        # NOTE: The primary header (HDU=0) does not contain detector data.
        for i in range(1, fitsData.countHdus()):
            fitsData.setHdu(i)
            metadata = fitsData.readMetadata()
            if metadata['CCDNUM'] == detectorId:
                return i, metadata
        else:
            raise ValueError(f"Did not find detectorId={detectorId} as CCDNUM in any HDU of {filename}.")

    def _determineHDU(self, detectorId):
        """Determine the correct HDU number for a given detector id.

        Parameters
        ----------
        detectorId : `int`
            The detector id to search for.

        Returns
        -------
        index : `int`
            The index of the HDU with the requested data.
        metadata : `lsst.daf.base.PropertyList`
            The metadata read from the header for that detector id.

        Raises
        ------
        ValueError
            Raised if detectorId is not found in any of the file HDUs
        """
        filename = self.fileDescriptor.location.path
        try:
            index = detector_to_hdu[detectorId]
            metadata = lsst.afw.fits.readMetadata(filename, index)
            if metadata['CCDNUM'] != detectorId:
                # the detector->HDU mapping is different in this file: try scanning
                return self._scanHdus(filename, detectorId)
            else:
                fitsData = lsst.afw.fits.Fits(filename, 'r')
                fitsData.setHdu(index)
                return index, metadata
        except lsst.afw.fits.FitsError:
            # if the file doesn't contain all the HDUs of "normal" files, try scanning
            return self._scanHdus(filename, detectorId)

    def readMetadata(self):
        index, metadata = self._determineHDU(self.dataId['detector'])
        astro_metadata_translator.fix_header(metadata)
        return metadata

    def readImage(self):
        index, metadata = self._determineHDU(self.dataId['detector'])
        return lsst.afw.image.ImageI(self.fileDescriptor.location.path, index)

But I mimicked the behavior of DECam, as below. (Here I read only extension 1, via fitsData.setHdu(1); the ZTF FITS files I use have two HDUs each: extension 0 is the primary HDU, and extension 1 holds the image from a single amplifier.)

class ZTFCameraRawFormatter(FitsRawFormatterBase):

    wcsFlipX = True

    translatorClass = ZTFTranslator
    filterDefinitions = ZTF_FILTER_DEFINTIONS

    def getDetector(self, id):
        return ZTFCamera().getCamera()[id]

    def readMetadata(self):
        # Reads only the image HDU (extension 1); the primary header
        # (HDU 0, which is where EXPTIME etc. live) is never consulted.
        fitsData = lsst.afw.fits.Fits(self.fileDescriptor.location.path, 'r')
        fitsData.setHdu(1)
        metadata = fitsData.readMetadata()
        fix_header(metadata)
        return metadata

    def readImage(self):
        return lsst.afw.image.ImageF(self.fileDescriptor.location.path, 1)

And then, when I use pipetask run to process the successfully ingested data, it emits a lot of warnings like:

astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'altaz_begin' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: 'DATE-OBS not found'
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'observation_type' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: 'IMGTYPE not found'
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'dark_time' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: "Could not find ['EXPTIME'] in header"
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'datetime_end' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: "Could not find ['EXPTIME'] in header"
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'detector_exposure_id' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: 'FILENAME not found'

But if I could ingest the data, why do these warnings happen?
Things are only normal if I change the HDU to 0:

    def readMetadata(self):
        
        fitsData = lsst.afw.fits.Fits(self.fileDescriptor.location.path, 'r')
        fitsData.setHdu(0)
        metadata = fitsData.readMetadata()
        fix_header(metadata)
        return metadata

And the result is

characterizeImage.detection INFO: Detected 6441 positive peaks in 2287 footprints and 0 negative peaks in 0 footprints to 100 sigma
characterizeImage.detection INFO: Resubtracting the background after object detection
characterizeImage.measurement INFO: Measuring 2287 sources (2287 parents, 0 children) 
characterizeImage.measurePsf INFO: Measuring PSF
characterizeImage.measurePsf INFO: PSF star selector found 3 candidates
characterizeImage.measurePsf.reserve INFO: Reserved 0/3 sources
characterizeImage.measurePsf INFO: Sending 3 candidates to PSF determiner
characterizeImage.measurePsf.psfDeterminer WARNING: NOT scaling kernelSize by stellar quadrupole moment, but using absolute value

> WARNING: 1st context group-degree lowered (not enough samples)


> WARNING: 1st context group removed (not enough samples)

characterizeImage.measurePsf INFO: PSF determination using 3/3 stars.
py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/pipe_tasks/gf1799c5b72+6048f86b6d/python/lsst/pipe/tasks/characterizeImage.py:495: FutureWarning: Default position argument overload is deprecated and will be removed in version 24.0.  Please explicitly specify a position.
  psfSigma = psf.computeShape().getDeterminantRadius()

ctrl.mpexec.singleQuantumExecutor ERROR: Execution of task 'characterizeImage' on quantum {instrument: 'ZTF', detector: 43, visit: 20210804, ...} failed. Exception InvalidParameterError: 
  File "src/PsfexPsf.cc", line 233, in virtual std::shared_ptr<lsst::afw::image::Image<double> > lsst::meas::extensions::psfex::PsfexPsf::_doComputeImage(const Point2D&, const lsst::afw::image::Color&, const Point2D&) const
    Only spatial variation (ndim == 2) is supported; saw 0 {0}
lsst::pex::exceptions::InvalidParameterError: 'Only spatial variation (ndim == 2) is supported; saw 0'

At least it begins to work; I will read the task's config class to figure out the above error.
My question is: why do these warnings happen?

astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'dark_time' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: "Could not find ['EXPTIME'] in header"
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'datetime_end' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132

EXPTIME indeed exists only in extension 0, not in extension 1, for ZTF.
But why does DECam read only the extension related to each CCD, yet without warnings like ZTF's? I didn't find it reading extension 0; DECam's structure is extension 0 for the primary header plus tens of image extensions, and many header keys used in DecamTranslator exist only in extension 0. For example:

DECam_raw[0].header['EXPTIME']
30

DECam_raw[1].header['EXPTIME']
Input In [75], in <cell line: 1>()
----> 1 DECam_raw[1].header['EXPTIME']
KeyError: "Keyword 'EXPTIME' not found."

It may be related to something in the raw formatter that I don't know yet. It looks like it would be convenient if we laid out the FITS files like HSC (one file, one extension) :cry:

Thank you!

Raw ingest is doing:

and I don’t know what your metadata translator class is doing but if you copied the DECam one then that is always merging HDU 0 with the other headers:

The lsst.afw.fits.readMetadata() method is being clever and automatically merging the primary header with the requested header because DECam is using the INHERIT = T header to tell the reader that this should happen. If your data does not have that set you will have to do the merging yourself in your raw formatter. You can use the merge_headers method just like we do in the metadata translator.
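Concretely, a hedged sketch of doing that merge inside a raw formatter's readMetadata, using the two-HDU ZTF layout described above:

import lsst.afw.fits
from astro_metadata_translator import fix_header, merge_headers


def readMetadata(self):
    # Method of your FitsRawFormatterBase subclass.
    path = self.fileDescriptor.location.path
    primary = lsst.afw.fits.readMetadata(path, 0)  # EXPTIME etc. live here
    ext = lsst.afw.fits.readMetadata(path, 1)      # per-amplifier image header
    # mode="overwrite" lets extension keywords take precedence over primary ones.
    merged = merge_headers([primary, ext], mode="overwrite")
    fix_header(merged)
    return merged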

That can also work but you can also fix it in code as described above or declare the inheritance.


Hello! Recently I successfully finished the characterizeImage and calibrate steps and got the calexp dataset type. Without running fgcm or jointcal, I want to do the coadd directly.
When I run the following command:

pipetask run -b ./butler.yaml -p pipelines/DRP.yaml#makeWarp -i u/yu/calib,skymaps -o u/yu/warps -d "skymap = 'sky'" --register-dataset-types

I get:

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1379: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  connection.execute(table.insert().from_select(names, select))

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1640: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  return connection.execute(sql, *args, **kwargs)

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py:148: UserWarning: QuantumGraph is empty
  qgraph = f.makeGraph(pipelineObj, args)

lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 106, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 151, in qgraph
    raise RuntimeError("QuantumGraph is empty.")
RuntimeError: QuantumGraph is empty.

This means it can't find data to execute the makeWarp step; and if I deliberately change the skymap to a wrong name, I get the same error, for example:

(lsst-scipipe-0.8.1) [yu@localhost 2022-09-08]$ pipetask run -b ./butler.yaml -p pipelines/DRP.yaml#makeWarp -i u/yu/calib,skymaps -o u/yu/warps -d "tract = 9813 AND skymap = 'hsc_rings_v1' AND patch in (38, 39, 40, 41)"  --register-dataset-types
py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1379: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  connection.execute(table.insert().from_select(names, select))

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1640: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  return connection.execute(sql, *args, **kwargs)

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py:148: UserWarning: QuantumGraph is empty
  qgraph = f.makeGraph(pipelineObj, args)

lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 106, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 151, in qgraph
    raise RuntimeError("QuantumGraph is empty.")
RuntimeError: QuantumGraph is empty.

but actually I only have the following skymaps (forgive the arbitrary skymap names):

(lsst-scipipe-0.8.1) [yu@localhost 2022-09-08]$ butler query-data-ids . skymap
         skymap        
-----------------------
                    di3
            discrete/hi
           discrete/hit
          discrete/hits
discrete/validation_hsc
                    sky
                   sky1

It seems the command didn't read the content after -d:

pipetask run -b ./butler.yaml -p pipelines/DRP.yaml#makeWarp -i u/yu/calib,skymaps -o u/yu/warps -d "tract = 9813 AND skymap = 'hsc_rings_v1' AND patch in (38, 39, 40, 41)"  --register-dataset-types

So, what should I modify, or what should I refer to?
Also, I may have set config.skyMap["discrete"] wrong; is there a command that can determine the tracts automatically from the images I input?
Thank you!

Are there any additional warnings above the ones you’ve printed? With recent versions of our code you should get much more diagnostic information about an empty QuantumGraph, and while a bad skymap can be one cause of that, it’s not the only one.

That said, I’m worried the lsst-scipipe-0.8.1 in your prompt suggests you’re using a fairly old version of the stack, and you may not be able to get those diagnostics without upgrading.

This is all the log it printed, and the version I use is 23.0.1; I will install a newer pipeline later. I remember that in the Gen2 butler there was a command that made tracts and patches from the data I input; has it been removed?

makeDiscreteSkyMap.py DATA --rerun coadd_test_2 --calib DATA --id visit=176..184 --clobber-config

Thank you!

You use this:

$ butler make-discrete-skymap --help
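An example invocation might look like the following hedged sketch (the repo path, collection, and instrument class name are placeholders; trust --help over this sketch for the exact signature):

$ butler make-discrete-skymap --collections u/yu/calib . lsst.obs.wfst.Wfst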

You may also want to look at the FAQ on why a quantum graph can be empty:

https://pipelines.lsst.io/v/weekly/middleware/faq.html#how-do-i-fix-an-empty-quantumgraph


Recently I have been wondering about the pixel scale the pipeline gives.
For example, using the function in ref_match.py:

ref_match._getExposureMetadata(raw_ztf)

And part of the ZTF header is:

XTENSION= 'IMAGE   '           / IMAGE extension                                
BITPIX  =                  -32 / number of bits per data pixel                  
NAXIS   =                    2 / number of data axes                            
NAXIS1  =                 3072 / length of data axis 1                          
NAXIS2  =                 3080 / length of data axis 2                          
PCOUNT  =                    0 / required keyword; must = 0                     
GCOUNT  =                    1 / required keyword; must = 1                     
EXTNAME = '2       '                                                            
HDUVERS =                    1                                                  
CCD_ID  =                   11 / ID value of CCD detector                                   
WCSVERS = '2.1     '           / WCS version number                             
WCSAXES =                    2 / Number of WCS axes                             
WCSNAME = 'ZTF     '           / Name of the WCS coordinate system              
RADESYS = 'ICRS    '           / Coordinate reference system                    
CTYPE1  = 'RA---TAN'           / Name of coord X axis                           
CTYPE2  = 'DEC--TAN'           / Name of coord Y axis                           
CRPIX1  =            -3319.671 / Coord system X ref pixel                       
CRPIX2  =             6469.751 / Coord system Y ref pixel                       
CRVAL1  =              6.66667 / Coord system value at CRPIX1                   
CRVAL2  =                62.15 / Coord system value at CRPIX2                   
CUNIT1  = 'deg     '           / Coordinate units for X axis                    
CUNIT2  = 'deg     '           / Coordinate units for Y axis                    
CD1_1   =        -0.0002815649 / WCS matrix                                     
CD1_2   =            6.814E-07 / WCS matrix                                     
CD2_1   =           -5.349E-07 / WCS matrix                                     
CD2_2   =        -0.0002815497 / WCS matrix                                     
CCD_ROT =                   0. / CCD rotation angle                                                                          

And I get the following output:

Struct(bbox=(minimum=(0, 0), maximum=(3071, 3079)); wcs=Non-standard SkyWcs (Frames: PIXELS, IWC, SKY): 
Sky Origin: (6.6666700000, +62.1500000000)
Pixel Origin: (6154, -3090)
Pixel Scale: 0.263498 arcsec/pixel; photoCalib=None; filterName=r; epoch=59430.36474012732)

But ZTF's website gives the following:

Pixel scale	1.0"/pixel

and the CD1_1 value also suggests that (0.0002815649 deg is about 1.01 arcsec):

CD1_1   =        -0.0002815649 / WCS matrix 

So, why does the function return the value 0.263498 arcsec/pixel?
I guess it may be caused by something I set wrong in the obs_package?

Thank you!

Forgive me if this is a red herring, but I can see DECam mentioned above in this topic, and the DECam plate scale is 0.2626 to 0.2637 arcsec/pixel, so your guess of a wrong obs package, or perhaps DECam-based inputs or values being set somewhere, might be the culprit.

Thank you!

The topic is Problems while trying to create a new LSST obs package, and the DECam mentioned above was a reference for creating the new obs_package.

I created an obs_ZTF to process ZTF images, aiming at learning the pipeline; the Pixel Scale: 0.263498 arcsec/pixel above was obtained from an image ingested with that obs_ZTF.

And in another package I created, obs_wfst, the same problem occurred again:

The header of the image:

SIMPLE  =                    T / conforms to FITS standard                      
BITPIX  =                  -64 / array data type                                
NAXIS   =                    2 / number of array dimensions                     
NAXIS1  =                 9216                                                  
NAXIS2  =                 9232                                                  
EXTEND  =                    T                                                  
CRPIX1  =               4608.5                                                  
CRPIX2  =               4616.5                                                  
CRVAL1  =                  180                                                  
CRVAL2  =                   36                                                  
CD1_1   = 9.16666666666666E-05                                                  
CD1_2   =                    0                                                  
CD2_1   =                    0                                                  
CD2_2   = 9.16666666666666E-05                                                  
CTYPE1  = 'RA---TAN'                                                            
CTYPE2  = 'DEC--TAN'                                                            
EXPTIME =                   30                                                  
DARKTIME=                   30                                                  
AIRMASS =                    1                                                  
EXP-ID  = 'WFST00000060'                                                        
OBJECT  = '9999    '                                                            
CCDNUM  =                    4                                                  
HUMIDITY=                   20                                                  
OUTTEMP =                   20                                                  
PRESSURE=                621.6                                                  
INSTRUME= 'WFST    '                                                            
FILTER  = 'WFST-G  '                                                            
DATE-OBS= '2022-09-27'                                                          
UT-STR  = '10:55:27.728'                                                        
UT-END  = '10:55:57.728'                                                        
DATA-TYP= 'object  '                                                            
RA2000  =                  180                                                  
DEC2000 =                   36                                                  
ALTITUDE=                    0                                                  
AZIMUTH =                    0         

The pixel scale from the WCS the LSST pipeline builds:

FITS standard SkyWcs:
Sky Origin: (180.0000000000, +36.0000000000)
Pixel Origin: (4607.5, 4615.5)
Pixel Scale: 0.175665 arcsec/pixel

but it should be 0.33 (3600 x 9.16e-5)?
Is that normal? I guess it's related to something I have set wrong in my obs_package?

Thank you!

Hello!
What you said is correct: the value Pixel Scale: 0.263498 arcsec/pixel is indeed related to DECam, since I wrote the package based on obs_decam, and the problem above is caused by the following code in camera.py:

# Name of native coordinate system
config.transformDict.nativeSys = 'FocalPlane'

config.transformDict.transforms = {}
config.transformDict.transforms['FieldAngle'] = lsst.afw.geom.transformConfig.TransformConfig()
config.transformDict.transforms['FieldAngle'].transform['multi'].transformDict = None
# x, y translation vector
config.transformDict.transforms['FieldAngle'].transform['affine'].translation = [0.0, 0.0]

# 2x2 linear matrix in the usual numpy order;
#             to rotate a vector by theta use: cos(theta), sin(theta), -sin(theta), cos(theta)
config.transformDict.transforms['FieldAngle'].transform['affine'].linear = [1.0, 0.0, 0.0, 1.0]

# Coefficients for the radial polynomial; coeff[0] must be 0
config.transformDict.transforms['FieldAngle'].transform['radial'].coeffs = [
    0.0, 8.516497674138379e-05, 0.0, -4.501399132955917e-12]

config.transformDict.transforms['FieldAngle'].transform.name = 'radial'

I didn't fully understand the code above when I wrote the package a long time ago, and simply copied DECam's values.
Is there any tutorial about how to set these values? And is there any article or website that introduces these transforms and the exact meaning of the terms (for example FieldAngle) used in the camera coordinate systems?
Thank you!

That code is how you define the camera geometry. We are currently migrating away from that scheme to something based on YAML.

We needed something a bit more flexible for LSSTCam. This is also not really documented though.
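For intuition about how that config fixed the pixel scale: the radial transform maps focal-plane millimetres to field-angle radians, so coeffs[1] multiplied by the pixel pitch gives the scale. A hedged back-of-envelope check (the 15 and 10 micron pixel pitches are assumptions about the two cameras discussed above):

import math

RAD_TO_ARCSEC = math.degrees(1) * 3600           # ~206265 arcsec per radian
coeff1 = 8.516497674138379e-05                   # rad/mm, from the config above

print(coeff1 * 0.015 * RAD_TO_ARCSEC)  # 15 um pixels: ~0.263498, the obs_ZTF value seen earlier
print(coeff1 * 0.010 * RAD_TO_ARCSEC)  # 10 um pixels: ~0.175665, the obs_wfst value seen earlier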

cc/ @leanne

Hello,
Recently I have wanted to process the CCD with different division schemes, to test which division is better.
The distortion depends only on the radius, but the correct matches are concentrated mainly in the bottom-left part of the image, and it seems the TAN-SIP fit can't describe the top-right area well (Figure_1).

So I want to divide the image into smaller parts (our CCDs are 9k x 9k, with 16 amplifiers of 1k x 4k each). I also want to know whether dividing them would be useless.
The way I can think of to achieve this is to modify the description of the CCDs in camera.py and, at the same time, the amplifier files ending in .fits. But it's rather troublesome to modify so many files; is there a simpler way, for example modifying something in isrTask.py or assembleCcdTask.py, that could achieve this too?
Thank you!