Problems while trying to create a new LSST obs package

Hello, I am a postgraduate student new to astronomy, working on WFST and trying to use your well-designed LSST pipeline to process our instrument’s data. At the very beginning my aim was only the astrometry part, but I then found that the pipeline is well designed and can be applied to other instruments once the obs package is set up properly. So I finally decided to create a package, obs_wfst. I have met a lot of problems along the way, and I want to collect my questions about creating a new package in this topic.
First, following the four steps in “How to create a new LSST obs package”, here is my instrument test:

import unittest

import lsst.utils.tests
import lsst.obs.example
from lsst.obs.base.instrument_tests import InstrumentTests, InstrumentTestData


class TestExampleCam(InstrumentTests, lsst.utils.tests.TestCase):
    def setUp(self):
        physical_filters = {"example g filter",
                            "example z filter"}

        self.data = InstrumentTestData(name="Example",
                                       nDetectors=4,
                                       firstDetectorName="1_1",
                                       physical_filters=physical_filters)
        self.instrument = lsst.obs.example.ExampleCam()

if __name__ == '__main__':
    lsst.utils.tests.init()
    unittest.main()

Three of the four tests passed; the SQLite-related test failed:

self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7f1442060c70>, cursor = <sqlite3.Cursor object at 0x7f14336727a0>
statement = 'INSERT INTO instrument (name, visit_max, exposure_max, detector_max, class_name) VALUES (?, ?, ?, ?, ?)'
parameters = ('WFST', 1073741824, 1073741824, 10, '_instrument.WFSTCamera'), context = <sqlalchemy.dialects.sqlite.base.SQLiteExecutionContext object at 0x7f14336816d0>

    def do_execute(self, cursor, statement, parameters, context=None):
>       cursor.execute(statement, parameters)
E       sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: instrument.name
E       [SQL: INSERT INTO instrument (name, visit_max, exposure_max, detector_max, class_name) VALUES (?, ?, ?, ?, ?)]
E       [parameters: ('WFST', 1073741824, 1073741824, 10, '_instrument.WFSTCamera')]
E       (Background on this error at: https://sqlalche.me/e/14/gkpj)

../../../../../conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-0.7.0/lib/python3.8/site-packages/sqlalchemy/engine/default.py:719: IntegrityError
================================================================================== short test summary info ==================================================================================
FAILED test_instrument.py::TestInstrument::test_register - sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: instrument.name
================================================================================ 1 failed, 3 passed in 1.79s ================================================================================

I inspected HSC’s SQLite registry and found that in the instrument table its class_name is lsst.obs.subaru.HyperSuprimeCam, so I guess mine should have a similar format. I think this problem could be solved if I could import my package like

from lsst.obs.wfst import WFSTCamera

not

import sys
sys.path.append("/home/yu/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/obs_wfst/python/obs/wfst")

from _instrument import WFSTCamera

This also means the function get_full_type_name(self) does not work properly.
So, how can I make my package interact with your interface and be importable as lsst.obs.wfst?

I am also wondering about the meaning of 22.0.1-44-g74bdbb4e+988f982fce in the path below; it looks like the dataset id in SQLite. I guess the package I created should also have a similarly named folder in order to communicate with the interface?

/home/yu/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/obs_subaru/22.0.1-44-g74bdbb4e+988f982fce/hsc/

Second, about the method makeDataIdTranslatorFactory:

    def makeDataIdTranslatorFactory(self) -> TranslatorFactory:
        # Docstring inherited from lsst.obs.base.Instrument.
        factory = TranslatorFactory()
        factory.addGenericInstrumentRules(self.getName(), calibFilterType="band",
                                          detectorKey="ccdnum")
        # DECam calibRegistry entries are bands or aliases, but we need
        # physical_filter in the gen3 registry.
        factory.addRule(_DecamBandToPhysicalFilterKeyHandler(self.filterDefinitions),
                        instrument=self.getName(),
                        gen2keys=("filter",),
                        consume=("filter",),
                        datasetTypeName="cpFlat")
        return factory

I guess this is for translating gen2 data IDs to gen3. If I want to make a purely gen3 package, do I still need it? The test does not pass without it.

Third, about the transmission curves: there are many transmission-curve files for HSC:

[yu@localhost transmission]$ pwd
/home/yu/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/obs_subaru/22.0.1-44-g74bdbb4e+988f982fce/hsc/transmission
[yu@localhost transmission]$ ls
filterTraces.py                              wHSC-IB945.txt  wHSC-NB656.txt
M1-2010s.txt                                 wHSC-N1010.txt  wHSC-NB718.txt
modtran_maunakea_am12_pwv15_binned10ang.dat  wHSC-N400.txt   wHSC-NB816.txt
qe_ccd_HSC.txt                               wHSC-NB387.txt  wHSC-NB921.txt
README.txt                                   wHSC-NB468.txt  wHSC-NB926.txt
throughput_popt2.txt                         wHSC-NB515.txt  wHSC-NB973.txt
throughput_win.txt                           wHSC-NB527.txt

And in HSC’s _instrument.py there is code that uses these files to create transmission curves, for example:

        # Write transmission sensor
        sensorTransmissions = getSensorTransmission()
        datasetType = DatasetType("transmission_sensor",
                                  ("instrument", "detector",),
                                  "TransmissionCurve",
                                  universe=butler.registry.dimensions,
                                  isCalibration=True)
        butler.registry.registerDatasetType(datasetType)
        for entry in sensorTransmissions.values():
            if entry is None:
                continue
            for sensor, curve in entry.items():
                dataId = DataCoordinate.standardize(baseDataId, detector=sensor)
                refs.append(butler.put(curve, datasetType, dataId, run=run))

        # Write filter transmissions
        filterTransmissions = getFilterTransmission()
        datasetType = DatasetType("transmission_filter",
                                  ("instrument", "physical_filter",),
                                  "TransmissionCurve",
                                  universe=butler.registry.dimensions,
                                  isCalibration=True)
        butler.registry.registerDatasetType(datasetType)
        for entry in filterTransmissions.values():
            if entry is None:
                continue
            for band, curve in entry.items():
                dataId = DataCoordinate.standardize(baseDataId, physical_filter=band)
                refs.append(butler.put(curve, datasetType, dataId, run=run))

        # Write atmospheric transmissions
        atmosphericTransmissions = getAtmosphereTransmission()
        datasetType = DatasetType("transmission_atmosphere", ("instrument",),
                                  "TransmissionCurve",
                                  universe=butler.registry.dimensions,
                                  isCalibration=True)
        butler.registry.registerDatasetType(datasetType)
        for entry in atmosphericTransmissions.values():
            if entry is None:
                continue
            refs.append(butler.put(entry, datasetType, {"instrument": self.getName()}, run=run))

        # Associate all datasets with the unbounded validity range.
        butler.registry.certify(collection, refs, Timespan(begin=None, end=None))

But I didn’t find a similar method in DECam’s _instrument.py, even though the quantum graph for single-frame processing does make use of the transmission curves. So where are DECam’s transmission curves, and what causes this difference? I also noticed things like the bfKernel and stray light data in HSC’s _instrument.py.
single_frame.pdf (23.0 KB)

There are other questions too, which I will post here later. I’m just a recruit new to astronomy, still in boot camp, so these problems may seem naive to you. Any answers, suggestions, or documents you can point me to will be very helpful.

Thank you!

You should not be naming your package lsst.obs.X – it’s going to be much clearer if you use your project’s namespace.

Without seeing your code, debugging this is going to be difficult. I think you have copied and pasted some HSC code into your package. See for example the instrument record construction in obs_cfht:
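
Roughly, the pattern is an idempotent sync of the instrument record. This is a from-memory sketch, not the obs_cfht code itself, with hypothetical ID limits:

from lsst.obs.base import Instrument
from lsst.utils import get_full_type_name


class WFSTCamera(Instrument):
    # ... filterDefinitions, getCamera(), and the other required pieces omitted ...

    def register(self, registry):
        # Sync rather than insert, so registering the instrument twice does
        # not hit the UNIQUE constraint seen in the traceback above.
        obsMax = 2**30  # 1073741824, as in the failing INSERT above
        registry.syncDimensionData(
            "instrument",
            {
                "name": self.getName(),
                "visit_max": obsMax,
                "exposure_max": obsMax,
                "detector_max": 10,
                "class_name": get_full_type_name(self),
            },
        )
        # Detector and physical_filter records are synced in the same way.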

This seems to indicate that your package is not lsst.obs.wfst but is obs.wfst.

Can you import lsst.obs.wfst? get_full_type_name(self) should always be able to work. What error do you get?

Delete it. You don’t need it.

Now you are getting into curated calibrations. You will see that there are obs_x_data packages that contain defects and QE curves and other text-based calibrations. Transmission curves are similar in that they are fairly static and can be ingested once. Some of them are prebuilt binaries that must be ingested, others are built differently.

Curated calibrations are ingested using the butler write-curated-calibrations command. We have some default definitions and then instruments can add some to the list. See for example:
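
For illustration, a rough sketch of driving the same operation from Python, assuming a repo at testbutler_wfst/ and a hypothetical WFSTCamera instrument class:

import lsst.daf.butler
from wfst import WFSTCamera  # hypothetical package name

# Open the repository for writing and let the instrument class write its
# curated calibrations (defects, transmission curves, ...) into it.
butler = lsst.daf.butler.Butler("testbutler_wfst", writeable=True)
WFSTCamera().writeCuratedCalibrations(butler)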

I’m not sure how DECam handles transmission curves.

This is the brighter-fatter kernel. We are currently transitioning the calibration class for this from the HSC-specific one to a more generic form. I do not know whether your instrument needs such a correction.


Thank you!
After inspecting and correcting my code, I have passed the instrument tests.

What I meant in that sentence is that I thought the error was caused by the class_name not having the lsst.obs.X format, since Subaru’s class_name in its SQLite registry is lsst.obs.subaru.HyperSuprimeCam, while for me get_full_type_name(self) returned _instrument.WFSTCamera. Now I know the error was not caused by that.

I originally put my package in an independent folder:

[yu@localhost obs_wfst]$ pwd
/home/yu/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/obs_wfst
[yu@localhost obs_wfst]$ ls
config  python  tests  ups  wfst

It could only be imported as follows, which is why the class_name did not have the lsst.obs.X format:

import sys
sys.path.append("/home/yu/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/obs_wfst/python/obs/wfst")
from _instrument import WFSTCamera

Trying to import lsst.obs.wfst gives a very common error:

ModuleNotFoundError: No module named 'lsst.obs.wfst'

And if I put the Python module into the obs_lsst subfolder

/home/yu/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/obs_lsst/22.0.1-61-gbd5239c+96886d66ff/python/lsst/obs/wfst

I can import WFSTCamera with

from lsst.obs.wfst._instrument import WFSTCamera

The above doesn’t matter now: after correcting my code, I find that the WFST package passes the instrument tests in both locations:

test_instrument.py::TestInstrument::testMakeTranslatorFactory PASSED
test_instrument.py::TestInstrument::test_getCamera PASSED
test_instrument.py::TestInstrument::test_name PASSED
test_instrument.py::TestInstrument::test_register PASSED

Before the next step, I have some doubts about the amplifier FITS files. Although there are comments, I still can’t work out the exact meaning of headers like readoutcorner, linearity_coeffs, raw_flip_x and so on. Is there a manual with more detailed information about the amplifier settings?
For example, in HSC’s 0_00.fits:

TTYPE11 = 'readoutcorner'      / readout corner, in the frame of the assembled i
TFORM11 = '1J      '           / format of field                                
TDOC11  = 'readout corner, in the frame of the assembled image'                 
TCCLS11 = 'Scalar  '           / Field template used by lsst.afw.table          
TTYPE12 = 'linearity_coeffs'   / coefficients for linearity fit up to cubic     
TFORM12 = '4D      '           / format of field                                
TDOC12  = 'coefficients for linearity fit up to cubic'                          
TCCLS12 = 'Array   '           / Field template used by lsst.afw.table          
TTYPE13 = 'linearity_type'     / type of linearity model                        
TFORM13 = '64A     '           / format of field                                
TDOC13  = 'type of linearity model'                                             
TCCLS13 = 'String  '           / Field template used by lsst.afw.table          
TFLAG1  = 'hasrawinfo'                                                          
TFDOC1  = 'is raw amplifier information available (e.g. untrimmed bounding box&'
CONTINUE  'es)?    '         
TFLAG2  = 'raw_flip_x'                                                          
TFDOC2  = 'flip row order to make assembled image?'                             
TFLAG3  = 'raw_flip_y'                                                          
TFDOC3  = 'flip column order to make an assembled image?'                       

Also, I have now written a simplified CCD.py and amplifier.fits and want to process simulated data. What package do you use to simulate data (raws, biases and so on) based on the CCD and amplifier descriptions?

And in “Getting started with the LSST Science Pipelines” you provide a package, rc2_subset. Is there a guide on how to create a similar package?

Thank you!

This name indicates that there is something very wrong with the way you have set up your python package.

Can you try to set it up as a buildable python package? You should not need to be using sys.path.append to make things work. The reason we use _instrument.py is to show that the general user should never try to import from that file – there is an expectation that your package’s __init__.py will pull the class from _instrument.py (and the get_full_type_name code is clever enough to realize that the _ in the module name means it can be dropped from the full name).
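
For instance, a minimal sketch using a plain wfst package name (rather than the lsst.obs namespace, per the advice below), where __init__.py simply re-exports the class:

# wfst/__init__.py  (hypothetical layout)
from ._instrument import WFSTCamera

# get_full_type_name(WFSTCamera()) then resolves to "wfst.WFSTCamera",
# because the leading underscore lets "_instrument" be dropped.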

You should be able to build a normal python package with a setup.cfg or pyproject.toml and build and install it (using pip install -e . or somesuch). Alternatively you can try to set up obs packages like we do using SCons and eups but that’s not necessary.

It would be much better for you if you can learn how to create a python package that can be installed and distributed. Note that you do not want to put it in the lsst.obs namespace.

There isn’t a centralized guide but there is a discussion here:

Some questions:

  • How are your raw files laid out? One FITS extension per amplifier or all amplifiers in one extension?
  • Does astrometadata translate work on your raw files?
  • Do you know what format your defects and other curated calibrations take?

We have used a package called imsim to simulate the data. I don’t know anything about the simulations though.

Thank you for your suggestions. I have used setup.cfg and pyproject.toml to package the obs package.

It seems the pipeline function getPackageDir doesn’t work with a package built via setup.cfg and pyproject.toml, so I used plain Python instead: path = Path(wfst.__path__[0]).parents[1].
But I wonder if the error below indicates something serious that I don’t know about, as with my earlier sys.path.append workaround.

Input In [47], in <cell line: 1>()
----> 1 getPackageDir(wfst)

File ~/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/utils/g336def89a8+7bff505259/python/lsst/utils/_packaging.py:45, in getPackageDir(package_name)
     20 """Find the file system location of the EUPS package.
     21
     22 Parameters
   (...)
     42 Does not use EUPS directly. Uses the environment.
     43 """
     44 if not package_name or not isinstance(package_name, str):
---> 45     raise ValueError(f"EUPS package name '{package_name}' is not of a suitable form.")
     47 envvar = f"{package_name.upper()}_DIR"
     49 path = os.environ.get(envvar)

ValueError: EUPS package name '<module 'wfst' from '/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/wfst/src/wfst/__init__.py'>' is not of a suitable form.
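
For reference, the traceback suggests getPackageDir wants the EUPS package name as a string and resolves it through a <NAME>_DIR environment variable, so (if I understand correctly) something like the following only works for an EUPS-managed package:

from lsst.utils import getPackageDir

# Works only when EUPS has set OBS_SUBARU_DIR in the environment; a
# pip-installed package has no such variable, hence the failure above.
path = getPackageDir("obs_subaru")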

The instrument is still under construction. The CCDs we use are 9k x 9k, with 16 amplifiers per CCD, and we plan to store all amplifiers in one extension. Are there any cautions about the two layouts, or differences in performance such as processing speed?
I also wonder about detector size: the official obs packages (CFHT, DECam, HSC, LSST) use 4k x 4k or 2k x 4k CCDs. I don’t know whether the pipeline works well for a bigger 9k x 9k CCD, especially the astrometry part I focus on, e.g. whether FitTanSipWcsTask fits well, or other parts of the pipeline. I suppose I may need to process data to answer that.
Since I don’t yet fully know the later steps of the pipeline, I haven’t processed the simulated data so far. As for the curated calibrations: HSC writes the bfKernel and transmission curves in its writeAdditionalCuratedCalibrations method. At least we will do the transmission correction, and we will choose other calibrations from the official calibration classes you supply.

Thank you!

Manually messing with sys.path seems like the wrong approach.

Where is the getPackageDir coming from? I assume it’s coming from the curated calibrations handling but I’m not sure. If you are using EUPS with sconsUtils to set up the package then getPackageDir will work. If you are not using that but are instead wanting to use something like python package resources you will have to implement your own Instrument.getObsDataPackageDir method. The base class assumes you have an obs_x_data package that is separate but contains the defects etc. If you change getObsDataPackageDir you can make it point wherever you like.
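
An untested sketch of that override (check whether your obs_base version defines it as a classmethod; the data/ subdirectory is just a hypothetical location):

import os.path

from lsst.obs.base import Instrument


class WFSTCamera(Instrument):
    # ... filterDefinitions, getCamera(), register(), etc. omitted ...

    def getObsDataPackageDir(self):
        # Point the curated-calibration machinery at files shipped inside
        # the installed python package instead of a separate EUPS
        # obs_wfst_data package.
        return os.path.join(os.path.dirname(__file__), "data")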

Those are some large detectors. @yusra do we have any reason to believe there will be problems with detectors that large?


Recently I tried to ingest the raws.
In my Jupyter notebook every to_ method works normally, whether it is a constant map, a trivial map, or a method I wrote myself.
But I can’t ingest the files into the database; they seem to be sorted into the bad_files:

(lsst-scipipe-0.8.1) [yu@localhost wfst_data]$ butler ingest-raws testbutler_wfst/ ~/wfst_data/data/raw
py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/databases/sqlite.py:444: SAWarning: Class _Ensure will not make use of SQL compilation caching as it does not set the 'inherit_cache' attribute to ``True``.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this object can make use of the cache key generated by the superclass.  Alternatively, this attribute may be set to False which will disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  return connection.execute(_Ensure(table), rows).rowcount
ingest INFO: Successfully extracted metadata from 0 files with 2 failures
ingest WARNING: Could not extract observation metadata from the following:
ingest WARNING: - file:///home/yu/wfst_data/data/raw/data_modified.fits
ingest WARNING: - file:///home/yu/wfst_data/data/raw/wfst00000010.fits
ingest INFO: Successfully processed data from 0 exposures with 0 failures from exposure registration and 0 failures from file ingest.
ingest INFO: Ingested 0 distinct Butler datasets
lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/obs_base/gb0fc2ca601+a0cf348625/python/lsst/obs/base/cli/cmd/commands.py", line 118, in ingest_raws
    script.ingestRaws(*args, **kwargs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/obs_base/gb0fc2ca601+a0cf348625/python/lsst/obs/base/script/ingestRaws.py", line 71, in ingestRaws
    ingester.run(locations, run=output_run, processes=processes, file_filter=regex)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/pipe_base/g5c83ca0194+970dd35637/python/lsst/pipe/base/timer.py", line 181, in wrapper
    res = func(self, *args, **keyArgs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/obs_base/gb0fc2ca601+a0cf348625/python/lsst/obs/base/ingest.py", line 1137, in run
    raise RuntimeError("Some failures encountered during ingestion")
RuntimeError: Some failures encountered during ingestion

So what may have caused this? What is the criterion that separates good_files from bad_files?
Thank you!

If you turn on DEBUG logging for the lsst.obs.base logger (butler --log-level lsst.obs.base=DEBUG ingest-raws) then it will tell you the metadata problem.

For ingest to work you must be able to run metadata translation. You can see if it works with

$ astrometadata -p my.translator.module translate data_modified.fits

where my.translator.module is the name of the package that will provide your MetadataTranslator subclass.
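
For illustration only, a minimal (hypothetical) translator subclass kept in your own package might look like this; defining the subclass is enough to register it once its module is imported:

import astropy.units as u

from astro_metadata_translator import FitsTranslator


class WFSTTranslator(FitsTranslator):
    """Hypothetical translator living in your own package, not inside
    astro_metadata_translator itself."""

    name = "WFST"

    # Properties that are always the same for this instrument.
    _const_map = {"boresight_rotation_coord": "sky"}

    # Properties copied directly from single header keywords.
    _trivial_map = {
        "exposure_time": ("EXPTIME", dict(unit=u.s)),
        "object": "OBJECT",
    }

    @classmethod
    def can_translate(cls, header, filename=None):
        # Claim a file only when the header identifies this instrument.
        return "INSTRUME" in header and header["INSTRUME"] == "WFST"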


Thank you!
After reading the DEBUG log I have figured out the problem.
I had not put my translator in the astro_metadata_translator.translators folder, nor added from .wfst import * to that folder’s __init__.py, so only the official translators (DECam, SDSS, HSC, …) were available, and I got errors such as circular imports.

Next, how do I create a DRP.yaml file that describes the pipeline, like HSC’s DRP.yaml? The tutorial “Configuring a Butler” is too abstract for me :frowning:

Looking forward to your boot camp in a few hours!

That is not a pipelines tutorial. It’s an overview of how butler configuration works.

Now that you have some raw files in a butler, you need to think about things like biases and darks and defect masks. Later you need to think about skymaps for how to break the sky into reasonable pieces for coadds.

As for next steps you should take a look at:

And also this discussion about calibrations:

Hello,
I want to process my raw data, but I have come across an error:

cfitsio error (/home/yu/wfst_data/wfst_try/WFST/raw/all/raw/20220627/WFST00000010/raw_WFST_WFST-G_WFST00000010_4_WFST_raw_all.fits) : Incompatible type for FITS image: on disk is uint32 (HDU 0), in-memory is uint16. Read with allowUnsafe=True to permit conversions that may overflow.

What does the above error mean? I guess there may be something wrong with the simulated data.
The topic “Error while trying to create master bias frames” does not seem to help in my case.

I also get the following error when processing HSC data with the Gaia catalog; it seems I need to modify some filter-related settings in order to use the Gaia catalog.

ctrl.mpexec.singleQuantumExecutor ERROR: Execution of task 'calibrate' on quantum {instrument: 'HSC', detector: 73, visit: 238096, ...} failed. Exception RuntimeError: Unknown reference filter r_flux
ctrl.mpexec.mpGraphExecutor ERROR: Task <TaskDef(CalibrateTask, label=calibrate) dataId={instrument: 'HSC', detector: 73, visit: 238096, ...}> failed; processing will continue for remaining tasks.

Thank you!

You would need to show me the raw formatter class that you are using to read these files. Some of our formatters read the native on-disk type, others try to force the in-memory python object to be a different type. In this case it looks like you are trying to read 32-bit integer FITS as 16-bit.

Hi!
Thank you for your advice; it let me know the error was related to the raw formatter. Following your pointer I read the raw formatters of DECam and HSC, and learned that we should use a class like lsst.afw.image.ImageF to control the type used to read the data.

I have some questions about the raw formatter. This is DECam’s class; it does not try to read extension 0, it only reads the extension for the requested CCD:

class DarkEnergyCameraRawFormatter(FitsRawFormatterBase):
    translatorClass = astro_metadata_translator.DecamTranslator
    filterDefinitions = DarkEnergyCamera.filterDefinitions

    # DECam has a coordinate system flipped on X with respect to our
    # VisitInfo definition of the field angle orientation.
    # We have to specify the flip explicitly until DM-20746 is implemented.
    wcsFlipX = True

    def getDetector(self, id):
        return DarkEnergyCamera().getCamera()[id]

    def _scanHdus(self, filename, detectorId):
        """Scan through a file for the HDU containing data from one detector.

        Parameters
        ----------
        filename : `str`
            The file to search through.
        detectorId : `int`
            The detector id to search for.

        Returns
        -------
        index : `int`
            The index of the HDU with the requested data.
        metadata: `lsst.daf.base.PropertyList`
            The metadata read from the header for that detector id.

        Raises
        ------
        ValueError
            Raised if detectorId is not found in any of the file HDUs
        """
        log = logging.getLogger("DarkEnergyCameraRawFormatter")
        log.debug("Did not find detector=%s at expected HDU=%s in %s: scanning through all HDUs.",
                  detectorId, detector_to_hdu[detectorId], filename)

        fitsData = lsst.afw.fits.Fits(filename, 'r')
        # NOTE: The primary header (HDU=0) does not contain detector data.
        for i in range(1, fitsData.countHdus()):
            fitsData.setHdu(i)
            metadata = fitsData.readMetadata()
            if metadata['CCDNUM'] == detectorId:
                return i, metadata
        else:
            raise ValueError(f"Did not find detectorId={detectorId} as CCDNUM in any HDU of {filename}.")

    def _determineHDU(self, detectorId):
        """Determine the correct HDU number for a given detector id.

        Parameters
        ----------
        detectorId : `int`
            The detector id to search for.

        Returns
        -------
        index : `int`
            The index of the HDU with the requested data.
        metadata : `lsst.daf.base.PropertyList`
            The metadata read from the header for that detector id.

        Raises
        ------
        ValueError
            Raised if detectorId is not found in any of the file HDUs
        """
        filename = self.fileDescriptor.location.path
        try:
            index = detector_to_hdu[detectorId]
            metadata = lsst.afw.fits.readMetadata(filename, index)
            if metadata['CCDNUM'] != detectorId:
                # the detector->HDU mapping is different in this file: try scanning
                return self._scanHdus(filename, detectorId)
            else:
                fitsData = lsst.afw.fits.Fits(filename, 'r')
                fitsData.setHdu(index)
                return index, metadata
        except lsst.afw.fits.FitsError:
            # if the file doesn't contain all the HDUs of "normal" files, try scanning
            return self._scanHdus(filename, detectorId)

    def readMetadata(self):
        index, metadata = self._determineHDU(self.dataId['detector'])
        astro_metadata_translator.fix_header(metadata)
        return metadata

    def readImage(self):
        index, metadata = self._determineHDU(self.dataId['detector'])
        return lsst.afw.image.ImageI(self.fileDescriptor.location.path, index)

But I mimicked the behavior of DECam as below (here I only read extension 1, via fitsData.setHdu(1)). The structure of the ZTF FITS files I used is: every file has two HDUs, extension 0 for the primary header and extension 1 holding the image from a single amplifier.

class ZTFCameraRawFormatter(FitsRawFormatterBase):
    
    wcsFlipX = True
    
    translatorClass = ZTFTranslator
    filterDefinitions = ZTF_FILTER_DEFINTIONS
    
    def getDetector(self, id):
        return ZTFCamera().getCamera()[id]
    
    def readMetadata(self):
        
        fitsData = lsst.afw.fits.Fits(self.fileDescriptor.location.path, 'r')
        fitsData.setHdu(1)
        metadata = fitsData.readMetadata()
        fix_header(metadata)
        return metadata
    
    def readImage(self):
        return lsst.afw.image.ImageF(self.fileDescriptor.location.path, 1)

Then, when I use pipetask run to process the data that was ingested successfully, I get a lot of warnings like:

astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'altaz_begin' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: 'DATE-OBS not found'
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'observation_type' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: 'IMGTYPE not found'
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'dark_time' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: "Could not find ['EXPTIME'] in header"
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'datetime_end' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: "Could not find ['EXPTIME'] in header"
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'detector_exposure_id' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: 'FILENAME not found'

But if I can ingest the data, why do these warnings happen?
They only go away if I change the HDU to 0:

    def readMetadata(self):
        
        fitsData = lsst.afw.fits.Fits(self.fileDescriptor.location.path, 'r')
        fitsData.setHdu(0)
        metadata = fitsData.readMetadata()
        fix_header(metadata)
        return metadata

And the result is

characterizeImage.detection INFO: Detected 6441 positive peaks in 2287 footprints and 0 negative peaks in 0 footprints to 100 sigma
characterizeImage.detection INFO: Resubtracting the background after object detection
characterizeImage.measurement INFO: Measuring 2287 sources (2287 parents, 0 children) 
characterizeImage.measurePsf INFO: Measuring PSF
characterizeImage.measurePsf INFO: PSF star selector found 3 candidates
characterizeImage.measurePsf.reserve INFO: Reserved 0/3 sources
characterizeImage.measurePsf INFO: Sending 3 candidates to PSF determiner
characterizeImage.measurePsf.psfDeterminer WARNING: NOT scaling kernelSize by stellar quadrupole moment, but using absolute value

> WARNING: 1st context group-degree lowered (not enough samples)


> WARNING: 1st context group removed (not enough samples)

characterizeImage.measurePsf INFO: PSF determination using 3/3 stars.
py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/pipe_tasks/gf1799c5b72+6048f86b6d/python/lsst/pipe/tasks/characterizeImage.py:495: FutureWarning: Default position argument overload is deprecated and will be removed in version 24.0.  Please explicitly specify a position.
  psfSigma = psf.computeShape().getDeterminantRadius()

ctrl.mpexec.singleQuantumExecutor ERROR: Execution of task 'characterizeImage' on quantum {instrument: 'ZTF', detector: 43, visit: 20210804, ...} failed. Exception InvalidParameterError: 
  File "src/PsfexPsf.cc", line 233, in virtual std::shared_ptr<lsst::afw::image::Image<double> > lsst::meas::extensions::psfex::PsfexPsf::_doComputeImage(const Point2D&, const lsst::afw::image::Color&, const Point2D&) const
    Only spatial variation (ndim == 2) is supported; saw 0 {0}
lsst::pex::exceptions::InvalidParameterError: 'Only spatial variation (ndim == 2) is supported; saw 0'

At least it begins to work; I will read the task’s config class to figure out the error above.
My question is: why do these warnings happen?

astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'dark_time' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132/raw_ZTF_ZTF_r_ztf_20210804364132_43_ZTF_raw_all.fits: "Could not find ['EXPTIME'] in header"
astro_metadata_translator.observationInfo WARNING: Ignoring Error calculating property 'datetime_end' using translator <class 'astro_metadata_translator.translators.ztf.ZTFTranslator'> and file /home/yu/wfst_data/ztf_try/ZTF/raw/all/raw/20210804/ztf_20210804364132

EXPTIME indeed exists only in extension 0 and not in extension 1 for ZTF.
But why does DECam, which only reads the extension for the requested CCD (I didn’t find it reading extension 0, and DECam’s structure is extension 0 for the primary header plus tens of image extensions, with many header keys used in DecamTranslator existing only in extension 0), not get the same warnings as ZTF? For example:

DECam_raw[0].header['EXPTIME']
30

DECam_raw[1].header['EXPTIME']
Input In [75], in <cell line: 1>()
----> 1 DECam_raw[1].header['EXPTIME']
KeyError: "Keyword 'EXPTIME' not found."

It may be related to something in the raw formatter that I don’t know about. It looks like it would be more convenient if we set up the FITS files like HSC (one FITS file, one extension) :cry:

Thank you!

Raw ingest is doing:

and I don’t know what your metadata translator class is doing but if you copied the DECam one then that is always merging HDU 0 with the other headers:

The lsst.afw.fits.readMetadata() method is being clever and automatically merging the primary header with the requested header because DECam is using the INHERIT = T header to tell the reader that this should happen. If your data does not have that set you will have to do the merging yourself in your raw formatter. You can use the merge_headers method just like we do in the metadata translator.
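
Something along these lines in your formatter (an untested sketch, assuming the single-amplifier layout you describe):

import lsst.afw.fits

from astro_metadata_translator import fix_header, merge_headers
from lsst.obs.base import FitsRawFormatterBase


class ZTFCameraRawFormatter(FitsRawFormatterBase):
    # translatorClass, filterDefinitions, getDetector, readImage as before.

    def readMetadata(self):
        # Merge the primary header (which carries EXPTIME, IMGTYPE, etc.)
        # with the extension header for the amplifier, since these files
        # do not declare INHERIT = T.
        path = self.fileDescriptor.location.path
        primary = lsst.afw.fits.readMetadata(path, 0)
        extension = lsst.afw.fits.readMetadata(path, 1)
        merged = merge_headers([primary, extension], mode="overwrite")
        fix_header(merged)
        return merged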

That can also work but you can also fix it in code as described above or declare the inheritance.


Hello! Recently I successfully finished the characterizeImage and calibrate steps and got the calexp dataset type. Without doing fgcm or jointcal, I want to do the coadd directly.
When I run the following command:

pipetask run -b ./butler.yaml -p pipelines/DRP.yaml#makeWarp -i u/yu/calib,skymaps -o u/yu/warps -d "skymap = 'sky'" --register-dataset-types

I get:

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1379: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  connection.execute(table.insert().from_select(names, select))

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1640: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  return connection.execute(sql, *args, **kwargs)

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py:148: UserWarning: QuantumGraph is empty
  qgraph = f.makeGraph(pipelineObj, args)

lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 106, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 151, in qgraph
    raise RuntimeError("QuantumGraph is empty.")
RuntimeError: QuantumGraph is empty.

This means it can’t find data to execute the makeWarp step, and if I deliberately change the skymap to a wrong name I get the same error, for example:

(lsst-scipipe-0.8.1) [yu@localhost 2022-09-08]$ pipetask run -b ./butler.yaml -p pipelines/DRP.yaml#makeWarp -i u/yu/calib,skymaps -o u/yu/warps -d "tract = 9813 AND skymap = 'hsc_rings_v1' AND patch in (38, 39, 40, 41)"  --register-dataset-types
py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1379: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  connection.execute(table.insert().from_select(names, select))

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/daf_butler/g6b22db343a+d18c45d440/python/lsst/daf/butler/registry/interfaces/_database.py:1640: SAWarning: TypeDecorator Base64Region() will not produce a cache key because the ``cache_ok`` attribute is not set to True.  This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions.  Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
  return connection.execute(sql, *args, **kwargs)

py.warnings WARNING: /home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py:148: UserWarning: QuantumGraph is empty
  qgraph = f.makeGraph(pipelineObj, args)

lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 106, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/home/yu/lsst_stack/23.0.1/stack/miniconda3-py38_4.9.2-0.8.1/Linux64/ctrl_mpexec/g6727979600+15d2600a0d/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 151, in qgraph
    raise RuntimeError("QuantumGraph is empty.")
RuntimeError: QuantumGraph is empty.

but actually I only have the following skymaps (forgive me for naming the skymaps arbitrarily):

(lsst-scipipe-0.8.1) [yu@localhost 2022-09-08]$ butler query-data-ids . skymap
         skymap        
-----------------------
                    di3
            discrete/hi
           discrete/hit
          discrete/hits
discrete/validation_hsc
                    sky
                   sky1

It seems like the command didn’t read the content after -d:

pipetask run -b ./butler.yaml -p pipelines/DRP.yaml#makeWarp -i u/yu/calib,skymaps -o u/yu/warps -d "tract = 9813 AND skymap = 'hsc_rings_v1' AND patch in (38, 39, 40, 41)"  --register-dataset-types

So, what should I modify, or what should I refer to?
I may also have set config.skyMap["discrete"] wrong; is there a command that can determine the tracts automatically from the images I input?
Thank you!

Are there any additional warnings above the ones you’ve printed? With recent versions of our code you should get much more diagnostic information about an empty QuantumGraph, and while a bad skymap can be one cause of that, it’s not the only one.

That said, I’m worried the lsst-scipipe-0.8.1 in your prompt suggests you’re using a fairly old version of the stack, and you may not be able to get those diagnostics without upgrading.

This is all the log it printed, and the version I use is 23.0.1; I will install a newer pipeline later. I remember that with the gen2 butler there was a command that made tracts and patches from the data I input; has it been removed?

makeDiscreteSkyMap.py DATA --rerun coadd_test_2 --calib DATA --id visit=176..184 --clobber-config

Thank you!

You use this:

$ butler make-discrete-skymap --help

You may also want to look at the FAQ on why a quantum graph can be empty:

https://pipelines.lsst.io/v/weekly/middleware/faq.html#how-do-i-fix-an-empty-quantumgraph


Recently I have been wondering about the pixel scale the pipeline reports.
For example, using the function in ref_match.py:

ref_match._getExposureMetadata(raw_ztf)

And part of the ZTF header is:

XTENSION= 'IMAGE   '           / IMAGE extension                                
BITPIX  =                  -32 / number of bits per data pixel                  
NAXIS   =                    2 / number of data axes                            
NAXIS1  =                 3072 / length of data axis 1                          
NAXIS2  =                 3080 / length of data axis 2                          
PCOUNT  =                    0 / required keyword; must = 0                     
GCOUNT  =                    1 / required keyword; must = 1                     
EXTNAME = '2       '                                                            
HDUVERS =                    1                                                  
CCD_ID  =                   11 / ID value of CCD detector                                   
WCSVERS = '2.1     '           / WCS version number                             
WCSAXES =                    2 / Number of WCS axes                             
WCSNAME = 'ZTF     '           / Name of the WCS coordinate system              
RADESYS = 'ICRS    '           / Coordinate reference system                    
CTYPE1  = 'RA---TAN'           / Name of coord X axis                           
CTYPE2  = 'DEC--TAN'           / Name of coord Y axis                           
CRPIX1  =            -3319.671 / Coord system X ref pixel                       
CRPIX2  =             6469.751 / Coord system Y ref pixel                       
CRVAL1  =              6.66667 / Coord system value at CRPIX1                   
CRVAL2  =                62.15 / Coord system value at CRPIX2                   
CUNIT1  = 'deg     '           / Coordinate units for X axis                    
CUNIT2  = 'deg     '           / Coordinate units for Y axis                    
CD1_1   =        -0.0002815649 / WCS matrix                                     
CD1_2   =            6.814E-07 / WCS matrix                                     
CD2_1   =           -5.349E-07 / WCS matrix                                     
CD2_2   =        -0.0002815497 / WCS matrix                                     
CCD_ROT =                   0. / CCD rotation angle                                                                          

I get the following output:

Struct(bbox=(minimum=(0, 0), maximum=(3071, 3079)); wcs=Non-standard SkyWcs (Frames: PIXELS, IWC, SKY): 
Sky Origin: (6.6666700000, +62.1500000000)
Pixel Origin: (6154, -3090)
Pixel Scale: 0.263498 arcsec/pixel; photoCalib=None; filterName=r; epoch=59430.36474012732)

But ZTF’s website gives the following:

Pixel scale	1.0"/pixel

and CD1_1 also suggests that:

CD1_1   =        -0.0002815649 / WCS matrix 
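
A quick check of the scale implied by this CD matrix:

import math

cd1_1, cd1_2 = -0.0002815649, 6.814e-07
cd2_1, cd2_2 = -5.349e-07, -0.0002815497
scale = math.sqrt(abs(cd1_1 * cd2_2 - cd1_2 * cd2_1)) * 3600.0
print(scale)  # ~1.01 arcsec/pixel, consistent with the documented ZTF scale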

So why does the function return 0.263498 arcsec/pixel?
I guess it may be caused by having set something wrong in the obs package?

Thank you!

Forgive me if this is a red herring, but I can see DECam mentioned above in this topic, and the DECam plate scale is 0.2626–0.2637 arcsec/pixel, so your guess of something wrong in the obs package, or perhaps DECam-based inputs or values being set somewhere, might be the culprit?