Running the stack on HSC data for a DESC PSF project

This may indicate a problem in the code. The singleFrameDriver.py (and the ctrl_pool framework) is more sensitive to memory problems than processCcd.py. I’ll have a dig around.

I’ve filed a ticket to reduce the chatter.

Oh, I found the problem in the log you sent:

python: src/hsm/PSFCorr.cpp:731: void galsim::hsm::find_ellipmom_1(galsim::ConstImageView<double>, double, double, double, double, double, double&, double&, double&, double&, double&, double&, double&, boost::shared_ptr<galsim::hsm::HSMParams>): Assertion `iy1 <= iy2' failed.

This was fixed a month ago, both in our stack and upstream in GalSim. Please check that you’re using a recent version of the LSST stack.

Dunno why you wouldn’t have seen this error with your own parallelisation, except that perhaps you didn’t have the same environment (e.g., didn’t setup meas_extensions_shapeHSM).

What is the easiest/cleanest way to update GalSim to deal with this problem?

I tried to follow the instructions here, but even though the correct version appeared when I did eups list -s, import galsim still didn’t work. I also tried doing eups distrib install with the latest weekly release, but when I tried to use it in combination with other packages (which worked fine with v12_0) I hit a strange memory-allocation problem when using galsim inside an IPython notebook.

eups distrib install is the way to go. Did you mix and match versions after installing the latest weekly release? If you tried to use, say, lsst_distrib from v12_0 together with galsim from the latest weekly (e.g., via setup -j), that could cause memory problems. You need to use a consistent set of versions.
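
If it helps, here is one quick sanity check once you think everything is set up consistently (just a sketch; the point is simply that both imports should resolve to the same installed stack tree):

import galsim
import lsst.afw.image

# Both of these paths should live under the same eups-installed stack tree;
# if one points at v12_0 and the other at the new weekly, you have a mix.
print(galsim.__file__)
print(lsst.afw.image.__file__)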

If that’s not the problem, could you please post more details?

Yes, I was mixing lsst_distrib and galsim between v12_0 and the new release. Thanks!

Hi all. We’re coming back to this project after a bit of a hiatus now.

Things seem to have changed a bit (a good sign, I think!). I’m trying to use the data butler and the newest version of the ci_hsc module to access an old repository I made with version 11.

Traceback (most recent call last):
  File "test1.py", line 9, in <module>
    butler = lsst.daf.persistence.Butler("/global/cscratch1/sd/zuntz/lsst/wilman-run/run2")
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/daf_persistence/12.1/python/lsst/daf/persistence/butler.py", line 285, in __init__
    self._addRepo(args, inout='out', defaultMapper=defaultMapper, butlerIOParents=butlerIOParents)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/daf_persistence/12.1/python/lsst/daf/persistence/butler.py", line 375, in _addRepo
    "Could not infer mapper and one not specified in repositoryArgs:%s" % args)
RuntimeError: Could not infer mapper and one not specified in repositoryArgs:RepositoryArgs(root='/global/cscratch1/sd/zuntz/lsst/wilman-run/run2', cfgRoot=None, mapper=None, mapperArgs={}, tags=set([]), mode='rw')

I get similar errors when trying to create a new repository by ingesting raw data again:

Traceback (most recent call last):
  File "/opt/lsst/software/stack/Linux64/pipe_tasks/12.1-20-g324f6d3+6/bin/ingestImages.py", line 3, in <module>
    IngestTask.parseAndRun()
  File "/opt/lsst/software/stack/Linux64/pipe_tasks/12.1-20-g324f6d3+6/python/lsst/pipe/tasks/ingest.py", line 380, in parseAndRun
    args = parser.parse_args(config)
  File "/opt/lsst/software/stack/Linux64/pipe_base/12.1-5-g06c326c+6/python/lsst/pipe/base/argumentParser.py", line 459, in parse_args
    namespace.camera = mapperClass.getCameraName()
AttributeError: 'NoneType' object has no attribute 'getCameraName'

Are there some changes I have to make now? Thanks so much for ongoing help - I know you guys are amazingly busy.

I’ve managed to fix these problems, which were mainly due to having the wrong versions of repositories and a wrong _parent link, because I had copied things over from a remote machine.
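
In case anyone else hits the same thing, the kind of repair involved looks roughly like this (a sketch for a Gen2-style repository; the parent path and mapper class here are placeholders for whatever your repository actually uses):

import os

repo = "/global/cscratch1/sd/zuntz/lsst/wilman-run/run2"   # output repo from the traceback above
parent = "/path/to/the/input/repo/on/this/machine"         # placeholder

# The butler infers the mapper from a _mapper file in the repo (or a parent).
with open(os.path.join(repo, "_mapper"), "w") as f:
    f.write("lsst.obs.hsc.HscMapper\n")

# A _parent link copied from another machine points at a path that does not
# exist here; repoint it at the local copy of the parent repository.
parentLink = os.path.join(repo, "_parent")
if os.path.islink(parentLink):
    os.remove(parentLink)
os.symlink(parent, parentLink)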

Cheers,
Joe

Hi all - congratulations on the data release!

One more problem I’ve hit trying to use the NERSC installation of v12.1: I can ingest okay, but when I try processCcd.py I get the traceback below - has anyone seen it before?

Traceback (most recent call last):
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/cmdLineTask.py", line 346, in __call__
    result = task.run(dataRef, **kwargs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/timer.py", line 121, in wrapper
    res = func(self, *args, **keyArgs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_tasks/12.1/python/lsst/pipe/tasks/processCcd.py", line 181, in run
    icSourceCat = charRes.sourceCat,
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/timer.py", line 121, in wrapper
    res = func(self, *args, **keyArgs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_tasks/12.1/python/lsst/pipe/tasks/calibrate.py", line 383, in run
    icSourceCat=icSourceCat,
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_tasks/12.1/python/lsst/pipe/tasks/calibrate.py", line 462, in calibrate
    sourceCat=sourceCat,
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/timer.py", line 121, in wrapper
    res = func(self, *args, **keyArgs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/meas_astrom/12.1/python/lsst/meas/astrom/astrometry.py", line 197, in run
    res = self.solve(exposure=exposure, sourceCat=sourceCat)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/timer.py", line 121, in wrapper
    res = func(self, *args, **keyArgs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/meas_astrom/12.1/python/lsst/meas/astrom/astrometry.py", line 285, in solve
    calib=expMd.calib,
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/timer.py", line 121, in wrapper
    res = func(self, *args, **keyArgs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/meas_algorithms/12.1/python/lsst/meas/algorithms/loadReferenceObjects.py", line 214, in loadPixelBox
    loadRes = self.loadSkyCircle(ctrCoord, maxRadius, filterName)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/timer.py", line 121, in wrapper
    res = func(self, *args, **keyArgs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/meas_astrom/12.1/python/lsst/meas/astrom/loadAstrometryNetObjects.py", line 98, in loadSkyCircle
    self._readIndexFiles()
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pipe_base/12.1/python/lsst/pipe/base/timer.py", line 121, in wrapper
    res = func(self, *args, **keyArgs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/meas_astrom/12.1/python/lsst/meas/astrom/loadAstrometryNetObjects.py", line 162, in _readIndexFiles
    self.multiInds = AstrometryNetCatalog(self.andConfig)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/meas_astrom/12.1/python/lsst/meas/astrom/multiindex.py", line 186, in __init__
    self._initFromCache(cacheName)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/meas_astrom/12.1/python/lsst/meas/astrom/multiindex.py", line 238, in _initFromCache
    with pyfits.open(filename) as hduList:
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pyfits/3.4.0+6/lib/python/pyfits-3.4-py2.7-linux-x86_64.egg/pyfits/hdu/hdulist.py", line 124, in fitsopen
    return HDUList.fromfile(name, mode, memmap, save_backup, **kwargs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pyfits/3.4.0+6/lib/python/pyfits-3.4-py2.7-linux-x86_64.egg/pyfits/hdu/hdulist.py", line 266, in fromfile
    save_backup=save_backup, **kwargs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pyfits/3.4.0+6/lib/python/pyfits-3.4-py2.7-linux-x86_64.egg/pyfits/hdu/hdulist.py", line 823, in _readfrom
    hdu = _BaseHDU.readfrom(ffo, **kwargs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pyfits/3.4.0+6/lib/python/pyfits-3.4-py2.7-linux-x86_64.egg/pyfits/hdu/base.py", line 370, in readfrom
    **kwargs)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pyfits/3.4.0+6/lib/python/pyfits-3.4-py2.7-linux-x86_64.egg/pyfits/hdu/base.py", line 430, in _readfrom_internal
    header = Header.fromfile(data, endcard=not ignore_missing_end)
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pyfits/3.4.0+6/lib/python/pyfits-3.4-py2.7-linux-x86_64.egg/pyfits/header.py", line 423, in fromfile
    padding)[1]
  File "/global/common/cori/contrib/lsst/lsstDM/v12_1/Linux64/pyfits/3.4.0+6/lib/python/pyfits-3.4-py2.7-linux-x86_64.egg/pyfits/header.py", line 492, in _from_blocks
    raise IOError('Header missing END card.')
IOError: Header missing END card.

I suggest having a look at your astrometry_net_data package. Specifically, you might check that the andCache.fits file isn’t corrupted.
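
A quick way to test that, assuming the usual layout where the cache lives at the top of the astrometry_net_data directory (adjust the filename if yours differs):

import os
import pyfits   # astropy.io.fits works the same way on newer stacks

andDir = os.environ["ASTROMETRY_NET_DATA_DIR"]
cache = os.path.join(andDir, "andCache.fits")

# A truncated or corrupted cache will fail here with the same
# "Header missing END card" error as in the traceback above.
with pyfits.open(cache) as hduList:
    hduList.verify()
    print("Read %d HDUs OK from %s" % (len(hduList), cache))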

Thanks - you’re right! Looks like the ASTROMETRY_NET_DATA_DIR is not set quite right on the NERSC installation (there are two subdirs that do seem to be valid).

Cheers!

Hello, I’m a first-year graduate student at Carnegie Mellon working with Rachel Mandelbaum and Joe on testing PSF modelling errors for LSST.

I’m using the HSC PSF data that Joe talked about here. Is anyone familiar with how to extract the WCS from the data files?

Many thanks,
Husni

What are you trying to do with the WCS?

You should be able to get the WCS with a butler call: butler.get('calexp_wcs', dataIds). That will give you an lsst.afw.wcs object.
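
For example, something along these lines (the repo path, visit, and ccd are placeholders, and this assumes a recent enough stack that the calexp_wcs sub-dataset exists; see below):

import lsst.daf.persistence as dafPersist

butler = dafPersist.Butler("/path/to/your/rerun")   # placeholder
dataId = dict(visit=1234, ccd=50)                   # placeholder HSC visit/ccd
wcs = butler.get("calexp_wcs", dataId)              # reads just the WCS, not the pixels
print(wcs.pixelToSky(0.0, 0.0))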

I’m trying to connect the data from all the CCDs, so I thought getting the WCS would help with this (or is there a simpler way, e.g. asking the butler for the calibrated exposures from all the CCDs at the same time?).

I’m not sure exactly how dataIds should be specified, but I tried using butler.get('calexp_wcs', dataIds), specifying the visit and the ccd, and got back an error saying AttributeError: 'HscMapper' object has no attribute 'map_calexp_wcs'. Am I using it wrong?

Thanks,
Husni

What do you mean “connect the data from all CCDs?” Are you trying to make a mosaic of all the images (I believe we have a tool for that), or do something with cross-CCD catalogs?

Oh, right: what version of the stack are you using? That looks like you don’t have a recent-enough version.

You could try the older way:

calexp = butler.get("calexp", dataId, immediate=True)
tanWcs = calexp.getWcs()
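
Then, for example, to sanity-check it (arbitrary pixel coordinates; just showing the pixel-to-sky round trip):

import lsst.afw.geom as afwGeom

# Convert a pixel position to sky coordinates and back.
coord = tanWcs.pixelToSky(afwGeom.Point2D(1000.0, 1000.0))
pixel = tanWcs.skyToPixel(coord)
print(coord)
print(pixel)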

I’m trying to do analysis on the entire PSF field at once, so something like a mosaic of all the images would be very helpful. Could you please point me at the tools for that?

That seems to work, thanks a lot!

@price @reiss or @jbosch can probably help you with that more than I can.

@husni, I’m afraid it still isn’t really clear to me what you want to do. I think if you want to look at the PSF model, a coadd wouldn’t really be very helpful. I imagine you’d be better off looking at postage stamps of the PSF (vs. postage stamps of stars at the same positions on the focal plane), or perhaps various shape residuals. Having a WCS (to correct for geometric distortions) would give you a different version of all of those metrics, but I don’t believe it’s essential.
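
To make that concrete, here is a rough sketch of the kind of comparison I mean (the flag name and the exact cutout call may differ between stack versions, so treat this as illustrative rather than exact):

import lsst.afw.image as afwImage
import lsst.daf.persistence as dafPersist

butler = dafPersist.Butler("/path/to/your/rerun")    # placeholder
dataId = dict(visit=1234, ccd=50)                    # placeholder

calexp = butler.get("calexp", dataId, immediate=True)
sources = butler.get("src", dataId, immediate=True)
psf = calexp.getPsf()

for src in sources:
    if not src.get("calib_psfUsed"):          # stars that went into the PSF model; flag name may vary
        continue
    center = src.getCentroid()                # pixel position of the star
    psfStamp = psf.computeImage(center)       # PSF model postage stamp at that position
    bbox = psfStamp.getBBox()
    bbox.clip(calexp.getBBox())
    starStamp = afwImage.ExposureF(calexp, bbox, afwImage.PARENT)  # matching cutout of the data
    # ... compare psfStamp with starStamp: residuals, second moments, etc. ...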

Hello, I would like to ask something related.
I am trying to process data from HSC. I downloaded the calibration files closest in date to the raw data, as well as the calibRegistry.sqlite3, from http://tigress-web.princeton.edu/~pprice/CALIB-LSST-20160419/. When I run singleFrameDriver.py, the following error comes out:

FATAL 2017-07-11T03:23:01.001 singleFrameDriver ({'taiObs': '2014-09-24', u'pointing': 997, 'visit': 7802, u'dateObs': '2014-09-24', u'filter': 'HSC-I', u'field': 'ABELL2319', u'ccd': 1, 'expTime': 240.0})(cmdLineTask.py:351)- Failed on dataId={'taiObs': '2014-09-24', u'pointing': 997, 'visit': 7802, u'dateObs': '2014-09-24', u'filter': 'HSC-I', u'field': 'ABELL2319', u'ccd': 35, 'expTime': 240.0}: Unable to retrieve bias for {'taiObs': '2014-09-24', u'pointing': 997, 'visit': 7802, u'dateObs': '2014-09-24', u'filter': 'HSC-I', u'field': 'ABELL2319', u'ccd': 35, 'expTime': 240.0}: No locations for get: datasetType:bias dataId:DataId(initialdata={'taiObs': '2014-09-24', u'pointing': 997, 'visit': 7802, u'dateObs': '2014-09-24', u'filter': 'HSC-I', u'field': 'ABELL2319', u'ccd': 35, 'expTime': 240.0}, tag=set([]))

I thought it might be that the difference in dates prevented the ISR from locating the bias.
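
One way to check would be to look at the calib registry directly, something like this (the table and column names are my guess at the usual HSC calib schema, so adjust as needed):

import sqlite3

conn = sqlite3.connect("/path/to/CALIB/calibRegistry.sqlite3")   # placeholder path
# List the bias entries and their validity windows to see whether 2014-09-24
# actually falls inside one of them.
for row in conn.execute("SELECT ccd, calibDate, validStart, validEnd FROM bias LIMIT 20"):
    print(row)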

So I tried to construct the BIAS and DARK from the downloaded calibration frames from the same date. However, when I run constructBias.py, the error is:

Traceback (most recent call last): 
  File "/../lsstsw/stack/Linux64/pipe_drivers/13.0-3-gfcfef02+7/bin/constructBias.py", line 3, in <module>
    BiasTask.parseAndSubmit()
  File "/../lsstsw/stack/Linux64/ctrl_pool/13.0-2-gb1fa231+3/python/lsst/ctrl/pool/parallel.py", line 424, in parseAndSubmit
    **kwargs)
  File "/../lsstsw/stack/Linux64/ctrl_pool/13.0-2-gb1fa231+3/python/lsst/ctrl/pool/parallel.py", line 333, in parse_args
    args.parent = self._parent.parse_args(config, args=leftover, **kwargs)
  File "/../lsstsw/stack/Linux64/pipe_drivers/13.0-3-gfcfef02+7/python/lsst/pipe/drivers/constructCalibs.py", line 269, in parse_args
    namespace = ArgumentParser.parse_args(self, *args, **kwargs)
  File "/../lsstsw/stack/Linux64/pipe_base/13.0-3-g7fa07e0/python/lsst/pipe/base/argumentParser.py", line 465, in parse_args
    namespace.camera = mapperClass.getCameraName()
AttributeError: 'NoneType' object has no attribute 'getCameraName'

Would anyone know how to overcome this error? I would like to generate calexp and src. Any help or discussion is much appreciated; many thanks.



The problem with AttributeError: 'NoneType' object has no attribute 'getCameraName' is now solved; something to do with the directory layout confusing things, I guess.

The reason why I cannot use the released calibration files is still unclear. The date of the raw file is 20140924 while the calibration file is from 20140920, which is within 720 days.

Hi Vera. Sorry for not getting back to you sooner.

Could you please post the command line you’re using, and perhaps the full log? I suspect you may not be using the --calib command-line argument to point to your downloaded calib root directory.
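
In the meantime, a quick way to check from Python whether the calib repo resolves at all (sketch only; the paths are placeholders, and passing calibRoot through the butler like this assumes the Gen2 HSC mapper accepts it):

import lsst.daf.persistence as dafPersist

# Placeholders: the data repo you ingested into and the downloaded CALIB directory.
butler = dafPersist.Butler("/path/to/DATA", calibRoot="/path/to/CALIB")
dataId = dict(visit=7802, ccd=35, filter="HSC-I", dateObs="2014-09-24", taiObs="2014-09-24")

# If this prints False, the mapper/calibRegistry cannot find a matching bias,
# which is the same failure singleFrameDriver.py reported.
print(butler.datasetExists("bias", dataId))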