DECam Data: RuntimeError: No PSF candidates supplied

We are attempting to process some DECam data starting from raw files. We have ingested and certified the files, and we are able to run ISR with the pipelines below (an ISR crosstalk-prep step, then ISR itself). However, when we try to run the characterizeImage task with

description: The AP template building pipeline specialized for DECam
instrument: lsst.obs.decam.DarkEnergyCamera
imports:
  - location: $AP_PIPE_DIR/pipelines/ProcessCcd.yaml
    exclude:
      - isr
  - location: $AP_PIPE_DIR/pipelines/DarkEnergyCamera/RunIsrWithCrosstalk.yaml
tasks:
  characterizeImage:
    class: lsst.pipe.tasks.characterizeImage.CharacterizeImageTask
    config:
      refObjLoader.ref_dataset_name: 'ps1_pv3_3pi_20170110'

we get the following error:

characterizeImage.measurement INFO: Measuring 688 sources (688 parents, 0 children)
characterizeImage.measurePsf INFO: Measuring PSF
characterizeImage.measurePsf INFO: PSF star selector found 0 candidates
characterizeImage.measurePsf.reserve INFO: Reserved 0/0 sources
characterizeImage.measurePsf INFO: Sending 0 candidates to PSF determiner   
RuntimeError: No PSF candidates supplied.

Does anyone know why this is occurring or what we might do to investigate this further?

Does this happen for only one detector, or for all of them? What do the images (butler "postISRCCD" dataset) that go into characterizeImage look like?
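
For example, one quick way to test this is to restrict the -d data query to a single detector when running the pipeline. A rough sketch, where the pipeline filename, collections, and IDs are placeholders:

# Hypothetical single-detector run: only the "AND detector = 10" clause
# differs from a normal invocation; substitute your own repo, pipeline
# file, collections, and exposure ID.
pipetask run \
    -b . \
    -i DECam/raw/all,DECam/calib \
    -o DECam/runs/characterize-ccd10 \
    -p ApTemplate.yaml \
    -d "exposure = 845294 AND detector = 10" \
    --register-dataset-types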

Running them one detector at a time (at least for a few different detectors), most fail with RuntimeError: No PSF candidates supplied. One detector (CCD 10), however, does find 3 PSF candidates and gets one step further before failing:

characterizeImage.measurement INFO: Measuring 1188 sources (1188 parents, 0 children) 
characterizeImage.measurePsf INFO: Measuring PSF
characterizeImage.measurePsf INFO: PSF star selector found 3 candidates
characterizeImage.measurePsf.reserve INFO: Reserved 0/3 sources
characterizeImage.measurePsf INFO: Sending 3 candidates to PSF determiner
characterizeImage.measurePsf.psfDeterminer WARN: NOT scaling kernelSize by stellar quadrupole moment, but using absolute value

> WARNING: 1st context group-degree lowered (not enough samples)
> WARNING: 1st context group removed (not enough samples)
characterizeImage.measurePsf INFO: PSF determination using 3/3 stars.

Zooming in on one of the postISRCCD images may reveal the problem. I'm not sure what happened, but this does not look like what I expected. This is just plotted with plt.imshow(np.arcsinh(data)). [image]

For comparison, here is what I believe is the raw data for this same region: [image]

Looking at other regions and CCDs in the raw data shows more or less what I would expect, namely PSF-like stars. All CCDs seem to have the same split down the middle of the image. I haven't worked much with DECam raw data, but perhaps that is normal CCD readout.

Something is definitely going wrong during ISR. What calibrations are you using?

The different background levels you're seeing in the raws come from the two amplifiers per chip.

Whoa, well there’s why it can’t find any PSFs :joy: When you first shared this problem with me I thought it was wonderful that ISR worked, but obviously it just pretended to work. Can you please share the full log from ISR, and snippets of a representative “master” bias and flat frames? I wonder if this has something to do with the fact that you’re using the VR filter.
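
If it helps to locate them, something like the following should list the combined calibs in the output collection and, in recent stack versions, print their file URIs so you can open them directly. The repo path and collection name here are placeholders, and option names may vary with your butler version:

# Hedged sketch: find the combined flats (similarly "bias") produced by
# the cp_pipe run. --show-uri may not exist in older butler CLIs.
butler query-datasets . flat \
    --collections DECam/calib/constructed-flats \
    --show-uri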

It looks like something may have gone wrong even further back. The calibration files I used initially are DECam flats and biases. Here is one of the raw fri.fits.fz images:
[image]

Here is one of the constructed flats (not the same one; I'm not sure how to trace a given file back through the stack):
[image]

Here is the YAML pipeline I used to build the constructed flats:

description: cp_pipe FLAT calibration construction optimized for single-CCD cameras
instrument: lsst.obs.decam.DarkEnergyCamera
tasks:
  isr:
    class: lsst.ip.isr.isrTask.IsrTask
    config:
      connections.ccdExposure: 'raw'
      connections.outputExposure: 'cpFlatProc'
      connections.bias: 'bias'
      doBias: True
      doVariance: True
      doLinearize: False
      doCrosstalk: False
      doDefect: True
      doNanMasking: True
      doInterpolate: True
      doDark: False
      doBrighterFatter: False
      doFlat: False
      doFringe: False
      doApplyGains: False
  cpFlatMeasure:
    class: lsst.cp.pipe.cpFlatNormTask.CpFlatMeasureTask
    config:
      connections.inputExp: 'cpFlatProc'
      connections.outputStats: 'flatStats'
      doVignette: False
  cpFlatNorm:
    class: lsst.cp.pipe.cpFlatNormTask.CpFlatNormalizationTask
    config:
      connections.inputMDs: 'flatStats'
      connections.outputScales: 'cpFlatNormScales'
  cpFlatCombine:
    class: lsst.cp.pipe.cpCombine.CalibCombineTask
    config:
      connections.inputExps: 'cpFlatProc'
      connections.inputScales: 'cpFlatNormScales'
      connections.outputData: 'flat'
      calibrationType: 'flat'
      calibrationDimensions: ['physical_filter']
      exposureScaling: InputList
      scalingLevel: AMP
      doVignette: False
contracts:
  - isr.doFlat == False

Could something have gone wrong at this step that is then affecting all downstream processing?

I ingested the flats with butler ingest-raws . /filepath/c4d*fri.fits.fz --transfer link and ran the flat.yaml file with pipetask run -d "exposure IN (845294, 845295, 845296)" -b . -i DECam/raw/all,DECam/calib,DECam/calib/cert-biases -o DECam/calib/constructed-flats -p flat.yaml --register-dataset-types. Now that I think of it, I'm not sure I ever specified which files were flats, which were raw science images, and which were biases when ingesting. Is that something the stack can figure out on its own, e.g. from the file extension, or do I need to specify it in some way? If so, how should I alter my ingest and processing of the biases and flats?

You'll have to do a lot of the bookkeeping manually for now, I'm afraid. But this does look like the culprit: something with stars in it snuck into your flats! I'd be willing to bet the issue is 100% there and has nothing to do with how you are running the pipelines.

During the ingest-raws step, you can ingest any and all raws (science, bias, flat) without a problem. When you do the actual construction step (running the flat.yaml pipeline), you need to make sure manually that you are only feeding it the exposure numbers of actual raw flat frames that you want combined. The same goes for the earlier step of running bias.yaml. I have not tested what happens if you run a flat-building pipeline on non-flats, but I would guess it plows ahead and yields exactly what you're seeing.
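
One way to double-check which of your ingested exposures the registry actually considers flats is to look at the exposure records and their observation_type. A rough sketch, assuming a reasonably recent butler CLI (the repo path is a placeholder, and option names may differ between versions):

# List exposure records that the registry has typed as flats; run with
# 'bias' (or no constraint at all) to see the other frame types.
butler query-dimension-records . exposure \
    --where "instrument = 'DECam' AND exposure.observation_type = 'flat'"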

When I run pipetask run, how can I specify that it should only use flats? My command (shown in full in my previous comment) just used -d "exposure IN (845294, 845295, 845296)". Can I add to this query to make it run only on flats (and similarly for biases)?

Actually, I think these exposure numbers are the exposure numbers of the science images. I suppose it needs the exposure IDs of the flats and biases that correspond to the science images I want to process later.

I could have been clearer: yes, the pipeline only runs on the exposures you specify in the -d data query. What I mean is that you need to keep track yourself of which ingested exposures are flats, and you ought to visually inspect at least a couple of CCDs in each of those exposures to make sure they really are the flats you think they are.

You can add something like ... AND exposure.observation_type = 'flat' to the -d clause to restrict the query to flats (or to biases, darks, etc.).
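
For example, the flat-construction command from earlier in this thread could be restricted along these lines (flags and collections copied from that command; the observation_type constraint is the only change, and this is illustrative rather than a recommended invocation):

# Illustrative only: build flats from exposures the registry knows are
# flats. In practice you would usually also keep an explicit
# "exposure IN (...)" list of the specific flat exposures to combine.
pipetask run \
    -b . \
    -i DECam/raw/all,DECam/calib,DECam/calib/cert-biases \
    -o DECam/calib/constructed-flats \
    -p flat.yaml \
    -d "instrument = 'DECam' AND exposure.observation_type = 'flat'" \
    --register-dataset-types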