Generating BFK for DECam

Hi Folks!

I’m working on generating a Brighter-Fatter Kernel (BFK) for DECam, and I wanted to check in about something I noticed with the workflow. Generating the BFK requires extracting a PTC dataset from paired flats first, which is to be expected, since fundamentally you need pixel-pixel covariances to extract the kernel. I’m a little worried because the previous work I’ve seen on brighter-fatter for DECam and HSC all uses large datasets of paired flats (>1000 frames), sometimes with a ramp in exposure time, and that work notes that many of the flat pairs were taken at the same exposure time. The problem is that when passing a large collection of paired flats to cpPtc, cpExtractPtcTask will only use the first pair of exposures at a given exposure time and ignore the rest.

Does anyone know if this will cause any significant issues when extracting the BFK? My understanding is that the kernel is fairly small, so you need a very large sample of flats to beat down the noise when measuring the covariances, and I’m not sure the range of exposure times sampled in the DECam ramps will be enough to measure the kernel above the noise. I know the simple fix here is to create a modified version of cpExtractPtcTask.py that runs across all pairs of flats at a given exposure time; it’s not a terribly hard thing to do, but I wanted to see if anyone has encountered or thought about this issue before.
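
To make concrete what I mean by “only the first pair is used”, here is a toy sketch (purely illustrative, not the actual cp_pipe code) of pairing by exposure time, where only the first two exposures at each time survive:

from collections import defaultdict

# Toy illustration only: group flats by exposure time and keep just the
# first pair per time, mimicking the behaviour described above.
exposures = [
    ("exp0001", 10.0), ("exp0002", 10.0),   # first pair at 10 s -> used
    ("exp0003", 10.0), ("exp0004", 10.0),   # second pair at 10 s -> ignored
    ("exp0005", 20.0), ("exp0006", 20.0),   # first pair at 20 s -> used
]

byTime = defaultdict(list)
for expId, expTime in exposures:
    byTime[expTime].append(expId)

# Only one pair per exposure time survives the matching.
pairs = {expTime: ids[:2] for expTime, ids in byTime.items()}
print(pairs)  # {10.0: ['exp0001', 'exp0002'], 20.0: ['exp0005', 'exp0006']}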

Thanks for the help!

Hi @antenglert, thank you for your post! Have you tried running the PTC pipeline with matchExposuresType=FLUX in the Extract task? That way, the flats will be matched by flux rather than exposure time, i.e. the code will look for pairs of flats with the same flux level. Note that the parameter matchExposuresByFluxKeyword for that same task tells the code which header keyword the flux can be found in. I hope this helps!
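
Just to show where those options live, here is a minimal sketch (not a full pipeline run); “FLUXLVL” is a made-up header keyword used only for illustration, so substitute whatever your flats actually carry:

# Minimal sketch: set the flux-matching options on the PTC Extract task config.
from lsst.cp.pipe.ptc import PhotonTransferCurveExtractTask

config = PhotonTransferCurveExtractTask.ConfigClass()
config.matchExposuresType = "FLUX"              # match flats by flux, not exposure time
config.matchExposuresByFluxKeyword = "FLUXLVL"  # header keyword holding the flux level (placeholder)

On the pipetask command line the equivalent would be something like -c ptcExtract:matchExposuresType=FLUX, assuming the Extract task label in cpPtc.yaml is ptcExtract (it may differ between pipeline versions).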

Hmm, I saw those options but wasn’t sure if enabling them would make a difference. Since many of my flats have the same exposure time, they should have similar fluxes, causing the same issue. Additionally, looking at the headers of the DECam flats in ds9, I don’t think an exposure-flux keyword is present. One fix I was thinking about would be to add a keyword to each FITS header with a custom index and tell the pipeline to pair frames by that index (just by enabling matchExposuresType=FLUX and pointing matchExposuresByFluxKeyword at it). Not a very difficult fix, to be honest, although I am not sure if/how the pipeline carries all FITS headers through the ISR step.

Could you elaborate on the input dataset of flats you are using with the PTC pipeline? I thought that perhaps you had flats with the same exposure time but with different flux levels. The code assumes that the input dataset consists of a “ramp” of flats increasing in flux and that at each flux level there is at least one pair of flats.

Right, currently I’m using flats from the “PTC-Ramp” taken during DECam engineering in 2013. From what I can tell, the ramp is just in exposure time, so indirectly a ramp in flux (in principle I can check this by evaluating the mean of each frame, since flux information isn’t stored in the header or the Science Archive; otherwise I would have used it instead of exposure time for selecting flats). The ramp includes 38 different exposure times (ranging from 0.02 s to 35 s), each with a minimum of 40 flat frames (20 pairs, most important for the lower exposure times where the noise is larger) that I want to compute covariances from. I’m writing a script now to confirm that frames with the same exposure time have identical fluxes (and to append an index to the FITS header to use for pairing).
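
For what it’s worth, here is a minimal sketch of what I have in mind for that script. The path, the PAIRIDX keyword, and the clipping choices are placeholders, the clipped mean of a single CCD extension is just a rough flux proxy, and updating compressed .fz files in place may or may not be convenient depending on your setup:

import glob
from collections import defaultdict

import numpy as np
from astropy.io import fits

# Group flats by exposure time and compute a rough flux proxy per frame.
byTime = defaultdict(list)
for path in sorted(glob.glob("flats/*.fits.fz")):
    with fits.open(path) as hdus:
        expTime = float(hdus[0].header["EXPTIME"])
        data = hdus[1].data.astype(float)
        lo, hi = np.percentile(data, [5, 95])
        byTime[expTime].append((path, data[(data > lo) & (data < hi)].mean()))

# Report the flux spread at each exposure time and assign a pairing index.
pairIndex = 0
for expTime, frames in sorted(byTime.items()):
    fluxes = np.array([flux for _, flux in frames])
    print(f"{expTime:7.2f} s: {len(frames)} frames, "
          f"mean {fluxes.mean():9.1f} ADU, rms {fluxes.std():7.1f} ADU")
    for i, (path, _) in enumerate(frames):
        # Both frames of a pair get the same, globally unique index, so that
        # matching on this keyword pairs them with each other and nothing else.
        with fits.open(path, mode="update") as hdus:
            hdus[0].header["PAIRIDX"] = pairIndex
        if i % 2 == 1:
            pairIndex += 1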

OK, let me know. I built a DECam PTC with a version of the code from November 2023 (and the code has changed since then, as it’s still evolving). I remember finding a similar dataset to what you describe (from 2012), and the code was able to produce a PTC for the few detectors I checked, using the exposure time as a proxy for flux (i.e., I did not have to set the matchExposuresType keyword to something different from the default).

I documented some of this in the last comment of this Jira Ticket: [DM-29695] - Rubin Jira
I’m not sure if you can see it without logging in (please let me know), but just in case, here are the commands I used:

  • First, I ingested the DECam flats I downloaded from the NOIRLab website into a custom repo (as I could not find any PTC ramps in the standard repos at the USDF or at the IDF; I’m not sure where you are working):
butler create DECamGen3Test3-2023NOV09
butler register-instrument ./DECamGen3Test3-2023NOV09 lsst.obs.decam.DarkEnergyCamera
butler write-curated-calibrations ./DECamGen3Test3-2023NOV09 DECam
butler ingest-raws ./DECamGen3Test3-2023NOV09 ./DECamGen3Test2/DECam/raw/all/raw/20121119/ct4m20121119t*/*.fz
  • Then, I ran the PTC pipeline (with simplistic options):
pipetask run -j 8 -d "detector IN (5, 10, 15, 20) AND instrument='DECam' AND exposure IN (153088,153089,153090,153091,153092,153095,153096,153097,153098,153099,153100,153101,153102,153103,153104,153105,153106,153107,153108,153109,153110,153111,153112,153115,153116,153117,153118,153119,153120,153121,153122,153123,153124,153125,153126,153127,153128,153129,153130,153131,153132,153133,153134,153135,153136,153030,153035,153039,153040,153079,153080,153081,153082,153085,153086,153087)" -b ./DECamGen3Test3-2023NOV09 -i DECam/raw/all,DECam/calib  -p ${CP_PIPE_DIR}/pipelines/DarkEnergyCamera/cpPtc.yaml -c ptcSolve:ptcFitType=FULLCOVARIANCE -c ptcIsr:doLinearize=False -c ptcIsr:doCrosstalk=False -c ptcIsr:doDefect=False -c ptcIsr:doBias=False -c ptcIsr:doDark=False -c ptcIsr:doFlat=False -c ptcSolve:doLegacyTurnoffSelection=True -c ptcSolve:sigmaCutPtcOutliers=6 -o DM-29695-ptc_2023NOV09.13 --register-dataset-types
  • Plotting one detector:
# In a notebook, with the butler and IPython display imports:
import lsst.daf.butler as dB
from IPython.display import Image, display

butler = dB.Butler("./DECamGen3Test3-2023NOV09", collections=["DM-29695-ptc_2023NOV09.13"])

detector = 5
plot_names = ['ptcVarMean', 'ptcVarMeanLog', 'ptcNormalizedVar', 'ptcCov01Mean', 'ptcCov10Mean',
              'ptcVarResiduals', 'ptcNormalizedCov01', 'ptcNormalizedCov10', 'ptcAandBMatrices',
              'ptcAandBDistance', 'ptcACumulativeSum', 'ptcARelativeBias']
for plot_name in plot_names:
    # Look up the persisted plot for this detector and display it inline.
    ref = butler.registry.findDataset(plot_name, detector=detector)
    print("Plot name:", plot_name)
    uri = butler.getURI(ref)
    display(Image(data=uri.read()))

Then there’s the question of whether this number of points (flat pairs) is enough or not for measuring the covariances with a particular precision (as discussed, for example, in Astier+19 and Broughton+24). I’m not sure if we can find, or even request the taking of, a denser PTC ramp.
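
One quick sanity check, once a PTC has been solved, is to count how many flat pairs actually survive the outlier rejection per amplifier. A minimal sketch is below; it assumes the solved PTC is registered as the ptc dataset type and that the dataset exposes ampNames, expIdMask, and rawMeans (attribute names may differ between pipeline versions):

import numpy as np
import lsst.daf.butler as dB

butler = dB.Butler("./DECamGen3Test3-2023NOV09", collections=["DM-29695-ptc_2023NOV09.13"])
ptc = butler.get("ptc", detector=5, instrument="DECam")

# Per amplifier: how many pairs were kept and what flux range they cover.
for ampName in ptc.ampNames:
    mask = np.asarray(ptc.expIdMask[ampName], dtype=bool)
    means = np.asarray(ptc.rawMeans[ampName])
    print(f"{ampName}: {mask.sum()}/{len(mask)} pairs kept after outlier rejection, "
          f"flux range {means[mask].min():.0f}-{means[mask].max():.0f} ADU")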

Thanks for the help. I’ve already generated a PTC per detector (using the same ramp as you), so yes, the question really is how many flat pairs are sufficient per point and what the best way of utilizing them is: for example, a single covariance per exposure time (Coulton+18, Fig. 1) vs. a “denser” PTC curve with each pair contributing its own point (Astier+19, Figs. 3 and 5). Based on our discussion and on double-checking the previous work, I’ll stick to assigning each pair a unique index and let the fitting take care of the spread in covariances (due to noise) at each exposure time/flux, effectively taking the “denser” curve route. If I still run into issues, I may have to modify my local LSP installation to compute covariances per exposure time, but that shouldn’t be too challenging. If this works, I’ll flag your initial suggestion of matching by flux as the solution.
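
For completeness, here is a toy sketch (with made-up numbers) of that fallback, i.e. collapsing the per-pair covariance measurements into a single point per exposure time, Coulton+18 style; averaging N pairs should beat the noise on each covariance down by roughly 1/sqrt(N):

from collections import defaultdict

import numpy as np

# (expTime [s], mean signal [ADU], Cov(1,0) [ADU^2]) per flat pair; values are made up.
perPair = [
    (10.0, 5000.0, 1.9), (10.0, 5010.0, 2.3), (10.0, 4995.0, 2.1),
    (20.0, 10020.0, 8.2), (20.0, 9990.0, 7.8),
]

grouped = defaultdict(list)
for expTime, mean, cov in perPair:
    grouped[expTime].append((mean, cov))

# Average the per-pair covariances at each exposure time into a single point.
for expTime, values in sorted(grouped.items()):
    means, covs = np.array(values).T
    print(f"{expTime:5.1f} s: <mean> = {means.mean():8.1f} ADU, "
          f"<Cov10> = {covs.mean():5.2f} +/- {covs.std(ddof=1) / np.sqrt(len(covs)):.2f}")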

Sounds good. There’s also the option of arranging the exposures by ID, which may be helpful, although the docstring of the function warns of caveats: cp_pipe/python/lsst/cp/pipe/utils.py at main · lsst/cp_pipe · GitHub

Right, but that won’t work for this sequence, since successive exposures often have different exposure times.