DECam Instrumental Signature Removal

Non-linearity corrections should be there (I wrote them), and they’re supported in camGeom. @SimonKrughoff do you know anything about this?

The notes at http://www.ctio.noao.edu/noao/content/decam-known-problems imply that it used to be non-linear but now isn’t, so we’ll need to version camGeom. We can do this as an obs_decam version, but as we know it’d be better in the new Butler.

We should be able to do this in the Old Butler; that’s one of the things that I think you wanted pushed after registry-less repositories (“more important than […] improvements to versioning”).

It’s 249 xtalk coefficients for 124 amps (62 chips, 2 amps each), so there is some non-negligible inter-chip cross-talk.
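For context, removing crosstalk with a coefficient matrix amounts to something like the following sketch (plain NumPy on a toy 4-amp setup; the array shapes and coefficient values here are made up for illustration and are not the actual obs_decam data layout):

```python
import numpy as np

n_amps = 4  # toy example; DECam has 124 amps (62 chips x 2 amps each)
rng = np.random.default_rng(0)

# coeffs[i, j]: fraction of amp j's signal that leaks into amp i
coeffs = np.zeros((n_amps, n_amps))
coeffs[0, 1] = 1e-3   # an intra-chip term (made-up value)
coeffs[2, 0] = 5e-4   # a non-negligible inter-chip term (made-up value)

# One 32x32 image per amp
amps = rng.normal(1000.0, 10.0, size=(n_amps, 32, 32))

# Subtract every source amp's leaked signal from each victim amp
corrected = amps - np.einsum('ij,jxy->ixy', coeffs, amps)
```

With zero coefficients in a row, that amp passes through unchanged; amps with non-zero rows get the scaled source images subtracted.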

It isn’t the highest priority because we can version obs_decam if we must (ignoring von Neumann’s strictures).

Looks to me like non-linearity corrections for HSC are done here, not in general-purpose ISR code: https://github.com/lsst/obs_subaru/blob/3452c1f7ab32105cbe465c4e5806f6da42d457fc/python/lsst/obs/subaru/isr.py

Right, just like crosstalk. So this needs to be copied/cherry-picked to obs_decam, and we need to think about how to make it more generic, more easily extended, or more easily plugged into: “use this external cross-talk module from HSC in DECam”, or something.

I noticed you specifically mentioned the Community Pipeline processing. Is the DES ISR different? Substantively or just in implementation?

It’s very similar. Actually the CP is a subset of the DESDM processing software. But DESDM only processes their own data, while the CP processes all the community data (and the documentation seems to be a bit better).

We do use the fringe code for HSC (it’s needed for y-band). It has been in ip_isr for ages now.

That’s great! Hopefully it’ll work for DECam as well.

Sorry for missing this conversation. Evidently the discourse emails got lost in the noise.

Regarding non-linearity, there was an implementation of this in ISR done long ago, but I don’t believe it ever satisfied all the requirements, thus the re-implementation in obs_subaru.

As far as inter-chip crosstalk goes, we don’t currently have a defined way to do this. I think we can probably work around it by getting relevant images from the butler inside the isr routine. From a very quick look, it seems that any amp is only affected by one other chip.
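As a sketch of that workaround (a butler `get` call made from inside the ISR routine), something like the following could work. The `CROSSTALK_SOURCES` mapping, the dataId keys, and the `FakeButler` stand-in are all assumptions for illustration; a real data butler and the real DECam coefficients would be used in practice:

```python
# Hypothetical mapping from each CCD to the CCDs that cross-talk into it.
# Per the coefficients, each amp seems to be affected by only one other chip.
CROSSTALK_SOURCES = {25: [26], 26: [25]}

def run_isr_with_interchip_xtalk(butler, data_id):
    """Sketch: fetch the crosstalk-source images from the butler inside ISR."""
    exposure = butler.get('raw', data_id)
    sources = []
    for source_ccd in CROSSTALK_SOURCES.get(data_id['ccdnum'], []):
        neighbor_id = dict(data_id, ccdnum=source_ccd)
        sources.append(butler.get('raw', neighbor_id))
    # ... apply the crosstalk correction using `sources`, then the rest of ISR
    return exposure, sources

class FakeButler:
    """Stand-in for the real data butler, for illustration only."""
    def get(self, dataset_type, data_id):
        return f"{dataset_type}-ccd{data_id['ccdnum']}"

exp, srcs = run_isr_with_interchip_xtalk(FakeButler(), {'visit': 1234, 'ccdnum': 25})
```

This is exactly the I/O-inside-the-task pattern discussed below as pragmatic but undesirable long-term.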

Great.

Yes, I think you are right about the crosstalk. From the coefficients it just seems like one other (neighboring?) chip is affected.

I/O in the task code (and therefore passing in blobs such as dataRefs) is exactly the pattern I’m trying to move away from. In the short run it might be pragmatic though.

I agree that we should be moving away from that pattern, but I’m not sure how to make headway without having to produce a new crosstalk design. Maybe it’s easy. I haven’t had a chance to look.

I’ve been wondering if there’s a way to do cross-talk without having to pull in every image for every chip (in reality a chip only affects a small number of other chips). Only the pixels with lots of flux actually have a cross-talk effect. So I’m wondering if it would be possible to pull out all of the “bright” pixels (above a few thousand ADU or something like that) and put them in a smallish image/array/file that, of course, remembers the (x, y) coordinates of the pixels. Then you run through all the chips once and create these “bright pixel” files. When you run cross-talk on an individual chip, you load in all of the bright pixels from all chips (which should not be very memory intensive) and remove the cross-talk just from those pixels (from chips with non-negligible cross-talk coefficients). Does this sound doable/reasonable? Of course it breaks down in crowded fields.
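A rough sketch of that two-pass idea (the threshold, array layout, and aligned pixel grids are assumptions for illustration; this is not stack code):

```python
import numpy as np

THRESHOLD = 2000.0  # ADU; the "bright" cutoff is a guess

def extract_bright_pixels(image):
    """First pass: keep only (x, y, value) for pixels above the threshold."""
    ys, xs = np.nonzero(image > THRESHOLD)
    return xs, ys, image[ys, xs]

def remove_bright_crosstalk(victim, bright, coeff):
    """Second pass: subtract crosstalk only at the bright source pixels."""
    xs, ys, vals = bright
    corrected = victim.copy()
    corrected[ys, xs] -= coeff * vals  # assumes aligned pixel grids
    return corrected

# Toy demonstration: one bright star leaking into a neighboring chip
source = np.full((64, 64), 100.0)
source[10, 20] = 50000.0                  # the bright pixel
victim = np.full((64, 64), 100.0)
victim[10, 20] += 1e-3 * source[10, 20]   # injected crosstalk ghost

cleaned = remove_bright_crosstalk(victim, extract_bright_pixels(source), 1e-3)
```

Since only above-threshold pixels are stored and corrected, the per-chip “bright pixel” files stay small in uncrowded fields, which is the point of the scheme.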

There is currently the concept of a “heavy” footprint in the stack. This is essentially the minimum set of bounding boxes for a source, plus the associated pixel values. I think that’s what you need.

That being said, I don’t think it would be hard to load the image you need inside the cross-talk method. It’s not very efficient, but I think the efficient mechanism needs a new design before we can do it in a non-hacky way.

What exactly does “heavy footprint” mean? Is there a “light footprint”?

Is there a way to stuff many of these footprints in a file?

The word heavy just refers to the fact that the pixel values are included along with the region of the footprint. The “light” footprint is just the set of bounding boxes that enclose the source.
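A toy illustration of the distinction (these are not the actual afw classes, which use span sets rather than a single box; just the concept):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Footprint:
    """'Light': just the region of a source (here, a single bounding box)."""
    x0: int
    y0: int
    width: int
    height: int

@dataclass
class HeavyFootprint(Footprint):
    """'Heavy': the region plus the pixel values inside it."""
    pixels: np.ndarray  # shape (height, width)

image = np.arange(100.0).reshape(10, 10)
box = Footprint(x0=2, y0=3, width=4, height=2)
heavy = HeavyFootprint(box.x0, box.y0, box.width, box.height,
                       image[box.y0:box.y0 + box.height,
                             box.x0:box.x0 + box.width].copy())
```

The heavy version carries everything needed for the bright-pixel scheme above: where the pixels are and what their values are, without the rest of the image.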

I’m not sure if there is a way to persist more than one heavy footprint in a file. Certainly it’s possible to persist a single footprint in a file: see here. It shouldn’t be hard to bundle those up in either an MEF or a tarball.

Okay, thanks. Seems like it should be possible to stick these together in a compact way.

Each record in a SourceCatalog has an attached Footprint, which can be a HeavyFootprint. There’ll be a small amount of overhead, but I think this is by far the easiest way to persist a bunch of HeavyFootprints right now.