Report from December 2015 UC Davis WL Systematics Meeting

A few belated comments from a meeting on “foreground effects” in LSST weak lensing systematics: Milky Way dust, the atmosphere, and the instrument. @mjuric, @CStubbs, and @mwv were also present (and I’m sure some others whose c.l.o. account names I couldn’t guess).

The agenda and all talks and posters are posted here: https://indico.bnl.gov/conferenceDisplay.py?confId=1604

The single biggest issue the meeting brought up for me (well, brought to the top of my worry list) is correcting for the small-scale astrometric variations in the chips that have long been misinterpreted as QE variations. For those who haven’t heard about this yet, the idea is that the pixel grid really isn’t exactly rectangular; nearly all the small-scale features we see in flat field images actually represent shifting pixel boundaries (and hence shifting pixel positions), and that means we should be including them in the WCS rather than dividing them out. But it’s much, much worse than that: our notion that images have a “PSF” and a “WCS” as separate entities depends on both of those being slowly-varying on the scale of the PSF. When that’s not true, you have to think of it as a single transfer function that maps distributions on the sky to values in pixels. And you can’t fold the pixel response into the effective PSF, or use sinc interpolation to resample (since the pixel grid isn’t regular).
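To make the “single transfer function” point concrete, here is a toy 1-D sketch (my own illustration, not anything from the meeting or from the stack): each pixel value is the integral of the PSF-convolved sky over that pixel’s actual boundaries, so the same sky and the same PSF produce different data once the boundaries shift, and there is no clean place to split the model into a “PSF” piece and a “WCS” piece.

```python
# Toy 1-D illustration (not pipeline code): pixel values are integrals of the
# PSF-convolved sky over the *actual* pixel boundaries.  When those boundaries
# shift by a non-negligible amount, "PSF" and "WCS" stop being separable and
# you are left with a single transfer function from sky to pixel values.
import numpy as np

def pixel_values(sky, psf_sigma, boundaries, oversample=1000):
    """Integrate the PSF-convolved sky over (possibly irregular) pixel boundaries.

    sky        : callable, true surface brightness as a function of position
    psf_sigma  : Gaussian PSF width (same units as position)
    boundaries : array of pixel edge positions, length npix + 1
    """
    x = np.linspace(boundaries[0], boundaries[-1], oversample)
    dx = x[1] - x[0]
    # Brute-force Gaussian convolution of the sky on a fine grid.
    psf = np.exp(-0.5 * ((x[:, None] - x[None, :]) / psf_sigma) ** 2)
    psf /= psf.sum(axis=1, keepdims=True)
    convolved = psf @ sky(x)
    # Sum up the convolved sky that falls inside each (possibly shifted) pixel.
    values = np.empty(len(boundaries) - 1)
    for i in range(len(values)):
        inside = (x >= boundaries[i]) & (x < boundaries[i + 1])
        values[i] = convolved[inside].sum() * dx
    return values

# Regular vs. perturbed pixel edges: same sky, same PSF, different pixel data.
edges_regular = np.arange(0.0, 21.0)
edges_shifted = edges_regular + 0.05 * np.sin(edges_regular)  # exaggerated frozen-in shifts
star = lambda x: np.exp(-0.5 * ((x - 10.3) / 0.5) ** 2)       # a toy "sky"
print(pixel_values(star, psf_sigma=2.0, boundaries=edges_regular)[8:13])
print(pixel_values(star, psf_sigma=2.0, boundaries=edges_shifted)[8:13])
```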

The good news on all of this is:

  • The effects are quite small for most chips.
  • DES already has smart people working on this (Mike Jarvis referred to a plan Gary Bernstein had that sounded like they’d treat the flat fields as QE just to model the background, then ignore them for sources, but that doesn’t sound right, and we know we miscommunicated on this quite a bit at the meeting, so take that with a grain of salt).
  • The extra coordinate mapping is frozen in the chips, so we should have enough information to constrain the mapping from dithered star positions, if we can come up with an algorithm to utilize it (flat fields provide some information, but not enough to constrain it).
  • @RHL thinks that once we know the mapping we can probably fix it well enough just by shifting some charge around between pixels at a very early stage; if we do that, we get back to having rectangular images, and we can proceed as before (a rough sketch of that kind of correction follows this list).
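On that last point, here is a rough sketch of the kind of early-stage charge redistribution I understood @RHL to be suggesting (my own toy illustration, not his actual proposal; the function name and the uniform-illumination assumption are mine): given a measured shift for each pixel boundary, move the corresponding fraction of charge to the neighboring pixel so the image again corresponds to a regular grid.

```python
# Toy sketch of early-stage charge redistribution for shifted column boundaries.
# Assumes roughly uniform illumination across each pixel and treats only
# column-edge shifts; a real correction would handle both directions.
import numpy as np

def regularize_columns(image, edge_shifts):
    """Shift charge between neighboring columns to undo small boundary shifts.

    image       : 2-D array (rows, cols) of counts
    edge_shifts : shift of each column edge in pixels, length ncols + 1; positive
                  means the edge sits to the right of its nominal position, so the
                  column to the left collected charge that belongs to the column
                  to its right.
    """
    fixed = image.astype(float)
    ncols = fixed.shape[1]
    for j in range(1, ncols):            # interior edges only
        s = edge_shifts[j]
        if s > 0:
            # Column j-1 is too wide: under uniform illumination a fraction
            # s / (1 + s) of its charge came from the strip belonging to column j.
            moved = fixed[:, j - 1] * s / (1.0 + s)
            fixed[:, j - 1] -= moved
            fixed[:, j] += moved
        elif s < 0:
            moved = fixed[:, j] * (-s) / (1.0 - s)
            fixed[:, j] -= moved
            fixed[:, j - 1] += moved
    return fixed

# Example: a uniform exposure picks up column-to-column structure purely from a
# shifted boundary; redistributing the charge removes it.
ncols = 10
shifts = np.zeros(ncols + 1)
shifts[5] = 0.03                         # one edge displaced by 3% of a pixel
observed = np.full((4, ncols), 100.0)
observed[:, 4] *= 1 + shifts[5]          # wider pixel collects more charge
observed[:, 5] *= 1 - shifts[5]
print(regularize_columns(observed, shifts)[0])
```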

Some other topics I found interesting, in no particular order:

  • Aaron Roodman gave a very good introductory presentation on wavefront-based PSF modeling, along with the best description I’ve seen so far on what his group is doing in that area on DES. I strongly recommend anyone interested in that topic look at his slides.

  • I was impressed by Daniel Gruen’s poster on brighter-fatter mitigation for DES (which unfortunately isn’t posted on the conference site). It seems to be the first effort that moves beyond the first-generation methods that only correct about 90% of the effect, and he attributes the improvement to including a change to the charge diffusion kernel as well as to the pixel boundaries.

  • The discussions on mapping Milky Way dust were interesting, but never strayed into territory that made me think it’d be something DM will have to worry about: it seems pretty clear that whatever the best approach is when LSST operations begin, it will be quite orthogonal to the lower-level photometric calibration problems. It also seems quite likely that the best dust extinction maps will come directly from the stars observed by LSST itself, rather than from long-wavelength data.

  • One particularly interesting idea that came out of the dust mapping discussion was a (vague) proposal to put a narrow-band filter optimized to determine stellar metallicity on LSST during commissioning (or on DECam even sooner); having a sample of stars with well-known metallicities across the sky could help quite a bit with extinction maps, and could help with photometric calibration as well.

  • I was encouraged to see multiple projects working on simulating atmospheric PSFs, all with at least slightly different assumptions. I’m not enough of an expert to know whose work here is most interesting, but having several independent simulation efforts in this area should help validate them against each other.

Finally, this was perhaps the most useful interaction I’ve had with Project people from other subsystems at an LSST meeting, even though it was nominally a science meeting. I think it could serve as a very effective template for Project-sponsored technical meetings: instead of just bringing together everyone on all the teams and trying to put together a schedule that works for them, put together a schedule on a technical topic first and only bring the people for whom it’s relevant (including non-Project people from precursor surveys and science collaborations).


Thanks for this great summary.

I very much enjoyed the meeting, and the format: some good plenaries, followed by round table discussions assigned to brainstorm about specific questions.

I like and support your idea of using technical topics like these as a natural way to reach across Project and non-Project boundaries, as well as across teams within the construction Project.

Can you ask him to send us a copy? In fact, it would be good if we used this meeting as a test bed for the archiving of meeting content that I’m working on with @jsick.

“Mike Jarvis referred to a plan Gary Bernstein had that sounded like they’d treat the flat fields as QE just to model the background, then ignore them for sources”

To clarify, we do use the flat field normally for the background estimation, but not because we are treating it as QE; rather, we are treating it as pixel area. Dividing by the flat is the right thing to do for sky estimation because it turns the flux in each pixel into an estimate of the surface brightness in that pixel.

Then once the sky is subtracted, we multiply the flat field back in to get back to a flux-per-pixel image, which is used for the rest of the downstream processing. This still ignores the fine-grained pixel-size variation when measuring fluxes and shapes, since our WCS is approximated as constant over the size of any object, but at least we aren’t compounding the problem by dividing by flats while thinking they are QE when they are really pixel area.
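For concreteness, the order of operations described above looks roughly like this (a schematic sketch, not actual DESDM code; the median is just a stand-in for the real sky model):

```python
import numpy as np

def subtract_sky_with_pixel_area_flat(image, flat):
    """Schematic of the order of operations described above (not actual DESDM code).

    image : raw (or instrument-corrected) counts per pixel
    flat  : flat field, interpreted here as relative pixel area rather than QE
    """
    # Dividing by the flat turns counts per (variable-area) pixel into an estimate
    # of surface brightness, which is the quantity the sky model should be smooth in.
    surface_brightness = image / flat
    sky_sb = np.median(surface_brightness)   # placeholder for the real sky model
    # Multiply the flat back in so the subtracted sky is in counts per pixel again,
    # leaving a flux-per-pixel image for downstream processing.
    return image - sky_sb * flat
```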

This doesn’t apply to any published DES data yet. The algorithms for all this were first ready for the year 2 processing, which is still ongoing. So we’ll report back at some point whether we come across any problems with this technique.