DCR literature overview - for discussion Dec 1st

The attached report presents a summary of current research into the effects of Differential Chromatic Refraction (DCR), with notes on and links to each paper. To help give a sense for the magnitude of DCR and the relative importance of different environmental factors, we include the results of simple simulations.

We will discuss the existing literature and solicit feedback on mitigation techniques and approaches we should take at the DM RFC slot on December 1st, 12:30-2:00 PT, at ls.st/sup.

Agenda:
Introduction to the problem and discussion (Simon?): 20 min
Results of literature search (David and Ian): 20 min
Discussion of potential mitigation techniques (all): 50 min
* Mitigation in measurement
* Modeling techniques

DCR literature overview.pdf (495.0 KB)

How does the Alcock procedure do its seeing matching, which (probably) happens before estimating the S/N? How do they deal with low S/N (do they, e.g., use a weighted sum of neighbouring pixels)? The use of multiple airmasses should also help with objects that are superimposed at some airmasses but not at others.

I don’t understand figure 3, as it apparently doesn’t conserve flux (the residuals aren’t dipoles with net zero flux). Did they do anything special to conserve the flux? As described, the algorithm should. Am I being fooled by the plot?

A picky question: in the P vs. T relation you assume an isothermal atmosphere. Does an adiabatic or numerical profile make a difference?

@RHL The Alcock procedure stacks only the best seeing and low airmass observations to make the template image (they note that they used only 5 out of 324 observations), and they degrade it to match the seeing of the test image. So, for the template, they deal with low-S/N and high airmass observations by excluding them.

I suspect figure 3 is slightly misleading with the color scale. The algorithm certainly should conserve flux, but in their color scale the positive and negative lobes of the dipoles are saturated. If they’re subtracting their test images from the template, we should expect a small but bright positive lobe and a large but dimmer negative lobe since the high-airmass image will be smeared by DCR.
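This asymmetry is easy to demonstrate with a toy 1-D model (a sketch, not their pipeline; the Gaussian widths and the amount of DCR smearing are made up). Both profiles are normalized to the same total flux, so the difference has net flux near zero, yet the positive lobe peaks well above the negative one:

```python
import numpy as np

def gaussian(x, mu, sigma):
    g = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return g / g.sum()          # normalize to unit total flux

x = np.arange(-20, 21)
template = gaussian(x, 0.0, 1.5)   # compact, low-airmass template profile
test = gaussian(x, 1.0, 3.0)       # DCR-smeared, slightly shifted test image
diff = template - test             # template minus test, as in the paper

print(f"net flux: {diff.sum():+.1e}")        # ~0: flux is conserved
print(f"max positive: {diff.max():.3f}")     # bright, compact lobe
print(f"max |negative|: {-diff.min():.3f}")  # dimmer, extended lobe
```

With these numbers the positive lobe is roughly twice as bright as the negative one, so a saturated color scale would hide the dim negative lobe entirely.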

I’m afraid I can’t answer the question about different atmospheric profiles at this point. I believe Hohenkerk and Sinclair (1985) used a more complicated model of the atmosphere, but my simulations have been based on Stone (1996), who approximates the atmosphere as a slab with uniform temperature and pressure. If I had to guess, I would say that more complicated structure would increase the integral in equation 4 compared to the approximation, which would increase beta in equation 1. If I simply fudge beta to be 2x larger in my simple simulation without changing any other parameters, it only changes the refraction by 0.1%.
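For reference, a slab-style calculation in the spirit of Stone (1996) can be sketched in a few lines (this is an illustration, not the code behind the report; the dispersion constants are, I believe, from the Birch & Downs 1994 update of the Edlén equation, and the pressure/temperature scaling is a simple ideal-gas factor):

```python
import numpy as np

def refractivity(wavelength_nm, pressure_pa=101325.0, temp_k=288.15):
    """Approximate (n - 1) for dry air.

    Dispersion for standard air (15 C, 101325 Pa); a simple
    ideal-gas scaling handles other pressures and temperatures,
    consistent with treating the atmosphere as a uniform slab.
    """
    sigma2 = (1e3 / wavelength_nm) ** 2          # (1 / lambda_in_um)^2
    n_std = 1e-8 * (8342.54
                    + 2406147.0 / (130.0 - sigma2)
                    + 15998.0 / (38.9 - sigma2))
    return n_std * (pressure_pa / 101325.0) * (288.15 / temp_k)

def refraction_arcsec(wavelength_nm, zenith_deg, **kw):
    # Plane-parallel (slab) approximation: R ~ (n - 1) * tan(z)
    z = np.radians(zenith_deg)
    return np.degrees(refractivity(wavelength_nm, **kw) * np.tan(z)) * 3600.0

# DCR between the rough edges of the g band at airmass 2 (z = 60 deg)
dcr = refraction_arcsec(400.0, 60.0) - refraction_arcsec(550.0, 60.0)
print(f"g-band DCR at airmass 2: {dcr:.2f} arcsec")
```

With these numbers the total refraction at z = 60 deg comes out near 100 arcsec and the differential refraction across the g band is of order an arcsecond, which is the scale that matters for image differencing.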

My noise concern is that even for the best S/N there’s almost no S/N in the wings of stars.

Of course this is an old paper, and for PSF matching they used the old IRAF PSFMATCH task, which implements the method of A. C. Phillips (1993): modeling the wings of the PSF with a Gaussian.

It’s after my paper with Christophe Alard so I’d assumed that they did it by kernel convolution. Thanks for the correction.

The oldest paper I know is Ciardullo, Tamblyn, and Phillips 1990 (and Phillips and Davis 1995). What’s the 1993 reference?

I’m pretty sure it’s based on the material in here:

http://www.ucolick.org/~phillips/misc_papers/adass4.ps

(and references therein).

O.K. The link for the hangout is: http://ls.st/jar

Which seems to be:

http://adsabs.harvard.edu/abs/1995ASPC...77..297P

and so it’s Phillips & Davis 1995

Here is a very good recent talk on image differencing in LSST by Simon:
http://www.astro.washington.edu/users/krughoff/data/diffim_keynote2009_short.pdf

A couple more papers not mentioned:

Auer & Standish: http://adsabs.harvard.edu/abs/2000AJ....119.2472A

Mangum & Wallace: http://adsabs.harvard.edu/abs/2015PASP..127...74M

(Pat Wallace wrote the SLALIB refraction code that was redone for PAL)

Based on comments during our discussion this afternoon I have edited Figure 1 to use the actual LSST bandwidths. As @RHL suggested, u and g bands now have comparable DCR and all of the redder bands have improved. I will update the report again later with the additional references that have been mentioned.

Also Tomaney and Crotts 1996, Section 4.4

Sorry to take so long to get the notes out. See here for the full notes. Video is here for now.

Executive Summary
It was noted that it is important to make sure we are taking care of all dipoles related to the astrometric fitting before we begin optimizing for DCR. Astrometry is hard and is the main cause of false positives for other projects, so it shouldn’t be overlooked.

Three techniques were identified as possible routes for mitigation of DCR.

  1. Forward modeling of a perfect above-the-atmosphere scene (catalog).
  • Difficult because a truth catalog must be acquired in some way
  2. Template interpolation techniques.
  • Don’t require explicit knowledge of the object SEDs
  3. Accounting for DCR in measurement.
  • Since the point of image differencing is to reduce the number of sources (crowding), this may not work at all in dense regions. In fact, it may make the problem worse near the plane.

In general, it was agreed that we likely need to look into all three techniques. It looks likely that some form of template interpolation will be necessary in all cases. The Alcock (1999) result shows that it can potentially work. The following experiments were identified, in priority order, as the most important next steps in determining the mitigation technique for DCR in LSST in the context of image differencing.

  1. Simulate (or find) data that we can use to reproduce the Alcock result, and expand upon it. Compare a reimplementation of Alcock with naïve linear interpolation of templates. This should be done in a field where the source density is interesting. Turn off chromatic effects of seeing.
    
  2. Look into the Meyers paper and estimate shifts and moments based on incomplete color information.
    
  3. Look into how one would estimate color of objects based on shifts (and higher order moments).

It wasn’t clear to me from the discussion this week: are we saying that no-one has improved upon this in 16 years? Or is it just that the people doing image differencing are keeping quiet?

It’s possible that others are doing a similar thing, but there is nothing in the literature. I believe that part of the issue is that dedicated transient surveys can plan the survey to largely avoid DCR issues. We don’t have that luxury since we are going to be imaging everywhere all the time.

I think no-one’s worrying about DCR and difference imaging (yet). Bad astrometry is more of a problem, and once you give up on fixing that and use ML instead your DCR will be squashed by the same sledgehammer (or not, as the case may be).

Nate Lust and I just had a conversation about a possible cunning new algorithmic approach – stay tuned for a more detailed proposal from Nate.

@RHL @natelust We’ll be discussing DCR over lunch here tomorrow. If you have time, could you elaborate on the new approach that you alluded to so we can talk about it tomorrow?

Here is a brief outline. I hope to have a cartoon example finished by early January (it’s not too complicated, but I have lacked the time to finish it before now).

Assumptions

  • Take as a given that the direction of DCR is aligned with the pixel grid (a smile transform will fix this if it is not true)
  • Ignore color-dependent PSFs for now
  • Take the PSF as constant between observations (it can always be degraded to the worst)
  • Assume the function describing the displacement due to airmass is known
  • All work is done using only images taken with one filter

Then:
We consider each pixel to be effectively its own source with its own SED, which may have contributions from multiple sources, as in a crowded field.

Because the direction of DCR is aligned with the pixel grid, each column of pixels can be treated individually (the change will only be along a column). This reduces the problem to one dimension.

Next, we recognize that flux is conserved from observation to observation (barring noise, sky-brightness changes, and transparency effects, all of which can be calibrated for). Any change in the distribution of flux from observation to observation must come from DCR (or from a changing source such as an asteroid).

There are also a few additional constraints. First, the DCR effect will only move flux in one direction, toward the horizon. Second, there is a maximum displacement of flux: since the filter has a finite width, flux at the extreme wavelength of the band can move only by a bounded amount.

This problem is then an optimization problem: determine how the flux of each pixel maps from an observation at airmass 1 to another at airmass 2, given the direction and magnitude of movement for each “color”. Effectively, you optimize to find what the SED of a given pixel had to be so that its flux lands where it does in image 2. This is easiest to imagine in the hypothetical case of a single illuminated pixel with unknown SED (again, in one band): given that the movement of flux with color is well defined, there is one SED that maps the flux of image 1 into the surrounding pixels of image 2.
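The single-pixel case can be sketched numerically (all numbers here are made up for illustration: three sub-band “colors” with assumed-known fractional-pixel shifts, solved with non-negative least squares, which encodes the constraint that sub-band fluxes cannot be negative):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: one column of pixels, one illuminated pixel at index 2.
# The band is divided into 3 sub-band "colors"; the DCR shift of each sub-band
# at the second airmass is assumed known (in pixels, toward the horizon).
n_pix = 6
src = 2
shifts = np.array([0.0, 0.8, 1.6])       # known per-sub-band displacements
true_sed = np.array([50.0, 30.0, 20.0])  # unknown flux in each sub-band

def forward(sed):
    """Shift each sub-band's flux down the column; fractional shifts
    are split between two pixels by linear interpolation."""
    img = np.zeros(n_pix)
    for f, s in zip(sed, shifts):
        lo = int(np.floor(src + s))
        frac = (src + s) - lo
        img[lo] += f * (1.0 - frac)
        img[lo + 1] += f * frac
    return img

image2 = forward(true_sed)   # the observation at the higher airmass

# Build the linear system A @ sed = image2 from unit responses of each
# sub-band, then solve with the non-negativity constraint.
A = np.column_stack([forward(np.eye(3)[s]) for s in range(3)])
sed_fit, resid = nnls(A, image2)
print(sed_fit)   # recovers the pixel's SED: [50. 30. 20.]
```

In this toy case the system is overdetermined and consistent, so the fit is exact; with noise and many illuminated pixels per column it becomes a larger constrained least-squares problem over the whole column.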

As noted, this needs observations in only one band, at two airmasses. However, it may be possible to include information from other filters to further constrain plausible SEDs.

Once the mapping is known, it would be possible to construct a convolution kernel to apply to the image and “undo” the DCR for the purposes of image differencing.

This is just a quick outline; there are particulars to investigate, such as what the best model for an SED within a band is (PCA of known curves, simple control points, etc.). I hope this gives enough information to start a conversation about the method and to help flesh out the exact implementation.