Here is a brief outline. I hope to have a cartoon example finished by early January (it's not too complicated, but I have lacked the time to finish it before now).
Assumptions
- Take as a given that the direction of DCR is aligned with the pixel grid (a simple transform will fix this if it is not)
- Ignore the color-dependent PSF for now
- Take the PSF as constant between observations (it can always be degraded to the worst)
- Assume the function describing the displacement due to airmass is known (a rough sketch of such a function follows this list)
- All work is done using only images taken in one filter
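For concreteness, here is a minimal sketch of what that assumed displacement function might look like, using a Cauchy-style approximation for air refractivity. The coefficients, reference wavelength, and pixel scale are illustrative placeholders, and `dcr_shift_pixels` is a hypothetical helper, not existing code:

```python
import numpy as np

def dcr_shift_pixels(wavelength_nm, airmass, ref_wavelength_nm=700.0,
                     pixel_scale_arcsec=0.2):
    """Hypothetical helper: approximate DCR displacement in pixels.

    Uses a Cauchy-style form for air refractivity,
    n(lam) - 1 ~ A * (1 + B / lam^2); A, B, the reference wavelength,
    and the pixel scale are illustrative placeholders. Referencing the
    red edge of the band keeps all shifts the same sign, matching the
    one-directional constraint discussed below.
    """
    A, B = 2.87e-4, 5.67e3  # illustrative coefficients, wavelength in nm
    def refractivity(lam):
        return A * (1.0 + B / lam ** 2)
    tan_z = np.sqrt(airmass ** 2 - 1.0)  # airmass ~ sec(z), plane-parallel atmosphere
    # Differential refraction relative to the reference wavelength, in radians.
    delta_r = (refractivity(wavelength_nm) - refractivity(ref_wavelength_nm)) * tan_z
    return np.degrees(delta_r) * 3600.0 / pixel_scale_arcsec
```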
Then:
We consider each pixel to be effectively its own source with its own SED. That SED may be contributed to by multiple sources, as in a crowded field.
Because the direction of DCR is aligned with the pixel grid, each column of pixels can be addressed individually (the change will only be along a column). This reduces the problem to one dimension.
Next, we recognize that flux is conserved from observation to observation (barring noise, sky-brightness changes, and transparency effects, all of which can be calibrated for). Any change in the distribution of flux from observation to observation must therefore come from the DCR effect (or from a changing source such as an asteroid).
There are also a few additional constraints. First, the DCR effect will only move flux in one direction, toward the horizon. Second, there is a maximum displacement of flux: since the filter has a finite width, the most extreme wavelength in the band sets an upper bound on how far flux can move. A sketch encoding these constraints follows.
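These constraints (flux moves along a column, in one direction only, by a bounded amount, and is conserved) can be encoded as a linear operator acting on a 1-D column of flux. A minimal sketch, where `shift_matrix` is a hypothetical helper and linear sub-pixel interpolation is an assumed flux-conserving scheme:

```python
import numpy as np

def shift_matrix(n_pix, shift):
    """Hypothetical helper: flux-conserving operator that moves a 1-D
    column of flux `shift` >= 0 pixels in the DCR direction (increasing
    index, by convention here). Sub-pixel shifts are split linearly
    between the two neighboring pixels, so each interior column of the
    matrix sums to 1; flux pushed past the edge is simply lost.
    """
    assert shift >= 0.0, "DCR only moves flux in one direction"
    whole = int(np.floor(shift))
    frac = shift - whole
    M = np.zeros((n_pix, n_pix))
    for i in range(n_pix):
        if i + whole < n_pix:
            M[i + whole, i] += 1.0 - frac      # bulk of the flux
        if i + whole + 1 < n_pix:
            M[i + whole + 1, i] += frac        # sub-pixel remainder
    return M
```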
This problem is then an optimization problem: determine how the flux of each pixel maps from an observation at airmass 1 to another at airmass 2, given the direction and magnitude of movement for each "color". Effectively, you optimize to find what the SED of a given pixel had to be such that its flux lands where it does in image 2. This is easiest to imagine in the hypothetical case of a single illuminated pixel with an unknown SED (again, in one band). Given that the movement of flux with color is well defined, there will be one SED which maps the flux of image 1 into the surrounding pixels in image 2.
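Under those assumptions, the recovery can be posed as a non-negative least-squares problem. A toy formulation for a single column, reusing `shift_matrix` from the sketch above (the structure of the design matrix is my own illustration, not a worked-out implementation):

```python
import numpy as np
from scipy.optimize import nnls

def solve_column_sed(col_x1, col_x2, shifts_x2):
    """Toy recovery of per-pixel, per-wavelength-bin flux for one column.

    col_x1    : column observed at airmass 1, taken here as the unshifted
                reference (an approximation; it has its own smaller DCR)
    col_x2    : the same column observed at the higher airmass
    shifts_x2 : assumed DCR shift in pixels for each wavelength bin

    Unknowns f[i, k]: flux of pixel i in wavelength bin k. The first
    block of rows says the bins of pixel i must add up to col_x1[i]
    (conservation of flux); the second block says the shifted bins must
    reproduce col_x2. nnls supplies the non-negativity of the SED.
    """
    n_pix, n_bins = len(col_x1), len(shifts_x2)
    mats = [shift_matrix(n_pix, s) for s in shifts_x2]
    A = np.zeros((2 * n_pix, n_pix * n_bins))
    b = np.concatenate([col_x1, col_x2])
    for i in range(n_pix):
        for k in range(n_bins):
            j = i * n_bins + k
            A[i, j] = 1.0                 # sum of bins equals image-1 flux
            A[n_pix:, j] = mats[k][:, i]  # where that flux lands in image 2
    f, _ = nnls(A, b)
    return f.reshape(n_pix, n_bins)
```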
As noted, this only needs observations in one band, taken at two different airmasses. However, it may be possible to include information from other filters to further constrain plausible SEDs.
Once the mapping is known, it would be possible to construct a convolution kernel to apply to the image and “undo” the DCR for the purposes of image differencing.
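One way this might look, again reusing the hypothetical helpers above: build the forward per-column DCR operator for an arbitrary target airmass from the recovered per-pixel flux fractions, and apply it to a template before differencing. The row-wise form of such a "kernel" is an assumption on my part:

```python
import numpy as np

def dcr_operator(f, airmass, wavelengths_nm):
    """Forward DCR operator for one column at a target airmass, built
    from the recovered flux fractions f[i, k]. Applying it to a
    reference column predicts how that column appears at `airmass`;
    applying it to a template matches the template to a new
    observation for differencing.
    """
    n_pix, n_bins = f.shape
    shifts = [dcr_shift_pixels(lam, airmass) for lam in wavelengths_nm]
    totals = f.sum(axis=1)
    totals[totals == 0] = 1.0  # avoid dividing by empty pixels
    T = np.zeros((n_pix, n_pix))
    for k in range(n_bins):
        # Each pixel's shift operator is weighted by the fraction of
        # that pixel's flux sitting in wavelength bin k.
        T += shift_matrix(n_pix, shifts[k]) * (f[:, k] / totals)[None, :]
    return T
```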
This is just a quick outline; there are particulars to investigate, such as what the best model for an SED within a band is (PCA of known curves, simple control points, etc.). I hope this gives enough info to start a conversation about the method and help flesh out the exact implementation.
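As one example of the control-point option, a purely illustrative sketch of an in-band SED parameterized by a few linearly interpolated values:

```python
import numpy as np

def sed_from_control_points(control_values, n_fine_bins):
    """One candidate in-band SED model: a few control points, linearly
    interpolated onto the fine wavelength grid used by the shift
    operators. This trades spectral resolution for far fewer free
    parameters per pixel than one weight per wavelength bin.
    """
    x_ctrl = np.linspace(0.0, 1.0, len(control_values))
    x_fine = np.linspace(0.0, 1.0, n_fine_bins)
    return np.interp(x_fine, x_ctrl, np.asarray(control_values, dtype=float))
```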