Hi,
Some people may have followed our debate about whether to use the VISTA pipeline six-exposure stacks or the ‘to some extent raw’* exposures in our development of the obs_vista package. When we pass the exposures to processCcd, the resultant calexp images have large negative peaks following background subtraction (right below). I think these are being propagated into very strange negative peaks with positive fringes in the warps and subsequent coadds (left below, different region). I think the reason these are not present in the VISTA six-exposure stacks is that the stacks are built from a sigma-clipped mean of dithered exposures, so the defects are rejected from the mean.
One suggestion has been to use the LSST stack to find these defects and create a list of defective pixels.
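For discussion, here is a minimal sketch of how such a defect list might be built outside the stack: flag pixels that are strongly negative in a large fraction of the background-subtracted exposures and write the result as a mask. The file pattern, the 5-sigma threshold and the 50% fraction are illustrative assumptions, not values from the VISTA data, and the output would still need converting to whatever defects format obs_vista expects.

```python
import glob
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats

counts = None
n_exp = 0
for path in glob.glob("calexp_*.fits"):            # background-subtracted exposures (hypothetical names)
    image = fits.getdata(path).astype(float)
    _, median, std = sigma_clipped_stats(image)    # robust per-image statistics
    bad = image < median - 5.0 * std               # strongly negative pixels
    counts = bad.astype(int) if counts is None else counts + bad
    n_exp += 1

# Pixels that are bad in most exposures are treated as static defects.
defect_mask = counts > 0.5 * n_exp
fits.PrimaryHDU(defect_mask.astype(np.uint8)).writeto("defect_mask.fits", overwrite=True)
```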
Is there a simpler approach, for instance interpolating over pixels that are highly negative following background subtraction? Or is this a more fundamental problem that needs to be addressed by going back to the original dark/flat subtraction?
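To make the "simpler" option concrete, a rough per-image sketch is below: mask pixels far below the local background in a single background-subtracted image and interpolate over them with a small kernel. The file name, the 5-sigma cut and the kernel width are illustrative assumptions; a proper solution inside the stack would presumably set mask planes rather than edit the pixel values directly.

```python
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from astropy.convolution import Gaussian2DKernel, interpolate_replace_nans

image = fits.getdata("calexp.fits").astype(float)   # hypothetical background-subtracted image
_, median, std = sigma_clipped_stats(image)
image[image < median - 5.0 * std] = np.nan           # flag highly negative pixels
fixed = interpolate_replace_nans(image, Gaussian2DKernel(x_stddev=2.0))
```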
I think the most thorough course of action is to enable processing of both the original VISTA pipeline stacks and the ‘to some extent raw’* exposures and to compare the performance of the final photometry. At present the stacks are producing coadds that appear to be far better behaved, but there are lingering concerns about measuring the PSF from a coadd rather than from the individual exposures.
Many thanks for everyone’s ongoing support,
Raphael.
*‘To some extent raw’ in that I am currently using exposures that have had dark and flat corrections and the original pipeline ISR applied, but I can go further back to the true raw exposures if required, all under the proviso that the previous processing involved much work over many years and I have exactly two years of FT to produce the best possible product.