Handling images from multiple instruments in the stack

In some cases it can be useful to run a task / pipeline on a set of images coming from different instruments. For instance, one can think of running a simultaneous astrometric fit on HSC + CFHT images.
Pierre Astier is also suggesting another use case where we may want to use older images together with LSST ones in order to increase the lever arm for proper motion detection.
I have the impression that the current stack design does not allow this, as we have to specify the instrument before running any task.
Is this something foreseen in the development plans?

I know a few of us (me, @price, @RHL) have always expected that the stack would eventually support processing data from multiple cameras jointly, but I think it has always been unclear who is responsible for delivering it and on what timescale. Some of this clearly fits into Butler development, but it goes well beyond that.

This definitely seems like an excellent use-case. Is it even possible with the current design?

Not without some hacking. But you can build coadds from different instruments by running the pipeline completely independently on those two datasets, using the same skymap. Then you make some symlinks so the rest of the pipeline treats all the coadds as if they came from the same camera, and it mostly just works from there.
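To make that trick concrete, here is a rough sketch of the symlinking step, assuming a typical butler repository layout (all paths and band names below are made up; the real layout depends on the mapper):

```python
import os

# Two reruns processed independently (but with the same skymap), plus a
# combined area that the coadd-level pipeline will be pointed at.
# All paths here are hypothetical.
hsc_coadds = "/data/hsc/rerun/coadd/deepCoadd/HSC-G"
cfht_coadds = "/data/cfht/rerun/coadd/deepCoadd/u"
combined = "/data/combined/rerun/coadd/deepCoadd"

os.makedirs(combined, exist_ok=True)
for src in (hsc_coadds, cfht_coadds):
    dst = os.path.join(combined, os.path.basename(src))
    if not os.path.exists(dst):
        # Symlink each per-band coadd directory so downstream tasks see a
        # single repository containing coadds from both cameras.
        os.symlink(src, dst)
```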

Supporting multiple input repositories, each of which could have its own camera, is an explicit goal of the Butler. As Jim says, once we get to the coadd stage, there is much less about a given dataset that’s specific to a camera (although filters are).
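Just to make that goal concrete, multi-repository input might eventually look something like the sketch below. This is not how the Butler works today; the keyword arguments and the exact form are my assumptions about where the design is heading, not an existing API.

```python
from lsst.daf.persistence import Butler

# Hypothetical usage: one Butler reading from two input repositories, each
# with its own camera, and writing to a single output repository.
butler = Butler(inputs=["/data/hsc/rerun/coadd", "/data/cfht/rerun/coadd"],
                outputs="/data/combined/rerun/multiband")

# Coadd-level products would then be addressed by skymap coordinates,
# largely independently of which camera the inputs came from.
coadd = butler.get("deepCoadd", tract=0, patch="5,5", filter="g")
```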

One question that would eventually need to be settled is whether the data products of any step that processed data from more than one camera would need to have information or dataIds that refer to one or all of the cameras in some way. If the data products are always camera-independent, this becomes a lot easier.
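To illustrate the two options (everything in this sketch is invented; it is only meant to show the shape of the question):

```python
# Option 1: a purely camera-independent dataId for a coadd-level product.
camera_independent = {"tract": 8766, "patch": "5,5", "filter": "g"}

# Option 2: the same product, but with the contributing cameras recorded.
# Whether this belongs in the dataId at all, or only in provenance metadata,
# is exactly the open question.
camera_tagged = {"tract": 8766, "patch": "5,5", "filter": "g",
                 "cameras": ["HSC", "MegaCam"]}
```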

@jbosch Assuming I had calibrated exposures from two cameras (e.g., DECam and HSC), is it possible right now to write code that simultaneously models (say) the shape and the SED of an object? Are we keeping enough metadata in the calexps to know about bandpasses (e.g., color terms)?

We may have this situation with the u band if we end up with a heterogeneous focal plane.

The simultaneous modeling would require a multifit implementation as well as the sort of butler improvements that (as @ktl mentioned) are already on the table. And there’s not quite enough information about bandpasses there now, but that just reflects the fact that the Filter object we attach to Exposure is really just a placeholder right now.

This should be quite possible with the software we were already planning to build (there’s no new scope) - we just haven’t built much of it yet.

It’s scary how you knew what I was asking :).

Thanks!

To expand a little, a Filter is, as Jim says, just a placeholder (it has a little bit of information, such as central wavelength, mostly as a proof of principle). It needs to be extended to handle filter properties as a function of focal-plane position and time (think atmospheric absorption).
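For reference, this is roughly the level of information a Filter carries today; the repository path and dataId are made up, and the exact method names should be checked against afw:

```python
from lsst.daf.persistence import Butler

butler = Butler("/data/hsc/rerun/processed")        # hypothetical repository
calexp = butler.get("calexp", visit=1234, ccd=50)   # hypothetical dataId

# The Filter attached to the Exposure carries only a few scalar properties
# (e.g. an effective wavelength); nothing varies with position or time.
filt = calexp.getFilter()
prop = filt.getFilterProperty()
print(filt.getName(), prop.getLambdaEff())
```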

However, I don’t think this is central to the question asked. Filters are associated with Exposures, so there’s already no assumption that all the data has the same filter. The way that we currently do photometric calibration (transforming the catalogue into the natural system of the data) is also OK – we need to define the system we are working in. What isn’t present is the ability to ask what the counts taken at this position, with this device, on this night mean for an assumed SED – that requires tracking SEDs and system throughputs. It’s something LSST needs to do to reach our SRD goals for photometry.
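For concreteness, the transformation into the natural system is essentially a color-term correction of this form (the coefficients below are invented; the real ones live in per-camera configuration in the obs_* packages):

```python
def to_natural_system(mag_primary, mag_secondary, c0=0.0, c1=0.0, c2=0.0):
    """Transform reference-catalogue magnitudes into the natural system of
    the instrument:  m_nat = m_primary + c0 + c1*color + c2*color**2,
    where color = m_primary - m_secondary.
    """
    color = mag_primary - mag_secondary
    return mag_primary + c0 + c1 * color + c2 * color ** 2

# e.g. a made-up g-band color term using g and r reference magnitudes:
g_natural = to_natural_system(20.3, 19.8, c0=0.01, c1=-0.05, c2=0.0)
```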

@RHL While Filter objects are attached to Exposures, I’m worried about what the output should look like for data that comes from different cameras that presumably have different Filters (even if they have the same name). Do both Filters have to be attached to the result in some way? What filter-related keys (if any) would you want in the data id for the output?

Well, images have well defined Filters (combinations of the input filters). Are you worried about provenance or image processing?

I’m worried about both provenance and downstream processing. If a well-defined, singular Filter object can be attached to any result images, that helps quite a bit. Such Filter objects are currently defined in obs_* packages, but presumably a combined Filter would be newly-defined (and named).

I think it can always be done – the Filter defines the passband at a given point in a detector for a specific hardware configuration, so there’s no fundamental problem in defining the filter for a coadd.
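As a toy numerical illustration of what defining the filter for a coadd could mean (all curves and weights below are invented): the effective passband could be the coadd-weight-averaged system throughput of the inputs.

```python
import numpy as np

# Stand-in g-band throughput curves for two cameras, on a common wavelength
# grid (nm); both the curves and the coadd weights are made up.
wavelength = np.linspace(350.0, 600.0, 251)
hsc_g = np.exp(-0.5 * ((wavelength - 475.0) / 50.0) ** 2)
megacam_g = np.exp(-0.5 * ((wavelength - 480.0) / 55.0) ** 2)
weights = np.array([0.6, 0.4])

# The coadd's effective passband at this point, as a weighted mean of the
# contributing passbands.
coadd_passband = np.average([hsc_g, megacam_g], axis=0, weights=weights)
```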

This doesn’t mean that we are out of the woods, of course, as e.g. native pixel scales will be different.

I’d like to see some experiments with different cameras (e.g. HSC and CFHT MegaCam) to explore what is really involved, and while I’m not sure that @jbecla will sign off on this for LSST in the short term, it’s something that HSC might be able to do (as we have u-band CFHT and JHK from VISTA for parts of the HSC survey). The part of the work to support more complex Filters (i.e. functions of time and space) is certainly in LSST’s scope due to our photometric requirements.