Stack workflow instructions

Is dmtn-023 still the best set of instructions for the steps needed to take a set of raw data files through the stack?

Following those instructions, and using our own obs package, we have so far:

  • ingested raw data files
  • constructed master bias, dark, flat
  • ingested calibs
  • run singleFrameDriver
    and ended up with a catalogue of sources with astrometry (which appears to be in radians), fluxes (in counts), and a photometric zeropoint.

dmtn-023 instructs that Joint Calibration is the next step. Is jointcal now developed enough to work with other obs packages? If not, can we go straight on to the coaddition stage (i.e., make(Discrete)SkyMap and coAddDriver)?

Also, what post-coaddition steps are developed enough to try with our data? And how can we end up with a catalogue with more “usual” astrometry (i.e., degrees or hh:mm:ss) and photometry (i.e., mags) than is output with singleFrameDriver?

(Apologies for the bombardment of questions)

Thanks!

jointcal should currently be able to run on any obs package that has a well-populated VisitInfo. What obs package are you using?

I will be releasing a beta testing version of jointcal in the next week or so, once I complete work on DM-8552.

If jointcal isn’t ready, you can proceed to coaddDriver. The astrometry and photometry won’t be as good as with jointcal, but you can start to get the mechanics of coaddition working. makeSkyMap should be run before coaddDriver, and may even need to be run before jointcal (if jointcal works on tracts; I don’t know).

  • Multiband measurements for static sky science: use multiBandDriver.py. If you have only a single band, then you don’t need the full multiband measurement scheme, but it doesn’t look like we have a way to switch off the unnecessary pieces (you’re welcome to file a ticket for a feature request).
  • Forced measurement on CCDs: once you have a reference catalog from multiband, you can use forcedPhotCcd.py to make forced measurements on your CCDs. I don’t think we have a pipe_drivers wrapper for that which would make it easier to run on a cluster (again, file a ticket if you need that functionality; it wouldn’t be hard). There’s a sketch of reading these outputs back through the Butler just after this list.
  • Image subtraction: subtract coadds from warps, using imageDifference.py. I don’t know much about the state of this, and there’s no pipe_drivers wrapper either.
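
In case it helps, here’s a rough sketch in Python of pulling the first two sets of outputs back out through the Butler. The dataset names (deepCoadd_ref, forced_src), dataId keys, and repository path are my assumptions based on an HSC-like obs package, so check them against your own repository rather than taking them literally:

    # Illustrative only: dataset names, dataId keys, and the repo path are
    # assumptions for an HSC-like obs package, not tested values.
    import lsst.daf.persistence as dafPersist

    butler = dafPersist.Butler("/path/to/output/rerun")

    # Reference catalogue produced by the multiband sequence (multiBandDriver.py)
    refCat = butler.get("deepCoadd_ref", tract=0, patch="5,5", filter="r")

    # Per-CCD forced measurements written by forcedPhotCcd.py
    forced = butler.get("forced_src", tract=0, visit=123456, ccd=0)

    print(len(refCat), len(forced))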

You’ll have to do your own post-processing. If your science system can’t handle radians then I suggest you dump it or fix it to make it configurable. I believe there are some transform*Measurement.py scripts that will apply photometric calibrations to give you fluxes or mags.

Yes, jointcal requires a skymap.

Thanks for the quick responses, we’ll go ahead with makeSkyMap and coadding.

We’ve built our own for a high-cadence survey telescope we’re commissioning in the next few months. It’s here and is largely a stripped-down version of obs_subaru.

Great. We’ll look forward to that.

Ok. Of course, it’s trivial for us to convert; I just thought this might already be implemented somewhere in the stack, since presumably the public-facing LSST database will have positions in degrees(?).

Thanks again

It looks like your makeSwaspRawVisitInfo.py is not significantly modified from the obs_subaru one. In particular, your observatory value is almost certainly wrong (unless your telescope is at the exact same location as Subaru!). I’m assuming you do have all of the other listed fields correctly filled out in your raw files (otherwise singleFrameDriver should fail), but jointcal does need many of those fields to be correct for your observatory/observation.
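
If it helps, swapping in your own site should just mean constructing a different afw Observatory in makeSwaspRawVisitInfo.py. This is only a sketch: the numbers are placeholders, and exactly where it plugs in depends on how your subclass is written.

    # Placeholder values only; substitute your telescope's real longitude,
    # latitude, and elevation, and set it wherever makeSwaspRawVisitInfo.py
    # currently carries the Subaru value.
    import lsst.afw.geom as afwGeom
    from lsst.afw.coord import Observatory

    MY_OBSERVATORY = Observatory(
        -17.88 * afwGeom.degrees,   # longitude (placeholder)
        28.76 * afwGeom.degrees,    # latitude (placeholder)
        2396.0,                     # elevation in metres (placeholder)
    )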

Also, depending on the reference catalog you’re using, jointcal may or may not give a major improvement in astrometric fit at present.

How exactly are you trying to use the output? Are you going through the Butler, or are you trying to read the FITS catalogs directly (not recommended)?

Our catalog positions are Coord objects that contain afw.Angles, which can be converted on the fly via e.g. coord.getLatitude().asDegrees(). We have plans to eventually make these interoperate with astropy.Quantity, but that’s quite a ways off.
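
For example, here’s a minimal sketch of reading a source catalogue back through the Butler and printing positions in degrees (the repository path and dataId keys are placeholders for whatever your obs package uses):

    # Minimal sketch: repo path and dataId keys are placeholders.
    import lsst.daf.persistence as dafPersist

    butler = dafPersist.Butler("/path/to/output/rerun")
    sources = butler.get("src", visit=123456, ccd=0)

    for record in sources[:5]:
        coord = record.getCoord()   # an afw Coord holding afw Angles
        raDeg = coord.getLongitude().asDegrees()
        decDeg = coord.getLatitude().asDegrees()
        print("%.6f %.6f" % (raDeg, decDeg))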

Similarly for fluxes/magnitudes, you want to use the Calib object to convert from calibrated instrument fluxes to magnitudes. Given a calib object from the calexp and a flux (e.g. catalog['slot_PsfFlux_flux']) from the source catalog, you can get an AB magnitude via calib.getMagnitude(flux) and a flux in Janskys via fluxFromABMag(calib.getMagnitude(flux)).
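
Putting that together, something along these lines should work (again, the repo path and dataId are placeholders):

    # Minimal sketch: convert a PSF flux to an AB magnitude via the calexp's
    # Calib, then to a flux in Janskys. Repo path and dataId are placeholders.
    import lsst.daf.persistence as dafPersist
    from lsst.afw.image import fluxFromABMag

    butler = dafPersist.Butler("/path/to/output/rerun")
    dataId = dict(visit=123456, ccd=0)

    sources = butler.get("src", dataId)
    calib = butler.get("calexp", dataId).getCalib()

    flux = sources[0].get("slot_PsfFlux_flux")   # flux in counts
    mag = calib.getMagnitude(flux)               # AB magnitude
    fluxJy = fluxFromABMag(mag)                  # flux in Janskys
    print("%.3f mag, %.4g Jy" % (mag, fluxJy))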

Does this help?

Yes, we simply copied makeSwaspRawVisitInfo.py from obs_subaru when we upgraded to v12.1 from our previous version (and made the small edits you see to stop it from crashing). It was always something we knew we’d need to go back to. At present, our top priority is to investigate (a) whether we can even use the stack on our data and (b) whether the LSST stack is appropriate for our needs (we’re developing our own in-house pipeline as well), rather than getting robust outputs. We’ll have to go through thorough rounds of testing and honing once we understand what the stack will give us. At present, it’s a major achievement if it runs without crashing!

Originally, I was just reading the FITS catalogs directly, but my student figured out how to use the butler to access them.

Yes. That very much does help. Thank you. We’ve searched around for documentation, but we seem to find only (often incomplete) snippets distributed across different places online (Doxygen, a few dmtns, jbosch’s butler tutorial). I know writing documentation is currently a priority; have we just been looking in the wrong places?

Certainly an achievement! We’re all curious whether you can make good use of the stack outputs.

Yes, there is sadly no good high-level overview of how to work with the output of the stack, beyond what you’ve mentioned. Some of that is because our interfaces and algorithms are currently in flux, and some of it is because nobody’s taken the time to write said documentation yet. It’s unfortunate, but not entirely unexpected given that the stack is still very much under development. There is an “LSST the Docs” project in the works that will make finding existing documentation easier, and give new docs a better place to live.