Has the source association workflow changed in v29.2.0?

In the previous version (v24), we organized the image subtraction and source association pipeline as the following sequence of tasks:

  • retrieveTemplate
  • subtractImages
  • detectAndMeasure
  • transformDiaSrcCat
  • diaPipe

This workflow worked well in v24.
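
For reference, we declared this in a pipeline YAML along the following lines (a trimmed sketch, not copied verbatim from our file; the task class paths are the standard ip_diffim/ap_association ones as best we recall, so please check them against your own installation):

```yaml
description: v24-style image subtraction and source association (sketch)
tasks:
  retrieveTemplate:
    class: lsst.ip.diffim.getTemplate.GetTemplateTask
  subtractImages:
    class: lsst.ip.diffim.subtractImages.AlardLuptonSubtractTask
  detectAndMeasure:
    class: lsst.ip.diffim.detectAndMeasure.DetectAndMeasureTask
  transformDiaSrcCat:
    class: lsst.ap.association.TransformDiaSourceCatalogTask
  diaPipe:
    class: lsst.ap.association.DiaPipelineTask
```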

However, after upgrading to v29.2.0, we find that the diaPipe task no longer runs successfully. It fails due to missing input connections:

  • preloadedDiaObjects
  • preloadedDiaSources
  • preloadedDiaForcedSources

We are now confused about these new required inputs. Could you please clarify:

  1. How should these preloaded datasets be generated?
  2. Has the recommended workflow for source association changed in v29.2.0?
  3. Is there a way to replicate the old behavior, or do we need to adapt to a new process flow?

Thank you for your guidance!

Most of the changes are just renames, but as you found, loading sources from the APDB has been moved out of diaPipe and into a new task, loadDiaCatalogs. This change was necessary for speed in Prompt Processing, since loading diaSources and diaObjects from the APDB can be done in preload, before the image arrives.

However, it does complicate offline processing, since care must be taken to ensure that the catalogs for a given visit are loaded only after association of the previous visit is complete. If you are running a large job in batch using BPS, use the job-ordering feature described at https://pipelines.lsst.io/modules/lsst.ctrl.bps/quickstart.html#job-ordering
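
Structurally, the change amounts to adding a loadDiaCatalogs task ahead of diaPipe, whose outputs feed the new preloaded* connections you listed. A minimal sketch (the class paths below are from ap_association; double-check them against your installed v29.2.0 rather than trusting my memory):

```yaml
tasks:
  loadDiaCatalogs:
    # Reads diaObjects, diaSources, and diaForcedSources from the APDB;
    # in Prompt Processing this runs during preload, before the image arrives.
    class: lsst.ap.association.LoadDiaCatalogsTask
  diaPipe:
    # Now consumes the preloaded catalogs (preloadedDiaObjects, etc.)
    # instead of querying the APDB itself.
    class: lsst.ap.association.DiaPipelineTask
```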
I would strongly encourage you to use one of our full pipelines, either AP (in ap_pipe) or DRP (in drp_pipe). If you do, you will pick up changes as they happen. There are also a few other new tasks added between v24 and v29: rbClassify, which computes reliability scores for diaSources, and filterDiaSrcCat, which filters out likely-junk diaSources based on flags.

You will also want to set the config doSolarSystemAssociation=False for diaPipe, since the task that loads predicted solar system object positions currently only works for Prompt Processing with live Rubin data. We hope to extend that capability to historical data in v30.
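
For example, importing the full AP pipeline and turning off solar system association might look like the following (the import location is illustrative; ap_pipe ships per-instrument pipeline files, so pick the one matching your camera):

```yaml
description: AP pipeline with solar system association disabled (sketch)
imports:
  # Illustrative path; substitute the pipeline file for your instrument.
  - location: $AP_PIPE_DIR/pipelines/LSSTComCam/ApPipe.yaml
tasks:
  diaPipe:
    class: lsst.ap.association.DiaPipelineTask
    config:
      doSolarSystemAssociation: false
```

The same override can also be applied on the command line with `pipetask run ... -c diaPipe:doSolarSystemAssociation=False`.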


@isullivan One complication in this discussion is that I believe @caimx is asking from the point of view of an external developer trying to get our pipelines to run on their telescope. Our release notes are sometimes a bit opaque about describing changes to internal recipe organization. Going from v24 to v29 is also going to be a huge leap since a lot has changed.

Looking at our complete recipes should provide clues as to how to modify things.


Thank you both for the detailed and very helpful explanation! We appreciate the guidance and the context about the evolution from v24 to v29 — it helps us better align with the current design philosophy of the LSST pipelines.

Thank you again for your support!