Dear all,
First, I want to thank everyone who participated in the work and writing of the DP1 paper. I found it extremely well written and informative.
We are working on a pipeline for searching for optical counterparts to high-energy transients. For this purpose, we have prepared a set of visit images injected with fake sources, initially using data from DP0.2 and now from DP1. We are using these images to test custom detection pipelines that will be fine-tuned for the sources of interest in our research.
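For context, the injection step looks roughly like the sketch below. I am using lsst.source.injection here purely for concreteness; the repo path, dataIds, sky limits, and magnitudes are placeholders, and the exact call signatures may differ between pipeline versions.

```python
# Minimal sketch of per-visit fake-source injection (illustrative only:
# repo path, dataIds, sky limits, and magnitudes are placeholders).
from lsst.daf.butler import Butler
from lsst.source.injection import (
    VisitInjectConfig,
    VisitInjectTask,
    generate_injection_catalog,
)

butler = Butler("/path/to/repo", collections="LSSTComCam/DP1")  # placeholders
calexp = butler.get("calexp", visit=192350, detector=4)  # dataset type may differ in DP1

# Grid of fake stars roughly covering the visit footprint (values made up).
injection_catalog = generate_injection_catalog(
    ra_lim=[53.0, 53.2],
    dec_lim=[-28.2, -28.0],
    number=50,
    source_type="Star",
    mag=[21.0, 22.0, 23.0],
)

# Inject the fakes into a copy of the calibrated exposure.
inject_task = VisitInjectTask(config=VisitInjectConfig())
injected = inject_task.run(
    injection_catalogs=[injection_catalog],
    input_exposure=calexp.clone(),
    psf=calexp.getPsf(),
    photo_calib=calexp.getPhotoCalib(),
    wcs=calexp.getWcs(),
)
injected_exposure = injected.output_exposure  # visit image with fakes added
```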
Our approach so far has been to take bits and pieces from the LSST Science Pipelines (e.g., routines from the diffim package) and use them to produce custom catalogues, tables, and data structures to be searched later for interesting transients; a rough sketch of what this looks like follows the list below. This has some unfortunate consequences:
- We are writing a lot of code that is probably reinventing the wheel and is neither as efficient nor as battle-tested as what is already in place.
- Our data products do not conform to those produced by the LSST pipelines, which means we need to develop separate analysis pipelines and glue code to handle both our custom data products and the official LSST data products.
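As promised, here is a rough sketch of the kind of code we end up writing: individual diffim tasks called by hand, with their outputs flattened into our own structures. The task names come from recent versions of lsst.ip.diffim, but the run() signatures below are from memory and may not match the current pipelines exactly.

```python
# Rough sketch of the "bits and pieces" approach (run() signatures from memory;
# they may differ between Science Pipelines versions).
from lsst.ip.diffim import AlardLuptonSubtractTask, DetectAndMeasureTask


def run_custom_dia(science, template, science_sources):
    """Difference one (science, warped-template) pair and return our own
    table of difference-image detections.

    All arguments are in-memory objects read from a Butler elsewhere:
    `science` and `template` are ExposureF, `science_sources` a SourceCatalog.
    """
    # PSF-matched image subtraction (Alard & Lupton style kernel fit).
    subtract_task = AlardLuptonSubtractTask()
    sub = subtract_task.run(template, science, science_sources)

    # Detection and measurement on the difference image.
    detect_task = DetectAndMeasureTask()
    det = detect_task.run(science, sub.matchedTemplate, sub.difference)

    # This is where we leave the official data model: instead of standard
    # DiaSource tables we flatten everything into our own DataFrame.
    return det.diaSources.asAstropy().to_pandas()
```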
Now my question: is it feasible to run the whole DIA pipeline on a (small-sized) set of user-provided visit and template images, using custom configurations, and receive as output fresh (and small-ish) versions of, at least, the DiaSource, DiaObject, and ForcedSourceOnDiaObject catalogs?
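For concreteness, what I am imagining (and hoping is possible) is something along the lines of the sketch below: a small personal Butler repo holding the injected visits and templates, plus a single pipetask invocation of an AP/DIA pipeline definition with a few config overrides. The pipeline YAML path, collection names, config override, and data query are guesses/placeholders on my part, not something I have found documented for this use case.

```bash
# Hypothetical sketch only: pipeline YAML path, collections, the config
# override, and the data query are placeholders, not documented values.
pipetask run \
    --register-dataset-types \
    -b /path/to/my_small_repo \
    -i u/peppe/injected_visits,u/peppe/templates \
    -o u/peppe/custom_dia \
    -p "$AP_PIPE_DIR/pipelines/LSSTComCam/ApPipe.yaml" \
    -c "someDiaTaskLabel:some.config.field=42" \
    -d "instrument='LSSTComCam' AND visit IN (192350) AND detector=4"
```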
If this is achievable, does anyone have a pointer on where to start looking for a hook?
I have checked the docs, the official DP0.2/DP1 tutorials, and the community forum search, but could not find anything answering this question.
Thank you so much,
Peppe