Questions regarding real-time data reduction

Dear all,

I was wondering if I could ask a few questions about the LSST data reduction pipelines.

For reference, I have no relation to LSST but have recently been reading some of the technical papers out of interest — I am a physics undergraduate with some background in software instrumentation and image reconstruction for radio interferometers (e.g., arxiv.org/abs/1708.00720).

Here are my questions:

  • Does the computational cluster for the real-time data reduction pipelines have a well-defined set of hardware yet? If so, does it include GPUs? What is the estimated data rate per node?
  • How much automation will there be between LSST transient detection and, e.g., LCO follow-up?
    • If there is some sense of a “priority metric,” how is the observing priority (for LCO) defined based on the observation? That is, will there be real-time data analysis to classify transients? If so, would more computational power translate into more accurate priority levels?
  • Referencing Ivezić et al. (2016): “Cadence optimization: are two visits per night really needed? Would perhaps a substantial increase in the computing power solve the association problem with just a single detection per night?”
    • To clarify, does the computing power here refer to image processing power to find sources from their predicted trajectories based on limited data (which may instead be easier to fit with more than one visit/night)?
  • I have read some mentions of compressive sampling used for analysis of transient variability. Does this mean that the cadence optimization will involve some level of explicit randomness in the time duration between passes to satisfy incoherence?

Thank you in advance.
Best regards,

Miles Cranmer

Hi Miles:

Let me give it a shot:

Yes, the hardware is assumed to be relatively “vanilla” x86-64 CPUs. We don’t believe there’s a need for special accelerators (GPUs, Phis, etc.) at this time.

This will depend on LCO and community organization. LSST will transmit alerts for all potential transients, and it’s certainly possible to fully automate follow-up, but whether one does that will depend on what each particular observatory wishes to do (and how pure our stream is, etc.).

This is not about image processing; it’s more about the combinatorial problem of “connecting the dots” along Keplerian orbits in so-called “MOPS” codes. Take a look at https://www.slideshare.net/MarioJuric/lsst-solar-system-science-mops-status-the-science-and-your-questions, and also Kubica et al. (2007) (https://arxiv.org/pdf/astro-ph/0703475.pdf).
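To give a flavor of why this is combinatorial rather than image-processing work, here is a toy sketch (purely illustrative — not the actual MOPS code; the function name, thresholds, and flat-sky approximation are my own assumptions) of the first step: pairing single-night detections into candidate “tracklets” whose implied sky motion is physically plausible.

```python
import numpy as np

def link_tracklets(detections, max_velocity=1.0):
    """Toy tracklet builder (illustrative only, not MOPS).

    detections: list of (t_days, ra_deg, dec_deg) tuples.
    Returns index pairs whose implied apparent motion is
    below max_velocity (deg/day).
    """
    pairs = []
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            t1, ra1, dec1 = detections[i]
            t2, ra2, dec2 = detections[j]
            dt = abs(t2 - t1)
            if dt == 0:
                continue  # same-epoch detections can't form a tracklet
            # Small-angle, flat-sky approximation for illustration.
            dist = np.hypot((ra2 - ra1) * np.cos(np.radians(dec1)),
                            dec2 - dec1)
            if dist / dt <= max_velocity:
                pairs.append((i, j))
    return pairs

dets = [(0.00, 10.00, 5.00),   # asteroid, first visit
        (0.04, 10.02, 5.01),   # same asteroid, ~0.6 deg/day later
        (0.04, 14.00, 5.00)]   # unrelated detection, far away
print(link_tracklets(dets))    # → [(0, 1)]
```

Even this naive pairing is O(N²) in the number of detections, and linking tracklets *across* nights into orbits grows much faster — which is why a second visit per night (giving velocity information up front) so dramatically prunes the search, and why “just add computing power” is a real trade-off question.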

I think that the cadence optimization sims do try not to prefer a specific time scale (or to create the potential for aliasing), but I’d leave it up to @ivezic or @ljones to answer in more detail.
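As a quick sketch of why jitter in the revisit times matters (my own illustration of the aliasing point, not an actual LSST cadence sim): a strictly regular cadence with interval T cannot distinguish a signal at frequency f from its alias at f + 1/T, while an irregular cadence breaks that degeneracy.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                      # nominal revisit interval (days)
f = 0.3                      # true signal frequency (cycles/day)
n = np.arange(50)

t_regular = n * T                                          # strict cadence
t_jitter = t_regular + rng.uniform(-0.2, 0.2, size=n.size)  # jittered cadence

def signal(t, freq):
    return np.sin(2 * np.pi * freq * t)

# Regular sampling: f and f + 1/T give *identical* samples (aliasing),
# since sin(2*pi*(f + 1/T)*n*T) = sin(2*pi*f*n*T + 2*pi*n).
print(np.allclose(signal(t_regular, f), signal(t_regular, f + 1/T)))  # True

# Jittered sampling: the two frequencies are now distinguishable.
print(np.allclose(signal(t_jitter, f), signal(t_jitter, f + 1/T)))    # False
```

This is the same intuition behind incoherent sampling in compressive sensing: irregular time spacing spreads aliases out instead of stacking them at discrete frequencies.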

Hope this helps!

Hi @mjuric,

Thank you for your detailed response and for answering all of my questions.
I will have a look at the references you mentioned as well.

Thanks again,
Miles
