Handling your Filtered Alerts

Suppose you have an algorithm to decide whether an LSST object is interesting enough to consider for follow-up observation with a large telescope. You start by splitting the algorithm into primary and secondary filters: the primary might choose only bright objects from a watchlist, objects with a host galaxy, or fast hot risers; the secondary is code that analyses the light curve and other parameters in more detail, perhaps fitting a model or running a machine-learning algorithm.

Lasair is a platform for the primary filter: it offers Sherlock crossmatching with many catalogues, matching against your own uploaded watchlist, fitting of an explosion model (BBB), colour temperature, and many other ways to filter. Alerts that pass are then delivered in near real time via machine-readable Kafka.

Your secondary filter receives the alerts that pass the primary filter, which may include the light curve in six filters, and selects the most interesting of them. There are a number of ways to handle the objects deemed interesting by the secondary filter.
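As a sketch of what a secondary filter might look like: the function below applies a toy brightness-and-rise-rate cut to an alert dictionary. The field names (`latest_mag`, `rise_rate`) and the consumer setup in the comments are illustrative assumptions, not the exact Lasair schema.

```python
import json

def secondary_filter(alert):
    """Toy secondary filter: keep objects that are bright and rising fast.
    Field names here are illustrative, not the exact Lasair schema."""
    latest_mag = alert.get("latest_mag")
    rise_rate = alert.get("rise_rate")  # magnitudes per day (hypothetical field)
    if latest_mag is None or rise_rate is None:
        return False
    return latest_mag < 19.0 and rise_rate > 0.3

# In production you would consume your primary filter's Kafka stream,
# for example with the lasair client package (check its docs for details):
#   from lasair import lasair_consumer
#   consumer = lasair_consumer(kafka_server, group_id, my_topic)
# and call secondary_filter(json.loads(msg.value())) on each message.

if __name__ == "__main__":
    alert = {"objectId": "ZTF23abcdefg", "latest_mag": 18.2, "rise_rate": 0.5}
    print(secondary_filter(alert))  # True
```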

  • Annotate Lasair. Results from the secondary filter are bundled into a classification and a JSON packet that is sent back to Lasair. Others can then run their own web/API queries to discover these objects, and further filters can select annotated objects as they receive new alerts. If you ask the Lasair team, we can make your annotator “fast”, meaning that as soon as an object is annotated, the filters that use it run immediately and their Kafka streams are updated.
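The annotation round trip might look like the following. The payload builder is a hypothetical helper, and the `annotate` call shown in the comments should be checked against the current Lasair API documentation for the exact argument names and schema.

```python
import json

def build_annotation(object_id, classification, score, explanation=""):
    """Bundle secondary-filter output into a JSON-serialisable annotation.
    The field layout is illustrative; consult the Lasair API docs for the
    exact schema your annotator should send."""
    return {
        "objectId": object_id,
        "classification": classification,
        "explanation": explanation,
        "classdict": {"score": score},  # free-form JSON carrying your results
    }

# Sending it back (sketch; requires an API token from your Lasair account):
#   from lasair import lasair_client
#   L = lasair_client(token)
#   a = build_annotation("ZTF23abcdefg", "SN-candidate", 0.92)
#   L.annotate(my_annotator_topic, a["objectId"], a["classification"],
#              explanation=a["explanation"], classdict=a["classdict"])

if __name__ == "__main__":
    print(json.dumps(build_annotation("ZTF23abcdefg", "SN-candidate", 0.92)))
```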

  • Automated message to a Slack channel. Your secondary filter posts messages automatically via a Slack webhook, so you and your group can see them and start a discussion, with an eye to follow-up observations.
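A minimal posting helper for this, using only the standard library. The webhook URL comes from Slack's Incoming Webhooks configuration for your channel; the Lasair object URL embedded in the message is illustrative.

```python
import json
import urllib.request

def format_slack_message(object_id, classification, score):
    """Build the JSON payload Slack's incoming webhooks expect: {"text": ...}."""
    text = (f"Candidate *{object_id}*: {classification} (score {score:.2f})\n"
            f"https://lasair-ztf.lsst.ac.uk/objects/{object_id}/")  # illustrative URL
    return {"text": text}

def post_to_slack(webhook_url, message):
    """POST the message dict to a Slack incoming webhook; True on success."""
    data = json.dumps(message).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

In practice the secondary filter calls `post_to_slack(webhook_url, format_slack_message(...))` for each object it keeps.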

  • Build a stream with Hopskotch or another Kafka-based streaming producer.
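A sketch of producing your own stream: the serialiser below is plain JSON, and the commented-out producer code assumes the confluent-kafka package and a broker you control (Hopskotch users would use the hop-client package instead).

```python
import json

def serialize_alert(object_id, classification, extra=None):
    """Serialise a secondary-filter result as a Kafka message value (bytes)."""
    record = {"objectId": object_id, "classification": classification}
    if extra:
        record.update(extra)
    return json.dumps(record).encode("utf-8")

# Publishing (sketch; broker address and topic name are placeholders):
#   from confluent_kafka import Producer
#   p = Producer({"bootstrap.servers": "my-broker:9092"})
#   p.produce("my-secondary-stream",
#             serialize_alert("ZTF23abcdefg", "SN-candidate", {"score": 0.92}))
#   p.flush()
```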

  • Assemble results into a spreadsheet, webpage, or marshal system, for example with xlsxwriter or YSE-PZ, which can be reviewed by you and your group before follow-up.
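As a minimal sketch, the helper below writes candidates to CSV using only the standard library; xlsxwriter follows the same row-by-row pattern if you want native .xlsx output. The column names are illustrative.

```python
import csv
import io

def candidates_to_csv(candidates):
    """Write a list of candidate dicts to CSV text for review by the group.
    Columns here are illustrative, not a fixed schema."""
    fieldnames = ["objectId", "classification", "score"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for cand in candidates:
        writer.writerow({k: cand.get(k, "") for k in fieldnames})
    return buf.getvalue()
```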

  • The secondary filter could initiate follow-up completely automatically, using the TOM Toolkit or another automated observation system.

  • Convince the Lasair team that your algorithm should become part of their codebase, thus generating new “features” that will appear in the Lasair schema and can therefore be used to build a primary filter.

  • Work with the Fink or Antares teams to add your filters and algorithms to the classification pipelines that they host, by creating a pull request.