Access to Full Alert Stream

Hello,
Apologies if this has been discussed previously. Will Lasair provide access to the full alert stream? Ultimately I'm looking for (or looking to create) the data product described here as the "Lite" version of the alert stream: one stripped of history and cutouts. My understanding is that egress bandwidth would be the major concern.

Cheers,
Alec

Hi Alec – Lasair-ZTF has had such a "lightweight" stream for a couple of years now; it's used to drive a real-time display on a Raspberry Pi. I'm sure Lasair-LSST will do something similar. The ZTF attributes are:
objectId, ra, dec, jdmin, jdmax, gmag, rmag, ncand, sherlock_classification
and you can get it with the topic lasair_2Lightweight from kafka.lsst.ac.uk. More info here, or just DM me and say what you want to do. – Roy
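For reference, here is a minimal sketch of consuming that topic with the confluent-kafka Python client. The topic and broker host are taken from Roy's post above; the port, group id, and any authentication settings are placeholders, and the messages are assumed to be JSON-serialised.

```python
# Minimal sketch: consume the lightweight Lasair stream with confluent-kafka.
# Port, group id, and auth settings are placeholders; adjust for the real broker.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    'bootstrap.servers': 'kafka.lsst.ac.uk:9092',   # port is an assumption
    'group.id': 'my-lightweight-consumer',          # choose your own, unique group id
    'auto.offset.reset': 'earliest',
})
consumer.subscribe(['lasair_2Lightweight'])

try:
    while True:
        msg = consumer.poll(timeout=10.0)
        if msg is None:
            continue
        if msg.error():
            print('Kafka error:', msg.error())
            continue
        # Assuming JSON payloads with the fields listed in Roy's post:
        # objectId, ra, dec, jdmin, jdmax, gmag, rmag, ncand, sherlock_classification
        alert = json.loads(msg.value())
        print(alert.get('objectId'), alert.get('sherlock_classification'))
finally:
    consumer.close()
```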

Hi @alec_b612, can you confirm whether @roy's response provided what you were looking for? If so, could you please mark it as the solution? If it wasn't quite what you were looking for, please provide further clarification and we can keep this thread open for additional discussion.

Hi @roy – What I'm looking for is a stream that contains every alert (both DIASource and SS), but with fewer columns. Specifically, I would like to strip out the postage stamps and object histories to cut down on bandwidth (and potentially storage, though I imagine I will serialize the data we want to keep and truncate the Kafka storage regularly). Perhaps there is a way to do this with the filter functionality?

I realise now that our filter capability is capped at 1,000 alerts out for every 40,000 in, to make sure we are not overwhelmed by people asking for ten million packets per night. I think we might be able to help, but you will need to give us a compelling reason for such a big effort.

Yes, of course – let me explain our science goals. We aim to ingest a reasonably up-to-date (daily) point-source catalog to perform additional point-source linking, both with the THOR algorithm and with our arc-extension service (ADAM::Precovery). The data we need are essentially the astrometry plus exposure metadata.

Our two main options for retrieving this large set of data are 1) working with a broker who is willing to deal with the outgoing bandwidth, or 2) performing a daily ETL via the TAP service from the prompt-processing database. I am trying to establish the most reliable means of collecting the point-source data; there is no telling whether TAP will be up for the bulk data exports. A sketch of what option 2 might look like is below.
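A minimal sketch of that daily TAP pull using pyvo, assuming an asynchronous job for the bulk query; the service URL, table name, and column names are placeholders, not the real prompt-products schema.

```python
# Sketch of a daily ETL pull over TAP with pyvo.
# The service URL, table, and columns below are placeholders for illustration.
import pyvo

service = pyvo.dal.TAPService("https://example.rubin.edu/api/tap")  # placeholder URL

# Placeholder epoch cut standing in for "sources since yesterday".
start_mjd = 60000.0
query = f"""
    SELECT diaSourceId, ra, dec, midPointTai, ccdVisitId
    FROM dia_source_table
    WHERE midPointTai > {start_mjd}
"""

# Submit as an async job, which is generally safer for bulk exports.
job = service.submit_job(query)
job.run()
job.wait(phases=["COMPLETED", "ERROR", "ABORTED"])
results = job.fetch_result().to_table()

# Serialize locally before truncating anything upstream.
results.write("dia_sources_daily.ecsv", overwrite=True)
```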

I know there is a possible third option, pulling the serialized data from a cloud bucket, but to my knowledge that is not codified. I am trying to establish the most viable option now so that we are not scrambling to build one when the data products become ready.

Looks like your Asteroid Institute is partnered with Google Cloud. I note that the Pitt-Google broker is also built on Google Cloud. If I were you, I'd want to have a chat with Michael Wood-Vasey.

Hi @alec_b612,

Were you able to get the guidance needed to address your request? I’d also like to point you to the Rubin Alerts & Brokers page for further information and contact info on the brokers, like the Pitt-Google broker Roy mentioned.

Hi @ryanlau
Yes, I think so. I'm in touch with the Pitt-Google broker and we'll discuss next week. Thank you.

Great – I'm going to go ahead and mark Roy's response on the filter capability as the solution here, but @alec_b612, if you have an update or feedback to share from your discussion, it would be great to hear about it here.