We are interested in detecting strongly lensed transients using time-series of images, which requires access not only to the nightly stream but also to past alert cutouts.
My questions are:
(1) Will Lasair provide image cutouts (reference/visit/difference) for all ~10 million alerts per night?
(2) Will Lasair store past alert cutouts (since the Prompt Products Database does not)? If yes, for how long?
(3) Will there be usage limits for individual users (e.g. maximum number of alert packets, bandwidth caps, or rate limits)?
Our plan is not to download all alerts, but to filter the nightly stream down to a much smaller candidate set. However, having access to past cutouts is critical for our science goals.
I am pleased to see that you intend to filter down the full stream, hopefully by quite a bit. Please note that one of the community resources is a watchlist of the Abell clusters which might be a good starting place to build your filter for strong lenses.
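To illustrate, here is a minimal sketch of what such a filter could look like through the Lasair client's query() call. The watchlist ID below is a placeholder (look up the Abell-clusters watchlist on the Lasair website), and the condition is a trivial stand-in for real cuts:

# Hedged sketch, not a tested recipe: select objects matching a watchlist
# via the Lasair client's query() method.
from lasair import lasair_client  # pip install lasair

L = lasair_client('your_api_token')
results = L.query(
    selected='objects.diaObjectId',
    tables='objects, watchlist:141',   # hypothetical watchlist ID
    conditions='1=1',                  # trivial condition; replace with real cuts
    limit=10)
for row in results:
    print(row['diaObjectId'])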
(1) Lasair provides image cutouts for all the alerts: target+reference+difference. You can fetch them with the Lasair API as follows:

from lasair import lasair_client  # pip install lasair
L = lasair_client('your_api_token')  # authenticate with your API token
result = L.object(diaObjectId, lasair_added=True)
urls = result['lasairData']['imageUrls']
The list urls will contain URLs pointing to 3N cutout FITS files, for the N diaSources of that diaObject. For more information, see the documentation.
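As a usage note, one way to fetch and open a cutout from those URLs might look like the sketch below. Only the 'Science' key appears later in this thread, so treat the structure of each triplet as an assumption and check the documentation:

# Minimal sketch: fetch one cutout FITS over HTTP and open it in memory.
# Assumes `urls` from the snippet above; the image is assumed to sit in
# the primary HDU.
import requests
from io import BytesIO
from astropy.io import fits

science_url = urls[0]['Science'] + '.fits'   # first epoch's science cutout
resp = requests.get(science_url, timeout=30)
resp.raise_for_status()
with fits.open(BytesIO(resp.content)) as hdul:
    print(hdul[0].data.shape)   # pixel array of the cutout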
(2) Lasair will store past cutouts for at least a month, perhaps more; after that they will be deleted, because the cutouts take a lot of storage. This means that the older URLs in the above list will stop working.
(3) The Lasair API is throttled to 100 calls per hour to prevent abuse (see the documentation). However, you can ask to join the power-user group, which allows 10,000 calls per hour.
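As a sketch of how a long-running script might respect that cap, here is a hypothetical pacing helper (not part of the Lasair client):

# Hypothetical helper: space out calls so a loop stays under 100/hour.
import time

MIN_INTERVAL = 3600.0 / 100   # 36 seconds between calls
_last_call = 0.0

def throttled(func, *args, **kwargs):
    global _last_call
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.time()
    return func(*args, **kwargs)

# usage: result = throttled(L.object, diaObjectId, lasair_added=True)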
With the alerts starting yesterday, I was trying to find a way to get the alert cutouts when I found this thread. However, the proposed solution does not seem to work for me. For example, say I want to download the image triplet for one epoch of diaObjectId = 170019696262250510. My code:
import wget  # pip install wget; L is an authenticated lasair_client
objResult = L.object(diaObjectId, lasair_added=True)
urls = objResult['lasairData']['imageUrls']
science_url = urls[0]['Science'] + '.fits'  # select the first epoch
wget.download(science_url, out='data/temp.fits')
This does seem to download something, but it is not a FITS file. The URL https://lasair-lsst.lsst.ac.uk/fits/170054922241310762_cutoutScience.fits also redirects to the Lasair homepage, not to a FITS file. I tried not appending the “.fits”, but that does not help either. I tried consulting the documentation but could not find a solution. Any help is much appreciated. I’m not sure if this matters, but I am running this code in the RSP.
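One quick sanity check for this kind of failure, assuming the file was saved to data/temp.fits as above: FITS files begin with the ASCII bytes SIMPLE, while an HTML page (such as a redirect to the homepage) does not:

# Quick check: is the downloaded file actually FITS?
with open('data/temp.fits', 'rb') as f:
    head = f.read(6)
print('Looks like FITS' if head == b'SIMPLE' else 'Not FITS: %r' % head)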
I have just seen your recent response here and it does work for me now as well. Thanks! I had one last question: is it possible to also get a cutout with the variance?
No, sorry. The cutouts already amount to a huge number of terabytes, and we will not be able to keep years of them. By not storing the variance or mask planes, we can keep four times as many cutouts.
Hi @roy,
Many thanks. I am playing with the notebooks. While downloading the cutouts, I sometimes get TimeoutError: The read operation timed out after fetching cutouts for a few MJDs; other times I get cutouts for all the MJDs without any error.
for cutouts_triple, ds in zip(result['lasairData']['imageUrls'], result['diaSourcesList']):
    print('diaSourceId=%d at MJD %.3f' % (cutouts_triple['diaSourceId'], ds['midpointMjdTai']))
    render_cutouts(cutouts_triple)
I don’t think this is related to the throttling (?). I was trying diaObjectId 170019696274833525.
Is this a server issue?
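A possible interim workaround, assuming the timeouts are transient and that render_cutouts (from the notebooks) is what raises the TimeoutError, is a retry helper along these lines (with_retries is hypothetical, not part of the Lasair client):

# Hedged sketch: retry a flaky call a few times with a growing pause.
import time

def with_retries(func, *args, retries=3, wait=5, **kwargs):
    for attempt in range(retries):
        try:
            return func(*args, **kwargs)
        except TimeoutError:
            if attempt == retries - 1:
                raise
            time.sleep(wait * (attempt + 1))  # simple linear backoff

# usage: with_retries(render_cutouts, cutouts_triple)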