Question regarding PSFs for images

Are the PSFs included in the FITS files the actual PSFs or are some of them encoded for retrieving the PSF from a database? I am referring to WarpedPSF, PsfexPSF, CoaddPSF as depicted in the image attached from an lsst_deepCoadd.

Thank you in advance!

Hi @sierrajanson , could you provide a bit more information here about where these FITS files came from and how they were made?

The PSFs that can be extracted from, e.g., a calexp exposure (or another image type) retrieved from, e.g., the data butler, are the actual PSFs for the image itself.

Yes!

The above FITS file came from Rubin, pulled from this query:
SELECT TOP 10 dataproduct_type,dataproduct_subtype,calib_level,lsst_band,em_min,em_max,lsst_tract,lsst_patch,
lsst_filter,lsst_visit,lsst_detector,lsst_ccdvisitid,t_exptime,t_min,t_max,s_ra,s_dec,s_fov,
obs_id,obs_collection,o_ucd,facility_name,instrument_name,obs_title,s_region,access_url,
access_format FROM ivoa.ObsCore WHERE dataproduct_type = 'image' AND dataproduct_subtype = 'lsst.deepCoadd_calexp' AND CONTAINS(POINT('ICRS', 62, -37), s_region)=1

I was wondering if I could be pointed in the right direction to detailed documentation about the fields in coadded images and the structure of the HDUs. I am trying to get information about the PSF so I can recreate it for a function I am using locally (external to the Rubin Notebooks).

OK great, thank you @sierrajanson for confirming these are DP0 deepCoadd images, that’s helpful.

There is not yet a DP0 tutorial that demonstrates how PSF information is included when images are obtained via the TAP service, but there is one that demonstrates how to access PSF data for images obtained via the Butler (a different image format). In case it’s useful: DP0.2 tutorial notebook 12a, “Introduction to Point Spread Function (PSF) Data Products”, can be found in the notebooks/tutorial-notebooks/ folder that is in all users’ home directories in the Notebook Aspect of the Rubin Science Platform at data.lsst.cloud. For a quick look, a rendered HTML version is also available.
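In case a code sketch helps, this is roughly the Butler-based access pattern that notebook demonstrates (a minimal sketch; the repo string and collection are the standard DP0.2 values, and the dataId is a hypothetical example):

from lsst.daf.butler import Butler

# Standard DP0.2 repo and collection, as used in the tutorials
butler = Butler('dp02', collections='2.2i/runs/DP0.2')

# Hypothetical dataId; substitute a tract/patch/band of interest
dataId = {'tract': 4431, 'patch': 17, 'band': 'i'}
coadd = butler.get('deepCoadd_calexp', dataId=dataId)

# The PSF object attached to the coadd, evaluated at the image center
psf = coadd.getPsf()
psf_image = psf.computeImage(coadd.getBBox().getCenter())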

I’d like to try and create a demo for you that shows how to get information about the PSF from FITS files obtained via ObsTAP. It would help to know what you need, or whether you’ve tried something and gotten an unexpected error.

And if anyone with experience accessing the PSF information from FITS files would like to chime in, please do.


Thank you, Melissa, for offering to make a demo; that would be quite helpful!

What I need:
I need to find the PSF for each source in a FITS image so I can pass it as a parameter to pysersic’s FitSingle class. It needs to be graphable, similar to the screenshot below (from the PSF in the examples folder of the pysersic Read the Docs/GitHub). I have to do this locally, so I can’t use any LSST resources that are notebook-specific (e.g., the Butler).

What I have tried/the information I have:
The getPsf function in the 12a tutorial notebook uses a CoaddPsf object, as shown below, so perhaps the CoaddPsf section in the FITS file should be used for reconstructing the PSF (but the PsfexPsf extension also looks promising).

[screenshot: the getPsf call from tutorial notebook 12a]

I have tried printing out the CoaddPsf extensions (there are two included in the file; it seems as though one is metadata and the other contains the actual PSF) and their columns. I can easily share screenshots of those if helpful; I’m just not sure how comfortable Rubin is with that information being public.

It appears as though someone tried to do something similar in a previous Rubin post, but the code didn’t work for the PSFEx PSF header in my image; it left me with a 2D array in which the same 3-column 1D array was repeated 255 times.

Thank you for your time :]

As per request, here’s the exact code I’m using to extract the image:

import pyvo
import os

# AUTHENTICATION FOR TAP SERVICE ----------------#
RSP_TAP_SERVICE = 'https://data.lsst.cloud/api/tap'
homedir = os.path.expanduser('~')
token_file = os.path.join(homedir,'.rsp-tap.token')
with open(token_file, 'r') as f:
    token_str = f.readline()
cred = pyvo.auth.CredentialStore()
cred.set_password("x-oauth-basic", token_str)
credential = cred.get("ivo://ivoa.net/sso#BasicAA")
service = pyvo.dal.TAPService(RSP_TAP_SERVICE, credential)

# QUERYING ------------------------------------------#
from pyvo.dal.adhoc import DatalinkResults
query = """SELECT TOP 10 dataproduct_type,dataproduct_subtype,calib_level,lsst_band,em_min,em_max,lsst_tract,lsst_patch,
       lsst_filter,lsst_visit,lsst_detector,lsst_ccdvisitid,t_exptime,t_min,t_max,s_ra,s_dec,s_fov,
       obs_id,obs_collection,o_ucd,facility_name,instrument_name,obs_title,s_region,access_url,
       access_format FROM ivoa.ObsCore WHERE dataproduct_type = 'image' AND dataproduct_subtype = 'lsst.deepCoadd_calexp' AND CONTAINS(POINT('ICRS', 62, -37), s_region)=1"""
results = service.search(query)

auth_session = service._session
fits_images = []

for i in range(len(results)):
    dataLinkUrl = results[i].getdataurl()
    dl = DatalinkResults.from_result_url(dataLinkUrl, session=auth_session)
    fits_image_url = dl["access_url"][0]
    fits_images.append(fits_image_url)

# downloading to folder
import requests

os.makedirs("rsb_fits_images", exist_ok=True)  # ensure the output folder exists
for i in range(len(fits_images)):
    response = requests.get(fits_images[i])
    if response.status_code == 200:
        with open(f"rsb_fits_images/test{i}.fits", 'wb') as file:
            file.write(response.content)
        print("File downloaded successfully")
    else:
        print(f"Failed to download file. Status code: {response.status_code}")

I haven’t used the TAP service enough to know its capabilities/limitations. But the FITS file is self-contained and has all the info needed to reconstruct the PSF at any point. The coadd PSF image is constructed lazily, on demand: the PsfexPsf objects are the PSF models for the single-visit images (calexps), which get warped to the coadd coordinate frame as WarpedPsf objects, which are then coadded to form a CoaddPsf object. Calling the computeImage method on any of these PSF objects does all of this under the hood, and this is where my uncertainty about the TAP service comes in: whether the images obtained through it still retain all of these methods attached to them.

These calculations are non-trivial, and I wouldn’t necessarily recommend doing them outside of the LSST Science Pipelines for now. We do plan on making coadded images where the coadded PSF images would be available directly, but that functionality would be available starting from DP1 onwards.
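For what it’s worth, the binary-table extensions that carry all of this are visible with plain astropy (a minimal sketch, assuming one of the files downloaded by the script above; exact extension names can vary between pipeline versions):

from astropy.io import fits

with fits.open("rsb_fits_images/test0.fits") as hdul:
    # The leading HDUs are the image, mask, and variance planes;
    # the remaining binary-table extensions hold the PSF model
    # and other self-contained metadata.
    hdul.info()
    for hdu in hdul[1:]:
        print(hdu.name, type(hdu).__name__)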

The VO services return the file as was written by butler with no modifications.


Is there a method in the LSST Science Pipelines that could take the FITS file and return a calexp or deepCoadd? Then the methods these objects have for producing a PSF model can be used:

import lsst.afw.image
import lsst.geom

exposure = lsst.afw.image.ExposureF.readFits(<path_to_fits_file>)
psf = exposure.getPsf()
psfImage = psf.computeImage(<Point2D object corresponding to the location>)

This circumvents having to access the images themselves via a Butler, but uses the LSST Science Pipelines to convert the FITS file to the appropriate in-memory objects that have the nifty methods. This is agnostic to whether the image was a calexp or a deepCoadd to begin with.
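Building on that, a hedged sketch of evaluating the PSF at a source’s sky position, which sounds like what’s needed for per-source PSFs (the file path and coordinates below are placeholders):

import lsst.afw.image as afwImage
import lsst.geom as geom

exposure = afwImage.ExposureF.readFits("rsb_fits_images/test0.fits")

# Convert a (hypothetical) source sky position to pixel coordinates
# using the WCS carried in the file, then evaluate the PSF there.
coord = geom.SpherePoint(62.0, -37.0, geom.degrees)
point = exposure.getWcs().skyToPixel(coord)
psf_image = exposure.getPsf().computeImage(point)

# .array is a plain numpy array, usable by external tools such as pysersic
psf_array = psf_image.array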


I think this is the relevant post from the previous topic: How to extract PSFEx PSF from a PVI/calexp outside of Science Pipelines? - #2 by gpdf

At the moment, while the image data products are all required to be FITS-4.00 conformant, and the image, variance, and flags/mask extensions should be straightforward to use in community software, we have not (yet) focused on making the substantial additional content in the files (all in binary-table extensions) usefully available via mechanisms other than the Rubin Science Pipelines. This is not something that will be fully addressed before DR1, I think, realistically.

We do have an Observatory-level requirement, in the long run, to maintain the accessibility of the data, and I personally (this is not formal policy, to be clear) think this means being able to document the format of the data in such a way that it is scientifically usable without the stack.


But I do believe that all the PSF data is actually contained in the extensions; there is no external lookup to another data source in the Butler repo, and I don’t think there’s any information that comes from the obs_lsst code package.


Thank you, Arun. This is what I had in mind!!

Thank you all for your responses and support, particularly that of Andrés A. Plazas Malagón from DM!

He found that changing one line from the PSFEx PSF extraction post linked above, from
psf_basis_image = comp[0].reshape(*size[0][::-1])
to
psf_basis_image = comp.reshape(*size[0][::-1])
yields a graphable PSF for deepCoadd FITS images. It sounds like you could also download the LSST stack locally and use its methods (I would have tried this too, but I had some difficulties installing the stack on my system; all is well).
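For anyone landing here later, a sketch of what the fixed extraction might look like end to end; the extension and column names below are guesses based on the linked post and may differ between pipeline versions, so check hdul.info() and the table columns in your own file first:

from astropy.io import fits

with fits.open("rsb_fits_images/test0.fits") as hdul:
    # Hypothetical extension/column names; verify against your file
    data = hdul["PsfexPsf"].data
    size = data["_size"]
    comp = data["_comp"]

    # The fix described above: reshape the full component array, not comp[0]
    psf_basis_image = comp.reshape(*size[0][::-1])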

Here is an example PSF, scaled harshly for the sake of viewing the structure:
[attached image: example PSF]
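Something along these lines can be used for the scaling (the arcsinh stretch here is just one assumed option for compressing the dynamic range):

import matplotlib.pyplot as plt
import numpy as np

# psf_basis_image (or psf_array) comes from one of the snippets above
plt.imshow(np.arcsinh(psf_basis_image / psf_basis_image.max()),
           origin="lower", cmap="gray")
plt.colorbar()
plt.show()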

It appears that there may be only one PSFEx PSF attached to each FITS image. This PSF works for my purposes.

After consulting with other members of Rubin and developers of the LSST Science Pipelines, we recommend using the methods provided by the LSST Science Pipelines to accurately obtain a PSF representation at specific points in the coadded images. Currently, there is no documentation available on the contents of the headers of the individual FITS files, and they do not fully comply with the FITS standard, as noted in another post (How to extract PSFEx PSF from a PVI/calexp outside of Science Pipelines? - #2 by gpdf). Integrating the information from these headers, with appropriate weighting and warping, to generate an accurate coadded PSF at a given point is non-trivial.

I discussed this with @sierrajanson and proposed that, for now, they work primarily within the cloud-based Rubin Science Platform (RSP) to generate the required PSF images. They can then transfer these images to their local computer or their institution’s GPU-equipped cluster for their machine learning project (Sierra mentioned their intention to train an ML algorithm using third-party software on GPUs).


Just to wrap up on this: based on the ensuing discussion and the fact that a solution has been marked, we’ve decided the demo isn’t needed at this time.