Hi, I would like to know how well it will be possible to tell a star apart from a galaxy. How will this depend on distance/galaxy redshift? And is it known how the performance of the star-galaxy classifier is expected to change during the first year of operation and in subsequent data releases? My apologies if this is already covered somewhere, but I couldn’t find it.
Hi @andreja.gomboc , thanks for this post!
I don’t yet have an answer for you, but I asked around a bit, and other Rubin staff advise that the performance will depend on seeing (and how well we quantify the PSF), SNR, object magnitude, and position on the sky (the last two through priors). It was recommended to take a look at this paper: Morphological Star-Galaxy Separation - NASA/ADS.
We’d also be able to assess this with Data Preview 0 (dp0-1.lsst.io), e.g., by comparing measured extendedness values with the truth table’s truth_type, which identifies objects as stars, galaxies, or supernovae. Since you’re a DP0 delegate I could help you with that (and for anyone reading this who is not yet a DP0 delegate, registration is open until Apr 30 2022: ls.st/clo6362).
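To illustrate the kind of comparison I mean, here is a minimal sketch in Python/pandas. It assumes you already have a catalog of measured objects cross-matched to the truth table; the column names (`extendedness`, `truth_type`) and the convention that truth_type 1 = galaxy and 2 = star follow the DC2 truth schema, but please treat them as assumptions and check the DP0 documentation. The tiny hand-made table below just stands in for a real matched catalog.

```python
import pandas as pd

# Hypothetical matched catalog: in practice this would come from joining
# DP0 object measurements with the truth table. Column names and the
# truth_type convention (1 = galaxy, 2 = star) are assumptions here.
matched = pd.DataFrame({
    "extendedness": [0.0, 1.0, 1.0, 0.0, 1.0, 0.0],
    "truth_type":   [2,   1,   1,   2,   2,   1],
})

# Classifier prediction: extendedness == 0 -> point source (star),
# extendedness == 1 -> extended source (galaxy).
pred_star = matched["extendedness"] == 0.0
true_star = matched["truth_type"] == 2

# Simple purity and completeness for the star class.
n_true_pos = (pred_star & true_star).sum()
purity = n_true_pos / pred_star.sum()            # fraction of predicted stars that are true stars
completeness = n_true_pos / true_star.sum()      # fraction of true stars recovered
print(f"star purity: {purity:.2f}, completeness: {completeness:.2f}")
```

In a real analysis you’d compute these metrics in bins of magnitude (and perhaps seeing), since that’s where the performance dependence discussed above would show up.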
Some more background for anyone reading this thread: I checked some of Rubin’s requirements documents (like the Science Requirements Document, ls.st/srd) for anything that would set at least the minimum performance levels to anticipate, but after a quick check I didn’t find any requirements related specifically to the performance of a S/G separation algorithm. And although the Data Management Science Pipelines Design Document (ls.st/ldm-151) does have Section 6.20 on S/G separation, it’s currently light on details.