Dear all,
I hope you will join us for the March LINCC Tech Talk session, which will take place next week, Thursday, March 14, at 10h PT = 13h ET = 14h CST = 18h CET on Zoom (https://ls.st/lincc-talks). We will hear from recent participants of the LINCC Frameworks Incubator Program: Grant Merz, on "DeepDISC photo-z: Integrating image-based photo-z estimation in RAIL", and Tom Wilson, on "lsdb-macauff: The Trials and Tribulations of Catalogue Cross-Matching in the Era of LSST".
DeepDISC photo-z: Integrating image-based photo-z estimation in RAIL
Taking a huge step into the new era of survey science, the upcoming Legacy Survey of Space and Time (LSST) will produce deep optical observations of billions of galaxies. These data will open the door for new and precise cosmological analyses. However, the lack of complete spectroscopic coverage of the LSST footprint requires the use of redshifts derived from the available photometry (photometric redshifts or photo-zs). Current frameworks for photometric redshifts typically involve template fitting or machine learning methods. These frameworks use catalog-level information as input features, i.e., magnitude measurements and errors for each object. To validate photometric redshift codes, the LSST DESC package Redshift Assessment Infrastructure Layers (RAIL) provides a modular structure for implementing and testing new methods. In this talk, I will describe the work done as part of a LINCC Frameworks Incubator to adapt a novel image-based photo-z estimator, DeepDISC, into RAIL. I will describe technical challenges (and their solutions) of combining DeepDISC and RAIL, and present preliminary results of applying our package, rail_deepdisc, to simulated DESC DC2 image data. I will compare our results with catalog-based methods and contextualize them with the LSST science requirements. Finally, I will describe the creation of a simulation framework developed as part of this Incubator and future applications for image-based photo-z estimators.
lsdb-macauff: The Trials and Tribulations of Catalogue Cross-Matching in the Era of LSST
The Vera C. Rubin Observatory’s LSST will, with its unparalleled completeness limit, suffer unprecedented levels of crowding. Conventional cross-matching techniques will paradoxically have both an unusably high false positive rate and an unusably high false negative rate. The next generation of photometric catalogues, beginning with LSST, will therefore need to rely on more advanced techniques for counterpart association. Through a Rubin in-kind contribution within LSST:UK, we have developed a new cross-match framework, macauff, capable of overcoming these high levels of confusion. It handles issues such as positional shifts caused by unresolved contaminant objects, and confirms true matches using the colours of the sources in addition to their astrometric information.
LSST is expected to eventually detect around 30 billion astrophysical objects, a dataset of unrivalled size. This further compounds the issues of counterpart assignment: in addition to the increased complexity of algorithms like cross-matching, there is a step increase in the complexity of the data handling itself. To overcome this part of the problem, the LINCC team have been developing lsdb, capable of handling queries to data on LSST scales. In this talk I will discuss our work incorporating macauff into the lsdb framework, extending the service to include more complex catalogue cross-matching algorithms, both as a static service and as user-callable software.
LINCC Tech Talks are held on the second Thursday of every month. Events are advertised on our web page and provided in calendar form, and the #lincc-tech-talks LSSTC Slack channel is always available for discussions before, during, and after the talks.