Title: LOR: Flagging Likely Photometric Redshift Outliers
Contributors: Adam Broussard (Rutgers University)
Co-signers: Eric Gawiser (Rutgers University)
0. Summary Statement
This letter of recommendation advocates adding a flag that identifies the probability that a given object’s photometric redshift is not trustworthy. There are several approaches to this in existing codes and the literature. A new approach, which we use as an example implementation here, is that of Broussard & Gawiser (2021): after running a standard photo-z code, a separate neural network classifier (NNC) is trained and used to estimate the relative confidence that each object has an accurate photometric redshift. When retaining ~1/3 of the original sample, this method greatly reduces the outlier fraction and standard deviation of the resulting photo-z sample with only a small increase in the normalized median absolute deviation, and it outperforms similar cuts made using the reported photo-z uncertainties.
1. Scientific Utility
This NNC selection method is designed with a tomographic large-scale structure analysis in mind, but the ability to select particularly accurate photo-z’s makes it useful for a large number of science applications. The NNC output confidence values could be calibrated to a statistical probability and used to generate a flag indicating that a particular galaxy has a high (e.g., >95%) confidence of an accurate photo-z fit, defined in Broussard & Gawiser (2021) as having |z_phot - z_spec|/(1 + z_spec) < 0.10.
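As an illustration, a minimal Python sketch of this accuracy criterion is given below; the function name and the default boundary value are ours for illustration, not part of any LSST schema or pipeline.

    import numpy as np

    def photoz_accurate(z_phot, z_spec, boundary=0.10):
        # Label a photo-z as accurate when |z_phot - z_spec| / (1 + z_spec) < boundary,
        # following the definition adopted in Broussard & Gawiser (2021).
        return np.abs(z_phot - z_spec) / (1.0 + z_spec) < boundary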
2. Outputs
This method would require the outputs of a separate initial photo-z code as training data, and it can flexibly accommodate any number of descriptive statistics (though the inclusion of at least the point redshift estimate and its Gaussian uncertainty is recommended). In turn, it would produce a confidence value between 0 and 1, with 0 representing strong confidence in an inaccurate fit and 1 representing strong confidence in an accurate fit.
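A hedged sketch of such a classifier in Keras follows; the layer sizes and the two input statistics (point estimate and Gaussian uncertainty) are illustrative assumptions, not the exact architecture of Broussard & Gawiser (2021).

    from tensorflow import keras

    n_features = 2  # hypothetical inputs: [z_phot, sigma_z]; more statistics can be appended

    # Binary classifier mapping descriptive statistics to a confidence in [0, 1].
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # 0 = confident inaccurate, 1 = confident accurate
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])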
3. Performance
The NNC selection method does not have any particular photo-z performance requirements. We find that the NNC is capable of producing an improved sub-sample regardless of photo-z fit quality, though it yields better overall results as the quality of the initial photo-z fits improves. A sample of at least ~50,000 spectroscopic redshifts for detected objects is needed for training.
4. Technical Aspects
Scalability - Will Meet
The NNC produces nearly identical results for training samples of more than 50,000 objects. Spectroscopic data sets in Hyper Suprime-Cam fields already meet this criterion, and deep spectroscopic coverage is also expected in the Deep Drilling Fields.
Inputs and Outputs - Will Meet
All inputs are catalog-level and the numerical confidence values can be directly added to the catalog or processed into a Boolean flag.
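For example, a minimal post-processing sketch is shown below; the column names (nnc_confidence, photoz_reliable) and the 0.95 threshold are hypothetical placeholders rather than adopted LSST quantities.

    import numpy as np
    import pandas as pd

    # Hypothetical catalog with the NNC output stored per object.
    catalog = pd.DataFrame({"nnc_confidence": np.array([0.12, 0.98, 0.67, 0.96])})

    # Either keep the numerical confidence column or derive a Boolean flag from a
    # threshold that would be calibrated against spectroscopic redshifts.
    threshold = 0.95
    catalog["photoz_reliable"] = catalog["nnc_confidence"] > threshold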
Storage Constraints - Will Meet
This method requires no additional storage beyond the photometric data and outputs of initial photo-z fits.
External Data Sets - Will Meet
This method has already been demonstrated using spec-z’s from various spectroscopic surveys compiled by the Hyper Suprime-Cam team. These or other spectroscopic surveys could be used to train the NNC when deployed for the LSST.
Estimator Training and Iterative Development - Will Meet
Broussard & Gawiser (2021) demonstrate the capability of the NNC in classifying galaxies using a boundary of |Δz|/(1 + z_spec) < 0.10. While we do not anticipate the need for major revisions to this training boundary, it may be useful to tune it prior to full deployment for the LSST.
Computational Processing Constraints - Will Meet
Due to the relatively small training sample size required and the ability of neural networks to train on mini-batches of the data over successive epochs, a large amount of memory is not required to train or apply the NNC.
Implementation Language - Will Meet
The NNC is implemented using the Keras software package, a machine-learning API that runs on top of TensorFlow, which provides the underlying neural network implementation. TensorFlow is written in a combination of Python and C++.
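As an illustration of this epoch-based training, a compact stand-in for the model sketched in Section 2 is trained below on placeholder data; the epoch count, batch size, and validation split are assumed values, and the random arrays merely stand in for real photo-z statistics and spec-z-derived labels.

    import numpy as np
    from tensorflow import keras

    # Placeholder training set: ~50,000 objects with two descriptive statistics each.
    X_train = np.random.rand(50000, 2).astype("float32")
    y_train = np.random.randint(0, 2, size=50000).astype("float32")

    # Compact stand-in for the model sketched in Section 2.
    model = keras.Sequential([
        keras.Input(shape=(2,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Mini-batch training over a modest number of epochs keeps the memory footprint
    # of each optimization step small.
    model.fit(X_train, y_train, epochs=20, batch_size=512, validation_split=0.2)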