Anticipated data products or tools quantifying completeness or detection efficiency vs. magnitude

Hello, data q&a watchers—

This is somewhat related to Alessandro’s question about survey depth: Depths for DR1 and more - Science / Data Q&A - Rubin Observatory LSST Community forum.

Are there any specific plans for producing completeness functions (detection efficiency vs. magnitude and perhaps other observables) for released catalogs, or tools for users to compute them if they impose their own selections (e.g., for a limited footprint, or an altered estimated-magnitude threshold)?

In Alessandro’s thread, @MelissaGraham pointed to the 2019 LSST paper for info about survey depths per band: LSST: From Science Drivers to Reference Design and Anticipated Data Products - NASA/ADS. There’s a detailed discussion of survey completeness there for PHAs/NEOs (see, e.g., Fig. 20), but even for those the estimates are of completeness brighter than a single fiducial (absolute) magnitude, not of completeness vs. (apparent) magnitude. And I didn’t see any specific plan for providing data products that compile completeness vs. magnitude (for various source types), or tools for users to compute them (e.g., for a specific footprint, or for a below-nominal threshold), though it’s a long paper and I might have missed this.

I understand that users may impose complicated cuts, so there can’t be a single universal set of completeness functions. But I’d like to know whether the data previews (DPs) and data releases (DRs) will include tables or tools to assist users in computing completeness functions for their own selections.

—Tom

Thank you, Tom, for this question.

I can say that yes, there are plans to produce detection efficiencies as a function of, e.g., magnitude and local surface brightness, and yes, they would be for the released catalogs. There will also be tools for synthetic source injection for edge science cases that aren’t covered by the supplied detection efficiencies.
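To make concrete what "detection efficiency as a function of magnitude" from injection means in practice, here is a minimal sketch of how one could compute it by matching an injected catalog to the resulting detection catalog. The column names and inputs are placeholders for illustration only, not the actual pipeline or data product interface:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord


def completeness_vs_mag(injected, detected, mag_bins, match_radius=0.5 * u.arcsec):
    """Recovered fraction of injected sources per (apparent) magnitude bin.

    injected : dict of numpy arrays with 'ra', 'dec' (deg) and 'mag' for the
               synthetic sources injected into the images.
    detected : dict of numpy arrays with 'ra', 'dec' (deg) for the sources the
               pipeline actually detected (after any user-imposed cuts).
    mag_bins : 1D array of magnitude bin edges.
    """
    inj = SkyCoord(injected["ra"] * u.deg, injected["dec"] * u.deg)
    det = SkyCoord(detected["ra"] * u.deg, detected["dec"] * u.deg)

    # An injected source counts as recovered if a detection lies within
    # the match radius of its injected position.
    _, sep, _ = inj.match_to_catalog_sky(det)
    recovered = sep < match_radius

    # Detection efficiency = recovered / injected, per magnitude bin.
    n_inj, _ = np.histogram(injected["mag"], bins=mag_bins)
    n_rec, _ = np.histogram(injected["mag"][recovered], bins=mag_bins)
    return np.where(n_inj > 0, n_rec / np.maximum(n_inj, 1), np.nan)
```

The point of the planned injection tools is that a calculation like this can be repeated after whatever footprint or threshold cuts a user applies to the detected catalog.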

But you won’t find the plan written up anywhere yet, because the Data Management Technical Note describing these data products is still in a draft state (it will appear at dmtn-231.lsst.io). It’s on my plate to finish it in the next six months or so.

There’s also a plan to make a tutorial for the DP0.2 data set that demonstrates use of the synthetic source injection tool, once it is complete and integrated into the LSST Science Pipelines. I think the timeline for that is <6 months.
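And just to illustrate one way an efficiency-vs-magnitude table might get used downstream (this is purely illustrative, with made-up numbers, and does not reflect the eventual design in the Technical Note), a user could fit a common sigmoid parameterization to pull out a 50% completeness magnitude for their own selection:

```python
import numpy as np
from scipy.optimize import curve_fit


def sigmoid(m, m50, width):
    # A common parameterization of detection efficiency vs. magnitude:
    # ~1 at the bright end, 0.5 at m50, rolling off over `width` mag.
    return 1.0 / (1.0 + np.exp((m - m50) / width))


# In practice these would be the bin centers and efficiencies measured
# from injection (or taken from a released efficiency table); here they
# are synthetic values for illustration only.
mag_centers = np.arange(20.25, 26.25, 0.5)
efficiency = sigmoid(mag_centers, 24.5, 0.3)

(m50, width), _ = curve_fit(sigmoid, mag_centers, efficiency, p0=[24.0, 0.5])
print(f"50% completeness at m = {m50:.2f} (rollover width {width:.2f} mag)")
```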

In the meantime, if you have any comments or thoughts about the detection efficiencies data product, or a particular science case for the tutorial, I’d be interested to hear.


Thanks, Melissa, just what I wanted to know for now. I’ll think about tutorial examples.
