SMWLV Observing Strategy Hack Day 1, Thurs Feb 18th, 10:00am - 3:00pm EST

Hi Leo, yes, I agree with your points. In the absence of artificial star tests, a potentially useful rule of thumb is that in a crowding-limited case, completeness typically drops below 100% when the crowding error reaches 0.1 mag. So in talking with @willclarkson yesterday, who mentioned that their observations were constrained to 0.05 mag errors (as reported by the pipeline) and >90% completeness, I wondered whether the actual crowding error is closer to 0.1 mag than 0.05 mag, which would make a difference of ~1.5 mag in depth.
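For concreteness, here is roughly how I would quantify that depth difference from an error-vs-magnitude curve (a binned median of the crowding or photometric error versus magnitude, taken either from the data or from the MAF crowding model). Just a numpy sketch with placeholder variable names, not a claim about how MAF does it internally:

```python
import numpy as np

def depth_at_error(bin_mags, bin_errs, target_err):
    """Magnitude at which a (roughly monotonic) binned error-vs-magnitude
    curve reaches target_err, by linear interpolation."""
    bin_mags = np.asarray(bin_mags, dtype=float)
    bin_errs = np.asarray(bin_errs, dtype=float)
    order = np.argsort(bin_errs)          # np.interp needs increasing x
    return np.interp(target_err, bin_errs[order], bin_mags[order])

# e.g. with bin_mags = bin centres, bin_errs = median error per bin:
# m_005 = depth_at_error(bin_mags, bin_errs, 0.05)
# m_010 = depth_at_error(bin_mags, bin_errs, 0.10)
# print(m_010 - m_005)   # is this the ~1.5 mag difference discussed above?
```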

Hi @knutago and @lgirardi - the uncertainties in our DECam data are estimates of the uncertainty on the mean brightness over all exposures in a given filter for each object (the number of exposures varies from a few to >30 for some objects), not the uncertainties output by daophot. So in that sense they are empirical estimates of the random uncertainty on the mean brightness (though not yet from artificial star tests). If indeed we are currently underestimating our uncertainty by a factor of two, then yes, that could bring our estimated depth up as you suggest. I doubt we’re off by that much, but artificial star tests would be a way to validate the validation set.
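To make the "factor of two" question concrete, this is the kind of check I have in mind: compare the formal error on the mean (propagated from the per-exposure errors) against the empirical scatter of the repeat measurements for the same object. A minimal sketch assuming per-exposure magnitudes and pipeline errors for one object in one filter; the actual BDBS/DECam pipeline may weight or clip differently:

```python
import numpy as np

def mean_mag_with_errors(mags, mag_errs):
    """Inverse-variance-weighted mean magnitude over the exposures of one
    object in one filter, plus two uncertainty estimates:
      formal  - propagated uncertainty on the weighted mean
      scatter - empirical standard deviation of the repeats / sqrt(N)
    If scatter is ~2x formal across many objects, the per-exposure errors
    are underestimated by roughly a factor of two."""
    mags = np.asarray(mags, dtype=float)
    w = 1.0 / np.asarray(mag_errs, dtype=float) ** 2
    mean = np.sum(w * mags) / np.sum(w)
    formal = np.sqrt(1.0 / np.sum(w))
    scatter = np.std(mags, ddof=1) / np.sqrt(len(mags))
    return mean, formal, scatter
```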

I can ask Christian Johnson of BDBS on what sort of timescale we could expect artificial star tests for BDBS (we’d probably want to run these on the full pipeline used in that project). DECaPS may already have published artificial star tests for their photometry.

In my discussion with @knutago yesterday (I forget over which channel), I agreed that it would be useful for me to plot a BDBS depth map showing the apparent magnitude at which our photometric uncertainty estimate reaches 0.1 mag, in line with Knut’s rule of thumb. I will try to do that in the coming week.
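For the record, here is roughly what I have in mind for that map: for each HEALPix pixel, take the magnitude at which the binned median uncertainty first exceeds 0.1 mag. A sketch only - the column names, nside, and binning are placeholders, and the real BDBS catalogue handling will differ:

```python
import numpy as np
import healpy as hp

def depth_at_error_threshold(mags, mag_errs, threshold=0.1, bin_width=0.2):
    """Apparent magnitude at which the binned median uncertainty first
    exceeds `threshold` (0.1 mag per the rule of thumb above)."""
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    centres = 0.5 * (bins[:-1] + bins[1:])
    med_err = np.array([np.median(mag_errs[(mags >= lo) & (mags < hi)])
                        if np.any((mags >= lo) & (mags < hi)) else np.nan
                        for lo, hi in zip(bins[:-1], bins[1:])])
    exceed = np.where(med_err > threshold)[0]
    return centres[exceed[0]] if exceed.size else np.nan

def depth_map(ra, dec, mags, mag_errs, nside=64, threshold=0.1):
    """HEALPix map of the depth defined above, one value per pixel."""
    pix = hp.ang2pix(nside, ra, dec, lonlat=True)
    hmap = np.full(hp.nside2npix(nside), hp.UNSEEN)
    for p in np.unique(pix):
        sel = pix == p
        hmap[p] = depth_at_error_threshold(mags[sel], mag_errs[sel],
                                           threshold=threshold)
    return hmap

# hmap = depth_map(ra, dec, i_mag, i_err, nside=64, threshold=0.1)
# hp.mollview(hmap, unit='mag', title='BDBS depth at 0.1 mag uncertainty')
```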

Edit: Christian’s paper on BDBS has much more information on the calibration of the BDBS photometry and its uncertainty. Here’s the ADS link to Johnson et al. 2020.

One way to make progress on a shorter timescale might be to compare the MAF prediction with the DECam depth for a fairly generous uncertainty limit - or even just with whether objects are detected at all. For example, DECaPS is detecting stars at b=0 down to r~22-23 (figure 10 of Schlafly et al. 2017), judging by where their CMDs drop off. I wonder whether Leo’s simulations and/or MAF predict the faint end? (Or is that still critically dependent on the selection function in the photometry?)
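As a quick-and-dirty version of the "detected at all" comparison, one could estimate the CMD drop-off directly from the magnitude histogram (star counts rise roughly until incompleteness sets in, so the peak of the histogram is a crude proxy for the detection limit) and compare that, per sightline, with the MAF-predicted depth. A sketch, not a proper completeness measurement:

```python
import numpy as np

def cmd_dropoff_mag(mags, bin_width=0.1):
    """Crude estimate of where a catalogue 'runs out' of stars: the
    magnitude at which the number counts peak before turning over.
    This is just the 'where the CMDs drop off' eyeball estimate in code,
    not a substitute for artificial star tests."""
    mags = np.asarray(mags, dtype=float)
    bins = np.arange(np.nanmin(mags), np.nanmax(mags) + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres[np.argmax(counts)]

# r_limit = cmd_dropoff_mag(decaps_r_mag)   # compare with the MAF depth map
```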

[Edit - I retract this comment, I think this is what is shown in @lgirardi’s community post linked above: LSST crowded field static science discussion]

Also sharing this comment from @calamida