SMWLV Observing Strategy Hack Day 1, Thurs Feb 18th, 10:00am - 3:00pm EST

FYI @dallora, @pmmcgehee, @lgirardi

@willclarkson if it would be helpful for you, I could make up a similar kind of notebook to https://github.com/lsst-pst/survey_strategy/blob/master/fbs_1.7/Demo_SSOmetrics.ipynb if there are some specific summary metrics which you would be interested in.

If you are curious “what summary metrics are even available?” I would encourage you to look through the stack of metrics illustrated for the baseline_nexp2_v1.7_10yrs run at
http://astro-lsst-01.astro.washington.edu:8081/allMetricResults?runId=391 (these mostly include basic metrics about the survey characteristics … depth, airmass, etc.)
and
http://astro-lsst-01.astro.washington.edu:8081/allMetricResults?runId=393 (these are higher level science metrics)

If none of these reflect things you’re interested in, that would be great to know and to remedy.

Thanks, @ljones, this is very helpful. We’ll make sure hack day participants know about this.

Looking over the set of summary metrics for baseline 1.7, I didn’t see crowding_to_precision included as a summary metric in either the basic metrics or the higher-level science metric summary, but it might not have been in a category I would recognize… is crowding_to_precision among the metrics summarized in that pair of links?

Do you mean the results of the CrowdingMetric? (which calculates the magnitude at which the crowding error equals the photometric precision, if I’ve got that right)

We’re not calculating that explicitly, but we do use the underlying crowding formula to calculate the number of stars down to X precision (with and without crowding).
http://astro-lsst-01.astro.washington.edu:8081/summaryStats?runId=393#Milky%20Way
which are calculated using the NStarsMetric.

I will note that we have had numerous complaints that the crowding calculation must be inaccurate, because the resulting stellar counts seem too low to people (the input stellar luminosity function seems about right and matches model expectations … but the crowding effect seems to throw the count results off*). I don’t believe we’re doing anything incorrect when applying the crowding effect cutoff, so it would be good to hear what you think.

Now, that said, if you would still like the crowding error metric calculation results - we can run the crowding metric (per bandpass? all bandpasses?) over the sky, but how would you like to consolidate that from maps into numbers you can compare across different simulations?
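
For reference, a minimal sketch of what running it over the sky could look like with the v1.7-era sims_maf API (the database path, nside, and the 0.05 mag error threshold are just examples):

import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.metricBundles as metricBundles

# Example opsim run; any of the v1.7 databases would do
opsdb = db.OpsimDatabase('baseline_nexp2_v1.7_10yrs.db')
outDir = 'crowding_test'
resultsDb = db.ResultsDb(outDir=outDir)

# Magnitude at which the crowding error reaches 0.05 mag, in r band, over the sky
metric = metrics.CrowdingM5Metric(crowding_error=0.05, filtername='r')
slicer = slicers.HealpixSlicer(nside=64)
bundle = metricBundles.MetricBundle(metric, slicer, 'filter = "r"')
group = metricBundles.MetricBundleGroup({'crowding_r': bundle}, opsdb,
                                        outDir=outDir, resultsDb=resultsDb)
group.runAll()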

*The number of stars we estimate to measure with crowding ‘on’ comes out lower than others report for surveys such as DECaPS. It’s not clear to me whether part of that is because DECaPS does deblending differently than was assumed when the crowding metric was built, or whether it’s something else.

Hi @ljones - apologies, I think I confused the plot title with the metric name there… I meant the crowdingMetric.crowdingM5metric. For one of our bulge metrics and investigations validating this metric with data, we have been running this separately for each filter using crowding_error=0.05 mag.

In terms of numbers to compare across different simulations: some of us have been working to implement the metrics and figure of merit specified in the Gonzalez et al. (2018) whitepaper so that it can be run on the opsims. Validation of crowdingM5Metric against real data is important to that figure of merit - that’s also ongoing.

That said, it’s likely that something simpler - say, summary statistics for the crowdingM5metric evaluated over a few spatial regions of interest (like the inner plane, the Magellanic Clouds, and an “outer” plane region) - might be more broadly useful. It’s been very easy for us to get stuck in the complications of what the apparent magnitude of a given tracer population might actually be: a simple depth figure of merit though might be a useful thing for us to specify during our hack day tomorrow.
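
To make that concrete, here is a rough sketch of the kind of regional summary I have in mind, starting from a healpix map of crowdingM5metric values (e.g. bundle.metricValues from a run like the one sketched above); the region boundaries below are placeholders, not agreed definitions:

import numpy as np
import healpy as hp

m5map = bundle.metricValues  # masked healpix map (ring ordering, RA/Dec) from the MAF run
nside = 64
npix = hp.nside2npix(nside)

# Galactic coordinates of each healpix pixel centre
theta, phi = hp.pix2ang(nside, np.arange(npix))
theta_g, phi_g = hp.Rotator(coord=['C', 'G'])(theta, phi)  # equatorial -> galactic
l = np.degrees(phi_g) % 360.0
b = 90.0 - np.degrees(theta_g)

# Placeholder regions: inner plane, outer plane, and a 5 deg disc on the LMC
inner = (np.abs(b) < 5) & ((l < 20) | (l > 340))
outer = (np.abs(b) < 5) & (np.abs(l - 180.0) < 60)
lmc = np.zeros(npix, dtype=bool)
lmc[hp.query_disc(nside, hp.ang2vec(80.9, -69.8, lonlat=True), np.radians(5.0))] = True

for name, sel in [('inner plane', inner), ('outer plane', outer), ('LMC', lmc)]:
    print(name, np.ma.median(m5map[sel]))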

Hi all - here are some useful resources for the Feb 18th hack day. All three have the zoom link and password for the meeting on the first page.

Thanks, all, looking forward to seeing you at either or both of the sessions!

Hi @ljones, are these complaints about the inaccurate crowding calculation documented somewhere? I would like to check whether there is some obvious trick (e.g. a change in the crowding formula) to approach the real numbers.

Hi all - we are in “lunch break” and will resume the hack day at 12:00 Noon EST. I have left the zoom meeting active so that folks can continue to interact over “lunch” if they wish.

I don’t think they’re officially documented anywhere; it was more that some people wrote an email … I can try to dig back into my email history, but the emails weren’t even sent directly to me, so it’s more ‘hearsay’.

Ok - so in trying to look for any more details, mostly I came up with this thread (in which you feature prominently :) )


and what I note is that it doesn’t say the MAF limiting magnitude results or star counts are wrong … just that dust is maybe fairly uncertain.

However, I also think it was maybe @willclarkson who brought the problem up, and it was perhaps in comparison not with DECaPS, but with his BDBS survey.

Thanks @ljones - this was a point of discussion at the morning session of the hack day. We find that BDBS (the DECam survey led by Mike Rich) achieves completeness 2-4 magnitudes fainter than predicted by sims_maf in inner bulge regions (also fainter than @lgirardi’s maps). We will be doing a bit more digging into this issue just to clarify what we mean by “completeness” - it might be that the issue is more one of interpreting the numbers MAF outputs in terms of the depths achieved with seeing-limited data.

Post-hack-day-summary: It looks like this first hack day was a success, with some good discussion and definite actions / specifications arising from the work. Thanks very much to all the participants, to @yoachim for MAF support during the meeting and to @ljones for input and insights. We are planning a second cadence hack day: while we have penciled in Thursday March 4th, the actual date will be announced a little closer to the time.

We intend that people will be able to continue the work they have started using the online resources they have created during the hack day. Here are some key links from the first hack day (links will be updated as feedback from participants comes in):

Here are some specifications for figures of merit that are in progress, based on interactions at today’s hack day. I have tagged who I think the lead authors are in each case, any corrections are welcome. If I have missed your specification document, please do add it here!

There was also useful discussion in the afternoon session on strategies and metrics for Galactic bulge science.

Much of the morning session focused on the key issue of validating the crowded field estimates from LSST. Slide 14 in the welcome slides lists actions from that discussion.

I mentioned this on the slack channel, but please do not use the saturation stacker in the jupyter notebook linked in the google doc about the saturation limit above. We have incorporated this stacker into MAF itself, and it has newer/updated parameters that will give correct answers (the values returned by the version of the stacker in the doc linked above will not be accurate).
Example of the updated values: https://github.com/lsst-pst/survey_strategy/blob/master/fbs_1.7/Saturation%20Limits.ipynb

Note that if the criterion for ‘bright’ is only ~15 mag, I think we have visits that do not saturate at this limit even in the standard survey, without short exposures.

  • Do you care if you have a bright saturation value, but it’s in bad seeing? Is that ‘better’ than a bright saturation limit in a short exposure, but with better seeing? (there is considerable overlap).
  • Are you more interested in evaluating the brightest saturation limit over the sky (i.e. there may only be 1 visit at that limit) or something like the 5th percentile value (i.e. there will be visits which may have much brighter saturation magnitudes, but at least some number of visits in that filter will have that value or brighter)?
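
On the second question, a small numpy sketch of the two summary options (the per-visit saturation magnitudes below are fake numbers standing in for the stacker output at one sky point, in one filter):

import numpy as np

# Fake per-visit saturation magnitudes; in practice these would come from the
# saturation stacker. Brighter saturation limits are numerically smaller mags.
rng = np.random.default_rng(42)
sat_mags = rng.normal(16.0, 0.5, size=200)

brightest = sat_mags.min()          # best single visit (possibly only 1 at this limit)
pct5 = np.percentile(sat_mags, 5)   # ~5% of visits saturate at this value or brighter
print(brightest, pct5)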

I also noted a comment in one of the docs asking about Rubin magnitudes for various spectral types of stars … you can definitely calculate this for any SED using sims_photUtils, but a few types are already available via a utility in lsst.sims.utils:

The StellarMags utility can help.

import lsst.sims.utils as utils
spectraltype, rmag = 'K', 20.0  # example inputs; see the class for the available types
mags = utils.StellarMags(spectraltype, rmag)

Returns a dictionary (mags) with the magnitude in each bandpass (u, g, r, i, z, y). (see the class itself for more information on what types are available/SEDs used).

(this utility may move in the future, as it would probably be more at home in sims_photUtils, but we’ll provide warning).

Thanks, @ljones - I will correct the notebook linked there with [updated:] the version of the class now found in MAF (since I don’t know on what timescale MAF will be updated on Datalab or Sciserver). I doubt I’ll get to that for another few hours. [Edit: our messages crossed - here I’m referring to the saturation limit.]

Thanks for the questions about the saturation level - we’ll give it some thought and update the specification. When I wrote that example spec I actually took the output from the older notebook and used that as the upper limit! We definitely need a more scientifically justified bright limit, which I would guess would be brighter than 15. I think at the moment we’re using the median saturation level within the block of “short” exposures, but that too is something we should probably consider more scientifically.
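
For that last point, a one-function sketch of what I mean by the median saturation level within the block of “short” exposures (the 15 s cut and the column sources are assumptions on my part):

import numpy as np

def short_exposure_saturation(exp_time, sat_mag, short_max=15.0):
    # exp_time: per-visit exposure times (e.g. the opsim visitExposureTime column)
    # sat_mag: per-visit saturation magnitudes (e.g. from the saturation stacker)
    # short_max: assumed threshold separating "short" visits from standard ones
    exp_time = np.asarray(exp_time)
    sat_mag = np.asarray(sat_mag)
    return np.median(sat_mag[exp_time < short_max])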

Update: the notebooks have been corrected (pending approval of a pull request to SMWLV-metrics: here’s the updated notebook in a github repo I do control, which I think is the one linked from our unofficial walkthrough). They now use the Saturation Stacker defined in sims_maf: once both Sciserver and Datalab have updated their sims_maf to the latest github version, I’ll update the notebooks again so that they just use stackers.SaturationStacker.

Thanks @ljones. For people who want to dig into this, bolometric corrections in LSST filters are available in the YBC database https://gitlab.com/cycyustc/ybc_tables , for very extensive libraries of synthetic spectra (Kurucz ATLAS9, PHOENIX BT-Settl, Aringer’s cool giants, Koester’s WDs, Wolf-Rayets, …). You just need to assume logL, Teff, and a metallicity to convert them to absolute magnitudes and colours. The library is actually too extensive to be used directly in MAF, but a shorter version could be prepared if people want it.
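
The conversion itself is just the usual bolometric relation, M_band = Mbol_sun - 2.5 logL - BC_band; a minimal sketch (the BC value here is a placeholder, not taken from the tables):

MBOL_SUN = 4.74  # IAU nominal solar bolometric magnitude

def abs_mag(logL, bc_band):
    # M_bol = MBOL_SUN - 2.5*log10(L/Lsun); M_band = M_bol - BC_band
    return MBOL_SUN - 2.5 * logL - bc_band

print(abs_mag(logL=2.0, bc_band=-0.3))  # e.g. a log(L/Lsun)=2 giant with BC=-0.3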

Hi @willclarkson @mdallora @knutago, regarding the issues with the crowding limit maps, and the previous discussion about them mentioned by @ljones: I fear that people might be referring to different quantities in the definition of the crowding limit. We all agree that it is the “magnitude at which photometric errors due to crowding become larger than 0.1 mag”. But these errors should be due to crowding, not to shot noise. So, when we look at empirical maps (from the DECaPS and BDBS surveys), might we be looking at the point where the 0.1 mag errors are due to shot noise?
My understanding is that the errors due to crowding are better assessed with artificial star tests, which usually indicate errors larger than those provided by photometry pipelines. That is, could it be that we modellers and the photometrists are using different errors to define the crowding limit?
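
To make the shot-noise side concrete: the standard LSST random photometric error model (Ivezić et al. 2008, eq. 5) gives the magnitude at which shot noise alone reaches 0.1 mag; a minimal sketch with an illustrative single-visit depth:

import numpy as np

def shot_noise_err(m, m5, gamma=0.039):
    # Random photometric error model of Ivezic et al. (2008), eq. 5
    x = 10.0 ** (0.4 * (m - m5))
    return np.sqrt((0.04 - gamma) * x + gamma * x ** 2)

mags = np.linspace(18.0, 26.0, 801)
m5 = 24.5  # illustrative 5-sigma point-source depth
m_01 = mags[np.argmin(np.abs(shot_noise_err(mags, m5) - 0.1))]
print(m_01)  # magnitude where shot noise alone reaches 0.1 mag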

Hi Leo, yes, I agree with your points. In the absence of artificial star tests, a potentially useful rule of thumb is that in a crowding-limited case, completeness typically drops below 100% when the crowding error reaches 0.1 mag. So in talking with @willclarkson yesterday, who mentioned that their observations were constrained to 0.05 mag errors (as reported by the pipeline) and >90% completeness, I wondered whether the actual crowding error is closer to 0.1 mag than 0.05 mag, which would make a difference of ~1.5 mag in depth.

Hi @knutago and @lgirardi - the uncertainties in our DECam data are estimates of the uncertainty on the mean brightness over all exposures in a given filter for each object (the number of exposures varies from a few to >30 for some objects), not the uncertainties output by daophot. So in that sense they are empirical estimates of the random uncertainty on the mean brightness (though not yet from artificial star tests). If we are indeed underestimating our uncertainty by a factor of two, then yes, that could bring our estimated depth up as you suggest. I doubt we’re off by that much, but artificial star tests would be a way to validate the validation set.

I can ask Christian Johnson of BDBS on what sort of timescale we could expect artificial star tests for BDBS (we’d probably want to do this for the full pipeline used in that project). DECaPS may already have published artificial star tests for their photometry.

In my discussion with @knutago yesterday (I forget over which channel), I agreed that it would be useful to plot a BDBS depth map showing the apparent magnitude at which our photometric uncertainty estimate reaches 0.1 mag, in line with Knut’s rule of thumb. I will try to do that in the coming week.
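
(A sketch of how I might build that map, with fake stand-in arrays for the catalogue columns:)

import numpy as np
import healpy as hp

rng = np.random.default_rng(1)
n = 100000
ra = rng.uniform(260.0, 280.0, n)    # fake positions and photometry, for illustration
dec = rng.uniform(-35.0, -25.0, n)
mag = rng.uniform(16.0, 24.0, n)
magerr = 0.01 * 10.0 ** (0.3 * (mag - 20.0))  # toy error growth with magnitude

nside = 64
pix = hp.ang2pix(nside, ra, dec, lonlat=True)
good = magerr <= 0.1  # Knut's rule-of-thumb error cut

# Faintest magnitude per pixel among stars passing the cut
depth = np.full(hp.nside2npix(nside), -np.inf)
np.maximum.at(depth, pix[good], mag[good])
depth[~np.isfinite(depth)] = hp.UNSEEN  # pixels with no qualifying stars
hp.mollview(depth, title='Depth at 0.1 mag uncertainty')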

Edit: Christian’s paper on BDBS has much more information on the calibration of the BDBS photometry and its uncertainty. Here’s the ADS link to Johnson et al. 2020.

One way to make progress on a shorter timescale might be to compare the MAF prediction with the DECam depth for a fairly generous uncertainty limit - or even just with whether objects are detected at all. For example, DECaPS detects stars at b=0 down to r~22-23 (figure 10 of Schlafly et al. 2017), just from a look at where their CMDs drop off. I wonder if Leo’s simulations and/or MAF make a prediction for this faint end? (Or is that still critically dependent on the selection function in the photometry?)

[Edit - I retract this comment, I think this is what is shown in @lgirardi’s community post linked above: LSST crowded field static science discussion]

Also sharing this comment from @calamida