Hi all - This is to let you know that we will be holding the first SMWLV Observing Strategy Hack Day this coming Thursday, Feb 18th, from 10:00am to 3:00pm EST. This will consist mostly of time blocks for groups working on an SMWLV-relevant Cadence Note to work together remotely on their MAF-based investigations. This first hack day will be somewhat informal: this community thread will be its internet home. This Google Doc contains the schedule and the Zoom link & password.
The main objective for the Feb 18th hack day is to ensure that metrics and figures of merit for the planned cadence notes have been specified and the specifications posted in a publicly shared location, ideally at a sufficient level of detail that you can start to implement your investigations in MAF.
Registration is not required, but if you are planning on joining us for the Hack Day, please reply to this post or email Will Clarkson so that we have an idea of how many to expect.
If you are planning on working towards a Cadence Note at this Hack Day, we recommend you (re-)familiarize yourself with at least the following resources:
While the Feb 18th hack day will focus on specifications of metrics, some groups may already be at the stage of coding their investigations in MAF. If you fit this description, we recommend you request a NOIRLab Data Lab account, as the Data Lab will be the preferred venue for most investigators to run MAF remotely. Information on the Data Lab, including the link to request an account, can be found at this Community post.
The announcement of this was supposed to go out last Friday, but I think there was some hiccup with the mailing lists.
Here’s an update to the “cheat sheet” - https://github.com/lsst-pst/survey_strategy/blob/master/fbs_1.7/SummaryInfo.ipynb
Yes, it’s a Jupyter notebook … it includes the newer 1.7 runs and drops the references to families which aren’t relevant or are superseded now.
But the other cool thing is that the tool used to build that notebook is also available to you for your own investigations - check out https://github.com/lsst-pst/survey_strategy/blob/master/fbs_1.7/Demo_FamilyInfo.ipynb as an example.
(Note that if you git clone the lsst-pst/survey_strategy repo, you get both the run_infos.py and metric_infos.py classes AND the csv file containing the summary statistic information for all of the 1.5, 1.6, and 1.7 runs.)
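If you'd rather skip the helper classes and just read the csv directly, something like the sketch below works (the file name, run names, and column label here are placeholders, not the actual ones in the repo - check there, or use run_infos.py / metric_infos.py, which wrap this more conveniently):

```python
import pandas as pd

# Placeholder file name -- see the repo for the actual summary csv.
summary = pd.read_csv('summary_stats.csv', index_col=0)   # rows = runs, columns = summary stats

runs_of_interest = ['baseline_nexp2_v1.7_10yrs',
                    'rolling_scale0.9_nslice2_v1.7_10yrs']   # illustrative run names
stat = 'Median CoaddM5 i band HealpixSlicer'                 # illustrative column label
print(summary.loc[runs_of_interest, stat])
```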
Thanks, @ljones, this is very helpful. We’ll make sure hack day participants know about this.
Looking over the set of summary metrics for baseline 1.7, I didn’t see crowding_to_precision included as a summary metric in either the basic metrics or the higher level science metric summary, but it might not have been in a category I would recognize… is crowding_to_precision among the metrics summarized in that second pair of links?
Do you mean the results of this metric CrowdingMetric ? (which calculates the magnitude at which crowding error equals photometric precision, if I’ve got that right)
I will note that we have had numerous complaints that the crowding calculation must be inaccurate because the resulting stellar counts seem too low to people. (the input stellar luminosity function seems about right and matches with model expectations … but the crowding effect seems to throw the count results off*). I don’t believe we’re doing anything incorrect when applying the crowding effect cutoff, so it would be good to hear what you think.
Now, that said, if you would still like the crowding error metric calculation results - we can run the crowding metric (per bandpass? all bandpasses?) over the sky, but how would you like to consolidate that from maps into numbers you can compare across different simulations?
*The number of stars we estimate we can measure with crowding ‘on’ comes out lower than others report in surveys such as DECaPS. It’s not clear to me whether part of that is because DECaPS does deblending differently than was assumed when the crowding metric was built, or whether it’s something else.
Hi @ljones - apologies, I think I confused the plot title with the metric name there… I meant the CrowdingM5Metric (in crowdingMetric.py). For one of our bulge metrics, and for investigations validating this metric against data, we have been running this separately for each filter using crowding_error=0.05 mag.
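In MAF terms, that per-filter calculation looks roughly like the sketch below (a sketch only - the exact keyword names such as crowding_error and filtername, and the opsim database file, should be checked against the MAF version you are running):

```python
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.metricBundles as metricBundles

opsdb = db.OpsimDatabase('baseline_nexp2_v1.7_10yrs.db')   # any v1.7 opsim run

bundles = {}
for filt in 'ugrizy':
    # Magnitude at which the crowding error reaches 0.05 mag, per HEALPix point
    metric = metrics.CrowdingM5Metric(crowding_error=0.05, filtername=filt)
    slicer = slicers.HealpixSlicer(nside=64)
    bundles[filt] = metricBundles.MetricBundle(
        metric, slicer, "filter = '%s'" % filt,
        summaryMetrics=[metrics.MedianMetric()])   # one number per map, for run-to-run comparison

group = metricBundles.MetricBundleGroup(bundles, opsdb, outDir='crowding_m5')
group.runAll()
```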
In terms of numbers to compare across different simulations: some of us have been working to implement the metrics and figure of merit specified in the Gonzalez et al. (2018) whitepaper so that they can be run on the opsims. Validation of the CrowdingM5Metric against real data is important to that figure of merit - that’s also ongoing.
That said, it’s likely that something simpler - say, summary statistics for the CrowdingM5Metric evaluated over a few spatial regions of interest (like the inner plane, the Magellanic Clouds, and an “outer” plane region) - might be more broadly useful. It’s been very easy for us to get stuck in the complications of what the apparent magnitude of a given tracer population might actually be; a simple depth figure of merit, though, might be a useful thing for us to specify during our hack day tomorrow.
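For example, one crude way to turn a HEALPix metric map into those regional numbers would be something like the following (the box boundaries below are purely illustrative, not agreed-upon region definitions, and this assumes the map is in the usual RA/Dec pixelization with masked pixels set to NaN):

```python
import numpy as np
import healpy as hp
import astropy.units as u
from astropy.coordinates import SkyCoord

def region_median(metric_map, nside, l_range, b_range):
    """Median of a HEALPix metric map (RA/Dec pixelization) within a Galactic lon/lat box."""
    ipix = np.arange(hp.nside2npix(nside))
    theta, phi = hp.pix2ang(nside, ipix)
    coords = SkyCoord(ra=np.degrees(phi) * u.deg,
                      dec=(90.0 - np.degrees(theta)) * u.deg, frame='icrs').galactic
    l = coords.l.wrap_at(180.0 * u.deg).deg
    b = coords.b.deg
    sel = (l > l_range[0]) & (l < l_range[1]) & (b > b_range[0]) & (b < b_range[1])
    good = sel & np.isfinite(metric_map)
    return np.median(metric_map[good])

# Illustrative boxes only:
# inner_plane = region_median(crowd_map, 64, (-10., 10.), (-5., 5.))
# outer_plane = region_median(crowd_map, 64, (60., 80.), (-5., 5.))
```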
Hi @ljones, are these complaints about the crowding calculation being inaccurate documented somewhere? I would like to check whether there is some obvious trick (e.g. a change in the crowding formula) to bring the calculation closer to the real numbers.
Hi all - we are in “lunch break” and will resume the hack day at 12:00 Noon EST. I have left the zoom meeting active so that folks can continue to interact over “lunch” if they wish.
I don’t think they’re officially documented anywhere; it was more that some people wrote an email … I can try to dig back into my email history, but the emails weren’t even sent directly to me, so it’s more ‘hearsay’.
Ok - so in trying to look for any more details, mostly I came up with this thread (which you feature prominently in:) )
and what I note is that it doesn’t say the MAF limiting magnitude results or star counts are wrong … just that dust is maybe fairly uncertain.
However, I also think it was maybe @willclarkson who brought the problem up, and it was perhaps in comparison not with DECaPS, but with his BDBS survey.
Thanks @ljones - this was a point of discussion at the morning session of the hack day. We find that BDBS (the DECam survey led by Mike Rich) achieves completeness on the order of 2-4 magnitudes fainter than predicted by sims_maf in inner bulge regions (also fainter than @lgirardi’s maps). We will be doing a bit more digging into this issue just to clarify what we mean by “completeness” - it might be that the issue is more one of interpreting the numbers MAF is outputting in terms of the depths achieved with seeing-limited data.
Post-hack-day-summary: It looks like this first hack day was a success, with some good discussion and definite actions / specifications arising from the work. Thanks very much to all the participants, to @yoachim for MAF support during the meeting and to @ljones for input and insights. We are planning a second cadence hack day: while we have penciled in Thursday March 4th, the actual date will be announced a little closer to the time.
We intend that people will be able to continue the work they have started using the online resources they have created during the hack day. Here are some key links from the first hack day (links will be updated as feedback from participants comes in):
If you participated in the hack day and have issues and actions you still wish to add, please add them to slide 16 (or add more slides if you need more space). I will close the slides to edits in the next few days.
Here are some specifications for figures of merit that are in progress, based on interactions at today’s hack day. I have tagged who I think the lead authors are in each case, any corrections are welcome. If I have missed your specification document, please do add it here!
There was also useful discussion in the afternoon session on strategies and metrics for Galactic bulge science.
Much of the morning session focused on the key issue of validating the crowded field estimates from LSST. Slide 14 in the welcome slides lists actions from that discussion.
I mentioned this on the Slack channel, but please do not use the saturation stacker in the Jupyter notebook linked in the Google Doc about the saturation limit above. We have incorporated this stacker into MAF itself, and it has newer/updated parameters that will give correct answers (the values returned by the version of the stacker in the doc linked above will not be accurate).
Example of the updated values: https://github.com/lsst-pst/survey_strategy/blob/master/fbs_1.7/Saturation%20Limits.ipynb
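Very roughly, using the in-MAF version looks something like the sketch below (the output column name 'saturation_mag' and the default kwargs shown here should be double-checked against the stacker as it actually appears in your MAF install):

```python
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.stackers as stackers
import lsst.sims.maf.metricBundles as metricBundles

opsdb = db.OpsimDatabase('baseline_nexp2_v1.7_10yrs.db')

sat = stackers.SaturationStacker()            # adds a per-visit 'saturation_mag' column
metric = metrics.MinMetric('saturation_mag',  # brightest (numerically smallest) limit per point
                           metricName='Brightest saturation mag')
bundle = metricBundles.MetricBundle(metric, slicers.HealpixSlicer(nside=64),
                                    "filter = 'g'", stackerList=[sat])

group = metricBundles.MetricBundleGroup({'sat_g': bundle}, opsdb, outDir='saturation')
group.runAll()
```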
Note that I think if the criterion for ‘bright’ is only ~15 mag, we have visits that do not saturate at this limit even in the standard survey, without short exposures.
Do you care if you have a bright saturation value, but it’s in bad seeing? Is that ‘better’ than a bright saturation limit in a short exposure, but with better seeing? (there is considerable overlap).
Are you more interested in evaluating the brightest saturation limit over the sky (i.e. there may only be 1 visit at that limit) or something like the 5th percentile value (i.e. there will be visits which may have much brighter saturation magnitudes, but at least some number of visits in that filter will have that value or brighter)?
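(Either flavour is easy to express once the stacker column exists - e.g. just swapping the per-point metric in a sketch like the one above; exact metric names should be double-checked against the simple metrics available in your MAF version:)

```python
# Brightest value reached at each sky point (may come from a single visit):
best_per_point = metrics.MinMetric('saturation_mag')

# Value that at least ~5% of visits in that filter reach or do better than:
robust_per_point = metrics.PercentileMetric('saturation_mag', percentile=5)
```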
I also noted a comment in one of the docs asking about Rubin magnitudes for various spectral types of stars … you can definitely calculate this for any SED using sims_photUtils, but a few types are already available via a utility in lsst.sims.utils:
import lsst.sims.utils as utils
mags = utils.StellarMags(spectraltype, rmag)
Returns a dictionary (mags) with the magnitude in each bandpass (u, g, r, i, z, y). (see the class itself for more information on what types are available/SEDs used).
(This utility may move in the future, as it would probably be more at home in sims_photUtils, but we’ll provide warning.)
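(A quick usage sketch, following the call above - the inputs here are just an example, and the exact function name/capitalization and the available spectral types should be checked against the class itself:)

```python
import lsst.sims.utils as utils

# e.g. a K-type SED normalized to r = 20 (illustrative inputs only)
mags = utils.StellarMags('K', 20.0)
print(mags['g'] - mags['i'])   # a g-i colour from the returned dictionary
```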
Thanks, @ljones - I will correct the notebook linked there to use the class now found in MAF (since I don’t know on what timescale MAF will be updated on Data Lab or SciServer). I doubt I’ll get to that for another few hours. [Edit: our messages crossed - here I’m referring to the saturation limit.]
Thanks for the questions about the saturation level - we’ll give them some thought and update the specification. When I wrote that example spec, I actually took the output from the older notebook and used that as the upper limit! We definitely need a more scientifically justified bright limit, which I would guess would be brighter than 15. I think at the moment we’re using the median saturation level within the block of “short” exposures, but that too is something we should probably consider more scientifically.
Thanks @ljones. For people who want to dig into this, bolometric corrections in the LSST filters are available in the YBC database https://gitlab.com/cycyustc/ybc_tables, for very extended libraries of synthetic spectra (Kurucz ATLAS9, PHOENIX BT-Settl, Aringer’s cool giants, Koester’s WDs, Wolf-Rayet stars, …). You just need to assume logL, Teff, and a metallicity to convert them to absolute magnitudes and colours. The library is actually too extensive to be used directly in MAF, but a shorter version could be prepared if people want it.
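(For reference, the conversion is just the usual one; a minimal sketch, using the IAU reference value M_bol,sun = 4.74 and the standard convention BC_X = M_bol - M_X, which I believe is what the YBC tables use:)

```python
def absolute_mag(log_L, bc_filter, mbol_sun=4.74):
    """Absolute magnitude in a given filter from log10(L/Lsun) and the
    tabulated bolometric correction (M_X = M_bol - BC_X)."""
    m_bol = mbol_sun - 2.5 * log_L
    return m_bol - bc_filter

# e.g. a roughly red-clump-like star, log(L/Lsun) ~ 1.7, with BC_i taken from the tables:
# M_i = absolute_mag(1.7, bc_i)
```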
Hi @willclarkson, @mdallora, @knutago - regarding the issues with the crowding limit maps, and the previous discussion about them mentioned by @ljones: I fear that people might be referring to different quantities when defining the crowding limit. We all agree that it is the “magnitude at which photometric errors due to crowding become larger than 0.1 mag”. But these errors should be due to crowding, not to shot noise. So, when we look at empirical maps (from the DECaPS and BDBS surveys), might we be looking at the point where the 0.1 mag errors are due to shot noise?
My understanding is that the errors due to crowding are better assessed with artificial star tests, which usually indicate errors larger than those provided by photometry pipelines. That is, could it be that modellers and photometrists are using different errors to define the crowding limit?