As a glance over v0.99 of the Observing Strategy document and https://milkyway.science.lsst.org did not reveal anything quantitative in this regard, here is a proposal for science use cases and metrics for LSST crowded field photometry.
The three metrics below should be defined for (at least) the following MW environments:
a) Specific locations in the MW disk as parameterized by GLAT/GLON/heliocentric distance,
b) The bulge, and
c) Globular clusters.
For each environment a faint limit should be specified in terms of stellar SpT (or Teff)/mass/luminosity.
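As a concrete illustration of turning such a faint-limit spec into an observable quantity, here is a minimal sketch that converts a stellar-mass limit into an apparent V magnitude for an environment at a given distance modulus. The mass-to-M_V values are illustrative round numbers for main-sequence stars, not a calibrated relation, and the function name is a placeholder for discussion.

```python
# Illustrative main-sequence absolute V magnitudes (M_sun -> M_V, approximate
# round numbers only, not a calibrated mass-luminosity relation).
MV_MAIN_SEQUENCE = {1.0: 4.8, 0.8: 5.9, 0.6: 7.5, 0.4: 9.0}

def faint_limit_vmag(mass_msun, dist_modulus, a_v=0.0):
    """Apparent V mag of a main-sequence star of the given mass at the
    given distance modulus, with optional foreground extinction A_V."""
    return MV_MAIN_SEQUENCE[mass_msun] + dist_modulus + a_v

# e.g. a 1 M_sun star at m - M = 15.1 (roughly M3's distance modulus):
print(round(faint_limit_vmag(1.0, 15.1), 1))  # ~19.9
```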
Case 1: Determination of fundamental stellar parameters for individual stars:
a) Teff to within TBD K
b) Assign luminosity class
c) [Fe/H] to within TBD dex.
Case 2: Determination of age and metallicity for selected stellar populations (via CMD analysis):
a) Age to within TBD Gyr
b) [Fe/H] to within TBD dex.
Case 3: Astrometry
a) RA, DEC to within TBD arcsec.
b) pmRA, pmDEC to within TBD arcsec/yr.
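One way to make the Case 1-3 metrics concrete for discussion is a small record per (science case, environment, quantity) combination, with the TBD tolerances left unset until a science case fixes them. The field names below are placeholders I am proposing, not any project schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrowdedFieldMetric:
    case: str          # e.g. "stellar parameters", "CMD age/[Fe/H]", "astrometry"
    environment: str   # e.g. "disk (l, b, d)", "bulge", "globular cluster"
    quantity: str      # e.g. "Teff", "[Fe/H]", "pmRA/pmDEC"
    tolerance: Optional[float] = None  # TBD until driven by a science case
    unit: str = ""

# A couple of the proposed entries, tolerances still TBD:
metrics = [
    CrowdedFieldMetric("stellar parameters", "globular cluster", "Teff", None, "K"),
    CrowdedFieldMetric("astrometry", "bulge", "pmRA/pmDEC", None, "mas/yr"),
]
```

Filling in the `tolerance` fields is exactly the exercise the science cases below are meant to drive.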
These also place requirements on star-galaxy separation and correcting for foreground interstellar extinction.
So - what are the numbers we really need here for our science? We can talk about the need to make measurements both astrometric and photometric to specific precisions, but without reference to the science returned, these are merely numbers.
Not only do we need to state the science needs that drive the numerical requirements, but we should also address what part of the science is recovered if we can't meet those goals.
I can imagine goals such as "look for evidence of multiple (age/[Fe/H] segregated) populations within globular clusters", or "look for chemically/age/kinematically distinct populations in the bulge to constrain formation scenarios".
As stated in a recent telecon, so far the project has dealt with crowded-field issues only on a best-effort basis; now they want numbers. For the most part SMWLV science requires determination of stellar parameters, which is why I suggested metrics based on their determination. The accuracy to which we need them determined depends on the specific science cases. So what do we really want?
We also need to be able to identify which object varied in crowded fields - a shared concern with the Transients folks.
Example science case - population studies in NGC 5272 (M3):
Here is an example science case for thinking about crowded field studies. While it's neither in the bulge nor the disk, it caught my interest. Others may have alternate scenarios; whatever works for discussion is fine.
The well-studied globular cluster NGC 5272 (M3), at Dec = +28, is a prime example of multiple stellar populations within a single GC (the "second parameter" problem). We would like to study the spatial and kinematic distributions of each population. The expectation is that, with the exception of mass segregation, these populations are well mixed.
NGC 5272 is at a heliocentric distance of 10.4 kpc (m-M = 15.1). In the ideal case we would like to:
a) Have better than 1% photometry for stars less than 1 solar mass (stretch to the HBL).
b) Assign reasonable proper motion values to members of each population. From section 4.3.1 in the WP, the nominal PM error at r ~ 24 is 1 mas/yr, or ~48 km/s at this distance. This is large compared to the ~6 km/s velocity dispersion in M3, so we need to go brighter.
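The conversion behind item b can be sketched in a few lines: transverse velocity implied by a proper-motion error at a given distance (v [km/s] = 4.74 × μ [mas/yr] × d [kpc]), and, inverting it, the PM precision needed to resolve a target velocity dispersion. With the standard 4.74 factor this gives ~49 km/s at 10.4 kpc, consistent with the ~48 km/s quoted above:

```python
K = 4.74  # km/s per (mas/yr * kpc)

def pm_to_vel(mu_mas_yr, dist_kpc):
    """Transverse velocity (km/s) implied by a proper motion at a distance."""
    return K * mu_mas_yr * dist_kpc

def required_pm(sigma_v_kms, dist_kpc):
    """PM precision (mas/yr) needed to resolve a velocity dispersion."""
    return sigma_v_kms / (K * dist_kpc)

print(round(pm_to_vel(1.0, 10.4), 1))    # ~49 km/s for 1 mas/yr at M3
print(round(required_pm(6.0, 10.4), 3))  # ~0.12 mas/yr to match the dispersion
```

So matching M3's ~6 km/s dispersion implies a PM precision around 0.12 mas/yr, roughly an order of magnitude better than the nominal r ~ 24 error, which is what drives the "go brighter" conclusion.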
It would be interesting to see how the limits for the following vary for different photometry codes:
- maximum field crowding, i.e., how close to the core we can work.
- minimum stellar mass for accurate photometry.
- minimum stellar mass for accurate astrometry.