Has the LSST data release coadd tiling been finalized/decided yet?

I’m asking this question on behalf of someone else in the community who’s working on Roman: if somebody wanted to start making data products or plan future data products so as to be coadded onto footprints that match what LSST data releases will use, would this currently be possible? Has the exact choice of LSST data release coadd tiling been finalized/decided yet?

I know the LSST pipelines skymap concept is configurable, so one experiment I did was to run `butler register-skymap` with all default config parameters, generating the attached pickle object and the printouts appended at the bottom of this message. But I don’t know if it’s safe to assume that the default-parameter skymap is what LSST data releases will actually adopt.

@sfu also pointed me to the makeSkyMap config file within the obs_lsst package, so maybe that’s the answer?

Thanks very much.

printouts

pipe.tasks.script.registerSkymap INFO: sky map has 12 tracts
pipe.tasks.script.registerSkymap INFO: tract 0 has corners (35.576, 13.571), (324.424, 13.571), (261.130, 56.421), (98.870, 56.421) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 1 has corners (155.576, 13.571), (84.424, 13.571), (21.130, 56.421), (218.870, 56.421) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 2 has corners (275.576, 13.571), (204.424, 13.571), (141.130, 56.421), (338.870, 56.421) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 3 has corners (97.175, -20.621), (22.825, -20.621), (11.988, 42.674), (108.012, 42.674) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 4 has corners (217.175, -20.621), (142.825, -20.621), (131.988, 42.674), (228.012, 42.674) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 5 has corners (337.175, -20.621), (262.825, -20.621), (251.988, 42.674), (348.012, 42.674) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 6 has corners (48.012, -42.674), (311.988, -42.674), (322.825, 20.621), (37.175, 20.621) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 7 has corners (168.012, -42.674), (71.988, -42.674), (82.825, 20.621), (157.175, 20.621) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 8 has corners (288.012, -42.674), (191.988, -42.674), (202.825, 20.621), (277.175, 20.621) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 9 has corners (158.870, -56.421), (321.130, -56.421), (24.424, -13.571), (95.576, -13.571) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 10 has corners (278.870, -56.421), (81.130, -56.421), (144.424, -13.571), (215.576, -13.571) (RA, Dec deg) and 350 x 334 patches
pipe.tasks.script.registerSkymap INFO: tract 11 has corners (38.870, -56.421), (201.130, -56.421), (264.424, -13.571), (335.576, -13.571) (RA, Dec deg) and 350 x 334 patches

skyMap_test_skymaps.pickle (1.0 KB)

Hi @ameisner, thanks for posting this question. I checked with the Rubin Data Management System Science team, and for the question of “are the bounding boxes for the deep coadd tracts and patches set?”, the answer is “no, not yet”.

Perhaps the following information is already known, or was not part of the underlying question, but just in case it is helpful: the center coordinates for all LSST visits (the individual on-sky exposures) will not be predetermined. In other words, the LSST will not use discrete tiling; it will not predefine a set of field centers to be used for all visits.

If the above answers your question please mark this post as the solution, and if not please feel free to follow-up with additional questions.


Thanks, this answers the original question.

I’m not sure that I understand this, particularly the part about “will not predefine”. I have seen some discussion of using random dithers for LSST (e.g., https://iopscience.iop.org/article/10.3847/0004-637X/829/1/50), but even in that case I would assume that the list of planned tile centers is predetermined by virtue of specifying a random seed or similar. I suppose there’s also the potential for relatively small uncorrected pointing offsets to be in the mix; I’m not sure if that’s what’s being referred to here?

I’m going to mention @ljones and @yoachim here because they can better describe plans for field/tile centers from a survey strategy perspective.


Yes, @MelissaGraham is correct that our current approach is to not use fixed field centers for most of the survey (the deep drilling fields are still single pointings with traditional dithers). We still use the sky tessellation from earlier, but rather than shifting those field centers to dither, we instead randomize the orientation of the tessellation each night. We track the progress of the survey on a HEALpix grid that has higher resolution than the field of view, and each night make a new assignment of HEALpix cells to pointings.
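For concreteness, here is a minimal numpy sketch of that nightly bookkeeping step. This is illustrative only, not the rubin_sim implementation: a small list of (RA, Dec) cells stands in for the real HEALpix grid, and the pointing centers are made up. Each sky cell is simply assigned to its nearest pointing center in angular distance.

```python
import numpy as np

def radec_to_xyz(ra_deg, dec_deg):
    """Unit vectors from RA/Dec in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def assign_cells(cell_ra, cell_dec, pointing_ra, pointing_dec):
    """Index of the nearest pointing center for each sky cell.

    Smallest angular distance == largest dot product of unit vectors.
    """
    cells = radec_to_xyz(cell_ra, cell_dec)
    pointings = radec_to_xyz(pointing_ra, pointing_dec)
    return np.argmax(cells @ pointings.T, axis=1)
```

Because the pointing centers move every night (the tessellation is re-oriented), this assignment has to be redone per night rather than fixed once for the whole survey.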

This notebook has an example of how we construct the overall survey footprint: rubin_sim_notebooks/3-MDP_surveys.ipynb at main · lsst/rubin_sim_notebooks · GitHub


Thanks a lot for the response, @yoachim. I still feel like, even if the orientation of the tessellation is randomized per night, it should be possible to know the full list of potential LSST field centers before the survey starts, for instance by setting the random seed for each observing night to some integer hash of the calendar date, like 20280101 (or similar). Or does the nightly tessellation randomization take into account details like the survey progress history and/or weather conditions that could only be known shortly before a given night’s observations?
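To be explicit about what I mean, the date-hash seeding scheme would look something like this (the seed convention here is just my hypothetical, not anything the scheduler actually does):

```python
import numpy as np
from datetime import date

def nightly_rng(night):
    # Hypothetical scheme: seed each night's randomization with an
    # integer hash of the calendar date, e.g. date(2028, 1, 1) -> 20280101.
    seed = night.year * 10000 + night.month * 100 + night.day
    return np.random.default_rng(seed)

# The nightly tessellation spin would then be reproducible in advance:
spin = nightly_rng(date(2028, 1, 1)).uniform(0.0, 360.0)
```

With something like this, anyone could regenerate every night’s tessellation orientation before the survey starts.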

Also, I’m not sure if I’m interpreting “randomize the orientation of the tessellation” correctly…does it mean that the “pole” of the tessellation randomly changes so as to tilt the whole sphere, and/or that some random amount of spin about the pole is applied? Thanks again…

You have it right: the tessellation is first spun a random amount about the pole, then a random point on the sphere is drawn and two rotations are applied to move the pole to that spot. So over the 10-year survey there will be around 19 million potential pointing centers, but we only expect ~2 million observations, and most of those will be in pairs, so of order 1 million unique field centers. We don’t know ahead of time which subset of the 19 million will end up being observed, due to weather downtime, etc.
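That spin-then-tilt procedure can be sketched in plain numpy (illustrative only, not the rubin_sim implementation). Note that a rigid rotation preserves all angular separations between tessellation points, which is why the field spacing survives the nightly re-orientation:

```python
import numpy as np

def radec_to_xyz(ra_deg, dec_deg):
    """Unit vectors from RA/Dec in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def rotate_tessellation(xyz, spin_deg, pole_ra_deg, pole_dec_deg):
    """Spin about the z-axis, then carry the pole to a new point.

    Two further rotations move the original pole (the z-axis) to the
    randomly drawn point (pole_ra_deg, pole_dec_deg).
    """
    s = np.radians(spin_deg)
    spin = np.array([[np.cos(s), -np.sin(s), 0.0],
                     [np.sin(s),  np.cos(s), 0.0],
                     [0.0, 0.0, 1.0]])
    t = np.radians(90.0 - pole_dec_deg)   # tilt away from the old pole
    tilt = np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])
    p = np.radians(pole_ra_deg)           # then swing to the target RA
    swing = np.array([[np.cos(p), -np.sin(p), 0.0],
                      [np.sin(p),  np.cos(p), 0.0],
                      [0.0, 0.0, 1.0]])
    return xyz @ (swing @ tilt @ spin).T
```

Applying this with fresh random angles each night gives a new realization of the same tessellation geometry.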

I can write a script that generates the full list of potential pointings for you. We also have lots and lots of simulated surveys if you want to see what the realized pointing list looks like (the sqlite3 .db files linked off here: http://astro-lsst-01.astro.washington.edu:8080/).


Thanks! No need to make/send the list of 19 million possible pointing centers, I’m just trying to understand the overall concept.

So if the 19 million possible LSST pointing centers are all known in advance, wouldn’t an alternative approach be to randomly downsample that parent list to ~1 million field centers (all at once via software, before the survey starts) in a way that preserves whatever areal density function is desired? It seems to me that such an approach would allow one to know the full list of LSST pointing centers before the survey starts. Maybe there’s some advantage to having a per-night rotated tessellation (like preserving certain angular spacings between consecutively observed field centers), though that’s not immediately obvious to me.
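Concretely, the sort of one-shot downsampling I’m imagining is something like the toy numpy sketch below (the parent list and the density weights are placeholders, not real LSST pointings):

```python
import numpy as np

rng = np.random.default_rng(20280101)  # fixed seed -> list known in advance

# Stand-in parent list of candidate field centers (RA, Dec in degrees);
# the real one would be the ~19 million rotated-tessellation centers.
n_parent = 100_000
parent_ra = rng.uniform(0.0, 360.0, n_parent)
parent_dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n_parent)))

# Desired areal density expressed as a weight per candidate
# (placeholder: uniform; a real map would downweight some regions).
weights = np.ones(n_parent) / n_parent

# Draw ~5% of the parent list once, up front, without replacement.
keep = rng.choice(n_parent, size=n_parent // 20, replace=False, p=weights)
field_ra, field_dec = parent_ra[keep], parent_dec[keep]
```

The point is just that, with a fixed seed, the surviving subset is fully determined before any observing happens.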

We start with the desired coverage as input and have it as a function that can generate HEALpix maps at arbitrary resolution.
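Schematically, that input is just a function of sky position that can be evaluated on any grid. Here is a toy numpy stand-in (a coarse Dec-only grid rather than HEALpix; the band edges and weights are invented for illustration):

```python
import numpy as np

def desired_coverage(dec_deg):
    """Relative number of visits requested vs. declination (toy values)."""
    dec = np.asarray(dec_deg, dtype=float)
    weight = np.zeros_like(dec)
    weight[(dec > -62.0) & (dec < 2.0)] = 1.0  # main survey band (made up)
    weight[dec <= -62.0] = 0.3                 # polar region (made up)
    return weight

# Evaluate at arbitrary resolution, as with a HEALpix map:
dec_grid = np.linspace(-90.0, 30.0, 241)
footprint = desired_coverage(dec_grid)
```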

Here are some input footprint maps, for r and u band:

[input footprint map images]

and then the resulting coverage after simulating 10 years of observations:

[simulated 10-year coverage map images]

The dithering scheme means all the sharp lines on the input maps get blurred a bit (and the real history has the deep drilling fields included). So we know where the borders of the survey will be, but not the exact pointing centers that will generate it.

We do actually want to maintain similar angular distances between pointings. That way we can have a set amount of field overlap which is useful for rapid timescale science, and needed for running global photometric calibration.

Generating all our desired pointings ahead of time introduces a host of issues. For one, the scheduler takes longer to run in the early part of the survey because it has a longer list of potential observations to search through than it will later on. Then, if the weather is better than expected, the scheduler ends up making lots of long slews or thrashing the filter changer as the list runs low.

The early OpSim code was built on using large lists of pre-determined observations. It had the above issues and more, which is why we’ve moved to using a decision tree plus Markov Decision Process that can generate observations on the fly.
