How to suggest a breakout session topic for Rubin 2024

Building on our experience with software management via a dedicated task force, and with developing and running software training workshops for the Science Collaborations, the TVS SC would like to organize a Rubin software training and management session. In this session we could discuss software training modalities, collect suggestions for improving the existing training modules, and discuss software development for the Science Collaborations. We would also like LINCC to participate, to discuss how best to integrate with their infrastructure and management team.

1 Like

I propose a Survey Strategy Update session in which the Survey Cadence Optimization Committee (SCOC) and the Survey Strategy team update the community on the status of the SCOC Phase 3 recommendation, including plans for the WFD, Early Science (including Y1 microsurveys), and ToOs, ahead of its official release.

2 Likes

I would like to propose that the survey strategy team host a session on metrics and on monitoring and reporting survey progress during operations. During operations we will report, at least quarterly, on how the survey is progressing and make predictions through to the end of operations.
We would also like to check whether those predictions stay consistent with what the survey can actually deliver.

As an example: parallax and proper motion measurements have metrics in the MAF framework, which we can use, together with survey simulations, to predict the survey’s performance in terms of proper motion and parallax error. As operations progress, we will start to simulate “the rest of the survey”, including the observations that have already been acquired. This lets us assess whether, given the observations in hand, the survey strategy needs to be updated for us to still meet our goals.
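For readers unfamiliar with MAF, a minimal sketch of what this prediction step looks like is below. It assumes the rubin_sim.maf Python API; exact class and keyword names vary between MAF releases (older ones use camelCase), and the simulation database filename is hypothetical.

```python
# Minimal sketch: predict parallax / proper motion errors from a survey
# simulation with MAF. Names follow recent rubin_sim.maf conventions; older
# releases use camelCase equivalents. The database path is hypothetical.
import rubin_sim.maf as maf

opsim_db = "baseline_10yrs.db"  # hypothetical OpSim simulation output

bundles = {
    # Predicted parallax error (mas), evaluated on a HEALPix grid over the sky.
    "parallax": maf.MetricBundle(
        maf.ParallaxMetric(), maf.HealpixSlicer(nside=64), ""
    ),
    # Predicted proper motion error (mas/yr) on the same grid.
    "proper_motion": maf.MetricBundle(
        maf.ProperMotionMetric(), maf.HealpixSlicer(nside=64), ""
    ),
}

group = maf.MetricBundleGroup(bundles, opsim_db, out_dir="astrometry_maf")
group.run_all()  # evaluates each metric over the simulated visits and saves results
```

The same machinery, run on a simulation that combines acquired visits with a simulated remainder of the survey, gives the “rest of the survey” predictions described above.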
However, another piece of this puzzle is verifying that our metrics make realistic predictions. In this example, we would compare the metric-predicted parallax and proper motion errors with the actual errors observed in each Data Release, consult with DM if the numbers do not match, and adjust our metrics as needed.
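To make that comparison step concrete, here is a hypothetical sketch; the function, variable, and catalog column names are illustrative assumptions, not the actual Data Release schema.

```python
# Hypothetical sketch of the "closing the loop" check described above:
# compare MAF-predicted parallax errors to errors measured in a Data Release
# catalog. Column and variable names are illustrative only.
import numpy as np

def prediction_ratio(predicted_mas, measured_mas):
    """Median ratio of measured to predicted parallax error.

    A ratio far from 1 would prompt consulting DM and/or revising the metric.
    """
    good = np.isfinite(predicted_mas) & np.isfinite(measured_mas)
    return np.median(measured_mas[good] / predicted_mas[good])

# e.g., with `predicted` from the MAF sketch above and a (hypothetical)
# DR object table column of measured errors:
# ratio = prediction_ratio(predicted, catalog["parallax_err_mas"])
```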
For science metrics coming from the community, this raises the question: how do we close the loop to confirm that the metrics match the science?
This session is an invitation to the community to discuss “Survey (strategy and progress) Metrics in Operations”.

3 Likes

I offer to chair DM’s annual update on algorithms and pipelines.

3 Likes

If desirable, I can offer to chair an LGBTQIA+ Social session early in the week (mirroring the 2023 PCW).

1 Like