DM Monthly Status Report for May 2020

The DM monthly status report covering May activities has been posted to DocuShare, collection-1101. For convenience, the High-level Summary is pasted below. Direct link to the full report:

High-level Summary

Community Interactions, Meetings and Workshops

The DM Leadership Team meeting was held May 12 through 14. This was a virtual meeting, but nevertheless proved very productive. Discussions covered a variety of key topics, including middleware development; time-series interfaces; plans for the Data Facility; rebaselining and the schedule; and the design of the prompt processing system.

In addition, several members of the team participated in the IVOA 2020 Virtual Interoperability Meeting, May 4–8. This meeting included important discussion about standardized representation for world coordinate system (WCS) information and the development of science platforms, as well as a number of other topics relevant to Rubin development.

Preparations are being made for the June 15–17 Pre-Verification Review of Networks and for our second Operations Rehearsal, June 2–4.

Technical Progress

Various documents were released or updated:

DMTN-148 was made available in draft form. This document defines the overall architecture and strategy for management of calibration products in the construction, commissioning, and operational eras.

DMTN-150, which describes the Google Cloud Proof of Concept, was released. Work began on data transfer, Data Release Production execution, and Alert Production execution in the cloud.

LDM-732, the Rubin Observatory Network Verification Baseline, was published.

SQR-039 was published. This document proposes improvements to the Science Platform Authentication and Authorization service. These are of immediate relevance to the deployment of the Intermediate Data Facility.

LCR-2313, to baseline updates to LSE-78 — the Rubin Observatory Network Design — was submitted.

The DM team achieved a significant milestone this month with the generation of the first alert packets by the Alert Production system. This represents a major new capability for the DM system. Currently, these alerts are not yet being distributed by the real-time Alert Distribution system that will be used in the operational era. However, sample alerts based on precursor data will be made available to the community for bulk download, in particular with a view to assisting potential authors of community brokers.
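To illustrate how a prospective broker author might work with such bulk-downloaded sample alerts, here is a minimal sketch. It is purely hypothetical: the operational alert schema is far richer (and Avro-serialized rather than JSON), and the field names used below ("diaSource", "snr") are invented for illustration.

```python
import json

# Hypothetical sample alerts, standing in for bulk-downloaded packets.
# Field names are illustrative, not the real Rubin alert schema.
sample_alerts = [
    {"alertId": 1, "diaSource": {"snr": 12.3}},
    {"alertId": 2, "diaSource": {"snr": 4.1}},
    {"alertId": 3, "diaSource": {"snr": 8.7}},
]

# A bulk download might be stored as one JSON document per line.
payload = "\n".join(json.dumps(a) for a in sample_alerts)

def high_snr(lines, threshold=5.0):
    """Stream alerts back from storage, keeping only high-significance ones."""
    for line in lines.splitlines():
        alert = json.loads(line)
        if alert["diaSource"]["snr"] >= threshold:
            yield alert["alertId"]

print(list(high_snr(payload)))  # [1, 3]
```

The point of the sketch is simply that bulk samples let broker developers exercise their filtering and ingestion logic well before the real-time stream exists.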

A number of other key improvements were made to the Science Pipelines. Particular highlights include implementing the first version of “sky sources” — corresponding to measurements made on empty patches of sky — in single frame processing. This provides an important QA tool. The team also provided driver scripts for processing with simulated (“fake”) sources inserted at large scales.
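The QA value of sky sources can be sketched in a few lines: if apertures placed on genuinely empty, background-subtracted sky yield fluxes that scatter around zero, the background model is behaving; a systematic offset flags a problem. Everything below is a toy illustration, not the pipeline's actual API.

```python
import random
import statistics

random.seed(42)

def make_noise_image(size=64, sigma=1.0):
    """A background-subtracted image containing only Gaussian noise."""
    return [[random.gauss(0.0, sigma) for _ in range(size)] for _ in range(size)]

def aperture_flux(image, x, y, radius=3):
    """Sum pixels in a square 'aperture' centered on (x, y)."""
    return sum(
        image[j][i]
        for j in range(y - radius, y + radius + 1)
        for i in range(x - radius, x + radius + 1)
    )

image = make_noise_image()

# Place "sky sources" at random positions away from the image edge.
positions = [(random.randint(5, 58), random.randint(5, 58)) for _ in range(50)]
fluxes = [aperture_flux(image, x, y) for x, y in positions]

# QA check: the mean sky-source flux should be consistent with zero.
mean_flux = statistics.mean(fluxes)
print(f"mean sky flux: {mean_flux:.2f}")
```

In the real pipelines, sky sources are measured with the same algorithms as genuine detections during single-frame processing, so biases in those measurements show up directly.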

Members of the DM System Science Team, in collaboration with members of the project’s Commissioning Team, have been making rapid progress on the new scalable system for calculating performance metrics based on the results delivered by those pipelines. This represents a key capability for both ongoing QA of our processing and ultimately for formal verification and validation of the as-delivered system.
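The shape of such a metric calculation can be sketched as follows: reduce per-source pipeline measurements to a single scalar metric, then compare it against a requirement threshold. The metric chosen here (photometric repeatability) and the 10 mmag threshold are illustrative stand-ins, not the actual verification requirements.

```python
import statistics

def photometric_repeatability(mags_per_source):
    """RMS of repeated magnitude measurements, averaged over sources, in mmag."""
    rms_values = [statistics.pstdev(mags) * 1000.0 for mags in mags_per_source]
    return statistics.mean(rms_values)

# Repeated magnitude measurements of three sources across visits (toy data).
measurements = [
    [20.001, 20.003, 19.999],
    [18.500, 18.502, 18.498],
    [21.010, 21.008, 21.012],
]

metric = photometric_repeatability(measurements)
passed = metric <= 10.0  # hypothetical 10 mmag requirement
print(f"repeatability = {metric:.2f} mmag, pass = {passed}")
```

The "scalable" part of the real system lies in running many such reductions over full-survey-sized outputs; the per-metric logic stays as simple as this.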

The Qserv build process was updated to take advantage of Conda (refer to last month’s report for a discussion of the new Conda environment as applied to the Science Pipelines), and improvements and updates forced by upstream changes were made to the Conda environment. The new Qserv parallel data ingest tooling was heavily exercised with both reprocessed Hyper Suprime-Cam and DESC Data Challenge 2 datasets, and a simulated catalog data creation tool is now being integrated to enable the next regime of Qserv scale testing.

New monitoring software for tracking images as they are transferred to the Data Facility at NCSA was deployed. This covers images from all cameras — both the Auxiliary Telescope and ComCam — whether arriving from the Summit Facility, the Base Data Center, or the test environments at NCSA and SLAC. A new software environment, including the ComCam Header Service, Archiver Service, Forwarder, and Observatory Operations Data Service, was deployed at the Base Data Center.