The DM monthly status report covering November activities has been posted to DocuShare, collection-873. For convenience, the High-level Summary is pasted below. Direct link to the full report (pdf): http://ls.st/o7r
High-level Summary
Community Interactions, Meetings and Workshops
During this month, members of the DM leadership have continued their work with Operations and Project Management Office staff to assess the possibility of hosting the LSST Data Facility at a Department of Energy Lab. In November, this included visits to Brookhaven National Laboratory and Fermi National Accelerator Laboratory (having visited SLAC in October), as well as answering questions from, and providing technical support to, the labs as they prepare their proposals for submission in early December.
During regular operations, LSST will produce alerts by comparing new images with composite “templates” based on earlier observations of the same area of sky. During the first year of operations this presents obvious difficulties: since LSST is observing the sky for the first time, templates are not yet available. The DM Subsystem Scientist presented various options for alert production under these circumstances to the Project Science Team and Science Collaboration chairs.
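As a rough illustration of the template-based approach (a minimal sketch only, not the Science Pipelines implementation; the array shapes, noise estimate, and detection threshold below are invented for this example):

```python
import numpy as np

def detect_alert_candidates(new_image, template, threshold=5.0):
    """Toy difference-imaging detection: subtract a coadded template
    from a new exposure and flag pixels that deviate significantly.

    Both inputs are 2-D numpy arrays already resampled to the same
    pixel grid; real pipelines also match the point-spread functions
    of the two images before subtracting, which is where much of the
    algorithmic difficulty lies.
    """
    difference = new_image - template
    noise = np.std(difference)  # crude noise estimate, for illustration only
    # Candidate detections are pixels exceeding `threshold` sigma.
    ys, xs = np.nonzero(np.abs(difference) > threshold * noise)
    return list(zip(xs.tolist(), ys.tolist()))

# Example: a template plus one artificial transient in the new image.
rng = np.random.default_rng(42)
template = rng.normal(100.0, 1.0, size=(64, 64))
new_image = template + rng.normal(0.0, 1.0, size=(64, 64))
new_image[30, 40] += 50.0  # inject a bright transient
print(detect_alert_candidates(new_image, template))  # ~[(40, 30)]
```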
A “boot camp” aimed at software developers and scientists who intend to start contributing code to the Science Pipelines was organized by the Princeton team, with extensive remote participation, from both inside and outside the subsystem. We are grateful to several members of the DM team who gave presentations and hosted tutorials. Materials from the event, and videos of some sessions, are available through the LSST Community Forum.
During the Supercomputing 2019 conference, the AmLight engineering team demonstrated TCP and UDP flows on LSST’s Miami–Boca Raton–Atlanta network links at 100 Gbps.
Technical Progress
Improvements were made this month to the way in which the properties of cameras are described in the LSST codebase. This included a number of important usability and robustness improvements to the “cameraGeom” system, as well as the ability to describe the quantum efficiency of detectors over time. The latter has been demonstrated in JupyterLab-based plotting (see the image under Science Quality and Reliability Engineering, below).
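The report does not spell out the new interface, but schematically a time-dependent quantum-efficiency description amounts to a per-detector lookup of QE curves by validity epoch. The sketch below is a hypothetical illustration of that idea; the class, method names, and numbers are invented and this is not the actual cameraGeom API:

```python
import bisect
from datetime import date

class QECurveHistory:
    """Illustrative store of quantum-efficiency curves for one detector,
    keyed by validity start date, so the curve in force at any
    observation epoch can be looked up."""

    def __init__(self):
        self._dates = []   # sorted validity start dates
        self._curves = []  # QE vs. wavelength, parallel to _dates

    def add_curve(self, valid_from, curve):
        i = bisect.bisect(self._dates, valid_from)
        self._dates.insert(i, valid_from)
        self._curves.insert(i, curve)

    def curve_at(self, epoch):
        # Most recent curve whose validity started on or before `epoch`.
        i = bisect.bisect(self._dates, epoch) - 1
        if i < 0:
            raise ValueError(f"no QE curve valid at {epoch}")
        return self._curves[i]

history = QECurveHistory()
history.add_curve(date(2019, 1, 1), {500: 0.85, 700: 0.90})   # nm -> QE
history.add_curve(date(2019, 11, 1), {500: 0.83, 700: 0.89})
print(history.curve_at(date(2019, 6, 15)))  # curve valid from 2019-01-01
```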
The implementation of the low-level distributed data-loader service for Qserv was completed, deployed at the Data Facility, and tested by ingesting the Gaia Data Release 2 and Hyper Suprime-Cam Release Candidate 2 catalogs.
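For background, Qserv distributes large catalogs by partitioning the sky into “chunks” assigned to worker nodes, so an ingest service must route each catalog row to the worker owning its chunk. The sketch below conveys the general stripe-based idea only; the stripe counts and chunk-ID formula are invented and do not reproduce Qserv’s actual partitioner:

```python
def chunk_id(ra_deg, dec_deg, num_stripes=85, chunks_per_stripe=12):
    """Toy spatial chunk assignment: divide the sky into declination
    stripes, then subdivide each stripe in right ascension. Qserv's
    real partitioner uses a related stripe scheme, with overlap
    regions and per-stripe chunk widths omitted here for simplicity."""
    stripe = int((dec_deg + 90.0) / 180.0 * num_stripes)
    stripe = min(stripe, num_stripes - 1)
    chunk_in_stripe = int(ra_deg / 360.0 * chunks_per_stripe) % chunks_per_stripe
    return stripe * chunks_per_stripe + chunk_in_stripe

# Every catalog row is routed to the worker that owns its chunk.
print(chunk_id(ra_deg=150.0, dec_deg=2.2))
```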
Efforts are ongoing between the System Science Team and the Science Pipelines group to expand the Science Data Model to capture extra data products which will be generated to assist in performing QA and other analyses on pipeline outputs. This effort has, so far, focused on data release processing; it will be extended to cover alert production in future.
Development of “Generation 3” middleware continues apace, with the goal of achieving feature parity with the old system in early calendar year 2020. Effort is now focused on a major functionality demonstration scheduled for mid-December. In parallel with this, existing code is being rapidly converted to use the new system.
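For context, data access under the Generation 3 middleware is organized around the Butler, which resolves datasets from abstract data IDs rather than file paths. A minimal usage sketch follows; the repository path, collection name, and data ID values are placeholders, and the exact signatures have evolved across releases:

```python
from lsst.daf.butler import Butler

# Point the Butler at a Gen3 repository (path and collection are placeholders).
butler = Butler("/repo/example", collections="HSC/runs/example")

# Retrieve a calibrated exposure by data ID; the Butler resolves the
# storage location and file format, so pipeline code never handles
# filesystem paths directly.
calexp = butler.get(
    "calexp",
    dataId={"instrument": "HSC", "visit": 903334, "detector": 16},
)
print(calexp.getBBox())
```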
A number of key algorithmic enhancements were made to the Science Pipelines codebase. These included substantial speed improvements to the system used to calculate DIAObject properties based on their constituent DIASources, and the completion of an investigation into false positive detections in difference images.
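Schematically, DIAObject summary properties are aggregates over the DIASources associated with each object. Below is a toy version of that rollup using pandas; the table is fabricated, with column names echoing the DPDD-style schema, and this is not the pipeline’s actual implementation:

```python
import pandas as pd

# Toy DIASource table: each row is one detection in a difference image,
# already associated to a DIAObject via diaObjectId.
dia_sources = pd.DataFrame({
    "diaObjectId": [1, 1, 1, 2, 2],
    "psFlux":      [120.0, 135.0, 128.0, 40.0, 38.0],
    "midPointTai": [59000.1, 59003.2, 59006.0, 59001.5, 59004.7],
})

# DIAObject properties as per-object aggregates over their DIASources.
dia_objects = dia_sources.groupby("diaObjectId").agg(
    nDiaSources=("psFlux", "size"),
    psFluxMean=("psFlux", "mean"),
    psFluxSigma=("psFlux", "std"),
    lastSeen=("midPointTai", "max"),
)
print(dia_objects)
```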
Across the Pipelines and Architecture teams, work focused on preparations for the Science Pipelines 19.0.0 release. This is expected in early December, and will bring the many enhancements that have been reported over the last several months to a wider audience.
An instance of the Observatory Operations Data Service (OODS), which provides low-latency access to images, files, and metadata, was installed on the Auxiliary Telescope (AuxTel) test stand in Tucson. Meanwhile, members of the DM team assisted with installation of the AuxTel Data Acquisition (DAQ) system on the mountain, and tested the DAQ over the summit–base network.
The first Data Facility hardware procurement for FY20 was completed. This included nodes for a new Qserv environment to support commissioning, a single AMD “Rome” node for testing, an expansion of the storage available at the Base Data Center in La Serena, and additional Kubernetes nodes which will support commissioning through the LSST Science Platform.
The Continental United States Network Implementation Team, led by Paul Wefel of ESnet, continued working on implementation of the Miami–Atlanta–Chicago network. The last piece of the route from Miami to NCSA, using the ESnet paths, has been completed, and Border Gateway Protocol (BGP) sessions have been established between NCSA and Miami over those paths. Note that these paths currently use shared leased circuits between Miami and Atlanta; dedicated paths will be implemented during operations.