Verification Datasets Meeting 2016-04-27 Minutes

Verification Datasets telecon
Attendees: MWV, Simon, Angelo, Hsin-Fang, Colin, David, Frossie, Robert

Colin:

Not much to report

Simon:

  • Twinkles: fixed the astrometric registration problem

  • can make pretty pictures

  • can make good deep-detection catalogs

  • better forced photometry; people are looking at light curves

  • the problem was a miscommunication in the workflow; the WCSs
    were different

  • MWV suggested adding a check step to verify that the coaddTempExp
    and the skymap have the same WCS (see the sketch after this list)

  • making a camera package for MonoCam (Dave Monet)

  • MWV would like to learn how to make the camera geom

  • Colin suggested having an obs package for a single chip that could also
    be used with processFile, similar to Source Extractor

  • a discussion of processFile
    https://github.com/lsst-dm/processFile
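
For the WCS check suggested above, a minimal sketch could look like the following.
This is not the pipeline's implementation: the method names (getWcs, getBBox,
pixelToSky) follow the LSST stack API as I understand it, and coaddTempExp,
skyMap, and tractId are placeholder names.

    import math

    def max_wcs_offset_arcsec(exposure, tract_info, n_points=5):
        """Compare the exposure WCS to the tract WCS on a grid of pixel
        positions and return the largest sky offset in arcseconds."""
        exp_wcs = exposure.getWcs()
        tract_wcs = tract_info.getWcs()
        bbox = exposure.getBBox()  # parent (tract) pixel coordinates
        worst = 0.0
        for i in range(n_points):
            for j in range(n_points):
                x = bbox.getMinX() + bbox.getWidth() * i / (n_points - 1)
                y = bbox.getMinY() + bbox.getHeight() * j / (n_points - 1)
                c1 = exp_wcs.pixelToSky(x, y)
                c2 = tract_wcs.pixelToSky(x, y)
                # small-angle approximation is fine for a sanity check
                dra = (c1.getRa().asDegrees() - c2.getRa().asDegrees()) \
                      * math.cos(c1.getDec().asRadians())
                ddec = c1.getDec().asDegrees() - c2.getDec().asDegrees()
                worst = max(worst, math.hypot(dra, ddec) * 3600.0)
        return worst

    # e.g. fail loudly before coaddition if the two WCSs disagree:
    # if max_wcs_offset_arcsec(coaddTempExp, skyMap[tractId]) > 0.01:
    #     raise RuntimeError("coaddTempExp WCS does not match the skymap WCS")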

Hsin-Fang:

  • Working on middleware
  • the single-frame driver script from the pipe_drivers repo (ported over from HSC) works on DECam

Angelo:

Frossie:

  • Paul was here last week, ran validate_drp on a bunch of HSC data
  • will make it more generally available and runnable on a harness
  • RHL mentioned that all the HSC data goes public in Feb 2017, raw data and catalogs

MWV:

  • Zeljko mentioned that the SRD has all of the photometric KPM values
  • Michael fit the coefficients in validate_drp
    http://dmtn-008.lsst.io/
  • repeatability RMS vs. reported errors don’t correlate well
  • RHL suggested plotting a chi histogram, chi = residual/error (see the
    sketch below)
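
The chi histogram could be built along the lines of the sketch below. The input
layout (magnitudes and reported errors grouped per object) is an assumption, not
the validate_drp data model; the point is only that chi = (measurement - mean) /
reported error should be close to a unit Gaussian if the reported errors are
honest, and a wider distribution means the errors are underestimated.

    import numpy as np
    import matplotlib.pyplot as plt

    def chi_values(mags_by_object, magerrs_by_object):
        """Per-measurement chi over all objects with repeat observations."""
        chis = []
        for mags, errs in zip(mags_by_object, magerrs_by_object):
            mags = np.asarray(mags, dtype=float)
            errs = np.asarray(errs, dtype=float)
            if len(mags) < 2:
                continue
            chis.append((mags - mags.mean()) / errs)
        return np.concatenate(chis) if chis else np.array([])

    def plot_chi_histogram(chis, nbins=50):
        plt.hist(chis, bins=nbins, range=(-5, 5), density=True,
                 histtype="step", label="measured")
        x = np.linspace(-5, 5, 200)
        plt.plot(x, np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi),
                 label="unit Gaussian")
        plt.xlabel("chi = residual / reported error")
        plt.ylabel("normalized counts")
        plt.legend()
        plt.show()

Something like plot_chi_histogram(chi_values(mags, errs)) would then show at a
glance how far the reported errors are from Gaussian expectations.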

David:

  • working on photometric jointCal for the COSMOS data, trying to improve the
    relative calibration and to get to the bottom of the higher-than-expected
    repeatability RMS
  • also working on a document for the CoDR on measuring KPMs

Robert:

  • Merlin is visiting this week, figuring out the differences between CP and
    stack assumptions
  • working on obs_monocam as well

@mwv, @frossie do you think it would be useful to have the plots generated by validate_drp in the dashboard at this point? I initially thought it would make more sense for level 2 QA, where we evaluate the quality of the data rather than the quality of the code in the CI system. However, these plots are complementary to the metrics and can tell us about code changes by comparing the outputs from different builds, even visually (perhaps not feasible with static plots, but it becomes more interesting with interactive plots where we can select build 1 against build 2). It seems to me that there is value in keeping this history in the long term. Also, the plots produced by a specific build of the stack could be used as a reference when we run level 2 QA on different datasets with the same build, with everything linked from the level 2 QA dashboard to the level 0 QA dashboard… just dreaming…

To be more specific, it is the HSC Strategic Survey Program data that goes public in Feb 2017. There is a bunch of HSC data publicly available now, both from commissioning and open-use.