Contributing Metrics

Hi all,

I know many of you wrote new metrics as part of the Cadence Notes effort, so I thought an update on the general process for submitting metrics to MAF would be useful – along with some new additions.

The process for making a PR (Pull Request) to add your metric to sims_maf_contrib is basically as described in the post above: create a fork of sims_maf_contrib, add your metric to your fork, and then open a PR to merge it back into lsst-nonproject/sims_maf_contrib.

I would also like to request a couple of additional considerations:

  • If your metric requires additional software packages as dependencies, please make a note of this in the PR.
    Because we don’t automatically install all of the dependencies on (e.g.) DataLab, a missing dependency can cause imports of sims_maf_contrib to fail for everyone. Until we get these additional dependencies installed (and installable automatically, in a future update for lsst_sims & sims_maf_contrib), the best work-around is to import them inside the metric class itself, instead of at the top of your file. That way, the dependency is only looked for when someone actually uses your metric, rather than whenever sims_maf_contrib is imported. Here is an example: import george is hidden inside the metric class StaticProbesFoMSummaryMetric, and the first sketch after this list shows the same pattern in miniature.
  • Please add some documentation to your metric.
    • A docstring in the code is very useful – see this time domain challenge metric example – and will show up in the API documentation (the first sketch after this list includes one).
    • A notebook demonstrating the use of the metric, together with more background information about its usefulness or limitations, is also helpful. These should go in the ‘science’ directory – if none of the existing directories match an appropriate category for your science, make a new one. The “Time Delay Challenge” notebook is extremely thorough; this crowding metric example is less so, but still helpful.
  • If you go the extra step of adding a unit test for your metric, we will really appreciate it! The bonus is that if we need to reconfigure something in your metric, or something elsewhere changes (maybe your software dependency changes its API), we can much more easily diagnose and fix the problem without having to come ask you to do it for us. This will also make it easier for us to move your metric into sims_maf in the future (which is where we would eventually like to put all metrics that do not have complex dependencies and that we run as part of the standard metric evaluation of runs). Here is an example of a unit test for the transient ascii metric; the second sketch after this list shows the general shape of such a test.
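
To make the deferred-import and docstring suggestions above concrete, here is a minimal sketch of what such a metric could look like. It assumes the usual sims_maf BaseMetric API; the class name HypotheticalGPMetric, the column choice, and the stand-in calculation are all made up for illustration (the real example to follow is StaticProbesFoMSummaryMetric).

```python
import numpy as np
from lsst.sims.maf.metrics import BaseMetric


class HypotheticalGPMetric(BaseMetric):
    """One-line summary of what the metric measures.

    A docstring like this shows up in the auto-generated API
    documentation, so briefly describe the science case, the
    columns the metric uses, and any caveats or limitations.

    Parameters
    ----------
    col : str, optional
        The column from the simulated survey data to operate on.
    """

    def __init__(self, col='observationStartMJD', **kwargs):
        self.col = col
        super().__init__(col=col, **kwargs)

    def run(self, dataSlice, slicePoint=None):
        # Import the extra dependency here, inside the method, rather than
        # at the top of the file: the import then only happens when the
        # metric is actually run, so a missing package will not break
        # importing the rest of sims_maf_contrib for everyone else.
        import george
        # ... use george for the real calculation; as a stand-in,
        # just return the number of visits in this dataSlice.
        return np.size(dataSlice[self.col])
```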

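Along the same lines, here is a minimal sketch of a unit test, again with hypothetical names (it exercises the sketch metric above, so its extra dependency must be installed to run it): build a small fake dataSlice by hand, mimicking what a slicer would pass in, run the metric on it, and check the result against a value you can compute independently.

```python
import unittest
import numpy as np

# Hypothetical import -- in practice, import the metric you
# added to sims_maf_contrib.
from mafContrib import HypotheticalGPMetric


class TestHypotheticalGPMetric(unittest.TestCase):
    def testRun(self):
        # A fake dataSlice: a structured numpy array holding just the
        # column(s) the metric needs, as a slicer would provide.
        dataSlice = np.zeros(10, dtype=[('observationStartMJD', float)])
        dataSlice['observationStartMJD'] = 59000.0 + np.arange(10)
        metric = HypotheticalGPMetric(col='observationStartMJD')
        # The stand-in implementation above just counts visits.
        self.assertEqual(metric.run(dataSlice), 10)


if __name__ == '__main__':
    unittest.main()
```
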
Thank you and we really appreciate your work on metrics!
We do really want to gather up as many of the community metrics as possible, particularly those which feature in cadence notes or white papers. When we generate new survey simulations, being able to run these metrics and test the effects of the survey strategy variations immediately is immensely helpful.

If you have any questions or need help with this process, please do feel free to reach out; either here on community.lsst.org or on the #sims-maf Slack channel is preferred.

Lynne