Hi Tim, thanks a lot for sharing the Gen3 tutorial.
I wanted to ask: does ingestion of the reference catalog happen by default when you construct the butler in rc2_subset?
Thanks for your patience as well.
The butler repository used in the tutorial is already prefilled with all necessary datasets.
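If you want to check that for yourself, the butler command line can list what a repository already contains. A minimal sketch, with <REPO> standing in for the tutorial repository path, and assuming the reference catalogs live in a collection named refcats (that collection name is my assumption):

# List every collection in the repository; pre-loaded reference catalogs
# usually sit in a dedicated collection (often named "refcats")
butler query-collections <REPO>

# List the datasets in that collection to see which reference catalogs are present
butler query-datasets <REPO> --collections refcats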
Okay, I’ll give that a test. Thanks, and kudos on the advent of the official Gen3 tutorial.
It appears to have worked, although that was not very apparent to me in past days. I’ll continue on a few more steps, then circle back to use the latest and greatest Gen3 tutorial.
Thanks; my console output is below…
$ butler make-discrete-skymap ./GEN3_run HSC --collections processCcdOutputs
makeDiscreteSkyMap INFO: Extracting bounding boxes of 33 images
makeDiscreteSkyMap INFO: Computing spherical convex hull
makeDiscreteSkyMap INFO: tract 0 has corners (321.154, -0.596), (320.594, -0.596), (320.594, -0.036), (321.154, -0.036) (RA, Dec deg) and 3 x 3 patches
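To confirm that the skymap actually landed in the skymaps collection (the one used as an input to the coaddition command below) before moving on, a query like the following should work; the dataset type name skyMap is my assumption, so adjust it if your repository registers it differently:

# Check that the skymap dataset exists in the "skymaps" collection
butler query-datasets ./GEN3_run skyMap --collections skymaps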
My coadd pipetask apparently errored with the message below. I think I understand the message…
pipetask run -b GEN3_run/ --input processCcdOutputs --input skymaps --register-dataset-types -p "${PIPE_TASKS_DIR}/pipelines/DRP.yaml"#coaddition --instrument lsst.obs.subaru.HyperSuprimeCam --output-run coadd -c makeWarp:doApplySkyCorr=False -c makeWarp:doApplyExternalSkyWcs=False -c makeWarp:doApplyExternalPhotoCalib=False -c assembleCoadd:doMaskBrightObjects=False
Resulted in
RuntimeError: Error finding datasets of type visitSummary in collections [processCcdOutputs, skymaps]; it is impossible for any such datasets to be found in any of those collections, most likely because the dataset type is not registered. This error may become a successful query that returns no results in the future, because queries with no results are not usually considered an error.
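One way to narrow this down (a sketch, not a prescription) is to ask the repository whether the dataset type exists at all, and if so, where:

# Is the visitSummary dataset type registered in this repository?
butler query-dataset-types GEN3_run visitSummary

# If it is registered, do any visitSummary datasets exist in the input collection?
butler query-datasets GEN3_run visitSummary --collections processCcdOutputs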
If the options are not clear, please don’t expend too much effort. I will transition to a restart of Gen3 with the latest tutorial steps announced by Tim…
@parejkoj has given some guidance for this on a different post.
Which version of the software are you using? It looks like the visitSummary dataset type is expected to be created as part of single-frame processing, but that’s really out of my area of expertise.
My latest run-through used v34:
eups distrib install -t w_2021_34 lsst_distrib
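For reference, a quick way to confirm which weekly is installed and currently set up, assuming EUPS is already initialized in your shell:

# List installed versions of lsst_distrib with their tags; the active one is marked "setup"
eups list lsst_distrib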
I’m repeating gen3 with the latest release per Tim’s recent announcement:
https://pipelines.lsst.io/v/weekly/getting-started/index.html
These tutorial steps reference v22… is v22 what we should use? I would have thought Gen3 would have a higher version number… so I guess v22 now includes one of the later/greater weekly updates (i.e., w34 or w35)?
Please confirm.
thanks
…errr, maybe v22 just refers to the version of the lsst_distrib EUPS package, not the core Gen3 content.
thanks
Apologies! I am the one who did the updates to the tutorials, and there is a reference to v22 that I missed. An updated version will fix this shortly. The tutorials were developed and tested using the w_2021_33 version of the science pipelines. I’m sorry for the confusion. I’ll post here when the corrected pages are up.
And the tutorials should be compatible with w34 or w35 if that’s what you have lying around.
I have updated the tutorials to make it explicit which version of the science pipelines was used to produce them. Sorry for the confusion. The docs will be rebuilt as part of our nightly software build and should be available tomorrow.
Thanks Tim and Simon.
Little by little, I’m grasping more of this pipeline process. I spent a good 1.5 hours on Zoom with Joshua Kitenge at RAL over in the UK and realized it’s amazing how much you can move forward with just a technical conversation. It was great!!!
Simon, I’ll watch for your official declaration that the pages are re-rendered and start my Gen3 adventure anew.
Getting cooler now in Texas…Fred
You don’t need to wait. The only change is to fix the version number so it doesn’t say v22 but says the weekly as described above. The weekly you have is perfectly fine for driving the tutorial.
Okay, in Step 3 (Run newinstall.sh), it refers to a URL as shown below. Is it correct that it refers to /22.0.0/, or is this not relevant to our discussion here…
curl -OL https://raw.githubusercontent.com/lsst/lsst/22.0.0/scripts/newinstall.sh
bash newinstall.sh -ct
thanks Tim
Another detail is that I’m just following the new tutorial… I’m not working separately to grab w33, w34, or w35, as I have been in past weeks while trying to make the prototype Gen3 execute.
thanks
You should probably use the master version of the newinstall.sh script, and then install the w_2021_33 tag of the science pipelines.
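Something like the following sequence, sketched under the assumption that the newinstall.sh options from earlier in the thread still apply and that you are using a bash shell:

# Fetch newinstall.sh from the master branch instead of the 22.0.0 tag
curl -OL https://raw.githubusercontent.com/lsst/lsst/master/scripts/newinstall.sh
bash newinstall.sh -ct

# Load the new environment, then install and set up the weekly the tutorial was tested with
source loadLSST.bash
eups distrib install -t w_2021_33 lsst_distrib
setup -t w_2021_33 lsst_distrib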
You can use the w34 or w35 you already have. You are not required to download a new stack version to run the tutorial if the tutorial is compatible with the version of the pipelines software you are already using. The documentation does indicate this.
Yes, the tutorial does say that you can use the software you already have.
Safest to use the w.2021.33 tag on the git repo so that it’s guaranteed to have the matching conda environment.
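In that case the only change to the earlier curl command is the tag in the URL, for example:

# Pin newinstall.sh to the w.2021.33 tag so its conda environment matches the w_2021_33 weekly
curl -OL https://raw.githubusercontent.com/lsst/lsst/w.2021.33/scripts/newinstall.sh
bash newinstall.sh -ct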
Very good point
In the Part 1 step to set up the Butler data repository, the subsequent step entitled "Creating a Butler object for HSC data" references the os.environ variable as DC2_SUBSET_DIR, which I believe should be RC2…