If you are running scons locally you have to make sure you use setup -k -r . or something similar, so that EUPS knows you want to run the tests on the local checkout.
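For example, from the root of the package checkout (a typical EUPS development workflow; adjust to your own setup):

setup -k -r .
scons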
Your problems, though, are mainly related to w33 being very old; the pipeline is not compatible with it. Updating daf_butler won’t help because the pipelines are defined elsewhere.
butler query-datasets SMALL_HSC raw --collections HSC/RC2/defaults
Here is part of the result:
(lsst-scipipe-0.7.0) [yu@localhost rc2_subset]$ butler query-datasets SMALL_HSC raw --collections HSC/RC2/defaults
type run id band instrument detector physical_filter exposure
---- ----------- ------------------------------------ ---- ---------- -------- --------------- --------
raw HSC/raw/all d0998707-a8f4-5825-a13a-ab04be5f3439 y HSC 41 HSC-Y 322
raw HSC/raw/all 5fd3efcf-81ad-57a5-b205-463c1521ee60 y HSC 42 HSC-Y 322
raw HSC/raw/all bb7f3ec8-481f-5933-a1c3-22fbf2a06544 y HSC 47 HSC-Y 322
raw HSC/raw/all 8f71b3e0-2ba6-50cf-b8c5-bfe821e1cdc1 y HSC 49 HSC-Y 322
It was pointed out to me that the change in the query-data-ids results was a known issue, and I just didn’t recognize that it was what you had run into (despite being the person responsible for the change).
In short, the old behavior with w_2021_33 was incorrect, because you didn’t pass the dimensions of the data ID to the command (note that butler query-data-ids accepts positional arguments); it only seemed to work because it incorrectly and automatically included the dimensions of the raw dataset. (query-datasets is supposed to do this, but query-data-ids is more subtle.) The behavior with w_2021_48 was perhaps a bit closer to correct, but the error message is very misleading, and that’s something we’ve opened a ticket to fix.
The right way to use query-data-ids to get similar results (the tutorial needs to be updated, too) is:
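Something along these lines (the dimensions are passed positionally; exposure and detector match the raw results shown above):

butler query-data-ids SMALL_HSC exposure,detector --datasets raw --collections HSC/RC2/defaults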
I have now run the whole LSST and HSC pipeline, because my teacher has given me the task of developing a pipeline for a new observatory, modeled on HSC’s pipeline, which is based on LSST’s. (Not the whole pipeline: only the very beginning, including removing instrumental signatures with dark, bias, and flat-field frames, and then doing photometric and astrometric calibration.)
To achieve this, where should I focus? What should I change? (For example, we use different CCDs, we have a different FOV, and we may need to make some changes to the code to calibrate the raw images.) What should I do to realize it? And what documents should I read?
Thank you very much!
I have a question, and it may be kind of stupid. I want to know how LSST’s pipeline does astrometry; I want to read and modify that code, and finally use it to do astrometry on another observatory’s images. So I started from the command line:
(lsst-scipipe-0.7.0) [yu@localhost astrom]$ which singleFrameDriver.py
~/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/pipe_drivers/22.0.0+26c05adf09/bin/singleFrameDriver.py
(lsst-scipipe-0.7.0) [yu@localhost astrom]$ cat ~/lsst_stack/stack/miniconda3-py38_4.9.2-0.7.0/Linux64/pipe_drivers/22.0.0+26c05adf09/bin/singleFrameDriver.py
#!/usr/bin/env python
from lsst.pipe.drivers.singleFrameDriver import SingleFrameDriverTask
SingleFrameDriverTask.parseAndSubmit()
And I opened lsst.pipe.drivers.singleFrameDriver and found that it inherits from BatchParallelTask, which inherits from BatchCmdLineTask, then CmdLineTask, and finally Task. But I could find no trace of where the astrometry packages are actually used.
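One quick way to see that chain of classes for yourself is to print the method resolution order (a minimal sketch; it assumes the pipe_drivers package shown above is set up in your environment):

# Print every class SingleFrameDriverTask inherits from, module by module.
from lsst.pipe.drivers.singleFrameDriver import SingleFrameDriverTask

for cls in SingleFrameDriverTask.__mro__:
    print(f"{cls.__module__}.{cls.__name__}")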
Some of those packages must be applied to the images in some sequence while doing the astrometry.
So, how can I find out how LSST does astrometry? Which packages does it call, and in what order?
Thank you!
By reading the MetadataTranslator documentation and the links it gives, I learned that I need to write something like:
from astro_metadata_translator import ExampleTranslator
So how can I accomplish this?
(I guess HSC must have gone through this step and finally made a new package, perhaps named obs_HSC; it may be helpful to read the translator HSC created, i.e. HSCTranslator, as an example.)
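For concreteness, a new translator is roughly this shape (a sketch only: the WFST names and header keywords here are placeholders, and the real HscTranslator in astro_metadata_translator is the authoritative example to follow):

import astropy.units as u
from astro_metadata_translator import FitsTranslator

class WfstTranslator(FitsTranslator):
    """Sketch of a metadata translator for a hypothetical WFST camera."""

    name = "WFST"                   # name this translator registers under
    supported_instrument = "WFST"   # instrument it claims to understand

    # Properties that are the same for every header from this instrument.
    _const_map = {"instrument": "WFST"}

    # Properties copied straight from FITS keywords (with units as needed).
    _trivial_map = {
        "exposure_time": ("EXPTIME", dict(unit=u.s)),
        "object": "OBJECT",
    }

    @classmethod
    def can_translate(cls, header, filename=None):
        # Asked of every registered translator to find one for this header.
        return header.get("INSTRUME") == "WFST"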
Thank you!
Thank you!
You and the other people here have really helped me a lot! And your detailed documents about the pipeline are very useful!
I am a postgraduate student new to astronomy, and I am working on WFST in China.
That ValueError problem likely relates to using an older version of the pipeline. Try using a recent weekly pipeline version (as recent as possible), and make sure that you check out (and set up) the same tagged versions of the Science Pipelines and rc2_subset (i.e., if you’re using weekly w_2022_05, then check out and set up versions with tag w.2022.05, following the instructions in Jim’s reply above).
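Roughly, for rc2_subset that looks like the following (assuming a clone from GitHub; match the tag to whatever weekly you are using):

git clone https://github.com/lsst/rc2_subset
cd rc2_subset
git checkout w.2022.05
setup -k -r .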
Hello Jeff.
I’m migrating to the latest V23 of the pipeline. Looking forward to it, as I have learned a lot from the version I installed Q3 last year.
I have a tech question for you, as it relates to doing some self-debugging.
I’m fond of using DEBUG level logging, with the pipetask parameters "--log-level DEBUG --long-log --log-tty".
In the log files I enjoy seeing the actual “funcName” and “lineno” and whatever the log statement provides.
My reading on python logging suggests that we should be able to ALSO have the logging module include the actual ARGUMENTS that are passed to the subject “funcName”.
Are you familiar with how I can do this? Do the LSST developers use this feature to test/verify that expected ARGUMENTS are being passed to a function? I realize, too, that arguments can take several forms.
I enjoy doing a lot of debugging on my own and hope you can help me affirm how to do this.
Enjoy your posts…usually very clear with good suggestions.
Fred, Dallas
We do not set stack_info=True in our logging – it’s something that each log user has to decide would be important to include in the log and for production code I doubt we would ever want to include it.
Calculating just the caller arguments for the function (taking stacklevel into account) is possible but I’m not sure how we would decide when such a thing should be calculated – the inspect module has a pretty big overhead and we’d want to avoid it in general. If we find that there is a use case we could possibly write a special log helper that calculates it and logs a message at TRACE level. We would not want every debug log message inside a function to have to recalculate the stack.
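As a rough illustration of that kind of helper (standard library only, not an LSST API; the inspect overhead mentioned above is exactly the cost to weigh):

import inspect
import logging

def log_caller_args(logger, level=logging.DEBUG):
    """Log the named arguments of whichever function calls this helper."""
    frame = inspect.currentframe().f_back           # the caller's frame
    try:
        arginfo = inspect.getargvalues(frame)
        args = {name: arginfo.locals[name] for name in arginfo.args}
        logger.log(level, "%s called with %s", frame.f_code.co_name, args)
    finally:
        del frame                                   # break reference cycles

def assemble(exposure, bias=None, dark=None):
    log_caller_args(logging.getLogger(__name__))
    # ... the real work would continue here ...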
I see. How do I check out that version of rc2_subset? Where can I find it? Or how do I create a new version of rc2_subset? I am currently using the w_2022_08 version of the pipeline.
Timj,
I never suggested anything about production code. Geeze.
The relevant extracts from my post are:
My reading on python logging suggests that we should be able to ALSO have the logging module include the actual ARGUMENTS
and
I enjoy doing a lot of debugging on my own …
I want this for my own analysis and debugging…not for production lsst.
I never suggested we would calculate “just” the caller arguments… How do you calculate an argument? I only wish to include them.
Never mind, Tim… I can see other options in the public domain… I just thought that LSST developers might have a common/quick/easy trick to include the args passed in function calls.
I’ll figure it out.
Fred
It’s entirely possible I misunderstood what you were asking for.
Can you be more explicit about where this API is?
My reading of the logging API is that:
logging.debug has a parameter stack_info where the developer can request a full stack trace in the log message. That is what I meant when I said I couldn’t see a use case whereby we would set that parameter in production code.
My reading of the logging.LogRecord API is that the function arguments are not available anywhere. This means they cannot be included in a log format string. If you can tell me what I’ve missed, that would be wonderful.
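For reference, this is the sort of thing a format string can and cannot pull from a LogRecord (standard library only, nothing LSST-specific):

import logging

# funcName and lineno are LogRecord attributes, so a format string can
# use them; the function's call arguments are not attributes at all.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(funcName)s:%(lineno)d %(message)s",
)

def measure(threshold):
    # stack_info=True appends a stack trace to this one message only.
    logging.getLogger(__name__).debug("starting", stack_info=True)

measure(5.0)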
I’ll be open to adding a --log-format option to the command line to give people more control over the log output but that’s not been requested before because the JSON log output option includes everything that is available in the log record.
Sorry for my inaccurate English. I meant “determine” the caller arguments by explicitly using the inspect API; doing that would require us to write a special log handler. Again, maybe I’ve missed something and LogRecord does have this information already and I’m failing to read the documentation.