I was trying to run image differencing on two downloaded HSC images of the same field, but I have a problem with the ingestion. I put the files into a test directory, with the images in an img folder and an ups directory containing the file test.table, trying to replicate the structure of the tutorial's testdata_ci_hsc.
Then, I have launched these commands:
setup -j -r test
echo $TEST_DIR
mkdir DATA
echo "lsst.obs.hsc.HscMapper" > DATA/_mapper
ingestImages.py DATA $test/img/*.fits --mode=copy
But I got this error message:
root INFO: Loading config overrride file '/home/user/lsst_stack/stack/miniconda3-4.7.10-4d7b902/Linux64/obs_subaru/19.0.0+2/config/ingest.py'
HscMapper WARN: Unable to find calib root directory
CameraMapper INFO: Loading exposure registry from /home/user/Scrivania/DATA/registry.sqlite3
ingest WARN: /img/*.fits doesn't match any file
ingest.register INFO: Table "raw" exists. Skipping creation
Any idea what the problem might be?
What I would like to do is use the lsst.ip.diffim module, but the tutorial images don't seem suitable for this purpose. If you could point me to a set of tested images I could run the task on to see its output, or suggest any reading about it, I would appreciate it a lot.
Sorry, that was a careless mistake on my part. With the correct command, the error message I get is the following:
ingestImages.py DATA $TEST_DIR/img/*.fits --mode=copy
root INFO: Loading config overrride file '/home/user/lsst_stack/stack/miniconda3-4.7.10-4d7b902/Linux64/obs_subaru/19.0.0+2/config/ingest.py'
HscMapper WARN: Unable to find calib root directory
CameraMapper INFO: Loading Posix exposure registry from /home/user/Scrivania/DATA
ingest.parse WARN: Unable to find value for proposal (derived from PROP-ID)
ingest.parse WARN: Unable to find value for dataType (derived from DATA-TYP)
ingest.parse WARN: Unable to find value for ccd (derived from DET-ID)
ingest.parse WARN: Unable to find value for pa (derived from INST-PA)
ingest.parse WARN: Unable to find value for config (derived from T_CFGFIL)
ingest.parse WARN: Unable to find value for frameId (derived from FRAMEID)
ingest.parse WARN: Unable to find value for expId (derived from EXP-ID)
ingest.parse WARN: Unable to find value for dateObs (derived from DATE-OBS)
ingest.parse WARN: Unable to find value for taiObs (derived from DATE-OBS)
ingest.parse WARN: translate_field failed to translate field: 'OBJECT not found'
ingest.parse WARN: translate_visit failed to translate visit: 'EXP-ID not found'
ingest.parse WARN: Unable to determine suitable 'pointing' value; using 0
ingest WARN: Failed to ingest file /home/user/Scrivania/images/img/cutout-HSC-R-9813-pdr1_deep-200514-123452.fits: 'visit'
ingest.parse WARN: Unable to find value for proposal (derived from PROP-ID)
ingest.parse WARN: Unable to find value for dataType (derived from DATA-TYP)
ingest.parse WARN: Unable to find value for expTime (derived from EXPTIME)
ingest.parse WARN: Unable to find value for ccd (derived from DET-ID)
ingest.parse WARN: Unable to find value for pa (derived from INST-PA)
ingest.parse WARN: Unable to find value for config (derived from T_CFGFIL)
ingest.parse WARN: Unable to find value for frameId (derived from FRAMEID)
ingest.parse WARN: Unable to find value for expId (derived from EXP-ID)
ingest.parse WARN: translate_field failed to translate field: 'OBJECT not found'
ingest.parse WARN: translate_visit failed to translate visit: 'EXP-ID not found'
ingest WARN: Failed to ingest file /home/user/Scrivania/images/img/cutout-HSC-R-9813-pdr2_wide-200514-123417.fits: 'visit'
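As a side note, the warnings above list exactly which HSC header keywords the ingest parser is looking for. A quick sanity check before ingesting is to inspect a file's primary header with astropy; this is just a diagnostic sketch (the keyword list is copied from the log above, not from the obs_subaru source):

```python
from astropy.io import fits

# Keywords the HSC ingest parser reported as missing in the log above
EXPECTED = ["PROP-ID", "DATA-TYP", "EXPTIME", "DET-ID", "INST-PA",
            "T_CFGFIL", "FRAMEID", "EXP-ID", "DATE-OBS", "OBJECT"]

def missing_keywords(path):
    """Return the expected raw-HSC header keywords absent from a FITS file."""
    with fits.open(path) as hdul:
        header = hdul[0].header
        return [kw for kw in EXPECTED if kw not in header]

# Example: missing_keywords("cutout-HSC-R-9813-pdr1_deep-200514-123452.fits")
# For an HSC PDR cutout, most of these keywords will be reported missing,
# which is why the ingest fails on 'visit' (derived from EXP-ID).
```

If the list comes back non-empty, ingestImages.py will emit the same warnings you saw, since the cutout service strips the raw-exposure metadata.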
Do I need to modify something in the obs package for a generic ingestion?
Thank you again!
The ingestImages.py script may not be the appropriate tool for what you want to do. It is intended to ingest raw, unprocessed images into a Butler repository. I'm guessing from the filename and the lack of appropriate headers that this is not such an image.
With the possible exception of DECam, I believe the pipelines are not typically set up (at this time) to work with externally-processed images. It should be possible to do so, but I suspect it may involve considerable work: ingesting these into a Butler repository, adding any additional data products expected by the pipeline, and/or adjusting default configurations.