These are some step-by-step instructions on how to use the stack to ingest raft test-stand data and process it with eotest.
Install a weekly build with a version of w_2017_50 or later
- installation instructions can be found here:
- alternatively, use a shared stack on, e.g., lsst-dev (all the weekly builds are already installed there — it's really easy and potentially saves a lot of installation pain!)
Set up your stack. This example should work on lsst-dev:
setup lsst_distrib -t w_2017_50
Set up eotest (it's not part of lsst_distrib). Note that this is a fork of eotest, to be used until the changes made here are merged upstream (and perhaps even then, e.g. if you want to make PDFs from the outputs).
git clone git@github.com:lsst-dm/eotest.git
cd eotest
git checkout tickets/DM-11348
setup -j -r .
NB: the final setup command must be run from within the eotest directory, and in every future shell you will need to do something like
setup -j eotest -r ~/path/to/eotest each time you set up the stack.
Make some TS8 data available. Let's pick one raft; I'm going to use RTM-005 (aka RTM2). Copying it over goes something along the lines of:
rsync -r <username>@rhel6-64.slac.stanford.edu:/nfs/farm/g/lsst/u1/mirror/BNL-prod/prod/LCA-11021_RTM/LCA-11021_RTM-005/* /path/to/ts8_data --progress
If you're working on lsst-dev you can find this data already copied over, in
Now we enter stack-land. Make a repository for this data, put a mapper in it, and ingest the data. We only want to ingest the .fits files that are part of the actual acquisitions, so we use the following pattern to select them (hopefully this is robust). Note that we are using obs_comCam, as this is a single raft. If you want to process data from more than one raft, make a separate repo for each raft. (You can have multiple runs in each repo, but you can't have multiple rafts in a repo.)
echo "lsst.obs.comCam.ComCamMapper" > /path/to/ts8_repo/_mapper
ingestImages.py /path/to/ts8_repo /path/to/ts8_data/*/*_acq/v0/*/S*/*.fits
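As a quick sanity check on that wildcard pattern, you can test candidate paths against it with Python's fnmatchcase (the example paths below are made up to resemble the TS8 layout, and note that fnmatch-style `*` can cross directory separators, unlike a shell glob, so this is only a rough check):

```python
from fnmatch import fnmatchcase

# the selection pattern from the ingest command, relative to the data dir
pattern = '*/*_acq/v0/*/S*/*.fits'

# hypothetical paths shaped like the TS8 directory layout
acq_file = '4389/fe55_raft_acq/v0/12345/S00/200_preimage.fits'
stray_file = '4389/fe55_raft_acq/v0/12345/summary.fits'

print(fnmatchcase(acq_file, pattern))    # True: file sits in an S* sensor dir
print(fnmatchcase(stray_file, pattern))  # False: no S*/ component before it
```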
Assuming all of the above worked, that repo should now contain the _mapper file, a directory called "raw" containing symlinks to all the ingested files, and a sqlite3 registry file called "registry.sqlite3". We can now open a Python session or Jupyter notebook and start doing things. Let's start by making a butler for that repo.
# import the relevant stuff
import lsst.daf.persistence as dafPersist
from lsst.cp.pipe import CpTask
# make ourselves a butler
repo_path = '/path/to/ts8_repo/'
butler = dafPersist.Butler(repo_path)
Now, if you used the path on the SLAC servers in this example, you will have copied and ingested data from many different runs, which obviously need to be processed separately. So we ask the butler which runs are available like so:
butler.queryMetadata('raw', ['run'])
which should return something like
['3849', '4080', '4390', '4418', '5597', '5704', '5819', '4068', '4389', '4417', '5105', '5703', '5811']
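Under the hood the butler gets this list from registry.sqlite3, which you can also poke at directly with the sqlite3 module. The sketch below runs against a toy in-memory database, because the real table layout (a `raw` table with a `run` column) is an assumption about the Gen2 registry schema:

```python
import sqlite3

# toy in-memory stand-in for registry.sqlite3; we ASSUME the real
# registry has a 'raw' table with a 'run' column (Gen2 schema guess)
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE raw (visit INTEGER, run TEXT)")
conn.executemany("INSERT INTO raw VALUES (?, ?)",
                 [(1, '4389'), (2, '4389'), (3, '4417')])

# distinct run numbers, analogous to what the butler query returns
runs = sorted(r[0] for r in conn.execute("SELECT DISTINCT run FROM raw"))
print(runs)  # ['4389', '4417']
```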
So, pick a run number to use. If you're very brave you can try to deduce which runs are good using things like eTraveler and the Camera Data Portal. I'm just going to pick 4389, because word on the grapevine is that it's a good one.
# instantiate a config for our task, and set the mandatory output config parameter
cpConfig = CpTask.ConfigClass()
# examples of how you turn the eotest sub-tasks on and off. All on by default
# cpConfig.doFe55 = False
# cpConfig.doReadNoise = False
# cpConfig.doBrightPixels = False
# cpConfig.doDarkPixels = False
# cpConfig.doTraps = False
# cpConfig.doCTE = False
# cpConfig.doPTC = False
# cpConfig.doFlatPair = False
# instantiate our task using the config
cpTask = CpTask(config=cpConfig)
# run the task, specifying our chosen run if there is more than one in the repo
# once the task has run, make a pdf of the results
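The actual calls are elided above; based on the comments, they presumably look something like the following pseudocode (the method names and arguments here are guesses — check the CpTask source for the real interface):

```
# HYPOTHETICAL -- run eotest over our chosen run, then build the report
cpTask.run(repo_path, run='4389')   # guessed signature: repo path plus run number
cpTask.makeEotestReport(...)        # guessed method name for producing the pdfs
```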
If all went well, eotest should have run, produced all its results, and written them in the usual eotest output format. If things went really well and you have a TeX distribution installed with pdflatex on your path, then PDFs for each sensor in the raft should have been produced in