Issue in running `pipetask` command

For the tutorial you need to use the 24.0.0 tag of rc2_subset if you are using v24 of the Science Pipelines.

`git checkout 24.0.0` should do the right thing.

(`git branch` shows you local branches by default. There are more visible with `git branch -r`. But you want the tag, as Tim says, not a branch.)
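For anyone following along, here is a minimal sketch of what a tag checkout looks like, using a throwaway repository (the tag name matches the one above; the paths, identity, and commit messages are purely illustrative):

```shell
# In your rc2_subset clone, the real commands would simply be:
#   git fetch --tags
#   git checkout 24.0.0
# The throwaway repository below just demonstrates that checking out a
# tag detaches HEAD at the tagged commit (a tag is not a branch).
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
git tag 24.0.0        # the release tag the tutorial needs
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "later work"
git checkout -q 24.0.0
git describe --tags   # reports the tag name: 24.0.0
```

After this, `git status` reports a detached HEAD at 24.0.0, which is expected when checking out a tag.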

Thank you both. Using 24.0.0 tag of rc2_subset resolved that problem.

Now, when I follow the instructions here: Getting started tutorial part 4: Using all the data to calibrate — LSST Science Pipelines, everything works for FGCM and jointcal, but I get the following error when trying to apply the calibrations:

lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/ctrl_mpexec/g4d64b21cde+4a4456e283/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 129, in run
    pipeline = script.build(**kwargs)
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/ctrl_mpexec/g4d64b21cde+4a4456e283/python/lsst/ctrl/mpexec/cli/script/build.py", line 87, in build
    pipeline = f.makePipeline(args)
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/ctrl_mpexec/g4d64b21cde+4a4456e283/python/lsst/ctrl/mpexec/cmdLineFwk.py", line 505, in makePipeline
    pipeline = Pipeline.from_uri(args.pipeline)
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/pipe_base/g641d719c08+0e065976f8/python/lsst/pipe/base/pipeline.py", line 334, in from_uri
    pipeline = pipeline.subsetFromLabels(label_specifier)
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/pipe_base/g641d719c08+0e065976f8/python/lsst/pipe/base/pipeline.py", line 402, in subsetFromLabels
    return Pipeline.fromIR(self._pipelineIR.subset_from_labels(labelSet))
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/pipe_base/g641d719c08+0e065976f8/python/lsst/pipe/base/pipelineIR.py", line 817, in subset_from_labels
    raise ValueError(
ValueError: Not all supplied labels (specified or named subsets) are in the pipeline definition, extra labels: {'source_calibration'}

Could you please help me to resolve this one too?

@jeffcarlin is looking into the tutorial. Can you confirm that you are using the 24.0.0 release of the pipelines along with the 24.0.0 tag of rc2_subset?

Hi Amir -

It looks like the tutorial got updated with instructions for versions that come after v24 – apologies for the confusion.

I think that if you just skip the source_calibration step in the tutorial, it should work fine. (Don’t worry: the calibrations will still be applied; it “just happens” under the hood without an explicit call.)

You could also follow the tutorial for pipelines version 23, which should work for v24 as well. In the meantime, I’ll get to work on identifying what went wrong in the v24 documentation.

Sorry for my late response. I moved to using the weekly version and stepped away from this for a while. However, I decided to install v24_0_0 of the stack to find out whether following the tutorials is straightforward and whether the instructions can be completed with minimal problems.

Surprisingly, installing v24_0_0 fails when I follow the instructions in the tutorials. Here is the error I get when trying that:

  [  1/85 ]  alert_packet g5d8fbe4fe2 (already installed)               done. 
   [  2/85 ]  eigen g04a8d4365e (already installed)                      done. 
   [  3/85 ]  fgcm g941b12d670 (already installed)                       done. 
   [  4/85 ]  obs_decam_data gc3e517dea3 (already installed)             done. 
   [  5/85 ]  obs_lsst_data gbdb8a927be (already installed)              done. 
   [  6/85 ]  obs_subaru_data g2f68bc2906 (already installed)            done. 
   [  7/85 ]  proxmin g33b4157f25 (already installed)                    done. 
   [  8/85 ]  sconsUtils g7374e9d467 (already installed)                 done. 
   [  9/85 ]  sdm_schemas g8d82b3c5dd (already installed)                done. 
   [ 10/85 ]  spectractor g03aae25595 (already installed)                done. 
   [ 11/85 ]  astshim g38293774b4+ac198e9f13 (already installed)         done. 
   [ 12/85 ]  base gf9f5ea5b4d+ac198e9f13 (already installed)            done. 
   [ 13/85 ]  dustmaps_cachedata g41a3ec361e+ac198e9f13 (already installed)  done. 
   [ 14/85 ]  jointcal_cholmod ga68e3ac08d+ac198e9f13 (already installed)  done. 
   [ 15/85 ]  kht g14ffe67dc2+c057cea34b (already installed)             done. 
   [ 16/85 ]  psfex g57437a15a7+ac198e9f13 (already installed)           done. 
   [ 17/85 ]  scarlet gd32b658ba2+4083830bf8 (already installed)         done. 
   [ 18/85 ]  sphgeom ga1cf026fa3+ac198e9f13 (already installed)         done. 
   [ 19/85 ]  verify_metrics g40f75c44ca+ac198e9f13 (already installed)  done. 
   [ 20/85 ]  pex_exceptions g48ccf36440+89c08d0516 (already installed)  done. 
   [ 21/85 ]  scarlet_extensions g9d18589735+cc492336a9 (already installed)  done. 
   [ 22/85 ]  cpputils ga32aa97882+7403ac30ac (already installed)        done. 
   [ 23/85 ]  utils ga4dd45a4c7+82f6db4df0 (already installed)           done. 
   [ 24/85 ]  daf_base g5c4744a4d9+9e5e24d318 (already installed)        done. 
   [ 25/85 ]  geom g3b44f30a73+6ed7a0bf37 (already installed)            done. 
   [ 26/85 ]  log g9d27549199+9e5e24d318 (already installed)             done. 
   [ 27/85 ]  resources g73ff3781d8+9e5e24d318 (already installed)       done. 
   [ 28/85 ]  astro_metadata_translator g83a23aef33+846e1f9efd (already installed)  done. 
   [ 29/85 ]  daf_butler gdde7253329+1ed234098f (already installed)      done. 
   [ 30/85 ]  pex_config gc75b51116a+846e1f9efd (already installed)      done. 
   [ 31/85 ]  afw gb77f0a74f7+8a093cac5b (already installed)             done. 
   [ 32/85 ]  daf_persistence g17e5ecfddb+2f99ec5bff (already installed)  done. 
   [ 33/85 ]  dax_apdb ga786bb30fb+e485989f06 (already installed)        done. 
   [ 34/85 ]  cbp g7177720bbd+f09366fa86 (already installed)             done. 
   [ 35/85 ]  display_ds9 gd01420fc67+f09366fa86 (already installed)     done. 
   [ 36/85 ]  display_firefly g1d67935e3f+f09366fa86 (already installed)  done. 
   [ 37/85 ]  display_matplotlib gbec6a3398f+f09366fa86 (already installed)  done. 
   [ 38/85 ]  pipe_base g641d719c08+0e065976f8 (already installed)       done. 
   [ 39/85 ]  shapelet gd877ba84e5+f09366fa86 (already installed)        done. 
   [ 40/85 ]  skymap gdb4cecd868+d155e19190 (already installed)          done. 
   [ 41/85 ]  coadd_utils gf3ee170dca+4a4456e283 (already installed)     done. 
   [ 42/85 ]  ctrl_mpexec g4d64b21cde+4a4456e283 (already installed)     done. 
   [ 43/85 ]  ctrl_pool g6c8d09e9e7+4a4456e283 (already installed)       done. 
   [ 44/85 ]  obs_base gbaa45dfa32+2718a75a08 (already installed)        done. 
   [ 45/85 ]  verify g77c5fecd56+422e7247c4 (already installed)          done. 
  [ 46/85 ]  ctrl_bps gca52d74647+a5413a7a82 ... 

***** error: from /Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/EupsBuildDir/DarwinX86/ctrl_bps-gca52d74647+a5413a7a82/build.log:
ERROR python/lsst/ctrl/bps/restart.py::flake-8::FLAKE8 - _pytest.nodes.Collec...
ERROR python/lsst/ctrl/bps/submit.py::flake-8::FLAKE8 - _pytest.nodes.Collect...
ERROR python/lsst/ctrl/bps/version.py::flake-8::FLAKE8 - _pytest.nodes.Collec...
ERROR python/lsst/ctrl/bps/wms_service.py::flake-8::FLAKE8 - _pytest.nodes.Co...
ERROR python/lsst/ctrl/bps/cli/opt/arguments.py::flake-8::FLAKE8 - _pytest.no...
ERROR python/lsst/ctrl/bps/cli/bps.py::flake-8::FLAKE8 - _pytest.nodes.Collec...
ERROR python/lsst/ctrl/bps/transform.py::flake-8::FLAKE8 - _pytest.nodes.Coll...
ERROR python/lsst/ctrl/bps/cli/cmd/commands.py::flake-8::FLAKE8 - _pytest.nod...
ERROR python/lsst/ctrl/bps/cli/opt/option_groups.py::flake-8::FLAKE8 - _pytes...
ERROR python/lsst/ctrl/bps/cli/opt/options.py::flake-8::FLAKE8 - _pytest.node...
================= 20 passed, 20 warnings, 32 errors in 17.29s ==================
Global pytest run: failed with 1
Failed test output:
Global pytest output is in /Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/EupsBuildDir/DarwinX86/ctrl_bps-gca52d74647+a5413a7a82/ctrl_bps-gca52d74647+a5413a7a82/tests/.tests/pytest-ctrl_bps.xml.failed
The following tests failed:
/Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/EupsBuildDir/DarwinX86/ctrl_bps-gca52d74647+a5413a7a82/ctrl_bps-gca52d74647+a5413a7a82/tests/.tests/pytest-ctrl_bps.xml.failed
1 tests failed
scons: *** [checkTestStatus] Error 1
scons: building terminated because of errors.
+ exit -4
eups distrib: Failed to build ctrl_bps-gca52d74647+a5413a7a82.eupspkg: Command:
	source "/Users/abazkiaei/lsst_stack/conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-4.0.5/eups/bin/setups.sh"; export EUPS_PATH="/Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5"; (/Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/EupsBuildDir/DarwinX86/ctrl_bps-gca52d74647+a5413a7a82/build.sh) >> /Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/EupsBuildDir/DarwinX86/ctrl_bps-gca52d74647+a5413a7a82/build.log 2>&1 4>/Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/EupsBuildDir/DarwinX86/ctrl_bps-gca52d74647+a5413a7a82/build.msg 
exited with code 252

Hi @bazkiaei,
I’m not a pipeline expert, so @timj or @jeffcarlin may have a clearer answer for you on this. But it seems like it’s crashing when trying to install “ctrl_bps”, and it has some issue with flake8. I believe this is the GitHub repository for that: GitHub - lsst/ctrl_bps: A PipelineTask execution framework for multi-node processing for the LSST Batch Production Service (BPS).

When searching that repo for any issues related to flake, I noticed this issue: DM-35971: Pytest requires flake<5 by mwittgen · Pull Request #117 · lsst/ctrl_bps · GitHub

It seems like the installation requires a version of flake8 lower than 5, so perhaps you could try installing a lower version of flake8 and then giving it another try?

Best,
Ryan

It does look like the error we get from newer flake8 installations (you can see the full failure by looking in the .xml.failed file). It might be worth looking in that failure file, because it’s confusing that it got all the way to ctrl_bps before this became a problem. A newer flake8 should have caused failures almost immediately.
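For readers who haven’t dug through one of these before, a quick way to surface the root cause is to grep the .xml.failed file for the first concrete error. The file below is a mock stand-in (the real one lives under EupsBuildDir, at the path printed at the end of the build output); only the final grep line is the actual recipe:

```shell
# Mock stand-in for a pytest .xml.failed file; the module and function
# names here are invented for illustration. Only the final grep line is
# the actual recipe.
failed=$(mktemp)
cat > "$failed" <<'EOF'
ImportError while importing test module 'tests/test_example.py'.
E   ImportError: cannot import name 'some_function' from 'some_module'
EOF
grep -m1 "ImportError:" "$failed"   # first concrete error line
```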

It’s entirely possible that the rubin-env created for the v24.0 release is no longer compatible but I can’t explain why it took so long to manifest.

Thank you both.
Installing flake8 with a version lower than 5 did not fix the problem; I got the same error. I am attaching the .xml.failed files (the file starting with 1 contains the error from before lowering the flake8 version, and the file starting with 2 contains the error I got after lowering it).

1_pytest-ctrl_bps.xml.failed (941.6 KB)

2_pytest-ctrl_bps.xml.failed (931.6 KB)

And may I ask: in case the rubin-env for the v24.0 release is no longer compatible, which rubin-env and stack release do you recommend for following the tutorials?

OK, I may be slightly out of my depth here, but I’ll share my thoughts on this and perhaps @timj or @jeffcarlin can jump in later.

It seems like at least the first error that pops up is related to the “read_gpickle” function from the “networkx” module. It looks like both “read_gpickle” and “write_gpickle” were deprecated networkx functions, and perhaps that’s why it can’t find them. Interestingly, it seems this should have been removed in “python/lsst/ctrl/bps/generic_workflow.py” based on this commit: Remove deprecated NetworkX functions · lsst/ctrl_bps@5e1bb28 · GitHub

Indeed, the generic_workflow.py script on the main version of ctrl_bps (ctrl_bps/generic_workflow.py at main · lsst/ctrl_bps · GitHub) seems to have those _gpickle functions removed from the networkx imports. Maybe trying to install v24 pulls in an older version that is indeed no longer compatible.

I don’t currently have any advice on how to proceed, but hopefully those more informed can provide some more guidance.

That explains it. We have fixed the networkx problem in the soon-to-be-released v24.1. For now, you should downgrade networkx in your conda environment.

These errors are the networkx problem reported by @ryanlau. Downgrading networkx will fix it. 24.1 should be out soon.
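If you want to verify the diagnosis before rebuilding, here is a small check that only uses the standard library on top of whatever networkx is visible, so it is safe to run in any environment. The downgrade command in the comment is a sketch; the exact pin spec may differ:

```shell
# Check whether the networkx visible to this Python still provides
# read_gpickle (v24.0's ctrl_bps imports it; networkx 3.0 removed it).
# Run with the LSST conda env activated. The fix would be something
# like:  mamba install "networkx<3"   (pin spec is illustrative)
result=$(python3 - <<'EOF'
try:
    import networkx
except ImportError:
    print("networkx not installed in this environment")
else:
    if hasattr(networkx, "read_gpickle"):
        print("OK: read_gpickle present; compatible with v24.0")
    else:
        print("PROBLEM: read_gpickle missing; downgrade networkx below 3")
EOF
)
echo "$result"
```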

Thank you!
Downgrading networkx resolved the problem, and the stack is now installed.

`eups list lsst_distrib` gives me: `g0b29ad24fb+8f7d15ecf4 current v24_0_0 v24_0_0_rc4 setup`.

I also cloned rc2_subset. To check out 24.0.0 as @ktl and @timj recommended, I did `git checkout origin/tickets/DM-33959`.

I am now running the first step of reducing the data from the tutorial, which is single_frame. No error has been raised so far.

nice, glad to hear that worked!

While installing the stack went well, I ran into some problems running the stack while following the tutorial.

The #single_frame, #fgcm, and #jointcal steps work well when following the tutorial. Following the advice from @jeffcarlin, I skipped the #source_calibration step to avoid the error that running it raises (you can see the error here: Issue in running `pipetask` command - #6 by bazkiaei).

The next step in the tutorial is running #makeWarp. When I run that I get the following error:

(lsst-scipipe-4.0.5) amir@SCI-10071 ~/lsst_stack $ pipetask run -b /home/amir/lsst_stack/rc2_subset/SMALL_HSC/butler.yaml -d "tract = 9813 AND skymap = 'hsc_rings_v1' AND patch in (38, 39, 40, 41)" -p /home/amir/lsst_stack/rc2_subset/pipelines/DRP.yaml#makeWarp -i u/amir/jointcal,u/amir/fgcm -o u/amir/warps --register-dataset-types
lsst.pipe.base.graphBuilder WARNING: Dataset type finalized_psf_ap_corr_catalog is not registered.
py.warnings WARNING: /home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/ctrl_mpexec/g4d64b21cde+4a4456e283/python/lsst/ctrl/mpexec/cli/script/qgraph.py:187: UserWarning: QuantumGraph is empty
  qgraph = f.makeGraph(pipelineObj, args)

lsst.daf.butler.cli.utils ERROR: Caught an exception, details are in traceback:
Traceback (most recent call last):
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/ctrl_mpexec/g4d64b21cde+4a4456e283/python/lsst/ctrl/mpexec/cli/cmd/commands.py", line 130, in run
    qgraph = script.qgraph(pipelineObj=pipeline, **kwargs)
  File "/home/amir/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/Linux64/ctrl_mpexec/g4d64b21cde+4a4456e283/python/lsst/ctrl/mpexec/cli/script/qgraph.py", line 190, in qgraph
    raise RuntimeError("QuantumGraph is empty.")
RuntimeError: QuantumGraph is empty.

Could you please help me with this one?

Hi, I have been following this thread because I was having the same issues installing `eups distrib install -t v24_0_0 lsst_distrib`. I downgraded networkx as follows: `mamba install networkx==2.5` (I hope I did that correctly). I tried the installation again; it passed step 46/85 (the one showing the networkx problem) but failed at step 47/85. I have attached the error message for the failed test. I hope you can help me.
1_pytest-ctrl_bps.xml.failed (941.6 KB)

It seems like something has gone wrong, because your error is still the same one. If you look in the .failed file you will see:

ImportError while importing test module '/Users/abazkiaei/lsst_stack/stack/miniconda3-py38_4.9.2-4.0.5/EupsBuildDir/DarwinX86/ctrl_bps-gca52d74647+a5413a7a82/ctrl_bps-gca52d74647+a5413a7a82/tests/test_bps_utils.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../../../../conda/miniconda3-py38_4.9.2/envs/lsst-scipipe-4.0.5/lib/python3.10/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests/test_bps_utils.py:26: in <module>
    from lsst.ctrl.bps.bps_utils import chdir
python/lsst/ctrl/bps/__init__.py:29: in <module>
    from .generic_workflow import *
python/lsst/ctrl/bps/generic_workflow.py:35: in <module>
    from networkx import DiGraph, read_gpickle, topological_sort, write_gpickle
E   ImportError: cannot import name 'read_gpickle' from 'networkx'

What does `mamba list | grep networkx` report for you?

Sorry, my mistake, I uploaded the incorrect .failed file; here it is:
pytest-meas_base.xml.failed (14.7 KB)

And just in case, this is the output of `mamba list | grep networkx`:
networkx 2.5 py_0 conda-forge

Aha, right. We discovered this one yesterday. The problem is that pandas came out with a new major release, so you will need to downgrade pandas to <2. We are trying to fix the problem, but we likely won’t be backporting the fix to 24.x and will instead be pinning the pandas version in rubin-env (which I think we did last night).
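A quick way to see which pandas an environment has (a stdlib-only sketch; the mamba spec in the comment is illustrative):

```shell
# Report the installed pandas major version; v24.x of the stack needs
# pandas < 2. The downgrade would be along the lines of
#   mamba install "pandas<2"
# run inside the LSST conda env (pin spec is illustrative).
pandas_major=$(python3 -c 'import pandas; print(pandas.__version__.split(".")[0])' 2>/dev/null || echo none)
echo "pandas major version: $pandas_major"
```

If this prints 2 or higher inside the stack’s environment, that matches the failure described above.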

Thank you. It worked!