Bug in pipe_drivers?

Hello,

I encountered a bug in pipe_drivers while processing some Monocam data here at BNL. At line 796 of constructCalibs.py there is a call to self.log.logdebug that triggered the following error when I ran constructFlat.py to process some flats:

Traceback (most recent call last):
  File "/gpfs01/astro/workarea/jbrooks/lsst/pipe_drivers/python/lsst/pipe/drivers/constructCalibs.py", line 282, in __call__
    result = task.run(**args)
  File "/gpfs01/astro/workarea/jbrooks/lsst/pipe_drivers/python/lsst/pipe/drivers/constructCalibs.py", line 356, in run
    scales = self.scale(ccdIdLists, data)
  File "/gpfs01/astro/workarea/jbrooks/lsst/pipe_drivers/python/lsst/pipe/drivers/constructCalibs.py", line 795, in scale
    self.log.logdebug("Iteration %d exposure scales: %s" % (iterate, numpy.exp(expScales)))
  File "/gpfs01/astro/workarea/stack/anaconda2/envs/lsst/opt/lsst/log/python/lsst/log/logLib.py", line 117, in <lambda>
    __getattr__ = lambda self, name: _swig_getattr(self, Log, name)
  File "/gpfs01/astro/workarea/stack/anaconda2/envs/lsst/opt/lsst/log/python/lsst/log/logLib.py", line 89, in _swig_getattr
    raise AttributeError("'%s' object has no attribute '%s'" % (class_type.__name__, name))
AttributeError: 'Log' object has no attribute 'logdebug'

In an attempt to fix the problem I changed self.log.logdebug to self.log.debug at lines 796 and 797 in constructCalibs.py, which seemed like a reasonable choice given the contents of log.py located in .../envs/lsst/opt/lsst/log/python/lsst/log. Doing this allowed me to process my data with no errors, so I was wondering whether this correction is okay?
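
For reference, this is the change I made on the line quoted in the traceback (the neighbouring line got the same treatment); I'm only showing my local edit here, not claiming it's the official fix:

    # before: raises AttributeError: 'Log' object has no attribute 'logdebug'
    self.log.logdebug("Iteration %d exposure scales: %s" % (iterate, numpy.exp(expScales)))
    # after: use the Log.debug method instead, passing the same formatted string
    self.log.debug("Iteration %d exposure scales: %s" % (iterate, numpy.exp(expScales)))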

I am using the 12.1 conda version of the DM stack and I am also using pipe_drivers 05687232830cd1e4c312a746f9d5f19cf7b22b59 from JIRA ticket DM-7742.

Thanks,
Jason

Yes. This was fixed in DM-7741 after 12.1 was released (and after DM-7742 was merged). You could consider switching to a more recent weekly.

I'm afraid I don't think Jason can use the weeklies. We tried to get them running, but were only able to get the binary installs to work on the BNL cluster. I can't remember exactly what the problem was, but someone knowledgeable was overseeing our struggles and thought this was a reasonable conclusion.

@jbrooks If this is for stuff on your local machine then Tim’s suggestion is a good one.

@merlin Can we get a bit more description about why this is impossible on the cluster?

I believe it's just a matter of making the appropriate development tools (compilers, cmake, etc.) easily available. I'm sure that's possible, but it's not a good use of time.

@KSK What @swinbank said - I think it was that users don't have the ability to install make/cmake/gcc etc., so only a binary would do. It's not a failing of the stack; it's a BNL thing. Also, I no longer have access to the BNL machines, so I can't actually test, I'm afraid.

I'm surprised that the correct tools aren't available via modules or another package-management tool on the cluster, but fair enough. Discussion about binary installs is going on right now, so hopefully new binaries will be available soon.

Ah okay, thanks for letting me know!

I have been using the cluster to do these tasks, so it seems it may be challenging to use the weeklies.

@KSK We did actually manage to switch compilers etc. using modules, but, IIRC (and that's a big if), it still didn't work for some reason, though I honestly can't remember why, sorry.