Hmm. Ok. That means we need to see more of the pytest output. I need to see the contents of the .failed file in the meas_algorithms tests/.tests/ directory. I'm not sure it's going to help much, though, since it's not clear why the HTM import is failing.
Okey-doke, here you are: pytest-meas_algorithms.xml.failed (45.9 KB)
Hope uploading a file is preferable to pasting 757 lines of text. I don’t know the culture here but fora elsewhere prefer that approach.
That’s great. Turns out that every test involving htm is failing so here is one snippet:
============================= test session starts ==============================
platform linux -- Python 3.7.2, pytest-3.6.2, py-1.7.0, pluggy-0.6.0
rootdir: /usr/local/src/lsst_stack/stack/miniconda3-4.5.12-1172c30/EupsBuildDir/Linux64/meas_algorithms-17.0.1/meas_algorithms-17.0.1, inifile: setup.cfg
plugins: flake8-1.0.4, xdist-1.20.1, forked-0.2, session2file-0.1.9, cov-2.5.1, remotedata-0.3.1, openfiles-0.3.2, doctestplus-0.3.0, arraydiff-0.3
gw0 I / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 I / gw8 I / gw9 I / gw10 I / gw11 I
gw0 [282] / gw1 [282] / gw2 [282] / gw3 [282] / gw4 [282] / gw5 [282] / gw6 [282] / gw7 [282] / gw8 [282] / gw9 [282] / gw10 [282] / gw11 [282]
scheduling tests via LoadScheduling
........................................................................................................E....E..EE....E...E..s....E.........E........ss..s..s...E......
.EE..E....E.s......................................................................................................
==================================== ERRORS ====================================
________________ ERROR at setup of HtmIndexTestCase.testIngest _________________
[gw6] linux -- Python 3.7.2 /usr/local/src/lsst_stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/bin/python3.7
cls = <class 'test_htmIndex.HtmIndexTestCase'>

    @classmethod
    def setUpClass(cls):
        cls.outPath = tempfile.mkdtemp()
        cls.testCatPath = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data",
                                       "testHtmIndex.fits")
        # arbitrary, but reasonable, amount of proper motion (angle/year)
        # and direction of proper motion
        cls.properMotionAmt = 3.0*lsst.geom.arcseconds
        cls.properMotionDir = 45*lsst.geom.degrees
        cls.properMotionErr = 1e-3*lsst.geom.arcseconds
        cls.epoch = astropy.time.Time(58206.861330339219, scale="tai", format="mjd")
        ret = cls.make_skyCatalog(cls.outPath)
        cls.skyCatalogFile, cls.skyCatalogFileDelim, cls.skyCatalog = ret
        cls.testRas = [210., 14.5, 93., 180., 286., 0.]
        cls.testDecs = [-90., -51., -30.1, 0., 27.3, 62., 90.]
        cls.searchRadius = 3. * lsst.geom.degrees
        cls.compCats = {} # dict of center coord: list of IDs of stars within cls.searchRadius of center
        cls.depth = 4 # gives a mean area of 20 deg^2 per pixel, roughly matching a 3 deg search radius
        config = IndexerRegistry['HTM'].ConfigClass()
        # Match on disk comparison file
        config.depth = cls.depth
>       cls.indexer = IndexerRegistry['HTM'](config)

tests/test_htmIndex.py:135:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
python/lsst/meas/algorithms/indexerRegistry.py:46: in makeHtmIndexer
    return HtmIndexer(depth=config.depth)
python/lsst/meas/algorithms/htmIndexer.py:36: in __init__
    self.htm = esutil.htm.HTM(depth)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <esutil.htm.htm.HTM; >, depth = 4

    def __init__(self, depth=10):
>       this = _htmc.new_HTMC(depth)
E       SystemError: <built-in function new_HTMC> returned a result with an error set

../../../../Linux64/esutil/0.6.2.5.lsst2+1/lib/python/esutil/htm/htmc.py:152: SystemError
---------------------------- Captured stderr setup -----------------------------
RuntimeError: FATAL: module compiled as little endian, but detected different endianness at runtime
______ ERROR at setup of HtmIndexTestCase.testLoadIndexedReferenceConfig _______
It seems that this line, esutil.htm.HTM(depth), is causing the problem. Can you see if you can run that line? (Maybe the import wasn't enough.) I can guess that depth is an integer, but I'm not sure of the API.
A simple copy and paste told me, correctly, that depth wasn’t defined. Using an explicit 4 in the call gave this output:
esutil.htm.HTM(4)
RuntimeError: FATAL: module compiled as little endian, but detected different endianness at runtime
ImportError: numpy.core.multiarray failed to import

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/src/lsst_stack/stack/miniconda3-4.5.12-1172c30/Linux64/esutil/0.6.2.5.lsst2+1/lib/python/esutil/htm/htmc.py", line 152, in __init__
    this = _htmc.new_HTMC(depth)
SystemError: <built-in function new_HTMC> returned a result with an error set
Aha. Numpy in your conda env becomes a smoking gun?
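If it helps, a quick way to confirm which numpy that interpreter is actually picking up is something like the following (the interpreter path is copied from the traceback earlier in this thread; adjust it if yours differs):
$ /usr/local/src/lsst_stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/bin/python3.7 -c "import numpy, sys; print(numpy.__version__, numpy.__file__, sys.byteorder)"
If the printed path points outside the conda env, that would support the mismatch theory.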
That may be significant. I seem to remember the build process discovering the system-installed numpy and using it. Unfortunately that message has long since scrolled off the screen. It may yet be in a log file and I'll go looking. Would you suggest (temporarily) uninstalling numpy and letting the miniconda build process install its own?
(Edit: removing numpy from the system is not going to happen. It’s an essential component of the local installation of astrometry.net and I can’t afford to lose that.)
I confess I dislike anaconda (miniconda in this instance) and would prefer to use native tools where possible. I thought going with the recommendations would make life easier. Ah, the irony.
Despite the above, I just flushed the system numpy and have restarted the install. I can always re-get the Ubuntu packages when it next becomes necessary to do some plate solving.
Fingers crossed …
Hmm. That made not the slightest difference.
After doing some research, I find that we’ve actually seen this exact problem before in DM-9457 and DM-9796.
As the latter ticket says, this has been fixed upstream, but there hasn’t been a release yet, so we never included the fix in our stack. Your alternatives would seem to be to downgrade to gcc 7 or to manually patch esutil, since presumably you don’t want to wait for us to issue a new release.
Patching holds no terrors. After a bit of digging into DM-9796 I found the patch. I’ll give it a go later today and report back.
Thanks,
You will probably want to look at https://developer.lsst.io/stack/packaging-third-party-eups-dependencies.html#testing-the-package to see how to rebuild after patching.
I was too hasty. The files which need patching aren’t even on my system. For instance, although htm.py is present, the to-be-patched file htmc.cc is nowhere to be found. (Either that or find(1) is seriously broken.)
Looks like the best approach may be to downgrade to gcc 7. Now to try that one.
Another miserable failure. This time the build complained about the lack of support for C++14. All this activity is strongly reminiscent of running Gentoo systems, an environment with which I have a good number of years of experience.
In principle I suppose that I may be able to dig up an otherwise unused workstation and install CentOS on it (to gain access to the tarballs) but that is another can of worms which I would rather not open.
Thank you for your patience and all your help but I’m going to have to give up at this point and wait for a new release. Any indication when that might happen?
gcc 7 definitely supports C++14, so something else is fishy. Let me get you specific instructions on how to patch esutil, though.
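In the meantime, two quick compiler sanity checks might be worth running. These are standard g++ invocations, nothing LSST-specific, and will tell you whether the compiler on your PATH really lacks C++14 support:
$ g++ --version
$ echo 'int main() { return 0; }' | g++ -std=c++14 -x c++ - -o /dev/null && echo "C++14 OK"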
(Another alternative could be to run docker, in which case you can use our containers directly.)
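If you go the container route, the invocation is roughly the sketch below. The image tag is my guess at the naming for the 17.0.1 release and the in-container stack path is assumed, so check the Docker Hub page for lsstsqre/centos before relying on it:
$ docker run -it lsstsqre/centos:7-stack-lsst_distrib-v17_0_1
$ # inside the container (assumed default stack location):
$ source /opt/lsst/software/stack/loadLSST.bash
$ setup lsst_distrib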
In Ubuntu 19.04:
$ apt-get update --fix-missing
$ # note addition of rsync to the list below
$ apt-get install bison ca-certificates cmake curl default-jre flex gettext git libbz2-dev libcurl4-openssl-dev libfontconfig1 libglib2.0-dev libncurses5-dev libreadline6-dev libx11-dev libxrender1 libxt-dev m4 make perl-modules rsync zlib1g-dev
$ cd
$ mkdir -p lsst_stack
$ cd lsst_stack
$ curl -OL https://raw.githubusercontent.com/lsst/lsst/17.0.1/scripts/newinstall.sh
$ bash newinstall.sh -ct
$ source loadLSST.bash
$ # eups distrib install -t v17_0_1 --onlydepend esutil (not actually necessary in this case)
$ git clone https://github.com/lsst/esutil
$ cd esutil
$ mkdir patches
$ cd patches
$ curl -O https://github.com/esheldon/esutil/commit/88a715.patch
$ cd ..
$ eupspkg -er fetch
$ eupspkg -er prep
$ eupspkg -er config
$ eupspkg -er build
$ # autodetected version is not quite right; fix it manually
$ eupspkg -er VERSION=0.6.2.5.lsst2+1 install
$ eupspkg -er VERSION=0.6.2.5.lsst2+1 decl
$ cd ..
$ eups distrib install -t v17_0_1 lsst_distrib
(I think this should work; my install is only halfway through, but that’s all I have time for right now.)
You are not expected to have known how to do this.
I’ll investigate Docker. Good idea. Thanks. Another learning experience. Learning is good — it stops one’s brain silting up.
However, a rummage around in my scrap heap turned up a laptop which I’d forgotten about. It must be a few years old because I inherited it from my late brother 18 months ago. It presently has Win10 installed, an OS of no interest whatsoever, and so I’ve no qualms about booting up a magazine cover disc with Fedora 29 on it. Even if it’s under-powered it may let me learn how to drive the LSST software while waiting for the next release.
“You are not expected to have known how to do this.”
Nice! I recognize the cultural reference:
/*
* If the new process paused because it was
* swapped out, set the stack level to the last call
* to savu(u_ssav). This means that the return
* which is executed immediately after the call to aretu
* actually returns from the last routine which did
* the savu.
*
* You are not expected to understand this.
*/
	if(rp->p_flag&SSWAP) {
		rp->p_flag =& ~SSWAP;
		aretu(u.u_ssav);
	}
JAUH. I’ve been hacking Unix since 1983.
Paul
Looking good!
Already 112/128 built, including the previously failing case.
All is good. Yay! I built the system in /usr/local/src running as root, as that’s how I’ve built most 3rd-party stuff over the years. Running the demo as an unprivileged user took a bit of tweaking with "chgrp -R pcl" and "chmod -R g+w" (I’m user pcl in group pcl) and the addition of symlinks to eups and loadLSST.bash from a working directory in ~pcl. There may be a more elegant way of doing this, or even a script to do it automagically, but I didn’t go looking for it. The step seemed to be needed because something wanted to write a temporary file in lsst_stack/stack/current/ups_db/. (An obvious question: is this wise? Should not /tmp or some other such temporary directory be used instead?)
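For anyone following along, that tweak amounts to something like this sketch (user/group pcl and /usr/local/src/lsst_stack are from this thread; the ~/lsst-work name is just an example):
$ sudo chgrp -R pcl /usr/local/src/lsst_stack
$ sudo chmod -R g+w /usr/local/src/lsst_stack
$ mkdir -p ~/lsst-work && cd ~/lsst-work
$ ln -s /usr/local/src/lsst_stack/loadLSST.bash .
$ ln -s /usr/local/src/lsst_stack/eups .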
After that the demo worked perfectly. I’m a happy bunny.
Now to learn how to drive this stuff.
Once more, many thanks for your assistance and your patience in dealing with a clueless newbie.
Paul
If that’s what I think it is, it isn’t a temporary file. But the system can be configured to work without it. https://dev.lsstcorp.org/trac/wiki/EupsTips#LockProblems
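If memory serves, the relevant knob is EUPS's lock-directory setting in ~/.eups/startup.py. Treat this as an untested sketch and the linked page as authoritative:
$ mkdir -p ~/.eups
$ # hook name below is from memory; verify it against the EupsTips page above
$ echo 'hooks.config.site.lockDirectoryBase = None' >> ~/.eups/startup.py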