How do I force scons to re-run all tests?

I recently merged something to master that failed a test because I didn’t understand the caching behavior of scons.

[Yes, I should have Jenkinsed it. But it was a simple change that loaded a module from the wrong file. I thought I could have the answer in 10 seconds if I just tested the scons run. The error was in the test script itself.]

Story:

  1. A test succeeds under one branch; I run scons and that success is cached
  2. I switch to a different branch and run scons; the test doesn’t fail because scons skips it, since the earlier success is still cached.

Suggestions from @rowen and @jbosch:

  1. scons --clean : This isn’t quite what I want; it removes tests/.tests (and other things) after scons has run.
  2. rm -rf tests/.tests : Fear-inducing. I don’t want to be typing rm -rf as part of my standard workflow.
  3. rm .sconsign.dblite : Workable. Satisfies my needs. Would this actually be something we put in as a regular workflow?

What other good suggestions for workflow or desired behavior does the LSST DM community have to contribute?

I find the behavior of not rerunning previously executed tests surprising. However, I’m also accustomed to testing frameworks that make it easy to select a subset of tests to run when debugging a specific problem.

I would be in favor of either changing the default behavior so that all tests are rerun, or adding a scons target that clears all prior test results.

[quote=“mwv, post:1, topic:526”]
rm -rf tests/.tests : Fear-inducing. I don’t want to be typing rm -rf as part of my standard workflow.
[/quote]

I agree, but I have the following in my .bashrc:

alias rmtests="rm -rf tests/.tests"

Using that greatly reduces my worry about removing the wrong thing, as I don’t normally use tests/.tests for anything else.

scons --clean and rm .sconsign.dblite cause a full rebuild, which is more than you asked for and can be painfully slow on a large package such as afw. Unless and until scons behavior is changed, I think the alias that deletes tests/.tests may be your best bet.

Also note that a few tests cache results independently of scons and these sometimes cause rude surprises.

I often do a git clean before running scons. I particularly like the dry run option.


I’m surprised that the tests aren’t run. The way this works is that the test foo.py depends on the file tests/.tests/foo.py (the test output is written to tests/.tests/foo.py.failed, and moved to tests/.tests/foo.py if it passes). So if the test code changes, the test should be rerun.
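
For readers unfamiliar with how that is wired up, here is a minimal, generic SCons sketch of the arrangement described above. This is not the actual sconsUtils code; the file names and the action body are purely illustrative.

```python
# SConstruct -- illustrative sketch only, not the real sconsUtils implementation.
import os
from SCons.Script import Environment

env = Environment()

def run_test(target, source, env):
    """Run the test script; create tests/.tests/foo.py only on success."""
    script = str(source[0])            # tests/foo.py
    result = str(target[0])            # tests/.tests/foo.py
    failed = result + ".failed"
    status = os.system("python %s > %s 2>&1" % (script, failed))
    if status == 0:
        os.rename(failed, result)      # success: the target now exists
        return 0
    return 1                           # failure: leave the .failed file behind

# Because tests/.tests/foo.py is declared to depend on tests/foo.py, editing
# the test script makes the target out of date and the test is rerun.
env.Command("tests/.tests/foo.py", "tests/foo.py", run_test)
```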

So it sounds like an error in the way that the dependencies are declared.

If you want to force rerunning tests, rm tests/.tests/* should work (no need for -f)

This would ideally be solved by fixing SCons’ awareness of the dependencies of the test, but I think that’s very difficult.

All tests are already marked as depending on the Swig-built Python modules for that package, and those in turn depend on all the C++ source code that goes into them (because SCons can track dependencies across C++ files through includes and linking). But SCons doesn’t know about pure-Python dependencies, because it has no scanner that tracks module imports.

As a result, if you change a C++ file, the tests will all be re-run. If you change a Python file (and that file hasn’t been marked explicitly as a dependency of a particular test - which we rarely do) the tests will not be re-run, which can make it easy to miss failures.
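
For concreteness, explicitly declaring such a pure-Python dependency would look something like the sketch below. The file names are hypothetical, and this is not something sconsUtils does today.

```python
# Illustrative only: SCons has no scanner for Python imports, so a
# pure-Python dependency has to be declared by hand if you want it tracked.
from SCons.Script import Environment

env = Environment()

# Hypothetical names: testFoo.py imports lsst.mypkg.foo, so its cached
# result should be invalidated whenever foo.py changes.
env.Depends("tests/.tests/testFoo.py", "python/lsst/mypkg/foo.py")
```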

Are you sure? The code seems to indicate that the tests depend on python, shebang, lib, and the SWIG module. Why wouldn’t a change to a Python file be noticed?

There is a related ticket, DM-2839, which describes how changes to the Python code in a .i file are ignored.

There is also the problem described in DM-2345, whereby a test failure can still be reported after you switch branches, purely because sconsUtils looks for the .failed files.

I believe I tried to make this work (my memory is a little fuzzy) by making the tests depend on the “python” directory. That still wouldn’t capture dependencies on Python modules in other packages through import statements, but I think the behavior actually suggests that it’s even worse than that, and tests aren’t always re-run even when Python code within that directory changes. I’m not sure why.
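
A blunt workaround, sketched below under the assumption that test results live in tests/.tests and are named after their test scripts (as described earlier in the thread), would be to declare every .py file under python/ as a dependency of every test result. It would still miss imports from other packages, and the tests/test*.py naming pattern is an assumption.

```python
# Illustrative only: make every test result depend on every .py file under
# python/, so any change to the package's Python code invalidates the cached
# test results.
import os
from SCons.Script import Environment

env = Environment()

py_files = []
for root, dirs, files in os.walk("python"):
    py_files += [os.path.join(root, f) for f in files if f.endswith(".py")]

for script in env.Glob("tests/test*.py"):
    result = os.path.join("tests/.tests", os.path.basename(str(script)))
    env.Depends(result, py_files)
```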

Thanks all. For now I’ll take the hybrid of @rowen’s and @RHL’s solutions as:

alias rmtests='rm tests/.tests/*'

I don’t keep my package directories clean enough for @laurenam’s good git clean suggestion. Perhaps I should. It would be appropriate for the current Python-focused packages I’m working on.