Pytest is now being used for test execution

I have changed the way that tests are run when building DM and Sims code with sconsUtils. Before DM-11514, each Python test file found in the tests/ directory was run independently and the output written to a file in tests/.tests. I have just merged a new implementation that uses pytest. The main user-visible change is that when you run scons from the command line you will see the output from pytest indicating that tests are being run, a summary of warnings (and any failures), and a final test summary. This output is also written to tests/.tests just as before, and if the tests failed a .failed file is written.

We use multi-process testing via pytest-xdist, where the number of processes is derived from the number of jobs given to scons. pytest imports all the test files before running them, so global state can result in test failures. Additionally, pytest-xdist can run tests from the same test class in different processes, so it is important that tests do not rely on a particular file system state. Rather than using fixed filenames in tests, use lsst.utils.tests.getTempFilePath(); rather than a fixed scratch location, consider using tempfile.mkdtemp() to create a temporary scratch directory.
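As a sketch of the scratch-directory pattern using only the standard library (lsst.utils.tests.getTempFilePath offers a similar context-manager interface for single files), each test can create and remove its own unique directory, so parallel pytest-xdist workers never collide on fixed file names:

```python
import os
import shutil
import tempfile
import unittest


class ScratchDirTestCase(unittest.TestCase):
    """Example of per-test scratch space that is safe under pytest-xdist."""

    def setUp(self):
        # mkdtemp returns a freshly created, uniquely named directory,
        # so parallel test processes cannot clash on the path.
        self.scratchDir = tempfile.mkdtemp()

    def tearDown(self):
        # Clean up so no files are left behind after the run.
        shutil.rmtree(self.scratchDir, ignore_errors=True)

    def testWriteOutput(self):
        outPath = os.path.join(self.scratchDir, "output.txt")
        with open(outPath, "w") as f:
            f.write("data")
        self.assertTrue(os.path.exists(outPath))
```

The same setUp/tearDown pair works regardless of which process the test lands in, because nothing outside the test depends on the directory name.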

In some rare cases a test should not share a process with others, and it is possible to force such tests into a separate process. Currently only two tests are called out like that: one in base that checks whether imports work properly, and one in pipe_tasks that creates 200 MB of test data and wastes resources when run in multi-process mode.
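For reference, sconsUtils exposes this through the tests() call in a package's tests/SConscript; the sketch below assumes the pySingles argument from the new implementation, and the file name is a placeholder, so check the sconsUtils documentation before copying it:

```python
# tests/SConscript (sketch; requires the LSST build environment)
from lsst.sconsUtils import scripts

# Files listed in pySingles are run one at a time in their own pytest
# process instead of joining the shared pytest-xdist run.
scripts.BasicSConscript.tests(pySingles=["test_imports.py"])
```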

Now that we are using pytest, there are a number of ways you can take advantage of its features:

  • If the test files have been renamed following RFC-229, you can enable pytest's automatic test discovery. You do this by editing the tests() method call in tests/SConscript to use the argument pyList=[].
  • Once test discovery is enabled, you can enable flake8 testing of the code in the package. Code should be flake8-compliant, and this can be enforced by making the tests fail if the code is not. flake8 testing is enabled by adding a setup.cfg file to the root of the package; an example file can be found in meas_base. Please add support for this the next time you are working on a package.
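For the first item, enabling automatic discovery is a one-line change in the package's tests/SConscript (a sketch, assuming the standard sconsUtils layout):

```python
# tests/SConscript (sketch; requires the LSST build environment)
from lsst.sconsUtils import scripts

# An empty pyList tells sconsUtils to let pytest discover the test
# files itself rather than passing an explicit list.
scripts.BasicSConscript.tests(pyList=[])
```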
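For the second item, a minimal setup.cfg might look like the following. The specific line length and rule exclusions here are illustrative assumptions, so copy the actual file from meas_base rather than this sketch:

```ini
[flake8]
# Illustrative values only; see meas_base for the canonical file.
max-line-length = 110
ignore = E133, E226, E228, N802, N803
exclude = __init__.py
```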

We intend to add test order randomization in the future.

A key driver for this migration is that Jenkins will be able to present data on all the tests when a CI job completes. This will allow you to easily see all the output from tests, how long they took to run, and how many were skipped (with the reasons). @josh is working on implementing this feature. JUnit-format XML is always written, so you can examine it offline.
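Outside of scons, the same JUnit-format report can be produced with pytest's standard --junitxml option; the output path below is just an example, not the exact path the sconsUtils integration uses:

```shell
# Run the tests and write a JUnit-format XML report for Jenkins or offline use
pytest --junitxml=tests/.tests/pytest-results.xml
```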

Thank you to everyone who helped with the migration, and especially @danielsf, who spent a significant amount of time sorting through pull requests and fixing race conditions in Sims tests.


If you are building on a machine that has more than about 10 cores, you might trigger a test failure in meas_base caused by sensitivity to the random number generator. This is being dealt with in DM-11620. For now the workaround is to set export EUPSPKG_NJOBS=8 before doing the build.

This sounds like a big improvement.

I want to push back gently on the recommendation to always use lsst.utils.tests.getTempFilePath. Sometimes it is useful to keep a file around when a test fails. In such cases a file name that is descriptive and unique (among all test methods of all test classes in the package) is easier to find and should be safe.

That is fine in limited cases, with the understanding that we tend not to want lots of output files hanging around after tests run. By default I would expect tests to clean up after themselves.

I have updated the Developer Guide with new instructions on testing with pytest.