What’s our requirement for backward compatibility with older versions of Astropy and matplotlib?
This question was specifically prompted by realizing that the miniconda install created by lsstsw/bin/deploy only installs Astropy 1.2.1 and matplotlib 1.5.1.
The current releases are Astropy 2.0.1 and matplotlib 2.0.2.
The two specific features I wanted to use are overwrite=True for ASCII tables in Astropy and a certain matplotlib.pyplot.figure construction. Neither is essential, but writing code that also supports Astropy 1.2 or matplotlib 1.5 would look outdated and indirect compared to what the 2.0 releases allow.
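For concreteness, here is a minimal sketch of the sort of difference I mean for the Astropy case (the table, filename, and the exact version at which the overwrite keyword appeared are illustrative; I'm leaving the matplotlib example aside):

```python
import os

from astropy.table import Table

tbl = Table({"x": [1, 2, 3], "y": [4.0, 5.0, 6.0]})  # toy table, purely for illustration

# With a recent Astropy the intent is stated directly:
tbl.write("catalog.ecsv", format="ascii.ecsv", overwrite=True)

# Supporting Astropy 1.2 means doing the same thing indirectly, e.g.
# removing any existing file by hand before writing:
if os.path.exists("catalog.ecsv"):
    os.remove("catalog.ecsv")
tbl.write("catalog.ecsv", format="ascii.ecsv")
```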
I would rather we upgrade our numpy/astropy/matplotlib to new releases relatively quickly so that we can use new features. In fact, I asked a similar question about the versions on lsst-dev: https://jira.lsstcorp.org/browse/DM-7361
Given the responses to that ticket, I’m not sure that we have a specific policy at present. Paging @swinbank…
I would love to require matplotlib 2, as there are a couple of hacks in our code that deal with the font cache and can be removed if we update.
I think the real answer is that we need an RFC for something that will quite possibly affect people using their existing Python installations. Given that we have releases every six months or so, I don’t think a full-on deprecation cycle is where we want to end up – that would mean we couldn’t use matplotlib/astropy 2 features until some time next year.
Once we decide to upgrade, we then change the version pinning in the conda configuration files and update the matplotlib and astropy stub packages to check for the newer version.
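As a sketch of what the stub-package side of that could look like (the check_minimum_version helper and the version numbers here are hypothetical; the real stub packages may do this differently):

```python
# Hypothetical minimum-version guard of the kind a stub package could run;
# the minimum versions below are illustrative, not an actual policy.
from distutils.version import LooseVersion

import astropy
import matplotlib

MIN_ASTROPY = "2.0"
MIN_MATPLOTLIB = "2.0"


def check_minimum_version(module, minimum):
    """Raise if the imported module is older than the required minimum."""
    if LooseVersion(module.__version__) < LooseVersion(minimum):
        raise RuntimeError("{} {} is too old; at least {} is required".format(
            module.__name__, module.__version__, minimum))


check_minimum_version(astropy, MIN_ASTROPY)
check_minimum_version(matplotlib, MIN_MATPLOTLIB)
```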
I think an RFC might be an excellent way to clarify what the process really is.
There is some talk of the shared stack there. We have to rebuild that sometime soon anyhow so we can switch it to Python 3. We are also trying to switch to GCC 6, but there’s that weird test failure to contend with.
We ought to (but do not necessarily) rigorously check that all code we commit works with those versions. (In practice, I suspect almost everybody is running with at least Astropy 1.2.1, per the original post, so we might well have already violated that constraint.)
Upgrading the versions of third party packages would indeed require an RFC. In our current mode of operation, I don’t think it needs a heavier-weight process than that. However, I would expect discussion on that RFC to focus not just on how great the new version is, but also on the disruption caused by the update.
While obviously the Conda environments (and hence Jenkins, shared stacks, etc.) have to track at least the minimum version specified in the pseudo-packages, I’m not sure whether we ought to require that they be exactly the same. Certainly, we don’t at the moment. Opinions on this are welcome.
We definitely do not try to ensure that Jenkins, and version pinning in general, tracks the minimum versions defined by the stub packages. The pinned versions for lsstsw etc. effectively lock in the state from the last time we did something that required a version tweak. We have relied on developers having newer versions of conda packages installed to flush out compatibility problems. We have in the past floated the idea of one Jenkins system using the cutting-edge, moving target of conda packages so that we can spot potential breakage early.
My opinion: the CI system must test at least the minimum authorized version (“the same as”); other versions can be tested as well, depending on resources.
Shared stacks and other environments should use only CI-tested versions by default.
If the stub packages are requiring something other than what we are actually testing, we should probably update those stubs.
Upgrading our Python to 3.6 is a related topic. At the end of the year, when DM is fully Python 3, I want our baseline Python to be the then-current release.