In a Confluence discussion of the WCS redesign, I asked about the dependencies of the proposed code in light of its possible use in the TCS. In reply, @parejkoj rightly noted that I was taking the discussion away from its original intent; respecting that, and because it’s really a larger concern, I’ll continue it here.
In that thread, @jbosch wrote:
> It’s hard to imagine that the requirements for a lightweight install at the summit could be more restrictive than what a typical level 3 science user would tolerate […]
That’s a different perspective. I’d have said that a Level 3 user would quite possibly be closer to the model of a maximal install, at least as far as algorithmic code is concerned. (It may be different for middleware; e.g., they may not be interested in the heavy-duty workflow framework unless they are setting up their own large-scale productions.)
My past experience has been that keeping the footprint of quasi-real-time and/or high-reliability processes tight is generally a good engineering principle. I wonder if we are just seeing this from very different angles.
By the way, I’m thinking both of code dependencies and (though I didn’t make this clear) of run-time dependencies, which can easily accrete (though need not, with good design) as the transitive closure of the code base grows:

- More shared libraries ==> more dynamic loads, more traversals of search paths, more system calls, and slower application startup.
- More code included ==> a lower probability that the final application author fully understands the required environment and the surprising things that may happen under the covers, e.g., unexpected I/O loading the configuration and conditions data associated with big frameworks.
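As a crude but concrete way to see the run-time side of this, one can time a heavyweight import and count the compiled extension modules it drags in. A minimal sketch, assuming a Python environment with the stack available (`lsst.afw.image` here is only a stand-in for any heavy import):

```python
import sys
import time

# Crude measurement sketch: time a heavyweight import and count the
# compiled extension modules it pulls in. "lsst.afw.image" is only a
# stand-in for a heavy stack import; any large compiled package will do.
t0 = time.perf_counter()
import lsst.afw.image  # assumed heavyweight import
elapsed = time.perf_counter() - t0

ext = [m for m in list(sys.modules.values())
       if getattr(m, "__file__", None) and m.__file__.endswith((".so", ".pyd"))]
print(f"import: {elapsed:.3f} s; {len(ext)} extension modules; "
      f"{len(sys.modules)} modules total")
```

On Linux, running the same thing under `LD_DEBUG=libs` makes the dynamic loader’s search-path traversals directly visible.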
This conversation seems worth continuing, as we appear to be heading into a phase of the project in which it is increasingly likely that DM code will appear in a variety of summit processes. (See, for instance, the discussions at the recent Camera workshop, which I’ll write up elsewhere.)
Similar concerns arise, for instance, in the question of whether DM code will be used/usable in the centroiding of the stars observed by the guide sensors, and of how much of the usual application framework can/must be stripped away to allow the actual analysis to run in the few milliseconds available.
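For a sense of scale, the innermost arithmetic of a centroid is tiny. Here is a minimal sketch of the fully stripped-down extreme, using nothing but numpy (this is not any actual DM algorithm, just an illustration of how little the core computation itself needs):

```python
import numpy as np

def centroid(stamp, background=0.0):
    """First-moment (center-of-light) centroid of a postage stamp.

    Deliberately framework-free: no configuration, no logging, no I/O.
    """
    data = np.clip(stamp - background, 0.0, None)
    total = data.sum()
    if total <= 0.0:
        raise ValueError("no flux above background")
    ys, xs = np.indices(data.shape)
    return (xs * data).sum() / total, (ys * data).sum() / total

stamp = np.zeros((15, 15))
stamp[7, 8] = 100.0  # single bright pixel at x=8, y=7
print(centroid(stamp))  # -> (8.0, 7.0)
```

The real question, of course, is everything around this: how the pixels arrive, how the result is delivered, and how much of the stack’s machinery comes along for the ride.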
It is relevant to our design for packaging and deployment of the DM software whether our vision is that people should simply quasi-monolithically install “the stack” or whether it is possible to create narrow slices as appropriate. This is also relevant if we wish to share elements of the architecture we are developing (e.g., the Butler) with other projects without requiring them to absorb all of DM or even all of afw. It’s relevant to whether there are any parts of our C++ software that are useful without the Python, and vice versa.
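To make “narrow slices” concrete in packaging terms: one shape the idea could take (in Python, at least) is optional extras, so that a consumer installs only the piece it needs. All names and dependency choices below are invented for illustration; this is a sketch of the shape of the idea, not a proposal for actual package boundaries:

```python
# Hypothetical setup.py sketch: "slices" expressed as setuptools extras,
# so that "pip install dm-stack[butler]" pulls in only the data-access
# layer. Every name and dependency here is invented for illustration.
from setuptools import setup, find_packages

setup(
    name="dm-stack",                # hypothetical distribution name
    version="0.0.1",
    packages=find_packages(),
    install_requires=["numpy"],     # the minimal core everyone gets
    extras_require={
        "butler": ["sqlalchemy"],           # data-access slice only
        "afw": ["astropy"],                 # algorithmic layer
        "pipelines": ["astropy", "scipy"],  # the heavyweight end
    },
)
```

The same question exists on the C++ side as well: which libraries can be built and linked without dragging in the rest.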
One way to address this is to wait until very concrete use cases appear - e.g., the construction of the guider system software - and then tackle the issue as necessary. It may just be too complicated and fuzzy to reason about without at least a few guiding examples. If so, I think we should run through a few such examples soon-ish (certainly by 2017).