Of Vitalism and System Engineering
Simplifying somewhat, Vitalism is the philosophical theory that living objects differ from non-living objects because they contain a specific element sometimes called the “vital spark”.
In contrast, and again simplifying somewhat, Mechanism is the philosophy that living objects are made of parts which in turn are made of smaller parts until the smallest parts are clearly non-living. It is the actions of the parts operating together that produce a living object.
Mechanism has prevailed. Vitalism has been refuted.
It took experimentation for the mechanistic view to take hold. Vitalism is not a stupid or nonsensical theory. Without the evidence, Mechanism and Vitalism are both reasonable theories.
Indeed, and importantly for this article, Vitalism is a natural explanation for how living objects come about; to steal a quote from Wikipedia:
Recent experimental results show that a majority of preschoolers tend to choose vitalistic explanations as most plausible. Vitalism, together with other forms of intermediate causality, constitute unique causal devices for naive biology as a core domain of thought.
This ease of acceptance of Vitalism can be seen in marketing. In his book Bad Science, Ben Goldacre comments about cosmetic advertising saying:
The link between the magic ingredient and efficacy is made only in the customer’s mind.
My point here is not about the ethics of making such a link. My point is that it is desirable and easy to make the link because humans are predisposed to expect and accept explanations of the form that a product works because it has a magic ingredient, that is, a vital essence. Advertisers understand this.
My audience here is engineers. You might expect that engineers would be less susceptible to this philosophical pitfall. However, we fall for it in a different way. In fact, we fall for it in two ways.
If reductionist science is the discipline of breaking the universe down into small components to see where complex behaviours come from, then engineering is the discipline of putting together small components to build the desired complex behaviour. For example, the desired complex behaviour might be a bridge capable of transporting vehicles across a river and the small components may be concrete and steel.
So, if the Vitalism pitfall in science is the assumption that an observed complex behaviour must imply the existence of a vital essence of that behaviour, then the Vitalism pitfall in engineering is the assumption that, to achieve a desired complex behaviour, the key is to build components that embody that specific complex behaviour. That is, if the system as a whole does X, then we must be able to point at a small component which does X.
This pitfall shows in systems engineering. If we want to build, for example, a networking system that guarantees video latency, we will be drawn to the assumption that to achieve this goal we must drill all the way down to the lowest levels of the network baseband and tag the data specifically as video data. This is the first trap.
The problem with this is one of scale. If every layer of the networking stack has components designed for each individual use case then there will be too many components to manage effectively, and these components will interact unpredictably.
There’s a secondary hazard. Building these components takes time. If the use case goes away, as can happen if the market changes, then components designed for this one use case become useless.
The solution is to break down the problem into small, generally reusable, independent components that can be combined to produce the desired complex behaviour. Some of these components will be usable for other use cases. Even if the original use case goes away, well defined components may be salvaged for other uses.
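To make the compositional approach concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: a generic priority queue and a generic rate limiter are combined by a scheduler, and "video gets low latency" emerges from the combination. No component knows what video is, so each part remains reusable if the video use case disappears.

```python
# Hypothetical sketch: small, generic components composed into a
# latency-sensitive scheduler. None of these names are a real API.

from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Packet:
    priority: int                        # lower value = more urgent
    payload: str = field(compare=False)  # payload plays no part in ordering

class PriorityQueue:
    """Generic component: works for any traffic class, not just video."""
    def __init__(self):
        self._heap = []
    def push(self, packet):
        heapq.heappush(self._heap, packet)
    def pop(self):
        return heapq.heappop(self._heap)
    def __len__(self):
        return len(self._heap)

class TokenBucket:
    """Generic component: a trivial rate limiter, independent of use case."""
    def __init__(self, capacity):
        self.tokens = capacity
    def try_consume(self):
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

def schedule(queue, bucket):
    """Compose the generic parts: drain the queue in priority order,
    subject to the rate limit. Low latency for urgent traffic emerges
    from the combination; no component embodies 'video latency'."""
    sent = []
    while len(queue) and bucket.try_consume():
        sent.append(queue.pop().payload)
    return sent

q = PriorityQueue()
q.push(Packet(priority=5, payload="bulk-transfer"))
q.push(Packet(priority=1, payload="video-frame"))
q.push(Packet(priority=3, payload="telemetry"))
sent = schedule(q, TokenBucket(capacity=2))
print(sent)  # the video frame goes first; bulk transfer waits
```

Note that swapping video for, say, voice traffic requires no change to any component, only to the priorities assigned at the edge of the system.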
Small, generally reusable, independent components are also more amenable to testing than components designed for a single higher level use case. Small components can be unit tested before being passed to system testing. Reusable components can be tested with a wide variety of inputs. In contrast, components designed for a specific use case might be testable only in a system test for that one use case.
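As a small illustration of that testing advantage, consider the hypothetical generic component below (a moving-average filter, chosen arbitrarily). Because it depends on nothing else, it can be exercised with a wide variety of inputs, including edge cases, in an ordinary unit test, with no need to stand up the full system around it.

```python
# Hypothetical generic component, unit-testable in isolation.

def moving_average(values, window):
    """Average of the last `window` values at each position."""
    if window <= 0:
        raise ValueError("window must be positive")
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Unit tests: varied inputs and edge cases, no system test required.
assert moving_average([], 3) == []
assert moving_average([4], 1) == [4.0]
assert moving_average([1, 2, 3, 4], 2) == [1.0, 1.5, 2.5, 3.5]
try:
    moving_average([1], 0)
except ValueError:
    pass  # the edge case is caught in seconds, not late in system test
```

A component that only made sense inside one specific video pipeline could not be probed this cheaply; every test would have to go through the pipeline.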
The second trap is worse. The second trap is the assumption that to get the desired complex behaviour, the only thing we need is the vital essence of that behaviour. Even if we’ve redefined the vital essence into something reusable (say, a load-balancing framework instead of a video networking component) the trap is to ignore the rest of the system (buffering, CPU load, memory, multi-tasking requirements, latency and so on). It’s the equivalent of Dr. Frankenstein assuming that all he needs to do to endow his creation with life is to provide a galvanic impulse to reanimate the dead flesh, but not worrying about whether the blood vessels are leak-tight, or, worse, not bothering to fit the lungs at all.
If the rest of the system is ignored, the problem may not be found until late in testing, and may show as the system almost working. A lot of time can be wasted tweaking the vital spark before it is realised that the problem lies elsewhere and is so fundamental that it requires a redesign with resources that were never requested. As has been said elsewhere:
When a program is being tested, it is too late to make design changes.
In the worst case, after the rest of the system is upgraded and redesigned to smooth out the problems, it may be found that the original, apparently vital, component was never needed at all. The problem was elsewhere.
BG01 Ben Goldacre. Bad Science. Fourth Estate (2008). pp. 25–26. ISBN: 978-0-00-724019-7.
GJ01 Geoffrey James. The Tao of Programming. Info Books (1986). Book 3. ISBN: 0-931137-07-1.
Comments should be addressed to firstname.lastname@example.org
Copyright © 2020 Steven Singer.