Complexity Engineering

Does Optimal Mean Best? Nope!

A theorem by J. Marczyk, stating that optimal systems are fragile. (C) J. Marczyk, 2000.

The understanding, assessment and management of risk and uncertainty is important not only in engineering, but in all spheres of social life. Because the complexity of human-made products, and of the related manufacturing processes, is increasing rapidly, these products are becoming more and more exposed to risk: complexity, in combination with uncertainty, inevitably leads to fragility. Complex systems are characterized by a huge number of possible failure modes, and it is practically impossible to analyze them all. The alternative, therefore, is to design systems that are robust, i.e. that possess a built-in capacity to absorb both expected and unexpected random variations of operating conditions without failing or compromising their function. This capacity for resilience, the main characteristic of robust systems, is reflected in the fact that the system is no longer optimal, a property tied to a single, precisely defined operating condition, but instead remains acceptable (fit) over a wide range of conditions.

In fact, contrary to popular belief, robustness and optimality are mutually exclusive. Complex systems are driven by so many interacting variables, and are designed to operate over such wide ranges of conditions, that their design must favor robustness, not optimality. In other words, robustness is equivalent to an acceptable compromise, while optimality is synonymous with excessive specialization. An optimal system ceases to be optimal as soon as a single variable changes, something quite possible in a world of ubiquitous uncertainty.

The ancient Romans already knew this: corruptio optimi pessima, the corruption of the best is the worst. When you're sitting on a peak, the only way is down; when you're optimal, your performance can only degrade. It is for this reason that

optimal systems are fragile

It is for this reason that a state of optimality is not the most probable state of a system. Recently, I have tried to translate the above intuitions into something a bit more analytical and technical. The result is the theorem below.


The implications of this simple theorem are very important. Entropy reflects the level of organization of a system. By virtue of the second law of thermodynamics, however, the entropy of a closed system can only increase, reflecting the incessant drift of things towards lower levels of organization. What the above theorem shows is that systems of the class in question, i.e. systems whose behaviour can be locally approximated by a second-order response surface (a popular approximation nowadays), are not willing to spend much time being optimal. In practice, such systems will not privilege states of optimality, given that these correspond to states of minimum entropy. Since entropy tends to increase, this will remove the system from its state of grace: given the chance, a system at minimum entropy will try to increase it.

The important point, however, is that the inevitable increase in entropy is more likely when the system is close to a minimum (or maximum), because in the vicinity of the extremal points of a function the entropy gradient is highest. It is also true that whatever state a system is in, it will try to increase its entropy; even a robust system will. But for optimal systems this increase is more probable and more dramatic. The proof of this statement, which I intentionally omit here, rests on the fact that the curvature of a function is highest in the vicinity of a minimum (or maximum), and this translates into a higher skewness of pY(y). It so happens that skewness is also a measure of entropy.
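The skewness claim is easy to check numerically. The sketch below is my own illustration under simple assumptions: a quadratic response y = x^2 (the prototypical second-order surface) fed with Gaussian inputs. When the input is centred on the minimum, the output distribution pY(y) is strongly skewed (it is chi-square-like); far from the minimum, where the surface is locally almost linear, the skewness nearly vanishes.

```python
import random
import statistics

random.seed(0)

def response(x):
    # second-order (quadratic) response surface with its minimum at x = 0
    return x**2

def skewness(data):
    # standardized third central moment
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    return statistics.fmean([((v - m) / s) ** 3 for v in data])

n = 100_000
# Case 1: operating point at the optimum (inputs centred on the minimum)
y_at_min = [response(random.gauss(0.0, 1.0)) for _ in range(n)]
# Case 2: operating point far from the optimum (surface locally almost linear)
y_off_min = [response(random.gauss(10.0, 1.0)) for _ in range(n)]

print(f"skewness of pY(y) at the minimum:  {skewness(y_at_min):.2f}")
print(f"skewness of pY(y) off the minimum: {skewness(y_off_min):.2f}")
```

At the minimum the output follows a chi-square distribution with one degree of freedom, whose skewness is sqrt(8), roughly 2.83; away from the minimum the linear term dominates and the output stays close to Gaussian, with skewness near zero.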


In short, the theorem explains why being optimal is risky, and why optimal solutions are fragile. Nature doesn't privilege optimality at all. Self-organization, the main engine behind the evolution of biospheres, prefers to favour fitness instead.

Clearly, the theorem can be easily extended to other distributions and other more general classes of response surfaces, but I leave that to the academics.

Four years later, this article appeared in Nature. See highlighted text.

Omnis ars imitatio est naturae (all art is an imitation of nature).

www.ontonix.com

Established originally in 2005 in the USA, Ontonix is a technology company headquartered in Como, Italy. The unusual technology and solutions developed by Ontonix focus on countering what most threatens the safety of advanced products, critical infrastructures and IT networks: the rapid growth of complexity. In 2007 the company was recognized as a Gartner Cool Vendor. What makes Ontonix different from all those companies and research centers who claim to manage complexity is that we have a complexity metric. This means that we MEASURE complexity. We detect anomalies in complex defense systems without using Machine Learning, for one very good reason: our clients don't have the luxury of the multiple examples of failures needed to teach software to recognize them. We identify anomalies without having seen them before. Sometimes, you must get it right the first and only time!
