
Complexity and the Natural Limits of AI and LLMs

The Principle of Incompatibility, introduced by Lotfi Zadeh in 1973, states:

“As the complexity of a system increases, our ability to make precise and yet significant statements about its behavior diminishes until a threshold is reached beyond which precision and significance (or relevance) become almost mutually exclusive characteristics.”

In simpler terms:

High precision is incompatible with high complexity

If a system is very complex, it will not behave in a precisely predictable way, and any precise statements about it will be largely irrelevant. Think of humans and human nature.


In other words: The more complex a problem is, the less meaningful precise statements become.

This principle highlights a fundamental trade-off:

  • High Precision + High Significance/Relevance = Only possible for simple systems
  • Complex Systems = You must choose between precision and relevance

In essence, the principle states that a precise solution to a highly complex problem is not possible; one has to live with something that is approximate, fuzzy, or just good enough.
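As a toy illustration of the "approximate, fuzzy" alternative, here is a minimal sketch in the spirit of Zadeh's fuzzy sets, contrasting a crisp yes/no statement with a graded one. The "tall" example, the thresholds, and the linear ramp are illustrative assumptions of mine, not something taken from the article.

```python
# A minimal sketch contrasting a crisp (precise but brittle) statement with a
# fuzzy (approximate but more relevant) one. Thresholds are illustrative only.

def crisp_tall(height_cm: float) -> bool:
    """Precise but brittle: 179.9 cm is 'not tall', 180.0 cm is 'tall'."""
    return height_cm >= 180.0

def fuzzy_tall(height_cm: float) -> float:
    """Approximate but robust: degree of membership in 'tall', in [0, 1]."""
    if height_cm <= 170.0:
        return 0.0
    if height_cm >= 190.0:
        return 1.0
    return (height_cm - 170.0) / 20.0  # linear ramp between 170 cm and 190 cm

for h in (169.0, 175.0, 180.0, 185.0, 191.0):
    print(f"{h} cm -> crisp: {crisp_tall(h)}, fuzzy: {fuzzy_tall(h):.2f}")
```

The fuzzy statement gives up precision (a degree instead of a verdict) but remains meaningful across the whole range, which is exactly the trade-off the principle describes.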

The Principle of Incompatibility applies to everything: climate, society, economics, engineering, human nature, or Artificial “Intelligence”.

Apple has shown in a recent paper, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity”, that:

“the best reasoning LLMs perform no better (or even worse) than their non-reasoning counterparts on low-complexity tasks and that their accuracy completely collapses beyond a certain problem complexity.”

Let’s look at the Tower of Hanoi problem: move all the disks from peg 1 to peg 3 (or 2), moving only one disk at a time and never placing a larger disk on top of a smaller one.

This is a problem with a unique optimal solution, 2^n - 1 moves for n disks, even though there are various ways of arriving at it. The paper shows how LLMs collapse beyond ten disks, with accuracy falling to 0%.
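For reference, the exact solution is trivial to write down algorithmically; the difficulty the paper measures lies in executing a move sequence that grows exponentially with the number of disks. A minimal sketch in Python (my own illustration, not code from the paper):

```python
# Exact recursive solution to the Tower of Hanoi. Every move is fully
# determined, yet the number of moves grows as 2**n - 1 with n disks.

def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Append the exact move sequence for n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # rebuild on top of it

moves: list = []
hanoi(10, "1", "3", "2", moves)
print(len(moves))  # 1023 moves, i.e. 2**10 - 1
```

For ten disks the exact sequence is already 1,023 moves long, which is the regime in which the article notes accuracy falling to 0%.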

From the cited paper: “As complexity increases, reasoning models initially spend more tokens while accuracy declines gradually, until a critical point where reasoning collapses—performance drops sharply and reasoning effort decreases.”

It is also stated that:

“Our detailed analysis of reasoning traces further exposed complexity-dependent reasoning patterns, from inefficient “overthinking” on simpler problems to complete failure on complex ones. These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.”

One such fundamental barrier emanates from the Principle of Incompatibility. Nature offers no free lunch. And it doesn’t negotiate.

The Tower of Hanoi illustration (source)


Originally established in 2005 in the USA, Ontonix is a technology company headquartered in Como, Italy. The unusual technology and solutions developed by Ontonix focus on countering what most threatens safety, advanced products, critical infrastructures, and IT network security: the rapid growth of complexity. In 2007 the company was recognized as a Gartner Cool Vendor. What makes Ontonix different from the many companies and research centers that claim to manage complexity is that we have a complexity metric. This means that we MEASURE complexity. We detect anomalies in complex defense systems without using Machine Learning for one very good reason: our clients don’t have the luxury of the multiple examples of failures needed to teach software to recognize them. We identify anomalies without having seen them before. Sometimes, you must get it right the first and only time!
