Complexity Engineering

How Much AI Can a Car Handle?

Just because something can be done doesn’t mean it should be done. This is especially true of technology. The fact that something is technically feasible doesn’t mean that implementing it is a good idea. Remember when you could wave your hand to control the volume of a car radio? And today, what about the huge, three-foot-long dashboard display screens? Are they really necessary, or are they just a desperate attempt at being original?


Setting aside the immense information overload, and the impracticality of having to navigate complex menus to perform the simplest of tasks, these displays are indubitably distracting. It seems the days when the most important gadget in a car was under the bonnet are over. Only time will tell how much this trend will impact road safety.

However, the point of this blog is a different one: AI. Artificial Intelligence is rapidly penetrating not just many spheres of social life but also modern products, such as automobiles. One example is autonomous driving. Just as on modern aircraft, where the level of automation is so high that pilots have been reduced to button-pushers, drivers will forget how to drive and will be unable to react when necessary. AI will see to that.

Soon everything will be AI-driven: ignition, stability control, the gearbox, airbags, navigation, the ‘infotainment’ system. There will be more and more intelligent gadgets, stacked upon each other, with the capability to generate innovative suggestions and responses and to make split-second decisions based on a myriad of sensors. Again, let’s set aside the fact that sensors can be broken, badly calibrated, misaligned, or simply dirty or slightly damaged, feeding the millions of lines of code with potentially corrupted inputs. Remember the MCAS system on the Boeing 737 MAX, which cost the lives of over 300 people? MCAS is a great example of a piece of software added a posteriori, after the product has been rolled out, to compensate for something: a design blunder, or a desperate need to catch up with the competition. It is a well-known fact that the best control systems are those designed in an integrated fashion with the plant they are supposed to control, not separately. But to know that, one has to be a control systems engineer, not a “data scientist”. Again, a totally different story.

The danger of happily piling apps, gizmos and gadgets on top of each other is immense, especially if one is limited to a linear, three-dimensional model of thought. No action is without consequences, and we will point out only two.

First of all, an increasing number of gadgets increases the complexity of a car’s IT infrastructure in a non-linear manner. And we already know that very high complexity inevitably means fragility; the Principle of Fragility, coined by Ontonix, takes care of that. The way complexity increases is described by Theorem 1 of QCT (Quantitative Complexity Theory), which states that:

The complexity of a coupled system is greater than or equal to the sum of complexities of the component subsystems.

The ‘equal to’ condition is immensely improbable: because most of these intelligent subsystems interact, everything is quite tightly coupled. The theorem and its proof may be consulted here.
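As a back-of-the-envelope sketch of why coupling drives complexity up non-linearly (this is emphatically not Ontonix’s proprietary metric, just a toy counting argument): if every pair of subsystems can potentially interact, the number of interaction channels grows quadratically with the number of subsystems, so merging two systems creates channels that neither had on its own.

```python
# Toy sketch (NOT Ontonix's complexity metric): count the potential
# pairwise interaction channels among n tightly coupled subsystems.

def potential_couplings(n_subsystems: int) -> int:
    """Each of the n subsystems can interact with every other one."""
    return n_subsystems * (n_subsystems - 1) // 2

# Two separate systems of 10 gadgets each...
separate = potential_couplings(10) + potential_couplings(10)  # 45 + 45 = 90
# ...versus the same 20 gadgets coupled into one system.
coupled = potential_couplings(20)                             # 190

print(separate, coupled)  # the coupled whole has 100 extra channels
```

The gap between `coupled` and `separate` is exactly the set of cross-system channels created by the merge, which is why the whole is more complex than the sum of its parts.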

The second unpleasant consequence of this state of affairs lies in L. Zadeh’s Principle of Incompatibility, which states:

High Precision is Incompatible with High Complexity

In other words, the more you make something complex, the less precise it will get. Super complex systems are naturally fuzzy, never precise, and are, inevitably, unpredictable and difficult to understand and control. Think of the climate, the economy, the global financial system, society, politics, human nature, global supply chains, or software systems with hundreds of millions of lines of code. An easier way to express the Principle is this:

In the presence of high complexity, precise statements are irrelevant and relevant statements are imprecise

So, stacking intelligent and creative gadgets on top of each other while thinking that you are in full control is wishful thinking of the worst kind. But there is more. In the future, intelligent, self-driving cars will interact – intelligently of course – to provide us with collision-free, efficient and intelligent traffic. Millions of sensors feeding trillions of lines of code of smart, generative AI algorithms. Progress is a good thing, especially if human lives are not the price. Apparently, using humans in large-scale experiments is the leitmotif of our times.

The bottom line is this: forcefully integrating new technologies into existing products that were designed independently of those technologies is not without consequences. The combination of the First Theorem of QCT, the Principle of Fragility and the Principle of Incompatibility will make sure that car manufacturers face increasing liability, warranty and recall costs in the future.

Established originally in 2005 in the USA, Ontonix is a technology company headquartered in Como, Italy. The unusual technology and solutions developed by Ontonix focus on countering what most threatens safety, advanced products, critical infrastructures, and IT network security: the rapid growth of complexity. In 2007 the company received recognition by being selected as Gartner’s Cool Vendor. What makes Ontonix different from all those companies and research centers who claim to manage complexity is that we have a complexity metric. This means that we MEASURE complexity. We detect anomalies in complex defense systems without using Machine Learning for one very good reason: our clients don’t have the luxury of multiple examples of failures necessary to teach software to recognize them. We identify anomalies without having seen them before. Sometimes, you must get it right the first and only time!
