Complexity Engineering Society

Autonomous Driving. Are we Ready?


In highly complex systems, malfunctions or even bad design may remain invisible for a long time. In highly complex systems, the crucial variables are often discovered by accident. Highly complex systems cannot be designed without taking complexity into account. This sounds obvious, but in engineering design today complexity is not treated as an attribute of a system, as a variable to account for when designing the system’s architecture. An example is the electronics of a modern car. Read our recent blog on the subject.

However, a car that stops and won’t restart because a faulty sensor says the tires aren’t inflated is one thing. Excessively complex car electronics can also cause physical damage, not to mention loss of life.

A Tesla Model S P90D owner from Munich writes: “… my Tesla decided, whilst performing an autopilot parking maneuver, to suddenly accelerate and crash into the car that was parked behind us”. The damage to the other car was considerable.

The question, at this point, is this: after such an experience, would you put your life in the hands of a computer with four wheels and a 760 BHP engine on a highway in autonomous driving mode, knowing that it has difficulty simply parking between two solid obstacles that are not even moving? Wikipedia states:

“An autonomous car (driverless car, self-driving car, robotic car) is a vehicle that is capable of sensing its environment and navigating without human input. Autonomous cars can detect surroundings using a variety of techniques such as radar, lidar, GPS, odometry, and computer vision.”

That is the theory. There are two major problems.

  1. It is difficult to program a complex piece of software that will always react correctly – and without human input – to other cars driven by humans. A rational machine versus a crowd of irrational humans. Hhhmmmm.
  2. As we know, excessive complexity is a formidable source of fragility. If you want to make something fragile, make it very complex. Problems are guaranteed. This is because a highly complex system may behave according to many ‘modes’ (in non-linear mechanics these are called ‘attractors’). High complexity means that under certain circumstances a system may jump from one mode to another without any warning. Sometimes one such mode of functioning is called a ‘fault’. A most unpleasant property.

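The mode-jumping described in point 2 can be illustrated with a toy example (a sketch only: a one-dimensional bistable system, not a model of any real vehicle). The system has two attractors; a single brief disturbance pushes the state across the basin boundary, after which it silently settles into the other mode.

```python
# Toy bistable system: dx/dt = x - x**3 has two stable attractors, x = +1 and x = -1.
# A brief disturbance can push the state across the basin boundary at x = 0,
# after which the system settles -- without warning -- into the other mode.
def simulate(x0, kick_at=None, kick=0.0, steps=2000, dt=0.01):
    x = x0
    for t in range(steps):
        if t == kick_at:
            x += kick                  # brief external disturbance
        x += (x - x**3) * dt           # forward-Euler step of dx/dt = x - x^3
    return x

nominal = simulate(1.0)                          # stays at the +1 attractor
kicked = simulate(1.0, kick_at=100, kick=-2.5)   # settles at the -1 attractor
```

Nothing in the equations changed between the two runs; one momentary input was enough to put the system permanently into a different mode of functioning.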
From an article published on July 1st, 2016:

“The first known death caused by a self-driving car was disclosed by Tesla Motors on Thursday, a development that is sure to cause consumers to second-guess the trust they put in the booming autonomous vehicle industry.

The 7 May accident occurred in Williston, Florida, after the driver, Joshua Brown, 40, of Ohio put his Model S into Tesla’s autopilot mode, which is able to control the car during highway driving.

Against a bright spring sky, the car’s sensors system failed to distinguish a large white 18-wheel truck and trailer crossing the highway, Tesla said. The car attempted to drive full speed under the trailer, “with the bottom of the trailer impacting the windshield of the Model S”, Tesla said.”

And what about a red truck against a red sky at sunset? Was the car damaged in the Munich parking incident the ‘wrong color’? There are very many circumstances that can fool a real-time software system running lidar, GPS, odometry, and computer vision, and most of them remain to be discovered. Think of hackers…
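As a caricature of this failure class (purely illustrative; real perception stacks are far more sophisticated than a brightness threshold), consider a naive detector that flags obstacles as pixels much darker than the sky:

```python
# Caricature of a vision failure mode: an obstacle detector that flags any pixel
# much darker than the sky. Brightness values are on a 0-255 scale (illustrative).
def detect_obstacle(scanline, sky=240, margin=40):
    """Return True if any pixel is darker than the sky by more than `margin`."""
    return any(p < sky - margin for p in scanline)

dark_trailer  = [240, 240, 80, 80, 240]    # dark trailer against a bright sky
white_trailer = [240, 240, 235, 235, 240]  # white trailer against a bright sky

detect_obstacle(dark_trailer)   # detected
detect_obstacle(white_trailer)  # missed: low contrast defeats the detector
```

The second scene is exactly the white-trailer-against-bright-sky geometry: the obstacle is there, but the signal the detector relies on is not.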

By the way, will drunk people be allowed to ride in their cars in autonomous mode?

There is no single (best) way to write such a piece of fail-safe software. There are many ways to conceive and architect an autonomous driving system and the associated control strategy (PD, pole placement, Kalman filtering, MPC, LQG, sliding mode, mu-control, centralized, decentralized, adaptive, hierarchical, H-infinity loop shaping, stochastic control, bang-bang, etc.). Then there are very many ways to write the actual software. The combinations are infinite. And so are the potential failure modes.
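To make just the first entry in that list concrete, here is a minimal sketch of a PD controller on a toy one-dimensional parking problem. The double-integrator plant and the gains are illustrative assumptions, not any manufacturer’s design; every other strategy in the list would attack the same problem differently.

```python
# Minimal PD controller on a toy 1-D parking problem: drive the position error
# to zero. The double-integrator plant and the gains kp, kd are illustrative
# assumptions, not any manufacturer's design.
def pd_park(x0, v0=0.0, kp=4.0, kd=3.0, dt=0.01, steps=1000):
    x, v = x0, v0                  # position error (m), velocity (m/s)
    for _ in range(steps):
        u = -kp * x - kd * v       # PD law: push against the error and its rate
        v += u * dt                # semi-implicit Euler integration
        x += v * dt
    return x, v

x, v = pd_park(x0=2.0)             # start 2 m from the target, at rest
# with these gains the error decays toward zero
```

Even in this two-line control law there are free choices (gains, time step, integration scheme) whose interactions with sensors, actuators and the rest of the software multiply the design space.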

Just to give the reader an idea of how much freedom engineers enjoy when designing such systems, consider that, unlike Boeing, Airbus uses asynchronous (uncoupled) sidestick controls. This means that what the flying pilot is doing with the control stick cannot be seen or felt by the other pilot. Two radically different strategies. Which one is right? Asynchronous controls may seem an insane idea, yet nearly half of the airliners flying today are Airbuses and millions of people fly on them. In the 2010 Qantas A380 incident over Indonesia, over 800 error messages were displayed by the cockpit computers after the uncontained failure of the No. 2 engine. With nearly 100 million lines of code, this is not a surprise. Software this complex becomes ‘alive’ and develops its own little personality.

While there are only two major aircraft manufacturers today, there are dozens of car companies. Each will adopt a different software strategy for its autonomous driving system. Each will use different sensors, actuators, ECUs, etc. Now imagine how multitudes of autonomous (and differently built) cars will interact. Imagine ‘collaborative driving’: groups of cars exchanging information so as to collectively ‘optimize’ traffic on a motorway. Imagine all the countless ways in which accidents will occur. To the delight of lawyers. It really does sound like ‘disruptive technology’. Pun intended.

But the most important issue with autonomous driving is the same one that caused the Air France 447 disaster over the Atlantic in 2009: there is so much automation on modern airliners that pilots no longer know how to fly. They are becoming mere button pushers. If people stop driving their own cars, they will become unable to react to situations and circumstances that cannot be hardwired into a piece of software. People will basically forget how to drive. Few will know what a road sign means. Is this what we want?

The next time you press a button, any button, think twice. In the early days of computing, experienced programmers wrote computer code. Today, a lot of software is written by summer students and kids.


“In a sense the car has become a prosthetic, and though prosthetics are usually for injured or missing limbs, the auto-prosthetic is for a conceptually impaired body or a body impaired by the creation of a world that is no longer human in scale.”
― Rebecca Solnit, Wanderlust: A History of Walking




Established originally in 2005 in the USA, Ontonix is a technology company headquartered in Como, Italy. The unusual technology and solutions developed by Ontonix focus on countering what most threatens safety, advanced products, critical infrastructures, and IT network security: the rapid growth of complexity. In 2007 the company received recognition by being selected as a Gartner Cool Vendor. What makes Ontonix different from all those companies and research centers who claim to manage complexity is that we have a complexity metric. This means that we MEASURE complexity. We detect anomalies in complex defense systems without using Machine Learning for one very good reason: our clients don’t have the luxury of multiple examples of failures necessary to teach software to recognize them. We identify anomalies without having seen them before. Sometimes, you must get it right the first and only time!
