
QCM – What it is NOT

Quantitative Complexity Management (QCM) technology has been around since 2005. It is a very unusual piece of technology and offers a radically innovative approach to dealing with data, anomalies and risk. As often happens with innovative technologies, it is years ahead of mainstream thought, sometimes even ahead of predominant and accepted philosophies. And herein lies the problem. If you do something truly innovative, it means that you are (practically) the only one doing it. And this makes things very difficult when it comes to building a business around your innovation, or, even worse, explaining the innovation to others. This is particularly true when it comes to advanced data analysis or data mining (it should really be called “knowledge mining”, as one is mining for knowledge). QCM is a great example of how difficult it can be to explain, even to highly trained individuals.

One of the key applications of QCM is to deliver early warnings of faults or malfunctions in large complex systems. The comment we often get, after a carefully crafted presentation, is this:

“It appears you have to know the fault conditions up front so that QCM can predict when the system is approaching those conditions”.

No. No, no, no! QCM does NOT need to know anything up front. This would be true if QCM were a pattern-recognition tool, or a tool that uses examples to train itself to recognize an anomaly. If that were the case, QCM would be a trivial Machine Learning system which, like a child, learns on the basis of examples and repetition. Machine Learning (OK, Artificial Neural Nets) has been around for decades and it is quite good at very specific tasks such as recognizing faces, handwriting, or the silhouette of an enemy tank behind a bunch of trees. That’s easy. Been there, done that. Just to make it clear:

QCM is NOT a Machine Learning system – it does not recognize faults or malfunctions

 

Imagine that you are dealing with a system with thousands of variables, say the software system on a modern aircraft. Millions of lines of code, thousands of functions, tens of thousands of variables. Now ask yourself these questions:

  • Do you know how many anomalies or faults such a system can experience? Do you know in how many ways such a system can malfunction?
  • Can you define (in precise, workable technical terms) each one of these faults or malfunctions?
  • Can you provide a sufficient number of examples of each fault and malfunction in order for a Machine Learning tool to learn to recognize them?
  • Can you afford to have such a system fail a sufficient number of times for the tool to learn to recognize the said anomalies? Do you have that luxury?

Clearly, the answer to all these questions is NO. NO, you don’t know in how many ways a complex system can fail. What you do know is that the number of failure modes will be huge and proportional to the complexity of your system. So, if this is the case, how can you anticipate – not just simply record – a fault? This is what QCM was designed to do – to recognize that a system is in a pre-crisis situation without ever having experienced that particular crisis before. Just to make it very clear:

QCM can recognize that a system is entering a state of crisis without having ever seen an example of the said crisis

 

How can that be? It is often said that sufficiently advanced technology may be perceived as magic. QCM is not magic. It is different. A simple example may help. If you are suddenly running a high fever, you needn’t be a doctor to figure out that something is going on. A high fever may be a sign of infection, flu, or a bunch of other illnesses. That is not the point. The point is that high fever is, often, a great early-warning indicator. QCM is used to measure complexity and complexity is, like body temperature, a great indicator of trouble, especially if it rises suddenly.

Another common misunderstanding concerns the data QCM needs. QCM works with raw data collected from the sensors of a given system, for example:

  • aircraft flight parameters
  • vital signs of a patient in Intensive Care
  • stock prices
  • voltages measured at certain points in an electricity distribution grid

What do these systems have in common? Nothing, except for the fact that when they enter a state of crisis their respective complexities will rise quickly, sometimes even jump. No additional knowledge is required to measure complexity, only the raw data streaming from a set of sensors. Just to make things very clear:

QCM operates on raw sensor data – no additional knowledge is required
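
Since the only input is raw data, a minimal sketch may make the idea concrete. The spectral-entropy proxy below is an assumption, not Ontonix’s proprietary QCM metric; it merely shares the flavor of producing a single complexity figure that moves when the pattern of inter-channel relationships in a window of raw readings changes.

```python
import numpy as np

def complexity_proxy(window: np.ndarray) -> float:
    """Illustrative complexity figure for one window of raw readings.

    `window` has shape (n_samples, n_channels). The value returned is the
    Shannon entropy of the normalized eigenvalue spectrum of the channel
    correlation matrix: it moves when couplings between channels appear,
    strengthen or break down. NOTE: this is an assumed stand-in, NOT the
    proprietary QCM metric.
    """
    corr = np.corrcoef(window, rowvar=False)   # channels as variables
    corr = np.nan_to_num(corr)                 # guard against flat channels
    eig = np.linalg.eigvalsh(corr)             # symmetric matrix, real spectrum
    eig = np.clip(eig, 1e-12, None)            # numerical safety for the log
    p = eig / eig.sum()                        # spectrum as a distribution
    return float(-(p * np.log(p)).sum())       # spectral entropy
```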

 

An example may help clarify. An epileptic seizure can be recorded in the form of an electroencephalogram (EEG); an example is shown below. EEG is commonly recorded at sampling rates between 250 and 2000 Hz in clinical and research settings, but modern EEG data collection systems are capable of recording at sampling rates above 20,000 Hz. The example below is a 16-channel EEG. A typical adult human EEG signal is about 10 µV to 100 µV in amplitude when measured from the scalp. This is what the 16 curves represent – raw voltage data.

Based on these raw voltage measurements – and NOTHING else – the complexity of the EEG is computed. The result is the three curves below the EEG plot. Without going into details, the green curve represents the value of EEG complexity. It may be observed that complexity is almost zero until step 4 (dashed black vertical line). After step 4 complexity rises sharply. At step 11 the seizure commences. QCM triggers an alarm 7 steps before the event becomes visible. Another example from medicine: the symptoms of a systemic (bloodstream) infection often become visible hours after it commences, when it is sometimes already too late. The bottom line is:

QCM anticipates seizure based on raw sensor data only – no additional clinical information is required
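
Continuing the hypothetical sketch above, an early-warning loop could slide the proxy over the 16-channel recording and fire as soon as the value jumps well clear of its recent baseline. The window length, step and 3-sigma rule are illustrative choices, not parameters of the real QCM engine; note that nothing here matches a known seizure pattern, it only flags a sudden rise in complexity.

```python
import numpy as np  # complexity_proxy is the sketch shown earlier

def early_warning(stream: np.ndarray, win: int = 512, step: int = 256,
                  n_sigma: float = 3.0, warmup: int = 8):
    """Return the sample index where an alarm first fires, or None.

    `stream` has shape (n_samples, n_channels). An alarm fires when the
    complexity of the latest window exceeds the running baseline by
    `n_sigma` standard deviations: a complexity jump, not a match
    against any known fault pattern.
    """
    history = []
    for start in range(0, len(stream) - win + 1, step):
        c = complexity_proxy(stream[start:start + win])
        if len(history) >= warmup:                 # need a baseline first
            mu, sd = np.mean(history), np.std(history) + 1e-12
            if c > mu + n_sigma * sd:
                return start + win                 # warning, ahead of the event
        history.append(c)
    return None
```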

 

No clinical information on seizures has been given to the QCM algorithm to anticipate the above seizure. It was all done based on raw voltage data. Clear? So, just to make sure:

QCM is not about Artificial Intelligence

QCM is not about Machine Learning

QCM does not predict anything

QCM works using raw data

 

And finally, because QCM requires only (streaming) raw data, and because rising complexity is a symptom of trouble regardless of the type of system being treated:

QCM is application-independent
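
As a closing usage note on the sketches above: since the functions consume nothing but an array of raw readings, switching system means switching only the input. The arrays below are random placeholders standing in for the data sources listed earlier.

```python
import numpy as np

# Placeholder arrays standing in for the data sources listed earlier;
# each has shape (n_samples, n_channels) and arbitrary units.
eeg_voltages      = np.random.randn(10_000, 16)   # scalp EEG, µV
flight_parameters = np.random.randn(10_000, 40)   # aircraft sensors
vital_signs       = np.random.randn(10_000, 8)    # ICU monitor

# The very same early-warning loop runs unchanged on all of them:
for name, stream in [("EEG", eeg_voltages),
                     ("flight", flight_parameters),
                     ("ICU", vital_signs)]:
    alarm = early_warning(stream)                 # from the sketch above
    print(name, "alarm at sample", alarm)
```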

 

So, the next time you see the complexity of your system jump suddenly, watch out: you may be in trouble.

 

More information at www.ontonix.com

Established originally in 2005 in the USA, Ontonix is a technology company headquartered in Como, Italy. The unusual technology and solutions developed by Ontonix focus on countering what most threatens safety, advanced products, critical infrastructures and IT network security: the rapid growth of complexity. In 2007 the company was recognized as a Gartner Cool Vendor. What makes Ontonix different from all those companies and research centers that claim to manage complexity is that we have a complexity metric. This means that we MEASURE complexity. We detect anomalies in complex defense systems without using Machine Learning for one very good reason: our clients don’t have the luxury of multiple examples of failures necessary to teach software to recognize them. We identify anomalies without having seen them before. Sometimes, you must get it right the first and only time!
