Recent experiments with ChatGPT on the issues of complexity, Artificial Intuition and early warnings have provided interesting results. Two topics were submitted:
“System state of health using complexity“
Answer: The state of health can be understood through various complexity measures. Complexity measures can provide insights into various aspects of a system, including its structure, function, and behavior. However, the specific complexity measures used to evaluate the state of health can vary depending on the type of system and the aspect of health being considered.
For example, in the context of a biological system, complexity measures such as fractal dimension or entropy can be used to quantify the degree of order and organization in a system, and to track changes in these properties over time. These measures can provide insight into the health of the system by indicating whether it is functioning in a stable, ordered manner, or whether it is undergoing stress or disease processes.
In the context of a social system, complexity measures such as network analysis or information entropy can be used to evaluate the interactions and relationships between individuals and groups, and to identify patterns of behavior that may indicate the presence of social problems or challenges.
In the context of an ecological system, complexity measures such as diversity and resilience can be used to evaluate the health of the system by quantifying the number and types of species present, and the ability of the system to recover from disturbances.
In summary, the state of health can be evaluated through a variety of complexity measures, depending on the type of system and the aspect of health being considered. These measures can provide valuable insights into the functioning of the system, and can be used to track changes over time, helping to identify potential problems and challenges.
Comments:
The main issue with the above statement is that complexity has one measure, not “various measures”. Complexity is structured information (or the “structure of information entropy”). It is measured in bits and it does not depend on the context. It provides a measure of how much information is encoded within a given system. Think of energy, or mass: these are properties that are independent of context (unless we take relativistic effects into account). Energy is measured in joules and power in watts, and it does not matter how either has been generated; it does not depend on the context. A megawatt is a megawatt.
The same may be said of information. For example, the human genome, at 3.2 billion BP (base pairs), encodes 6.4 billion bits (2 bits per base pair), or roughly 800 megabytes. It does not depend on the context.
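A quick back-of-the-envelope check of that arithmetic (a minimal sketch in Python; the 2 bits per base pair simply reflect the four possible bases):

# Rough information content of the human genome, independent of context.
BASE_PAIRS = 3.2e9       # approximate number of base pairs
BITS_PER_BP = 2          # four possible bases -> log2(4) = 2 bits

bits = BASE_PAIRS * BITS_PER_BP
megabytes = bits / 8 / 1e6

print(f"{bits:.2e} bits ~ {megabytes:.0f} MB")   # 6.40e+09 bits ~ 800 MB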
Saying that different complexity measures apply to different systems is precisely the kind of approach that foments fragmentation. Science makes important steps when one is able to generalize, not fragment. Imagine if we had different energy measures for different systems.
Statements like “complexity measures such as diversity and resilience” are hugely confusing. Resilience is not a measure of complexity. Resilience is a well-defined physical property of systems, and there even exist machines to measure the resilience of materials.
Finally, the statement “complexity measures ….. by quantifying the number and types of species present” is incorrect, to say the least. The number of components in a biosphere or an ecosystem is not a measure of its complexity. Counting the number of parts is not equivalent to measuring the complexity of any system.
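To make the last point concrete, here is a minimal illustration (it uses plain Shannon entropy, not the Ontonix complexity measure, and purely hypothetical numbers): two ecosystems with exactly the same number of species can encode very different amounts of information, so counting the parts by itself settles nothing.

import math

def shannon_bits(abundances):
    """Shannon information, in bits, of a relative-abundance distribution."""
    total = sum(abundances)
    probs = [a / total for a in abundances if a > 0]
    return -sum(p * math.log2(p) for p in probs)

# Two hypothetical ecosystems, each with exactly four species ("parts").
even_ecosystem = [25, 25, 25, 25]    # abundances evenly spread
skewed_ecosystem = [97, 1, 1, 1]     # one species dominates

print(shannon_bits(even_ecosystem))    # 2.0 bits
print(shannon_bits(skewed_ecosystem))  # ~0.24 bits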
“Prealarm using Artificial Intuition”
Answer: Artificial intuition is a concept that refers to the ability of AI systems to make predictions or decisions based on their experiences and interactions, without relying on explicit rules or logic. The idea is to replicate the human ability to instinctively know the right course of action in a given situation, using machine learning algorithms and neural networks.
In the context of pre-alarms, artificial intuition could be used to analyze data from various sources, such as sensor readings, historical patterns, and real-time data, to identify potential threats or risks. The system would then generate a pre-alarm, indicating that a particular event or situation may require attention or intervention.
While the concept of artificial intuition is promising, it is still in the early stages of development, and there are many challenges that need to be addressed before it can be widely adopted in real-world applications. Some of these challenges include improving the accuracy of predictions, ensuring robustness and reliability, and addressing ethical and privacy concerns.
Overall, the use of artificial intuition in pre-alarms has the potential to greatly enhance the speed and efficiency of risk detection and management, and could have important applications in areas such as security, public health, and emergency response.
Comments:
The statement “replicate the human ability to instinctively know the right course of action in a given situation, using machine learning algorithms” describes learning, not intuition. It implies that one can “learn intuition”. As Ontonix implements it, Artificial Intuition does not require any learning or experience. That is the whole idea – to know immediately what to do, without resorting to historical data or patterns. It all happens in real time.
The statement “the concept of artificial intuition is promising, it is still in the early stages of development” is incorrect. The Artificial Intuition tool OntoNet™ has been established technology since 2005: not a concept, not an idea, not a working prototype. It has been used and tested in hundreds of projects and applications, with tens of clients all over the globe. There is even a specific chip – built by SAIC and Ontonix – that runs Artificial Intuition in real time. Read more here.
The applicability of Artificial Intuition is not limited to “areas such as security, public health, and emergency response”. In fact, precisely because Artificial Intuition reacts instantly, without the need for any form of training, it is applicable to many more contexts, such as medicine, intensive care, defense, battle management, anomaly detection, maintenance and traffic management, just to name a few.
Read more here.

Conclusions
Artificial Intuition and, in particular, Complexity are concepts that remain confusing to mainstream tech media and the AI/ML community. This is due mainly to the immense fragmentation and proliferation of the so-called “complexity measures” – see our blog on the survey of complexity “measures” – which never actually quantify anything. The words Intelligence, Instinct and Intuition represent different things and should be used with care in order to avoid confusion.
ChatGPT also does not understand risk. I asked ChatGPT the following Ontonix question:
Suppose that in a certain portion of a motorway a radar is mounted with the intent of catching those who enjoy speeding. If you’re 20-30 km/h above the limit, you can expect a fine of around 100 Euros. How much am I risking if I go beyond the limit?
Answer from ChatGPT:
If you go beyond the speed limit by 20-30 km/h on a motorway where there is a radar to catch speeding vehicles, you can expect a fine of around 100 Euros.
The exact amount of the fine may depend on various factors, such as the specific speed limit in that portion of the motorway, the speed at which you were driving, and the local laws and regulations governing traffic fines.
It’s important to note that speeding not only puts you at risk of getting a fine, but also increases the risk of accidents and can cause harm to yourself and other drivers. Therefore, it is always advisable to obey speed limits
Sorry, this is the question:
Suppose that in a certain portion of a motorway a radar is mounted with the intent of catching those who enjoy speeding. If you’re 20-30 km/h above the limit, you can expect a fine of around 100 Euros. It is known from statistics that on that portion of the motorway on average one driver out of ten gets caught. How much am I risking if I speed beyond the limit?
If on average one driver out of ten gets caught for speeding on that portion of the motorway, and you are driving 20-30 km/h above the limit, then your chance of getting caught would be approximately 10%.
Assuming the fine for speeding 20-30 km/h above the limit is around 100 Euros, your expected cost for speeding can be calculated as follows:
Expected cost = Probability of getting caught x Cost of getting caught
= 10% x 100 Euros
= 0.10 x 100 Euros
= 10 Euros
Therefore, if you speed 20-30 km/h above the limit on that portion of the motorway, you are risking an expected cost of approximately 10 Euros in terms of the fine. However, it’s important to remember that speeding can also increase the risk of accidents, which could result in much higher costs in terms of damage to your vehicle or harm to yourself or others. It’s always advisable to follow speed limits and drive safely.
My comment:
You simply risk the 100 Euros, not 10 Euros. The probability is irrelevant: it tells you how likely you are to incur the fine, not how much is at stake when you do.
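A minimal sketch of the distinction, using the hypothetical numbers from the question above: the expected cost that ChatGPT computed and the amount actually at stake are two different quantities.

FINE_EUR = 100     # the fine if you are caught
P_CAUGHT = 0.10    # one driver out of ten gets caught

expected_cost = P_CAUGHT * FINE_EUR   # 10 Euros: an average over many trips
amount_at_stake = FINE_EUR            # 100 Euros: what you risk on any single trip

print(f"Expected cost:   {expected_cost:.0f} Euros")
print(f"Amount at stake: {amount_at_stake:.0f} Euros")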