The two most important events of this year were Brexit and the election of the President of the United States. In the case of Brexit, the polls pointed to a clear ‘Remain’, yet the result was ‘Leave’. In the case of the US election, Clinton was going to win. Not only did she lose, Trump won the Electoral College by a clear margin.
How is it possible to make such macroscopic mistakes about events of such huge, global impact and relevance? Why did Predictive Analytics, Big Data, semantic intelligence, statistical modeling, forecasting and the multitude of other supposedly predictive tools get it so dramatically wrong? How is it possible that, given remarkable computational firepower, armies of talented scientists and almost unlimited funding, they would still get it so very wrong? We believe we can explain why. Here we go.
Math models – often used to make forecasts and predictions – are based on hypotheses and assumptions. But one cannot use a model unless one has checked that these assumptions actually hold. This is rarely done. Second, in a rapidly changing world a model may be valid one day, only to collapse the following day. Models attempting to capture a rapidly changing phenomenon must be adjusted and updated (and validated!) equally quickly. Again, this is rarely done.
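As an illustration of the kind of check that is rarely done, here is a minimal sketch (our own hypothetical code, not part of any particular forecasting pipeline) that compares the statistics of a calibration window with those of the most recent data and flags the model for re-validation when they drift apart. The window lengths and thresholds are arbitrary assumptions chosen purely for illustration.

```python
import numpy as np

def assumptions_still_hold(series, calib_len=200, recent_len=50, z_threshold=3.0):
    """Crude drift check: does recent data still resemble the calibration data?

    All window lengths and thresholds are illustrative assumptions, not a
    validated statistical test.
    """
    calib = series[:calib_len]
    recent = series[-recent_len:]

    calib_mean, calib_std = calib.mean(), calib.std(ddof=1)
    recent_mean, recent_std = recent.mean(), recent.std(ddof=1)

    # Drift of the mean, measured in standard errors of the recent window
    mean_drift = abs(recent_mean - calib_mean) / (calib_std / np.sqrt(recent_len))
    # Variance ratio as a rough proxy for a regime change
    var_ratio = (recent_std / calib_std) ** 2

    return mean_drift < z_threshold and 0.5 < var_ratio < 2.0

rng = np.random.default_rng(0)
stable = rng.normal(0.0, 1.0, 250)
shifted = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 50)])

print(assumptions_still_hold(stable))   # expected: True  - assumptions still hold
print(assumptions_still_hold(shifted))  # expected: False - re-validate the model
```

In practice one would use proper statistical tests; the point is simply that a check of this kind has to be run at all, and re-run as new data arrive.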
Models must be realistic, not precise. The concepts of precision and accuracy are deeply rooted in university curricula, even though reality would suggest that in turbulence one should focus on other, more relevant things. Playing extravagant video games with eight-decimal precision is possible but, essentially, irrelevant.
Experience confirms that the most important parameters in a model are those it doesn’t contain. However, building realistic math models requires considerable experience, which, generally speaking, is not easy to come by. Models are full of assumptions and simplifications introduced so that their formulation is easier or simply computationally cheaper. This means that in the process of model building, reality is often bent to satisfy the tool.
The problem with a math model, and with mathematics in general, is that it doesn’t ‘complain’ even if you plug senseless data into it. Unlike physics, mathematics only needs to respect its own ‘grammatical’ rules and syntax. This makes it possible to dream up any fancy equation and to feed it all sorts of garbage. If the model is not the reflection of a physical phenomenon, i.e. one that obeys the laws of physics, it is very difficult to validate it and to measure its degree of credibility. If one cannot estimate the validity of a model, its numerical conditioning and its relevance, one is playing a video game.
An example. Take a man and a dog. On average they have three legs each. The calculation of a mean value is a simple and innocent operation. What can go wrong with a mean value? In the case in question, the operation is simply not applicable. It is physically unjustified. Tools must be appropriate for the problem at hand. Because statistics doesn’t need to respect the laws of physics (and sometimes not even common sense) it is often excessively ‘brutal’. In the case of the man and his dog it is easy to spot the silliness of the result. However, in highly complex, multi-dimensional situations with thousands of variables, this is not easy unless one has specific knowledge and experience. You just can’t leave certain things to a computer. It is fine to profess ‘science, not opinions’, as we do, but one must also choose the right science for the right problem.
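To see just how little mathematics ‘complains’, here is the man-and-dog average in three lines of Python; the arithmetic is flawless, the question is not:

```python
legs = [2, 4]                       # a man and a dog
mean_legs = sum(legs) / len(legs)   # 3.0 - correct arithmetic, meaningless biology
print(mean_legs)                    # no creature in the sample has three legs
```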
So, what went wrong? We indicate two main causes:
- The way correlation is measured – mainly via the Pearson, i.e. linear, correlation coefficient – is not applicable in non-stationary, turbulent and chaotic contexts. Linear correlation is obviously not applicable to non-linear problems, and if one uses it anyway the result is a placebo inducing a false sensation of control (see the first sketch after this list). This is why we have developed a more relevant means of measuring correlation, the so-called generalized correlation.
- The way dispersion is measured. The standard deviation – the conventional approach to quantifying scatter – measures the most probable dispersion around the mean. This presupposes that a meaningful mean exists. But remember the man and the dog: a mean value may be senseless (see the second sketch after this list). Instead of the standard deviation we prefer entropy as a more general and relevant measure of dispersion. Not everything in life has a Gaussian distribution, and not everything is linear.
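To make the first point concrete, here is a small sketch (purely illustrative, and not the generalized correlation mentioned above) in which the Pearson coefficient reports essentially no relationship for a perfectly deterministic, but non-linear, dependence:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 10_000)
y = x ** 2                        # y is completely determined by x

pearson = np.corrcoef(x, y)[0, 1]
print(round(pearson, 3))          # close to 0: 'no linear correlation',
                                  # despite a perfect non-linear dependence
```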
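And to make the second point concrete, a sketch of a strongly bimodal sample, where the standard deviation is computed around a mean that almost no observation is anywhere near. The Shannon entropy of a simple histogram (used here only as a stand-in for a more general dispersion measure, not as the measure we use in practice) does not rely on that mean at all:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated regimes: the man-and-dog situation in disguise
sample = np.concatenate([rng.normal(-5.0, 0.3, 5_000),
                         rng.normal(+5.0, 0.3, 5_000)])

mean = sample.mean()              # close to 0, where almost no data lives
std = sample.std(ddof=1)          # about 5: dispersion 'around' a senseless mean

# Shannon entropy of a histogram: a distribution-free view of dispersion
counts, _ = np.histogram(sample, bins=50)
p = counts / counts.sum()
p = p[p > 0]
entropy_bits = -(p * np.log2(p)).sum()

print(round(mean, 2), round(std, 2), round(entropy_bits, 2))
```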
Linear correlations and standard deviations are the fundamental building blocks of statistical models, which are used in countless applications, ranging from finance and economics to medicine and social studies. Statistics is like a bikini: it shows something interesting but hides the essential. It takes experience to recognize what is interesting and what is essential. Because of the immense power and versatility of statistics, just as with certain medicines, it should be kept away from small children.
Entropy and generalized correlations are the core of our model-free approach to data analysis. We measure complexity and resilience, two fundamental dimensions that form the backbone of a new philosophy in finance, economics, portfolio design, asset management, risk assessment or corporate strategy. Complexity is a measure of the degree of evolution and sophistication of a system and quantifies the amount of structured information in a system. Its units are cbits (complexity bits). Resilience, on the other hand, measures the resistance to shocks. Both complexity and resilience of, say, a portfolio, a fund or a corporation, are, for obvious reasons, of paramount importance and relevance in a turbulent and complex economy.
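The precise definition of our complexity metric is outside the scope of this post. Purely to illustrate the idea of ‘structured information’, and emphatically not as our cbits measure, one could, for example, sum the pairwise mutual information between the variables of a dataset; the hypothetical sketch below does exactly that with plain histograms.

```python
import numpy as np
from itertools import combinations

def mutual_information_bits(a, b, bins=16):
    """Histogram estimate of the mutual information between two series, in bits."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float((pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero])).sum())

def structured_information(data):
    """Toy proxy for 'structured information': total pairwise mutual information.

    Illustrative only - this is NOT the cbits complexity measure discussed above.
    """
    _, n_vars = data.shape
    return sum(mutual_information_bits(data[:, i], data[:, j])
               for i, j in combinations(range(n_vars), 2))

rng = np.random.default_rng(3)
noise = rng.normal(size=(5_000, 4))                            # independent variables
coupled = noise.copy()
coupled[:, 1] = coupled[:, 0] + 0.1 * rng.normal(size=5_000)   # add structure

print(round(structured_information(noise), 2))    # small: little shared information
print(round(structured_information(coupled), 2))  # larger: coupling adds structure
```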
This is why the future lies in model-free methods. The analysis of highly complex systems and phenomena (the economy, finance, society, climate, traffic systems, etc.) cannot be done by building math models. Our model-free methods, on the other hand, are data-centric, which means that a model doesn’t need to be built. The immense consequences of this deserve a separate blog. Stay tuned.
PS. Following the numerous comments we have received, we believe the following must be said.
Take an event which can have only two outcomes, e.g. Brexit (yes or no), an election (candidate A or B), the Greek referendum (yes or no), etc. Suppose that N individuals attempt to predict the result, and that each uses a different method. If N is large enough, say tens or hundreds, it is almost certain that someone will correctly predict the outcome of such an event.
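A quick back-of-the-envelope check (a sketch, assuming for simplicity that each forecaster does no better than a coin toss): with N independent forecasters, the probability that at least one of them calls a binary event correctly is 1 - (1/2)^N, which already exceeds 99.9% for N = 10.

```python
# Probability that at least one of N coin-tossing forecasters gets it right
for n in (1, 5, 10, 100):
    print(n, 1 - 0.5 ** n)
# 1 0.5
# 5 0.96875
# 10 0.9990234375
# 100 1.0 (indistinguishable from certainty at double precision)
```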
Now, consider also that these events happen only once. There is only one Trump/Clinton election, only one Brexit referendum. How, then, can anyone claim that a particular method ‘works’ if it is applied only once to a given and unique scenario? How can one be sure that it wasn’t just luck? Well, one simply cannot be sure.
Last but not least, humans have been known to lie in surveys. Your model may be great, but if you feed it garbage…
Wittgenstein observed that whereof one cannot speak, thereof one must be silent. A similar logic seems to apply to certain classes of problems: things that cannot be modeled should not be modeled, and things that cannot be predicted should not be predicted.