How Psychology Can Help You in Data Analysis

The Art of Data Analysis

Reading Moneyball[1] by Michael Lewis is enough to understand the enormous improvement that data and models can achieve compared to gut instinct. However, as the same author explains in The Undoing Project, such a model has its own limitations, which make it underperform human judgment in certain cases. He tells the story of Daryl Morey, who used data and statistics to choose the best players for the NBA team Houston Rockets. Morey managed to improve the selection of NBA players, but at a certain point he understood he had reached the limits of the model. “The trick wasn’t just to build a better model. It was to listen both to it and to the scouts at the same time. ‘You have to figure out what the model is good and bad at, and what humans are good and bad at,’ said Morey. Humans sometimes had access to information that the model did not, for instance.”[2]

However, if we are going to include human judgment in our models, we first have to understand its limitations, as we have done for data models. Luckily for us, in the seventies Amos Tversky and Daniel Kahneman (Nobel laureate in Economics) started analyzing how people make decisions and the biases in their judgments and choices. They stated that “these heuristics are highly economical and usually effective, but they lead to systematic and predictable errors.”[3] This means that we can improve our models by taking these biases into account.

Emotions play a crucial role in people’s behavior and, therefore, in their choices. Instead of pure economic utility, people try to maximize the pleasure of their emotional states, seeking happiness and avoiding regret. For example, we value an object more if it is ours, because of the emotional attachment we feel (the endowment effect). Its value is not merely economic; it is the sum of monetary value and emotional value. Another example is the price we pay to avoid risk with insurance, even when the bare economic value of probabilities and outcomes suggests we would be better off without it.

Besides our peculiar way of estimating utility, we tend to misinterpret and misuse information. Sometimes data is incomplete because we only see the things that happen, not the things that don’t, yet we act as if we had access to complete information. When we apply a business strategy and it works, we see the choice as a success, but we cannot compare it with the results of the alternative strategies we never tried.

We not only perceive incomplete data as complete, but we also filter information in a biased way. This is confirmation bias: we tend to see what we expect to see. We like consistency with our beliefs, so when we receive new information we accept it at once if it is in line with them, but fiercely challenge it if it is not. Moreover, when faced with ambiguous information, we usually notice only the aspects that agree with our beliefs. And the problem is not just biased filtering: we also select our information and our sources in a biased manner, favoring the ones closest to our thinking. For instance, when we try to prove something, if the first information we find confirms it, we stop searching; if it does not, we keep looking for new data. The problems don’t end here, though. Even when data is complete and we don’t filter it, we can be biased by our interpretation of probability, statistics, and Bayesian inference.

We know mathematically how much 1% is, but our feeling of this probability for an event is higher than the actual number. Would you prefer €200,000 for sure, or a 99% chance of winning €400,000 with a 1% chance of getting nothing? Are you in doubt? The first option has an expected value of €200,000, while the second has 0.99 × €400,000 = €396,000. I guess you are still in doubt about the choice you would make. On the one hand, we place more weight on the 1% probability and less weight on the 99%. On the other hand, in this situation we also feel risk aversion, for fear of the disappointment of receiving nothing. This would not be the case if we were offered the same choice many times. This concept is useful, for example, in setting the price gap between a refundable tariff and a nonrefundable one (for airline tickets, hotel stays, etc.). Amos Tversky and Daniel Kahneman carried out a study in which they estimated the weight people assign to probabilities. People tend to overweight small probabilities because of what they called the “possibility effect,” which explains why people play lotteries: progressing from something impossible (0%) to something improbable but possible (1%) is a huge step in our minds. The opposite happens with high probabilities, which people underweight (the “certainty effect”).

Probability vs. decision weight (Source: Adapted from Daniel Kahneman, Thinking, Fast and Slow)
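To make the curve above concrete, here is a minimal sketch in Python of the probability weighting function from Tversky and Kahneman’s cumulative prospect theory, w(p) = p^γ / (p^γ + (1 − p)^γ)^(1/γ). The curvature parameter γ = 0.61 is their published estimate for gains; treat it as an assumption, since other studies fit somewhat different values.

```python
# A sketch of the probability weighting function from cumulative prospect
# theory (Tversky and Kahneman, 1992):
#     w(p) = p^g / (p^g + (1 - p)^g)^(1 / g)
# gamma = 0.61 is their estimate for gains; it is an assumption here.

def decision_weight(p: float, gamma: float = 0.61) -> float:
    """Map an objective probability p to its subjective decision weight."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.50, 0.95, 0.99):
    print(f"p = {p:.2f} -> weight = {decision_weight(p):.3f}")

# The sure €200,000 vs. the 99% gamble for €400,000 discussed above:
objective_ev = 0.99 * 400_000                     # €396,000
weighted_value = decision_weight(0.99) * 400_000  # ≈ €365,000
print(f"objective EV: €{objective_ev:,.0f}, weighted value: €{weighted_value:,.0f}")
```

With these numbers a 1% chance feels like roughly 5.5%, and the weighted value of the gamble drops from €396,000 to about €365,000, still above the sure €200,000: the weighting alone narrows the gap, and the risk aversion mentioned above does the rest.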

The two authors combined this finding with the fact that we react differently to gains and to losses, producing a matrix Kahneman calls “the fourfold pattern.” This matrix explains situations such as lotteries or insurance, where people make choices that would look “irrational” from the numbers alone.

The fourfold pattern (Source: Adapted from Daniel Kahneman, Thinking, Fast and Slow)
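The same weighting function can reproduce two cells of the fourfold pattern. The sketch below (again assuming γ = 0.61; the €100,000 stake and 1% probability are arbitrary illustrative numbers) shows why a lottery ticket can feel worth several times its expected value, and why an insurance premium well above the expected loss can still feel like a good deal.

```python
# Two cells of the fourfold pattern, reusing the same weighting function
# as in the previous sketch (gamma = 0.61 is still an assumed parameter).

def decision_weight(p: float, gamma: float = 0.61) -> float:
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p, stake = 0.01, 100_000  # 1% chance of a €100,000 prize or loss (illustrative)

# Low-probability gain (lottery): the prize is overweighted, so the ticket
# feels worth far more than its expected value -> risk seeking.
print(f"lottery: EV €{p * stake:,.0f} vs felt value ≈ €{decision_weight(p) * stake:,.0f}")

# Low-probability loss (insurance): the loss is overweighted too, so a
# premium above the expected loss still feels acceptable -> risk aversion.
print(f"insurance: fair premium €{p * stake:,.0f} vs feared loss ≈ €{decision_weight(p) * stake:,.0f}")
```

The other two cells are the mirror image: near-certain gains are underweighted, producing risk aversion (we lock in the sure thing), while near-certain losses are underweighted as well, producing risk seeking (we gamble to avoid the loss).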

Related to probabilities is the misuse of statistics. Daniel Kahneman describes a study conducted in the United States to decide which schools to invest in. According to the results, the most successful schools were, on average, small. Therefore, huge investments were made to create small schools, ignoring the fact that the results of small schools are more variable and that, most probably, the least successful schools were also small, on average. The problem is that we tend to replace reasoning about sampling error with an intuitive causal explanation of the results. In this story it may seem reasonable that small schools perform better because of more personalized attention and encouragement to students. However, if the results had been the opposite, we could just as easily have created another story, saying that larger schools are better because of their resources, curricular offerings, and so forth.
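A quick simulation makes the sampling-error point tangible. In the sketch below every school draws its students from exactly the same score distribution, so any ranking of school averages is pure noise; the school sizes and the N(500, 100) score distribution are arbitrary assumptions for illustration.

```python
# Small-schools effect: identical score distributions, different sample sizes.
import random

random.seed(42)

def school_mean(n_students: int) -> float:
    """Average score of one school; every student comes from the same N(500, 100) pool."""
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

# 500 small schools (50 students each) and 500 large ones (2,000 students each)
schools = [("small", school_mean(50)) for _ in range(500)]
schools += [("large", school_mean(2000)) for _ in range(500)]

schools.sort(key=lambda s: s[1], reverse=True)
top20 = [size for size, _ in schools[:20]]
bottom20 = [size for size, _ in schools[-20:]]

print("top 20 schools:   ", top20.count("small"), "small,", top20.count("large"), "large")
print("bottom 20 schools:", bottom20.count("small"), "small,", bottom20.count("large"), "large")
```

Small schools crowd both the top and the bottom of the ranking for the same reason: an average over 50 students is far noisier than an average over 2,000, not because small schools teach better or worse.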


[1] Michael Lewis, Moneyball: The Art of Winning an Unfair Game (New York: W. W. Norton, 2003).

[2] Michael Lewis, The Undoing Project (New York: Penguin, 2017).

[3] Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (September 1974): 1124–31.

