Objectively, relying more on data and statistics can enormously improve decision-making. Read Moneyball and you’ll grasp their immense potential. However, even if this trend is well justified, it has had a detrimental effect on the reputation of human intuition and the humanities.
To reach a great analytical level, the data analyst needs to master not only conventional analytical skills (math, statistics, etc.) but also unconventional abilities, namely those that can push him or her beyond conventional quantitative analysis. In addition, it is extremely important for the analyst to know in which circumstances one type of analysis should be preferred over another.
On the one hand, the type of analysis depends on the type of data (qualitative, quantitative, or big data). Qualitative data usually comes from qualitative market research methods and is recorded as text, pictures, videos, podcasts, and so forth. Quantitative data comes either from research methods such as surveys or from tools that keep track of business operations, customer interactions, and so forth. Big data concerns the availability of great volumes of data in structured and/or unstructured format (e.g., data from sensors).
On the other hand, the type of analysis depends on the level of uncertainty, namely our underlying knowledge and prior research on the topic. Mousavi and Gigerenzer described three different processes in decision-making under risk and uncertainty. These processes are not exclusive—in other words, they can overlap in several situations. Under risk, you can use deductive analysis by applying prior probabilities to the problem. An example of deductive analysis is estimating the ROI of a campaign based on the average ROI of similar campaigns. This implies that we have some previous knowledge about the subject. Alternatively, under risk you can use inductive reasoning by applying statistical inference. Using the previous example, you would test the campaign with a sample of your customers and infer the results to your entire customer base.
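The two processes above can be sketched in a few lines of Python. All figures below (past ROIs, sample size, conversions) are invented for illustration; the deductive step simply averages prior campaigns, while the inductive step builds a standard 95% confidence interval from a customer sample.

```python
import math
import statistics

# Deductive analysis under risk: estimate the ROI of a new campaign
# from the average ROI of similar past campaigns (hypothetical figures).
past_campaign_rois = [0.12, 0.18, 0.15, 0.10, 0.20]
expected_roi = statistics.mean(past_campaign_rois)  # prior-based estimate

# Inductive reasoning under risk: test the campaign on a sample of
# customers, then infer the conversion rate for the whole customer base
# via a 95% confidence interval (normal approximation).
sample_size = 400   # customers in the test (assumed)
conversions = 60    # observed conversions (assumed)
p_hat = conversions / sample_size
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
ci_low, ci_high = p_hat - margin, p_hat + margin
```

The deductive estimate leans entirely on prior knowledge, while the inductive interval quantifies how far the sample result might stray when extrapolated to the full base.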
If you lack prior knowledge, the appropriate data for inferential statistics, or a default answer in hypothesis testing, then risk becomes uncertainty, which cannot be dealt with using statistics or probabilities. Under this level of uncertainty, you need a different approach: abductive reasoning. Philosophers may refer to it as the “inference to the best explanation,” namely your “best guess” when you have neither previous information nor statistical proof. There are several methods you can use here. If you have some data at your disposal, use analytics. By analytics, I mean “exploratory data analysis” or “data mining” using tables, graphs, comparisons, evolutions, and so forth to identify a plausible answer. Analytics is also the best way to start if you don’t have a clear question for the analysis. If you have no data (or too little) and/or it is unreliable (too much noise, errors, etc.), use heuristics or the Black Swan approach proposed by Nassim Nicholas Taleb.
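A minimal sketch of the exploratory step, using invented transaction records: aggregate and compare revenue by region to surface a plausible explanation rather than a statistically proven one. The data and the "region" breakdown are assumptions for the example.

```python
from collections import defaultdict

# Hypothetical transaction records: (region, revenue). In practice this
# would come from a database export or an operations log.
transactions = [
    ("north", 120), ("north", 95), ("south", 40),
    ("south", 35), ("east", 110), ("east", 130),
]

# Exploratory comparison: total and average revenue per region, ranked.
# The goal is a plausible answer (e.g., "the south underperforms"),
# not a formal inference.
totals = defaultdict(float)
counts = defaultdict(int)
for region, revenue in transactions:
    totals[region] += revenue
    counts[region] += 1

averages = {region: totals[region] / counts[region] for region in totals}
ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
```

The ranking itself is the output: it points the analyst toward where to look next, which is exactly the role exploratory analysis plays under uncertainty.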
Finally, if you have at your disposal a huge amount of data in a constant stream (what you may know as big data), machine learning is probably the best option. This kind of analysis uses examples instead of instructions to give you an answer. For instance, the more pictures of a dog you provide as examples, the more accurately the algorithm will recognize a new picture of a dog as a “dog.”
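The "examples instead of instructions" idea can be illustrated with a toy 1-nearest-neighbor classifier. The feature vectors and labels below are invented for the sketch; a real image classifier would learn from pixel data with a far richer model, but the principle is the same: nothing in the code spells out what a dog is, the labeled examples do.

```python
import math

# Labeled examples: (feature vector, label). Features here are assumed
# (weight in kg, a made-up "ear floppiness" score) purely for illustration.
examples = [
    ((30.0, 0.5), "dog"),
    ((25.0, 0.6), "dog"),
    ((4.0, 0.1), "cat"),
    ((5.0, 0.2), "cat"),
]

def classify(features):
    """Label a new item by its closest labeled example (Euclidean distance)."""
    nearest = min(examples, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]
```

Adding more labeled examples improves the answers without changing a single instruction in `classify`, which is the essential contrast with the rule-based analyses discussed earlier.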
 Shabnam Mousavi and Gerd Gigerenzer, “Risk, Uncertainty, and Heuristics,” Journal of Business Research 67, no. 8 (2014): 1671–78.
 Elliot Sober, Core Questions in Philosophy: A Text With Readings, 6th edition (Upper Saddle River, NJ: Pearson, 2012).