
Irrefutable uncertainties

June 15, 2020

A study published in “Nature”, involving two FBK researchers, investigated the extent to which different research groups can reach different conclusions when analyzing the same data.

In these times, science is more than ever at center stage. Virologists, physicians and other experts are constantly interviewed by newspapers and hosted on TV shows, and they often disagree with each other. This in particular sometimes causes bewilderment in public opinion and in politics, which demand univocal or even “irrefutable” answers from science, to quote a minister. But the Galilean scientific method, based on the reproducibility of experimental results and, more generally, on a constantly critical approach to scientific investigation, affirms essentially the opposite: disagreeing and debating is completely normal within the scientific community. If anything, it is interesting to understand which mechanisms drive the scientific debate, as well as the possible different interpretations of the same data.

A recent study published in the journal “Nature” stemmed from this approach: an international collaboration within the NARPS (Neuroimaging Analysis Replication and Prediction Study) project, led by researchers from Tel Aviv University, Dartmouth College and Stanford University and made up of almost 200 scientists (divided into about 70 research groups) from various fields, including neuroscience, psychology, statistics and economics. Among them is a research group based in Trento, composed of Paolo Avesani and Emanuele Olivetti of the NILab research unit at FBK and Vittorio Iacovella of CIMeC/University of Trento.

The aim of the study was to determine how (and to what extent) the analysis of the same dataset can vary when different research groups independently test the same scientific hypotheses. In other words, the question the study aimed to answer was the following: can different researchers come to different conclusions from the same data and hypotheses?

To do this, Tel Aviv University’s Strauss research center collected a set of brain imaging data from 108 participants engaged in a decision-making task. The data were then sent to the 70 research groups, from all over the world, who were asked to analyze them independently and, in particular, to test nine predefined hypotheses (the same for all groups) concerning the activity of certain brain areas in relation to the decisions made by the participants. Each group was given three months to analyze the data and asked to provide, in addition to the final results, detailed information on the analysis methods and on the intermediate statistical results.

The results are interesting: only for four of the nine hypotheses was there a certain consistency across the various groups, while for the other five there was substantial disagreement. Not only that: the statistical brain maps produced by the various groups for each hypothesis were very similar, yet this did not prevent the final results from diverging. On the intermediate results, by contrast, there was greater convergence for almost all the hypotheses.
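To see how very similar maps can still yield different verdicts, here is a minimal, purely illustrative Python sketch (simulated numbers, not NARPS data): two hypothetical teams produce highly correlated statistical maps, but slightly different, equally defensible significance thresholds flip the final yes/no answer.

    import numpy as np

    # Toy simulation (invented values, not NARPS data): two teams estimate
    # the same underlying z-statistic map, each with a little
    # pipeline-specific noise.
    rng = np.random.default_rng(42)

    n_voxels = 1000
    signal = rng.normal(2.9, 0.3, n_voxels)        # shared underlying statistics
    z_a = signal + rng.normal(0, 0.15, n_voxels)   # team A's pipeline noise
    z_b = signal + rng.normal(0, 0.15, n_voxels)   # team B's pipeline noise

    # The two maps are very similar (correlation around 0.8)...
    print("map correlation:", round(np.corrcoef(z_a, z_b)[0, 1], 2))

    # ...but each team applies its own voxel-wise threshold and declares
    # the hypothesis supported if most voxels survive it.
    thr_a, thr_b = 2.8, 3.1
    print("team A says active:", (z_a > thr_a).mean() > 0.5)   # True
    print("team B says active:", (z_b > thr_b).mean() > 0.5)   # False

The decision rule here (a majority of surviving voxels) is itself an invented simplification; the point is only that a hard threshold turns a continuous, largely shared map into a binary conclusion that can differ across groups.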

Another important part of the study, carried out by economists and behavioral finance experts, aimed to establish the expectations of the participating groups on the results of the research, through the so-called prediction markets (in finance, investment instruments whose profits depend on the outcome of a certain future event). In this case the “market” was represented by the results of the nine scientific hypotheses considered: prediction markets revealed that the researchers were on average excessively optimistic in estimating the probability of obtaining significant results.
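A rough sketch of the mechanism, with invented numbers rather than the published figures: a contract pays 1 if a hypothesis ends up with a significant result, so its trading price can be read as the traders’ aggregate probability estimate, and overoptimism shows up as prices systematically above the observed outcome rates.

    # Hypothetical illustration only (invented numbers, not the paper's data):
    # a contract pays 1 if the hypothesis yields a significant result, so its
    # price is the market's implied probability of that outcome.
    market_price = 0.75        # implied probability of a significant result
    observed_fraction = 0.35   # fraction of teams actually reporting one

    print(f"implied probability: {market_price:.0%}")
    print(f"observed rate:       {observed_fraction:.0%}")
    print(f"optimism gap:        {market_price - observed_fraction:+.0%}")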

“The study conducted with NARPS aimed not only to investigate the variance in the analysis of the data, but also to determine whether and how researchers’ expectations could influence the final result. Data analysis is strongly influenced by our a priori expectations,” Paolo Avesani stressed. “Making inferences from data is a complex process that involves numerous assumptions which often remain implicit, although they can have a significant impact on the results.”

“Our a priori assumptions are what we must motivate and defend before the scientific community,” Emanuele Olivetti added.

More generally, the work has clearly highlighted how much uncertainty and debate are inherent in the scientific process. A process that can lead, even starting from the same data, to different conclusions about a given phenomenon, and sometimes to expectations that turn out not to be entirely correct. Scientists’ awareness of this therefore becomes an essential element in improving the approach to data analysis in research. “Although it is counterintuitive, in scientific research, even when starting from a common question and common data, there is no guarantee of obtaining a univocal answer,” Avesani went on. “The strong demand for certainty in this period of health emergency has accentuated the polarization between skepticism and scientism, but science often contemplates a third type of answer: ‘we don’t know’.”

From a more strictly technical point of view, another critical aspect is “the need for the scientific communities of the ‘human’ sciences to tighten the significance thresholds that statistical tests must pass before a discovery can be announced,” as Olivetti pointed out. “In other areas, such as high-energy physics, these thresholds are much more stringent. In general, we try to work towards robust and credible science, pushing good practices in that direction, as was also done in this work for ‘Nature’.”
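To make that comparison concrete, these are the standard numbers behind the two conventions (general statistical facts, not figures from the NARPS paper): the common p < 0.05 threshold corresponds to roughly two standard deviations, while particle physics’ “5 sigma” discovery rule corresponds to a p-value of about 3 in 10 million.

    from scipy.stats import norm

    # General statistical conventions, not figures from the NARPS paper.
    p_social = 0.05                    # common threshold in the human sciences
    z_social = norm.isf(p_social / 2)  # two-sided: about 1.96 standard deviations

    z_physics = 5.0                    # "5 sigma" discovery rule in particle physics
    p_physics = norm.sf(z_physics)     # one-sided: about 2.9e-7

    print(f"p < 0.05 corresponds to |z| > {z_social:.2f}")
    print(f"5 sigma corresponds to p < {p_physics:.1e}")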

The experiment carried out by the NARPS project was certainly a success, given the broad participation and collaboration of researchers from all over the world. It fits into the context of open science, namely the set of practices that aim to reinforce the authoritativeness of scientific research through collaboration between scientists and the sharing of data and results. This aspect is proving particularly important and necessary in the current health emergency (just think of the international collaborations underway in the search for a vaccine against Covid-19). At the same time, it is an approach that can also be decisive in defining a useful method for better managing polarized positions within the scientific community.
