
The Nobel Prize in Physics explained by our physicist Federica Mantegazzini

October 29, 2024

Interview by Claudio Ferlan, editor of FBK Magazine, on the occasion of the researcher's participation in the “Semplicemente Nobel” event.

“Semplicemente Nobel” (Simply Nobel) has been described as “an event for big ideas explained in simple words.” The goal of the October 30, 2024 meeting in Dolcè (VR) is to explain who the 2024 recipients of a prize widely perceived as the most prestigious in the world are and, above all, what they do. Among those invited to take on this demanding task, explaining complex things in simple words, is Federica Mantegazzini, a physicist by profession and our colleague at Fondazione Bruno Kessler. We asked Federica to help us understand a little more about the 2024 Nobel Prize in Physics.

The invitation to “Semplicemente Nobel” came to you before the prize was awarded to Geoffrey E. Hinton and John Hopfield. Were they on the list of favorites?

In the weeks leading up to the awarding of the Nobel Prizes, there are always a lot of names floating around, kind of like what happens with the essay topics for the Italian high school final exams. The two general areas considered most likely to be recognized with the Nobel Prize were Quantum Computing and Artificial Intelligence (AI), as both have seen a strong acceleration and a growing impact in the scientific world and in our society. For quantum computers, the favored names were David Deutsch, the British physicist who first described the principles of quantum computation with his quantum Turing machine, and Peter Shor, the American mathematician who became famous for inventing the quantum algorithm for factoring integers into prime numbers that bears his name – Shor’s algorithm. For AI, on the other hand, there were several hypothesized winners, and four of them actually came true: John Hopfield and Geoffrey Hinton, winners of the Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks,” and Demis Hassabis and John Jumper of DeepMind, Google’s AI unit, who shared the Nobel Prize in Chemistry (together with David Baker) “for protein structure prediction.” Artificial intelligence is thus the real star of this year’s science Nobels.

In commenting on the award, Nello Cristianini, professor of Artificial Intelligence at the University of Bath, called Hinton and Hopfield “two pioneers in artificial intelligence, and particularly in neural networks.” Why, in your opinion?

The development of machine learning and artificial intelligence has taken off in the last two decades, but the origins of machine learning methods are rooted further back in time, with the first pioneering studies dating back to the 1980s. In 1982, John Hopfield introduced a mathematical model, the Hopfield network, which mimics associative memory. Associative memory is what we use to remember a word by its assonance with another word, or a person’s face by its similarity to another person we know. Hopfield’s network explains how this kind of memory results from the collective behavior of the processing elements in our brains, i.e., our neurons. A Hopfield network can store images, and if we query it by showing it another image, it is able to identify, among those it has in memory, the one that most closely resembles the proposed image.
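
To make the idea concrete, here is a minimal sketch of a Hopfield network in Python (this is illustrative, not from the interview; the class name and the tiny 8-pixel “images” are our own choices). Patterns are stored with the Hebbian rule, and recall repeatedly updates each unit until the state settles into the closest stored pattern:

```python
import numpy as np

class HopfieldNetwork:
    """Minimal Hopfield network: patterns are vectors of +1/-1 'pixels'."""

    def __init__(self, n_units):
        self.weights = np.zeros((n_units, n_units))

    def store(self, patterns):
        # Hebbian rule: strengthen connections between units that are active together.
        for p in patterns:
            self.weights += np.outer(p, p)
        np.fill_diagonal(self.weights, 0)  # no self-connections

    def recall(self, state, steps=10):
        # Update units one at a time; the state rolls downhill in energy
        # until it settles into the closest stored pattern.
        state = state.copy()
        for _ in range(steps):
            for i in np.random.permutation(len(state)):
                state[i] = 1 if self.weights[i] @ state >= 0 else -1
        return state

# Store two 8-pixel "images", then recall from a corrupted query.
patterns = [np.array([1, 1, 1, 1, -1, -1, -1, -1]),
            np.array([1, -1, 1, -1, 1, -1, 1, -1])]
net = HopfieldNetwork(8)
net.store(patterns)
noisy = np.array([1, 1, 1, -1, -1, -1, -1, -1])  # first image, one pixel flipped
print(net.recall(noisy))  # -> [ 1  1  1  1 -1 -1 -1 -1]
```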

Geoffrey Hinton extended Hopfield’s research by introducing, in 1985, the so-called Boltzmann Machine, an artificial neural network based on the Boltzmann statistical distribution that mimics our learning process. The Boltzmann Machine can be trained to recognize and statistically classify patterns or images. In other words, if trained with many images of cats, the Boltzmann Machine will be able to recognize the presence of a cat in an image it has never seen before. This is the approach underlying current generative models implemented with deep neural networks.
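
As an illustration, here is a minimal Python sketch of the restricted variant of the Boltzmann machine (the version that later proved practical to train), using one-step contrastive divergence, a training shortcut Hinton introduced years afterward. Everything here, from the class name to the toy six-pixel patterns, is an illustrative assumption rather than the laureates’ original formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Tiny restricted Boltzmann machine trained with 1-step contrastive divergence."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible-unit biases
        self.b_h = np.zeros(n_hidden)   # hidden-unit biases
        self.lr = lr

    def sample_hidden(self, v):
        p = sigmoid(v @ self.W + self.b_h)    # P(h=1 | v)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_visible(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)  # P(v=1 | h)
        return p, (rng.random(p.shape) < p).astype(float)

    def train_step(self, v0):
        # Positive phase: hidden activity with the data clamped.
        ph0, h0 = self.sample_hidden(v0)
        # Negative phase: one step of "daydreaming" (Gibbs sampling).
        _, v1 = self.sample_visible(h0)
        ph1, _ = self.sample_hidden(v1)
        # Move weights toward what the data shows, away from what the model dreams.
        self.W += self.lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        self.b_v += self.lr * (v0 - v1)
        self.b_h += self.lr * (ph0 - ph1)

# Train on two classes of six-pixel binary "images".
data = [np.array([1., 1., 1., 0., 0., 0.]), np.array([0., 0., 0., 1., 1., 1.])]
rbm = RBM(n_visible=6, n_hidden=2)
for _ in range(1000):
    rbm.train_step(data[rng.integers(2)])
# The hidden activations now respond differently to the two pattern classes.
print(rbm.sample_hidden(data[0])[0], rbm.sample_hidden(data[1])[0])
```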

It is therefore clear that Hopfield and Hinton became interested in neural networks and machine learning when this scientific and application field was still in its early stages. Their efforts contributed significantly to laying the foundation for the machine learning models and algorithms that we study and develop today, forty years later. This, I believe, is the reason behind the prestigious award given to the two scientists.

Geoffrey E. Hinton has a degree in psychology. How important is collaboration between different areas of expertise in the most advanced research in your discipline?

Physics is multidisciplinary by definition, because “there is a little bit of physics in everything.” In physics, in fact, we develop models and equations to explain the behavior of nature or, put another way, we devise abstract descriptions to explain concrete phenomena. Very often, though, theoretical models developed to explain certain physical phenomena turn out to apply to completely different systems. A case in point is the work of Hopfield and Hinton, the two Nobel laureates.

The idea of the Hopfield network originated from an attempt to explain the interaction of atomic spins in magnetic materials and was then used to model the collective behavior of neurons. In this scheme, neurons are the analog of atoms, and the “on” or “off” state of a neuron is the analog of the atom’s spin, which we can imagine as an arrow pointing up or down.
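
The analogy can be written down explicitly: the Hopfield network assigns every configuration of neurons the same energy function that physicists use for interacting spins in the Ising model of magnetism (the notation below is the standard textbook one, not from the interview):

$$
E = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j, \qquad s_i \in \{-1, +1\},
$$

where $s_i$ is the state of neuron $i$ (on/off, or spin up/down) and $w_{ij}$ is the connection strength between neurons $i$ and $j$. Each update flips a neuron only if doing so lowers $E$, so the stored patterns sit at the minima of this energy landscape.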

The Boltzmann Machine introduced by Hinton can be interpreted as a kind of evolution of the Hopfield network, in which the spins are agitated and change direction because of the thermal noise introduced by the temperature of the system. To achieve this, Hinton, whose background spans computer science as well as psychology and neuroscience, collaborated with David Ackley, a computer scientist, and Terrence Sejnowski, a biophysicist. It is thus apparent that the combination of different disciplines was crucial to this achievement as well.

Finally, the interdisciplinary nature of physics and its applications becomes glaringly obvious when one considers the fields of application of machine learning and neural networks, ranging from supporting medical diagnosis to financial projections, agricultural crop classification, pharmaceutical research, and data analysis in fundamental physics research, to name a few. Neural networks, stemming from the analogy between neurological and atomic systems, and thus from a kind of bridge between neuroscience and physics, are now being applied not only in medicine and physics, but also in chemistry, economics, computer science, meteorology, biology, and many other disciplines.

Let’s hazard a guess. What impact might this Nobel have on future research trends?

One does not need a crystal ball to predict that artificial intelligence will be increasingly present, both in research and in our society. The fact that as many as two Nobel Prizes, for Physics and Chemistry, have been awarded to the same discipline is a strong signal in this regard. As for the field of physics research, neural networks and machine learning are increasingly being exploited for data and pattern analysis.  The strength of these techniques is the ability to recognize specific patterns in large amounts of data, somewhat like looking for a needle in a haystack.  What lines of research in physics will benefit from them?  Many, but I bet – with the help of the aforementioned newly polished crystal ball – on high-energy particle physics and astrophysics.

At the Large Hadron Collider (LHC) at CERN, 600 million proton collisions occur every second, and each collision produces a myriad of particles that are measured by detectors. Approximately 1 Petabyte — or one million Gigabytes — of data is accumulated per second, the equivalent of filling the memory of 200,000 DVDs every second.  In this endless amount of data, physicists look for specific patterns that are a kind of “signature” of certain interactions or particles, such as the famous Higgs boson. Neural networks are already being used in this area, and new algorithms and approaches are increasingly being studied to find other “signatures” we are looking for.
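
(A quick check of that comparison, not from the interview: a standard single-layer DVD holds about 4.7 GB, and 1,000,000 GB divided by 4.7 GB per disc gives roughly 213,000 discs per second, in line with the 200,000 figure quoted.)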

In astrophysics, for example, there is a phenomenon called gravitational lensing, which consists of the apparent deformation of a celestial object due to the space-time curvature predicted by General Relativity. The image of the black hole that we all saw on the front pages of newspapers a few years ago is an example of this effect. Often, however, manifestations of gravitational lensing are difficult to identify, and this is where artificial intelligence techniques show their potential, since they can recognize similarities and correspondences among many different images, in this case images of space.

Neural networks emulate the human brain, are capable of learning, and are increasingly powerful, but are they really “intelligent”? The word intelligence comes from the Latin intelligere – to read within, to see deeply, in other words, to have an intuition. Intuition is the spark that comes from creativity and that caused Archimedes to exclaim his proverbial “Eureka!” Until neural networks are able to interpret and imagine, and not just recognize, I personally believe they will not be truly intelligent.

