French original version here. Reposted with kind permission of the authors.
We have just learned that the 2021 Nobel Prize in Physics was awarded this Tuesday, October 5, to two experts in climate change modeling, the Japanese-American Syukuro Manabe and the German Klaus Hasselmann, as well as to the Italian theorist Giorgio Parisi.
Half of the prize rewards Syukuro Manabe, 90, and Klaus Hasselmann, 89, “for the physical modeling of Earth’s climate and for having quantified its variability and reliably predicted global warming”, specifies the jury. The other half goes to Giorgio Parisi, 73, “for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales”.
SCE congratulates the winners and takes the opportunity to recall some obvious facts that most of the media will be careful not to mention.
- The Nobel committee rewards modelers
The Nobel Prize in Physics rewards model physicists, that is, scientists who use well-known physical laws to build complex mathematical models, in an attempt to reproduce past and future changes in the climate system.
Let us remember here that a mathematical model is not reality and that with regard to the climate, no model will ever be able to predict what will happen exactly in the future.
Here’s why. First of all, many physicochemical phenomena are still poorly understood (for example, the formation of clouds); were that not the case, scientific research could stop today. Second, the climate is a complex system, that is to say it is composed of a large number of elements (both on Earth and beyond it), and all these elements can interact with one another in a large number of ways (Figure 1). Many types of interactions are possible, and the system contains many feedback loops, direct or indirect (and therefore acting with some delay). It is also dynamic, that is to say it evolves in space and time.
Figure 1. The many parameters controlling the climate: the “climate meta-model”. For more information see here.
If the interactions between the objects of the system are linear and instantaneous, the evolution of the dynamic system over time can be modeled by systems of ordinary differential equations. The system then evolves towards a single equilibrium solution, at which the values of the variables no longer change. In this case, predictions can be made.
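This convergence can be illustrated with a minimal numerical sketch (an illustration added here, not taken from any climate model): a single linear equation, integrated with simple Euler steps, relaxes to the same equilibrium whatever the starting point.

```python
# Linear system dx/dt = -k * (x - x_eq): the solution decays towards the
# single equilibrium x_eq regardless of the initial condition x0.
# (Toy example for illustration; k, x_eq, dt, steps are arbitrary choices.)
def integrate_linear(x0, k=0.5, x_eq=2.0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += -k * (x - x_eq) * dt  # explicit Euler step
    return x

# Two very different starting points end up at the same equilibrium.
print(integrate_linear(-10.0))  # close to 2.0
print(integrate_linear(25.0))   # close to 2.0
```

Because the system is linear, the final state is unique and insensitive to the initial condition, which is precisely what makes prediction possible.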
Unfortunately, natural systems that are genuinely linear are extremely rare, not to say non-existent. Many systems can, however, be reasonably linearized in order to facilitate their study, and many theories in physics and mechanics are constructed by considering such linearized systems.
But if the interactions between the objects of the system are non-linear (non-linear meaning that the “principle of superposition” no longer holds; most physical systems are non-linear, a classic example being the Navier-Stokes equations of fluid mechanics), the evolution of the dynamic system over time must be modeled by systems of non-linear differential equations. These equations are very difficult to solve, and for a very long time one had to resort to simplifications in order to study them under conditions of linearity. Fortunately, computers can help us, and they show us that there are several possible solutions!

In some cases, the system never converges directly to one solution or another, but “circles around” it in asymptotically converging orbits; in other cases, the system evolves in variable but non-converging orbits around one of several “strange attractors” and switches at unpredictable times from one attractor to another. Such a system is said to be “chaotic” (in the mathematical sense of the term). It can be described by a system of perfectly defined non-linear equations, yet the value of its dependent variables at a given moment can change enormously if the initial conditions, or the value of one of the parameters, are changed by a tiny amount; in practice, this makes such a system perfectly unpredictable. This is what has been called deterministic chaos. The system oscillates irregularly (aperiodically) and never repeats exactly the same oscillation twice. Its behavior in the long term therefore cannot be predicted.
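The textbook example of such behavior is the Lorenz system, itself derived from a drastically simplified model of atmospheric convection. The sketch below (an illustration added here, not a climate model) integrates it with simple Euler steps: the orbit remains bounded on its “strange attractor” but never settles to an equilibrium and never repeats.

```python
# Lorenz system: a classic chaotic "strange attractor", originally derived
# from a drastically simplified model of atmospheric convection.
# (sigma, rho, beta are Lorenz's classic parameter values; dt and the step
# count are arbitrary choices for this sketch.)
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, y0, z0, steps=20000):
    # Iterate the map: the state wanders on the attractor indefinitely.
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# The orbit stays bounded but never converges to a fixed point:
# there is no single equilibrium solution to predict.
print(trajectory(1.0, 1.0, 1.0))
```

Unlike the linear case, there is no final resting state here: the only honest statement about the long-term future of such a system is statistical, not pointwise.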
In deterministic chaos, each initial condition entirely determines the future evolution: there is no chance involved, the system is completely deterministic. However, two very similar initial conditions can have completely different evolutions. The evolution of the system then becomes unpredictable, because a small measurement error, or a rounding at the 15th decimal place, leads to completely different results after a certain time.
In other words, if a mathematical variable is changed slightly, the climate model will not produce the same prediction. Here is a specific example (Figure 2).
Figure 2. Results of 30 simulations carried out with the same climate model (maps 1 to 30), for North America in winter. The maps show temperature trends between 1963 and 2012 (a 50-year period). Each simulation starts from the same date and uses the same parameters; the only difference between simulations is the starting temperature, with a tiny random round-off difference (of order 10⁻¹⁴ °C) applied to the initial air temperature field. The EM map is the average of the 30 simulations; the OBS map shows the observations. The colors represent variations (in °C) over 50 years. Source: Deser et al. 2016, Journal of Climate 29: 2237-2258.
Figure 2 (from Deser et al. 2016, Journal of Climate 29: 2237-2258) shows us that when the starting temperature is varied by an infinitesimal amount (10⁻¹⁴ °C), completely different forecasts can be obtained.
The mean of the simulations (EM in Figure 2) does not agree with the observed measurements and, worse, the models can be made to say exactly what one wants by modifying one of the parameters infinitesimally. One might naively hope that mathematical models (and the algorithms that compute their averages) make it possible to “erase” the chaotic signature affecting a complex phenomenon, but in reality the models themselves, by the very structure of the equations they use, give a chaotic character to the results they produce.
For more details, let us recall that SCE has published this article on chaotic systems.
- Climate models are imprecise
After what has just been said, it is obvious that the jury of the Nobel Committee is wrong when it writes that climate models can “reliably predict global warming”. Here are two more observations that show the opposite:
– No computer model had predicted the pause (the “hiatus”) in the evolution of global temperatures observed between 1998 and 2014. Dozens of articles have been published on this problem. See for example Hedemann et al. 2017 (Nature Climate Change).
– Computer models, driven by the CO2 concentration, predict an atmospheric “hot spot” (we told you about it here). However, this “hot spot” does not correspond to the measurements carried out in situ with sounding balloons! For more details see Christy et al. 2018.
Do you still think climate models are accurate? It has long been known that climate models exhibit excessive rates of warming in the tropical troposphere. And it’s no secret to many modelers who are trying to improve their models… Only the Nobel Committee doesn’t seem to know!
Regarding the new generation of climate models, CMIP6 (Coupled Model Intercomparison Project Phase 6), used in the latest IPCC report (AR6), they are no better. For example, in a 2020 publication, McKitrick and Christy examined 38 CMIP6 models against historically observed data, focusing on the 1979-2014 interval. The authors show that the models produce warming that does not correspond to the observations, whether in the lower or the middle troposphere, in the tropics or elsewhere in the world.
- The philosophy of the Nobel Prize may not be respected
It is unclear whether Alfred Nobel, were he able to see it, would approve of the 2021 Physics Prize. The inventor of dynamite was passionate about the concrete aspects of science and technology and was keen to reward those who had made a tangible contribution to their progress. Thus, the 1903 Nobel Prize in Physics was awarded jointly to Pierre and Marie Curie and to Henri Becquerel for the discovery of natural radioactivity, a physical phenomenon unknown at the time. That of 1935 was awarded to James Chadwick for his discovery of the neutron in 1932, an essential constituent of matter. Examples of this type abound.
It was only gradually that the prize came to be awarded for theories of a more abstract nature. One of the conditions of attribution was the light that these theories could shed on the properties of the physical world. A prime example is the 1963 Nobel Prize in Physics, awarded to Eugène Wigner (jointly with Maria Goeppert-Mayer and Hans Jensen) for fundamental work on symmetry groups, a very abstract mathematical theory used to understand important aspects of the structure of the universe.
The 2013 Nobel Prize in Physics rewarded two theorists (Englert and Higgs) who had constructed a mathematical model predicting the properties of a fundamental particle, the Higgs boson, detected in 2012 by the instruments around the large CERN collider. It is this detection that explains the award of the prize; of the three authors of the model (Brout, Englert and Higgs), only two received it, Brout having died in 2011.
This year’s award falls far short of this perspective. Do the laureates bring unprecedented results? One may doubt it, with the possible exception of Giorgio Parisi. The citation merely mentions “reliable” results, and the least one can say is that this qualifier does not mean much. Moreover, the model provides nothing fundamentally new in physics: everything that constitutes it (the physics of the sun, the atmosphere, the oceans and their interactions) has been known for decades. The prize therefore rewards an exercise that is more “mathematical” than “physical”. Alfred Nobel’s will is thereby, in a way, betrayed.
By rewarding climate modellers on the eve of COP26, the Nobel Committee is making a political and media gesture. Common people, who know nothing about modeling, will thus tend to believe what is predicted in the Holy Report of the IPCC.
To finish, let us recall here what physicist Jacques Duran wrote on his site “Pensée-unique”:
“Forecasts on the future of our planet are risky to say the least, because we are dealing with a huge system of differential equations, with unknown coefficients, non-linear and coupled together. These equations are therefore very difficult to elucidate. In addition, some of these equations behave chaotically, i.e. they are very sensitive to often unknown initial conditions. There is nothing worse! Yet computer programmers have a blast with thousands of unknown parameters and master equations whose form you have to try to guess. Let us say kindly that, as always, they get results; but the problem is that the results can be changed at will by changing just one of the parameters or just one of the unknown equations. How then can we believe that computers make a correct prediction when the essential mechanisms of exchanges and of positive and negative feedbacks are still very poorly understood, and are still the subject of bitter discussions between chemists, physicists, climatologists, geophysicists and others?”
“Belief in the veracity of a particular computer prediction is more a matter of faith than of scientific certainty. Given the multiplicity of possible resolution methods, the uncertain equations and the number of injectable parameters, the results of computer simulations are very difficult to check by anyone who has not programmed them himself. In short, we are swimming in the dark and biases take over. Politicians and environmentalists, for their part, choose the results that suit them, but ignore the many uncertainties, approximations and problems posed by the methods employed. Politicians and environmentalists are totally incapable of assessing the reliability of the results communicated to them, and we cannot blame them, in view of the difficulty of the problem. The only thing one can blame them for is their excessive credulity and their peremptory assertions meant to make people believe that all this is certain, when it is not at all!”
John R. Christy, Roy W. Spencer, William D. Braswell & Robert Junod (2018) Examination of space-based bulk atmospheric temperatures used in climate research, International Journal of Remote Sensing, 39:11, 3580-3607, DOI: 10.1080/01431161.2018.1444293
I was in grad school when computers were invented. Many of us made geological models or graphics. Our mentors (supervisors) were from a previous age. If the model did not describe reality, we fixed it or deleted it. It seems the academe is not what it used to be.