Too good to be true? How to make sure we use models wisely
In the C-THRU project, much of our research is conducted using models. In fact, many of the scientific facts, figures, and graphs we interact with as members of the public are outputs of models. 1.5 degrees Celsius of global warming, 9 billion people on the planet by 2037, 2m social distancing… from the projected warming of the planet to the way a disease spreads, we are continually receiving expert knowledge that has been crafted through careful modelling of systems and objects.
But what goes on behind the scenes to produce the graphs and figures that researchers publish? What do scientific models really look like? And can we trust them? Are their projections of the future too good to be true? How should we interpret model outputs when we come across them, and how should they be used to inform policies? But before we start to look at any of these questions, let’s clarify what models actually are.
What are models?
Models are representations of things or systems that exist in the real world. They are replicas, attempts to imitate and reproduce, to simulate some original process or object. And they come in many forms.
Some models are physical. Examples of physical models could include the miniature railway in your dad's garage, or the connecting balls and sticks you might have used in school to build larger versions of microscopic molecules.
Other models are conceptual, such as a flowchart showing the hierarchy of people within an organisation, or the image of a safe operating space for human activity within planetary boundaries.
Perhaps the most influential type of model in environmental science, engineering, and policy is the computational model. This type of model represents physical processes and their outcomes through numerical equations and digitally generated visuals.
What are models used for?
Models are powerful tools with several uses. They can be educational, helping us to understand something better. They can also perform a predictive function. By configuring a model that represents the current dynamics of an ecosystem or an industrial process, modellers can simulate the possible effects of future stimuli. Models also play a big role in society and in politics; model outcomes and graphics can be used to lend authority to decisions and to persuade people that they need to stay in their homes during a pandemic or leave them ahead of a suspected volcanic eruption.
In the C-THRU project, we use models to learn about historic and current emissions and flows of material in the chemical sector, and explore how they might change in future if different policies and technologies were introduced to mitigate climate change. For the most part, the models we create are computational.
“All models are wrong”…?
But can we trust models to do all that we expect of them? Some have their doubts. Among them are many who quote British statistician George Box’s famous claim that “All models are wrong, but some are useful.” Here we consider three reasons why models might be ‘wrong’ before looking at how we might set the right conditions for them to be genuinely useful to society.
Firstly, a model might be said to be wrong if there are inaccuracies in the mathematical formulae that are coded into its back-end. As modellers write pages and pages of code to represent an industrial process or an aspect of the climate, it's always possible for typographical errors to creep in.
Secondly, it can be hard to validate models to make sure they are good representations of reality. Scientists and engineers validate models by checking that they can reproduce measured data that describe the real-life phenomena they simulate. For example, a climate model might be validated by ensuring it can reproduce historical trends in climatic conditions measured by thermometers or barometers, or derived from older sources such as ice cores or tree-ring records. But there are two challenges to this: modellers have to assume that there are no inaccuracies in the measurements they are checking the model against, and they also have to assume that the model produced these figures by the same mechanism that was at work in the real-world system or process. It turns out that it is actually very difficult to ascertain exactly how accurately a computational model represents reality.
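In practice, this kind of validation often boils down to comparing a model's hindcast against a measured series and summarising the mismatch with an error metric. The sketch below is purely illustrative (the numbers and the "model" are hypothetical, not from C-THRU), using root-mean-square error as one common summary:

```python
# Illustrative sketch (hypothetical data, not a real model): checking how
# closely a model's hindcast reproduces historical measurements, using
# root-mean-square error (RMSE) as a simple summary of the mismatch.
import math

def rmse(simulated, observed):
    """Root-mean-square error between two equal-length series."""
    assert len(simulated) == len(observed)
    return math.sqrt(
        sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed)
    )

# Hypothetical historical measurements and a model's hindcast of them,
# e.g. mean annual temperatures in degrees Celsius.
observed  = [14.1, 14.3, 14.2, 14.5, 14.8]
simulated = [14.0, 14.4, 14.3, 14.4, 14.9]

error = rmse(simulated, observed)
print(f"RMSE: {error:.3f}")  # prints "RMSE: 0.100"
```

Note the limits of the exercise: a small error only shows agreement with these measurements, and tells us nothing about whether the model got there by the right mechanism.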
Finally, modellers are forced to make simplifications and assumptions as they seek to represent complex processes with mathematical formulae. They may have to generalise the effects of processes that take place on tiny scales, or simplify highly complex relationships. This is also partly an issue of computational power. It may sometimes be possible to represent very intricate processes, but as they stack up in the model, running the simulations can require so much time and power that the model itself becomes unviable. Models are always partial – they are abstractions – and so they can never act as an exact stand-in for reality.
So what are we to make of models? And why are we telling you any of this if we want you to trust our models and their results? We would argue that George Box’s statement needs a little nuance. It’s not that all models are definitely wrong. The challenges for simulating reality outlined above mean that all models could be wrong, and it’s hard to know exactly how right or wrong they are.
But the other half of the quote emphasises that models can still be useful. Models are inherently and inescapably partial, but they do not have to perfectly represent the real world in order to tell us something valuable about it. In fact, the ability of models to simplify a complex reality is often what makes them so powerful. Models can be useful; we just have to be careful about how we use them and remember that they will never be identical to the reality they represent. The rest of this article offers three principles for how we can do this.
Principles for how to make and use models wisely
In 2020, Saltelli et al. wrote a piece for Nature about how we can make sure models genuinely serve society. Here we go through three of the five points on their ‘manifesto’ for how to use models well.
Firstly, watch out for the assumptions. We always have to make assumptions when we create models. Again, this doesn’t mean they aren’t useful, it just means we have to be clear about what we are assuming and why so that people can assess the models and their outputs for themselves.
We can also conduct uncertainty and sensitivity analyses to make sure we understand the uncertainty in our own models as best we can. Some level of uncertainty is inevitable, but it doesn’t necessarily mean that models are no longer useful. It just means we need to be transparent and reflexive about the uncertainty that does exist in our models.
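One common way to do an uncertainty analysis is Monte Carlo sampling: draw an uncertain input many times from a plausible distribution, run the model on each draw, and look at the spread of outputs. The sketch below is a toy illustration with made-up numbers (the model, the production figure, and the emission factor are all hypothetical):

```python
# A minimal sketch of Monte Carlo uncertainty analysis (all numbers are
# hypothetical): propagate uncertainty in one input parameter through a
# toy emissions model and summarise the spread of the outputs.
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def toy_emissions_model(production_mt, emission_factor):
    """Hypothetical model: annual emissions = production x emission factor."""
    return production_mt * emission_factor

# Suppose the emission factor is uncertain: roughly 2.0 +/- 0.2 (1 std dev).
samples = [
    toy_emissions_model(production_mt=100.0,
                        emission_factor=random.gauss(2.0, 0.2))
    for _ in range(10_000)
]

mean = statistics.mean(samples)
spread = statistics.stdev(samples)
print(f"emissions ~ {mean:.0f} +/- {spread:.0f} Mt CO2")
```

Reporting the output as a range rather than a single number is exactly the kind of transparency about uncertainty that the manifesto calls for.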
Secondly, watch out for complexity. The more parameters we add, the more uncertainty is compounded in the model. This has been called a 'cascade of uncertainty.' It can be tempting to include every variable we possibly can in a model, but we need to find a balance: an overly complex model can accumulate so much uncertainty that it is no longer useful for informing decisions.
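A back-of-envelope calculation shows why this cascade happens. If a model output is (roughly) a product of independent uncertain inputs, the relative uncertainties combine approximately in quadrature, so every extra uncertain parameter widens the output's error bars. This is a simplified textbook approximation, not how any particular C-THRU model works:

```python
# Back-of-envelope sketch of the 'cascade of uncertainty': for a product of
# independent factors, relative uncertainties combine roughly in quadrature
# (square root of the sum of squares), so more parameters -> wider error bars.
import math

def combined_relative_uncertainty(relative_uncertainties):
    """Approximate relative uncertainty of a product of independent factors."""
    return math.sqrt(sum(u ** 2 for u in relative_uncertainties))

# Hypothetical inputs, each known to within +/-10% (0.10):
for n in (2, 5, 10):
    total = combined_relative_uncertainty([0.10] * n)
    print(f"{n:2d} parameters -> +/-{total:.0%} on the output")
```

Ten inputs that are each individually quite well known (±10%) already leave the output uncertain by about ±32%.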
Finally, watch out for the framing. Modellers need to explain clearly why they have chosen a particular model and what that choice presupposes is valuable. For example, a model might presuppose that cutting emissions is important, or it might presuppose that the most valuable thing is economic growth. Modellers also have a responsibility to present their findings in a way that acknowledges the uncertainty and imprecision in their simulations; powerful visuals can sometimes make it look like a model output is more definite or specific than it really is. As we use models to inform and legitimise policies and strategies for the chemical sector, we need to be mindful of these dynamics.
So next time you come across a model, you can question its assumptions, examine the framing, and look for how transparent the researchers have been about their levels of uncertainty.
Saltelli et al. conclude that "Mathematical models are a great way to explore questions. They are also a dangerous way to assert answers. Asking models for certainty or consensus is more a sign of the difficulties in making controversial decisions than it is a solution, and can invite ritualistic use of quantification."
It would be dangerous to assume that any model could give us an exact picture of the future. Models give us projections, not predictions. We must be careful not to ask models to be oracles, or for scientists to be prophets.
But at the same time, ‘models are a great way to explore questions’. They can help us understand how certain aspects of the future could be shaped by action or inaction in the present, and they can build our knowledge of how processes work and how they could be improved.
All models could be wrong. Some models can be very useful. We just need to keep asking questions to make sure they are used in the right ways.
What are models used for in C-THRU?
We use models to help us uncover how raw materials enter the sector and are transformed through myriad processes into end products. This also enables us to model where and how greenhouse gases are released along the way. In our research, model scenarios allow us to explore what combinations of actions we might need to take to reach different levels of emissions reduction. We also use network modelling to investigate how different actors and processes are related. This is useful because it sheds light on how a change made in one part of the chemical sector might have effects that ripple out and are felt elsewhere within and beyond the industry.
We hope that our models and their outputs will influence key actors in the chemical industry to take urgent action to reduce emissions, and help policymakers to evaluate the policy options available to them. Alongside the scenarios we have modelled, we are also constructing an emissions calculator that will allow people to explore their own scenarios for the future of the chemical sector and assess the potential of a range of interventions. As part of this, we will describe the assumptions that we used in modelling each parameter.
Read our first year report and browse the research paper that we have published so far for more insights into our approach and methods for modelling petrochemical sector emissions.
Photo credit: Karl Pawlowicz