Three ways in which our systems fail and how to overcome them.
Introduction
Despite the significant progress made over the last century, and generally positive trends across many metrics, it cannot be denied that pessimism and uncertainty are strong features of the historical moment we are living through (although not unique to this era). Most countries struggled to prepare for and adequately respond to the largest public health challenge in recent decades, institutions are unable to keep up with the rapid pace of technological change, and climate change looms on the horizon.
We know we can do better, but what is holding us back? A common refrain is to blame ‘human nature’ for being selfish and shortsighted, and for being unable to change our habits to confront major challenges. We blame the human brain for its inability to process statistics, for its vulnerability to cognitive biases, and so on. What follows is a sense of resignation; the notion that these problems will always be with us has almost become a default assumption, because human nature is assumed to be immutable. Another popular approach is to expend great effort on overcoming these issues through individual training and education: much is said about teaching critical thinking skills on a national and global scale, and there are large communities and institutions devoted to overcoming biases and being ‘less wrong’.
Yet this kind of approach seems to be confined to matters of the social and political realms; we do not tend to scoff at human nature for not allowing us to live without water, or communicate over long distances. Neither do we focus on training individuals to live without water or shout loudly to communicate over long distances, historical traditions of yodelling notwithstanding.
Instead, we use our knowledge of the world to overcome these challenges. We dig wells, build water purifiers, develop smoke signals and the internet. In short, we build mechanisms and develop technology to fill in the gaps of human abilities, and develop more advanced versions of these mechanisms as we take on larger challenges. When our tools fail and cause harm, we engineer solutions to improve them, and avoid the failure modes we have identified. For example, improvements in aviation safety have made it the safest mode of mass transport, and safety continues to improve. We have learned from previous disasters and developed solutions ranging from collision avoidance systems to runway lighting and standardised communication protocols, all designed to avoid previously identified failure modes.
This is not limited to engineering problems; we have developed intricate systems of law and politics. Although they often emerged organically through historical circumstances, many are explicitly designed to avoid problems observed in the past. Yet it is rare to think of such social systems in the same way as we think of planes, trains, smartphones, or water purifiers; we do not see the need to upgrade our social and political systems every few years or decades in order to overcome new challenges. I would argue that we should, and this article will suggest some ‘updates’.
Aviation safety actually provides us with an excellent metaphor, as human error is often a significant component in aviation disasters. Despite this, we have never thrown up our hands and claimed that plane crashes are inevitable because human brains struggle to, for example, function at high altitudes. Nor have we focused all our efforts on training the brains of pilots to avoid making mistakes (although specialist training is, obviously, an essential part of aviation safety). Instead, we develop systems that work with and augment the way humans process information, e.g. voice warning systems, which go hand-in-hand with training and education to help us avoid crashes.
This article will apply this type of problem-solving approach to larger-scale social issues by discussing three examples of failure modes and presenting a potential solution for each one.
Problem 1: ‘Groupthink’, status, politics, and interpersonal dynamics can override evidence and reason, even among well-intentioned experts.
The early 20th century saw X-ray technology rapidly develop, and it was in widespread use by medical providers in the US and UK by the 1950s, including for the monitoring of foetal development. This time period also saw a large increase in cases of cancer among children.
In 1956, Dr. Alice Stewart and her team at Oxford University, using surveys and social medicine methods, conducted a study identifying the use of X-rays during pregnancy as a contributing factor to this rise in cancer cases. However, her findings were met with scepticism, and less thorough studies by other scientists were used as evidence to dispute them.
The issue became a clash of personalities, with aspects of discrimination due to prevailing social attitudes affecting how the scientific debate was conducted and received. Alice Stewart’s team suffered reduced funding as a result, hindering further research. The X-raying of foetuses remained widespread practice until its dangers were slowly recognised and it was discontinued in later decades.
This case has been used as an example of how reputation and personality can shape scientific debates and is illustrative of an issue that continues to pose a challenge. Experts often become drawn into political battles and clashes between interest groups, hindering progress towards better understanding and effective decision making.
Expert disagreement is likely inevitable, and in fact desirable - it is an essential part of the scientific process. However, situations like this one can be avoided by implementing tools that prevent debates from being poisoned by personality clashes and fears of reputational and career damage.
The Solution: Systems for Eliciting Expert Judgment
The Delphi Method provides a structured way of collecting expert opinions and is designed to facilitate constructive discussion and build consensus. It has many varieties and modifications, such as the ‘IDEA protocol’, but broadly speaking they follow a similar set of steps, described below.
First, a group of experts is selected based on a series of predefined criteria to form a ‘panel’. Ideally, the criteria would select for individuals with a broad range of relevant subject-matter expertise, while also incorporating diverse perspectives and not relying too heavily on a single institution or school of thought. The number of individuals involved can range from a handful up to 1,000, with 10-50 considered the most practical and effective panel size.
Then, the participants are asked to identify what they think are salient issues pertaining to the topic at hand, and submit them anonymously for the consideration of the whole panel.
These submissions are then collated, and a series of questionnaires is developed to gauge where expert opinions lie on each issue. The questions are phrased and presented so as to avoid ambiguity and capture the core of the issues being discussed.
For example, this study asked the experts whether they agreed or disagreed with statements such as the following:
“It is unethical to use big data in obesity research when consent has not been obtained for this purpose”
“Consent is a major ethical challenge for big data in obesity research”
“Big data from commercial sources is a potential conflict of interest”
And more; see pages 2580 to 2582 here.
After each round, the experts are asked to re-evaluate their positions; discussion or deliberation sessions may be incorporated at this stage, and reports are written on any ‘divergences’.
This particular study was able to achieve expert consensus on some aspects of the topic, with all 26 experts agreeing with a series of statements following three rounds of the Delphi method.
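Under the hood, the consensus check between rounds can be surprisingly simple. The sketch below is illustrative only - the 70% threshold and the responses are hypothetical, and real studies predefine their own consensus criteria - but it shows how a panel’s anonymous answers might be scored after each round:

```python
# A sketch of scoring Delphi-style panel responses for consensus.
# The threshold and all responses below are hypothetical.
from collections import Counter

CONSENSUS_THRESHOLD = 0.7  # fraction of the panel that must share a position

def consensus_reached(responses: list[str]) -> bool:
    """Return True if any single position is held by enough of the panel."""
    counts = Counter(responses)
    top_share = counts.most_common(1)[0][1] / len(responses)
    return top_share >= CONSENSUS_THRESHOLD

# Anonymous responses to one statement across two rounds (hypothetical data).
round_1 = ["agree", "agree", "disagree", "neutral", "agree", "disagree"]
round_2 = ["agree", "agree", "agree", "agree", "agree", "disagree"]

for i, responses in enumerate([round_1, round_2], start=1):
    status = "consensus" if consensus_reached(responses) else "divergence - feed back to panel"
    print(f"Round {i}: {status}")
```

In this toy run, round one is reported back to the panel as a divergence, and round two (five of six in agreement) crosses the threshold - mirroring how positions converge across Delphi rounds.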
We can only imagine how debates such as the one surrounding the X-ray issue discussed above would have played out had such a method been used. Perhaps the studies would have been judged in a more holistic manner, with personality clashes and gender biases left outside the sphere of the actual scientific discussion. This cannot be said for certain, of course; however, there is significant evidence suggesting that the Delphi method and its variants are effective at addressing emerging topics, generating consensus, and avoiding biases.
Problem 2: We Tend to Extrapolate Current Trends, Rely on Simplistic Models, and Fail to Anticipate Shifts
Models, simulations, and predictions have become very important parts of the decision-making process. Institutions and decision makers of all kinds look at trends and predictions to anticipate what the future will look like.
A simple way of doing this is to extrapolate: extend current trends into the future by assuming they continue.
More complex versions of such modelling can take multiple factors into account, and provide a range of estimates using a variety of mathematical techniques, but they mostly work on principles similar to the simple extrapolation described above.
Figure: Forecasts of GDP from December 2019 compared with outcomes
The pitfall of relying on such an approach is that it provides no way to account for drastic changes, such as those caused by major world events.
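To make this concrete, here is a minimal sketch of trend extrapolation; the index values are invented to mimic steady growth and are not real GDP figures:

```python
# A minimal sketch of naive trend extrapolation (illustrative numbers only).
import numpy as np

years = np.array([2015, 2016, 2017, 2018, 2019])
index = np.array([100.0, 102.1, 104.0, 106.2, 108.1])  # steady ~2% growth

# Fit a straight line to the observed data and extend it one year forward.
slope, intercept = np.polyfit(years, index, deg=1)
forecast_2020 = slope * 2020 + intercept
print(f"Extrapolated 2020 index: {forecast_2020:.1f}")  # ~110: 'growth as usual'

# Nothing in the fitted model can represent a shock such as a pandemic.
# If 2020 actually came in at, say, 98.0, the forecast misses not only the
# value but the entire change of regime - and so would any policy built on it.
```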
This is not just a problem with models; we often assume that business will continue as usual, or that events will play out as they did in recent history. The grip of ‘conventional wisdom’ is very hard to break, and a significant break from existing trends is essentially impossible to foresee using traditional methods of prediction. For example, a common perception in the lead-up to the Covid-19 pandemic was that the new virus would be limited in the same way as the 2003 SARS outbreak or the 2009 Swine Flu outbreak. A similar effect could be observed in the lead-up to the 2022 Russian invasion of Ukraine: the prospect of a land war in Europe was seen as unlikely, with many experts predicting that the conflict would remain low-intensity as it had been since 2014, despite strong evidence suggesting an invasion was imminent.
Using current trends to predict future trends, and basing policy decisions on such an approach, has proven to be highly unreliable. Such an approach contributed to poor decisions such as the UK government scrapping its pandemic preparedness committee six months before the Coronavirus outbreak, or Germany becoming reliant on Russian gas prior to the war in Ukraine.
The Solution: Embrace Uncertainty and Use Models Designed for Uncertainty
Decision Making under Deep Uncertainty (DMDU) is a framework that explores a wide range of possible futures and identifies actions that lead to ‘win-win’ scenarios. Instead of trying to predict the future, it asks what we should do when we cannot.
At a fundamental level, what it does is quite intuitive: it considers many different possibilities, and identifies the actions that would work in most, or all, possible scenarios. Most of us naturally take a similar approach in day-to-day life.
For example, when deciding what time to depart in order to be on time for an important job, there are many uncertainties. Perhaps you are not certain of the best route to take; there may be delays on public transport, or your favoured mapping application could mislead you and spend a lot of time ‘recalculating’ your route. Rather than trying to predict whether or not you will be delayed, you give yourself some extra time. However, you must balance this against other considerations - setting off three hours early would not be helpful in any of the likely scenarios, as you would not get enough sleep and would spend a long time waiting outside. So you decide that the optimal action is to give yourself 30 minutes more than your estimated journey time; if you face delays on the way, this reduces their impact, and if there are none, you have only sacrificed 30 minutes of sleep.
Now expand this scenario to one in which you need to work around the preferences of three others who must also join you at this job - some of them may prefer driving, or cycling, and they live in different places with different constraints. You can then expand your consideration to factor in their preferences and situations. You and your colleagues may not be able to agree on how likely travel delays are (this is ‘deep uncertainty’: when stakeholders do not know, or cannot agree on, how likely future scenarios are). However, you can still agree on departing reasonably early, plan across multiple scenarios, and sketch out how you would adapt to each. For instance, you can ensure each of you knows how to begin the early stages of the job, so that even if some are late, one of you can make a start, reducing the impact lateness could have on your ability to complete the tasks at hand. You have therefore made a ‘robust’ decision - setting off 30 minutes earlier is a good choice regardless of what happens. This is also an ‘adaptive’ approach - your decisions allow you to adapt to different situations.
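One common DMDU criterion, minimax regret, captures this commute example in a few lines. This is a sketch under hypothetical assumptions - the buffers, delay scenarios, and cost weights are invented for illustration - but the logic of choosing the option whose worst-case regret is smallest is the same logic DMDU toolkits apply across thousands of scenarios:

```python
# A sketch of the commute example as a robust decision problem, using the
# minimax-regret criterion. All numbers are hypothetical.

buffers = [0, 15, 30, 60]                              # extra minutes to allow
scenarios = {"no delay": 0, "minor delay": 20, "major delay": 45}

def cost(buffer: int, delay: int) -> int:
    """Being late is weighted ~3x worse than time wasted waiting around."""
    lateness = max(delay - buffer, 0)
    wasted = max(buffer - delay, 0)
    return 3 * lateness + wasted

# Regret: how much worse a buffer performs than the best possible choice
# for that scenario. A 'robust' option keeps its worst-case regret small.
worst_regret = {}
for b in buffers:
    regrets = [
        cost(b, delay) - min(cost(alt, delay) for alt in buffers)
        for delay in scenarios.values()
    ]
    worst_regret[b] = max(regrets)

robust = min(worst_regret, key=worst_regret.get)
print(f"Most robust buffer: {robust} min (worst-case regret: {worst_regret[robust]})")
# With these numbers the 30-minute buffer wins: never optimal in any single
# scenario, but never far from optimal in any of them.
```

The point is not the specific numbers but the shift in question: from ‘what will happen?’ to ‘which choice holds up across everything that might happen?’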
Of course, doing this on a large scale is more complicated; you would have to take into account dozens upon dozens of factors and potential scenarios. Once again, we come back to the question of supplementing human abilities with tools and technology. DMDU provides a toolkit which can be used to take into account such a multitude of factors.
It has already been used to great effect by the Dutch Delta Commissioner for flood risk planning, climate resilience, and water management.
Problem 3: The Disconnect between the Public, Decision Makers, and Experts Creates Widespread Mistrust
While conducting research for this blog post, I came across an old TEDx talk discussing the Delphi Method. It had very few views, and only a single comment, from an anonymous commenter:
“This is how you will end up living in a “15 minute city”
Manufactured consensus.”
Anyone who has spent much time on the internet will have seen how widespread this sort of sentiment is. The notion that nefarious ‘elites’ are working to shape society, to the detriment of the average person, seems to be a very common one. Perhaps you, the reader, hold this opinion yourself. If you do, please hear me out.
Among academics and people working in policy, such views are quite often dismissed as the internet just being the internet, or as the work of paid trolls sowing discontent online.
However, it cannot be denied that this kind of attitude has permeated our discourse. Almost any new policy prescription or idea is met with mistrust and speculation regarding the interests of the people pushing it, whether it be related to public health, urban planning, or international relations. There is also significant discourse questioning the nature of ‘expertise’, and expressing discontent at what is perceived to be top-down edicts from the ‘ivory tower’ of academia or think tanks.
This sort of sentiment is also often expressed by politicians and more mainstream sources. Famously, Michael Gove, the UK’s Secretary of State for Justice in the lead-up to the Brexit referendum, stated that ‘people have had enough of experts’.
While the full quote clarifies that he meant something a little more specific than dismissing experts entirely, the quote taken out of context has become something of a meme in British politics. Readers from other countries will undoubtedly have their own examples of politicians and influencers calling out ‘experts’, or questioning their integrity and intentions.
Conspiracism and distrust of ‘elites’ have perhaps always been a part of public life; as with the other issues discussed in this article, I do not want to imply that this is unique to our era. Regardless, it is a real problem that needs to be addressed: lack of trust in institutions undermines public health and even affects the mental health of disaster survivors, among many other impacts. It also makes it difficult to implement necessary policy changes, as seen with resistance to climate change mitigation measures.
I would argue that this phenomenon is completely unsurprising given that most people are never given the chance to directly influence decisions that affect them, which contributes to a sense of dissatisfaction and lack of control. There is an element of truth to the conspiracy theories about an ‘elite’ making top-down decisions that affect us all: special interest groups, think tanks, business lobbyists, and the like often have a disproportionate impact on policy in Western democracies compared with the electorate, and often lack transparency.
Furthermore, social fragmentation has become more widespread, and people often live in social ‘pockets’, mainly interacting with those at similar income levels or in similar types of jobs, with affluent households particularly isolated from others. As someone who has had the opportunity to move between social groups over a varied career, I can (anecdotally, granted) attest to this: the world of people working on ‘policy-related stuff’ is very different from that of the people doing the (literal) heavy lifting that makes society function. Individuals of lower socioeconomic status are more likely to distrust experts - possibly because they do not know any experts. The internet could provide a bridge, but as we all know, it has its own problem with algorithm-driven bubbles.
While there is something to be said about atomisation and how such limited interaction between various social groups affects society, that is a question for another article. Here, I would like to propose a solution to the issue of mistrust and the sense of disenfranchisement: deliberation and participatory democracy.
The Solution: Deliberation and Participatory Democracy, e.g. Citizens’ Assemblies
Generally, increased political participation has a positive effect on quality of life and on the quality of political systems; conversely, less participation damages both.
Imagine the following:
Instead of watching your government implement a new policy with a weirdly soundbite-shaped name, generated by a distant think tank whose members you will never meet, and having seemingly no say over it,
you are instead invited to join a large meeting alongside a group of your friends, neighbours, coworkers, grandmothers, teachers, people you awkwardly nodded to on the street once, and so on. You are compensated for your involvement, and are given access to evidence, data, a pleasant space for discussion, trained facilitators, and some of the ‘experts’ you would otherwise mistrust. You can speak to them, ask them questions, disagree with them to their faces, and explain why you disagree. In between the onboarding sessions, deliberative discussions, debates, and so on, you get to socialise with the group of people you are discussing issues with.
You then collaboratively decide on what kinds of policies you would like to see implemented, and design them yourself, alongside others - treating each problem as an engineering puzzle to be cracked, not a team sport to fight over (or an imposition to be resisted).
This is not a utopian thought experiment; it is a description of a Citizens’ Assembly. Such deliberative processes have been used to great success in hundreds of cases around the world. The more cynical among my readers may argue that such assemblies would quickly descend into arguments and petty squabbling - they might point to the infamous Handforth Parish Council meeting of 2020 as evidence of how things would play out.
However, real-world evidence does not bear this out. Such assemblies, when run with a structured process and trained facilitators, are generally quite pleasant. They reduce deadlock and polarisation, create a sense of empowerment and community among participants, and improve trust in institutions.
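A key ingredient is how members are chosen: typically by ‘civic lottery’ (sortition), random selection stratified so the panel roughly mirrors the wider population. Here is a minimal sketch of that idea; the volunteer pool, the single age-band attribute, and the quotas are all hypothetical, and real assemblies stratify across several attributes at once (age, gender, region, education, and so on):

```python
# A sketch of a 'civic lottery' (sortition): random selection stratified so
# the panel roughly mirrors the population. All data here is hypothetical.
import random

def select_panel(pool: list[dict], quotas: dict[str, int]) -> list[dict]:
    """Randomly draw a panel that satisfies a per-group seat quota."""
    panel = []
    for group, seats in quotas.items():
        candidates = [p for p in pool if p["age_band"] == group]
        panel.extend(random.sample(candidates, seats))  # random within each stratum
    return panel

# Hypothetical pool of volunteers who responded to an invitation lottery.
pool = [
    {"name": f"person_{i}", "age_band": random.choice(["18-34", "35-59", "60+"])}
    for i in range(500)
]
quotas = {"18-34": 10, "35-59": 12, "60+": 8}  # seats proportional to population

panel = select_panel(pool, quotas)
print(f"Selected {len(panel)} members across {len(quotas)} age bands")
```

The randomness is what defuses the ‘manufactured consensus’ objection quoted above: no one chooses who sits on the panel, yet the quotas keep it representative.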
Conclusion:
Coming back to the initial metaphor of planes, trains, and water purifiers - those are examples of us using our knowledge to develop solutions to the practical problems of geography and our biological needs. Why not apply the same principle to the practical problems of society and politics?
As laid out above, we have the knowledge to build systems that can help us overcome the sociopolitical challenges of today, and by extension, prepare us for the long term future. The three solutions discussed today combine to form the ‘Odyssean Process’: a set of systems to holistically address the key challenges that humanity now faces.
Notes:
Credit to BritMonkey for making me aware of Alice Stewart and partly inspiring this blog post.
Thank you to the Odyssean Institute team, academic advisors, and trustees for their support and for providing a platform for these thoughts.