At a time when we stand - lost - in the midst of a data blizzard, with the temperature and energy value of data trending towards absolute zero, the only compass we have is a model that enables us to navigate the frozen wastes of knowledge.
Models are critical to human thinking. But models are also open to dispute, misinterpretation and misuse, which makes collective models such as capitalism, socialism or fascism dangerous. Of course, where models are ‘backed’ with power - in whatever form - these limitations matter little. But if models are to be developed for democratic use, then these limitations become critical.
But individual models are also dangerous to both their originators and their users.
Models can be developed from the bottom up just as easily - or more easily - than they are developed from the top down. We often adopt models unconsciously - and I mean adopt - without questioning them. We sometimes inherit these models from people we have every reason to trust. Or maybe we are driven by peer pressure to adopt models that other people have designed. Politics, science and many other aspects of life are full of models designed to help us resolve complex issues.
And these models seem to work retrospectively. But changes in context and the interactional nature of models mean that they will not keep working. And their unintended second- and third-order effects can be disastrous at an individual and social level. National Socialism might have seemed a great model in Germany in 1935. Less so in 1945. Communism might have seemed essential in Russia between 1917 and 1991.
Once engaged, getting rid of a model which seems to work for others - and for us as individuals - is difficult. Our models - call them attitudes, behaviours, mindsets or something else - become a significant part of who we see ourselves as being. An attack upon our model becomes an attack upon ourselves. We are highly resistant to anything that wounds our sense of who we are. And so, the model becomes embedded.
But, as anyone who has designed and used them will tell you, simulations don’t last. They come unstuck from the reality they purport to describe. And keeping them in place - remember that they are part of ‘you’ - causes all sorts of problems, ranging from anxiety through to war and climate change. All conflicts are conflicts of models.
So, how do we manage the weirdly addictive interactions we have with models? And how do we bring these models under the microscope of attention so we can challenge them — in ourselves as much as in others? It’s difficult.
Because any model system has to offer more than just a set of rules about what works and what doesn’t. That which works is good - right? Well, not really. The models we adopt - from books, films, music, programs, in fact from any type of symbolic system - are ‘reverse engineered’ from the experience of others, and - listen carefully because this is important - that experience is based in the past. And the interactive nature of problem solving - that which we touch also touches us - often means that we inherit, construct and modify models too late.
So, how do we model a model?
The first rule is that the model should be rooted in an unambiguous, transparent and consistent ontology and epistemology, in order to avoid any confusion about the nature and type of causality that is assumed to drive the ‘reality’ described in the model, and about the kind of statements that can be derived from any empirical findings. This prerequisite is in response to the observation that researchers are either seemingly unaware of such matters or believe that science comes in only one form - usually the form we happen to like, because it conserves our energy.
Secondly, the model should be able to process data obtained from the perceived real world, as opposed to simulated data, because we need the model to be as accurate a representation of the real world as possible to avoid any nasty surprises. If our model fails to fully account for the complexity of this reality, we are likely to experience such surprises. Maybe not immediately. But often when it is too late to correct.
Thirdly, the model should allow for unambiguous measurement, in order to avoid confusion about what is being measured with the model and how it is being measured. Of course, the measurement of sentient behaviour is interpretive and therefore ambiguous by definition, but that does not prevent us from operationalizing models that work.
The fourth rule is persistence. How long has the model been in existence? How many times has it been modified? How well would it work if it hadn’t been? The model should uncover persistent patterns, but retrospective critiques tend to be just that: analyses of what went wrong, rather than analyses of how the model can be adapted to work in a new context.
The fifth rule holds that the model shouldn’t be too simplified or generalised. Nor should it generate obvious statements and truisms without being attached to some ‘handle’ which will allow us to use the model - not just to explain, but to change - systemic or individual behaviours which are damaging individuals and collectives.
Of course, any model of reality is a simplification - that is what models are for. But there is a point where simplification fails to support effective action. Most simple models - while explanatorily attractive - are very likely to do more harm than good. As Byrne put it so well:
“The adulation of the formalized mathematical model, the assertion that the abstract representation of the world through establishing a causal model based on variables and isomorphic with an algebra — the construction of interpreted axiomatic systems — is precisely . . . abstraction over the real. We absolutely need a down and dirty empiricism in which understanding is grounded in the real and constantly returns to the real.” (Byrne, 2002: 42)