My immediately preceding blog post, ‘Truth is what we agree about’, provides a framework for thinking about competing models in the social sciences. There are competing models in physics, but not in relation to most of the ‘core’, which is ‘agreed’. Most, probably all, of the social sciences are not as mature, so it is not surprising that we have competition. However, it seems to me that we can make some progress by recognising that our systems of interest are typically highly complex: it is very difficult to isolate ideal, simple systems (as physicists do) with which to develop the theory – even the building bricks. Much of the interest rests in the complexity. So we have to make approximations in our model building. We can then distinguish two categories of competing models: those developed through the ‘approximations’ being made differently; and those that are paradigmatically different. Bear in mind also that models are representations of theories, and so the first class – different ways of approximating – may well share the same underlying theory, whereas the second will have different theoretical underpinnings in at least some respects.
I can illustrate these ideas from my own experience. Much of my work has been concerned with spatial interaction: flows across space – for example, journeys to work, to shop, to school, to health services, telecoms flows of all kinds. Flows decrease with ‘distance’ – measured as some kind of generalised cost – and increase with the attractiveness of the destination. There was even an early study showing that marriage partners were much more likely to find each other if they lived or worked ‘nearer’ to each other – something that might be different now in times of greater mobility. Not surprisingly, these flows were first modelled on a Newtonian gravity analogy. The models didn’t quite work, and my own contribution was to shift from a Newtonian analogy to a Boltzmann one – a statistical averaging procedure. In this case there is a methodological shift, but, as in physics, the underlying theory is the same: the physics of particles is broadly the same for Newton and Boltzmann. The difference is that Newton’s methods deal with small numbers of particles and Boltzmann’s with very large numbers – answering different questions. The same applies in spatial interaction: it is the large-number methodology that works.
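To make the Boltzmann-style ‘statistical averaging’ concrete, here is a minimal sketch of a doubly constrained entropy-maximising flow model – flows decay exponentially with generalised cost, and balancing factors ensure the flows add up to given origin and destination totals. All the numbers are invented for illustration.

```python
import math

def gravity_flows(origins, destinations, costs, beta, iterations=100):
    """Doubly constrained entropy-maximising model:
    T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij),
    with balancing factors A_i, B_j found by iterative proportional fitting."""
    n, m = len(origins), len(destinations)
    A = [1.0] * n
    B = [1.0] * m
    for _ in range(iterations):
        for i in range(n):
            A[i] = 1.0 / sum(B[j] * destinations[j] * math.exp(-beta * costs[i][j])
                             for j in range(m))
        for j in range(m):
            B[j] = 1.0 / sum(A[i] * origins[i] * math.exp(-beta * costs[i][j])
                             for i in range(n))
    return [[A[i] * origins[i] * B[j] * destinations[j] * math.exp(-beta * costs[i][j])
             for j in range(m)] for i in range(n)]

# Invented illustrative numbers: workers at two origins, jobs at two
# destinations, and a simple generalised-cost matrix.
O = [100.0, 50.0]
D = [80.0, 70.0]
c = [[1.0, 2.0],
     [2.0, 1.0]]
T = gravity_flows(O, D, c, beta=0.5)
# Rows of T sum to the origin totals; columns to the destination totals.
```

The averaging does the work: no individual traveller is modelled, yet the flow pattern that emerges is the most probable one consistent with the totals and costs.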
These models are consistent with an interpretation that people behave according to how they perceive ‘distance’ and ‘attractiveness’. Economists then argue that people behave so as to maximise utility functions. In this case the two can be linked by making the economists’ utility functions those that appear in the spatial interaction model. This is easily done – provided it is recognised that the average behaviour does not arise from the maximisation of a single utility function. So the economists have to assume imperfect information and/or a variety of utility functions. In most instances they do this by assuming a distribution of such functions which, perhaps not surprisingly, is closely related to an entropy function. The point of this story is that apparently competing models can be wholly reconciled even though in some cases the practitioners on one side or the other firmly locate themselves in silos that proclaim the rightness of their methods.
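The reconciliation can be sketched numerically. If each individual maximises a utility V_j plus an independent random (Gumbel-distributed) taste term – a standard random-utility assumption – the share choosing option j converges to the multinomial logit formula exp(V_j)/Σ exp(V_k): the same exponential form that entropy maximisation produces. The utilities below are invented for illustration.

```python
import math
import random

def logit_shares(V):
    """Choice probabilities implied by utility maximisation with
    Gumbel-distributed taste variation: exp(V_j) / sum_k exp(V_k)."""
    e = [math.exp(v) for v in V]
    s = sum(e)
    return [x / s for x in e]

# Simulate many individuals, each maximising V_j plus independent
# Gumbel noise (sampled via the inverse CDF: -ln(-ln(U)), U uniform).
random.seed(0)
V = [1.0, 0.5, 0.0]          # invented systematic utilities
counts = [0] * len(V)
N = 100_000
for _ in range(N):
    noisy = [v - math.log(-math.log(random.random())) for v in V]
    counts[noisy.index(max(noisy))] += 1
shares = [k / N for k in counts]
# shares is close to logit_shares(V): dispersed individual utility
# maximisation reproduces the entropy-style exponential form.
```

So a population of utility maximisers with dispersed preferences and an entropy-maximising average are two descriptions of the same behaviour.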
The same kind of system can be represented in an agent-based model – an ABM. In this case, the model operates at the level of individuals, who behave according to rules. At first sight this may seem fundamentally different, but in practice these rules are probabilities that can be derived from the coarser-grained models. Indeed, this points us in a direction that shows how quite a range of models can be integrated. At the root of all the models I am using as illustrations are conditional probabilities – the probability that an individual will make a particular trip from an origin to a destination. These probabilities can then be manipulated in different ways at different scales.
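The derivation of ABM rules from a coarser model can be sketched directly: normalise each row of an aggregate flow matrix into the conditional probability p(j | i) that an individual at origin i travels to destination j, then let agents sample from it. The matrix and agent counts here are invented for illustration.

```python
import random

# Invented aggregate flows T[i][j], standing in for the output of a
# coarser-grained spatial interaction model.
T = [[60.0, 40.0],
     [20.0, 30.0]]

def destination_probs(T, i):
    """Conditional probability p(j | i) = T_ij / sum_j T_ij."""
    total = sum(T[i])
    return [t / total for t in T[i]]

def simulate_agents(T, agents_per_origin, rng):
    """Each agent at origin i samples a destination from p(j | i):
    the coarse model's flows become the ABM's behavioural rules."""
    flows = [[0] * len(T[0]) for _ in T]
    for i, n_agents in enumerate(agents_per_origin):
        probs = destination_probs(T, i)
        for _ in range(n_agents):
            j = rng.choices(range(len(probs)), weights=probs)[0]
            flows[i][j] += 1
    return flows

rng = random.Random(42)
flows = simulate_agents(T, [1000, 500], rng)
# The sampled flows approximate the aggregate pattern: roughly 60% of
# the 1000 agents at origin 0 choose destination 0.
```

Run in reverse – aggregating the agents’ trips – the ABM recovers the coarse flow matrix, which is the sense in which the two representations are integrated rather than competing.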
An argument is beginning to emerge that most of the differences involve judgements about such things as scale – of spatial units, sectors or temporal units – or methodology. The obvious example of the latter is the divide between statisticians and mathematicians, particularly as demonstrated by econometrics and mathematical economics. But, recall, we all work with probabilities, implicitly or explicitly.
There is perhaps one more dimension we need in order to characterise differences in the social sciences when we are trying to categorise possibly competing approaches: when the task in hand is to ‘solve’ a real-world problem, or to meet a challenge. This determines some key variables at the outset: work on housing, for example, would need housing as a variable in some form, along with the corresponding data. This in turn illustrates a key aspect of the social scientist’s approach: the choice of variables to include in a model. We know that our systems are complex and that the elements – the variables in the model – are highly interdependent. Typically we can only handle a fraction of them, and when these choices are made in different ways for different purposes, it appears that we have competing models. Back to approximations again.
Much food for thought. The concluding conjecture is that most of the differences between apparently competing models come from either different ways of making approximations, or through different methodological (rather than theoretical) approaches. Below the surface, there are degrees of commonality that we should train ourselves to look for; and we should be purposeful!