Brian Arthur introduced a new and important idea in his book *The Nature of Technology*: that of ‘combinatorial evolution’. The argument, put very briefly, is this: a ‘technology’ – an aeroplane, say – can be thought of as a system made up of a number of subsystems, and these can be arranged in a hierarchy. Thus the plane has engines, the engines have turbine blades, and so on. The control system sits at a high level in the hierarchy, and at lower levels we find computers. The key idea is that most innovation comes at lower levels in the hierarchy, and through combinations of these innovations – hence combinatorial evolution. The computer may have been invented to do calculations but, as with aeroplanes, it now figures as the lynchpin of sophisticated control systems.

This provides a basis for exploring research priorities. Arthur is mainly concerned with hard technologies and with specifics, like aeroplanes. However, he does remark that the economy ‘is an expression of technologies’ and that technological change implies structural change. Then: ‘…economic theory does not usually enter [here]… it is inhabited by historians’. We can learn something here about dynamics, about economics and about interdisciplinarity! However, let us focus on cities. We can certainly think of cities as technologies – and much of the ‘smart cities’ agenda can be seen as low-level innovation that can then have higher-level impacts. We can also see the planning system as a soft technology. What about the science of cities, and urban modelling? Arthur’s argument about technology can be applied to science itself. Think of ‘physics’ as a system of laws, theories, data and experiments; think of spelling out the hierarchy of subsystems and, historically, charting the levels at which major innovations have been delivered. Now translate this more specifically to our urban agenda. If, in broad terms, modelling is the core of the science of cities, and that science is one of the underpinnings of the planning system, can we chart the hierarchy of subsystems and then think about research priorities in terms of lower-level innovation?

This really needs a diagram, but that is left as an exercise for the reader! Suppose the top level is a working model – a transport model, a retail model or a Lowry-based comprehensive model (see *Lowry and his legacy*). We can represent this and three (speculative) lower levels broadly as follows.

- level 1: working model – static or dynamic

- level 2 (cf. STM; see *How to begin*):
  - system definition (entities; scales: sectoral, spatial, temporal); exogenous and endogenous variables
  - hypotheses, theories
  - means of operationalising (statistics, mathematics, computers, software, …)
  - information system (cleaned-up data; intermediate models to estimate missing data)
  - visualisation methods

- level 3:
  - explore possible hypotheses and theories for each subsystem
  - data processing; information-system building
  - preliminary statistical analysis
  - available mathematics for operationalising
  - software/computing power

- level 4:
  - raw data sources
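
Purely as an illustration of the structure (the node labels below are invented, not components of any actual model), the four levels can be sketched as a small tree, with innovation at a leaf having to propagate up through each enclosing level:

```python
# Illustrative sketch only: the four-level hierarchy as a nested dict.
# Every label here is a hypothetical placeholder drawn from the list above.
hierarchy = {
    "level 1: working model": {
        "level 2: system definition": {
            "level 3: hypotheses per subsystem": {
                "level 4: raw data sources": {},
            },
        },
        "level 2: operationalisation": {
            "level 3: available mathematics": {
                "level 4: raw data sources": {},
            },
            "level 3: software/computing power": {},
        },
    },
}

def depth(tree):
    """Number of levels below (and including) this node's children."""
    if not tree:
        return 0
    return 1 + max(depth(subtree) for subtree in tree.values())

print(depth(hierarchy))  # → 4: innovations at level 4 sit three levels down
```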

An Arthur-like conjecture might be that innovations are most likely to emanate from levels 3 and 4. In level 3, we have the opportunity to explore alternative hypotheses and to refine theories. Quantities such as utility functions, profits and net benefits are likely to be present in some form to represent preferences, with any maximisation hypotheses subject to a variety of constraints (which are themselves integral parts of theory-building). We might also conjecture that one underlying feature will always be present: behaviour will be probabilistic. (In fact, this is likely to provide the means for integrating different approaches.)
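
One concrete reading of ‘probabilistic behaviour combined with utility maximisation’ is the standard multinomial logit form, in which choice probabilities are proportional to exponentiated utilities. The utilities and the scale parameter below are invented for illustration; this is a minimal sketch, not a claim about any particular working model:

```python
import math

def logit_probabilities(utilities, beta=1.0):
    """Probabilistic choice: P_i proportional to exp(beta * U_i).

    Larger beta pushes behaviour towards deterministic utility
    maximisation; smaller beta makes choices more random.
    """
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical utilities for three alternatives (e.g. destinations):
probs = logit_probabilities([1.0, 2.0, 0.5], beta=1.0)
assert abs(sum(probs) - 1.0) < 1e-12  # a proper probability distribution
```

The same exponential form arises from entropy-maximising derivations, which is one sense in which a probabilistic underpinning can integrate different approaches.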

Can we identify likely innovation territories? The ‘big and open data’ movement will offer new sources and this will have impacts through levels 2 and 3. One consequence is likely to be the introduction of more detail – more categories – into the working model, exacerbating the ‘number of variables’ problem, which in turn could drive the modelling norm towards microsimulation. This will be facilitated by increasing computing power. We are unlikely to have fundamentally different underlying hypotheses for theory building but there may well be opportunities to bring new mathematics to bear – particularly in relation to dynamics. There is one other possibility of a different kind, reflected in level 2 – system definition – in relation to scales. There is an argument that models at finer scales should be controlled by – made consistent with – models at more aggregate scales. An obvious example is that the population distribution across the zones of a city should be consistent with aggregate level demography; and similarly for the urban economy. An intriguing possibility remains the application of the more aggregate methods (demographic and economic input-output) at fine zone scales.
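
The consistency requirement in the preceding paragraph can be illustrated with the simplest possible device: rescaling fine-zone estimates so that they sum to an exogenous aggregate control total. The numbers below are invented; fuller biproportional (iterative proportional fitting) methods extend the same idea to several simultaneous constraints:

```python
def constrain_to_total(zone_values, aggregate_total):
    """Scale fine-zone estimates to match an aggregate control total.

    This is the one-dimensional case of proportional fitting: zonal
    shares are preserved while the sum is forced to the aggregate.
    """
    factor = aggregate_total / sum(zone_values)
    return [v * factor for v in zone_values]

# Hypothetical zonal populations that overshoot a city-wide total of 100:
zones = [30.0, 50.0, 40.0]                 # sums to 120
adjusted = constrain_to_total(zones, 100.0)
# adjusted sums to 100.0, with each zone's share unchanged
```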

Alan Wilson, April 2015