
Tricks of the Trade

How to do research[1]

Alan Wilson

The Alan Turing Institute

October 2018


Contents


Preface


Part 1. How to do research

1. How to start: some basic principles

2. Interdisciplinarity

3. Serendipity

4. Following fashion

5. Real challenges

6. ‘Research on’ vs ‘research for’


Part 2. Super concepts

7. Systems thinking

8. Combinatorial evolution

9. Requisite knowledge

10. The brain as a model

11. DNA

12. Territories and flows

13. Equations with names


Part 3. What to do next.

14. What would Warren Weaver do next?

15. The power of modelling: understanding and planning cities

16. Competing models-1: truth is what we agree about

17. Competing models-2: dismantling into building bricks

18. Big data and high-speed analytics

19. Block chains

20. Abstract modes

21. Best practice

22. Lowering the bar

23. The future of cities

24. Leicester City


Part 4. Tricks of the trade

25. Serendipity-2: career development

26. Adding depth

27. Time management

28. Against oblivion

29. Sledgehammers for wicked problems

30. Beware of optimisation

31. Spinning out

32. Missing data

33. Learning from history

34. Venturing into other disciplines

35. On writing


PART 1. HOW TO DO RESEARCH

1. HOW TO START: SOME BASIC PRINCIPLES

There are starting points that we can take from ‘systems thinking’ (see Chapter 7 below) and theory development. Add ‘methods’, including data, and this becomes ‘STM’ – the first step in getting started.

  • S: define the system of interest, dealing with the various dimensions of scale etc
  • T: what kinds of theory, or understanding, can be brought to bear?
  • M: what kinds of methods are available to operationalise the theory, to build a model?

This is essentially analytical: how does the system of interest work? How has it evolved? What is its future? This approach will almost certainly force an interdisciplinary perspective and within that, force some choices. For example, statistics or mathematics? Econometrics or mathematical economics? We should also flag a connection to Brian Arthur’s (Chapter 8) ideas on the evolution of technology – applied to research. He would argue that our system of interest in practice can be broken down into a hierarchy of subsystems, and that innovation is likely to come from lower levels in the hierarchy. This was, in his case, technological innovation but it seems to me that this is applicable to research as well.

Then if applicable, we have also seen that a second step for getting started is to ask questions about policy and planning in relation to the system of interest – the PDA step:

P: what is the policy? (That is, what are the objectives for the future?) Should we develop a plan – another ‘P’?

D: can we design – that is ‘invent’ – possible plans?

A: we then have to test alternative plans by, say, running a model, and to analyse – evaluate – them. Ideally, the analysis would offer a range of indicators, perhaps using Sen’s capability framework or offering a full cost-benefit analysis (CBA). A policy or a plan is, in informal terms, the specification of exogenous variables that can then be fed into a model-based analysis.

These six steps form an important starting point that usually demands much thought and time. Note the links: the STM is essentially the means of analysis in the PDA. It may be thought that the research territory is in some sense pure analysis but most urban systems of interest have real-world challenges associated with them, and these are worth thinking about. Some ideas of research problems should emerge from this initial thinking. Some problems will arise from the challenges of model building, some from real on-the-ground problems. Examples follow.

  • Demographic models are usually built for an aggregate scale. Could they be developed for small zones – say for each of the 626 wards in London?
  • While there may be pretty good data on birth and death rates, migration proves much more difficult. First there are definitional problems: when is a move a migration – long distance? – and when is it residential relocation?
  • If we want to build an input-output model for a city, then unlike the case at the national level, there will be no data on imports and exports – so there is a research challenge to estimate these.
  • There is then an economic analogue of the demography question: what would an input-output model for a small zone – say a London ward – look like? This could be used to provide a typology of zone (neighbourhood) types.
  • In the UK at the present time there is, in aggregate at the national level, a housing shortage. An STM description might focus on cities, or even small zones within cities. How does the housing shortage manifest itself: differential prices? What can be done about it? This last is a policy and planning question: alternative policies and plans could be explored – the PD part of PDA – and then evaluated – the A part.
  • What is the likely future of retail – relative sizes of centres, the impact of internet shopping etc?
  • How can ‘parental choice’ in relation to schools be made to work (without large numbers feeling very dissatisfied with second, third or fourth choices)?
  • Can we, should we, aim to do anything about road congestion?
  • Does responding to climate change at the urban scale involve shorter trips and higher densities? If so, how can this be brought about – the D-question? If not, why not?!
  • Can we speculate about the future of work in an informed way – taking account of the possibilities of ‘hollowing out’ through automation?
  • And so on……!

Research questions can be posed and this framework should help. The examples indicated are real and ambitious, and it is right that we should aim to be ambitious. However, given the resources that any of us have at our disposal, the research plan also has to be feasible. There are different ways of achieving feasibility – probably two poles of a spectrum: either narrow down the task to a small part of the bigger question, or stay with the bigger question and try to break into it – test ideas on a ‘proof-of-concept’ basis. The first of these is the more conservative, and can be valuable; and is probably the most popular with undergraduates doing dissertations or postgraduate students – and indeed their supervisors. It is lower risk, but potentially less interesting!

We can then add a further set of basic principles – offering topics for thought and discussion once the STM-PDA analysis is done at least in a preliminary way.

  • Try to be comprehensive, at least to capture as much of the inevitable interdependence in your system of interest as is feasible.
  • Review different approaches – e.g. to model building – and integrate where possible. There are some good opportunities for spin-off research outcomes in this kind of territory.
  • Think of applying good ideas more widely. I was well served by the use of the entropy concept in my early research days: having started in transport modelling, because I always wanted to build a comprehensive model, I could apply the concept to other subsystems and (with Martyn Senior) find a way of making an economic residential location model optimally sub-optimal!
  • The ‘more widely’ also applies to other disciplines. Modelling techniques that work in a contemporary situation, for example, can be applied to historical periods – even ancient history and archaeology (entry to come!).
  • There is usually much work to do on linking data from different sources and making it fit the definitions of your system of interest. Models can also be used for estimating missing data and for making samples comprehensive.

2. INTERDISCIPLINARITY

Systems thinking (anticipating Chapter 7 here) drives us to interdisciplinarity: we need to know everything about a system of interest and that means anything and everything that any relevant discipline can contribute. For almost any social science system of interest, there will be available knowledge at least from economics, geography, history, sociology, politics plus enabling disciplines such as mathematics, statistics, computer science and philosophy; and many more. Even this statement points up the paucity of disciplinary approaches. We should recognise that the professional disciplines such as medicine already have a systems focus and so in one obvious sense are interdisciplinary. But in the medicine case, the demand for in-depth knowledge has generated a host of specialisms which again produce silos and a different kind of interdisciplinary challenge.

Some system foci are strong enough to generate new, if minor, disciplines. Transport Studies is an example, though perhaps dominated by engineers and economists? There is a combinatorial problem here. In terms of research challenges, there are very many ways of defining systems of interest and mostly, they are not going to turn into new disciplines.

How do we build the speed and flexibility of response to take on new challenges effectively? A starting point might be the recognition of ‘systems science’ as an enabling discipline in its own right that should be taught in schools, colleges and universities along with, say, mathematics! This could help to develop a capability to recognise and work with transferable concepts – super concepts – and generic problems (for which ‘solutions’, or at least beginnings, exist). In my Knowledge power book, I identified 100 super concepts. A sample of these follows.

  • systems (scales, hierarchies, ……)
  • accounts (and conservation laws)
  • probabilities
  • equilibrium (entropy, constraints, ……)
  • optimisation
  • non-linearity, dynamics (multiple equilibria, phase transitions, path dependence)
  • Lotka-Volterra-Richardson dynamics
  • evolution (DNA,……)

Each of these has a set of generic problems associated with it – not an argument that was fully developed in the book! The systems argument has been dealt with elsewhere. Think of any systems of interest that come to mind and let us explore each of these super concepts and associated generic problems. (More depth on superconcepts comes in Part 2.)

System entities can nearly always be accounted for. In a time period, they will be in a system state at the beginning and a system state at the end; and entities can enter or leave the system during the period. This applies, for example, to populations, goods, money and transport flows. In each case, an account can be set out in the form of a matrix and this is usually a good starting point for model building. This is direct in demographic modelling, and in input-output and transport models.
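
To make the accounting idea concrete, here is a minimal sketch in Python – with invented numbers, not data from the text – of a single-period account in which rows are start-of-period states (plus an entries row for births and in-migrants) and columns are end-of-period states (plus an exits column for deaths and out-migrants).

```python
import numpy as np

# Minimal single-period population account; all numbers are invented.
# account[i, j] = entities starting the period in state i and ending it in state j.
# The last row holds entries (births, in-migrants); the last column holds exits
# (deaths, out-migrants).
states = ["zone A", "zone B"]
account = np.array([
    # zone A   zone B   exits
    [ 900.0,    80.0,   20.0],   # started the period in zone A
    [  60.0,  1100.0,   40.0],   # started the period in zone B
    [  50.0,    70.0,    0.0],   # entries during the period
])

start_populations = account[:2, :].sum(axis=1)   # row sums = start-of-period stocks
end_populations = account[:, :2].sum(axis=0)     # column sums = end-of-period stocks
print("start:", dict(zip(states, start_populations)))
print("end:  ", dict(zip(states, end_populations)))
```

The same bookkeeping underlies demographic accounts, input-output tables and origin-destination matrices; a model is then a hypothesis about how the interior cells are generated.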

The ‘behaviour’ of most entities in a social science system is not deterministic, and therefore the idea of probability is a starting point. The modelling task, implicitly or explicitly, is to estimate probability distributions. We often need to do this subject to various constraints – that is, prior knowledge of the system such as, for example, a total population. It then turns out that the most probable distribution consistent with any known constraints can be estimated by maximising an entropy function, or through maximum likelihood, Bayesian or random utility procedures – all of which can be shown to be equivalent in this respect – a superconcept kind of idea in itself. Which approach is chosen is likely to be a matter of background and taste. The generic problem in this case is the task of modelling a large population of entities which are only weakly interacting with each other. These two conditions must be satisfied. Then the method can generate, usually, good estimates of equilibrium states of the system (or in many cases, subsystem). It can also be used to estimate missing data – and indeed complete data sets from samples following model calibration based on the sample.
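
As a concrete illustration of ‘most probable distribution subject to constraints’, here is a minimal sketch of a doubly constrained entropy-maximising spatial interaction model, T_ij = A_i B_j O_i D_j exp(-beta c_ij), with the balancing factors A_i and B_j found by simple iteration; the zone totals, costs and parameter value are invented purely for illustration.

```python
import numpy as np

# Invented illustrative data: 3 origin zones, 3 destination zones.
O = np.array([100.0, 150.0, 250.0])      # trips produced at each origin (known constraint)
D = np.array([200.0, 180.0, 120.0])      # trips attracted to each destination (known constraint)
c = np.array([[1.0, 2.0, 3.0],           # travel costs c_ij
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])
beta = 0.5                               # cost-sensitivity parameter (calibrated in practice)

# Doubly constrained model: T_ij = A_i * B_j * O_i * D_j * exp(-beta * c_ij).
# A_i and B_j are balancing factors that enforce the row and column constraints;
# iterating the two formulae below until they settle is the standard procedure.
f = np.exp(-beta * c)
A = np.ones(3)
B = np.ones(3)
for _ in range(100):
    A = 1.0 / (f @ (B * D))
    B = 1.0 / (f.T @ (A * O))

T = (A * O)[:, None] * (B * D)[None, :] * f
print(T.round(1))
print("row sums:", T.sum(axis=1).round(1))   # reproduces O
print("col sums:", T.sum(axis=0).round(1))   # reproduces D
```

The result is the most probable flow matrix consistent with the known totals; calibrated on a sample, the same machinery can be used to fill in missing flows.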

Many hypotheses, or model-building tasks, involve optimisation and there is a considerable toolkit available for these purposes. The methods of the previous paragraph all fall into this category for example. However, there may be direct hypotheses such as utility or profit maximisation. It is then often the case that simple maximisation does not reproduce reality – for example because of imperfect information on the part of participants. In this case, a method like entropy-maximising can offer ‘optimal blurring’!

The examples to date focus on equilibrium and in that sense on the fast dynamics of systems: the assumption is that, after a change, there will be a rapid return to equilibrium. We can now shift to the slow dynamics – for example, in the cities case, evolving infrastructure. We are then dealing with (sub)systems that do not satisfy the ‘large number of elements, weak interactions’ conditions. These systems are nonlinear and a different approach is needed. Such systems have generic properties: multiple equilibria, path dependence and the possibility of phase changes – the last, abrupt transitions at critical parameter values. Examples of phase changes are the shift to supermarket food retailing in the early 1960s and ongoing gentrification in central areas of cities. Working with Britton Harris in the late 70s, we evolved, on an ad hoc basis, a model to represent retail dynamics which did indeed have these properties. It was only later that I came to realise that the model equations were examples of the Lotka-Volterra equations from ecology. In one version of the latter, species compete for resources; in the retail case, retailers compete for consumers – and this identifies the generic nature of these model-building problems. The extent of the range of application is illustrated by Richardson’s work on the mathematics of war. It is also interesting that Lotka’s, Volterra’s and Richardson’s work was all of the 1920s and 1930s, illustrating a different point: that we should be aware of the modelling work of earlier eras for ideas for the present!
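
To show the structure being described, the sketch below writes the retail dynamics in the form I have used elsewhere – centre revenues from a spatial interaction model driving a Lotka-Volterra-like equation for centre sizes – with invented zones, costs and parameter values; depending on the parameters, some centres grow and others collapse, which is where the multiple equilibria and phase changes come from.

```python
import numpy as np

# Invented illustrative data: 3 residential zones, 4 retail centres.
e_P = np.array([120.0, 200.0, 160.0])       # spending power of each residential zone
c = np.array([[1.0, 2.0, 3.0, 2.0],         # travel costs c_ij from zones to centres
              [2.0, 1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0, 1.5]])
alpha, beta = 1.2, 1.0                       # attractiveness and cost parameters
k, eps, dt = 1.0, 0.002, 1.0                 # cost per unit size, adjustment speed, time step

W = np.full(4, 100.0)                        # initial centre sizes
for _ in range(5000):
    attraction = W**alpha * np.exp(-beta * c)                               # W_j^alpha exp(-beta c_ij)
    S = e_P[:, None] * attraction / attraction.sum(axis=1, keepdims=True)   # flows S_ij
    D = S.sum(axis=0)                                                       # revenue D_j of each centre
    W = np.maximum(W + eps * (D - k * W) * W * dt, 0.0)                     # dW_j/dt = eps (D_j - k W_j) W_j

print(W.round(1))    # equilibrium sizes; with alpha > 1 some centres can collapse to zero
```

Sweeping alpha and beta reveals the critical values at which the pattern of surviving centres changes abruptly – the phase changes referred to above.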

The path dependent nature of these dynamic models accords with intuition: the future development of a city depends strongly on what is present at a point in time. Path dependence is a sequence of ‘initial conditions’ – the data at a sequence of points in time – and this offers a potentially useful metaphor – that these initial conditions represent the ‘DNA’ of the system.

These illustrations of the nature of interdisciplinarity obviously stem from my own experience – my own intellectual tool kit that has been built over a long period. The general argument is that to be an effective contributor in interdisciplinary work it is worthwhile, probably consciously, to build intellectual tool kits that serve particular systems of interest, and that this involves pretty wide surveys of the possibilities – breadth as well as depth, which is still very much needed.

3. SERENDIPITY

How do we do research? There are textbook answers. Identify your research question. Look for data. Decide on which methods you are going to use. Then start with a literature survey. And so on. My intuition then tells me that what is produced is usually routine, even boring, measured by the small number of citations to any published paper. Can we do better? How can we aim to produce something that is original and significant? I have a jumble of ideas in my mind that I try to organise from time to time for seminars on such topics as ‘research methods’. This is a new attempt to structure the jumble. The motivation: to take the risk of offering advice to researchers – both less and more experienced – and in the process to challenge some of the canons of research ideologies. And also – another kind of risk – to see whether I can draw from my own experience. In this, luck has played a big part – a kind of serendipity. Is there such a thing as planned serendipity? (In my case, none of this was planned.) Maybe wide reading and talking – the breadth as well as the depth?

The first bit of luck was my background in maths and theoretical physics – very employable enabling disciplines (though that wasn’t seen as fashionable at the time). I had a first job at the Rutherford Lab writing the computer programmes to identify bubble chamber events from experiments at CERN. One piece of luck was that, although working in a team, I was given what in hindsight were enormous responsibilities – at a level I can’t imagine being appropriate for a 22-year-old now. This had the advantage of teaching me how to produce things on time – difficult though the work was. Secondly, it was the early days of large main-frame computers and I learned a lot about their enabling significance. I made a decision that became a characteristic of my later career – though heaven knows why I was allowed to do this: I decided to write a general programme that would tackle any event thrown up by the synchrotron. The alternative, much less risky, was to write a suite of much smaller programmes each focused on particular topologies. With hindsight, this was probably unwise though I got away with it. The moral: go for the general if you can. All this was very much ‘blue skies’ research and ‘impact’ was never in my mind.

After a couple of years of this – enjoyable and exciting though it was – I decided that I wanted to do something ‘useful’ – to find a job in the social sciences which would have impact – though we didn’t use that word then. I must have applied for 30 or more jobs and failed to get one. There was very little quantitative social science and the idea of employing someone from theoretical physics was not on anyone’s agenda. So I worked my way into politics and – a long story – ended up on Oxford City Council – which gave me an interest in cities and government which has never left me. And with another piece of luck, I was taken on by a small research group of transport economists in Oxford who had a quantitative problem which demanded ‘big computing’ on the basis that I would do their maths and computing, and they would teach me economics. So I switched fields by a process of apprenticeship.

The research problem – the team funded by the then Ministry of Transport – was the implementation of cost-benefit analysis of large transport projects. This demanded computer models of transport flows in cities – before and after the investment. At the time, the known models were largely in the U.S. and so Christopher Foster, Michael Beesley and I embarked on a tour of the American modellers which included most of those who became iconic names in later years and one of whom became a friend and collaborator over many years. I then started building the model. The standard flow model was a so-called gravity model, based on Newtonian principles but with ‘fudge factors’ added to make it work better. Then a huge chunk of luck for me: I recognised these factors from my student days as being similar to partition functions from statistical mechanics and I was able to rework the basis of the model and shift it from a Newtonian perspective to a Boltzmann statistical averaging one – the so-called entropy maximising models. Around this time, I became Assistant Editor of a new journal – Transportation Research – supporting a Californian, Frank Haight, the Editor. As the copy deadline for Volume 1, Number 3 approached, Frank told me that we didn’t have enough papers, could I help? I offered him my entropy-maximising paper which he gratefully accepted. I had no idea at the time that it would become quite important, but it established my academic and research career. The moral of this: it was not a high impact journal and it was not refereed! And it was driven by a practical problem though it could be seen as basic science – an example of how basic research can emerge from practical concerns.

In the next five years, I made three further moves. The first was to the Ministry of Transport in London where I headed the Mathematical Advisory Unit with the task of building real transport models; then to the Centre for Environmental Studies which gave me the opportunity to broaden my field of research beyond transport; and finally to the University of Leeds. Over a long period – I was in Leeds for a long time – two other general principles emerged and one new idea. First, if you have a good idea in one area – entropy in transport modelling for me – then maximise its use in other areas. Second, if there are alternative approaches to the same problem, explore what is involved in integrating them. The new idea was a basis for building dynamic models of urban growth. I also learned something else about good ideas – especially those that, as in this case, were mathematical: the equations had almost certainly been used in applications in very different domains and these often provided pointers for further development.

Most of this research was carried out in relatively small research groups though in a number of cases – transport models early on, retail models later – there were opportunities for large-scale testing either because of their public investment importance (transport) or commercial value (retail). In small groups – or indeed for individual research – ambition has to be limited, unless it is possible to switch into what I would call ‘proof-of-concept’ mode where only rough ‘testing’ is possible.

4. FOLLOWING FASHION

Much research follows the current fashion. This leads me to develop an argument around two questions. How does ‘fashion’ come about? How should we respond to it? I begin by reflecting on my personal experience.

My research career began in transport modelling (cf. ‘Serendipity’ entry) in the 1960s and was driven forward from an academic perspective by my work on entropy maximising and from a policy perspective by working in a group of economists on cost-benefit analysis. Both modelling and cost-benefit analysis were the height of fashion at the time. (I didn’t choose transport modelling: it was an available job after many failed attempts to find social science employment as a mathematician.) The fashionability of both fields was almost certainly rooted in the highway building programme in the United States in the 1950s: there was a need for good investment appraisal of large transport projects. Planners such as T. R. (‘Laksh’) Lakshmanan and Walter Hansen developed concepts like accessibility and retail models. This leads me to a first conclusion: fashion can be led from the academic side or the ‘real’ policy side – in my case, perhaps unusually, both. It was realised pretty quickly – probably from both sides – that transport modelling and land-use were intertwined and so, led by people such as Britton Harris and Jack Lowry, the comprehensive urban modelling field was launched. I joined this enthusiastically.

These narrower elements of fashion were matched by a broader social science drive to quantitative research though probably the bulk of this was statistical rather than mathematical. It is interesting to review the contributions of different disciplines – and this would make a good research topic in itself. The quantitative urban geographers were important: Peter Haggett, Dick Chorley, Brian Berry, Mike Dacey and more – a distinguished and important community. They approached the beginnings of modelling, but were not modellers. The models themselves grew out of engineering. The economists were surprisingly unquantitative. Walter Isard initiated and led the interdisciplinary movement of ‘regional science’ which thrives today. From a personal point of view, I moved into Geography which proved a good ‘broad church’ base. I was well supported by research council grants and built a substantial modelling research team.

By the late 70s and early 80s, I had become unfashionable – which may be an indicator of the half-life of fashions! There were two drivers: the academic on the one hand and planning and policy on the other. There was Douglas Lee’s ‘Requiem for large-scale models’ (which seemed to me to be simply anti-science but was influential) and a broader Marxist attack on ‘positivist’ modellers – notwithstanding the existence of distinguished Marxist modellers such as Sraffa. And model-based quantitative methods in planning – indeed to an extent planning itself – became unfashionable around the time of the Callaghan government in the late 70s. Perhaps, and probably, the modellers had failed to deliver convincingly.

By the mid 80s, research council funding having dried up, a colleague, Martin Clarke, and I decided to explore the prospect of ‘going commercial’ as a way of replacing the lost research council funding. That is another story but it was successful – after a long ‘start-up’ struggle. As in the early days of modelling, we had ‘first mover’ advantage and we were valued by our clients. It would be difficult to reproduce this precisely because so much of the expertise has been internalised by the big retailers. But that was one response to becoming unfashionable.

By the 2000s, complexity science had become the new fashion. As an enthusiastic follower of Warren Weaver, I knew I was a complexity scientist; I happily rebadged myself, and this led to new and substantial research council funding. In effect, modelling became fashionable again, but under a new label (also supported by the needs of environmental impact assessment in the United States, which needed modelling). I feel now, in the 2010s, that the complexity fashion is already fading and new responses are needed. So we should now turn to the new fashions and see what they mean for our research priorities.

Examples of current fashions follow.

  • agent-based modelling (abm)
  • network analysis
  • study of social media
  • big data
  • smart cities

The first three are academic led, the fourth is shared, and the fifth is policy and technology led (unusually by large companies rather than academia or government).

The first two have some substantial and interesting ideas but on the whole are carried out by researchers who have no connection to other styles of modelling. They have not made much impact outside academia. In the abm case, it is possible to show that with appropriate assumptions about ‘rules of behaviour’, the models are equivalent to more traditional (but under-developed) dynamic models. It may also be the case that, as a modelling technique, abm is more suited to the finer scale – for example pedestrian modelling in shopping precincts. abm is sometimes confused with microsimulation – a field that deserves to be a new fashion, which is developing, but where there is scope for major investment.
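
As a toy illustration of the kind of equivalence claimed here (an assumption-laden sketch, not anyone’s published model), give each simulated individual the rule ‘choose the centre with the highest utility alpha*ln(W_j) - beta*c_j plus an individual (Gumbel) taste draw’; with many agents the aggregate choice shares converge on the corresponding logit/entropy-maximising model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one residential zone, 4 retail centres; all numbers are invented.
W = np.array([150.0, 100.0, 80.0, 60.0])   # centre attractiveness
c = np.array([1.0, 1.5, 2.0, 2.5])         # travel costs from the zone
alpha, beta = 1.0, 1.0
n_agents = 200_000

# Agent rule of behaviour: maximise alpha*ln(W_j) - beta*c_j plus Gumbel taste variation.
V = alpha * np.log(W) - beta * c
choices = np.argmax(V + rng.gumbel(size=(n_agents, len(W))), axis=1)
abm_shares = np.bincount(choices, minlength=len(W)) / n_agents

# Equivalent aggregate model: logit / entropy-maximising shares W_j^alpha exp(-beta c_j).
weights = W**alpha * np.exp(-beta * c)
model_shares = weights / weights.sum()

print("abm shares      :", abm_shares.round(3))
print("aggregate shares:", model_shares.round(3))
```

The interesting abm cases are, of course, those where the rules do not aggregate so neatly – but the exercise shows what has to be assumed before the two styles diverge.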

A curiosity of network analysis is a focus on topology in such a way that valuable and available information is not used. For example, in many instances, flows between nodes are known (or can be modelled) and can be loaded onto networks to generate link loads, but this rich information is not usually used. This is probably a failure of the network community to connect to – even to be aware of – earlier and relevant work. In this case, as in others, there are easy research pickings to be had simply by joining up!

The large-scale study of social media is an interesting phenomenon. I suspect it is done because there are large sources of data that can then be plugged into the set of network analysis techniques mentioned earlier. If this could be seen as modelling telecommunications as an important addition to the comprehensive urban model, then it would be valuable both as a piece of analysis and for telecoms companies and regulators but these connections are not typically made.

The ‘big data’ field is clearly important – but the importance should be measured against the utility of the data in analysis and policy – not as a field in itself. This applies to the growing ‘discipline’ of ‘data science’: if this develops as a silo, the full benefits of new data sources will not be collected. However, there is a real research issue to be tackled here: the design and structure of information systems that connect big data to modelling, planning and policy analysis.

The ‘smart cities’ field is important in that all efficiency gains are to be welcomed. But it is a fragmented field, mostly focused on very specific applications and there is much thinking to be done in terms of integration with other forms of analysis and planning, and being smart for the long run.

There is one important general conclusion to be drawn that I will emphasise very strongly: fashion is important because usually (though not always) it is a recognition of something important and new; but the degrees of swing to fashion are too great. There are many earlier threads which form the elements of core social science which become neglected. Fortunately, there is usually a small but enthusiastic group who keep the important things moving forward and the foundation is there for when those threads become important and fashionable again (albeit sometimes under another name). So in choosing research topics, it is important to be aware of the whole background and not just what is new; sometimes integration is possible; sometimes the old has to be a continuing and developing thread.

5. REAL CHALLENGES

One route into urban research is to reflect on the well known and very real challenges that cities face. The most obvious ones can be classified as ‘wicked problems’ in that they have been known for a long time, usually decades, and governments of all colours have made honourable attempts to ‘solve’ them. Given the limited resources of most academic researchers it could be seen as wishful thinking to take these issues on to a research agenda, but nothing ventured, nothing gained. And we need to bear in mind the PDA framework (cf. Chapter 1): policy, design, analysis. Good analysis will demonstrate the nature of the challenge; the policy is usually to make progress with meeting it; the hard part is the ‘design’ – inventing possible solutions. The analysis comes into play again to evaluate the options. If we focus on the UK context, what is a sample of the issues? Let’s structure the argument in terms of half a dozen headings, and then take examples under each.

  • living in cities – people issues
  • the economy of cities
  • urban metabolism: energy and materials flows
  • urban form
  • infrastructure
  • governance

Then, a sample of issues within each:

  • living in cities – people issues
    • housing, there is a current shortage and this situation will be exacerbated by population growth
    • education – a critical service – upskilling for future proofing and yet a significant percentage leave school inadequate in literacy, numeracy and work skills
    • health – a postcode lottery in the delivery of services?
    • the future of work – what will happen if the much predicted ‘hollowing out’ occurs as middle-range jobs are automated? How will the redundant pay their bills?
  • the economy of cities
    • the ‘economy’ embraces private and public and so has to deliver products, services and jobs (and therefore incomes)
  • urban metabolism: energy and materials flows
    • issues of sustainability and the feasibility – indeed the necessity – of achieving low carbon targets
  • urban form
    • where will the necessary new housing go – 200,000+ p.a. for the foreseeable future?
  • infrastructure
    • accessibilities are crucial for both people and organisations so transport infrastructure and an effective system are correspondingly critical
    • investment in utilities will be necessary to match population growth but also to respond to the sustainability agenda
    • in particular, counting communications and broadband as utilities, how do we secure our future in a competitive world?
  • governance
    • at what levels are planning and policy decisions best made?
    • the security of food, communications and utilities?

This brief analysis throws out one immediate important conclusion: these issues are highly interdependent and one important area of research is to chart these interdependencies and to build policies and plans that take them into account. One obvious research priority, argued by me over and over again, is the need for comprehensive urban models. There are some distinguished examples, but they are not a core part of planning practice. Ideally, therefore, in relation to the issues sketched above, the comprehensive model should have enough detail in it to represent all the problems and any specific piece of research should be tested by the runs of such a model. Neither of these ambitions is fulfilled in practice and so this offers a challenge to modellers as well as to the specifics of real-world issues.

A second conclusion is that any specification of research into any of these issues will be interdisciplinary (qv). Bearing this in mind, let us now work down to another level and pose questions about research.

  • living in cities – people issues
    • housing, there is a current shortage and this situation will be exacerbated by population growth
      • the demography is itself very much worth exploring: populations in many places are, relatively, ageing; and others are being restructured by migration – both in and out. These shifts in many cases relate to work opportunities and there has been relatively little research on these linkages.
      • we need an account – that is, a model – of where people choose to live in relation to their incomes, housing availability, affordability and prices, local environments and accessibilities to work and services – a pretty tall order for the initial analysis
      • there is then a planning issue which links closely to urban form: where should the new housing go? At present it is mainly on the edges of cities, towns and villages with no obvious functional relationship to other aspects of people’s lives; there is related research to be done on developers and house builders and their business models
    • education – a critical service – upskilling for future proofing and yet a significant percentage leave school inadequate in literacy, numeracy and work skills
      • some progress has been made in developing federations of schools to bring ‘failing’ schools into a more successful fold; however, there is hard analysis to be done on other factors – particularly the impact of the social background of children and whether schools’ initiatives need to be extended into a wider community. Again, there are examples but it should be possible to explore the relative successes of initiatives in a wide range of areas.
      • a particular category of concern is looked-after children – children in care. The system is obviously failing as measured by the tiny percentage who progress into higher education and by the high percentage of offenders who have ever been in care.
    • health – a postcode lottery in the delivery of services?
      • this is a sector that is data rich but under-analysed. There are a good number of research projects in the field but it remains relatively fragmented. Does anyone explore an obvious question for example: what is the optimum size of GP surgeries in different kinds of locations?
    • the future of work – what will happen if the much predicted ‘hollowing out’ occurs as middle-range jobs are automated? How will the redundant pay their bills – picked up in the next section?
  • the economy of cities
    • the ‘economy’ embraces private and public and so has to deliver products, services and jobs (and therefore incomes)
      • interesting research has been done on e.g. growing and declining sectors in the economy which can then be translated down to the city level and this can be combined with the ‘replicator’-‘reinventor’ concepts introduced in the Centre for Cities’ Century of Cities paper.
      • this would enable at least short-term predictions of employment change which could also be related to the immigration issues
  • urban metabolism: energy and materials flows
    • issues of sustainability and the feasibility – indeed the necessity – of achieving low carbon targets
      • an obvious research issue here is the monitoring – and analysis of past – trends in relation to sustainability targets. There is some evidence that trends are in the ‘wrong’ direction: trips getting longer, densities falling. If this is the case, can we invent and test alternative futures? Longer trips, but all by new forms of public transport? Some high density development aimed at groups who might appreciate it?
  • urban form
    • where will the necessary new housing go – 200,000+ p.a. for the foreseeable future?
      • Explore possible ‘green belt’ futures – for example analysing the URBED Wolfson Prize model?
  • infrastructure
    • accessibilities are crucial for both people and organisations so transport infrastructure and an effective system are correspondingly critical
      • transport efficient city regions; counties as distributed cities?
      • some systematic research on accessibilities and the ways in which they can be related to utility functions?
    • investment in utilities will be necessary to match population growth but also to respond to the sustainability agenda
    • in particular, counting communications and broadband as utilities, how do we secure our future in a competitive world?
  • governance
    • at what levels are planning and policy decisions best made?
      • charting subsidiarity principles?
    • the security of food, communications and utilities?

This is a very partial and briefly-argued list but I think it exposes the paucity of both particular and integrated research on some of the big challenges. Ready to engage?!

6. ‘RESEARCH ON’ VERSUS ‘RESEARCH FOR’

Let us begin by asserting that any piece of research is concerned with a ‘system of interest’ – henceforth ‘the system’ (cf. Systems thinking). We can then make a distinction between the ‘science of the system’ and the ‘applied science relating to the system’. In the second case, the implication is that the system offers challenges and problems that the science (of that system or possibly also with ‘associated’ systems) might help with. In research terms, this distinction can be roughly classified as ‘research on’ the system and ‘research for’ the system. This might be physics on the one hand, and engineering on the other; or biological sciences and medicine. There will be many groups of disciplines like this where there is a clear division of labour – though whether this division is always either clear or efficient is a matter of debate. In the case of urban research (and possibly the social sciences more generally), possibly because it is an under-developed interdisciplinary area, there is a division of labour but with a significant grey area in between. But there is also a concern in my mind that the division is too sharp and that the balance of research effort is more focused on ‘research on’ rather than contributing to ‘research for’.

There are a number of complications that we have to work to resolve. First, there is the fact that there are disciplinary agendas on cities – in economics, geography and sociology, for example – where they ought to be interdisciplinary. But it does illustrate the fact that there is a ‘research on’ versus ‘research for’ challenge. The ‘research on’ school are concerned with how cities work, the ‘research for’ group with, for example, how to ‘solve’ (if that is the right question) traffic congestion; or housing problems; or social disparities. It is a long list (cf. Real challenges).

A second complicating issue is the research councils ‘impact’ agenda. I have no problem with a requirement that all research should be intended to have impact. The opposite is pretty absurd. However, that depends on the possibility of the impact being intellectual impact, within the science; that is, impact within ‘research on’. What seems to have happened is that the research councils’ definition has narrowed and impact in their sense is intended to relate to ‘real’ problems – in other words, to ‘research for’. Consider physics and engineering: while the tool kits overlap in some respects, they, and the associated mindsets, are pretty different. The same could be argued for research on cities except in this case, we don’t have labels that are analogous to physics and engineering. So we have to invent our own! From a research council perspective, this has not been clearly handled. There is an expectation that for any application, there will be a ‘pathways to impact’ statement. If the research in question is of a ‘research on’ kind, and if the associated tools do not obviously fit ‘research for’, then this is very difficult and there is quite a lot of jumping through small dimension hoops.

A third issue is the influence of the REF (in the UK) on research priorities. Again, there is an element of required impact and yet the bulk of the panels are made up of ‘research on’ academics. It is even argued – or is it just in our subconscious? – that ‘pure’ research is more worthwhile in REF terms than ‘applied’. It was once suggested to me in the context of a university Business School – not in these words – that ‘research on firms’ was more important for the REF than ‘research for firms’ – because the latter could be considered as consultancy and therefore of a lower grade. There is some truth in this in that ‘research on’ can produce wider-ranging ‘general results’ that offer insight as opposed to specific case studies that don’t generalise. But in the social sciences at least, it is the case studies that eventually lead to the general, grounded in evidence.

There is then a fourth issue, more like a challenge: if impact is really desirable – and it is – how can the users get the best from the researchers? It is often argued that the UK has very high quality research but, to a substantial extent, fails to reap the rewards of application. Indeed there have been commissions of many kinds for decades on how academic research can be better linked to application – I would guess a study roughly once every two years. There are various ‘solutions’ and many have been tried but success has been, at best, partial. The ‘research on’ community remains the largest group of academic researchers and retains the ‘prestige’ that serves it well in many ways. There are significant straws in the wind: a shifting of research resources in the direction of Innovate UK and the establishment of the catapults; some redirection of research council funding. But my guess is that there is a battle for hearts and minds still being fought.

So what do we need? Some clarity of thought, some changes of mindset especially in terms of prestige, and perhaps above all, some demonstrators that show that ‘research for’ can be just as exciting as ‘research on’ – in many cases much more so. In the urban research world, we are lucky in principle in that we can have it both ways: discoveries in the science often have pretty immediate applications, but there are opportunities for more ‘research on’ researchers to spend at least some time in the ‘research for’ community!

PART 2. SUPER CONCEPTS

7. SYSTEMS THINKING

When I was studying physics, I was introduced to the idea of ‘system of interest’: defining at the outset that ‘thing’ you were immediately interested in learning about. There are three ideas to be developed from this starting point. First, it is important simply to list all the components of the system, what Graham Chapman called ‘entitation’; secondly, to simplify as far as possible by excluding all possible extraneous elements from this list; and thirdly, taking us beyond what the physicists were thinking at the time, to recognise the idea of a ‘system’ as made up of related elements, so that it was as important to understand these relationships as it was to enumerate the components. All three ideas are important. Identifying the components will also lead us into counting them; simplification is the application of Occam’s Razor to the problem; and the relationships take us into the realms of interdependence. Around the time that I was learning about the physicists’ ideas of a ‘system’, something more general, ‘systems theory’, was in the air though I didn’t become aware of it until much later.

So, always start a piece of research with a ‘system of interest’. Defining the system then raises other questions which have to be decided at the outset – albeit open to later modification: questions of scale[2]. There are three dimensions to this. First, what is the granularity at which you view your system components? Population by age: how many age groups? Or age as a continuous variable? Usually too difficult. Second, how do you treat space? Continuous with Cartesian coordinates? Or as discrete zones? If the latter, what size and shape? Third, since we will always be interested in system evolution and change, how do we treat time? Continuous? Or as a series of discrete steps? If the latter, one minute, one hour, one year, ten years, or what? These choices have to be made in a way that is appropriate to the problem and, often, in relation to the data which will be needed. (Data collectors have already made these scale decisions.)
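
These choices are worth writing down explicitly at the outset. A minimal sketch of such a specification, with purely illustrative values, might look like this:

```python
# A sketch of the scale decisions for a hypothetical urban system of interest;
# every entry is a choice the researcher has to make and defend (values invented).
system_of_interest = {
    "components": ["households", "jobs", "retail centres", "transport flows"],
    "sectoral scale": {"age groups": 5, "household types": 8},                    # granularity
    "spatial scale": {"representation": "discrete zones", "number of zones": 100},
    "temporal scale": {"representation": "discrete steps", "step": "1 year"},
}
```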

Once the system is defined, we have to ask a question like: how does it work? Our understanding, or ‘explanation’ is represented by a theory. There may be an existing theory which may be partially or fully worked out; or there may be very little theory. Part of the research problem is then to develop the theory, possibly stated as hypotheses to be tested.

Then there is usually a third step relating to questions like: how do we represent our theory? How do we do this in such a way that it can be tested? What methods are available for doing this?

In summary, a starting point is to define a system of interest, to articulate a theory about how it works, and to find methods that enable us to represent, explore and test the theory. We can call this the STM approach.

This, through the reference to theory (or hypothesis) formulation and testing, establishes the science base of research. Suppose now that we want to apply our science to real-world problems or challenges. We need to take a further step which in part is an extension of the science in that it is still problem-solving in relation to a particular system of interest but has added dimensions beyond what is usually described as ‘blue-skies’ science. The additions are: articulating objectives and inventing possible solutions. Take a simple example: how to reduce car-generated congestion in a city. The science offers us a mathematical-computer model of transport flows. Our objective is to reduce congestion. The possible solutions range from building new roads in particular places to improving a public transport system to divert people from cars. Each possible solution can be thought of as a plan and the whole activity is a form of planning. In some cases, computer algorithms can invent plans but more usually it is a human activity. For any plan, the new flows can be calculated using the model along with indicators of, for example, improved traffic speeds and consumers’ surplus. A cost-benefit analysis can be carried out and the plan chosen that has the greatest rate-of-return or the greatest benefit-to-cost ratio. In reality, it is never as neat as this, of course.
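
As a toy version of the last step (not a real appraisal), the sketch below compares two hypothetical plans using modelled flows and generalised costs, measures user benefits with the standard ‘rule of a half’ approximation to the change in consumers’ surplus, and ranks the plans by benefit-to-cost ratio; all numbers are invented.

```python
# Toy appraisal sketch; flows, generalised costs and capital costs are invented.

def user_benefit(flow_before, flow_after, cost_before, cost_after):
    # 'Rule of a half' approximation to the change in consumers' surplus.
    return 0.5 * (flow_before + flow_after) * (cost_before - cost_after)

# plan: (daily flow before, flow after, generalised cost before, cost after, capital cost)
plans = {
    "new road":           (10_000, 12_000, 5.0, 4.2, 6_000),
    "better bus service": (10_000, 11_000, 5.0, 4.5, 2_500),
}

for name, (q0, q1, c0, c1, capital) in plans.items():
    benefit = user_benefit(q0, q1, c0, c1)
    print(f"{name:20s} benefit = {benefit:7.0f}   benefit/cost = {benefit / capital:.2f}")
```

In a real appraisal the benefits would be streams over time, discounted, and set against much more than capital cost – hence ‘never as neat as this’.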

How can we summarise this process? I learned from my friend and collaborator, Britton Harris, many years ago that this can be thought of as policy, design and analysis: a PDA framework to complement the STM. His insight was that each of the three elements involved different kinds of thinking – and that it was rare to find these in one person, or applied systematically to real problems. There is a further insight to be gained: the PDA framework can be applied to a problem in ‘pure’ science. The objective, the policy, may be simpler – to articulate a theory – but it is still important to recognise the ‘design’ element – the invention that is required at the heart of the scientific process. And this applies very directly to engineering of course: engineers have both a science problem and a policy and planning problem and both STM and PDA apply.

There is an important corollary of adopting a systems perspective at the outset of a piece of research: it forces interdisciplinarity. This can be coupled with an idea which will be developed more fully later: requisite knowledge. This is simply the knowledge that is required as the basis for a piece of research. When this question is asked about the system of interest, it will almost always demand elements from more than one discipline; and these elements combine into something new – more than the sum of the parts. There is a fundamental lesson here about effective research: it has to be interdisciplinary at the outset.

This provides a framework for approaching a subject, but we still have to choose! These decisions can be informed but are ultimately subjective. A starting point is that they should be interesting to the researcher but also important to some wider community. The topic should be ambitious but also feasible – a very difficult balance to strike.

8. COMBINATORIAL EVOLUTION

Brian Arthur introduced a new and important idea in his book The nature of technology: that of ‘combinatorial evolution’. The argument, put perhaps overly briefly, is essentially this: a ‘technology’, an aeroplane say, can be thought of as a system, and then we see that it is made up of a number of subsystems; and these can be arranged in a hierarchy. Thus the plane has engines, engines have turbine blades, and so on. The control system must sit at a high level in the hierarchy and then at lower levels we will find computers. The key idea is that most innovation comes at lower levels in the hierarchy, and through combinations of these innovations – hence combinatorial evolution. The computer may have been invented to do calculations but, as with aeroplanes, now figures as the lynchpin of sophisticated control systems.

This provides a basis for exploring research priorities. Arthur is in the main concerned with hard technologies and with specifics, like aeroplanes. However, he does remark that the economy ‘is an expression of technologies’ and that technological change implies structural change. Then: ‘…economic theory does not usually enter [here] … it is inhabited by historians’. We can learn something here about dynamics, about economics and about interdisciplinarity! However, let us focus on cities. We can certainly think of cities as technologies – and much of the smart cities agenda can be seen as low-level innovation that can then have higher level impacts. We can also see the planning system as a soft technology. What about the science of cities, and urban modelling? Arthur’s argument about technology can be applied to science. Think of ‘physics’ as a system of laws, theories, data and experiments. Think of spelling out the hierarchy of subsystems and, historically, charting the levels at which major innovations have been delivered. Translate this more specifically to our urban agenda. If (in broad terms) modelling is the core of the science of cities, and that (modelling) science is one of the underpinnings of the planning system, can we chart the hierarchy of subsystems and then think about research priorities in terms of lower-level innovation?

This really needs a diagram, but that is left as an exercise for the reader (a rough sketch in code follows the list below)! Suppose the top level is a working model – a transport model, a retail model or a Lowry-based comprehensive model. We can represent this and three (speculative) lower levels broadly as follows.

  • level 1: working model – static or dynamic
  • level 2 – cf. STM (cf. How to start):
    • system definition (entities, scales: sectoral, spatial, temporal); exogenous, endogenous variables
    • hypotheses, theories
    • means of operationalising (statistics, mathematics, computers, software,…)
    • information system (cleaned-up data; intermediate model to estimate missing data)
    • visualisation methods
  • level 3:
    • explore possible hypotheses and theories for each subsystem
    • data processing; information system building
    • preliminary statistical analysis
    • available mathematics for operationalising
    • software/computing power
  • level 4:
    • raw data sources
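
In place of the missing diagram, the hierarchy in the list above can be captured as a simple nested structure – a sketch only, mirroring the list with nothing added:

```python
# The four levels from the list above, top level first (a sketch, not a full spec).
model_hierarchy = {
    "level 1": ["working model - static or dynamic"],
    "level 2": [
        "system definition (entities; sectoral, spatial, temporal scales; exogenous/endogenous variables)",
        "hypotheses, theories",
        "means of operationalising (statistics, mathematics, computers, software)",
        "information system (cleaned-up data; intermediate models to estimate missing data)",
        "visualisation methods",
    ],
    "level 3": [
        "explore possible hypotheses and theories for each subsystem",
        "data processing; information system building",
        "preliminary statistical analysis",
        "available mathematics for operationalising",
        "software / computing power",
    ],
    "level 4": ["raw data sources"],
}

# Arthur-like reading: scan the lower levels first when looking for innovation.
for level in ("level 4", "level 3", "level 2", "level 1"):
    print(level, "->", "; ".join(model_hierarchy[level]))
```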

An Arthur-like conjecture might be that the innovations are likely to emanate from levels 3 and 4. In level 3, we have the opportunity to explore alternative hypotheses and to refine theories. Something like utility functions, profits and net benefits are likely to be present in some form or other to represent preferences, with any maximisation hypotheses subject to a variety of constraints (which are themselves integral parts of theory-building). We might also conjecture that an underlying feature is that behaviour will be probabilistic, and so this should always be represented. (In fact this is likely to provide the means for integrating different approaches.)

Can we identify likely innovation territories? The ‘big and open data’ movement will offer new sources and this will have impacts through levels 2 and 3. One consequence is likely to be the introduction of more detail – more categories – into the working model, exacerbating the ‘number of variables’ problem, which in turn could drive the modelling norm towards microsimulation. This will be facilitated by increasing computing power. We are unlikely to have fundamentally different underlying hypotheses for theory building but there may well be opportunities to bring new mathematics to bear – particularly in relation to dynamics (cf. Working backwards, forthcoming entry). There is one other possibility of a different kind, reflected in level 2 – system definition – in relation to scales. There is an argument that models at finer scales should be controlled by – made consistent with – models at more aggregate scales. An obvious example is that the population distribution across the zones of a city should be consistent with aggregate level demography; and similarly for the urban economy. An intriguing possibility remains the application of the more aggregate methods (demographic and economic input-output) at fine zone scales.

9. REQUISITE KNOWLEDGE

W Ross Ashby was a psychiatrist who, through books such as Design for a brain, was one of the pioneers of the development of systems theory (cf. Chapter 7) in the 1950s. A particular branch of systems theory was ‘cybernetics’ – from the Greek ‘steering’ – essentially the theory of the ‘control’ of systems. This was, and I assume is, very much a part of systems engineering and it attracted mathematicians such as Norbert Wiener. For me, an enduring contribution was ‘Ashby’s Law of Requisite Variety’ which is simple in concept and anticipates much of what we now call complexity science. ‘Variety’ is a measure of the complexity of a system and is formally defined as the number of possible ‘states’ of a system of interest. A coin to be tossed has two possible states – heads or tails; a machine can have billions. Suppose some system of interest has to be controlled – for simplicity – a machine, a robot say. Then the law of requisite variety asserts that the control system must have at least the same variety as the machine it is trying to control. This is intuitively obvious since any state of the machine must be matched in some way by a state of the control unit – it needs an ‘if … then …’ mechanism. Suppose now that the system of interest is a country and the control system is its government. It is again intuitively obvious that the government does not have the ‘variety’ of the country and so its degree of control is limited. Suppose, further, that the government of a country is a dictatorship and wants a high degree of control. This can only be achieved by reducing the ‘variety’ of the country through a system of rules. This can then be seen as underpinning the argument for devolution from ‘centre’ to ‘local’ – a way of building ‘variety’ into governance. So we begin to see how a concept which appears rooted in engineering can be applied more widely.
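
A toy calculation makes the variety point concrete (numbers invented): a machine with ten independent two-state components has 2^10 = 1024 possible states, so a controller that can only distinguish 2^6 = 64 conditions cannot match every machine state with a distinct response.

```python
# Toy illustration of Ashby's Law of Requisite Variety; the numbers are invented.
machine_components = 10         # independent two-state components in the machine
controller_conditions = 6       # two-state conditions the controller can distinguish

machine_variety = 2 ** machine_components         # possible machine states: 1024
controller_variety = 2 ** controller_conditions   # possible controller states: 64

print("machine variety:   ", machine_variety)
print("controller variety:", controller_variety)
print("requisite variety satisfied?", controller_variety >= machine_variety)
# At least 1024 - 64 = 960 machine states cannot be given their own 'if ... then ...'
# response, so control of this machine is necessarily partial.
```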

We can now take a bigger step and apply it to ‘knowledge’, and, specifically, to the knowledge required to make progress with a research problem. The ‘problem’ is now associated with a ‘system of interest’ and the ‘requisite knowledge’ is that which is required to ‘solve’ the research problem. The application of the law of requisite variety can then be interpreted as relating to the specification of the toolkit of knowledge elements needed to make progress and the law asserts that it must be at least as complex as the problem. We might think of this as the ‘RK-toolkit’. It seems to me that this is an important route into thinking about how to do research. What do I need to know? What do I need in my toolkit? It forces an interdisciplinary perspective at the outset.

Consider, as an example, the housing problem in the UK (cf. ‘Real challenges’): what is the requisite knowledge – the RK toolkit – which would be the basis for shifting from building the current 100,000 new houses p.a. in the UK to an estimated ‘need’ of 200,000+ p.a.? We can get a clue from ‘How to start’: there will be policy, design and analysis elements of the toolkit. Elementary economics will tell us that builders will only build if the products can be sold, and in turn this means they can be afforded – basic supply and demand. Much of the price is determined by the price of land – so land economics is important; or if land price is too high for elements of need to be met, there may be an argument for Government subsidies to generate social housing. Alternatively, prices could be influenced by the cost of building and this raises questions of whether new technology could help – and this brings engineering (and international experience) very much into the toolkit. Given that there is likely to be a substantial expansion, geography kicks in: where can this number of houses be built? This is in part a question of ‘where across the UK’ and in part, ‘where in, or on the periphery of, particular cities’. Or new ‘garden cities’? All of this raises questions for the planning profession. The builders are part of a wider ecosystem, and land owners and the government regulation of land, through taxation or whatever, become part of the research task.

So there are challenges for all of us who might want to work on this issue – academia, divided by discipline; the professions, functioning in silos; the land owners, developers and builders; and government, wanting to make progress but finding it difficult to corral the different groups into an effective unit. In this case, the RK-toolkit can be assembled as a knowledge base, but this sketch shows that an important part of it is a capacity to assemble the right teams.

10. THE BRAIN AS A MODEL

An important part of my intellectual toolkit has been Stafford Beer’s book Brain of the firm, published in 1972. Stafford Beer was a larger-than-life character who was a major figure in operational research, cybernetics, general systems theory and management science. I have a soft spot for him because of his book and work more widely, and because, though I never met him, he wrote to me in 1970 after the publication of my Entropy book saying that it was the best description of ‘entropy’ he had ever read. I see from googling that the book is still in print as a second edition and I think it is also possible to download a pdf. Googling will also fill in more detail on Stafford Beer but beware the entry – which made me think he still had a contemporary supporters’ club – ‘Stafford Beer Festival 2015’, which turns out to be the Stafford Beer and Cider Festival!

The core argument of the book is a simple one: that the brain is the most successful system ever to evolve in nature and, therefore, if we explore it, we might learn something. In the Brain, he expounds the neurophysiology so well – or so it seemed at the time – that I checked the accuracy against some neurophysiology texts and it seemed to pass. What follows is a considerable oversimplification both of the physiology and of Beer’s use of it – so tolerance is in order! The brain has five levels of organisation. The top level – level 5 – is the strategic level. Levels 1-3 represent the autonomic nervous system, which governs actions like breathing without us having to think and also carries instructions to carry out actions at level 1. Occasionally, the autonomic system passes messages upwards if, for example, there is some danger. Level 4 is particularly interesting. It can be seen as an information processor. The brain receives an enormous amount of sense data and would not be able to make sense of this without such a filter. Beer’s argument is that there is no equivalent function in organisations – and this is to their fundamental detriment. He cites as an example the Cabinet Office War Room in World War II (now open as part of the Imperial War Museum) which was set up to handle the real-time flow of information and to deal with information overload.

Beer translated this into a model of an organisation which he called the VSM – the viable system model. The workings of the organisation were at levels 1, 2 and 3. The top level – the company board or equivalent – was level 5. He usually attributes level 4 to the Development Directorate and I can see the case for that, but it doesn’t entirely deal with the filtering operation that any organisation needs. (But this is probably because of an over-rapid rereading on my part.) However, what he did recommend, even in 1972, was ‘a large dynamic electrical display of the organisation’ together with a requirement that all meetings of senior staff took place in that room. The technical feasibility of this is now much higher and fits with the display of ‘big data’ such as that which has been built in Glasgow as an Innovate UK demonstrator.

This still leaves open the question of how to make sense of the mass of data – post filtering – and this is where we need an appropriate model. This connects to a big research question which has already been raised in another blog: how to design the architecture of a multi-dimensional information system that can be aggregated and interrogated in a variety of ways. This constitutes an invitation to work on this.

I think we can gain tremendous insights from the Brain of the firm model when we think about organisations we either work in or work for, or are simply interested in. Additionally, can we learn anything about how to approach research? We are certainly aware of information overload and, functioning as individual researchers, the scale of this makes it impossible to cope with. Can we build an equivalent of a War Room? Forward-looking librarians are probably trying to help us by doing this electronically – but we run into the classification problem again (cf. ‘Atlases and …’). Can we organise any other kind of cooperative effort – crowd sourcing to find the game changers? We might call this the ‘market in research’. The buyers are the researchers who cite other research, and a cumulatively large number of citations usually points to something important. The only problem then is that the information comes too late. We learn about the new fashion; we don’t get in on the ground floor. So an unresolved challenge here!

11. DNA

The idea of ‘DNA’ has become a commonplace metaphor. The real DNA is the genetic code that underpins the development of organisms. I find the idea useful in thinking about the development of – evolution of – cities. This development depends very obviously on ‘what is there already’ – in technical terms, we can think of that as the ‘initial conditions’ for the next stage. We can then make an important distinction between what can change quickly – the pattern of a journey to work for instance – and what changes only slowly – the pattern of buildings or a road network. It is the slowly changing stuff that represents urban DNA. Again, in technical terms, it is the difference between the fast dynamics and the slow dynamics. The distinction is between the underlying structure and the activities that can be carried out on that structure.

It also connects to the complexity science picture of urban evolution and particularly the idea of path dependence. How a system evolves depends on the initial conditions. Path dependence arises because evolution proceeds through a succession of such initial conditions. We can then add that if there are nonlinear relations involved – scale economies for example – then the theory shows us the likelihood of phase changes – abrupt changes in structure. The evolution of supermarkets is one example of this; gentrification is another.

This offers another insight: future development is constrained by the initial conditions. We can therefore ask the question: what futures are possible – given plans and investment – from a given starting point? This is particularly important if we want to steer the system of interest towards a desirable outcome, or away from an undesirable one – also, a tricky one this, taking account of possible phase changes. This then raises the possibility that we can change the DNA: we can invest in such a way as to generate new development paths. This would be the planning equivalent of genetic medicine – ‘genetic planning’. There is a related and important discovery from examining retail dynamics from this perspective. Suppose there is a planned investment in a new retail centre at a particular location. This constitutes an addition to the DNA. The dynamics then shows that this investment has to exceed a certain critical size for it to succeed. If this calculation could be done for real-life examples (as distinct from proof-of-concept research explorations) then this would be incredibly valuable in planning contexts. Intuition suggests that a real-life example might be the initial investment in Canary Wharf in London: that proved big enough in the end to pull with it a tremendous amount of further investment. The same thing may be happening with the Crossrail investment in London – around stations such as Farringdon.
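
For readers who want to see the mechanism, the sketch below sets out the kind of retail dynamics referred to: floorspace grows where revenue exceeds running costs, with revenues computed from a spatial interaction model. All parameters and zone data are invented, so this is an illustration of the critical-size idea rather than a calibrated model.

```python
# Illustrative retail dynamics: dW_j/dt = eps * (revenue_j - k * W_j).
# All numbers are invented; this is a proof-of-concept sketch, not a real city.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_ret = 10, 5                         # residential zones; centre 5 is the proposed one
P = rng.uniform(20_000, 40_000, n_res)       # spending power of each residential zone
c = rng.uniform(1.0, 10.0, (n_res, n_ret))   # travel costs from residences to centres
alpha, beta = 1.3, 0.4                       # alpha > 1: increasing returns to centre size
k, eps = 1.0, 0.01                           # running cost per unit size; adjustment rate

def revenues(W):
    """Revenue attracted to each centre from a spatial interaction model."""
    A = W**alpha * np.exp(-beta * c)
    return (P[:, None] * A / A.sum(axis=1, keepdims=True)).sum(axis=0)

def run(W, steps=5_000):
    """Iterate the floorspace dynamics towards (near) equilibrium."""
    for _ in range(steps):
        W = np.maximum(W + eps * (revenues(W) - k * W), 1e-6)
    return W

W_existing = rng.uniform(40_000, 80_000, n_ret - 1)
small = run(np.append(W_existing, 5_000.0))    # modest initial investment in the new centre
large = run(np.append(W_existing, 90_000.0))   # much larger initial investment
print("new centre, modest investment:", small[-1].round())
print("new centre, large investment: ", large[-1].round())
```

With the returns-to-scale parameter above 1, the long-run fate of the new centre can depend on whether its initial size exceeds a critical value – the point made above about investment on the Canary Wharf scale.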

The ‘structure vs activities’ distinction may be important in other contexts as well. It has always seemed to me in a management context that it is worth distinguishing between ‘maintenance’ and ‘development’, and keeping these separate – that is, between keeping the organisation running as it is, and planning the investment that will shape its future. (cf. The brain as a model)

The DNA idea can be part of our intuitive intellectual toolkit, and can then function more formally and technically in dynamic modelling. The core insight is worth having!!

12. TERRITORIES AND FLOWS

Territories are defined by boundaries at scales ranging from countries (and indeed alliances of countries) to neighbourhoods via regions and cities. These may be government or administrative boundaries, some formal, some less so; or they may be socially defined as in gang territories in cities. Much data relates to territories; some policies are defined by them – catchment areas of schools or health facilities for example. It is at this point that we start to see difficulties. Local government boundaries usually will not coincide with the functional city region; and in the case of catchment boundaries, some will be crossed unless there is some administrative ‘forcing’. So as well as defining territories, we need to consider flows both within but especially between them. Formally, we can call territories ‘zones’, and flows are then between origin zones and destination zones. If the zones are countries, then the flows are trade and migration; if zones within a city region, then the flows may be journeys to work, to retail or other facilities.

It is then convenient to make a distinction between the social and political roles of territories and how we make best use of them in analysis and research. In the former case, much administration is rooted in the government areas and they have significant roles in social identity – ‘I am a Yorkshire-man or -woman’, ‘I am Italian’, and so on; in the latter case, these territories don’t typically suit our purposes though we are often prisoners of administrative data and associated classifications.

So how do we make the best of it for our analysis? A part of the answer is always to make use of the flow data. In the case of functional city regions, the whole region can be divided into smaller zones and origin-destination flows (O-D matrices technically) can be analysed, first to identify ‘centres’ and then perhaps a hierarchy of centres. (Google ‘Nystuen and Dacey, 1961’ for one way to do this systematically.) It is then possible, for example, to define a city region as a ‘travel to work area’ – a TTWA – as in the UK Census. Note, however, that there will always be an element of arbitrariness in this: what is the cut-off – the percentage of flows from an origin zone into a centre – that determines whether that origin is in a particular TTWA or not?
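
A toy illustration of the flow-based approach – invented numbers, a crude rule for identifying ‘centres’ and an arbitrary 40% cut-off, so a sketch of the idea rather than the Nystuen and Dacey procedure or the official TTWA algorithm:

```python
# A toy sketch of grouping zones into travel-to-work areas from an O-D matrix.
# The flow matrix, the 'centre' rule and the 40% cut-off are all invented.
import numpy as np

zones = ["A", "B", "C", "D", "E"]
# flows[i, j] = commuters living in zone i and working in zone j
flows = np.array([
    [120,  80,  10,   5,   5],
    [ 30, 400,  20,  10,  10],
    [ 10,  60, 300,  15,  15],
    [  5,  10,  60, 150,  20],
    [  5,  10,  30,  90,  60],
])

# Treat zones that attract more workers than they send out as candidate centres.
centres = [j for j in range(len(zones)) if flows[:, j].sum() > flows[j, :].sum()]

cutoff = 0.4   # an origin joins a centre's TTWA if >= 40% of its out-flows go there
for i, origin in enumerate(zones):
    shares = flows[i, centres] / flows[i, :].sum()
    best = int(np.argmax(shares))
    if shares[best] >= cutoff:
        print(f"{origin} -> TTWA of centre {zones[centres[best]]} ({shares[best]:.0%})")
    else:
        print(f"{origin} -> unassigned at this cut-off")
```

With these invented flows, one origin zone falls just below the cut-off and is left unassigned – exactly the element of arbitrariness noted above.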

In analysis terms, I would argue that the use of flow data is always critical. Very few territories – zones at any scale – are self-contained. And the flows across territorial boundaries, as well as the richer sets of O-D flows, are often very interesting. An obvious example is imports and exports across a national boundary from which the ‘balance of payments’ can be calculated – saying something about the health of an economy. In this case, the data exists (for many countries) but in the case of cities, it doesn’t and yet the balance of payments for a city (however defined) is a critical measure of economic health. (There is a big research challenge there.)

It is helpful to point to some contrasts in both administration and analysis when flows are not taken into account, and then to consider what can be done about this. There are many instances when catchments are defined as areas inside fixed boundaries – even when they are not defined by government. Companies, for example, might have CMAs – customer market areas; primary schools might draw a catchment boundary on a map giving priority to ‘nearness’ but trying to ensure that they get the correct number of pupils. In some traditional urban and regional analysis – in the still influential Christaller central place theory for example – market areas are defined around centres – in Christaller’s case nested in a hierarchy. This makes intuitive sense, but has no analytical precision because the market areas are not self-contained. As it happens, there is a solution!

Think of a map of facilities – shopping centres, hospitals, schools or whatever – and for each, add to the map a ‘star’ of the origins of users, with each link being given a width to represent that number of users. For each facility, that star is the catchment population. And it all adds up properly: the sum of all the catchment populations equals the population of the region. This, of course, represents the situation as it actually is and is fine for retail analysis for example. It is also fine for the analysis of the location of health facilities. It may be less good for primary schools that are seeking to define an admissions policy.

A particular application of the ‘catchment population’ concept is in the calculation of performance indicators. If cost of delivery per capita is an important indicator, then this can be calculated as the cost of running the facility divided by the catchment population. It is clearly vital that there is a good measure of catchment population. In this case, the ‘star’ is better than the ‘territory’. But the concept can be applied the other way round. Focus on the population of a small zone within a city and then build a reverse star: link to the facilities serving that zone, each link weighted by what is delivered. What you then have is a measure of effective delivery and by dividing by the zonal population, you have a per capita measure. (An alternative, and related, measure is ‘accessibility’.) This may sound unimportant, but consider, say, supermarkets and dentists. On a catchment population basis, any one of these facilities may be performing well. On a delivery basis to a population, analysis will turn up areas that are ‘supermarket deserts’ (usually where poorer people live – those who would like access to the cheaper food) or have poor access to dental treatment – even though the facilities themselves are perfectly efficient.
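
A minimal numerical sketch of both calculations – catchment populations from the ‘star’, and per-capita delivery from the ‘reverse star’ – using invented populations, usage shares, costs and delivery figures, and one possible way of operationalising the reverse star:

```python
# 'Catchment populations' and per-capita delivery indicators computed from flows
# rather than fixed boundaries. All numbers are invented.
import numpy as np

pop = np.array([10_000, 6_000, 4_000])          # population of three residential zones
# share[i, j]: proportion of zone i's population using facility j (rows sum to 1)
share = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.6, 0.1],
    [0.1, 0.1, 0.8],
])
flows = pop[:, None] * share                    # users of facility j living in zone i

catchment = flows.sum(axis=0)                   # the 'star' for each facility
assert np.isclose(catchment.sum(), pop.sum())   # catchments add up to the regional population

running_cost = np.array([500_000, 300_000, 250_000])       # cost of running each facility
cost_per_capita = running_cost / catchment                  # facility performance indicator

# The 'reverse star': what each zone receives, per head of its own population
delivered = np.array([40_000, 25_000, 20_000])  # units of service delivered by each facility
delivery_to_zone = (flows / catchment) * delivered          # zone's share of each facility's output
delivery_per_capita = delivery_to_zone.sum(axis=1) / pop    # zonal indicator

print("catchment populations:", catchment.round())
print("cost per capita by facility:", cost_per_capita.round(2))
print("delivery per capita by zone:", delivery_per_capita.round(3))
```

The assert line is the ‘it all adds up properly’ point made above: the sum of the catchment populations equals the population of the region.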

So what do we learn from this: that we have to work with territories, because they are administratively important and may provide the most data; but we should always, where at all possible, make use of all the associated flows, many of which cross territorial boundaries, and then calculate useable concepts like catchment populations and delivery indicators ‘properly’.

13. EQUATIONS WITH NAMES: THE IMPORTANCE OF LOTKA AND VOLTERRA (AND TOLSTOY?)

The most famous equations with names – in one case by universal association – seem to come from physics: Newton’s Law of Gravity – the gravitational force between two objects is proportional to the product of their masses and inversely proportional to the square of the distance between them; Maxwell’s equations for electromagnetic fields; the Navier-Stokes equations in fluid dynamics; and E = mc², Einstein’s equation which converts mass into energy. The latter is the only equation to appear in the index (under ‘E’) in Ian Stewart’s book ‘Seventeen equations that changed the world’. While the gravitational law has been used to represent situations where distance attenuation is important, the translation is analogous and not exact. An interesting example, pointed out to me by Mark Birkin, is Tolstoy in ‘War and Peace’: “Meanwhile, the very next morning after the battle, the French army moved against the Russians, carried along by its own impetus, now accelerating in inverse proportion to the square of the distance from its goal.” Penguin edition, 2005. Tolstoy would have written this in the 1860s! The physics equations, on the whole, work in physics and not elsewhere. An exception – that is, it does work elsewhere and has served me well in my own work – is Boltzmann’s equation for entropy, S = k log W (to be found on his gravestone in Vienna). The other equations which have served me well – plural because they come in several forms – are the Lotka-Volterra equations, originally developed in ecology. Because of the nature of ecology relative to physics, they do not deliver the physics kind of ‘exactness’ but this may in part be the reason for their utility in translation to other disciplines.

The Boltzmann entropy-maximising method works for any problem (and hence in a variety of fields) where there are large numbers of weakly-interacting elements where interesting questions can be posed about average properties of the system. Boltzmann does this for the distribution of energy levels of particles in gases at particular temperatures for example. In my own work, I use the method to calculate, for example, journey to work flows in cities. The entropy measure was also introduced by Shannon into information theory and in one sense underpins much of computer science. When he produced his equation to measure ‘information’ he is said to have consulted the famous mathematician von Neumann on what to call the main term. “Call it ‘entropy’”, von Neumann replied (paraphrased): “It is like the entropy in physics and if you do this you will find in any argument, no one will understand it and you will always win!” I would dare to say that von Neumann was wrong in this respect: it can be explained. Cesar Hidalgo in his recent book ‘Why information grows’ makes the interesting point about Boltzmann’s work that it crosses and links scales – the atoms in the micro with the thermodynamic properties of the macro; this was unusual at the time and perhaps still is.
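
As an illustration of the method in the journey-to-work setting, here is a minimal doubly-constrained entropy-maximising model in Python, with invented zone totals, costs and a β parameter that would normally be estimated in calibration:

```python
# A minimal doubly-constrained entropy-maximising journey-to-work model:
# T_ij = A_i * B_j * O_i * D_j * exp(-beta * c_ij). Numbers are illustrative.
import numpy as np

O = np.array([4_000, 3_000, 2_000], dtype=float)   # workers living in each zone
D = np.array([5_000, 3_000, 1_000], dtype=float)   # jobs in each zone
c = np.array([[1.0, 3.0, 5.0],
              [3.0, 1.0, 3.0],
              [5.0, 3.0, 1.0]])                     # travel costs between zones
beta = 0.5                                          # cost-sensitivity parameter

F = np.exp(-beta * c)
A = np.ones(len(O))
B = np.ones(len(D))
for _ in range(100):                                # iterate the balancing factors
    A = 1.0 / (F @ (B * D))
    B = 1.0 / (F.T @ (A * O))

T = (A * O)[:, None] * (B * D)[None, :] * F         # predicted journey-to-work flows
print(T.round())
print("row sums match O:", np.allclose(T.sum(axis=1), O))
print("col sums match D:", np.allclose(T.sum(axis=0), D))
```

The balancing factors A and B are what guarantee that the predicted flows respect the known totals of workers and jobs in each zone.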

The Lotka-Volterra equations are concerned with systems of populations of different kinds – different species in ecology for example. In one sense, their historical roots can be related back to Malthus and his exponential ‘growth of population’ model. In that model, there were no limits to growth, and these were supplied by Verhulst who dampened growth to produce the well-known logistic curve. (Bob May in the 1970s showed that this simple model has remarkable properties and was the route into chaos theory.) What Lotka and Volterra did – each working independently, unknown to each other – was to model two or more populations that interacted with each other. The simplest L-V model is the well known two-species predator-prey model. There is a logistic equation for each species linked by their interactions: the predator species grows when there is an abundance of prey; the prey species declines when it is eaten by the predator. Not surprisingly, there is an oscillating solution. What is more interesting in terms of the translation into other fields is the ‘competition for resources’ form of the L-V model. In this case, two or more species compete for one or more resources and this provides a way of representing interactions between species in an ecosystem. The translation comes through identifying systems of interest in which populations of other kinds compete for resources. There are examples in chemistry where molecules in a mixture compete for energy, in geography where retailers compete for consumers (as in my own work with Britton Harris) and in security with Lewis Fry Richardson’s model of arms races and wars. There are undoubtedly many more possibilities.
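
The oscillating predator-prey solution is easy to reproduce; a minimal sketch with invented parameters and a simple time-stepping scheme:

```python
# The classic two-species Lotka-Volterra predator-prey model,
# dx/dt = a*x - b*x*y, dy/dt = -c*y + d*x*y, with illustrative parameters.
a, b, c, d = 1.0, 0.1, 1.5, 0.075
x, y = 10.0, 5.0            # initial prey and predator populations
dt, steps = 0.001, 30_000   # simple Euler stepping over 30 time units

history = []
for step in range(steps):
    dx = (a * x - b * x * y) * dt
    dy = (-c * y + d * x * y) * dt
    x, y = x + dx, y + dy
    if step % 5_000 == 0:
        history.append((round(x, 1), round(y, 1)))

print(history)   # snapshots every five time units show the oscillation
```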

Lotka, Volterra and Richardson were working from the 1920s to the 1940s and there are interesting common features of their research. None of them worked primarily – at least in the first instance – in ecology. Lotka was a mathematician and chemist, and later an actuary; Volterra was a mathematician and an Italian Senator. Both came to mathematical biology relatively late. Richardson was a distinguished meteorologist and later a College Principal. It is worth looking at their original papers to see the extraordinary range of examples they pursued in each case (with real data which must have been difficult to accumulate) – particularly bearing in mind that there were no computers. Indeed, Richardson, at the end of one of his papers, thanks “… the Government Grant Committee of the Royal Society for the loan of a calculating machine”! It was also interesting, perhaps not surprising given their mathematical skills, that they explored the mathematical properties of these systems of equations, in various forms, in some depth. Their work at the time was picked up by others – notably V. A. Kostitzin. I picked up a second-hand copy of his 1939 book ‘Mathematical biology’ via the internet after searching for Volterra’s work: Volterra wrote a generous preface to the book!

The Lotka-Volterra equations represent one of the keys to a particular kind of interdisciplinarity: a concept that can be applied across many disciplines because of the nature of what is a generic problem – modelling the ‘competition for resources’. In a particular instance of a research challenge, the trick is to be aware that the problem may be generic and that there are elements of a toolkit lurking in another discipline!


PART 3. WHAT TO DO NEXT

14. WHAT WOULD WARREN WEAVER SAY NOW?

Warren Weaver was a remarkable man. He was a distinguished mathematician and statistician. He made important early contributions on the possibility of the machine translation of languages. He was a fine writer who recognised the importance of Shannon’s work on communications and the measurement of information and he worked with Shannon to co-author ‘The mathematical theory of communication’. But perhaps above all, he was a highly significant science administrator. For almost 30 years, from 1932, he worked in senior positions for the Rockefeller Foundation, latterly as Vice-president. I guess he had quite a lot of money to spend. From his earliest days with the Foundation, he evolved a strategy which was potentially a game-changer, or at the very least, seriously prescient: he switched his funding priorities from the physical sciences to the biological. In 1948, he published a famous paper in The American Scientist that justified this – maybe with an element of post hoc rationalisation – on the basis of three types of problem (or three types of system – according to taste): simple, of disorganised complexity and of organised complexity. Simple systems have a relatively small number of entities; complex systems have a very large number. The entities in the systems of disorganised complexity interact only weakly; those of organised complexity have entities that interact strongly. In the broadest terms – my language not his – Newton had solved the problems of simple systems and Boltzmann those of disorganised complexity. The biggest research challenges, he argued, were those of systems of organised complexity and more of these were to be found in the biological sciences than the physical. How right he was and it has only been after some decades that ‘complexity science’ has come of age – and become fashionable. (I was happy to re-badge myself as a complexity scientist which may have helped me to secure a rather large research grant.)

There is a famous management scientist, no longer alive, called Peter Drucker. Such was his fame that a book was published confronting various business challenges with the title: ‘What would Peter Drucker say now?’. Since to my knowledge, no one has updated Warren Weaver’s analysis, I am tempted to pose the question ‘What would Warren Weaver say now?’. I have used his analysis for some years to argue for more research on urban dynamics – recognising cities as systems of organised complexity. But let’s explore the harder question: given that we understand urban organised complexity – though we haven’t progressed a huge distance with the research challenge – if Warren Weaver were alive now and could invest in research on cities, can we imagine what he might say to us? What could the next game changer be? I will argue it for ‘cities’ but I suspect, mutatis mutandis, the argument could be developed for other fields. Let’s start by exploring where we stand against the original Weaver argument.

We can probably say a lot about the ‘simple’ dimension. Computer visualisation for example can generate detailed maps on demand which can provide excellent overviews of urban challenges. We have done pretty well on the systems of disorganised complexity in areas like transport, retail and beyond. This has been done in an explicit Boltzmann-like way with entropy maximising models but also with various alternatives – from random utility models via microsimulation to agent-based modelling (ABM). We have made a start on understanding the slow dynamics with a variety of differential and difference equations, some with roots in the Lotka-Volterra models, some connected to Turing’s model of morphogenesis. What kinds of marks would Weaver give us? Pretty good on the first two: making good use of dramatically increased computing power and associated software development. I think on the disorganised complexity systems, when he saw that we have competing models for representing the same system, he would tell us to get that sorted out: either decide which is best and/or work out the extent to which they are equivalent or not at some underlying level. He might add one big caveat: we have not applied this science systematically and we have missed opportunities to use it to help tackle major urban challenges. On urban dynamics and organised complexity, we would probably get marks for making a goodish start but with a recommendation to invest a lot more.

So we still have a lot to do – but where do we look for the game changers? Serious application of the science – equilibrium and dynamics – to the major urban challenges could be a game changer. A full development of the dynamics would open up the possibility of ‘genetic planning’ by analogy with ‘genetic medicine’. But for the really new, I think we have to look to rapidly evolving technology. I would single out two examples, and there may be many more. The first is in one sense already old hat: big data. However, I want to argue that if it can be combined with high-speed analytics, this could be a game changer. The second is something which is entirely new to me and may not be well known in the urban sciences: block chains. A block is some kind of set of accounts at a node. A block chain is made up of linked nodes – a network. There is much more to it and it is being presented as a disruptive technology that will transform the financial world (with many job losses?). If you google it, you will find out that it is almost wholly illustrated by the bitcoin system. A challenge is to work out how it could transform urban analytics and planning.

I have left serious loose ends which I won’t be able to tie up here but whose challenges I will begin to elaborate in subsequent chapters: ‘Competing models’ (Chapters 16 and 17); ‘Big data and high-speed analytics’ (Chapter 18); and ‘Block chains’ (Chapter 19).

15. THE POWER OF MODELLING: UNDERSTANDING AND PLANNING CITIES

A brief history

The ‘science of cities’ has a long history. The city was the market for von Thunen’s analysis of agriculture in the early 19th Century. There were many largely qualitative explorations in the first half of the 20th Century. However, cities are complex systems and the major opportunities for scientific development came with the beginnings of computer power in the 1950s. This coincided with major investment in highways in the United States and computer models were developed to predict both current transport patterns and the impacts of new highways. Plans could be evaluated and the formal methods of what became cost-benefit analysis were developed around that time. However, it was always recognised that transport and land use were interdependent and that a more comprehensive model was needed. There were several attempts to build such models but the one that stands out is I. S. (Jack) Lowry’s model of Pittsburgh which was published in 1964. This was elegant and (deceptively) simple – represented in just 12 algebraic equations and inequalities. Many contemporary models are richer in detail and have many more equations but most are Lowry-like in that they have a recognisably similar core structure.[3]

So what is this core and what do we learn from it? How can we add to our understanding by adding detail and depth? What can we learn by applying contemporary knowledge of system dynamics? What does all this mean for future policy development and planning? The argument is illustrated and referenced from my own experience as a mathematical and computer urban modeller but the insights work on a broader canvas.

The Lowry model

Lowry[4] started with a definition of the urban economy in two broad categories: ‘basic’ – mainly industry, and from the city’s perspective, exporting; and ‘retail’, broadly defined, meaning anything that serves the population[5]. He then introduced some key hypotheses about land. For each zone of the city he took total land, identified unusable land, allocated land to the basic sector and then argued that the rest was available to retail and housing, with retail having priority. Land available for housing, therefore, was essentially a residual.

A model run then proceeds iteratively. Basic employment is allocated exogenously to each zone – possibly as part of a plan. This employment is then allocated to residences and converted into total population in each zone. This link between employment zones and residential zones can be characterised as ‘spatial interaction’ manifested by the ‘journey to work’. The population then ‘demands’ retail services and this generates further employment which is in turn allocated to residential zones. (This is another spatial interaction – between residential and retail zones.) At each stage in the iteration the land use constraints are checked and if they are exceeded (in housing demand) the excess is reallocated. And so the city ‘grows’. This growth can be interpreted as the model evolving to an equilibrium at a point in time or as the city evolving through time – an elementary form of dynamics.
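
To make the loop concrete, here is a stripped-down sketch of a Lowry-type iteration in Python. It keeps only the skeleton – spatial interaction is reduced to bare distance decay, and the land-availability constraints and reallocation step are omitted – and all numbers are invented.

```python
# A stripped-down Lowry-type iteration: basic employment is given exogenously,
# workers are allocated to residences, residents generate retail employment,
# and the loop repeats until it converges. Land constraints are omitted.
import numpy as np

c = np.array([[1.0, 2.0, 4.0],
              [2.0, 1.0, 2.0],
              [4.0, 2.0, 1.0]])          # inter-zonal travel costs
beta_work, beta_retail = 0.8, 1.2        # distance-deterrence parameters
inv_activity_rate = 2.5                  # total population per employed resident
retail_ratio = 0.2                       # retail jobs generated per resident

def allocate(totals_at_origin, beta):
    """Spread activity from its origin zones across destination zones
    using a simple distance-decay (spatial interaction) rule."""
    W = np.exp(-beta * c)
    P = W / W.sum(axis=1, keepdims=True)     # each row: destination shares for one origin
    return totals_at_origin @ P

E_basic = np.array([2_000.0, 500.0, 200.0])  # exogenously located basic employment
E_retail = np.zeros(3)
for _ in range(50):                           # the Lowry-style loop
    E_total = E_basic + E_retail
    population = inv_activity_rate * allocate(E_total, beta_work)    # jobs -> residents
    E_retail_new = retail_ratio * allocate(population, beta_retail)  # residents -> retail jobs
    if np.allclose(E_retail_new, E_retail):
        break
    E_retail = E_retail_new

print("population by zone:       ", population.round())
print("retail employment by zone:", E_retail.round())
```

A plan could then be tested by changing the exogenous basic employment vector and rerunning the loop – essentially the exogenous/endogenous distinction discussed later in this chapter.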

The essential characteristics of the Lowry model which remain at the core of our understanding are

  • the distinction between basic (outward serving) and retail (population serving) sectors of the urban economy;
  • the ‘spatial interaction’ relationships between work and home and between home and retail;
  • the demand for land from different sources, and in particular housing being forced to greater distances from work and retail as the city grows. This has obvious implications for land value and rents.

Towards realism.

In the half century since Lowry’s work was published, depth and detail have been added and the models have become realistic, at least for representing a working city and for short-run forecasting.  The longer run still provides challenges as we will see. It is now more likely that the Lowry model iteration would start with some ‘initial conditions’ that represent the current state. The model would then represent the workings of the city and could be used to test the impact of investment and planning policies in the short run. The economic model and the spatial interaction models would be much richer in detail and while it remains non-trivial to handle land constraints, submodels of land value both help to handle this and are valuable in themselves.

Specifically:

  • the key variables can all be disaggregated – people for example can be characterised by age, sex, educational attainment and skills and so be better matched to a similarly disaggregated set of economic sectors – demanding a variety of skills and offering a range of incomes;
  • population analysis and forecasting can be connected to a fully developed demographic model;
  • the economy can be described by full input-output accounts and the distinction between basic and retail can be blurred through disaggregation;
  • the residential location component can be enriched through the incorporation of utility functions with a range of components and house prices can be estimated through estimates of ‘pressure’, thus facilitating the effective modelling of which types of people live where;
  • this all reinforces the idea that the different elements of the city are all interdependent.

‘Housing pressure’ will be related to the handling of land constraints in the model. In the Lowry case, this was achieved by the reallocation of an undifferentiated population when zones became ‘full’. With contemporary models, because house prices can be estimated (or some equivalent), it is these prices that handle the constraints.

While the Lowry-type models remain comprehensive in their ambition, sectoral models – particularly in the transport and retail cases – are usually developed separately in even greater depth and as such, they can be used for short run forecasting. Supermarket companies, for example, routinely use such models to estimate the revenue attracted to proposed new stores which supports the planning of their investment strategies.[6]

Understanding behaviour.

The models as described above are essentially statistical averaging models[7] and they work well for large populations where the predictions of the models are of ‘trip bundles’ rather than of individual behaviour. The models work well precisely because of this averaging process which takes out the idiosyncrasies of individuals. They use the mathematics – but with a different theoretical base – developed by Boltzmann in the late 19th Century in physics. But what can we then say about individual behaviour? Two things: we can interpret the ‘averaging models’, and we can seek to model individual behaviour directly.

In the first case, there are elements of the models that can be interpreted as individual utility functions. In the retail case for example, it is common to estimate the perceived benefits of shopping centre size and to set these against the costs of access (including money costs and estimated values of different kinds of time). What the models do through their averaging mechanism is to represent distributions of behaviour around average utilities. This is much more realistic than the classic economic utility maximising models as shown through goodness-of-fit measures.

The second case demands a new kind of model and these have been developed as so-called agent-based models (ABMs). A population of individual ‘agents’ is constructed along with an ‘environment’. The agents are then given rules of behaviour and the system evolves. If the rules are based on utility maximisation on a probabilistic basis, then the two kinds of model can be shown to be broadly equivalent.
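
A minimal sketch of the equivalence claim: agents choose a destination probabilistically from a utility function (invented sizes, costs and parameters), and with enough agents the simulated shares converge on the averaged-model shares.

```python
# Agents choose a shopping destination with probabilities derived from a
# utility function (benefit of centre size minus cost of access). With many
# agents, the aggregate choices approach the averaged-model prediction.
# All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
size = np.array([50_000.0, 20_000.0, 10_000.0])   # floorspace of three centres
cost = np.array([4.0, 2.0, 1.0])                  # access cost from one residential zone
alpha, beta = 1.0, 0.5

utility = alpha * np.log(size) - beta * cost
p = np.exp(utility) / np.exp(utility).sum()       # choice probabilities (logit form)

n_agents = 100_000
choices = rng.choice(len(size), size=n_agents, p=p)
observed_shares = np.bincount(choices, minlength=len(size)) / n_agents

print("averaged-model shares:", p.round(3))
print("agent-based shares:   ", observed_shares.round(3))
```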

The argument to date has been essentially geo-economic though with some implicit sociology in the categorisation of variables when the models are disaggregated. There is more depth to be added in principle from sociological and ethnographic studies and if new findings can be clearly articulated, this kind of integration can be achieved.

The harder challenges: dynamics and evolution.

The models described thus far represent the workings of a city at a point in time – give or take the dynamic interpretation of the Lowry model. There is an implicit assumption that if there is a ‘disturbance’ – an investment in a new road or a housing estate for example – then the city returns to equilibrium very quickly and so this can be said to characterise the ‘fast dynamics’. It does mean that these models can be used, and are used, to estimate the impacts of major changes in the short term. The harder challenge is the ‘slow dynamics’ – to model the evolution of the slower changing structural features of a city over a longer period. This takes us into interdisciplinary territory now known as ‘complexity science’. When the task of building a fully dynamic model is analysed, it becomes clear that there are nonlinear relationships – for example, as retail centres grow, there is evidence that there are positive returns to scale. Technically, we can then draw on the mathematics of nonlinear complex systems which show that we can expect to find path dependence – that is, dependence on initial conditions – and phase changes – that is, abrupt changes in form as ‘parameters’ (features such as income or car ownership) pass through critical values. The particular models in mathematical terms bear a family relationship to the Lotka-Volterra models[8] originally designed to model ecological systems in the 1930s but which can now be seen as having a much wider range of application[9].

These ideas can be illustrated in terms of retail development. In the late 1950s and early 60s, corner-shop food retailing was largely replaced by supermarkets. By the standards of urban structural change, this was very rapid, and it can be shown that this arose through a combination of increasing incomes and car ownership – hence, in effect, increasing accessibility to more distant places. This was a phase change. Path dependence is illustrated by the fact that if a new retail centre is developed, its success in terms of revenue attracted will be dependent on the existing pattern of centres – the initial conditions – and again this can be analysed using dynamic models.

This leads us to two fundamental insights: first, it is impossible to forecast for the long term because of the likelihood of phase changes at some point in the future; and secondly, the initial structure of the city – the ‘initial conditions’ – might be thought of as the ‘DNA’ of the city and this will in substantial part determine what futures are possible. Attempts to plan new and possibly more desirable futures can be thought of as ‘genetic planning’ by analogy with genetic medicine.

Given these insights, how can we investigate the long term – 25 or 50 years ahead? We can investigate a range of futures through the development of scenarios; we can then deploy Lowry–Boltzmann-like models to investigate the performance of these, and we can use the fully dynamic Lotka-Volterra models to explore the possible paths, giving insights into what has to be done to achieve these futures.

Models in policy development and planning.

There is a key distinction in the application of models: that between variables which are exogenous to the model and those which are endogenous. The exogenous variables are specified either as forecasts or as components of plans and the model can then be run to calculate the endogenous variables for the new situation. This is done more or less routinely in transport and retail sector planning. For example, a new road can be ‘inserted’ into the model, the model rerun, and the ‘adjusted’ city explored. In this case, a cost-benefit analysis can be carried out along with the calculation of accessibilities. In the case of retail, a developer or a retailer can run the model to calculate the revenue attracted to a new store and then calculate the maximum level of investment that would make such a store profitable – often, how much to bid for a site. It is possible now, but relatively rare, to apply these methods in the public sector in fields such as education and health and indeed, model-based methods could be used to underpin master planning and thus contribute to effective housing development and associated green belt policies.

As we have seen in our brief review of dynamics, this only works in this way for the short run – the impacts of building a new road or opening a new store. For the long run, it is necessary to shift to scenario development and this creates opportunities to explore possible solutions to the biggest challenges – the so-called wicked problems.

It is not difficult to construct a wicked problems list: regeneration, especially economic, in many ‘poor’ towns, embracing the north vs south issues – opportunities for fulfilling work more broadly; chronic housing shortages; transport congestion, limiting accessibilities; the long tail of failure in education; the post-code lottery aspect of health care; and perhaps the biggest challenge of all, responding to climate change and low-carbon targets. Applications of computer models will not solve these problems. This brings home the policy and design dimensions of planning: policies to attack wicked problems need to be ‘serious’ and to be seen to be so; possible solutions have to be invented. These ambitions then provide the basis for developing more radical scenarios as well as ‘more of the same’. And then the skills of the modeller kick in again in the analysis of feasibility, calculating costs and benefits, and charting the path from A to B – from the present to a rewarding future.

16. COMPETING MODELS-1: TRUTH IS WHAT WE AGREE ABOUT

I have always been interested in philosophy. I was interested in the big problems – the ‘What is life about?’ kind of thing with, as a special subject, ‘What is truth?’. How can we know whether something – a sentence, a theory, a mathematical formula – is true? And I guess because I was a mathematician and a physicist early in my career, I was particularly interested in the subset of this which is the philosophy of mathematics and the philosophy of science. I read a lot of Bertrand Russell – which perhaps seems rather quaint now. This had one nearer contemporary consequence. I was at the first meeting of the Vice-Chancellors of universities that became the Russell Group. There was a big argument about the name. We were meeting in the Russell Hotel and after much time had passed, I said something like ‘Why not call it the Russell Group?’ – citing not just the hotel but also Bertrand Russell as a mark of intellectual respectability. Such is the way that brands are born.

The maths and the science took me into Popper and the broader reaches of logical positivism. Time passed and I found myself a young university professor, working on mathematical models of cities, then the height of fashion. But fashions change and by the late 70s, on the back of distinguished works like David Harvey’s ‘Social justice and the city’, I found myself under sustained attack from a broadly Marxist front. ‘Positivism’ became a term of abuse and Marxism, in philosophical terms – or at least my then understanding of it, merged into the wider realms of structuralism. I was happy to come to understand that there were hidden (often power) structures to be revealed in social research that the models I was working on missed, therefore undermining the results.

This was serious stuff. I could reject some of the attacks in a straightforward way. There was a time when it was argued that anything mathematical was positivist and therefore bad and/or wrong. This could be rejected on the grounds that mathematics was a tool and that indeed there were distinguished Marxist mathematical economists such as Sraffa. But I had to dig deeper in order to understand. I read Marx, I read a lot of structuralists some of whom, at the time, were taking over English departments. I even gave a seminar in the Leeds English Department on structuralism!

In my reading, I stumbled on Jurgen Habermas and this provided a revelation for me. It took me back to questions about truth and provided a new way of answering them. In what follows, I am sure I oversimplify. His work is very rich in ideas, but I took a simple idea from it: truth is what we agree about. I say this to students now who are usually pretty shocked. But let’s unpick it. We can agree that 2 + 2 = 4. We can agree about the laws of physics – up to a point anyway – there are discoveries to be made that will refine these laws as has happened in the past. That also connects to another idea that I found useful in my toolkit: C. S. Peirce and the pragmatists. I will settle for the colloquial use of ‘pragmatism’: we can agree in a pragmatic sense that physics is true – and handle the refinements later. I would argue from my own experience that some social science is ‘true’ in the same way: much demography is true up to quite small errors – think of what actuaries do. But when we get to politics, we disagree. We are in a different ball park. We can still explore and seek to analyse and having the Habermas distinction in place helps us to understand arguments.

How does the agreement come about? The technical term used by Habermas is ‘intersubjective communication’ and there has to be enough of it. In other words, the ‘agreement’ comes on the back of much discussion, debate and experiment. This fits very well with how science works. A sign of disagreement is when we hear that someone has an ‘opinion’ about an issue. This should be the signal for further exploration, discussion and debate rather than simply a ‘tennis match’ kind of argument.

Where does this leave us as social scientists? We are unlikely to have laws in the same way that physicists have laws but we have truths, even if they are temporary and approximate. We should recognise that research is a constant exploration in a context of mutual tolerance – our version of intersubjective communication. We should be suspicious of the newspaper article which begins ‘research shows that …’ when the ‘research’ quoted is a single sample survey. We have to tread a line between offering knowledge and truth on the one hand and recognising the uncertainty of our offerings on the other. This is not easy in an environment where policy makers want to know what the evidence is, or what the ‘solution’ is, for pressing problems and would like to be more assertive than we might feel comfortable with. The nuances of language to be deployed in our reporting of research become critical.

17. COMPETING MODELS-2: DECONSTRUCT INTO BUILDING BRICKS?

Models are representations of theories. I write this as a modeller – someone who works on mathematical and computer models of cities and regions but who is also seriously interested in the underlying theories I am trying to represent. My field, relative say to physics, is underdeveloped. This means that we have a number of competing models and it is interesting to explore the basis of this and how to respond. There may be implications for other fields – even physics!

A starting conjecture is that there are two classes of competing models: (i) those that represent different underlying theories (or hypotheses); and (ii) those that stem from the modellers choosing different ways of making approximations in seeking to represent very complex systems. The two categories overlap of course. I will conjecture at the outset that most of the differences lie in the second (perhaps with one notable exception). So let’s get the first out of the way. Economists want individuals to maximise utility and firms to maximise profits – simplifying somewhat of course. They can probably find something that public services can maximise – health outcomes, exam results – indeed a whole range of performance indicators. There is now a recognition that for all sorts of reasons, the agents do not behave perfectly and ways have been found to handle this. There is a whole host of (usually) micro-scale economic and social theory that is inadequately incorporated into models, in some cases because of the complexity issue – the niceties are approximated away; but in principle, that can be handled and should be. There is a broader principle lurking here: for most modelling purposes, the underlying theory can be seen as maximising or minimising something. So if we are uncomfortable with utility functions or economics more broadly, we can still try to represent behaviour in these terms – if only to have a base line from which behaviour deviates.

So what is the exception – another kind of dividing line which should perhaps have been a third category? At the pure end of a spectrum, ‘letting the data speak for themselves’. It is mathematics vs statistics; or econometrics vs mathematical economics. Statistical models look very different – at least at first sight – to mathematical models – and usually demand quite stringent conditions to be in place for their legitimate application. Perhaps, in the quantification of a field of study, statistical modelling comes first, followed by the mathematical? Of course there is a limit in which both ‘pictures’ can merge: many mathematical models, including the ones I work with, can be presented as maximum likelihood models. This is a thread that is not to be pursued further here, and I will focus on my own field of mathematical modelling.

There is perhaps a second high-level issue. It is sometimes argued that there are two kinds of mathematician: those who think in terms of algebra and those who think in terms of geometry. (I am in the algebra category, which I am sure biases my approach.) As with many of these dichotomies, it should be dissolved and both perspectives fully integrated. But this is easier said than done!

How do the ‘approximations’ come about? I once tried to estimate the number of variables I would like to have for a comprehensive model of a city of 1M people and at a relatively coarse grain, the answer was around 10¹³! This demonstrates the need for approximation. The first steps can be categorised in terms of scale: first, spatial – referenced by zones of location rather than continuous space – and how large should the zones be? Second, temporal: continuous time or discrete? Third, sectoral: how many characteristics of individuals or organisations should be identified and at how fine a grain? Experience suggests that the use of discrete zones – and indeed other discrete definitions – makes the mathematics much easier to handle. Economists often use continuous space in their models, for example, and this forces them into another kind of approximation: monocentricity, which is hopelessly unrealistic. Many different models are simply based on different decisions about, and representations of, scale.

The second set of differences turns on focus of interest. One way of approximating is to consider a subsystem such as transport and the journey to work, or retail and the flow of revenues into a store or a shopping centre. The danger here is that critical interdependencies are lost and this always has to be borne in mind. Consider the evaluation of new transport infrastructure for example. If this is based purely on a transport model, there is a danger that the cost-benefit analysis will be concentrated on time savings rather than the wider benefits. There is also a potentially higher-level view of focus. Lowry very perceptively once pointed out that models often focus on activities – and the distribution of activities across zones; or on the zones, in which case the focus would be on land use mix in a particular area. The trick, of course, is to capture both perspectives simultaneously – which is what Lowry achieved himself very elegantly but which has been achieved only rarely since.

A major bifurcation in model design turns on the time dimension and the related assumptions about dynamics. Models are much easier to handle if it is possible to make an assumption that the system being modelled is either in equilibrium or will return to a state of equilibrium quickly after a disturbance. There are many situations where the equilibrium assumption is pretty reasonable – for representing a cross-section in time or for short-run forecasting, for example, representing the way in which a transport system returns to equilibrium after a new network link or mode is introduced. But the big challenge is in the ‘slow dynamics’: modelling how cities evolve.

It is beyond the scope of this piece to review a wide range of examples. If there is a general lesson here it is that we should be tolerant of each others’ models, and we should be prepared to deconstruct them to facilitate comparison and perhaps to remove what appears to be competition but needn’t be. The deconstructed elements can then be seen as building bricks that can be assembled in a variety of ways. For example, ‘generalised cost’ in an entropy-maximising spatial interaction model can easily be interpreted as a utility function and therefore not in competition with economic models. Cellular automata models, and agent-based models are similarly based on different ‘pictures’ – different ways of making approximations. There are usually different strengths and weaknesses in the different alternatives. In many cases, with some effort, they can be integrated. From a mathematical point of view, deconstruction can offer new insights. We have, in effect, argued that model design involves making a series of decisions about scale, focus, theory, method and so on. What will emerge from this kind of thinking is that different kinds of representations – ‘pictures’ – have different sets of mathematical tools available for the model building. And some of these are easier to use than others, and so, when this is made explicit, might guide the decision process.

18. BIG DATA AND HIGH-SPEED ANALYTICS

My first experience of big data and high-speed analytics was at CERN and the Rutherford Lab over 50 years ago. I was, in the Rutherford Lab, part of a large distributed team working on a CERN bubble chamber experiment. There was a proton-proton collision every second or so which, for the charged particles, produced curved tracks in the chamber which were photographed from three different angles. The data from these tracks was recorded in something called the Hough-Powell device (after its inventors) in real time. This data was then turned into geometry; this geometry was then passed to my program. I was at the end of the chain and my job was to take the geometry, work out for this collision which of a number of possible events it actually was – the so-called kinematics analysis. This was done by chi-squared testing which seemed remarkably effective. The statistics of many events could then be computed, hopefully leading to the discovery of new (and anticipated) particles – in our case the Ω. In principle, the whole process for each event, through to identification, could be done in real time – though in practice, my part was done off-line. It was in the early days of big computers, in our case, the IBM 7094. I suspect now it will be all done in real time. Interestingly, in a diary I kept at the time, I recorded my immediate boss, John Burren, as remarking that ‘we could do this for the economy you know’!

So if we could do it then for quite a complicated problem, why don’t we do it now? Even well-known and well-developed models – transport and retail for example – typically take months to calibrate, usually from a data set that refers to a point in time. We are progressing to a position at which, for these models, we could have the data base continually updated from data flowing from sensors. (There is an intermediate processing point of course: to convert the sensor data to what is needed for model calibration.) This should be a feasible research challenge. What would have to be done? I guess the first step would be to establish data protocols so that by the time the real data reached the model – the analytics platform – it was in some standard form. The concept of a platform is critical here. This would enable the user to select the analytical toolkit needed for a particular application. This could incorporate a whole spectrum from maps and other visualisation to the most sophisticated models – static and dynamic.

There are two possible guiding principles for the development of this kind of system: what is needed for the advance of the science, and what is needed for urban planning and policy development. In either case, we would start from an analysis of ‘need’ and thus evaluate what is available from the big data shopping list for a particular set of purposes – probably quite a small subset. There is a lesson in this alone: to think what we need data for rather than taking the items on the shopping list and asking what we can use them for.

Where do we start? The data requirements of various analytics procedures are pretty well known. There will be additions – for example, incorporating new kinds of interaction from the Internet-of-Things world. This is developed further in the next chapter, on block chains.

So why don’t we do all this now? Essentially because the starting point – the first demo – is a big team job, and no funding council has been able to tackle something on this scale. There lies a major challenge. As I once titled a newspaper article: ‘A CERN for the social sciences’?

19. BLOCK CHAINS

I argued in an earlier piece that game-changing advances in urban analytics may well depend on technological change. One such possibility is the introduction of block chain software. A block is a set of accounts at a node. This is part of a network of many nodes, many of which – in some cases all – are connected. Transactions are recorded in the appropriate nodal accounts with varying degrees of openness. It is this openness that guarantees veracity and verifiability. This technology is considered to be a game changer in the financial world – with potential job losses because a block chain system excludes the ‘middlemen’ who normally record the transactions. Illustrations of the concept on the internet almost all rely on the bitcoin technology as the key example.

The ideas of ‘accounts at nodes’ and ‘transactions between nodes’ resonate very strongly with the core elements of urban analytics and models – the location of activities and spatial interaction. In the bitcoin case, the nodes are presumably account holders, but it is no stretch to imagine the nodes being associated with spatial addresses. There must also be a connection to the ‘big data’ agenda and the ‘internet of things’. Much of the newly available real-time data is transactional and one can imagine it being transmitted to blocks and used to update data bases on a continuous basis. This would have a very obvious role in applied retail modelling, for example.
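
As a toy illustration only (this is not real bitcoin machinery, and the zone names and amounts are invented), the core data structure is a chain of blocks, each bundling transactions between nodes, here given spatial addresses, with a hash link to its predecessor so the record is verifiable:

import hashlib, json

def make_block(transactions, previous_hash):
    """Bundle transactions with a hash link to the previous block."""
    payload = json.dumps({"tx": transactions, "prev": previous_hash}, sort_keys=True)
    return {"tx": transactions, "prev": previous_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

genesis = make_block([], "0")
block1 = make_block([{"from": "zone A", "to": "zone B", "amount": 25.0}], genesis["hash"])
print(block1["hash"][:16])  # any tampering with the transactions changes this hash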

This is a shorter than usual piece because, to develop the idea, I need to do a lot more work!! The core concepts are complicated and not easy to follow. I have watched two YouTube videos – an excellent one being from the Khan Academy. I recommend these, but what I would really like is for someone to take on the challenge of (a) really understanding block chains and (b) thinking through possible urban analytics applications!

20. ABSTRACT MODES

I have spent much of the last three years working on the Government Office for Science Foresight project on The future of cities. The focus was on a time horizon of fifty years into the future. It is clearly impossible to use urban models to forecast such long-term futures but it is possible in principle to explore systematically a variety of future scenarios. A key element of such scenarios is transport and we have to assume that what is on offer – in terms of modes of travel – will be very different to today – not least to meet sustainability criteria. The present dominance of car travel in many cities is likely to disappear. How, then, can we characterise possible future transport modes?

This takes me back to ideas that emerged in papers published 50 years ago (or in one case, almost that). In 1966 Dick Quandt and William Baumol, distinguished Princeton economists, published a paper in the Journal of Regional Science on ‘abstract transport modes’. Their argument was precisely that in the future, technological change would produce new modes: how could they be modelled? Their answer was to say that models should be calibrated not with modal parameters, but with parameters that related to the characteristics of modes. The calibrated results could then be used to model the take-up of new modes that had new characteristics. By coincidence, Kelvin Lancaster, Columbia University economist, published a paper, also in 1966, in The Journal of Political Economy on ‘A new approach to consumer theory’ in which utility functions were defined in terms of the characteristics of goods rather than the goods themselves. He elaborated this in 1971 in his book ‘Consumer demand: a new approach’. In 1967, my ‘entropy’ paper was published in the journal Transportation Research and a concept used in this was that of ‘generalised cost’. This assumed that the cost of travelling by a mode was not just a money cost, but the weighted sum of different elements of (dis)utility: different kinds of time, comfort and so on, as well as money costs. The weights could be estimated as part of model calibration. David Boyce and Huw Williams, in their magisterial history of transport modelling, ‘Forecasting urban travel’, wrote, quoting my 1967 paper, “impedance … may be measured as actual distance, as travel time, as cost, or more effectively as some weighted combination of such factors sometimes referred to as generalised cost … In later publications, ‘impedance’ fell out of use in favour of ‘generalised cost’”. (They kindly attributed the introduction of ‘generalised cost’ to me.)
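
In symbols (a sketch, with the notation chosen here purely for illustration), the generalised cost of travelling from $i$ to $j$ by mode $k$ is $c^{k}_{ij} = \sum_m \gamma_m x^{km}_{ij}$, where the $x^{km}_{ij}$ are the mode's characteristics (money cost, in-vehicle time, waiting time, comfort and so on) and the weights $\gamma_m$ are estimated in calibration. Because the cost is defined on characteristics rather than on named modes, it is already ‘abstract’ in the Quandt and Baumol sense.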

This all starts to come together. The Quandt and Baumol ‘abstract mode’ idea has always been in my mind and I was attracted to the Kelvin Lancaster argument for the same reasons – though that doesn’t seem to have taken off in a big way in economics. (I still have his 1971 book, purchased from Austicks in Leeds for £4-25.) I never quite connected ‘generalised cost’ to ‘abstract modes’. However, I certainly do now. When we have to look ahead to long-term future scenarios, it is potentially valuable to envisage new transport modes in generalised cost terms. By comparing one new mode with another, we can make an attempt – approximate, because we are transporting current calibrated weights fifty years forward – to estimate the take-up of modes by comparing generalised costs. I have not yet seen any systematic attempt to explore scenarios in this way and I think there is some straightforward work to be done – do-able in an undergraduate or master’s thesis!
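
A minimal sketch of such an exercise follows. Every number in it (the weights, the dispersion parameter and the two hypothetical future modes) is an invented placeholder; the point is only the shape of the calculation: describe each candidate mode by its characteristics, compute generalised costs with today's calibrated weights, and compare take-up with a logit-style share.

import math

WEIGHTS = {"money": 1.0, "time_mins": 0.15, "discomfort": 2.0}  # assumed 'current' calibrated weights
BETA = 0.08                                                     # assumed dispersion parameter

def generalised_cost(characteristics: dict) -> float:
    """Weighted sum of a mode's characteristics."""
    return sum(WEIGHTS[k] * v for k, v in characteristics.items())

modes = {  # hypothetical future modes described only by their characteristics
    "autonomous shared pod": {"money": 3.0, "time_mins": 25, "discomfort": 0.5},
    "demand-responsive transit": {"money": 2.0, "time_mins": 35, "discomfort": 1.0},
}
costs = {m: generalised_cost(c) for m, c in modes.items()}
denominator = sum(math.exp(-BETA * c) for c in costs.values())
shares = {m: math.exp(-BETA * c) / denominator for m, c in costs.items()}
print(costs)
print(shares)  # estimated take-up of each hypothetical mode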

We can also look at the broader questions of scenario development. Suppose for example, we want to explore the consequences of high density development around public transport hubs. These kinds of policies can be represented in our comprehensive models by constraints – and I argue that the idea of representing policies – or more broadly ‘knowledge’ – in constraints within models is another powerful tool. This also has its origins in a fifty year old paper – Jack Lowry’s ‘Model of metropolis’. In broad terms, this represents the fixing through plans of a model’s exogenous variables – but the idea of ‘constraints’ implies that there are circumstances where we might want to fix what we usually take as endogenous variables.
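
As an illustrative example rather than a definitive formulation: a policy of high-density development around public transport hubs might enter a comprehensive model as exogenously fixed lower bounds on housing in the hub zones, $H_i \ge \bar{H}_i$ for $i \in Z_{\mathrm{hub}}$, with the $\bar{H}_i$ set by the plan rather than predicted by the model; a variable that is usually endogenous is thereby (partly) fixed by the constraint.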

So we have the machinery for testing and evaluating long-term scenarios – albeit building on fifty year old ideas. It needs a combination of imagination – thinking what the future might look like – and analytical capabilities – ‘big modelling’. It’s all still to play for, but there are some interesting papers waiting to be written!!

21. BEST PRACTICE

Everything we do, or are responsible for, should aim at adopting ‘best practice’. This is easier said than done! We need knowledge, capability and capacity. Then maybe there are three categories through which we can seek best practice: (1) from ‘already in practice’ elsewhere; (2) could be in practice somewhere but isn’t: the research has been done but hasn’t been transferred; (3) problem identified, but research needed.

How do we acquire the knowledge? Through reading, networking, CPE courses and visits. Capability is about training, experience and acquiring skills. Capacity is about the availability of capability – access to it – for the services (let us say) that need it. Medicine provides an obvious example; local government another. How does each of 164 local authorities in England acquire best practice? Dissemination strategies are obviously important. We should also note that there may be central government responsibilities. We can expect markets to deliver skills, capabilities and capacities – through colleges, universities and, in a broad sense, industry itself (in its most refined way through ‘corporate universities’). But in many cases there will be a market failure and government intervention becomes essential. In a field such as medicine, which is heavily regulated, the Government takes much of the responsibility for ensuring the supply of capability and capacity. There are other fields where, in early-stage development, consultants provide the capacity until it becomes mainstream – GMAP in relation to retailing being an example from my own experience. (See the two ‘spin-out’ blogs.)

How does all this work for cities, and in particular for urban analytics? Good analytics provide a better base for decision making, planning and problem solving in city government. This needs a comprehensive information system which can be effectively interrogated. This can be topped with a high-level ‘dashboard’ with a hierarchy of rich underpinning levels. Warning lights might flash at the top to highlight problems lower down the hierarchy for further investigation. It also needs a simulation (modelling) capacity for exploring the consequences of alternative plans. Neither of these needs is typically met. In some specific areas, it is potentially, and sometimes actually, OK: in transport planning in government, or in network optimisation for retailers, for example. A small number of consultants can and do provide skills and capability. But in general, these needs are not met, often not even recognised. This seems to be a good example of a market failure. There is central government funding and action – through research councils and particularly, perhaps, Innovate UK. The ‘best practice’ material exists – so we are somewhere in between categories 1 and 2 of the introductory paragraph above. This tempts me to offer as a conjecture the obvious ‘solution’: what is needed are top-class demonstrators. If the benefits were evident, then dissemination mechanisms would follow!

22. LOWERING THE BAR

A few weeks ago, I attended a British Academy workshop on ‘Urban Futures’ – partly focused on research priorities and partly on research that would be useful for policy makers. The group consisted mainly of academics who were keen to discuss the most difficult research challenges. I found myself sitting next to Richard Sennett – a pleasure and a privilege in itself, someone I’d read and knew by repute but whom I had never met. When the discussion turned to research contributions to policy, Richard made a remark which resonated strongly with me and made the day very much worthwhile. He said: “If you want to have an impact on policy, you have to lower the bar!” We discussed this briefly at the end of the meeting, and I hope he won’t mind if I try to unpick it a little. It doesn’t tell the whole story of the challenge of engaging the academic community in policy, but it does offer some insights.

The most advanced research is likely to be incomplete and to have many associated uncertainties when translated into practice. This can offer insights, but the uncertainties are often uncomfortable for policy makers. If we lower the bar to something like ‘best practice’ – see the preceding chapter – this may involve writing and presentations which do not offer the highest levels of esteem in the academic community. What is on offer to policy makers has to be intelligible, convincing and useful. Being convincing means that what we are describing should be evidence-based. And, of course, when these criteria are met, there should be another kind of esteem associated with the ‘research for policy’ agenda. I guess this is what ‘impact’ is supposed to be about (though I think that is only half of the story, since impact that transforms a discipline may be more important in the long run).

‘Research for policy’ is, of course, ‘applied research’, which also brings up the esteem argument: if ‘applied’, then less ‘esteemful’, if I can make up a word. In my own experience, engagement with real challenges – whether commercial or public – adds seriously to basic research in two ways: first, it throws up new problems; and secondly, it provides access to data – for testing and further model development – that simply wouldn’t be available otherwise. Some of the new problems may be more challenging, and in a scientific sense more important, than the old ones.

So, back to the old problem: what can we do to enhance academic participation in policy development? First a warning: recall the policy-design-analysis argument much used in these blogs. Policy is about what we are trying to achieve, design is about inventing solutions; and analysis is about exploring the consequences of, and evaluating, alternative policies, solutions and plans – the point being that analysis alone, the stuff of academic life, will not of itself solve problems. Engagement, therefore, ideally means engagement across all three areas, not just analysis.

How can we then make ourselves more effective by lowering the bar? First, ensure that our ‘best practice’ (see the preceding chapter) is intelligible, convincing, useful and evidence-based. This means being confident about what we know and can offer. But then we also ought to be open about what we don’t know. In some cases we may be able to say that we can tackle, perhaps reasonably quickly, some of the important ‘not known’ questions through research; and that may need resource. Let me illustrate this with retail modelling. We can be pretty confident about estimating the revenues (or people) attracted to facilities when something changes – a new store, a new hospital or whatever. And then there is a category, in this case, of what we ‘half know’. We have an understanding of retail structural dynamics to a point where we can estimate the minimum size that a new development has to be for it to succeed. But we can’t yet do this with confidence. So a talk on retail dynamics to commercial directors may be ‘above the bar’.

I suppose another way of putting this argument is that, for policy engagement purposes, we should know where to set the height of the bar: confidence below it; uncertainty (possibly with some insights) above it. There is a whole set of essays to be written on this for different possible application areas.

23. THE FUTURE OF CITIES

For the last three years (almost), I have been chairing the Lead Expert group of the Government Office for Science Foresight Project on The Future of Cities. It has finally ‘reported’, not as conventionally with one large report and many recommendations, but with four reports and a mass of supporting papers. If you google ‘Foresight Future of Cities’ you will get very quickly to the web site and you will find all the material.

During the project, we have worked with fourteen Government Departments – ‘cities’ as a topic crosses government – and we have visited over 20 cities in the UK and have continued to work with a number of them.

Project reports

An overview of the evidence:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/520963/GS-16-6-future-of-cities-an-overview-of-the-evidence.pdf

Science of Cities:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516407/gs-16-6-future-cities-science-of-cities.pdf

Foresight for Cities:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516443/gs-16-5-future-cities-foresight-for-cities.pdf

Graduate Mobility:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/510421/gs-16-4-future-of-cities-graduate-mobility.pdf

24. LEICESTER CITY

Followers of English football will be aware that the top tier is the Premier League and that the clubs that finish in the top four at the end of the season play in the European Champions League the following year. These top four places are normally filled by four of a top half a dozen or so – let’s say Manchester United, Manchester City, Arsenal, Chelsea, Tottenham Hotspur and Liverpool. There are one or two others on the fringe. This group does not include Leicester City. At Christmas 2014, Leicester were bottom of the Premier League with relegation looking inevitable. They won seven of their last nine games in that season and survived. At the beginning of the current (2015-16) season, the bookmakers’ odds on them winning the Premier League were 5000-1 against. At the time of writing, they top the league by eight points with four matches to play. The small number of people who might have bet £10 or more on them last August are now sitting on a potential fortune.

How has this been achieved? They have a very strong defence and so concede little; they can score ‘on the break’, notably through Jamie Vardy, a centre forward who not long ago was playing for Fleetwood Town in the nether reaches of English football; they have an interesting and experienced manager, Claudio Ranieri; and they work as a team. It is certainly a phenomenon and the bulk of the football-following population would now like to see them win the League.

What are the academic equivalents? There are university league tables and it is not difficult to identify a top half dozen. There are tables for departments and subjects. There is a ranking of journals. I don’t think there is an official league table of research groups, but there are certainly some informal ones. As in football, it is very difficult to break into the top group from a long way below. Money follows success – as in the REF (the Research Excellence Framework) – and facilitates the transfer of the top players to the top group. So what is the ‘Leicester City’ strategy for an aspiring university, an aspiring department or research group, or a journal editor? The strong defence must be about having the basics in place – good REF ratings and so on. The goal-scoring break-out attacks are about ambition and risk taking. The ‘manager’ can inspire and aspire. And the team work: we are almost certainly not as good as we should be in academia, so food for thought there.

Then maybe all of the above requires at the core – and I’m sure Leicester City have these qualities – hard work, confidence, good plans while still being creative; and a preparedness to be different – not to follow the fashion. So when The Times Higher has its ever-expanding annual awards, maybe they should add a ‘Leicester City Award’ for the university that matches their achievement in our own leagues. Meanwhile, will Leicester win the League? Almost all football followers in the country are now on their side. We will see in a month’s time!

PART 3. TRICKS OF THE TRADE

25. SERENDIPITY-2: CAREER DEVELOPMENT

            I was new to Geography when I went to Leeds as a Professor in 1970. There was even a newspaper headline in one of the trade papers with words to the effect that “Leeds appoints Geography Professor with no qualifications in Geography!” So how did this come about? As I reflect, it makes me realise the extent to which my career – and this will be by no means unique – has been shaped by serendipity.

            I graduated in Mathematics and I wanted to work as a mathematician. I had a summer job as an undergraduate in the (then new) Rutherford Lab at Harwell and this led to a full-time post when I left Cambridge. I was, in civil service terms, a ‘Scientific Officer’ which I was very pleased about because I wanted to be a ‘scientist’. I even put that as my profession on my new passport. It was interesting. I had to write a very large computer programme for the analysis of bubble chamber events in experiments at CERN (which also gave me the opportunity to spend some time in Geneva). With later hindsight, it was a very good initial training in what was then front-line computer science. But within a couple of years, I began to tire of the highly competitive nature of elementary particle physics and I also wanted to work in a field where I could be more socially useful but still be a mathematician. So I started applying for jobs in the social sciences in universities: I still wanted to be a maths-based researcher. All the following steps in my career were serendipitous – pieces of good luck.

It didn’t start well. I must have applied for 30 or 40 jobs and had no response at all. To do something different, sometime in 1962, I decided to join the Labour Party. I lived in Summertown in North Oxford, a prosperous part of the city, and there were very few members there. Within months, I had taken on the role of ward secretary. We selected our candidates for the May 1963 local elections but around February, they left Oxford and it then turned out that the rule book said that the Chairman and Secretary of the Ward would be the candidates, so in May, I found myself the Labour candidate for Summertown. I duly came bottom of the poll. But I enjoyed it and in the next year, I managed to get myself selected for East Ward – which had not had a Labour Councillor since 1945 but seemed just winnable in the tide that was then running. I was elected by a majority of 4 after four recounts. That led me into another kind of substantive experience – three years on Oxford City Council.

And then a second piece of luck. I was introduced by an old school friend to a small group of economists in the Institute of Economics and Statistics in Oxford who had a research grant from the then Ministry of Transport for cost-benefit analysis. In those days – it seems strange now – social science was largely non-quantitative and they had a very quantitative problem – needing a computer model of transport flows in cities. We did a deal: that I would do all their maths and computing and they would teach me economics. So I changed fields by a kind of apprenticeship. It was a terrific time. I toured the United States – where all the urban modellers were – with Christopher Foster and Michael Beesley and we met people like Britton Harris (the Penn State Study) and I. S. ‘Jack’ Lowry (of ‘Model of Metropolis’ fame). I set about trying to build the model. The huge piece of luck was that I recognised that what the American engineers were doing in developing gravity models could be restated in a format that was more Boltzmann and statistical mechanics than Newton and gravity, and this generalised the methodology. The serendipity in this case was that I recognised some terms in the engineers’ equations from my statistical mechanics lectures as a student. This led to the so-called ‘entropy-maximising models’. I was suddenly invited to give lots of lectures and seminars and people forgot that I had this rather odd academic background.

It was a time of rapid job progression. I moved with Christopher Foster to the Ministry of Transport and set up something called the Mathematical Advisory Unit, which grew rapidly, with a model-building brief. (I had been given the title of Mathematical Adviser: it should have been ‘Economic Adviser’ but the civil service economists refused to accept me as such because I wasn’t a proper economist!).  This was 1966-68 and then serendipity struck again. I gave a talk on transport models in the Civil Engineering Department in University College London and in the audience was Professor Henry Chilver. I left the seminar and started to walk down Gower Street, when he caught up with me and told me that he had just been appointed as Director of a new research centre – the Centre for Environmental Studies – and “would I like to be the Assistant Director?” My talk had, in effect, been a job interview – not the kind of thing that HR departments would allow now! And so I moved to CES and built a new team of modellers and worked on extending what I had learned about transport models to the bigger task of building a comprehensive urban model – something I have worked on ever since.

This was from 1968. By the end of the 60s, quantitative social science was all the rage. Many jobs were created as universities sought to enter the field and I had three serious approaches: one in Geography at Leeds, one in Economics and one in Town Planning. I decided, wisely as it turned out, that Geography was a broad church and had a record in absorbing ‘outsiders’ and so I came to Leeds in October 1970 as Professor of Urban and Regional Geography. And so I became a geographer! Again, the experience was terrific. I enjoyed teaching. It led to long term friendships and collaborations with a generation that is still in Leeds or at least academia: Martin Clarke, Graham Clarke, John Stillwell, Phil Rees, Adrian MacDonald, Christine Leigh, Martyn Senior, Huw Williams and many others – a long list. Some friendships have been maintained over the years with students I met through tutorial groups. We had large research grants and could build modelling teams. Geography, in the wider sense, did prove very welcoming and it was all – or at least mostly! – very congenial. Sometime in the early 1970s, I found myself as head of department and I also started taking an interest in some university issues. But that decade was mainly about research and was very productive.

By the end of the decade, there had been the oil crisis, cuts were in the air – déjà vu! – and research funding became harder to get. The next big step had its origins in a race meeting on Boxing Day – was it 1983? – at a very cold Wetherby. I was with Martin Clarke in one of the bars and we were watching – on the bar’s TV – Wayward Lad, trained near Leeds by Michael Dickinson, win the King George at Kempton. Thoughts turned to our lack of research funding. It was at that moment, I think, that we thought we would investigate the possibility of commercial applications of our models. We first tried to ‘sell’ our ideas to various management consultants. We thought they could do the marketing for us. But no luck. So we had to go it alone. We constituted a two-person, very part-time workforce. Our first job was finding the average length of a garden path for the Post Office! Our second was predicting the usage of a projected dry ski slope. We did the programming, we collected the data. We wrote the reports. We were once ‘moved on’ by the management from outside Marks and Spencer in Leeds when we were standing outside the store with clipboards trying to collect origin and destination data from customers! But then it suddenly got better. We had substantial contracts with W H Smith and Toyota and we could start to employ people. It is a longer story that can’t be told here – Martin should tell that story – but that is what grew into GMAP, with Martin as the Managing Director, driving its growth. At its peak, GMAP was employing 120 people and had a range of blue-chip clients. That was a kind of real geography that I was proud to be associated with.

Simultaneously, in the 80s, I began to be involved in university management and I became ‘Chairman of the Board of Social and Economic Studies and Law’ – what in modern parlance would be a Dean. In 1989, I was invited to become Pro-Vice-Chancellor – at a time when there was only one. I left the Geography Department and, as it turned out, never returned. The then Vice-Chancellor became Chairman of the CVCP and had to spend a lot of time in London, so the PVC job was bigger than usual. In 1991, I found myself appointed as Vice-Chancellor and embarked on that role on 1 October in some trepidation. I was VC until 2004. It was challenging, exciting and demanding. It was, in Dickens’ phrase, ‘the best of times and the worst of times’: tremendously privileged, but also with a recurring list of very difficult, sometimes unpleasant, problems. But this is about geography: I somehow managed to keep my academic work going in snatches of time but the publication rate certainly fell.

I was Vice-Chancellor for almost 13 years and in 2004 I was scheduled to ‘retire’. This, however, seemed to be increasingly unattractive. Salvation came from an unlikely source: the Department for Education and Skills (DfES) in London. I was offered the job of Director-General for Higher Education and so I became a civil servant for almost three years with policy advising and management responsibilities for universities in England. Again, it was privileged and seriously interesting, working with Ministers, having a front row seat on the politics of the day. But I always knew I wanted to research again, so after a brief sojourn in Cambridge, I returned to academic life at University College London as Professor of Urban and Regional Systems. This was another terrific experience. I worked with Mike Batty, an old modelling friend, in the Centre for Advanced Spatial Analysis and a group of young researchers. What was very exciting was that my research field, which had developed into the realms of what became called complexity science, became a hot topic. So research grants flowed again. This included a five-year £2.5M grant from EPSRC for a project on global dynamics. This embraced migration, trade, security and development aid: big issues to which real geography can make a significant contribution. It funded half a dozen new research posts and five PhD studentships.

The final step (?) was my move to The Alan Turing Institute in the summer of 2016. The original plan was to develop a programme in urban modelling – partly on the basis that modelling was a crucial element of data science which was to some extent being neglected. However, this plan had to be put on hold as, in September, I was appointed as CEO of the Institute. This meant that I had to learn much more about data science and AI, and this delivered its own research benefits. I began to see new possibilities of urban models being embedded in ‘learning machines’.

My career trajectory has taken me from mathematics and elementary particle physics into geography and the social sciences via economics, to complexity science, and on into data science and AI. Add to this elements of operational research and ‘management’ both in a university and as a civil servant. The most crucial moves were not planned and I think justify the ‘serendipity’ label!! I’m not sure that this is to be wholly recommended, however. It has worked for me and that is all I can claim!

26. ADDING DEPTH

An appropriate ambition of the model-building component of urban science is the construction of the best possible comprehensive model which represents the interdependencies that make cities complex (and interesting) systems. To articulate this is to spell out a kind of research programme: how do we combine the best of what we know into such a general model? Most of the available ‘depth’ is in the application of particular submodels – notably transport and retail. If we seek to identify the ‘best’ – many subjective decisions here in a contested area – we define a large scale computing operation underpinned by a substantial information system that houses relevant data. Though a large task, this is feasible! How would we set about it?

The initial thinking through would be an iterative process. The first step would be to review all the submodels and in particular, their categorisation of their main variables – their system definitions. Almost certainly, these would not be consistent: each would have detail appropriate to that system. It would be necessary then – or would it? – to find a common set of definitions. It may be possible to work with different classifications for different submodels and then to integrate them in some way in connecting the submodels as part of a general model that, among other things, captures the main interdependencies. This is a research question in itself! It is at this point that it would be necessary to confront the question of exogenous and endogenous variables. We want to maximise the number of endogenous variables but to retain as exogenous those that can be determined externally, for example by a planning process.

There is then the question of scales and possible relations between scales. Suppose we can define our ‘city’, say as a city region, divided into zones, with an appropriate external zone system (including a ‘rest of the world’ zone to close the system). Then for the city in aggregate, we would normally have a demographic model and an economic model. These would provide controlling totals for the zonal models: the zonal populations would add up to those of the aggregate demographic model, for example. There is also the complicated question of whether we would have two or more zone systems – say one with larger zones, one with a finer scale. But for simplicity at this stage, assume one zone system. We can then begin to review the submodels.
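
Before turning to the submodels, the ‘controlling totals’ idea can be sketched in one line: if $P_i$ is the population of zone $i$ and $P$ the total from the aggregate demographic model, the zonal model is constrained so that $\sum_i P_i = P$, and similarly for zonal employment by sector against the aggregate economic model.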

The classic transport model has four submodels: trip generation, distribution, modal split and assignment. As this implies, the model includes a multi-modal network representation. Trips from origin to destination by mode (and purpose) are loaded onto the network. This enables congestion to be accounted for and properly represented in generalised costs (with travel time as an element) – a level of detail which is not usually captured in the usual running of spatial interaction models.

A fine-grain retail model functions with a detailed categorisation of consumers and of store attractiveness and can predict flows into stores with reasonable accuracy. This model can be applied in principle to any consumer-driven service – for example, flows into medical facilities, and especially general practice surgeries. The task is different if the flows are assigned by a central authority – to schools, for instance.
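
A minimal sketch of such a retail model, in its classic production-constrained spatial interaction form (all parameter values and numbers below are invented purely for illustration), predicts flows from residential zones into stores from spending power, store attractiveness and generalised cost:

import numpy as np

ALPHA, BETA = 1.2, 0.3                # assumed calibrated parameters
e_P = np.array([100.0, 150.0, 80.0])  # spending power e_i * P_i for three residential zones (invented)
W = np.array([50.0, 120.0])           # attractiveness W_j of two stores (invented)
c = np.array([[2.0, 5.0],             # generalised costs c_ij (invented)
              [4.0, 3.0],
              [6.0, 2.5]])

weights = (W ** ALPHA) * np.exp(-BETA * c)    # W_j^alpha * exp(-beta * c_ij)
A = 1.0 / weights.sum(axis=1, keepdims=True)  # balancing factors A_i so that each zone's outflows sum to e_i * P_i
S = A * e_P[:, None] * weights                # flows S_ij from zone i to store j
print(S)              # predicted flows
print(S.sum(axis=0))  # predicted revenue of each store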

The location of economic activity, and particularly employment, is more difficult. Totals by sector might be derived from an input-output model, but the numbers of firms are too small to use statistical averaging techniques. What ought to be possible with models is to estimate the relative desirability of different locations for different sectors and then use this information to interpret the marginal location decisions of firms. This fits with the argument to follow below about the full application of urban science being historical!

In all cases of the location of activities, it will be necessary in a model to incorporate constraints at the zonal scale, particularly in relation to land use. As these are applied, measures of ‘pressure’ – e.g. on housing at particular locations – can be calculated (and related to house prices). It is these measures of pressure that lie at the heart of dynamic modelling, and it is to this that we will turn shortly.

As this sketch indicates, it would be possible to construct a Lowry-like model which incorporated the best-practice level of detail from any of the submodels. Indeed, it is likely that within the hardy band of comprehensive modellers – Marcial Echenique, Michael Wegener, Roger Mackett, Mike Batty and David Simmonds for example – this will largely have been done, though my memory is that this is usually without a full transport model as a component. What has not been done, typically, is to make these models fully dynamic. Rather, in forecasting mode, they are run as a series of equilibrium positions, usually on the basis of changes that are exogenous to the model.

The next step – to build a Lowry-like model that is fully dynamic – has been attempted by Joel Dearden and myself and is reported in Chapter 4 of Explorations in urban and regional dynamics (which has recently been published by Routledge). However, it should be emphasised that this is a proof-of-concept exploration and does not contain the detail – e.g. on transport – that is being advocated above. It does tackle the difficult issue of moves: non-movers, job movers, house movers, and house-and-job movers – an important level of detail in a dynamic model but very difficult to handle in practice. It also attempts to handle health and education explicitly in addition to conventional retail. As a nonlinear model, it does embrace the possibility of path dependence and phase changes, and these are illustrated (a) by the changes in initial conditions that would be necessary to revive a High Street and (b) in terms of gentrification in housing.

What can we learn from this sketch? First, it is possible to add much more detail than is customary but this is difficult in practice. I would conjecture this is because to do this effectively demands a substantial team and corresponding resources and, unlike particle physics, these kinds of resources are not available to urban science! Secondly, and rather startlingly, it can be argued that the major advance of this kind of science will lie in urban history! This is because in principle, all the data is available – even that which we have to declare exogenous from a modelling perspective.  The exogenous variables can be fed into the model and the historians, geographers and economic historians can interpret their evolution. This would demand serious team work but would be the equivalent for urban science of the unravelling of DNA in biology or demonstrating the existence of the Higgs boson! Where are the resources – and the ambition – for this!!

27. TIME MANAGEMENT

When I was Chair of the AHRC, I occasionally attended small meetings of academics whom we were consulting about various issues – our version of focus groups. On one occasion, we were expecting comments – even complaints – about various AHRC procedures. What we actually heard were strong complaints about the participants’ universities, which ‘didn’t allow them enough time to do research’. This was a function, of course, of the range of demands in contemporary academic life, with at least four areas of work: teaching, research, administration and outreach – all figuring in promotion criteria. There is a classic time management problem lurking here and the question is: can we take some personal responsibility for finding the time to do research amidst this sea of demands?

There is a huge literature on time management and I have engaged with it over the years for my own sake as I have tried to juggle the variety of tasks at any one time. The best book I ever found was titled ‘A-time’, by an author whose name I have forgotten – jog my memory please – and which now seems to be out of print. My own copy is long lost. It was linked to a paper system which helped deliver its routines. The fact that it is now out of print is probably linked to the fact that I am talking about a pre-PC age. I used that system. I used Filofax. And it all helped. There was much sensible advice in the book. ‘Do not procrastinate’ was good. In the pre-e-mail days, correspondence came in the post and piled up in an in-tray, and it didn’t take long for it to form an impossible pile. ‘Do not procrastinate’ meant: deal with it more or less as it comes in. This is true now, of course, of e-mails. I think ‘A-time’ in the title of the book referred to two things: first, sort out your best and most effective time – morning, night, whatever; and secondly, divide tasks into A, B and C categories. Then focus your A-time on the A tasks.

So what does this mean for contemporary academic life? Teaching and administration are relatively straightforward and efficiency is the key. Although sometimes derided, PowerPoint – or an equivalent – is a key aid for teaching: once done – no pain, no gain – it can easily be updated (and can easily become the outline of a book!). Achieving clarity of expression for different audiences can be very satisfying and creative in its own right. Good writing, as a part of good exposition, is a good training for research writing. So teaching may be straightforward, but it is very important.

Research and outreach are harder. First, research. The choices are harder: what to research, what problem to work on, how to make a difference. How not to simply engage with the pressure to publish for your CV’s sake. Note the argument in Alvesson’s book The triumph of emptiness. [add] So what do we actually do in making research decisions? Here is a mini check list. Define your ‘problems’. Something ‘interesting and important’ – interesting at least to you and important to someone else. Be ambitious. Be aware of what others are doing and work out how you are going to be different, not simply fashionable. All easier said than done of course. And the ‘keeping up’ is potentially incredibly time consuming with the number of journals now current. Form a ‘journals reading club’? All of this is different if you are part of an existing team but you can still think as an individual if only for the sake of your own future.

And finally, outreach. ‘Interesting and important’ kicks in in a different way. Material from both teaching and research can be used. Consultancy becomes possible – though yet another time demand – cf. the ‘Spinning out’ chapter below.

Thinking things through on all four fronts should produce, first, a list of pretty routine tasks – administration, ‘keeping up’ and so on. The rest can be bundled into a number of projects. The two together start to form a work plan with short-run, medium-run and long-run elements. If you want to be very textbook about it, you can define your critical success factors – CSFs – but that may be going too far! So, we have a work plan, almost certainly too long and extensive. How do we find the time?

First, be aware of what consumes time: e-mails, meetings, preparing teaching, teaching, supervisions, administration – all of which demand diary management, because we have not yet added ‘research’ to the list. It is important that research is not simply a residual, so time has to be allocated. Within the research box, avoid too much repetition – giving more or less the same paper many times at many conferences, for instance. And on outreach, be selective. On all fronts, be prepared to use cracks in time to do something useful. In particular in relation to research, don’t wait for the ‘free’ day or the free week to do the writing, for example. If you have a well-planned outline for a paper, a draft can be written in a sequence of bits of time.

What do I do myself? Am I a paragon of virtue? Of course not, but I do keep a ‘running agenda’ – a list of tasks and projects with a heading at the top that says ‘Immediate’ and a following one that says ‘Priorities’. It ends with a list headed ‘On the backburner’. Quite often the whole thing is too long and needs to be pruned. When I was in Leeds, I used to circulate my running agenda to colleagues because a lot of it concerned joint work of one kind or another. At one point, there were 22 pages of it and, needless to say, I was seriously mocked about it. So, do it, manage it – and control it!!

28. AGAINST OBLIVION

I was at school in the 1950s – Queen Elizabeth Grammar School Darlington – with Ian Hamilton. He went on to Oxford and became a significant and distinguished poet, critic, writer and editor – notable, perhaps, for shunning academia and running his editorial affairs from the Pillars of Hercules in Greek Street in Soho. I can probably claim to be the first publisher of his poetry as Editor of the School Magazine – poems that, to my knowledge, have never been ‘properly’ published. We lost touch after school. He went on to national service and Oxford; I deferred national service and went to Cambridge. I think we only met once in later years – by coincidence on an underground station platform in the 1960s or 70s. However, I did follow his work over the years and I was looking at one of his books recently that gave me food for thought – Against oblivion, published posthumously in 2002. (He died at the end of 2001.) This book contained brief lives of 50 poets of the Twentieth Century – emulating a work by Dr Johnson on poets of the Seventeenth and Eighteenth Centuries. He also refers to two Twentieth Century anthologies for which the editors had made selections. The title of Ian’s book reflects the fact that a large proportion of the poets in these earlier selections had disappeared from view – and he checked this with friends and colleagues: into oblivion. He took this as a warning about what would happen to the reputations of those included in his selection a hundred years into the future – and, by implication, about the difficulty of making a selection at all. It is interesting to speculate about what survives – whether the oblivion is in some sense just or unjust. Were those that have disappeared from view simply ‘fashionable’ at the time – cf. ‘Following fashion’ – or is there a real loss?

This has made me think about ‘selection’ in my own field of urban modelling. I recently edited a five-volume ‘history’ of a kind – by selecting significant papers and book extracts which were then published in more or less chronological order. The first two volumes cover around the first 70 years and include 70 or so authors. Looking at the selection again, particularly for these early volumes, I’m reasonably happy with it, though I have no doubt that others would do it differently. Two interesting questions then arise: which of these authors would still be selected in fifty or a hundred years’ time? Who have we missed and who should be rescued from oblivion? The first question can’t be answered, only speculated about. It is possible to explore the second, however, by scanning the notes and references at the end of each of the published papers. Such a scan reveals quite a large army of researchers and early contributors. Some of them were doing the donkey work of calculation in the pre-computer age but many, as now, were doing the ‘normal science’ of their age. It is this normal science that ultimately gives fields their credibility – the constant testing and retesting of ideas, old and new. However, I’m pretty sure there are also nuggets, some of them gold, to be found by trawling these notes and references, and this is a kind of work which is not, on the whole, done. This might be called ‘trawling the past for new ideas’, or some such. It would be closely related to delving into, and writing about, the history of fields; in urban modelling this has only been done on a very partial and selective basis, mainly through review papers. (Though the thought occurs to me that a very rich source would be the obligatory literature reviews and associated references in PhD theses. I am not an enthusiast for these reviews as Chapter 1 of theses because they usually don’t make for an interesting read – but this argument suggests that they have tremendous potential value as appendices.) There is one masterly exception and that is the recently published book by Dave Boyce and Huw Williams – Forecasting urban travel – which, while very interesting in fulfilling its prime aim as a history of transport modelling, would also act as a resource for trawling the past to see what we have missed! This kind of history also involves selection but, when thoroughly accomplished as in this case, is much more wide-ranging.

Most of us spend most of our time doing normal science. We recognise the breakthroughs and time will tell whether they survive or are overtaken. Ian Hamilton’s introduction to Against oblivion provides some clues about how this process works – and that, at least, it is a process worth studying. For me, it suggests a new kind of research: trawling the past for half-worked out ideas that may have been too difficult at the time and could be resurrected and developed.

29. SLEDGEHAMMERS FOR WICKED PROBLEMS

There are many definitions of ‘wicked problems’ – first characterised by Rittel and Webber in the 1970s – try googling to explore. Essentially, however, they are problems that are well known and difficult, and that governments of all colours have attempted, largely without success, to solve. My own list, relating to cities in the UK, would be something like:

  • social
    • social disparities
    • welfare – unemployment, pensions,……
    • housing
  • services
    • health services – elements of post-code lottery, poor performance
    • education – a long tail of poor performance – for individuals and schools
    • prisons – and high levels of recidivism
  • economics
    • productivity outside the Greater South East
    • ‘poor’ towns – seaside towns for example
  • global, with local impacts – sustainability
    • responding to the globalisation of the economy
    • responding to climate change
    • food security
    • energy security
    • indeed security in general

There are lots of ways of constructing such a list and much more detail could be added. See for example the book edited by Charles Clarke titled The too difficult box.

Even at this broad level of presentation, the issues all connect, and this is one of the arguments, continually put, for joined-up government. It is almost certainly the case, for example, that the social list has to be tackled through the education system. Stating this, however, is insufficient. Children from deprived families arrive at school relatively ill-prepared – in terms of vocabulary, for example – and so start, it has been estimated, two years ‘behind’; in a conventional system, there is a good chance that they never catch up. There are extremes of this. Children who have been in care, for example, rarely progress to higher education; and it then turns out that quite a high percentage of the prison population have at some stage in their lives been in care. Something fundamental is wrong there.

We can conjecture that there is another chain of causal links associated with housing issues. Consider not the overall numbers issue – that as a country we build 100,000 houses a year when the ‘need’ is estimated at 200,000 or more – but the fact that there are areas of very poor housing, usually associated with deprived families. I would argue that this is not a housing problem but an income problem – not enough resource for the families to maintain the housing. It is an income problem because it is an employment problem. It is an employment problem because it is a skills problem. It is a skills problem because it is an education problem. Hence the root, as implied earlier, lies in education. So a first step in seeking to tackle the issues on the list is to identify the causal chain and to begin with the roots.

What, then, is the ‘sledgehammer’ argument? If the problem can be articulated and analysed, then it should be possible to see ‘what can be done about it’. The investigation of feasibility then kicks in of course: solutions are usually expensive. However, we don’t usually manage to do the cost-benefit analysis at a broad scale. If we could be more effective in providing education for children in care, and for the rehabilitation of prisoners, expensive schemes could be paid for by savings in the welfare and prison budgets: invest to save.

Let’s start with education. There are some successful schools in potentially deprived areas – so examples are available. (This may not pick up the child care issues but we return to that later.) There are many studies that say that the critical factor in education is the quality of the teachers – so enhancing the status of the teaching profession and building on schemes such as Teach First will be very important. Much is being done, but not quite enough. Above all, there must be a way of not accepting ‘failure’ in any individual case. With contemporary technology, tracking is surely feasible, though the follow-up might involve lots of one-to-one work and that is expensive. Finally, there is a legacy issue: those from earlier cohorts who have been failed by the system will be part of the current welfare and unemployment challenge, and so again some kind of tracking, some joining up of social services, employment services and education, should provide strong incentives to engage in life-long learning programmes – serious catch-up. The tracking and joining-up part of this programme should also deal with children in care as a special case, and a component of the legacy programme should come to grips with the prison education and rehabilitation agenda. There is then an important add-on: it may be necessary for the state to provide employment in some cases. Consider people released from prison as one case. They are potentially unattractive to employers (though some employers are creative in this respect) and so employment through, let’s say, a Remploy type of scheme – maybe as a licence condition of (early?) release – becomes a partial solution. This might help to take the UK back down the league table of prison population per capita. This could all in principle be done and paid for out of savings – though there may be an element of ‘no pain, no gain’ at the start. There are examples where it is being done: let’s see how they could be scaled up.

Similar analyses could be brought to bear on other issues. Housing is at the moment driven by builders’ and developers’ business models; and as with teacher supply, there is a capacity issue. As we have noted, in part it needs to be driven by education, employment and welfare reforms as a contribution to affordability challenges. And it needs to be driven by planners who can switch from a development control agenda to a place-making one.

The rest of the list, for now, is, very unfairly, left as an exercise for the reader!! In all cases, radical thinking is required, but realistic solutions are available!! We can offer a Michael Barber check list for tackling problems – from his book How to run a government so that citizens benefit and taxpayers don’t go crazy – very delivery-focused, as is his wont. For a problem:

  • What are you trying to do?
  • How are you going to do it?
  • How do you know you will be on track?
  • If not on track, what will you do?

All good advice!

30. BEWARE OF OPTIMISATION

The idea of ‘optimisation’ is basic to lots of things we do and to how we think. When driving from A to B, what is the optimum route? When we learn calculus for the first time, we quickly come to grips with the maximisation and minimisation of functions. This is professionalised within operational research. If you own a transport business, you have to plan a daily schedule of collections and deliveries. How do you allocate your fleet to minimise costs and hence maximise profits for the day? In this case, the mathematics and the associated computer programmes exist and are well known – they will solve the problem for you. You have the information and you can control what happens. But suppose now that you are an economist and you want to describe, theorise about, or model human behaviour. Suppose you want to investigate the economics of the journey to work. This is another kind of scheduling problem, except that in this case it involves a large number of individual decision makers. If we turn to the microeconomics textbooks, we find the answer: define a utility function for individuals, and each can then maximise. In this case we run into problems: does each individual have all the relevant information? Does the economist have it? For the individual, all options need to be available on possible journeys – perfect information. The impossibility of this led Herbert Simon to the powerful concept of ‘satisficing’ rather than ‘maximising’. This was brilliant and shifts the modelling task to a probabilistic one (as well as being a realistic description of human behaviour – isn’t it what we all do?). Of course, economists responded to this too and associated probability distributions with the utility functions. This is more difficult to build into the basic economics textbooks, however. (And for this problem, there is an associated issue: space. The economists’ toolkit is usually seen as having micro or macro dimensions but when space needs to be added, we might think of the resulting scale as being ‘meso’. More adaptation needed.)

So the lesson to heed at this stage is that while ‘optimisation’ is a powerful concept and tool, it should be used with care and should be ‘blurred’ via the introduction of probability distributions – which makes everything more messy – when appropriate. Hence, we should ‘beware’. A related field to explore in this respect is the very fashionable agent-based models (ABM). If individuals in an ABM are behaving according to ‘rules’, does the specification of these rules incorporate the necessary blurring? I know this can be done, and have done it, but is it always done? I suspect not.

There is a temptation to stick to simpler forms of optimisation because the associated mathematics and computer software are so attractive. This is particularly true of linear programming problems like the transport owner’s scheduling problem. A good example is the algorithm for the shortest path through a network, which Dijkstra discovered in the 1950s. A great thing to have. In the early days of transport modelling this provided the basis for assigning origin-destination flows to networks – essentially assuming that all travellers took the best route. Again, this proved too simple and eventually it became possible, from the early 1970s, though a shade more difficult, to calculate second-best and third-best routes, even the kth-best route – and then the trips could be allocated probabilistically. And then we have to recognise that much of the world is nonlinear. The mathematics of nonlinear optimisation is more tricky but huge progress has been made – and indeed one of the core methods dates back to Lagrange in the Eighteenth Century.
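
A hedged sketch of that probabilistic (‘blurred’) assignment (the route costs and the dispersion parameter below are invented placeholders): spread trips over the k best routes with a share that falls off with generalised cost, rather than loading everything onto the single shortest path.

import math

def route_shares(route_costs, theta=0.5):
    """Logit-style split of trips over alternative routes by generalised cost."""
    weights = [math.exp(-theta * c) for c in route_costs]
    total = sum(weights)
    return [w / total for w in weights]

print(route_shares([10.0, 11.5, 14.0]))  # shares for the best, second-best and third-best routes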

I have been lucky with my own work in all these respects because entropy-maximising is a branch of nonlinear optimisation which actually offers optimum blurring. It is possible to take a traditional economic model and turn it into an optimally blurred and hence more realistic model by these means. Indeed, there is one remarkable mathematical result: the nonlinear version of an equivalent of the scheduling problem transforms into the linear problem as one of the parameters tends to infinity. A sketch of the standard form is given below.
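
For readers who want the algebra, the doubly constrained entropy-maximising spatial interaction model can be written, in the notation usually used for it, as

\[
T_{ij} = A_i B_j O_i D_j e^{-\beta c_{ij}}, \qquad
A_i = \Big(\sum_j B_j D_j e^{-\beta c_{ij}}\Big)^{-1}, \qquad
B_j = \Big(\sum_i A_i O_i e^{-\beta c_{ij}}\Big)^{-1},
\]

where $T_{ij}$ is the flow from origin $i$ to destination $j$, $O_i$ and $D_j$ are the known origin and destination totals, $c_{ij}$ is the travel cost and $\beta$ is the parameter to be calibrated. As $\beta \to \infty$, the flows concentrate on the cheapest feasible assignments and the solution approaches that of the linear transportation problem; for finite $\beta$, the flows are ‘optimally blurred’.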

Conclusion: different kinds of optimisation methods should be in the toolkit, but we should be wary about how well their assumptions match reality!

31. SPINNING OUT

I estimate that once every two years for the last 20 or 30 years, there has been a report of an inquiry into the transfer of university research into the economy – for commercial or public benefits. The fact that the sequence continues demonstrates that this remains a challenge. One mechanism is the spinning out of companies from universities and this piece is in two parts – the first describing my own experience and the second seeking to draw some broader conclusions. Either part might offer some clues for budding entrepreneurs.

This is the story of GMAP Ltd. The idea was born, half-formed, at Wetherby Racecourse on Boxing Day 1984, and it was six years before the company was spun out. Of course, there is a back story. Through the 1970s, I worked on a variety of aspects of urban modelling supported by large research grants. All of this work was basic science but there was a clear route into application, particularly into town planning. By the early 80s, the research grants dried up – a combination of the work becoming unfashionable and, perhaps, it being ‘someone else’s turn’. I worked with a friend and colleague, Martin Clarke, and he and I always went to Wetherby races on Boxing Day. We discussed the falling away of research grants. As we watched – on TV in a bar – Wayward Lad win the King George at Kempton, we somehow decided that the commercial world would provide an alternative source of funding. We had models at our disposal – notably the retail model. Surely there was a substantial market for this expertise!

Our first thought was that the companies with the resources to implement this idea were the big management consultants. In 1985, we began a tour. The idea was that they would work with our models on some kind of licence basis, Martin and I could be consultants, and they would find the clients. We were well received, usually given a good lunch; and then nothing happened. It became clear that DIY was the only way to make progress. We approached some companies we knew and thought were possible targets, but mostly our marketing was cold calling based on a weekly read of the Sunday Times job advertisements to identify companies seeking to fill marketing posts. Over two years, we had a number of small contracts, run through ULIS (University of Leeds Industrial Services). We learned our first lesson in this period: to get contracts, we had to do what the companies actually wanted rather than what we thought they should have. We thought we could offer the Post Office a means of optimising their network; what they actually wanted was to know the average length of a garden path! That was our first contract, and that’s what we did. Another company (slightly later) said that all these models were very interesting, but their data was in 14 different information systems – could we sort that out? We did. The modelling came later.

Our turnover in Year 1 was around £20k and it slowly grew to around £100k. The big breakthroughs came in 1986 and 1987 when we won contracts with W H Smith and with Toyota. By then we had our first employee and GMAP became a formal division of ULIS. It wasn’t yet spun out, but we could run it like a small company with shadow accounts that looked like real accounts. There was then steady growth and in late 1989 we won a major contract with the Ford Motor Company. By 1990, our turnover had reached £1M and we had a staff of around 20; we had crossed a threshold and were allowed to spin out as GMAP Ltd. By this time, I was heavily involved in University management and so could only function as a non-executive director. The company’s development turned critically on Martin becoming the full-time Managing Director.

The 90s were years of rapid growth. We retained clients such as W H Smith, Toyota and Ford but added BP, SmithKline Beecham, the Halifax Building Society and many more. We were optimising retail, bank and dealership networks across the UK and, in the case of Ford, all over Europe. By 1997, our turnover was almost £6M and we were employing 110 staff. And then came a kind of ending. In 1997, the automotive part of GMAP was sold to R L Polk, an American company, and in 2001 the rest went to the Skipton Building Society, to merge into a group of marketing companies that they were building.

What can we learn from this? It was very hard work, especially in the early days. DIY meant just that: Martin and I wrote the computer programs, wrote and copied the reports, and collected the data. We once stood outside Marks and Spencer in Leeds with clipboards, asking people where they had travelled from – so that we could get the data to calibrate a model. We were moved on by M and S staff for being a nuisance! We also had to be very professional. A project could not be treated like a research project: if there was a three-month deadline, it had to be met. We had to learn how to function in the commercial world very quickly. But it was exciting as well. We grew continuously. We didn’t need any initial capital – we funded ourselves out of contracts. We were always profitable. In one sense it was real research: we had incredible access to data from companies that would have been unavailable if we hadn’t been working for them. And many of them rather liked being referred to in papers published in academic journals.

Could it be done again? In this field, possibly, though this kind of analysis has become more routine and has been internalised by many – notably the big supermarket companies. However, there are many companies that could use this technology to (literally) profitable effect, but don’t. And there are huge opportunities in the public sector – notably education and health. The companies we worked with, especially those with whom we had long-term relationships, recognised the value of what they were getting: it impacted on their bottom line. We did relatively little work in the public sector – not for want of trying – but it was difficult to convince senior management of the value. However, it could certainly be done again on the back of new opportunities. Much is said about the potential value of ‘big data’ or the ‘internet of things’, for example, and many small companies are now in the business of seeking out new opportunities. But is anyone linking serious modelling with these fields? Now, there’s an opportunity!!

32. MISSING DATA

All the talk of ‘big data’ sometimes carries the implication that we must now surely have all the data that we need. However, frequently, crucial data is ‘missing’. This can then be seen as inhibiting research: ‘can’t work on that because there’s no data’! For important research, I want to make the case that missing data can often be estimated with reasonable results. This then links to the ‘statistics vs mathematical models’ issue: purist statisticians really do need the data – or at least a good sample; if there is a good model, then there is a better chance of getting good estimates of missing data.

As I mentioned in an earlier chapter – ‘Serendipity’ – I started my life in elementary particle physics at the Rutherford Lab. I was working in part at CERN on a bubble chamber experiment at the synchrotron. A high-energy proton collided with another proton and the results of the collision left tracks in the chamber which were curved by a magnetic field, thereby offering a measurement of momentum. My job was to identify the particles generated in the collision. The ‘missing data’ was that of the neutral particles, which left no tracks. The solution came from a mix of the model and statistics. The model offered a range of hypotheses of possible events and chi-squared testing identified the most probable – actually with remarkable ease, though I confess that there was an element of mystery in this for me. But it made me realise, with some hindsight, that missing data could be recovered.
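
Stripped of all the physics, the logic is just hypothesis selection by goodness of fit. A toy Python sketch – the ‘hypotheses’, ‘measurements’ and uncertainties below are entirely invented – might look like this:

import numpy as np

# Invented measured quantities for one event (e.g. components of missing momentum).
measured = np.array([1.9, 0.4, -0.1])
errors = np.array([0.2, 0.2, 0.2])        # assumed measurement uncertainties

# Each hypothesis predicts what the measurements should be if a given set of
# neutral particles (leaving no tracks) had been produced.
hypotheses = {
    "no neutral particle": np.array([0.0, 0.0, 0.0]),
    "one neutral pion":    np.array([2.0, 0.5, 0.0]),
    "two neutral pions":   np.array([3.1, 1.0, 0.2]),
}

def chi_squared(predicted, measured, errors):
    """Sum of squared, error-weighted residuals between prediction and measurement."""
    return float(np.sum(((measured - predicted) / errors) ** 2))

scores = {name: chi_squared(pred, measured, errors) for name, pred in hypotheses.items()}
best = min(scores, key=scores.get)
print(scores, "->", best)   # the lowest chi-squared is the most plausible hypothesis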

Science needs data – cf. Nullius in verba – but it also needs hypotheses and theories to test – cf. Evolvere theoriae et intellectum. In the case of my current field – the analysis of the states and dynamics of cities and regions – there is an enormous need for data. I once estimated that I needed 10¹³ variables as the basis for a half-decent comprehensive urban model, and in many ways this takes us beyond big data – though real-time sensor data will generate these kinds of numbers very quickly. The question is: is it the data we need? We can set out a comprehensive theory – and an associated model – of cities and regions. We have core data from decennial censuses together with a large volume of administrative data and much survey data. Real-time data – e.g. positional data from mobile phones – can be used to estimate person flows, taking over from expensive (and infrequent) surveys. In practice, of course, much of the data available to us is sample data and we can use statistics – either directly or to calibrate models – to complete the set.

My own early work in urban modelling was a kind of inversion of the missing data problem: entropy maximising generated a model which provided the best fit to what is known – in effect a model for the missing data. It turns out, not surprisingly, to have a close relationship to Bayesian methods of adding knowledge to refine ‘beliefs’. In theory, this only works with large ‘populations’, but there have been hints that it can work quite well with small numbers. This only gets us so far. The collection (or identification) of data to help us build dynamic models is more difficult. Even more difficult is connecting these models, which rely on averaging over large populations, with micro ‘data’ – maybe qualitative – on individual behaviour. There are research challenges to be met here.
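
As a minimal, invented illustration of ‘filling in’ missing detail from what is known: suppose only the row totals (trips leaving each origin) and the column totals (trips arriving at each destination) of a flow matrix are observed. Biproportional fitting – closely related to the entropy-maximising solution when no cost information is used – estimates the unobserved interior cells. The numbers are made up.

import numpy as np

origin_totals = np.array([100.0, 200.0])              # observed trips leaving each origin
destination_totals = np.array([120.0, 80.0, 100.0])   # observed trips arriving at each destination

# Start from a 'know-nothing' seed matrix and scale alternately to match both sets of totals.
T = np.ones((len(origin_totals), len(destination_totals)))
for _ in range(100):
    T *= (origin_totals / T.sum(axis=1))[:, None]        # match row (origin) totals
    T *= (destination_totals / T.sum(axis=0))[None, :]   # match column (destination) totals

print(np.round(T, 1))                 # an estimate of the unobserved origin-destination flows
print(T.sum(axis=1), T.sum(axis=0))   # check: the observed totals are reproduced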

There are other kinds of challenge: what to do when critical elements of a simpler nature are missing. An example is the need for data on ‘import’ and ‘export’ flows across urban boundaries for building input-output models at the city (or region) scale. We need these models so that we can work out the urban equivalent of the well-understood ‘balance of payments’ in the national accounts. How can we estimate something which is not measured at all, even on a sample basis? I recently started to ponder whether we could look at the sectors of an urban economy and make the bold assumption that the import and export propensities were identical to the national ones. This immediately throws up another problem: we have to distinguish between intra-national flows – that is, between cities – and international flows. It became apparent pretty quickly that we needed the model framework of interacting input-output models for the UK urban system before we could progress to making these admittedly very bold estimates of the missing data. We have done this for 200+ countries in a global dynamics research project and the task now is to translate it to the urban scale, but for the country as a whole. What starts as a ‘missing data’ problem turns out to be quite a tricky theoretical one. The crude starting assumption is sketched below.
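
A deliberately crude Python sketch of that starting assumption – national import and export propensities applied directly to a city’s sectoral output – is given below. The sector names, outputs and propensities are all invented, and, as argued above, the interacting input-output framework would be needed to do this properly.

# Hypothetical sectoral output for one city (in £M) and national trade propensities
# (imports and exports as a share of sectoral output).
city_output = {"manufacturing": 500.0, "retail": 300.0, "services": 900.0}
national_import_propensity = {"manufacturing": 0.40, "retail": 0.25, "services": 0.10}
national_export_propensity = {"manufacturing": 0.35, "retail": 0.05, "services": 0.15}

def estimate_city_trade(output, import_prop, export_prop):
    """First-pass estimate of a city's 'missing' import and export flows by sector."""
    imports = {s: output[s] * import_prop[s] for s in output}
    exports = {s: output[s] * export_prop[s] for s in output}
    balance = sum(exports.values()) - sum(imports.values())
    return imports, exports, balance

imports, exports, balance = estimate_city_trade(
    city_output, national_import_propensity, national_export_propensity)
print(imports)
print(exports)
print("crude 'balance of payments' estimate:", balance)   # positive = surplus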

Perhaps the best way to summarise the ‘missing data’ challenges is to refer back to the ‘Requisite knowledge’ argument of an earlier chapter: what is the ‘requisite data set’ needed for an effective piece of research – e.g. to calibrate a model? Then, if the model is good, the model outputs look like ‘data’ for other purposes. More generally: do not be put off from doing something important by ‘missing data’. There are ways and means, albeit sometimes difficult ones!

33. LEARNING FROM HISTORY

In the autumn of 1964 I was recruited by Christopher Foster (now Sir Christopher) to a post that was the start of my urban modelling career: working on the cost-benefit analysis of major transport projects. My job was to do the computing and mathematics and at the same time to learn some economics. Of course, the project needed good transport models and at the time all the experience was in the United States. Christopher had worked with Michael Beesley (LSE) on the pioneering cost-benefit analysis of the Victoria Line. To move forward on modelling, in 1965 Christopher, Michael and I embarked on a tour of the US. As I remember, in about ten days we visited Santa Monica and Berkeley, Philadelphia, Boston and Washington DC. Having set the context, this finally gets me to the point of this piece: we met a good proportion of the founding fathers – they were all men – of urban modelling. A number of them influenced my thinking in ways that have been part of my intellectual make-up ever since. These threads can easily be traced in my work over the years. An interesting question then: for those recruited in subsequent decades, what are the equivalents? It would be an interesting way of writing a history of the field.

Jack (I. S.) Lowry was working for the RAND Corporation in Santa Monica where he developed the model of Pittsburgh that now bears his name. I recall an excellent dinner in his house overlooking the Bay. His model has become iconic because it revealed the bare bones of a comprehensive model in the simplest way possible. Those of us involved in building comprehensive models have been elaborating it ever since. The conversation for me reinforced something that was already becoming clear: the transport model needed to be embedded in a more comprehensive model so that the transport impact on land use – and vice-versa – could be incorporated.

The second key proponent of the comprehensive model was Britton Harris, a Professor of City Planning at the University of Pennsylvania and, particularly important in the context of that visit, the Director of the Penn-Jersey Land-Use Transportation Study. The title indicated its ambitions. This again reinforced the ‘comprehensive’ argument and became the basis of a life-long friendship and collaboration. I spent many happy hours in Wissahickon Avenue. The Penn-Jersey study used a variety of modelling techniques, not least mathematical programming, which was a new element of my intellectual tool kit. More of Brit later. At Penn – was it on that trip or later? – I met Walter Isard, a giant figure in the creation of regional science who contributed to my roots in regional input-output modelling. Walter was probably the first person to recognise that von Thünen’s theory of rent could be applied to cities – see his 1956 book ‘Location and Space-Economy’. Bill Alonso, one of his graduate students, fully developed the theory of bid rent. We visited Bill in Berkeley and I recall a letter from him three years later, in the heady days of 1968, starting with ‘As I write, military helicopters hover overhead ….’! Then back to Penn. For me it was Ben Stevens who operationalised the Alonso model, in his 1960 paper with John Herbert, as a mathematical programming model. This fed directly into work I did in the 1970s with Martyn Senior to produce an entropy-maximising version of it – making me realise that one of the unheralded advantages of that method was that it made optimising economic models – like the Alonso-Herbert-Stevens model – ‘optimally blurred’, to recognise sub-optimal real life.

At Harvard, we met John Kain, very much the economist, very concerned with housing models – territory I have failed to follow up on since. A new objective! He was at the Harvard-MIT Joint Centre for Urban Studies, whose existence was a sign that these kinds of interdisciplinary centres were fashionable at the time – and they have been in and out of fashion ever since – now fortunately fashionable again! An alumnus was Martin Meyerson, who by this time was Chancellor of the University of California at Berkeley (and we dined with him in his rather austere but grand official dining room – why does one remember these things rather than the conversation?!). There also was Daniel (Pat) Moynihan, who had just left the Centre to work for the President in Washington – another sign of the importance of the urban agenda. I was urged to meet him and that led to my only ever visit to the White House – to a small office in the basement. Of course, he later became very grand as a long-serving Senator for New York.

The Washington part of our visit established some other important contacts and building bricks. We engaged directly with the transport modelling industry through Alan Voorhees – already running quite a large company that still bears his name. It was valuable to see the ideas of transport modelling put to work, and I think that reinforced my commitment to modelling as a contribution to achieving things – the use of the science. We met Walter Hansen, then working for Voorhees, who was probably the inventor of the concept of ‘accessibility’ in modelling through his paper ‘How accessibility shapes land use’, and T. R. (‘Laksh’) Lakshmanan of the ‘Lakshmanan and Hansen’ retail modelling paper – other critical and ever-present parts of the tool kit. From a different part of the agenda, there was Clopper Almon, who was working for the Government (as I remember) on regional input-output models.

Much of what I learned on that trip has remained as part of my intellectual tool kit. Much of it led to long-standing exchanges – particularly through regional science conferences. Some led to close working collaboration. Brit and Walter between them, ten years later, recruited me to a position of Adjunct Professor in Regional Science at Penn where I spent a few weeks every summer in the late 70s. I worked closely with Brit and those visits must have been the basis for my work with him on urban dynamics that was published in 1978 – and is still a feature of my ongoing work plan. I could chart a whole set of contacts and collaborations for subsequent decades. Maybe the starting points are always influential for any of us but I was very lucky in one particular respect: it was the start of the modern period of urban modelling and there was everything to play for.

34. VENTURING INTO OTHER DISCIPLINES

Urban and regional science – a discipline or a subdiscipline, or is it still called interdisciplinary? – has been good at welcoming people from other disciplines, notably, in recent times, physicists. Can we venture outside our box? It would be rather good if we could make some good contributions to physics!! However, given that the problems we handle are in some sense generic – spatial interaction and so on – we can look for similar problems in other disciplines and see if we can offer a contribution. I can report a number of my own experiences which give some clues on how these excursions can come about and may be food for thought for something new. I ventured into demography with Phil Rees many years ago, and I tried to improve the Leontief-Strout inter-regional input-output model around the same time – the latter only implemented once, by Geoff Hewings and colleagues. Some of this has re-emerged in the current Global Dynamics project, so it is still alive. But both of these are broadly within regional science. More interesting examples are in ecology, archaeology, history and security. All relate to spatial interaction and competition-for-resources modelling through some combination of (i) adding space, (ii) applying the models to new elements, or (iii) thinking of new kinds of flow for new problems. In some cases, our own core models have to be combined with those from the other field. But let’s be specific.

The oldest exercise dates back to the mid-1980s but, after a gap in time, it has also proved one of the most fruitful. Around 1985, Tracey Rihll, then a research student in ancient history at Leeds, came to see me in Geography and said that someone had told her that I had a model that would help her with her data. The data were points representing the locations of known settlements in Greece around 800 BC. What we did was make some colossal assumptions about interactions – say trade and migration – between settlements and, using Euclidean distance, run the data through a dynamic retail model to estimate – at equilibrium – settlement sizes. Out popped Athens, Thebes, Corinth etc. – somehow teased out from the topology of the points. One site was predicted as large that hadn’t been thought to be, and if we had had the courage of our convictions, we would have urged archaeologists to go there! We published three papers on this work. Nothing then happened for quite a long time until it was picked up by some American archaeologists and then by Andy Bevan in UCL Archaeology. Somehow, the penny dropped with Andy that the ‘Wilson’ of ‘Rihll and Wilson’ was now at UCL and we began to work together – first reproducing the old results and then extending them to Crete. These methods were then separately picked up by Mark Altaweel in Archaeology and Karen Radner in History, and we started working on data from the Kurdistan part of Iraq. In this case, the archaeologists were really prepared to dig at what the model predicted were the largest sites. Sadly, this has now been overtaken by events in that part of the world. This work has led to three more published papers. There is then one other archaeology project, but this links with ‘security’ below.

The excursion into ecology came about in a different way. The dynamic retail model is based on equations that are very similar to the Lotka-Volterra equations in ecology, and so I decided to investigate whether there was the possibility of knowledge transfer between the two fields. What was most striking was that virtually all the applications in ecology were aspatial, notwithstanding the movement of animals and seeds. So I was able to articulate what a spatial L-V system might look like in ecology. I’m afraid I didn’t have the courage to try to publish it in an ecology journal, but Environment and Planning A published the paper and I fear it has fallen rather flat. But I still believe that it’s important! A sketch of the parallel is given below.
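
To make the family resemblance visible, here is a minimal sketch in one commonly used form – the notation is illustrative rather than a definitive statement of the published models. The dynamic retail model can be written as

\[
\frac{dW_j}{dt} = \epsilon \left( D_j - K W_j \right) W_j,
\qquad
D_j = \sum_i O_i \frac{W_j^{\alpha} e^{-\beta c_{ij}}}{\sum_k W_k^{\alpha} e^{-\beta c_{ik}}},
\]

where $W_j$ is the size (‘attractiveness’) of centre $j$, $D_j$ the revenue it attracts, $O_i$ the spending power at origin $i$, $c_{ij}$ the travel cost and $K$ a cost per unit of size. Compare the Lotka-Volterra competition equations,

\[
\frac{dx_m}{dt} = x_m \left( r_m - \sum_n a_{mn} x_n \right),
\]

in which each population $x_m$ grows at a rate reduced by competition with the others. In both cases growth depends on the balance between resources captured and the costs of, or competition for, capturing them; a ‘spatial’ L-V system is obtained when the competition coefficients $a_{mn}$ are made functions of distance.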

There was a different take on history with the work I did with Joel Dearden on Chicago. This came about because (a) I had been invited to give a paper at a seminar in Leeds marking Phil Rees’s retirement and (b) I had been reading William Cronon’s book, Nature’s Metropolis, about the growth of Chicago. Cronon’s book charted in detail the growth of the railway system in the 19th Century, and so Joel and I designed a model – not unlike the Greek one, but in this case with an emphasis on the changing accessibility provided by the growth of the railways over a century. We had US Census data from 1790 onwards against which we could do some kind of checking, and we generated a plausible dynamics.

The fourth area was ‘security’, stimulated by its being one of the four elements of our EPSRC Global Dynamics project. This turns on the interesting idea of interpreting spatial interaction as ‘threat’ – which can attenuate with distance. From a theoretical point of view, the argument was analogous to the ecological one. Lewis Fry Richardson, essentially a meteorologist, developed an interest in war in the 1930s and built models of arms races using what were essentially L-V models – but again without any spatial structure. We have been able to add space, which makes the model much more versatile, and we have applied it in a variety of situations – one possible form of the spatial term is sketched below. We even reconnected with archaeology and history by seeking to model, in terms of threat, the summer ‘tour’ of the Emperor of Assyria, with his army, in the Middle Bronze Age, taking over smaller states and reinforcing existing components of the Empire.
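
Purely as an illustration of what ‘adding space’ means here – the notation is mine, not a statement of the published model – Richardson’s arms-race equations can be given a spatial form by letting the threat felt by state $i$ from state $j$ attenuate with the distance between them:

\[
\frac{dx_i}{dt} = \sum_{j \ne i} k_{ij} x_j e^{-\beta d_{ij}} - \alpha_i x_i + g_i,
\]

where $x_i$ is the arms stock (or, more generally, the level of preparedness) of state $i$, $d_{ij}$ the distance between $i$ and $j$, $k_{ij}$ a reaction coefficient, $\alpha_i$ a fatigue (cost) term and $g_i$ a ‘grievance’ term. With $\beta = 0$ this collapses back to the classical aspatial Richardson model.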

All of these ventures have been modest, two of them funded by small UCL grants. Papers have been accepted and published in relation to all of them. However, attempts to obtain funding from research councils have all failed. Are we ahead of our time, or is it more likely that the community of available referees can’t handle this kind of interdisciplinarity, particularly if algebra and calculus are involved!?!

35. ON WRITING

Research has to be ‘written up’. To some, writing comes easily – though I suspect this is on the basis of learning through experience. To many, especially research students at the time of thesis writing, it seems like a mountain to be climbed. There are difficulties of getting started, there are difficulties of keeping going! An overheard conversation in the Centre where I work was reported to me by a third party: “Why don’t you try Alan’s 500 words a day routine?” The advice I had been giving to one student – not a party to this conversation – was obviously being passed around. So let’s try that as a starting point. 500 words doesn’t feel mountainous. If you write 500 words a day, 5 days a week, 4 weeks a month, 10 months a year, you will write 100,000 words: a thesis, or a long book, or a shorter book and four papers. It is the routine of writing that achieves this so the next question is: how to achieve this routine? This arithmetic, of course, refers to the finished product and this needs preparation. In particular, it needs a good and detailed outline. If this can be achieved, it also avoids the argument that ‘I can only write if I have a whole day or a whole week’: the 500 words can be written in an hour or two first thing in the morning, it can be sketched on a train journey. In other words, in bits of time rather than the large chunks that are never available in practice.

The next questions, beyond establishing a routine, are: what to write and how to write? On the first, content is key: you must have something interesting to say; on the second, what is most important is clarity of expression, which is really clarity of thought. How you do it is a matter of your own voice and that, combined with clarity, will produce your own style. I can offer one tip on how to achieve clarity of expression: become a journal editor. I was very lucky that early in my career I became first Assistant Editor of Transportation Research (later TR B) and then Editor of Environment and Planning (later EP A). As an editor you often find yourself thinking: ‘There is a really good idea here but the writing is awful – it doesn’t come through’. This can send you back to the author with suggestions for rewriting, though in extreme cases, if the paper is important, you do the rewriting yourself. This process made me realise that my own writing was far from the first rank, and I began to edit it as though I were a journal editor. I improved. So the moral can perhaps be stated more broadly: read your own writing through an editor’s eyes – the editor asking ‘What is this person trying to say?’.

The content, in my experience, accumulates over time and there are aids to this. First, always carry a notebook! Second, always have a scratch pad next to you as you write, to jot down additional ideas that have to be squeezed in. The ‘how to’ is then a matter of having a good structure. What are the important headings? There may be a need for a cultural shift here. Writing at school is about writing essays and it is often the case that a basic principle is laid down: ‘no headings’. I guess this is meant to support good writing, so that the structure of the essay and the meaning can be conveyed without headings. I think this is nonsense – though if, say, a magazine demands it, you can delete the headings before submission! This is a battle I am always prepared to fight. In the days when I had tutorial groups, I always encouraged the use of headings. One group refused point blank to do this, on the basis of their school principle. I did some homework and the following week I brought in a book of George Orwell’s essays, many of which had headings. I argued that if George Orwell could do it, so could everybody, and I more or less won.

The headings are the basis of the outline of what is to be written. I would now go further and argue that clarity, especially in academic writing, demands subheadings and sub-subheadings – a hierarchy in fact. This is now reinforced by the common use of PowerPoint for presentations. It is a form of structured writing, and PowerPoint bullets, with their sequences of indents, are hierarchical – so we are all now more likely to be brought up with this way of thinking. Indeed, I once had a sequence of around 200 PowerPoint slides for a lecture course. I produced a short book by using this as my outline: I converted the slides to Word and then turned the now bullet-less text into prose.

I am a big fan of numbered and hierarchical outlines: 1, 1.1, 1.1.1, 1.1.2, 1.2, ….. 2, 2.1, 2.1.1, 2.1.2, etc. This is an incredibly powerful tool. At the top level there are, say, six main headings, then maybe six subheadings under each, and so on. The structure will change as the writing evolves – a main heading disappears and another one appears. This is so powerful that I became curious about who invented it and resorted to Google. There is no clear answer, and indeed it says something about the contemporary age that most of the references offer advice on how to use this system in Microsoft Word! However, I suspect the origins lie in Dewey’s library classification system – still in use – in effect a classification of knowledge. Google ‘Dewey Decimal Classification’ to find its Nineteenth Century history.

There are refinements to be offered on the ‘What to ….’ and ‘How to ….’ questions. What genre: an academic paper, a book – a textbook? – a paper intended to influence policy, written for politicians or civil servants? In part, this can be formulated as ‘be clear about your audience’. One academic audience can be assumed to be familiar with your technical language; another may be one that you are trying to draw into an interdisciplinary project and might need more explanation. A policy audience probably has no interest in the technicalities but would like to be assured that they are receiving real ‘evidence’.

What next? Start writing, experiment; above all, always have something on the go – a chapter, a paper or a blog piece. Jot down new outlines in that notebook. As Mr Selfridge said, ‘There’s no fun like work!’ Think of writing as fun. It can be very rewarding – when it’s finished!!


[1] These pieces are taken from quaestio.co.uk

[2] Scale questions define disciplines and subdisciplines: quantum to cosmology; ethnography and psychology to social policy.

[3] AGW, Urban modelling, 5 vols

[4] I S Lowry (1964) A model of metropolis

[5] This was a very crude form of economic base model which can be elaborated as an input-output model.

[6] The science of cities and regions

[7] Interface 2008

[8] L-V references

[9] 2008 op cit