Book in draft

BID221220

Being interdisciplinary

Systems approaches to social, economic and geographic research

Alan Wilson

The Alan Turing Institute

December 2020

Contents

Preface

Chapter 1. Interdisciplinary research

1.1. Some key concepts

1.2. Real challenges

1.3. The organisation of the book

Chapter 2. Being interdisciplinary

2.1. Disciplines and beyond

2.2. Interdisciplinarity

2.3. Requisite knowledge

2.4. Combinatorial evolution

2.5. Concluding comments

Chapter 3. How to start

3.1. First steps

3.2. ‘Research on’ vs ‘research for’

3.3. Concluding comments

Chapter 4. Analysis

4.1. Introduction: models, then data

4.2. The power of modelling: an example – understanding and planning cities

4.3. Competing models: truth is what we agree about

4.4. Abstract modes

4.5. Equations with names: the importance of Lotka and Volterra (and Tolstoy?)

4.6. Data science: a new discipline to change the world?

4.7. What is data science? What is AI?

4.8. Mix and match: the five pillars of data science and AI

4.9. A data-driven future?

4.10. Big data and high-speed analytics

4.11. Block chains

Chapter 5. Problem solving

5.1. Meeting challenges

5.2. Research into policy: lowering the bar

5.3. An example: the future of cities

Chapter 6. Doing interdisciplinary research

6.1. Introduction

6.2. A research autobiography

6.3. Learning from History

6.4. Spinning out

6.5. Venturing into other disciplines

6.6. Following fashion

6.7. Against oblivion: mining the past

Chapter 7. Tricks of the trade

7.1. Introduction

7.2. The brain as a model

7.3. DNA

7.4. Territories and flows

7.5. Adding depth

7.6. Beware of ‘optimisation’

7.7. Missing data

Chapter 8. Managing research, managing ourselves

8.1. Introduction

8.2. Do you really need an MBA?

8.3. Collaboration

8.4. Pure vs applied

8.5. Leicester City: a good defence and breakaway goals

8.6. What would Warren Weaver say now?

8.7. Best practice

8.8. Time management

8.9. On writing

Chapter 9. Organising research

9.1. Introduction

9.2. OR in the Age of AI

9.3. Research and innovation in an ecosystem

Annex. Research challenges

Bibliography

Preface

My aim in this book is to shed light on the question: ‘how to do research?’. Whatever insights I have to offer have emerged from decades of research and the accumulation of a toolkit – and so this is a personal account. There are perhaps two key threads: first, that (almost?) all research is essentially interdisciplinary; and secondly, that a systems perspective is nearly always a good starting point and indeed forces us to look beyond disciplines. There is a more subtle thread: how to identify the game-changers of the past as a basis for learning to think outside the box of convention – how to do something new, how to be ambitious. In a nutshell, how to be creative.

I originally thought the title of the book could be ‘How to do research’ and then, without much thought, I realised this was both hubristic and pretentious, and so I demoted it to ‘Tricks of the trade’ – and this remained the prospective title through courses and talks over the last few years. Recently, Alison Powell (LSE) introduced me to Howard Becker’s book of the same title and so I had to think again. This was combined with a further realisation that I would be wise to narrow the scope to something that at least fitted my experience – my ‘evidence’ base. Combining this thought with the primary theme produced ‘Being interdisciplinary’ as the title (with some connotations of research autobiography), with ‘Systems approaches to social, economic and geographic research’ as the subtitle, picking up the second theme and a broad but still limited scope.

In basing the argument on my own research experience, I should like to acknowledge at the outset my debt to a very large number of collaborators, most of whose names appear in the bibliography at the end of the book, and to a range of employers who between them have offered me the freedom to make the necessary judgements about research choices. I offer my thanks to colleagues in both categories.

Alan Wilson

Norfolk 2020

Chapter 1. Interdisciplinary research

1.1. Some key concepts

When I was studying physics, I was introduced to the idea of a ‘system of interest’: defining at the outset the ‘thing’ you were immediately interested in studying. This is crucial for interdisciplinarity. There are three ideas to be developed from this starting point. First, it is important simply to list all the components of the system; secondly, to simplify as far as possible by excluding extraneous elements from this list; and thirdly, taking us beyond what the physicists were thinking at the time, to recognise the idea of a ‘system’ as made up of related elements, so that it is as important to understand these relationships as it is to enumerate the components. All three ideas are important. Identifying the components will also lead us into counting them; simplification is the application of Occam’s Razor to the problem; and the relationships take us into the realms of interdependence and complexity. Around the time I was learning about the physicists’ ideas of a ‘system’, something more general, ‘systems theory’, was in the air, though I didn’t become aware of it until much later[1].

So, always start a piece of research with a ‘system of interest’. Defining the system then raises other questions which have to be decided at the outset – albeit open to later modification: questions of scale[2]. There are three dimensions to this. First, what is the granularity at which you view your system components? Population by age: how many age groups? Or age as a continuous variable? Usually too difficult. Second, how do you treat space? Continuous with Cartesian coordinates? Or as discrete zones? If the latter, what size and shape? Third, since we will always be interested in system evolution and change, how do we treat time? Continuous? Or as a series of discrete steps? If the latter, one minute, one hour, one year, ten years, or what? These choices have to be made in a way that is appropriate to the problem and, often, in relation to the data which will be needed. (Data collectors have already made these scale decisions.)

Once the system is defined, we have to ask a question like: how does it work? Our understanding, or ‘explanation’, is represented by a theory. There may be an existing theory which may be partially or fully worked out; or there may be very little theory. Part of the research problem is then to develop the theory, possibly stated as hypotheses to be tested.

Then there is usually a third step relating to questions like: how do we represent our theory? How do we do this in such a way that it can be tested? What methods are available for doing this[3]?

In summary, a starting point is to define a system of interest, to articulate a theory about how it works, and to find methods that enable us to represent, explore and test the theory. We can call this the STM approach.

This, through the reference to theory (or hypothesis) formulation and testing, establishes the science base of research. Suppose now that we want to apply our science to real-world problems or challenges. We need to take a further step which in part is an extension of the science; it is still problem-solving in relation to a particular system of interest but has added dimensions beyond what is usually described as ‘blue-skies’ science. The additions are: articulating objectives and inventing possible solutions. Take a simple example: how to reduce car-generated congestion in a city. The science offers us a mathematical-computer model of transport flows. Our objective is to reduce congestion. The possible solutions range from building new roads in particular places to improving a public transport system to divert people from cars. Each possible solution can be thought of as a plan and the whole activity is a form of planning. In some cases, computer algorithms can invent plans but more usually it is a human activity. For any plan, the new flows can be calculated using the model along with indicators of, for example, improved traffic speeds and consumers’ surplus. A cost-benefit analysis can be carried out and the plan chosen that has the greatest rate-of-return or the greatest benefit-to-cost ratio. In reality, it is never as neat as this, of course[4].
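
To make the last step concrete, here is a minimal sketch in Python of how model outputs for alternative plans might be reduced to a benefit-to-cost ratio and ranked. The plan names, figures, discount rate and appraisal period are entirely invented assumptions for illustration, not outputs of any particular model or the appraisal convention of any agency:

    # Each candidate plan has model-derived annual benefits (e.g. time savings
    # valued in money terms, gains in consumers' surplus) and a capital cost.
    # All figures are purely illustrative.
    plans = {
        "new orbital road":    {"annual_benefit": 12.0, "capital_cost": 150.0},
        "bus rapid transit":   {"annual_benefit": 9.0,  "capital_cost": 80.0},
        "congestion charging": {"annual_benefit": 7.5,  "capital_cost": 30.0},
    }

    def benefit_cost_ratio(plan, years=30, discount_rate=0.035):
        """Present value of a constant annual benefit stream divided by capital cost."""
        pv_benefits = sum(plan["annual_benefit"] / (1 + discount_rate) ** t
                          for t in range(1, years + 1))
        return pv_benefits / plan["capital_cost"]

    # Rank the plans by benefit-to-cost ratio, highest first.
    for name, plan in sorted(plans.items(), key=lambda kv: -benefit_cost_ratio(kv[1])):
        print(f"{name:22s}  BCR = {benefit_cost_ratio(plan):.2f}")

In practice, as the text notes, the choice is never this neat: the indicators, the valuation of benefits and the treatment of distributional effects are all contestable.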

How can we summarise this process? I learned from my friend and collaborator, Britton Harris, many years ago, that this can be thought of as policy, design and analysis: a PDA framework to complement the STM approach. His insight was that each of the three elements involved different kinds of thinking – and that it was rare to find these in one person, in one room, or applied systematically to real problems. There is a further insight to be gained: the PDA framework can be applied to a problem in ‘pure’ science. The objective, the policy, may be simpler – to articulate a theory – but it is still important to recognise the ‘design’ element – the invention at the heart of the scientific process. And this applies very directly to engineering of course: engineers have both a science problem and a policy and planning problem – both STM and PDA apply.

As we noted briefly, adopting a systems perspective at the outset of a piece of research forces interdisciplinarity. This can be coupled with an idea which will be developed more fully later: requisite knowledge. This is simply the knowledge that is required as the basis for a piece of research. When this question is asked about the system of interest, it will almost always demand elements from more than one discipline; and these elements combine into something new – more than the sum of the parts. There is a fundamental lesson here about effective research: it has to be interdisciplinary at the outset.

This provides a framework for approaching a subject, but we still have to choose! These decisions can be informed but are ultimately subjective. A starting point is that they should be interesting to the researcher but also important to some wider community. The topic should be ambitious but also feasible – a very difficult balance to strike.

1.2. Real challenges

Cities provide a system of interest which is complex and interesting both as science, and as science applied to real challenges. Urban research draws on social, economic and geographic disciplines and so provides a good example for our explorations. One starting point is to reflect on the well-known and very real challenges that cities face. The most obvious ones can be classified as ‘wicked problems’ in that they have been known for a long time, usually decades, and governments of all colours have made honourable attempts to ‘solve’ them. Given the limited resources of most academic researchers it could be seen as wishful thinking to take these issues on to a research agenda, but nothing ventured, nothing gained. We need to bear in mind the PDA framework – policy, design, analysis. Good analysis will demonstrate the nature of the challenges; policy is usually to make progress with meeting them; the hard part is the ‘design’ – inventing possible solutions. The analysis comes into play again to evaluate the options. If we focus on the UK context, we can offer a sample of the issues (though most translate easily across national boundaries). Broad headings, reflecting the definition of our system of interest, might be:

  1. living in cities – people issues
  2. the economy of cities – organisations providing goods and services
  3. urban metabolism: energy and materials flows
  4. urban form
  5. infrastructure
  6. governance

The challenges have all been exacerbated by the COVID pandemic, to which we return after the pre-pandemic considerations – which are still with us! Consider a sample of issues within each topic.

Given our subtitle, and its roots in social, economic and geographic research, a focus on people at the outset is a good starting point, and to illustrate: how do people live in cities? The challenges are well known and can be summarised. Housing: there is a current shortage and this situation will be exacerbated by population growth. Education: a critical service, upskilling for future-proofing, and yet a significant percentage leave school with inadequate literacy, numeracy and work skills. Health: a postcode lottery in the delivery of services. The future of work: what will happen if the much-predicted ‘hollowing out’ occurs as middle-range jobs are automated? How will the redundant pay their bills?

A complementary perspective focuses on the economy of cities, embracing private and public sectors and the delivery of products, services and jobs (and therefore incomes). The metabolism of the city is characterised by energy and materials flows, with the associated major challenges of climate change: sustainability, and the feasibility – indeed the necessity – of achieving low-carbon targets.

How we live, and how the economy can function, connects strongly to urban form, which raises issues such as the appropriateness of low densities in housing development and the locations of the new housing that will be necessary to meet the demands of a growing population. A physical embodiment of the city’s form is its infrastructure. We need to relate our objectives not simply to ‘more public transport’, for example, but to conceptualise and measure the ‘accessibilities’ that are crucial for both people and organisations, working towards transport infrastructure that underpins an effective system. Investment in utilities will be necessary not only to match population growth but also to respond to the sustainability agenda. In particular, counting communications and broadband as utilities, how do we secure our future in a competitive world?

To begin to meet these challenges, we need effective governance that incorporates community engagement. At what levels are planning and policy decisions best made? How do we make food, communications and utilities secure?

This brief analysis leads us to an immediate and important conclusion: these issues are highly interdependent, and one important area of research is to chart these interdependencies and to build policies and plans that take them into account. One obvious research priority is the need for comprehensive urban models – the underpinning science of cities. There are some excellent examples, but they are not deployed as a core part of planning practice. Ideally, therefore, in relation to the issues sketched above, a comprehensive urban model should have enough detail to represent all the problems, and any planning proposals should be tested by runs of such a model. Neither of these ambitions is fulfilled in practice and so this offers a challenge to modellers as well as to those directly concerned with real-world issues. Any specification of these issues will be interdisciplinary.

Bearing this in mind, let us now work down to another level and pose questions about research priorities.

Living in cities – people issues. We noted earlier the housing shortage, exacerbated by population growth. The demography is itself very much worth exploring: populations in many places are, relatively, ageing; and others are being restructured by migration – both in and out. These shifts in many cases relate to work opportunities and there has been relatively little research on these linkages. We need an account – that is, a model – of where people choose to live in relation to their incomes, housing availability, affordability and prices, local environments and accessibilities to work and services – a pretty tall order for the initial analysis. There is then a planning issue which links closely to urban form: where should the new housing go? At present it is mainly on the edges of cities, towns and villages with no obvious functional relationship to other aspects of people’s lives; there is related research to be done on developers and house builders and their business models.

Education is a critical service – upskilling for future-proofing while dealing in the short term with a significant percentage, as we noted, leaving school with inadequate literacy, numeracy and work skills. Some progress has been made in developing federations of schools to bring ‘failing’ schools into a more successful fold; however, there is hard analysis to be done on other factors – particularly the impact of the social background of children and whether schools’ initiatives need to be extended into a wider community. Again, there are examples of improvement, but it should be possible to explore the relative successes of initiatives in a wide range of areas. A particular category of concern is looked-after children – children in care. The system is obviously failing, as measured by the tiny percentage who progress into higher education and by the high percentage of offenders who have at some time been in care.

Health is another critical service, unevenly delivered across the country. In terms of research possibilities, this is a sector that is data-rich but under-analysed – perhaps in part because of the difficulties researchers have in accessing the data. There are many research projects in the field but it remains relatively fragmented. Does anyone explore an obvious question, for example: what is the optimum size of GP surgeries in different kinds of locations?

The ‘economy’ embraces both private and public sectors and so has to deliver products, services and jobs (and therefore incomes). Interesting research has been done on, for example, growing and declining sectors in the economy, which can then be translated down to the city level and combined with the ‘replicator’-‘reinventor’ concepts introduced in the Centre for Cities’ Century of Cities paper[5]. This would enable at least short-term predictions of employment change which could also be related to the migration issues. The major challenge for the economy is the ability to deliver employment in the context of the much-predicted ‘hollowing out’ as middle-range jobs are automated. How will the redundant pay their bills?

We have noted what may be the biggest challenges of all: the re-working of the urban metabolism – energy and materials flows – to achieve sustainability and low-carbon futures, and the feasibility of achieving low-carbon targets. An obvious research issue here is the monitoring and analysis of trends, past and present, in relation to sustainability targets. It is likely that current trends are in the ‘wrong’ direction: trips are getting longer and densities are decreasing. If this is the case, can we invent and test alternative futures? Shorter trips, and all by new forms of public transport? Some high-density development aimed at groups who might appreciate it?

In the case of urban form, achievement of sustainability targets will make huge demands for fundamental change. Where will the necessary new housing go? Is it possible to explore ‘green belt’ futures – for example by analysing the URBED Wolfson Prize model[6]? More importantly, how can higher densities and sustainable forms be achieved – given the lock-in of present structures and market demand?

Infrastructure: accessibilities are crucial for both people and organisations, so transport infrastructure and an effective system are correspondingly critical – at scales from the neighbourhood to the city, the region, the nation and the globe. Rural areas offer a particular research challenge: I have heard it suggested that counties could be seen as distributed cities[7]. There is an argument for some systematic research on accessibilities, a concept introduced earlier, and the ways in which they can be related to utility functions. Investment in utilities will be necessary not only to match population growth but also to respond to the sustainability agenda. In particular, counting communications and broadband as utilities, how do we secure our future in a competitive world?

Governance: at what levels are planning and policy decisions best made? National, regional, city or neighbourhood – or is a mixture needed? A good research project would be to chart subsidiarity principles as a way of addressing this question.

This is a very partial and briefly-argued list but I think it exposes the paucity of both particular and integrated research on some of the big challenges. Interdisciplinarity is crucial here.

1.3. The organisation of the book

In this chapter, we have outlined the basic reasons for establishing interdisciplinary foundations for research – offering the STM and PDA frameworks as a checklist. In Chapter 2, as a preliminary, we explore the nature of interdisciplinarity, particularly since most of us were educated through disciplines; this provides a basis for tackling the question ‘How to start?’ in Chapter 3. We pick up the ‘real challenges’ thread from Chapter 1 by comparing ‘research on’ and ‘research for’ approaches. We are now in business! Most research is founded in analysis of particular systems of interest and in Chapter 4 we explore a variety of approaches before returning to ‘real challenges’ and ‘problem solving’ in Chapter 5. The argument is illustrated in Chapters 6 and 7 from decades of evolving personal experience. The final chapters, 8 and 9, focus on the organisation of research – first at a personal or project level, secondly at a national system level.

Chapter 2. Being interdisciplinary

2.1. Disciplines and beyond

Disciplines are social coalitions and have considerable power. Expertise in a discipline usually involves high levels of skill and deep learning. Conferences, journals and probably the majority of learned societies are organised by discipline. There are powerful coalitions of subdisciplines, with further subdivisions into factions – Tony Becher’s ‘academic tribes’[8]. Here, I follow the argument first put in Knowledge Power[9] that it is possible to define disciplines in a systematic way as a means towards understanding them, and then to consider how they fit into an interdisciplinary framework. It can be argued that there are (in broad terms) three kinds of discipline: those that are abstract and enabling; those defined in terms of the big systems; and those rooted in the professions.

As examples of the first, the enablers, consider philosophy, mathematics and computer science. Philosophy could be said to be about how to think clearly and it certainly crosses disciplines; mathematics has a life of its own but is particularly valuable in providing the underpinnings of many disciplines; while computer science has become the enabling discipline par excellence. As we will see in later chapters, there may now be an argument for adding ‘AI and data science’ as a further enabler.

The ‘big systems’ disciplines at a high level can be defined in terms of the physical, the biological and the social – including the humanities in the ‘social’. There are then specialisms within each – physics and chemistry, for example, within the physical sciences – and there is much differentiation by scale, from the micro to the cosmic in physics for example. The beginnings of one kind of interdisciplinarity can be seen in interactions across the big system boundaries and the evolution of new (sub?)disciplines or coalitions such as biochemistry and biophysics.

The professional disciplines, medicine being a striking example, are already interdisciplinary, with the addition of skills for professional practice. Consider, for example, law, engineering and planning in this context. These disciplines can usually be identified by the concept of practitioners being ‘chartered’ – licensed to practise.

There is an organisational challenge which holds back interdisciplinary research: universities are mostly organised in departments of traditional disciplines. They have responded to the needs of interdisciplinarity by setting up large numbers of centres and institutes, often driven by the research councils’ laudable interdisciplinary strategies. However, departments typically retain the teaching of undergraduate students and therefore the bulk of the core funding.[10] Academics are often discipline-bound by the promotion criteria and procedures in their universities – reinforced, for example, by the importance of publications in ‘top journals’ rooted in disciplines.

In the next section, we show how systems thinking and the idea of requisite knowledge drive – indeed, force – interdisciplinarity. This is reinforced by Brian Arthur’s idea of combinatorial evolution, which shows how ideas cross disciplines and lead to the formation of new coalitions.

2.2. Interdisciplinarity

Systems thinking, as introduced in Chapter 1, drives us to interdisciplinarity: we need to know everything about the system of interest in our research and that means anything and everything that any relevant discipline can contribute. For almost any social science system of interest, there will be available knowledge at least from economics, geography, history, sociology, politics and psychology, plus enabling disciplines such as mathematics, statistics, computer science and philosophy; and many more[11]. This perspective points up the paucity of disciplinary approaches. We should recognise that the professional disciplines such as medicine already have a systems focus and so in one obvious sense are interdisciplinary. But in the medicine case, the demand for in-depth knowledge has generated a host of specialisms which again produce silos and a different kind of interdisciplinary challenge. There is also the special case of medical diagnosis which is interdisciplinary par excellence!

Some systems’ foci are strong enough to generate new, if minor, disciplines. Transport Studies is an example, though perhaps dominated by engineers and economists[12]? There is a combinatorial problem here: in terms of research challenges, there are very many ways of defining systems of interest and, mostly, they are not going to turn into new disciplines.

How do we build the speed and flexibility of response to take on new challenges effectively? A starting point might be the recognition of ‘systems science’ as an enabling discipline in its own right that should be taught in schools, colleges and universities along with, say, mathematics! This could help to develop a capability to recognise and work with transferable concepts – super concepts – and generic problems (for which ‘solutions’, or at least beginnings, exist). In my Knowledge Power book, I identified 100 super concepts. A sample of these follows.

  • systems (scales, hierarchies, …)
  • accounts (and conservation laws)
  • probabilities
  • equilibrium (entropy, constraints, …)
  • optimisation
  • non-linearity, dynamics (multiple equilibria, phase transitions, path dependence)
  • Lotka-Volterra-Richardson dynamics

Many of these are developed further in subsequent chapters. Each of them has a set of generic problems associated with it – though this was not an argument that was fully developed in the book! The systems argument is pursued throughout this book: for any system of interest that comes to mind, it allows us to assemble a tailored toolkit, building on super concepts and associated generic problems, integrated with deep domain knowledge. This is the heart of the interdisciplinary challenge.

Systems entities can nearly always be accounted for and, literally, counted. In a time period, they will define a system state at the beginning and a system state at the end; and entities can enter or leave the system during the period. This applies, for example, to populations, goods, money and transport flows. In each case, an account can be set out in the form of a matrix and this is usually a good starting point for model building. This is direct in demographic modelling, in input-output modelling in economics, and in transport modelling.
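
As a minimal illustration of an account set out as a matrix (a sketch, with invented numbers): rows are states at the start of the period, columns are states at the end, and an ‘external’ row and column catch entries and exits (births, deaths, migration), so that row and column sums recover the opening and closing populations.

    import numpy as np

    # Illustrative population account for one period, persons in thousands.
    # Entry [i, j] = persons in state i at the start of the period who are
    # in state j at the end; the 'external' row/column holds entries/exits.
    states = ["0-15", "16-64", "65+", "external"]
    account = np.array([
        [180.,  20.,   0.,   5.],   # 0-15: most stay, some age up, some exit
        [  0., 590.,  15.,  25.],   # 16-64
        [  0.,   0.,  95.,  10.],   # 65+: deaths / out-migration in last column
        [ 15.,  30.,   5.,   0.],   # external: births and in-migrants
    ])

    start_population = account.sum(axis=1)[:3]   # row sums, excluding external
    end_population = account.sum(axis=0)[:3]     # column sums, excluding external
    print(start_population, end_population)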

The ‘behaviour’ of most entities in a social science system is not deterministic, and therefore the idea of probability is a starting point. The modelling task, implicitly or explicitly, is to estimate probability distributions. We often need to do this subject to various constraints – that is, prior knowledge of the system such as, for example, a total population. It then turns out that the most probable distribution consistent with any known constraints can be estimated by maximising an entropy function, or through maximum likelihood, Bayesian or random utility procedures – all of which can be shown to be equivalent in this respect – a super concept kind of idea in itself. Which approach is chosen is likely to be a matter of background and taste. The generic problem in this case is the task of modelling a large population of entities which are only weakly interacting with each other. These two conditions must be satisfied. The method can then usually generate good estimates of equilibrium states of the system (or in many cases, a subsystem). It can also be used to estimate missing data – and indeed complete data sets from samples, following model calibration based on the sample. We will return to this when we discuss Weaver’s work in Chapter 8, section 8.6.
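
One canonical instance of this generic problem is the doubly-constrained entropy-maximising spatial interaction model: given origin totals O_i, destination totals D_j and travel costs c_ij, the most probable flow matrix is T_ij = A_i B_j O_i D_j exp(-beta c_ij), with the balancing factors A_i and B_j found iteratively. The sketch below, in Python with invented numbers and an assumed value of beta, is purely illustrative:

    import numpy as np

    # Illustrative inputs: origin totals O_i and destination totals D_j (which
    # must sum to the same grand total) and a cost matrix c_ij. beta would
    # normally be calibrated against observed data.
    O = np.array([400., 300., 300.])
    D = np.array([500., 350., 150.])
    c = np.array([[2., 5., 8.],
                  [4., 2., 6.],
                  [7., 5., 2.]])
    beta = 0.5

    def doubly_constrained(O, D, c, beta, n_iter=100):
        """Most probable flows T_ij = A_i B_j O_i D_j exp(-beta c_ij)."""
        A = np.ones(len(O))
        B = np.ones(len(D))
        K = np.exp(-beta * c)
        for _ in range(n_iter):
            A = 1.0 / (K @ (B * D))       # balance rows so that sum_j T_ij = O_i
            B = 1.0 / (K.T @ (A * O))     # balance columns so that sum_i T_ij = D_j
        return (A * O)[:, None] * (B * D)[None, :] * K

    T = doubly_constrained(O, D, c, beta)
    print(T.round(1), T.sum(axis=1), T.sum(axis=0))   # row sums ~ O, column sums ~ D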

Many hypotheses, or model-building tasks, involve optimisation and there is a considerable toolkit available for these purposes. The methods of the previous paragraph, for example, all fall into this category. However, there may be direct hypotheses such as utility or profit maximisation in economics. It is then often the case that simple maximisation does not reproduce reality – for example because of imperfect information on the part of participants. In this case, a method like entropy maximisation can offer ‘optimal blurring’![13]

The above examples focus on systems in equilibrium and in that sense on the fast dynamics of systems: the assumption is that, after a change, there will be a rapid return to equilibrium. We can now shift to the slow dynamics – for example, in the cities case, evolving infrastructure. We are then dealing with (sub)systems that do not satisfy the ‘large number of elements/weak interactions’ conditions. These systems are typically nonlinear and a different approach is needed. Such systems have generic properties: multiple equilibria, path dependence and the possibility of phase changes – the last being abrupt transitions at critical parameter values. Examples of phase changes are the shift to supermarket food retailing in the early 1960s and ongoing gentrification in central areas of cities. Working with Britton Harris in the late 1970s, we evolved, on an ad hoc basis, a model to represent retail dynamics which did indeed have these properties. It was only later that I came to realise that the model equations were examples of the Lotka-Volterra equations from ecology. In one version of the latter, species compete for resources; in the retail case, retailers compete for consumers – and this identifies the generic nature of these model-building problems. The extent of the range of application is illustrated by Richardson’s work on the mathematics of war. It is also interesting that Lotka’s, Volterra’s and Richardson’s work all dates from the 1920s and 1930s, illustrating a different point: that we should be aware of the modelling work of earlier eras as a source of ideas for the present![14]
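
A minimal sketch of this kind of retail dynamics, in the form usually associated with Harris and Wilson: revenue D_j attracted to each centre comes from a spatial interaction model, and floorspace W_j grows or declines according to dW_j/dt = epsilon (D_j - k W_j) W_j, which is Lotka-Volterra-like and can exhibit multiple equilibria and abrupt change as the parameters vary. All the numbers below (spending, costs, alpha, beta, k, epsilon) are invented for illustration:

    import numpy as np

    # Illustrative inputs: spending power e_i P_i at residential zones i, travel
    # costs c_ij to retail centres j, attractiveness exponent alpha, cost
    # parameter beta, unit cost of floorspace k, adjustment rate epsilon.
    eP = np.array([100., 80., 120.])
    c = np.array([[1., 3., 5.],
                  [3., 1., 3.],
                  [5., 3., 1.]])
    alpha, beta, k, eps = 1.2, 0.4, 1.0, 0.001

    def revenues(W):
        """Revenue attracted to each centre from a production-constrained model."""
        weights = (W ** alpha) * np.exp(-beta * c)        # attractiveness * deterrence
        S = weights / weights.sum(axis=1, keepdims=True)  # share of each zone's spending
        return (eP[:, None] * S).sum(axis=0)

    W = np.array([50., 50., 50.])                         # initial floorspace
    for _ in range(5000):                                 # simple Euler time-stepping
        D = revenues(W)
        W = np.maximum(W + eps * (D - k * W) * W, 1e-6)   # dW/dt = eps (D - kW) W
    print(W.round(1), revenues(W).round(1))               # at equilibrium, D_j ~ k W_j

With alpha greater than one, small initial advantages can be amplified so that some centres grow and others collapse – a simple illustration of the multiple-equilibria and phase-change behaviour described above.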

The path-dependent nature of these dynamic models accords with intuition: for example, the future development of a city depends strongly on what is present at a point in time. Path dependence can be seen as a sequence of ‘initial conditions’ – the data at a sequence of points in time – and this offers a potentially useful metaphor: that these initial conditions represent the ‘DNA’ of the system.

These illustrations of the nature of interdisciplinarity obviously stem from my own experience – my own intellectual toolkit that has been built over a long period. The general argument is that, to be an effective contributor to interdisciplinary work, it is worthwhile, probably consciously, to build intellectual toolkits that serve particular systems of interest, and that this involves pretty wide surveys of the possibilities – breadth as well as, what is still very much needed, depth. This leads directly into the notion of ‘requisite knowledge’, which is explored in the next section.

2.3. Requisite knowledge

We now work towards a law which provides a basis for interdisciplinarity. W. Ross Ashby was a psychiatrist who, through books such as Design for a brain[15], was one of the pioneers of the development of systems theory in the 1950s. A particular branch of systems theory was ‘cybernetics’ – from the Greek for ‘steering’ – essentially the theory of the ‘control’ of systems. This was, and I assume is, very much a part of systems engineering and it attracted mathematicians such as Norbert Wiener[16]. For me, an enduring contribution was ‘Ashby’s Law of Requisite Variety’, which is simple in concept and anticipates much of what we now call complexity science. ‘Variety’ is a measure of the complexity of a system and is formally defined as the number of possible ‘states’ of a system of interest. A coin to be tossed has two possible states – heads or tails; a machine can have billions. Suppose some system of interest has to be controlled – for simplicity, a machine, a robot say. Then the law of requisite variety asserts that the control system must have at least the same variety as the machine it is trying to control. This is intuitively obvious since any state of the machine must be matched in some way by a state of the control unit – it needs an ‘if … then …’ mechanism. Suppose now that the system of interest is a country and the control system is its government. It is again intuitively obvious that the government does not have the ‘variety’ of the country and so its degree of control is limited. Suppose, further, that the government of a country is a dictatorship and wants a high degree of control. This can only be achieved by reducing the ‘variety’ of the country through a system of rules.

The law of requisite variety can be seen as underpinning the argument for devolution from ‘centre’ to ‘local’ – a way of building ‘variety’ into governance. This is an idea which can be applied in many situations – for example, in universities and research councils. We begin to see how a concept which appears rooted in engineering can be applied more widely.

We can now take a bigger step and apply it to ‘knowledge’, and, specifically, to the knowledge required to make progress with a research problem. The ‘problem’ is now associated with a ‘system of interest’ and the ‘requisite knowledge’ is that which is required to make progress with the research problem. The application of the law of requisite variety can then be interpreted as relating to the specification of the toolkit of knowledge elements needed, and the law asserts that it must be at least as complex as the problem. We might think of this as the ‘requisite knowledge toolkit’. It seems to me that this is an important route into thinking about how to do research. What do I need to know? What do I need in my toolkit? It forces an interdisciplinary perspective at the outset.

Consider, as an example, the housing problem in the UK (cf. ‘Real challenges’, section 1.2 in Chapter 1, which we elaborate here): what is the requisite knowledge which would be the basis for shifting from building the current 150,000 new houses p.a. in the UK to an estimated ‘need’ of 300,000+ p.a.? We can get a clue from ‘How to start’ (Chapter 3, section 3.1, to follow): there will be policy, design and analysis elements of the toolkit. Elementary economics will tell us that builders will only build if the products can be sold, and this in turn means that they can be afforded – basic supply and demand. Much of the price is determined by the price of land – so land economics is important; or, if land prices are too high for elements of need to be met, there may be an argument for Government subsidies to generate social housing. Alternatively, prices could be influenced by the cost of building, and this raises questions of whether new technology could help – which brings engineering (and international experience) very much into the toolkit. Given that there is likely to be a substantial expansion, geography kicks in: where can this number of houses be built? This is in part a question of ‘where across the UK’ and in part ‘where in, or on the periphery of, particular cities’. Or new ‘garden cities’? All of this raises questions for the planning profession. The builders are part of a wider ecosystem which includes landowners and the government in relation to the regulation of land, through taxation or other means. This all becomes part of the research task.

There are challenges for all of us who might want to work on this issue – academia, divided by discipline; the professions, functioning in silos; the landowners, developers and builders; and government, wanting to make progress but finding it difficult to corral the different groups into an effective unit. In this case, the RK-toolkit can be assembled as a knowledge base, but this sketch shows that an important part of it is a capacity to assemble the right teams.

2.4. Combinatorial evolution

Brian Arthur introduced a new and important idea in his book The nature of technology[17]: that of ‘combinatorial evolution’, which gives us a different kind of insight into interdisciplinarity. The argument, put perhaps overly briefly, is essentially this: a ‘technology’, an aeroplane say, can be thought of as a system, and then we see that it is made up of a number of subsystems; and these can be arranged in a hierarchy. Thus the plane has engines, the engines have turbine blades, and so on. The control system must sit at a high level in the hierarchy and then at lower levels we will find computers. Arthur’s key idea is that most innovation comes at lower levels in the hierarchy, and through combinations of these innovations – hence combinatorial evolution. The computer may have been invented to do calculations but, as in aeroplanes, it now figures as the ubiquitous lynchpin of sophisticated control systems.

This provides a basis for exploring research priorities and, unsurprisingly, it forces us into an interdisciplinary perspective. Arthur is in the main concerned with hard technologies and with specifics, like aeroplanes. However, he does remark that the economy ‘is an expression of technologies’ and that technological change implies structural change. Then: ‘… economic theory does not usually enter [here] … it is inhabited by historians’. We can learn something here about dynamics, about economics and about interdisciplinarity! However, let us focus on cities. We can certainly think of cities as technologies – and much of the ‘smart cities’ agenda can be seen as low-level innovation that can then have higher-level impacts. We can also see the planning system as a soft technology. What about the science of cities, and urban modelling? Arthur’s argument about technology can be applied to science. Think of ‘physics’ as a system of laws, theories, data and experiments. Think of spelling out the hierarchy of subsystems and, historically, charting the levels at which major innovations have been delivered. Translate this more specifically to our urban agenda. If (in broad terms) modelling is the core of the science of cities, and that (modelling) science is one of the underpinnings of the planning system, can we chart the hierarchy of subsystems and then think about research priorities in terms of lower-level innovation?

This really needs a diagram, but that is left as an exercise for the reader! Suppose the top-level is a working model – a transport model, a retail model or a Lowry-based comprehensive model[18]. We can represent this and three (speculative) lower levels broadly as follows.

  • level 1: working model – static or dynamic
  • level 2 (cf. STM):
    • system definition (entities, scales: sectoral, spatial, temporal); exogenous, endogenous variables
    • hypotheses, theories
    • means of operationalising (statistics, mathematics, computers, software, …)
    • information system (cleaned-up data; intermediate model to estimate missing data)
    • visualisation methods
  • level 3:
    • explore possible hypotheses and theories for each subsystem
    • data processing; information system building
    • preliminary statistical analysis
    • available mathematics for operationalising
    • software/computing power
  • level 4:
    • raw data sources

An Arthur-like conjecture might be that the innovations are likely to emanate from levels 3 and 4. In level 3, we have the opportunity to explore alternative hypotheses and to refine theories. Something like utility functions, profits and net benefits are likely to be present in some form or other to represent preferences, with any maximisation hypotheses subject to a variety of constraints (which are themselves integral parts of theory-building). We might also conjecture that an underlying feature is that behaviour will be probabilistic, and so this should always be represented. (In fact this is likely to provide the means for integrating different approaches.)

Can we identify likely innovation territories? The ‘big and open data’ movement will offer new sources and this will have impacts through levels 2 and 3. One consequence is likely to be the introduction of more detail – more categories – into the working model, exacerbating the ‘number of variables’ problem, which in turn could drive the modelling norm towards microsimulation. This will be facilitated by increasing computing power. We are unlikely to have fundamentally different underlying hypotheses for theory building but there may well be opportunities to bring new mathematics to bear – particularly in relation to dynamics. There is one other possibility of a different kind, reflected in level 2 – system definition – in relation to scales. There is an argument that models at finer scales should be controlled by – made consistent with – models at more aggregate scales. An obvious example is that the population distribution across the zones of a city should be consistent with aggregate level demography; and similarly for the urban economy. An intriguing possibility remains the application of the more aggregate methods (demographic and economic input-output) at fine zone scales.
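
A minimal sketch of the consistency idea just mentioned (with invented, illustrative numbers): provisional fine-zone population estimates are scaled so that their totals by age group match an aggregate demographic projection – a simple one-dimensional version of the proportional fitting routinely used for this kind of control. The zone and age-group figures below are assumptions, not data:

    import numpy as np

    # Rows: small zones; columns: age groups. Entries are provisional fine-scale
    # estimates (illustrative). aggregate_control is the city-wide projection
    # by age group that the zonal figures must sum to.
    zonal = np.array([[12., 30.,  8.],
                      [20., 45., 15.],
                      [ 9., 25., 11.]])
    aggregate_control = np.array([45., 110., 30.])

    scaling = aggregate_control / zonal.sum(axis=0)   # one factor per age group
    zonal_adjusted = zonal * scaling                  # column sums now match the controls
    print(zonal_adjusted.round(2), zonal_adjusted.sum(axis=0))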

2.5. Concluding comments

We have noted the power of disciplines and associated coalitions. We have seen how new (possibly sub-) disciplines can evolve. But most importantly, we have seen how a systems focus forces interdisciplinarity in a fundamental sense. This leads to the notion of requisite knowledge in relation to our system of interest and our research challenges, and then to an exploration of combinatorial evolution. The latter adds insight and leads us into constructing a map of the knowledge base for our research problem. With these foundations in mind, we can progress in the next chapter to some of the practicalities of interdisciplinary research, building on the framework presented in Chapter 1.

Chapter 3. How to start

3.1. First steps

There are starting points that we can take from ‘systems thinking’, theory development and ‘methods’ – including data – which together made up the ‘STM’ framework (from Chapter 1): this is the first step in getting started. To recap:

  • S: define the system of interest, dealing with the various dimensions of scale etc
  • T: what kinds of theory, or understanding, can be brought to bear?
  • M: what kinds of methods are available to operationalise the theory, to build a model?

This is essentially analytical: how does the system of interest work? How has it evolved? What is its future? This approach will force an interdisciplinary perspective and, within that, force some choices. For example, statistics or mathematics? Econometrics or mathematical economics? We should also flag a connection to Brian Arthur’s ideas on the evolution of technology – applied to research (cf. section 2.4 above). He would argue that our system of interest in practice can be broken down into a hierarchy of subsystems, and that innovation is likely to come from lower levels in the hierarchy. This was, in his case, technological innovation but it seems to me that this is applicable to research as well.

Then, if applicable, there is a second step for getting started, also from Chapter 1: to ask questions about policy and planning in relation to the system of interest – the PDA framework. Recapping again:

P: what is the policy? (That is, what are the objectives for the future?) Should we develop a plan – another ‘P’?

D: can we design – that is ‘invent’ – possible plans?

A: we then have to test alternative plans by, say, running a model, and to analyse and evaluate them. Ideally, the analysis would offer a range of indicators, perhaps using Sen’s capability framework[19] or offering a full cost-benefit analysis[20]. A policy or a plan is, in formal terms, the specification of exogenous variables that can then be fed into a model-based analysis[21].

These six steps form an important starting point that usually demands much thought and time. Note the links: the STM is essentially the means of analysis in the PDA. It may be thought that the research territory is in some sense pure analysis but most urban systems of interest have real-world challenges associated with them, and these are worth thinking about. Some ideas of research problems should emerge from this initial thinking. Some problems will arise from the challenges of model building, some from real on-the-ground problems. Examples follow.

  • Demographic models are usually built for an aggregate scale. Could they be developed for small zones – say for each of the 626 wards in London?
  • While there may be pretty good data on birth and death rates, migration proves much more difficult. First there are definitional problems: when is a move a migration – long distance? – and when is it residential relocation?
  • If we want to build an input-output model for a city, then unlike the case at the national level, there will be no data on trade flows – imports and exports – so there is a research challenge to estimate these.
  • There is then an economic analogue of the demography question: what would an input-output model for a small zone – say a London ward – look like? This could be used to provide a typology of zone (neighbourhood) types.
  • In the UK at the present time there is, in aggregate at the national level, a housing shortage. An STM description might focus on cities, or even small zones within cities. How does the housing shortage manifest itself: differential prices? What can be done about it? This last is a policy and planning question: alternative policies and plans could be explored – the PD part of PDA – and then evaluated – the A part.
  • What is the likely future of retail – relative sizes of centres, the impact of internet shopping etc?
  • How can ‘parental choice’ in relation to schools be made to work (without large numbers feeling very dissatisfied with second, third or fourth choices)?
  • Can we, should we, aim to do anything about road congestion?
  • Does responding to climate change at the urban scale involve shorter trips and higher densities? If so, how can this be brought about – the Design-question? If not, why not?!
  • Can we speculate about the future of work in an informed way – taking account of the possibilities of ‘hollowing out’ through automation?
  • And so on …!

Research questions can be posed and the STM-PDA framework should help. The examples indicated are real and ambitious, and it is right that we should aim to be ambitious. However, given the resources that any of us have at our disposal, the research plan also has to be feasible. There are different ways of achieving feasibility – probably two poles of a spectrum: either narrow down the task to a small part of the bigger question, or stay with the bigger question and try to break into it – testing ideas on a ‘proof-of-concept’ basis. The first of these is the more conservative, and can be valuable; and is probably the most popular with undergraduates doing dissertations or postgraduate students – and indeed their supervisors. It is lower risk, but potentially less interesting!

We can then add a further set of basic principles – offering topics for thought and discussion once the STM-PDA analysis is done at least in a preliminary way.

  • Try to be comprehensive, at least to capture as much of the inevitable interdependence in your system of interest as is feasible.
  • Review different approaches – e.g. to model building – and integrate where possible. There are some good opportunities for spin-off research in this kind of territory.
  • Think of applying good ideas more widely. I was well served in the use of the entropy concept in my early research days: having started in transport modelling, because I always wanted to build a comprehensive model, I could apply the concept to other subsystems and (with Martyn Senior) find a way of making an economic residential location model optimally sub-optimal[22]!
  • The ‘more widely’ also applies to other disciplines. Modelling techniques that work in a contemporary situation, for example, can be applied to historical periods – even ancient history and archaeology. (See Chapter 6, section 6.5)
  • There is usually much work to do on linking data from different sources and making it fit the definitions of your system of interest. Models can also be used for estimating missing data and for making samples comprehensive.

Preliminary thinking done, some structure generated: get started!!

3.2. ‘Research on’ vs ‘research for’

The previous section adds substance to the framework of Section 1.1. In a similar way we can explore a new perspective on the ‘real challenges’ introduced in Section 1.2. In the usual way, we can begin with a ‘system of interest’ – henceforth ‘the system’ for brevity. We can then make a distinction between the ‘science of the system’ and the ‘applied science relating to the system’. In the second case, the implication is that the system offers challenges and problems that the science (of that system, or possibly also of ‘associated’ systems) might help with. In research terms, this distinction can be roughly classified as ‘research on’ the system and ‘research for’ the system. This might be physics on the one hand, and engineering on the other; or biological sciences and medicine. There will be many groups of disciplines like this where there is a division of labour – though whether this division is always either clear or efficient can be a matter of debate. In the case of urban research (and possibly the social sciences more generally), perhaps because it is an under-developed interdisciplinary area, there is a division of labour but with a significant grey area in between. In the case of cities, the practitioners – the planners – are not well served by the ‘research on’ community; or perhaps they are not sufficiently well equipped to engage.[23] But there is also a concern that the division is too sharp and that the balance of research effort is more focused on ‘research on’ rather than contributing to ‘research for’.

There are a number of complications that we have to work to resolve. First, there is the fact that there are disciplinary agendas on cities – in economics, geography and sociology, for example – where they ought to be interdisciplinary. But it does illustrate the fact that there is a ‘research on’ versus ‘research for’ challenge. The ‘research on’ school are concerned with how cities work, the ‘research for’ group with, for example, how to ‘solve’ (if that is the right question) traffic congestion, or housing problems, or social disparities. It is a long list, as we have already seen.

A second complicating issue is the research councils’ ‘impact’ agenda. I have no problem with a requirement that all research should be intended to have impact. The opposite is absurd. However, that depends on the possibility of the impact being intellectual impact, within the science; that is, impact within ‘research on’. What seems to have happened is that the research councils’ definition has narrowed and impact in their sense is intended to relate to ‘real’ problems – in other words, to ‘research for’. Consider physics and engineering: while the toolkits overlap in some respects, they, and the associated mindsets, are pretty different. The same could be argued for research on cities, except that in this case we don’t have labels that are analogous to physics and engineering. So we have to invent our own! From a research council perspective, this has not been clearly handled. There is an expectation that for any application there will be a ‘pathways to impact’ statement. If the research in question is of a ‘research on’ kind, and if the associated tools do not obviously fit ‘research for’, then this is very difficult and there is quite a lot of jumping through hoops.

A third issue is the influence of the REF (in the UK) on research priorities[24]. Again, there is an element of required impact and yet the bulk of the panels are made up of ‘research on’ academics. It is even argued – or is it just in our subconscious? – that ‘pure’ research is more worthwhile in REF terms than ‘applied’. It was once suggested to me in the context of a university Business School – not in these words – that ‘research on firms’ was more important for the REF than ‘research for firms’, because the latter could be considered as consultancy and therefore of a lower grade. There is some truth in this in that ‘research on’ can produce wide-ranging ‘general results’ that offer insight, as opposed to specific case studies that don’t generalise. But in the social sciences at least, it is the case studies that eventually lead to the general, grounded in evidence.

There is then a fourth issue, more like a challenge: if impact is really desirable – and it is – how can the users get the best from the researchers? It is often argued that the UK has very high-quality research but, to a substantial extent, fails to reap the rewards of application. Indeed there have been commissions of many kinds for decades on how academic research can be better linked to application – I would guess a study roughly once every two years. There are various ‘solutions’ and many have been tried, but success has been, at best, partial. The ‘research on’ community remains the largest group of academic researchers and retains the ‘prestige’ that serves it well in many ways. There are significant straws in the wind, at least in the UK: a shifting of research resources in the direction of Innovate UK and the establishment of the catapults[25]; some redirection of research council funding. But my guess is that there is a battle for hearts and minds still being fought.

What do we need? Some clarity of thought, some changes of mindset especially in terms of prestige, and perhaps above all, some demonstrators that show that ‘research for’ can be just as exciting as ‘research on’ – in many cases much more so. In the urban research world, we are lucky in principle in that we can have it both ways: discoveries in the science often have pretty immediate applications, but there are opportunities for more ‘research on’ researchers to spend at least some time in the ‘research for’ community! In my own case, the most striking example was working to build a spin-out company – GMAP – as presented in Chapter 6, section 6.4. This was a demonstration of having it both ways: ‘research for’ provided access to data which could then be used in basic research – ‘research on’.

3.3. Concluding comments

We are now in a position to start: we can define a research challenge, we have frameworks which help us to be interdisciplinary. The next step is to begin to build a toolkit from which methods and ideas can be drawn and brought to bear on our problem. The PDA framework helps us here, and we begin in Chapter 4 with ‘analysis’ – core knowledge of our system of interest is always likely to be a good starting point. We then take design and policy together in Chapter 5 under the heading of ‘problem-solving’.

Chapter 4. Analysis

4.1. Introduction: models, then data

In broad terms, ‘analysis’ represents the underpinning science via STM, applied through the PDA framework. In the illustrations that follow, the emphasis is on quantitative research to illustrate the principles of interdisciplinary research – fully recognising that there are alternative, relevant and complementary perspectives – and indeed, the qualitative informs the quantitative, particularly through the theory-building part of STM. However, the development of computing power and, more recently, access to real-time ‘big’ data sources, have added momentum to quantitative interdisciplinary social, economic, and geographic research. A narrative on a theory of, for example, how a city functions can be translated into mathematics and then into a computer model, and we use the idea of a model in a number of the following sections to illustrate interdisciplinary urban research that can also be applied within a PDA framework.

The history of urban mathematical and computer modelling is sketched in section 4.2. as a prime example of interdisciplinary research. Of course, there are different and competing approaches to model development and these are explored in sections 4.3 and 4.4 in ways that might help researchers make their choices in model design. We then work towards a series of sections on the impact of increased data availability and the development of data science as an underpinning technology for the growth of AI and the range of its application in a number of domains. The ordering here is deliberate. An argument is sometimes put in terms of ‘big data’ – the exponentially increasing supply of data – that analysis of the data will somehow make modelling (and theory?) obsolete. The opposite remains the case: we need the  understanding of our systems of interest to know which data are relevant!

4.2. The power of modelling: understanding and planning cities – an example

The ‘science of cities’ has a long history. The city was the market for von Thunen’s analysis of agriculture in the early 19th Century. There were many largely qualitative explorations in the first half of the 20th Century. However, cities are complex systems and the major opportunities for scientific development came with the emergence of computer power in the 1950s. This coincided with large investments in highways in the United States and computer models were developed to predict both current transport patterns and the impacts of new highways. Plans could be evaluated and the formal methods of what became cost-benefit analysis were developed around that time. However, it was always recognised that transport and land use were interdependent and that a more comprehensive model was needed. There were several attempts to build such models but the one that stands out is I. S. (Jack) Lowry’s model of Pittsburgh which was published in 1964. This was elegant and (deceptively) simple – represented in just 12 algebraic equations and inequalities. Many contemporary models are richer in detail and have many more equations but most are Lowry-like in that they have a recognisably similar core structure.[26]

So what is this core and what do we learn from it? How can we add to our understanding by adding detail and depth? What can we learn by applying contemporary knowledge of system dynamics? What does all this mean for future policy development and planning? The argument is illustrated and referenced from my own experience as a mathematical and computer urban modeller but the insights work on a broader canvas.

The Lowry model is iconic in its translation of urban theory into a comprehensive model. He[27] started with some definitions of urban economies with two broad categories: ‘basic’ – mainly industry, and from the city’s perspective, exporting; and ‘retail’, broadly defined, meaning anything that serves the population[28]. He then introduced some key hypotheses about land. For each zone of the city he took total land, identified unusable land, allocated land to the basic sector and then argued that the rest was available to retail and housing, with retail having priority. Land available for housing, therefore, was essentially a residual.

A model run then proceeds iteratively. Basic employment is allocated exogenously to each zone – possibly as part of a plan. This employment is then allocated to residences and converted into total population in each zone. This link between employment zones and residential zones can be characterised as ‘spatial interaction’ manifested by the ‘journey to work’. The population then ‘demands’ retail services and this generates further employment which is in turn allocated to residential zones. (This is another spatial interaction – between residential and retail zones.) At each stage in the iteration the land use constraints are checked and if they are exceeded (in housing demand) the excess is reallocated. And so the city ‘grows’. This growth can be interpreted as the model evolving to an equilibrium at a point in time or as the city evolving through time – an elementary form of dynamics.
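For readers who like to see the mechanics, a minimal Python sketch of a Lowry-style iteration follows. It is illustrative only: the zone system, the parameter values and the simple exponential distance-decay weighting are all assumptions made for the sketch, and the land-constraint checks described above are omitted.

```python
import numpy as np

# Minimal sketch of a Lowry-style iteration (illustrative only: the zone
# system, parameters and distance-decay form are assumed, and the land
# constraints described in the text are omitted).

n_zones = 5
rng = np.random.default_rng(0)
cost = rng.uniform(1.0, 10.0, size=(n_zones, n_zones))   # zone-to-zone travel costs
basic_emp = np.array([100.0, 0.0, 50.0, 0.0, 0.0])        # exogenous basic employment

alpha = 2.0    # residents per worker (inverse activity rate, assumed)
beta = 0.3     # retail jobs generated per resident (assumed)
mu = 0.5       # distance-decay parameter (assumed)

def allocate(origins, cost, mu):
    """Spread activity at each origin over destination zones with an
    exponential distance-decay weighting (a simple spatial interaction)."""
    w = np.exp(-mu * cost)
    w /= w.sum(axis=1, keepdims=True)
    return origins @ w

employment = basic_emp.copy()
for _ in range(100):                                       # iterate: the city 'grows'
    population = alpha * allocate(employment, cost, mu)    # workplaces -> residences
    retail_emp = beta * allocate(population, cost, mu)     # residences -> retail jobs
    new_employment = basic_emp + retail_emp
    if np.allclose(new_employment, employment):            # increments have died out
        break
    employment = new_employment

print(np.round(population, 1))
print(np.round(employment, 1))
```

The loop converges because each round of induced retail employment is a fixed fraction of the last; in a full model, the land-use constraints would be checked at each pass and excess housing demand reallocated, as described above.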

The essential characteristics of the Lowry model which remain at the core of our understanding are:

  • the distinction between the basic (outward-serving) and retail (population-serving) sectors of the urban economy;
  • the ‘spatial interaction’ relationships between work and home and between home and retail;
  • the demand for land from different sources, and in particular housing being forced to greater distances from work and retail as the city grows. This has obvious implications for land value and rents.

In the half century since Lowry’s work was published, depth and detail have been added and the models have become realistic, at least for representing a working city and for short-run forecasting.  The longer run still provides challenges as we will see. It is now more likely that the Lowry model iteration would start with some ‘initial conditions’ that represent the current state. The model would then represent the workings of the city and could be used to test the impact of investment and planning policies in the short run. The economic model and the spatial interaction models would be much richer in detail and while it remains non-trivial to handle land constraints, submodels of land value both help to handle this and are valuable in themselves.[29]

Specifically:

  • the key variables can all be disaggregated – people for example can be characterised by age, sex, educational attainment and skills and so be better matched to a similarly disaggregated set of economic sectors – demanding a variety of skills and offering a range of incomes;
  • population analysis and forecasting can be connected to a fully-developed demographic model;
  • the economy can be described by full input-output accounts and the distinction between basic and retail can be blurred through disaggregation;
  • the residential location component can be enriched through the incorporation of utility functions with a range of components, and house prices can be estimated through measures of ‘pressure’, thus facilitating the effective modelling of which types of people live where;
  • this all reinforces the idea that the different elements of the city are all interdependent.

‘Housing pressure’ will be related to the handling of land constraints in the model. In the Lowry case, this was achieved by the reallocation of an undifferentiated population when zones became ‘full’. With contemporary models, because house prices can be estimated (or some equivalent), it is these prices that handle the constraints.

While the Lowry-type models remain comprehensive in their ambition, sectoral models – particularly in the transport and retail cases – are usually developed separately in even greater depth and as such, they can be used for short run forecasting. Supermarket companies, for example, routinely use such models to estimate the revenue attracted to proposed new stores which supports the planning of their investment strategies.[30]

The models as described above are essentially statistical averaging models[31] and they work well for large populations where the predictions of the models are of ‘trip bundles’ rather than of individual behaviour. The models work well precisely because of this averaging process which takes out the idiosyncrasies of individuals. They use the mathematics – but with a different theoretical base – developed by Boltzmann in the late 19th Century in physics. But what can we then say about individual behaviour? Two things: we can interpret the ‘averaging models’, and we can seek to model individual behaviour directly.

In the first case, there are elements of the models that can be interpreted as individual utility functions. In the retail case for example, it is common to estimate the perceived benefits of shopping centre size and to set these against the costs of access (including money costs and estimated values of different kinds of time). What the models do through their averaging mechanism is to represent distributions of behaviour around average utilities. This is much more realistic than the classic economic utility maximising models as shown through goodness-of-fit measures.
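To make this concrete, here is a sketch in symbols – a standard singly-constrained retail model of the kind being described, with the notation chosen purely for illustration. The flow of expenditure S_ij from residents of zone i to a shopping centre at j can be written

\[ S_{ij} = e_i P_i \, \frac{W_j^{\alpha} e^{-\beta c_{ij}}}{\sum_k W_k^{\alpha} e^{-\beta c_{ik}}} \]

where e_i P_i is the spending power of zone i, W_j the size of centre j, and c_ij the generalised cost of access. The quantity α log W_j − β c_ij plays the role of an average utility – the perceived benefit of size set against the cost of access – and the exponential form distributes behaviour around that average rather than assuming that everyone maximises it exactly.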

The second case demands a new kind of model and these have been developed as so-called agent-based models (ABMs). A population of individual ‘agents’ is constructed along with an ‘environment’. The agents are then given rules of behaviour and the system evolves. If the rules are based on utility maximisation on a probabilistic basis, then the two kinds of model can be shown to be broadly equivalent[32].
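As a minimal illustration of that equivalence – with every number below an assumption made purely for the sketch – agents can be given a probabilistic (logit) choice rule built from the same ‘size versus access cost’ utility, and their aggregated choices approximate the flows of the averaging model:

```python
import numpy as np

# Minimal agent-based sketch: agents make probabilistic (logit) destination
# choices; aggregating them reproduces, approximately, the 'trip bundles'
# of the averaging model. All parameter values are illustrative.

rng = np.random.default_rng(1)
n_agents, n_centres = 10_000, 4
size = np.array([3.0, 1.0, 2.0, 1.5])                     # centre sizes W_j (assumed)
cost = rng.uniform(1.0, 5.0, size=(n_agents, n_centres))  # each agent's access costs
alpha, beta = 1.0, 1.0                                     # taste parameters (assumed)

utility = alpha * np.log(size) - beta * cost               # benefit of size minus cost
prob = np.exp(utility)
prob /= prob.sum(axis=1, keepdims=True)                    # logit choice probabilities

choices = np.array([rng.choice(n_centres, p=p) for p in prob])
flows = np.bincount(choices, minlength=n_centres)          # aggregate flows to centres
print(flows)
```

With more agents, the aggregate flows settle ever closer to what the corresponding spatial interaction model would predict – which is the sense in which the two pictures are ‘broadly equivalent’.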

The argument to date has been essentially geo-economic though with some implicit sociology in the categorisation of variables when the models are disaggregated. There is more depth to be added in principle from sociological and ethnographic studies and if new findings can be clearly articulated, this kind of integration can be achieved.

The models described thus far represent the workings of a city at a point in time – give or take the dynamic interpretation of the Lowry model. There is an implicit assumption that if there is a ‘disturbance’ – an investment in a new road or a housing estate for example – then the city returns to equilibrium very quickly and so this can be said to characterise the ‘fast dynamics’. It does mean that these models can be used, and are used, to estimate the impacts of major changes in the short term. The harder challenge is the ‘slow dynamics’ – to model the evolution of the slower changing structural features of a city over a longer period. This takes us into interdisciplinary territory now sometimes known as ‘complexity science’. When the task of building a fully dynamic model is analysed, it becomes clear that there are nonlinear relationships – for example, as retail centres grow, there is evidence that there are positive returns to scale. Technically, we can then draw on the mathematics of nonlinear complex systems which show that we can expect to find path dependence – that is, dependence on initial conditions – and phase changes – that is, abrupt changes in form as ‘parameters’ (features such as income or car ownership) pass through critical values. The particular models in mathematical terms bear a family relationship to the Lotka-Volterra models[33], originally designed to model ecological systems in the 1920s and 1930s, but which can now be seen as having a much wider range of application[34]. (See also Section 4.5 below.)

These ideas can be illustrated in terms of retail development. In the late 1950s and early 60s, corner-shop food retailing was largely replaced by supermarkets. By the standards of urban structural change, this was very rapid, and it can be shown that this arose through a combination of increasing incomes and car ownership – hence, in effect, increasing accessibility to more distant places. This was a phase change. Path dependence is illustrated by the fact that if a new retail centre is developed, its success in terms of revenue attracted will be dependent on the existing pattern of centres – the initial conditions – and again this can be analysed using dynamic models[35].
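In outline, and using the retail notation introduced earlier, the kind of dynamic model involved (the form developed with Britton Harris, returned to in Section 4.5) lets the centre sizes W_j evolve according to

\[ \frac{dW_j}{dt} = \epsilon \, (D_j - k W_j), \qquad D_j = \sum_i S_{ij}, \]

where D_j is the revenue attracted to centre j and kW_j its costs. Because each S_ij depends on all the W’s, the equations are coupled and nonlinear; as parameters such as β (ease of travel) or incomes pass through critical values, the equilibrium pattern of centres can change abruptly – the formal counterpart of the corner-shop-to-supermarket transition just described.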

This leads us to two fundamental insights: first, it is impossible to forecast for the long term because of the likelihood of phase changes at some point in the future; and secondly, the initial structure of the city – the ‘initial conditions’ – might be thought of as the ‘DNA’ of the city and this will in substantial part determine what futures are possible. Attempts to plan new and possibly more desirable futures can be thought of as ‘genetic planning’ by analogy with genetic medicine.

Given these insights, how can we investigate the long term – 25 or 50 years ahead? We can investigate a range of futures through the development of scenarios; we can then deploy Lowry-Boltzmann-like models to assess the performance of these scenarios, and use the fully dynamic Lotka-Volterra models to explore the possible paths, giving insights into what has to be done to achieve these futures.

There is a key distinction in the application of models: that between variables which are exogenous to the model and those that are endogenous. The exogenous variables are specified either as forecasts or as components of plans and the model can then be run to calculate the endogenous variables for the new situation. This is done more or less routinely in transport and retail sector planning. For example, a new road can be ‘inserted’ into the model, the model rerun, and the ‘adjusted’ city explored. In this case, a cost-benefit analysis can be carried out along with the calculation of accessibilities. In the case of retail, a developer or a retailer can run the model to calculate the revenue attracted to a new store and then calculate the maximum level of investment that would make such a store profitable – often, how much to bid for a site. It is possible now, but relatively rare, to apply these methods in the public sector in fields such as education and health and indeed, model-based methods could be used to underpin master planning and thus contribute to effective housing development and associated green belt policies.
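A minimal sketch of the ‘insert and rerun’ procedure, with the zone system, attractions, costs and the new link all assumed purely for illustration: compute a Hansen-type accessibility for each zone before and after a hypothetical new road lowers one inter-zonal cost.

```python
import numpy as np

# Illustrative sketch (not a production model): Hansen-type accessibility
# A_i = sum_j W_j * exp(-beta * c_ij), recomputed after a hypothetical new
# road cuts the travel cost between two zones.

W = np.array([50.0, 20.0, 30.0, 10.0])         # zonal attractions (assumed)
cost = np.array([[1.0, 4.0, 6.0, 8.0],
                 [4.0, 1.0, 5.0, 7.0],
                 [6.0, 5.0, 1.0, 3.0],
                 [8.0, 7.0, 3.0, 1.0]])         # zone-to-zone travel costs (assumed)
beta = 0.5

def accessibility(cost):
    return (W * np.exp(-beta * cost)).sum(axis=1)

before = accessibility(cost)
cost_new = cost.copy()
cost_new[0, 3] = cost_new[3, 0] = 3.0           # 'insert' a new link between zones 0 and 3
after = accessibility(cost_new)
print(np.round(after - before, 2))              # accessibility gains by zone
```

In a full application the model itself would be rerun and a cost-benefit analysis carried out; the point of the sketch is simply the before-and-after comparison that underpins such evaluations.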

As we have seen in our brief review of dynamics, this only works in this way for the short run – the impacts of building a new road or opening a new store. For the long run, it is necessary to shift to scenario development and this creates opportunities to explore possible solutions to the biggest challenges – the so-called wicked problems.

It is not difficult to construct a wicked problems list[36]: regeneration, especially economic, in many ‘poor’ towns, embracing the north vs south issues – opportunities for fulfilling work more broadly; chronic housing shortages; transport congestion, limiting accessibilities; the long tail of failure in education; the post-code lottery aspect of health care; and perhaps the biggest challenge of all, responding to climate change and low-carbon targets. Applications of computer models will not solve these problems. This brings home the policy and design dimensions of planning: policies to attack wicked problems need to be ‘serious’ and to be seen to be so; possible solutions have to be invented. These ambitions then provide the basis for developing more radical scenarios as well as ‘more of the same’. And then the skills of the modeller kick in again in the analysis of feasibility, calculating costs and benefits, and charting the path from A to B – from the present to a rewarding future.

4.3. Competing models: truth is what we agree about

I have always been interested in philosophy and the big problems – particularly, ‘What is truth?’. How can we know whether something – a sentence, a theory, a mathematical formula – is true? And I guess because I was a mathematician and a physicist early in my career, I was particularly interested in the subset of this which is the philosophy of mathematics and the philosophy of science. I read a lot of Bertrand Russell – which perhaps seems rather quaint now.[37] The maths and the science took me into Popper and the broader reaches of logical positivism. Time passed and I found myself a young university professor, working on mathematical models of cities, then the height of fashion. But fashions change and by the late 70s, on the back of distinguished works like David Harvey’s ‘Social justice and the city’, I found myself under sustained attack from a broadly Marxist front. ‘Positivism’ became a term of abuse and Marxism, in philosophical terms – or at least my then understanding of it, merged into the wider realms of structuralism. I was happy to come to understand that there were hidden (often power) structures to be revealed in social research that the models I was working on missed, therefore undermining the results.

This was serious stuff. I could reject some of the attacks in a straightforward way. There was a time when it was argued that anything mathematical was positivist and therefore bad and/or wrong. This could be rejected on the grounds that mathematics was a tool and that indeed there were distinguished Marxist mathematical economists such as Sraffa. But I had to dig deeper in order to understand. I read Marx, I read a lot of structuralists some of whom, at the time, were taking over English departments. I even gave a seminar in the Leeds English Department on structuralism! In my reading, I stumbled on Jurgen Habermas and this provided a revelation for me. It took me back to questions about truth and provided a new way of answering them. In what follows, I am sure I oversimplify. His work is very rich in ideas, but I took a simple idea from it: truth is what we agree about. I say this to students now who are usually pretty shocked. But let’s unpick it. We can agree that 2 + 2 = 4. We can agree about the laws of physics – up to a point anyway – there are discoveries to be made that will refine these laws as has happened in the past. That also connects to another idea that I found useful in my toolkit: C. S. Peirce and the pragmatists. I will settle for the colloquial use of ‘pragmatism’: we can agree in a pragmatic sense that physics is true – and handle the refinements later. I would argue from my own experience that some social science is ‘true’ in the same way: much demography is true up to quite small errors – think of what actuaries do. But when we get to politics, we disagree. We are in a different ball park. We can still explore and seek to analyse and having the Habermas distinction in place helps us to understand arguments.

How does the agreement come about? The technical term used by Habermas is ‘intersubjective communication’ and there has to be enough of it. In other words, the ‘agreement’ comes on the back of much discussion, debate and experiment. This fits very well with how science works. A sign of disagreement is when we hear that someone has an ‘opinion’ about an issue. This should be the signal for further exploration, discussion and debate rather than simply a ‘tennis match’ kind of argument.

Where does this leave us as social scientists? We are unlikely to have laws in the same way that physicists have laws but we have truths, even if they are temporary and approximate. We should recognise that research is a constant exploration in a context of mutual tolerance – our version of intersubjective communication. We should be suspicious of the newspaper article which begins ‘research shows that …..’ when the research quoted is based on a single, usually small, sample survey, and regression analysis. We have to tread a line between offering knowledge and truth on the one hand and recognising the uncertainty of our offerings on the other. This is not easy in an environment where policy makers want to know what the evidence is, or what the ‘solution’ is, for pressing problems and would like to be more assertive than we might feel comfortable with. The nuances of language to be deployed in our reporting of research become critical. 

Models are representations of theories. I write this as a modeller – someone who works on mathematical and computer models of cities and regions but who is also seriously interested in the underlying theories I am trying to represent. My field, relative say to physics, is underdeveloped. This means that we have a number of competing models and it is interesting to explore the basis of this and how to respond. What is ‘truth’ in this context?  There may be implications for other fields – even physics!

A starting conjecture is that there are two classes of competing models: (i) those that represent different underlying theories (or hypotheses); and (ii) those that stem from the modellers choosing different ways of making approximations in seeking to represent very complex systems. The two categories overlap of course. I will conjecture at the outset that most of the differences lie in the second (perhaps with one notable exception). So let’s get the first out of the way. Economists want individuals to maximise utility and firms to maximise profits – simplifying somewhat of course. They can probably find something that public services can maximise – health outcomes, exam results – indeed a whole range of performance indicators. There is now a recognition that for all sorts of reasons, the agents do not behave perfectly and ways have been found to handle this. There is a whole host of (usually) micro-scale economic and social theory that is inadequately incorporated into models, in some cases because of the complexity issue – the niceties are approximated away; but in principle, that can be handled and should be. There is a broader principle lurking here: for most modelling purposes, the underlying theory can be seen as maximising or minimising something. So if we are uncomfortable with utility functions or economics more broadly, we can still try to represent behaviour in these terms – if only to have a baseline from which behaviour deviates.

So what is the exception – another kind of dividing line which should perhaps have been a third category? At the pure end of a spectrum, ‘letting the data speak for themselves’. It is mathematics vs statistics; or econometrics vs mathematical economics. Statistical models look very different – at least at first sight – to mathematical models – and usually demand quite stringent conditions to be in place for their legitimate application. Perhaps, in the quantification of a field of study, statistical modelling comes first, followed by the mathematical? Of course there is a limit in which both ‘pictures’ can merge: many mathematical models, including the ones I work with, can be presented as maximum likelihood models. This is a thread that is not to be pursued further here, and I will focus on my own field of mathematical modelling.

There is perhaps a second high-level issue. It is sometimes argued that there are two kinds of mathematician: those who think in terms of algebra and those who think in terms of geometry. (I am in the algebra category, which I am sure biases my approach.) As with many of these dichotomies, it should ideally be dissolved and both perspectives fully integrated. But this is easier said than done!

How do the ‘approximations’ come about? I once tried to estimate the number of variables I would like to have for a comprehensive model of a city of 1M people and at a relatively coarse grain, the answer was around 10¹³![38] This demonstrates the need for approximation. The first steps can be categorised in terms of scale: first, spatial – referenced by zones of location rather than continuous space – and how large should the zones be? Second, temporal: continuous time or discrete? Third, sectoral: how many characteristics of individuals or organisations should be identified and at how fine a grain? Experience suggests that the use of discrete zones – and indeed other discrete definitions – makes the mathematics much easier to handle. Economists often use continuous space in their models, for example, and this forces them into another kind of approximation: monocentricity, which is hopelessly unrealistic. Many different models are simply based on different decisions about, and representations of, scale.

The second set of differences turns on focus of interest. One way of approximating is to consider a subsystem such as transport and the journey to work, or retail and the flow of revenues into a store or a shopping centre. The danger here is that critical interdependencies are lost, and this always has to be borne in mind. Consider the evaluation of new transport infrastructure for example. If this is based purely on a transport model, there is a danger that the cost-benefit analysis will be concentrated on time savings rather than the wider benefits – perhaps represented by accessibilities. There is also a potentially higher-level view of focus. Lowry very perceptively once pointed out that models often focus on activities – and the distribution of activities across zones; or on the zones, in which case the focus would be on land use mix in a particular area. The trick, of course, is to capture both perspectives simultaneously – which is what Lowry achieved himself very elegantly but which has been achieved only rarely since.

A major bifurcation in model design turns on the time dimension and the related assumptions about dynamics. Models are much easier to handle if it is possible to make an assumption that the system being modelled is either in equilibrium or will return to a state of equilibrium quickly after a disturbance. There are many situations where the equilibrium assumption is pretty reasonable – for representing a cross-section in time or for short-run forecasting, for example, representing the way in which a transport system returns to equilibrium after a new network link or mode is introduced. But the big challenge is in the ‘slow dynamics’: modelling how cities evolve.

It is beyond our scope here to review a wide range of examples. If there is a general lesson, it is that we should be tolerant of each other’s models, and we should be prepared to deconstruct them to facilitate comparison and perhaps to remove what appears to be competition but needn’t be. The deconstructed elements can then be seen as building bricks that can be assembled in a variety of ways. For example, ‘generalised cost’ in an entropy-maximising spatial interaction model can easily be interpreted as a utility function and therefore not in competition with economic models. Cellular automata models and agent-based models are similarly based on different ‘pictures’ – different ways of making approximations. There are usually different strengths and weaknesses in the different alternatives. In many cases, with some effort, they can be integrated[39]. From a mathematical point of view, deconstruction can offer new insights. We have, in effect, argued that model design involves making a series of decisions about scale, focus, theory, method and so on. What will emerge from this kind of thinking is that different kinds of representations – ‘pictures’ – have different sets of mathematical tools available for the model building. And some of these are easier to use than others, and so, when this is made explicit, might guide the decision process.

4.4. Abstract modes

I spent much time working on the Government Office for Science Foresight project on The future of cities. The focus was a time horizon of fifty years into the future. It is clearly impossible to use urban models to forecast such long-term futures but it is possible in principle to explore systematically a variety of future scenarios. A key element of such scenarios is transport and we have to assume that what is on offer – in terms of modes of travel – will be very different to today – not least to meet sustainability criteria. The present dominance of car travel in many cities is likely to disappear. How, then, can we characterise possible future transport modes?[40]

This takes me back to ideas that emerged in papers published 50 years ago (or in one case, almost that). In 1966 Dick Quandt and William Baumol[41], distinguished Princeton economists, published a paper in the Journal of Regional Science on ‘abstract transport modes’. Their argument was precisely that in the future, technological change would produce new modes: how could they be modelled? Their answer was to say that models should be calibrated not with modal parameters, but with parameters that related to the characteristics of modes. The calibrated results could then be used to model the take up of new modes that had new characteristics. By coincidence, Kelvin Lancaster, Columbia University economist, published a paper, also in 1966, in The Journal of Political Economy on ‘A new approach to consumer theory’[42] in which utility functions were defined in terms of the characteristics of goods rather than the goods themselves. He elaborated this in 1971 in his book ‘Consumer demand: a new approach’. In 1967, my ‘entropy’ paper was published in the journal Transportation Research and a concept used in this was that of ‘generalised cost’. This assumed that the cost of travelling by a mode was not just a money cost, but the weighted sum of different elements of (dis)utility: different kinds of time, comfort and so on, as well as money costs. The weights could be estimated as part of model calibration. David Boyce and Huw Williams in their magisterial history of transport modelling, ‘Forecasting urban travel’, wrote, quoting my 1967 paper, “impedance … may be measured as actual distance, as travel time, as cost, or more effectively as some weighted combination of such factors sometimes referred to as generalised cost … In later publications, ‘impedance’ fell out of use in favour of ‘generalised cost’”. (They kindly attributed the introduction of ‘generalised cost’ to me.)[43]
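In symbols – an illustrative form rather than a definitive one – the generalised cost of travelling from i to j by mode k might be written

\[ c_{ij}^{k} = m_{ij}^{k} + \sum_{s} w_s \, t_{ij}^{k,s} + d^{k}, \]

where m is the money cost, the t’s are the different kinds of time (in-vehicle, waiting, walking), the w_s are weights – values of time – estimated in calibration, and d^k captures mode-specific (dis)comfort. An ‘abstract’ mode is then simply a new bundle of these characteristics, and its likely take-up in a future scenario can be explored by inserting its generalised cost into an already calibrated model.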

This all starts to come together. The Quandt and Baumol ‘abstract mode’ idea has always been in my mind and I was attracted to the Kelvin Lancaster argument for the same reasons – though that doesn’t seem to have taken off in a big way in economics. (I still have his 1971 book, purchased from Austicks in Leeds for £4-25.) I never quite connected ‘generalised cost’ to ‘abstract modes’. However, I certainly do now. When we have to look ahead to long-term future scenarios, it is potentially valuable to envisage new transport modes in generalised cost terms. By comparing one new mode with another, we can make an attempt – approximately because we are transporting current calibrated weights fifty years forward – to estimate the take up of modes by comparing generalised costs. I have not yet seen any systematic attempt to explore scenarios in this way and I think there is some straightforward work to be done – do-able in an undergraduate or master’s thesis!

We can also look at the broader questions of scenario development. Suppose for example, we want to explore the consequences of high-density development around public transport hubs. These kinds of policies can be represented in our comprehensive models by constraints – and I argue that the idea of representing policies – or more broadly ‘knowledge’ – in constraints within models is another powerful tool. This also has its origins in a fifty year old paper – Jack Lowry’s ‘Model of metropolis’[44]. In broad terms, this represents the fixing through plans of a model’s exogenous variables – but the idea of ‘constraints’ implies that there are circumstances where we might want to fix what we usually take as endogenous variables.

So we have the machinery for testing and evaluating long-term scenarios – albeit building on fifty year old ideas. It needs a combination of imagination – thinking what the future might look like – and analytical capabilities – ‘big modelling’. It’s all still to play for, but there are some interesting papers waiting to be written!!

4.5. Equations with names: the importance of Lotka and Volterra (and Tolstoy?)

One approach to developing ideas to support interdisciplinarity is to look at those – in the context here, ‘equations with names’ – that over time have established themselves beyond their founders and in a more public (at least in academia) consciousness. Indeed, it can be argued that there is a Darwinian process that allows these to emerge as being both game changers and having wider roles. In this section, I draw on my own experience as an indicator of where we can continue to pick up new ideas from these kinds of equations.

The most famous equations with names – at least one being known almost universally – seem to come from physics. Newton’s Law of Gravity – the gravitational force between two objects is proportional to the product of their masses and inversely proportional to the square of the distance between them; Maxwell’s equations for electromagnetic fields; the Navier-Stokes equations in fluid dynamics; and E = mc², Einstein’s equation which converts mass into energy. The latter is the only equation to appear in the index (under ‘E’) in Ian Stewart’s book ‘Seventeen equations that changed the world’.[45] While the gravitational law has been used to represent situations where distance attenuation is important, the translation is analogous and not exact. An interesting example, pointed out to me by Mark Birkin, is Tolstoy in ‘War and Peace’: “Meanwhile, the very next morning after the battle, the French army moved against the Russians, carried along by its own impetus, now accelerating in inverse proportion to the square of the distance from its goal.” (Penguin edition, 2005.) Tolstoy would have written this in the 1860s!

The physics equations, on the whole, work in physics and not elsewhere. An exception – that is, it does work elsewhere and has served me well in my own work – is Boltzmann’s equation for entropy, S = k log W (to be found on his gravestone in Vienna and on many book covers, including one of mine). The other equations which have served me well – plural because they come in several forms – are the Lotka-Volterra equations, originally developed in ecology. Because of the nature of ecology relative to physics, they do not deliver the physics kind of ‘exactness’ but this may in part be the reason for their utility in translation to other disciplines.

The Boltzmann entropy-maximising method[46] works for any problem (and hence in a variety of fields) where there are large numbers of weakly-interacting elements and where interesting questions can be posed about average properties of the system. Boltzmann does this for the distribution of energy levels of particles in gases at particular temperatures for example. In my own work, I use the method to calculate, for example, journey to work flows in cities. The entropy measure was also introduced by Shannon into information theory and in one sense underpins much of computer science. When he produced his equation to measure ‘information’ he is said to have consulted the famous mathematician von Neumann on what to call the main term. “Call it ‘entropy’”, von Neumann replied (paraphrased): “It is like the entropy in physics and if you do this you will find in any argument, no one will understand it and you will always win!” I would dare to say that von Neumann was wrong in this respect: it can be explained. Cesar Hidalgo in his recent book ‘Why information grows’[47] makes the interesting point about Boltzmann’s work that it crosses and links scales – the atoms in the micro with the thermodynamic properties of the macro; this was unusual at the time and perhaps still is.[48]
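For the journey-to-work example, a compressed sketch of how the method works: maximise the entropy of the array of flows {T_ij} subject to what is known about the system, that is

\[ \max_{\{T_{ij}\}} \; S = -\sum_{ij} T_{ij} \log T_{ij} \quad \text{subject to} \quad \sum_j T_{ij} = O_i, \;\; \sum_i T_{ij} = D_j, \;\; \sum_{ij} T_{ij} c_{ij} = C, \]

which yields the doubly-constrained spatial interaction model T_ij = A_i O_i B_j D_j exp(−β c_ij), with A_i and B_j balancing factors that enforce the origin and destination constraints and β the multiplier attached to the travel-cost constraint.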

The Lotka-Volterra equations are concerned with systems of populations of different kinds – different species in ecology for example. In one sense, their historical roots can be related back to Malthus and his exponential ‘growth of population’ model. In that model, there were no limits to growth, and these were supplied by Verhulst who dampened growth to produce the well-known logistic curve. (Bob May in the 1970s showed that this simple model has remarkable properties and was the route into chaos theory[49].) What Lotka and Volterra did – each working independently, unknown to each other – was to model two or more populations that interacted with each other. The simplest L-V model is the well-known two-species predator-prey model. There is a logistic equation for each species linked by their interactions: the predator species grows when there is an abundance of prey; the prey species declines when it is eaten by the predator. Not surprisingly, there is an oscillating solution. What is more interesting in terms of the translation into other fields is the ‘competition for resources’ form of the L-V model. In this case, two or more species compete for one or more resources and this provides a way of representing interactions between species in an ecosystem. The translation comes through identifying systems of interest in which populations of other kinds compete for resources. There are examples in chemistry where molecules in a mixture compete for energy, in geography where retailers compete for consumers (as in my own work with Britton Harris[50]) and in security with Lewis Fry Richardson’s model of arms races and wars[51]. There are undoubtedly many more possibilities.
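For reference, the simplest predator-prey form, with x the prey and y the predator populations and all coefficients positive, is

\[ \frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y, \]

which produces the oscillations just described. The ‘competition for resources’ form replaces predation with logistic growth for each population, damped by the others’ demands on shared resources – schematically dN_1/dt = r_1 N_1 (1 − (N_1 + a_12 N_2)/K_1), and symmetrically for N_2.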

Lotka, Volterra and Richardson were working in the 1920s through to the 1940s and there are interesting common features of their research. None of them worked primarily – at least in the first instance – in ecology. Lotka was a mathematician and chemist, and later an actuary; Volterra was a mathematician and an Italian Senator. Both came to mathematical biology relatively late. Richardson was a distinguished meteorologist and later a College Principal. It is worth looking at their original papers to see the extraordinary range of examples they pursued in each case (with real data which must have been difficult to accumulate) – particularly bearing in mind that there were no computers. Indeed, Richardson, at the end of one of his papers, thanks “… the Government Grant Committee of the Royal Society for the loan of a calculating machine”! It was also interesting, perhaps not surprising given their mathematical skills, that they explored the mathematical properties of these systems of equations, in various forms, in some depth. Their work at the time was picked up by others – notably V. A. Kostitzin. I picked up a second-hand copy of his 1939 book ‘Mathematical biology’[52] via the internet after searching for Volterra’s work: Volterra wrote a generous preface to the book!

The Lotka-Volterra equations represent one of the keys to a particular kind of interdisciplinarity: a concept that can be applied across many disciplines because of the nature of what is a generic problem – modelling the ‘competition for resources’. In a particular instance of a research challenge, the trick is to be aware that the problem may be generic and that there are elements of a toolkit lurking in another discipline!

4.6. Data science will change the world?

The availability of data through a combination of a multiplicity of ‘sensors’ and computing power – so-called ‘big data’ – has the power to revolutionise both research and its applications. It has been described as the ‘new oil’ in terms of the value it can deliver. Many disciplines are underpinned by mathematics, and experimental science by statistics. Since these are being transformed by data through new ‘disciplines’ such as machine learning together with advances in computer science, new foundations are being laid that are essentially interdisciplinary. All of this combines into the field of artificial intelligence. We can spend some time and space, therefore, in exploring these developments.

I have re-badged myself several times in my research career: mathematician, theoretical physicist, economist (of sorts), geographer, city planner, complexity scientist, and now data scientist. (Is data science a new enabling discipline – cf Chapter 2 – or is it firmly located within ‘statistics’? There is a similar question for AI.) My career path is partly a matter of personal idiosyncrasy, but also a reflection of how new interdisciplinary research challenges emerge. I have had the privilege of being the Chief Executive of The Alan Turing Institute – the national institute for data science and AI. Its strapline is ‘Data science will change the world’.  ‘Data science’ is the new kid on the block. How come?

First, there is an enormous amount of new ‘big’ data; second, this has had a powerful impact on all the sciences; and third, on society, the economy and our way of life. Data science represents these combinations. The data comes from ubiquitous digitisation combined with the ‘open data’ initiatives of government and extensive deployment of sensors and devices such as mobile phones. This generates huge research opportunities. In broad terms, data science has two main branches. First, what can we do with the data? – applications of statistics and machine learning. Second, how can we transform existing science with this data and these methods? Much of the second is rooted in mathematics. To make this work in practice, there is a time-consuming first step: making the data useable, combining different sources in different formats – ‘data wrangling’. The whole field is driven by the power of the computer, and computer science. And understanding the effects of data on society, and the ethical questions it poses, is led by the social sciences.

All of this combines in the idea of ‘artificial intelligence’ – AI. In many applications, AI supports human decision making and the current buzz phrase is ‘augmented intelligence’: the ‘machine’ has not yet passed the ‘Turing test’ of competing with humans in thought.

I can illustrate the research potential of data science through two examples: the first from my own field of urban research; the second from medicine – drawing, no doubt imperfectly, on what I have learned from my Turing colleague Mihaela van der Schaar.

As we have seen, there is a long history of developing mathematical and computer models of cities. Data arrives very slowly for model calibration – the decennial census, for example, is critical. A combination of open government data and real-time flows from, for example, mobile phones and social media networks, has changed this situation: real-time calibration is now possible. This potentially transforms both the science and its application in city planning. Machine learning complements, and potentially integrates with, the models. Data science in this case adds to an existing deep knowledge base.

Medical diagnosis is also underpinned by existing knowledge – physiology, cell and molecular biology for example. It is a skilled business, interpreting symptoms and tests. This can be enhanced through data science techniques – beginning with advances in imaging and visualisation and then the application of machine learning to the variety of evidence available. The clinician can add his or her own judgement: augmented intelligence. Treatment plans follow. At this point, something really new kicks in. ‘Live’ data on patients, including their responses to treatment, becomes available. This data can be combined with personal data to derive clusters of ‘like’ patients, enabling the exploration of the effectiveness of different treatment plans for different types of patients. This opens the way to personalised intelligent medicine: set to have a transformative effect on healthcare.

These kinds of developments of the science, and the associated applications, are possible in almost all sectors of industry. It is the role of the Alan Turing Institute to explore both the fundamental science underpinnings, and the potential applications, of data science across this wide landscape.

We currently work in fields as diverse as digital engineering, defence and security, computer technology and finance as well as cities and health. This range will expand as this very new Institute grows. We will work with and through universities and with commercial, public and third sector partners, to generate and develop the fruits of data science. This is a challenging agenda but a hugely exciting one.

4.7. What is ‘data science’? What is ‘AI’?

When I first took on the role of CEO at The Alan Turing Institute, the strapline beneath the title was ‘The National Institute for Data Science’. A year or so later, this became ‘The National Institute for Data Science and AI’ – at a time when there was a mini debate about whether there should be a separate ‘national institute for AI’. It has always seemed to me that ‘AI’ was included in ‘data science’ – or maybe vice versa. In the early ‘data science’ days, there were plenty of researchers in Turing focused on machine learning for example. However, we acquired the new title – ‘for avoidance of doubt’ one might say – and it now seems worthwhile to unpick the meanings of these terms. However we define them, there will be overlaps but by making the attempt, we can gain some new insights.[53]

AI has a long history, with well-known ‘summers’ and ‘winters’. Data science is newer and is created from the increases in data that have become available (partly generated by the Internet of Things), closely linked with continuing increases in computing power. For example, in my own field of urban modelling, where we need location data and flow data for model calibration, the advent of mobile phones means that there is now a data source that locates most of us at any time – even when phones are switched off. In principle, this means that we could have data that would facilitate real-time model calibration. New data, ‘big data’, is certainly transforming virtually all disciplines, industry and public services.

Not surprisingly, most universities now have data science (or data analytics) centres or institutes – real or virtual. It has certainly been the fashion but may now be overtaken by ‘AI’ in that respect. In Turing, our ‘Data science for science’ theme has now transmogrified into ‘AI for science’ as more all embracing. So there may now be some more renaming!

Let’s start the unpicking. ‘Big data’ has certainly invigorated statistics. And indeed, machine learning is a crucial dimension of data science – particularly its clustering algorithms, with obvious implications for targeted marketing (and electioneering!). Machine learning is sometimes called ‘statistics reinvented’! The best guide to AI and its relationship to data science that I have found is Michael Jordan’s blog piece ‘Artificial intelligence – the revolution hasn’t happened yet’ – googling the title takes you straight there[54]. He notes that historically AI stems from what he calls ‘human-imitative’ AI; whereas now, it mostly refers to the applications of machine learning – ‘engineering’ rather than mimicking human thinking. As this has had huge successes in the business world and beyond, ‘it has come to be called data science’ – closer to my own interpretation of data science, but which, as noted, fashion now relabels as AI. We are a long way from machines that think and reason like humans. But what we have is very powerful. Much of this augments human intelligence, and thus, following Jordan, we can reverse the acronym: ‘IA’ is ‘intelligence augmentation’ – which is exactly where the Turing Institute works on rapid and precise machine-learning-led medical diagnosis – the researchers working hand in hand with clinicians. Jordan also adds another acronym: ‘II’ – ‘intelligent infrastructure’. ‘Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies.’ This is a bigger-scale concept than my notion that an under-developed field of research is the design of (real-time) information systems.

This framework, for me, provides a good articulation of what AI means now – IA and II. However, fashion and common usage will demand that we stick to AI! And it will be a matter of personal choice whether we continue to distinguish data science within this!

4.8. Mix and match: the five pillars of data science and AI

The brief introduction of the previous section indicates that we are in a relatively new interdisciplinary field. It is interesting to continue the exploration by connecting to previous drivers of interdisciplinarity – to see how these persist and ‘add’ to our agenda; and then to examine examples of new interdisciplinary challenges.

There are five pillars of data science and AI and these help us to develop our discussion on interdisciplinarity. Three make up, in combination, the foundational disciplines – mathematics, statistics and computer science; the fourth is the data – ‘big data’ as it now is; and the fifth is a many-stranded pillar – domain knowledge – now combining enabling disciplines with substantive ones. The mathematicians use data to calibrate and test models and theories; the statisticians also calibrate models and seek to infer findings from data; the computer scientists develop the intelligent infrastructure. Above all, the three combine in the development of machine learning – the heart of contemporary AI and its applications. Is this already a new discipline? Not yet, I suspect – it is not yet marked by undergraduate degrees in AI (unlike, say, biochemistry). These three disciplines, as we have noted, can be thought of as enabling disciplines and this helps us to unpick the strands of the fifth pillar: both scientists and engineers are users, as are the applied domains such as medicine, economics and finance, law, transport and so on. As the field develops, the AI and data science knowledge will be internalised in many of these areas – in part meeting the Mike Lynch challenge – incorporating prior knowledge into machine learning[55].

It has been argued in earlier sections that the concept of a system of interest drives interdisciplinarity and this is very much the case here in the domains for which the AI toolkit is now valuable. More recently, complexity science was noted as an important driver, with challenges articulated through Weaver’s notion of ‘systems of organised complexity’. This emphasises both the high dimensionality of systems of interest and the nonlinear dynamics which drives their evolution. There are challenges here for the applications of AI in various domains. Handling ‘big data’ also drives us towards high dimensionality. As noted, I once estimated the number of variables I would like to have to describe a city of a million people at a relatively coarse grain, and the answer came out as 10¹³! This raises new challenges for the topologists within mathematics: how to identify structures within the corresponding data sets – a very sophisticated form of clustering! These kinds of systems can be described through conditional probability distributions, again with large numbers of variables – high-dimensional challenges for Bayesian statisticians. One way to proceed with mathematical models that are high dimensional and hence intractable is to run them as simulations. The outputs of these models can then be treated as ‘data’ and, to my knowledge, there is an as-yet untouched research challenge: to apply unsupervised machine learning algorithms to these outputs to identify structures in a high-dimensional nonlinear space.
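A minimal sketch of that last idea – with everything about the ‘model runs’ assumed purely for illustration – is to treat a batch of simulation outputs as a data matrix and apply an off-the-shelf unsupervised method (k-means here; other clustering or manifold-learning methods could equally be used) to look for structure:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative only: the stand-in 'model outputs' are random vectors drawn
# from a few latent regimes; in practice each row would be one run of a
# high-dimensional urban model under different parameter settings.

rng = np.random.default_rng(2)
n_runs, n_outputs, n_regimes = 300, 50, 3
centres = rng.normal(size=(n_regimes, n_outputs))
labels = rng.integers(n_regimes, size=n_runs)
outputs = centres[labels] + 0.1 * rng.normal(size=(n_runs, n_outputs))

km = KMeans(n_clusters=n_regimes, n_init=10, random_state=0).fit(outputs)
print(np.bincount(km.labels_))          # sizes of the structures ('regimes') found
```

The interesting research question is what the recovered clusters mean when the runs come from a genuinely nonlinear model: do they correspond to qualitatively different urban structures?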

We begin to reveal many research challenges across both foundational and, especially, applied domains. (In fact, a conjecture: the most interesting foundational challenges emerge from these domains.) We can then make another connection – flowing on from section 2.4 – to Brian Arthur’s argument in his book The Nature of Technology.[56] A discovery in one domain can, sometimes following a long period, be transferred into other domains: opportunities we should look out for.

Can we optimise how we do research in data science and AI? We have starting points in the ideas of systems analysis and complexity science: define a system of interest and recognise the challenges of complexity. Seek the data to contribute to scientific and applied challenges – not the other way round – and that will lead to new opportunities. But perhaps above all, seek to build teams which combine the skills of mathematics, statistics and computer science, integrated through both systems and methods foci. This is non-trivial, not least due to the shortage of these skills. In the projects in the Turing Institute funded by the UKRI Special Priorities Fund – AI for Science and Government (ASG) and Living with Machines (LWM) – we are trying to do just this. Early days and yet to be tested. Watch this space![57]

4.9. A data-driven future?

We can now build on the argument of the two previous sections and look ahead in more detail. As we have noted, a data-driven transformation is in progress and the infrastructure is being put in place. Infrastructure serves a purpose: transport systems provide mobility and accessibility; utilities provide energy and water. The purposes cannot be served without the infrastructure. Much of how we live and how the economy functions is now driven by data – the flow of data is a new utility – and so the provision of the corresponding infrastructure is critical. This provides the means of collecting data, storing it, moving it around to the points of use. This needs smart administration, sensors, computer storage, fibre and wireless communications networks and computer power, analytics and display. The key point, as with other infrastructure, is to work backwards along this sequence: from the uses of the data to the articulation of the infrastructure.

We live in the ‘information age’ – the fourth industrial revolution. Information is coded as data, which once upon a time would mean through the printed word in books and newspapers and financial accounts handwritten into ledgers. The key to the new ‘age’ is the digitisation of all kinds of data, computer power, and the world-wide web and the internet. This has a long history. It was Claude Shannon, in his famous 1948 paper on the mathematics of information[58], who demonstrated the power of the ‘binary bit’ and provided the basis for the transmission of data. Computing power and the internet have already transformed how businesses and services work, and how we live: administration, manufacturing, investment and planning, sales and delivery have all changed. Note the increase in the activities of white vans and the ubiquity of the ‘deliveroos’.

But there is more to come. The newer fields of ‘data science’ and ‘artificial intelligence’ will be transformative in new ways. What has been achieved so far is largely in specialised silos. The challenge is to explore the futures of the big systems – in business and the professions, and in science, engineering, health, education and security; and how these combine in the places where most of us live – in cities or city regions. If we take the growth of computing power and the continued expansion of communications bandwidth as given, there are two parts of data science that underpin future developments in applied domains: data wrangling and data analytics.

Data systems are messy. Take one element of one example: the data on a patient which is the input for clinical judgement – for diagnosis. This will range from historic handwritten ‘doctor’s notes’ through all the usual tests to imaging through MRI scans. Data wrangling is the task of digitising all of this (where necessary) and then making it available in common formats that can be inputs to the analytics. In this medical example, this organised data system can be input to machine learning algorithms, and indeed combined with social data on the patient to provide augmented intelligence for the clinician. Indeed, there are already examples where the algorithm may provide a quicker or better diagnosis. Messy data makes data wrangling a messy and time-consuming, but necessary, process. It is estimated that this can take 80% of the time in developing applications. A key research challenge, therefore, is to find a way of automating this.
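A deliberately trivial sketch of what wrangling involves in practice – all record contents, column names and units below are hypothetical – is to pull patient records from differently formatted sources, harmonise identifiers and units, and merge them into one analysable table:

```python
import pandas as pd

# Hypothetical example: two sources describing the same patients in
# different formats are standardised and merged into one table that the
# analytics (e.g. a machine learning model) can consume.

notes = pd.DataFrame({"PatientID": ["p01", "p02"],
                      "weight_lb": [154, 176]})            # legacy source, imperial units
tests = pd.DataFrame({"patient_id": ["P01", "P02"],
                      "glucose_mmol_l": [5.4, 7.9]})       # lab system, SI units

notes = notes.rename(columns={"PatientID": "patient_id"})
notes["patient_id"] = notes["patient_id"].str.upper()       # harmonise identifiers
notes["weight_kg"] = notes["weight_lb"] * 0.4536            # harmonise units
wrangled = notes[["patient_id", "weight_kg"]].merge(tests, on="patient_id")
print(wrangled)
```

Multiply this by hundreds of sources, formats and coding conventions and the 80% estimate quoted above becomes easy to believe; hence the interest in automating the step.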

‘Data analytics’ can be thought of as a tool kit. The appropriate tools can be deployed on the data to fulfil the purpose of the application. In some cases, this may be a matter of traditional statistical analysis. In other cases, there may be much prior knowledge which has been encoded in mathematical models on computers and which can be used to explore alternative plans and futures on the computer rather than through what might turn out to be very poor investment. This is commonplace in transport planning and retail store location for example. But these skills have been combined into the methods of machine learning (ML). ML algorithms can be applied to data sets to ‘learn’ routine business processes – for example to be able to recommend whether to accept a person for insurance (and to estimate the premium) without human intervention. Retailers can classify us in ways that allow them to target their marketing through ‘recommender systems’. ML-based computers become learning machines that can replace, even enhance, human judgement in a range of situations. Data analytics provides the foundations of artificial intelligence. There are dangers of course – serious ones – including bias[59]. In most cases involving automated or semi-automated decision making that impacts directly on people, there are either checks in the software or, perhaps more commonly, some kind of appeals procedure which takes the ‘decision’ back to a case manager. A related and important research area is the need to make algorithmic outputs interpretable or explainable – which is particularly difficult in the case of ‘deep learning’ examples.

We can now return to the central question: what are the uses of data within a range of application domains, and what does this imply for the future development of data infrastructure? The health service provides a good example of what the future could, or will, look like. We can build a scenario. A patient presents, data is collected – tests, imaging; the clinician makes a diagnosis and specifies a treatment plan. Now suppose all this is fully digitised and recorded and the impacts of treatment plans are tracked over time and evaluated. Add machine learning at the diagnostic stage, combining medical and social data. Over a large sample, patients can be clustered by social ‘type’, disease, stage of disease, and in each case, the effectiveness of the evaluated treatment plan can be recorded. With this system in place, a new patient presents, the clinician has augmented intelligence and can select the best treatment plan from the past experiences of thousands, indeed millions, of previous patients.[60] Hence: personalised and super-effective medicine becomes achievable. This is feasible but the inhibitor is the availability of data infrastructure. Past and often-failed attempts to digitise medical records demonstrate the scale of the challenge.
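A compressed sketch of the clustering step – with entirely synthetic patients, treatments and outcomes – might look like the following; the point is only to show the ‘cluster past patients, record what worked, look up the best plan for a new patient’ logic:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Invented records: (age, deprivation index, disease stage) for past
# patients, the treatment plan each received, and its evaluated outcome.
features = rng.normal(size=(300, 3))
treatment = rng.integers(0, 3, 300)     # three candidate treatment plans
outcome = rng.normal(size=300)          # evaluated effectiveness score

# Cluster past patients into 'types'.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=1).fit(features)

# For each cluster, record the treatment with the best average outcome.
best_plan = {}
for c in range(5):
    in_cluster = kmeans.labels_ == c
    means = []
    for t in range(3):
        mask = in_cluster & (treatment == t)
        means.append(outcome[mask].mean() if mask.any() else -np.inf)
    best_plan[c] = int(np.argmax(means))

# A new patient presents: assign to a cluster, look up the best plan.
new_patient = rng.normal(size=(1, 3))
cluster = int(kmeans.predict(new_patient)[0])
print("recommended plan for new patient:", best_plan[cluster])
```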

The illustration is paradigmatic: the structure can be applied to any system that is client or consumer driven: tracking students through the education system; or personalised finance; or offenders through the criminal justice system for example. Businesses will create their own data infrastructure. Retailers, as noted, are already well down this path. Advanced manufacturers are linking data-driven demand analysis with robotic production. However, all of these gains will be in silos. The big systems are all interdependent. Consider the agenda of the National Infrastructure Commission: transport, utilities, telecoms and broadband – even housing.[61] There are huge opportunities for ‘smart’ efficiencies but they demand linking of the major data infrastructure systems. The Commission has plans for a ‘digital twin’ of this huge linked system, and this would provide a map of the substantive underlying data infrastructure.

The core of the interdependence lies in the functioning of cities which provide an example of how we have to integrate to capture the relevant complexity. Each city will aim to be ‘smart’ but data-driven analytics can also contribute to the most challenging economic question: how to improve productivity outside the Greater South East? To tackle this question, we must turn to the key processes of government: planning and investment, both public and private. What are the best investment strategies to manage projected, and substantial, population growth? How can urban economies develop both in relation to productivity and the ‘hollowing out’ challenge? How can services be delivered with decreasing budgets? Investment in utilities and transport? What are the implications for urban form: higher densities, redefinition of green belts? How can private and public investment be integrated to best effect? And an overriding and neglected question: how can all of this be done in a sustainable carbon-reducing way? Governance and planning structures are needed that respond to this agenda.[62] The data infrastructure systems are key.

We also need better data analytics. This sits ‘on top of’ the data infrastructure so that future scenarios can be explored and paths towards good ‘plans’ charted in terms of investment and the allocation of scarce resources. We have the capability to do this – but this is not being deployed systematically – indeed barely at all. What are the barriers and how can we overcome them? Most cities don’t have good data technologies, or good analytics – and the challenge is to provide an effective capability to local authorities across the country – 164 in England? The current ‘supply’ is very fragmented and the demand is very weak – although the biggest players such as London and Manchester have at least the beginnings of something good – as demonstrated in the London Infrastructure 2050 study. The big players are well equipped – Atkins, Arup, IBM, Siemens – a substantial list, but they are playing in a weak market in the UK – perhaps doing better as exporters? Is there a market failure here? Should the Government be taking a stronger lead? The National Infrastructure Commission will be very important. The Alan Turing Institute, as the national institute for data science, can play an important role: it has an urban analytics theme, and indeed all its themes contribute to the development of the ‘national data infrastructure’.

A second integrator is provided by the Government’s industrial strategy. The challenges of providing the data science base for industrial development are explicitly recognised particularly through research needs but also through skills and infrastructure – Pillars 1-3 of the strategy. The need for adequate human capital and the corresponding training needs are urgent. Skills are needed at all levels from the basics of coding data to post-doc research. In the Edinburgh region alone, it has been estimated that 10,000 additional data scientists per annum will be needed over the next ten years. Master’s courses in data science are being developed in universities all over the country. Since the field is developing so rapidly, these courses must be integrated with research and industry experience, along with the task of ensuring international competitiveness.

A third cross-cutting theme is data ethics. Questions range from ensuring privacy and anonymity, where appropriate and necessary, to being able to demonstrate the transparency of machine learning algorithms. In terms of privacy, there is also a prior issue: ensuring the security of the cloud against hacking – and indeed this extends into the wider and rapidly developing field of cyber security. The effectiveness of artificial intelligence and data science in many fields depends on good data being loaded on to the infrastructure. Much of this is personal data which can in principle be anonymised. This is a non-trivial exercise. A file of such data can often be cross-referenced to publicly available personal data in such a way that identities are revealed. This should be a solvable problem, but it needs to be convincingly recognised as ‘solved’. The transparency issue is mathematically challenging. A company or a government department might use a ‘deep learning’ algorithm, involving many layers of a neural network, to generate a decision, say on an insurance policy or a benefits recommendation. At present, it is often not possible to give an explicit account of how the decision was reached and therefore it is difficult to respond to an appeal against the decision. These are major issues and the Nuffield Foundation, the Royal Society, the Royal Statistical Society and The Alan Turing Institute formed a partnership to ensure that progress is made, which has led to the creation of the Ada Lovelace Institute[63].
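The cross-referencing risk is easy to demonstrate. In this invented example, a file with names removed is joined to a public register on three innocuous-looking fields – postcode, birth year and sex – and the identities come straight back:

```python
import pandas as pd

# An 'anonymised' file: names removed but quasi-identifiers remain.
health = pd.DataFrame({
    "postcode":   ["LS2 9JT", "OX2 6UD", "SW1A 1AA"],
    "birth_year": [1956, 1987, 1956],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A publicly available file (an invented electoral-roll-style register).
public = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones", "C. Patel"],
    "postcode":   ["LS2 9JT", "OX2 6UD", "SW1A 1AA"],
    "birth_year": [1956, 1987, 1956],
    "sex":        ["F", "M", "F"],
})

# Cross-referencing on the quasi-identifiers re-attaches names to diagnoses.
reidentified = health.merge(public, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```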

In summary, the future will be data-driven. Lives and economies will be transformed by data, data science and artificial intelligence. There are challenges in data wrangling as a starting point and in artificial intelligence as an end point. The future is already evident in sophisticated applications in sectors like retailing. There are the beginnings in a public service sector like health but in that case the ultimate benefits will come from a system-wide application. The industrial sectors are mixed in performance: robotics-based advanced manufacturers are the leaders; there are beginnings in financial services, others have yet to start. We have seen that the challenges are interdependent and this is recognised in the thinking about cities – well represented in the discussion above – and in the industrial strategy. There is a strong argument that the ethics agenda should keep pace with the development of our data-driven futures. None of this will happen without effective data infrastructure.

4.10. Big data and high-speed analytics

My first experience of big data and high-speed analytics was at CERN and the Rutherford Lab over 50 years ago. I was, in the Rutherford Lab, part of a large distributed team working on a CERN bubble chamber experiment. There was a proton-proton collision every second or so which, for the charged particles, produced curved tracks in the chamber which were photographed from three different angles. The data from these tracks was recorded in something called the Hough-Powell device (after its inventors) in real time. This data was then turned into geometry; this geometry was then passed to my program. I was at the end of the chain and my job was to take the geometry, work out for this collision which of a number of possible events it actually was – the so-called kinematics analysis. This was done by chi-squared testing which seemed remarkably effective. The statistics of many events could then be computed, hopefully leading to the discovery of new (and anticipated) particles – in our case the Ω. In principle, the whole process for each event, through to identification, could be done in real time – though in practice, my part was done off-line. It was in the early days of big computers, in our case, the IBM 7094. I suspect now it will be all done in real time. Interestingly, in a diary I kept at the time, I recorded my then immediate boss, John Burren, as remarking that ‘we could do this for the economy you know’!

So if we could do it then for quite a complicated problem, why don’t we do it now? Even well-known and well-developed models – transport and retail for example – typically take months to calibrate, usually from a data set that refers to a single point in time. We are progressing to a position at which, for these models, we could have the data base continually updated from data flowing from sensors. (There is an intermediate processing point of course: to convert the sensor data to what is needed for model calibration.) This should be a feasible research challenge. What would have to be done? I guess the first step would be to establish data protocols so that by the time the real data reached the model – the analytics platform – it was in some standard form. The concept of a platform is critical here. This would enable the user to select the analytical toolkit needed for a particular application. This could incorporate a whole spectrum from maps and other visualisation to the most sophisticated models – static and dynamic.
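A small sketch of what such a protocol might involve – with invented sensor records and zone codes – is simply the aggregation of raw, timestamped observations into the standard form a spatial interaction model expects for calibration, for example hourly origin-destination counts:

```python
import pandas as pd

# Invented raw sensor records: each row is one detected trip,
# timestamped, with zone codes for origin and destination.
raw = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 08:02", "2024-05-01 08:05",
                                 "2024-05-01 08:07", "2024-05-01 08:55"]),
    "origin": ["zoneA", "zoneA", "zoneB", "zoneA"],
    "dest":   ["zoneB", "zoneC", "zoneC", "zoneB"],
})

# A simple 'protocol': aggregate to hourly origin-destination counts,
# the standard form a spatial interaction model expects for calibration.
raw["hour"] = raw["timestamp"].dt.floor("h")
od_counts = (raw.groupby(["hour", "origin", "dest"])
                .size()
                .rename("trips")
                .reset_index())
print(od_counts)
```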

There are two possible guiding principles for the development of this kind of system: what is needed for the advance of the science, and what is needed for urban planning and policy development. In either case, we would start from an analysis of ‘need’ and thus evaluate what is available from the big data shopping list for a particular set of purposes – probably quite a small subset. There is a lesson in this alone: to think what we need data for rather than taking the items on the shopping list and asking what we can use them for.

Where do we start? The data requirements of various analytics procedures are pretty well known. There will be additions – for example incorporating new kinds of interaction from the Internet-of-Things world. This will be further developed in the next section on block chains.

So why don’t we do all this now? Essentially because the starting point – the first demo – is a big team job, and no funding council has been able to tackle something on this scale. There lies a major challenge. As I once titled a newspaper article: ‘A CERN for the social sciences’? See Chapters 8 and 9 below.

4.11. Block chains

Game-changing advances in any field may well depend on technological change and we can explore this in the context of urban analytics. One such possibility is the introduction of block chain software. A block is a set of accounts at a node. This is part of a network of many nodes, many of which – in some cases all? – are connected. Transactions are recorded in the appropriate nodal accounts with varying degrees of openness. It is this openness that guarantees veracity and verifiability. This technology is considered to be a game changer in the financial world – with potential job losses because a block chain system excludes the ‘middlemen’ who normally record the transactions. Illustrations of the concept on the internet almost all rely on the bitcoin technology as the key example.
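A stripped-down sketch of the mechanics – not bitcoin, and not any particular platform, just the hash-linking idea – shows why tampering is detectable: changing an earlier block changes its hash and breaks every later link.

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """A 'block' here is just a set of transactions plus a link, via a
    hash, to the previous block - a minimal illustration only."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,       # e.g. flows between nodes
        "previous_hash": previous_hash,
    }
    block_bytes = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(block_bytes).hexdigest()
    return block

# A toy chain of transaction blocks between nodes; any edit to the
# genesis block would change its hash and invalidate the link below.
genesis = make_block([{"from": "nodeA", "to": "nodeB", "amount": 10}], "0" * 64)
block2 = make_block([{"from": "nodeB", "to": "nodeC", "amount": 4}], genesis["hash"])
print(block2["previous_hash"] == genesis["hash"])
```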

The ideas of ‘accounts at nodes’ and ‘transactions between nodes’ resonate very strongly with the core elements of urban analytics and models and associated data flows – the location of activities and spatial interaction. In the bitcoin case, the nodes are presumably account holders but it is no stretch to imagine the nodes being associated with spatial addresses. There must also be a connection to the ‘big data’ agenda and the ‘internet of things’, with edge computing nodes delivering the data in real time. Much of the newly available real time data is transactional and one can imagine it being transmitted to blocks and used to update data bases on a continuous basis. This would have a very obvious role in applied retail modelling for example.

Chapter 5. Problem solving

5.1. Meeting challenges

Real challenges, as we have seen, pose some very difficult problems which can be classified as ‘wicked’. There are many definitions of wicked problems – first characterised by Rittel and Webber in the 1970s[64]. Essentially, however, they are problems that are well known and difficult, and that governments of all colours have attempted to solve. My own list, relating to cities in the UK, would be something like: social – social disparities, welfare, unemployment, pensions and housing; services – health (elements of post-code lottery and poor performance), education (a long tail of poor performance – for individuals and schools), prisons (and high levels of recidivism); economics (productivity outside the Greater South East), ‘poor’ towns – seaside towns for example; global, with local impacts, for example, sustainability (responding to the globalisation of the economy and climate change), food security, energy security, and indeed security more broadly.

There are lots of ways of exploring this and much more detail could be added. See for example the book edited by Charles Clarke titled The too difficult box. [Ref, and add examples from the book[65]]

Even at this broad level of presentation, the issues all connect and this is one of the arguments, continually put, for joined-up government. It is almost certainly the case, for example, that the social challenges have to be tackled through the education system. Stating this, however, is insufficient. Children from deprived families arrive at school relatively ill-prepared – in terms of vocabulary for example – and so start, it has been estimated, two years ‘behind’; in a conventional system, there is a good chance that they never catch up. There are extremes of this as noted: children who have been in care for example rarely progress to higher education; and it then turns out that quite a high percentage of the prison population have at some stage in their lives been in care. Something fundamental is wrong there.

We can conjecture that there is another chain of causal links associated with housing issues. Consider not the overall numbers issue – that as a country we build 150,000 houses a year when the ‘need’ is estimated at 300,000 or more – but the fact that there are areas of very poor housing, usually associated with deprived families. I would argue that this is not a housing problem but an income problem – not enough resource for the families to maintain the housing. It is an income problem because it is an employment problem. It is an employment problem because it is a skills problem. It is a skills problem because it is an education problem. Hence the root, as implied earlier, lies in education. So a first step in seeking to tackle the issues on the list is to identify the causal chain and to begin with the roots.

How, then, can we take a ‘sledgehammer’ to these challenges? If the problem can be articulated and analysed, then it should be possible to see ‘what can be done about it’. The investigation of feasibility then kicks in of course: solutions are usually expensive. However, we don’t usually manage to do the cost-benefit analysis at a broad scale. If we could be more effective in providing education for children in care, and for the rehabilitation of prisoners, expensive schemes could be paid for by savings in the welfare and prison budgets: invest to save.

Let’s start with education. There are some successful schools in deprived areas – so examples are available. (This may not pick up the childcare issues but we return to that later.) There are many studies that say that the critical factor in education is the quality of the teachers – so enhancing the status of the teaching profession and building on schemes such as Teach First will be very important. Much is being done, but not enough. Above all, there must be a way of not accepting ‘failure’ in any individual cases. With contemporary technology, tracking is surely feasible, though the follow-up might involve lots of one-to-one work and that is expensive. Finally, there is a legacy issue: those from earlier cohorts who have been failed by the system will be part of the current welfare and unemployment challenge, and so again some kind of tracking, some joining up of social services, employment services and education, should provide strong incentives to engage in life-long learning programmes – serious catch up. The tracking and joining up part of this programme should also deal with children in care as a special case, and a component of the legacy programme should come to grips with the prison education and rehabilitation agenda. There is then an important add-on: it may be necessary for the state to provide employment in some cases. Consider people released from prison as one case. They are potentially unattractive to employers (though some are creative in this respect) and so employment through, let’s say, a Remploy type of scheme[66] – maybe as a licence condition of (early?) release – becomes a partial solution. This might help to take the UK back down the league table of prison population per capita. This could all in principle be done and paid for out of savings – though there may be an element of ‘no pain, no gain’ at the start. There are examples where it is being done: let’s see how they could be scaled up.

Similar analyses could be brought to bear on other issues. Housing is at the moment driven by builders’ and developers’ business models; and as with teacher supply, there is a capacity issue. As we have noted, in part it needs to be driven by education, employment and welfare reforms as a contribution to affordability challenges. And it needs to be driven by planners who can switch from a development control agenda to a place-making one.

The rest of the list, for now, is, very unfairly, left as an exercise for the reader!! In all cases, radical thinking is required, but realistic solutions are available!! We can offer a Michael Barber check list for tackling problems – from his book How to run a government so that citizens benefit and taxpayers don’t go crazy[67] – very delivery-focused, as is his wont. For a problem: What are you trying to do? How are you going to do it? How do you know you will be on track? If not on track, what will you do?

All good advice!

5.2. Research into policy: lowering the bar

Some time ago, I attended a British Academy workshop on ‘Urban Futures’ – partly focused on research priorities, and partly on research that would be useful for policy makers. The group consisted mainly of academics who were keen to discuss the most difficult research challenges. I found myself sitting next to Richard Sennett[68] – a pleasure and a privilege in itself, someone I’d read and knew by repute, but whom I had never met. When the discussion turned to research contributions to policy, Richard made a remark which resonated strongly with me and made the day very much worthwhile. He said: “If you want to have an impact on policy, you have to lower the bar!” We discussed this briefly at the end of the meeting, and I hope he won’t mind if I try to unpick it a little. It doesn’t tell the whole story of the challenge of engaging the academic community in policy, but it does offer some insights.

The most advanced research is likely to be incomplete and to have many associated uncertainties when translated into practice. This can offer insights, but the uncertainties are often uncomfortable for policy makers. If we lower the bar to something like ‘best practice’, this may involve writing and presentations which do not offer the highest levels of esteem in the academic community. What is on offer to policy makers has to be intelligible, convincing and useful. Being convincing means that what we are describing should be evidence-based. And, of course, when these criteria are met, there should be another kind of esteem associated with the ‘research for policy’ agenda. I guess this is what ‘impact’ is supposed to be about (though I think that is half of the story, since impact that transforms a discipline may be more important in the long run).

‘Research for policy’ is, of course, ‘applied research’ which also brings up the esteem argument: if ‘applied’, then less ‘esteemful’ if I can make up a word. In my own experience, engagement with real challenges – whether commercial or public – adds seriously to basic research in two ways: first, it throws up new problems; and secondly, it provides access to data – for testing and further model development – that simply wouldn’t be available otherwise. Some of the new problems may be more challenging, and in a scientific sense more important, than the old ones.

So, back to the old problem: what can we do to enhance academic participation in policy development? First a warning: recall the policy-design-analysis argument introduced in Chapter 1. Policy is about what we are trying to achieve, design is about inventing solutions; and analysis is about exploring the consequences of, and evaluating, alternative policies, solutions and plans – the point being that analysis alone, the stuff of academic life, will not of itself solve problems. Engagement, therefore, ideally means engagement across all three areas, not just analysis.

How can we then make ourselves more effective by lowering the bar? First, ensure that our ‘best practice’ is intelligible, convincing and useful; evidence-based. This means being confident about what we know and can offer. But then we also ought to be open about what we don’t know. In some cases we may be able to say that we can tackle, perhaps reasonably quickly, some of the important ‘not known’ questions through research; and that may need resource. Let me illustrate this with retail modelling. We can be pretty confident about estimating revenues (or people) attracted to facilities when something changes – a new store, a new hospital or whatever – even the impact of internet retailing or ‘virtual’ medical consultations. And then there is a category, in this case, of what we ‘half know’. We have an understanding of retail structural dynamics to a point where we can estimate the minimum size that a new development has to be for it to succeed. But we can’t yet do this with confidence. So a talk on retail dynamics to commercial directors may be ‘above the bar’.

I suppose another way of putting this argument is that for policy engagement purposes, we should know where we should set the height of the bar: confidence below, uncertainty (possibly with some insights), above. There is a whole set of essays to be written on this for different possible application areas.

5.3. An example: the future of cities

To illustrate this challenge: in recent years, I chaired the Lead Expert group of the Government Office for Science Foresight Project on The Future of Cities. It has ‘reported’, not as conventionally with one large report and many recommendations, but with four reports and a mass of supporting papers. Googling ‘Foresight Future of Cities’ leads very quickly to the web site and all the supporting material.[69] During the project, we worked with fourteen Government Departments – ‘cities’ as a topic crosses government – and we visited over 20 cities in the UK and have continued to work with a number of them.

A key, if obvious, point was established: that the problems and challenges are all interdependent and this puts substantial responsibility on researchers, analysts and policy makers to handle this and not to work in silos. If there was one key recommendation, that was it. Beyond that, it was recognised that there were no easy solutions, but what was possible was to formalise the process of generating and exploring alternative scenarios – the subject of one of the reports. This is the basis for inventing possible plans – the ‘design’ part of ‘PDA’.

Chapter 6. Doing interdisciplinary research

6.1. Introduction

I have been privileged to have a background and a series of partly serendipitous career challenges which have taken me on an interdisciplinary path before the idea was fashionable. I hope it is helpful to chart this progression, not as a model, but as an account of possibilities (Section 6.2). I then report on a variety of experiences and choices, and the ideas that start to put a rationale round interdisciplinary thinking.

I was fortunate in my early career to have the opportunity to spend time in North America, meeting the founding fathers – they were all men! – of what became my research field (Section 6.3). As a complement to university-based research, I was a founding director of a university spin-out company, and we had to be very business-like – something about adaptation (6.4). I also found myself, later, venturing into other disciplines which gives a different perspective on interdisciplinarity – being invited ‘in’ to help develop a new research base (6.5). There are two sections on ideas, particularly important at a time when interdisciplinarity gains a foothold: the dangers of ‘following fashion’ (6.6) and the opportunities of ‘mining the past’ (6.7).

6.2. A research autobiography

I was new to Geography when I went to Leeds as a Professor in 1970. There was even a newspaper headline in one of the trade papers with words to the effect that “Leeds appoints Geography Professor with no qualifications in Geography!” So how did this come about? As I reflect, it makes me realise the extent to which my career – and this will be by no means unique – has been shaped by serendipity but has become an illustration of the emergence of interdisciplinarity.

I graduated in Mathematics and I wanted to work as a mathematician. I had a summer job as an undergraduate in the (then new) Rutherford Lab at Harwell and this led to a full-time post when I left Cambridge. I was, in civil service terms, a ‘Scientific Officer’ which I was very pleased about because I wanted to be a ‘scientist’. I even put that as my profession on my new passport. It was interesting. I had to write a very large computer programme for the analysis of bubble chamber events in experiments at CERN (which also gave me the opportunity to spend some time in Geneva). With later hindsight, it was a very good initial training in what was then front-line computer science. I also realise, with several decades of hindsight, that this was an early experience of what would now be called ‘data science’. Working in a team, I was given enormous responsibilities – at a level I can’t imagine being thought appropriate for a 22 year old now. This had the advantage of teaching me how to produce things on time – difficult though the work was. Secondly, it was the early days of large main-frame computers and I learned a lot about their enabling significance. I made a decision that became a characteristic of my later career – though heaven knows why I was allowed to do this: I decided to write a general programme that would tackle any event thrown up by the synchrotron. The alternative, much less risky, was to write a suite of much smaller programmes each focused on particular topologies. With hindsight, this was probably unwise though I got away with it. The moral: go for the general if you can. All this was very much ‘blue skies’ research and ‘impact’ was never in my mind.

Within a couple of years, I began to tire of the highly competitive nature of elementary particle physics and I also wanted to work in a field where I could be more socially useful but still be a mathematician. So I started applying for jobs in the social sciences in universities: I still wanted to be a maths-based researcher. All the following steps in my career were serendipitous – pieces of good luck.

It didn’t start well. I must have applied for 30 or 40 jobs and had no positive response at all. To do something different, sometime in 1962, I decided to join the Labour Party. I lived in Summertown in North Oxford, a prosperous part of the city, and there were very few members there. Within months, I had taken on the role of ward secretary. We selected our candidates for the May 1963 local elections but around February, they left Oxford and it then turned out that the rule book said that the Chairman and Secretary of the Ward would be the candidates, and so in May, I found myself the Labour candidate for Summertown. I duly came bottom of the poll. But I enjoyed it and in the next year, I managed to get myself selected for East Ward – which had not had a Labour Councillor since 1945 but seemed just winnable in the tide that was then running.[70] I was elected by a majority of 4 after four recounts. That led me into another kind of experience – three years on Oxford City Council – different from conventional research but one which adds a new dimension to the idea of interdisciplinarity. It was also the beginning of my interest in cities as a prospective subject for research.

And then a second piece of luck. I was introduced by an old school friend to a small group of economists in the Institute of Economics and Statistics in Oxford who had a research grant from the then Ministry of Transport in cost-benefit analysis. In those days – it seems strange now – social science, even economics, was largely non-quantitative and they had a very quantitative problem – needing a computer model of transport flows in cities. We did a deal: that I would do all their maths and computing and they would teach me economics. So I changed fields by a kind of apprenticeship. It was a terrific time. I toured the United States – where all the urban modellers were – with Christopher Foster and Michael Beesley[71] and we met people like Britton Harris (the Penn-Jersey Study) and I. S. ‘Jack’ Lowry (of ‘Model of Metropolis’ fame). These early days are charted in the next section. I set about trying to build the model. The huge piece of luck was in recognising that what the American engineers were doing in developing models of flows as ‘gravity models’ could be restated in a format that was based on Boltzmann and statistical mechanics rather than Newton and gravity; this generalised the methodology. The serendipity in this case was that I recognised some terms in the engineers’ equations from my statistical mechanics lectures as a student. This led to the so-called ‘entropy-maximising models’. I was suddenly invited to give lots of lectures and seminars and people forgot that I had this rather odd academic background.
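For readers who want to see the shape of the result, the doubly constrained spatial interaction model that emerges from the entropy-maximising argument is usually written in the following standard form, where $T_{ij}$ is the flow from zone $i$ to zone $j$, $O_i$ and $D_j$ are the totals leaving $i$ and arriving at $j$, $c_{ij}$ is the travel cost and $\beta$ a parameter to be calibrated:

```latex
\begin{align}
T_{ij} &= A_i B_j O_i D_j \, e^{-\beta c_{ij}}, \\
A_i &= \Big[\sum_j B_j D_j \, e^{-\beta c_{ij}}\Big]^{-1}, \qquad
B_j  = \Big[\sum_i A_i O_i \, e^{-\beta c_{ij}}\Big]^{-1},
\end{align}
```

with the balancing factors $A_i$ and $B_j$ ensuring that the row and column totals are reproduced.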

It was a time of rapid job progression. I moved with Christopher Foster to the Ministry of Transport and set up something called the Mathematical Advisory Unit, which grew rapidly, with a model-building brief. (I had been given the title of Mathematical Adviser: it should have been ‘Economic Adviser’ but the civil service economists refused to accept me as such because I wasn’t a proper economist!)[72].  This was 1966-68 and then serendipity struck again. I gave a talk on transport models in the Civil Engineering Department in University College London and in the audience was Professor Henry Chilver. I left the seminar and started to walk down Gower Street, when he caught up with me and told me that he had just been appointed as Director of a new research centre – the Centre for Environmental Studies – and “would I like to be the Assistant Director?”. My talk had, in effect, been a job interview – not the kind of thing that HR departments would allow now! And so I moved to CES and built a new team of modellers and worked on extending what I had learned about transport models to the bigger task of building a comprehensive urban model – something I have worked on ever since.

This was from 1968. By the end of the 60s, quantitative social science was all the rage. Many jobs were created as universities sought to enter the field and I had three serious approaches: one in Geography at Leeds, one in Economics and one in Town Planning. I decided, wisely as it turned out, that Geography was a broad church – and in a real sense, it is internally interdisciplinary – and had a record in absorbing ‘outsiders’. I moved to Leeds in October 1970 as Professor of Urban and Regional Geography. And so I became a geographer! Again, the experience was terrific. I enjoyed teaching. It led to long-term friendships and collaborations with a generation that is still in Leeds or at least in academia: Martin Clarke, Graham Clarke, John Stillwell, Phil Rees, Adrian MacDonald, Christine Leigh, Martyn Senior, Huw Williams and many others – a long list. Some friendships have been maintained over the years with students I met through tutorial groups. We had large research grants and could build modelling teams. Geography, in the wider sense, did prove very welcoming and it was all – or at least mostly! – very congenial. Sometime in the early 1970s, I found myself as head of department and I also started taking an interest in some university issues. But that decade was mainly about research and was very productive.

By the end of the decade, there had been the oil crisis, cuts were in the air – déjà vu! – and research funding became harder to get. The next big step had its origins in a race meeting on Boxing Day – was it 1983? – at a very cold Wetherby. I was with Martin Clarke in one of the bars and we were watching – on the bar’s TV – Borough Hill Lad, trained near Leeds by Michael Dickinson, win the King George at Kempton. Thoughts turned to our lack of research funding. It was at that moment, I think, that we thought we would investigate the possibility of commercial applications of our models. We first tried to ‘sell’ our ideas to various management consultants. We thought they could do the marketing for us. But no luck. So we had to go it alone. We constituted a two-person very part-time workforce. Our first job was finding the average length of a garden path for the Post Office! Our second was predicting the usage of a projected dry ski slope. We did the programming, we collected the data. We wrote the reports. But then it suddenly was better. We had substantial contracts with W H Smith and Toyota and we could start to employ people. More of this story is told in the next section – and in more detail in Martin Clarke’s book[73]. What began very modestly became GMAP Ltd, with Martin as the Managing Director and driving its growth. At its peak, GMAP was employing 120 people and had a range of blue-chip clients. That was a kind of real geography that I was proud to be associated with. In research terms, it provided access to data that would not normally have been available to academics. It was very much ‘research on’ being carried into ‘research for’.

Simultaneously in the 80s, I began to be involved in university management and I became ‘Chairman of the Board of Social and Economic Studies and Law’ in the university – what in modern parlance would be a Dean. In 1989, I was invited to become Pro-Vice-Chancellor – at a time when there was only one. I left the Geography Department and, as it turned out, never to return. The then Vice-Chancellor became Chairman of the CVCP and had to spend a lot of time in London and so the PVC job was bigger than usual. In 1991, I found myself appointed as Vice-Chancellor and embarked on that role on 1 October in some trepidation. I was VC until 2004. It was challenging, exciting and demanding. It was, in Dickens’ phrase, ‘the best of times and the worst of times’: tremendously privileged, but also with a recurring list of very difficult, sometimes unpleasant, problems. I was asked by research friends why I had taken on such a job instead of continuing in research. I responded at least half seriously that this was a serious social science research challenge! But the focus here is research: I somehow managed to keep my academic work going in snatches of time but the publication rate certainly fell.

I was Vice-Chancellor for almost 13 years and in 2004 I was scheduled to ‘retire’. This, however, seemed to be increasingly unattractive. Salvation came from an unlikely source: the Department for Education and Skills (DfES) in London. I was offered the job of Director-General for Higher Education and so I became a civil servant for almost three years with policy-advising and management responsibilities for universities in England – direct experience of the ‘P’ in ‘PDA’. Again, it was privileged and seriously interesting, working with Ministers, having a front row seat on the politics of the day. But I always knew I wanted to research again, so after a brief sojourn in Cambridge, I returned to academic life at University College London as Professor of Urban and Regional Systems. This was another terrific experience. I worked with Mike Batty, an old modelling friend, in the Centre for Advanced Spatial Analysis and a group of young researchers. What was very exciting was that my research field developed into the realms of what became complexity science – and a hot topic. Research grants flowed again. This included £2.5M for a five-year grant from EPSRC for a project on global dynamics. This embraced migration, trade, security and development aid: big issues to which real geography can make a significant contribution. It funded half a dozen new research posts and five PhD studentships.

The final step was my move to The Alan Turing Institute in the summer of 2016. The original plan was to develop a programme in urban modelling – partly on the basis that modelling was a crucial element of data science which was to some extent being neglected. However, this plan had to be put on hold as in September, I was appointed as CEO of the Institute. This meant that I had to learn much more about data science and AI and this delivered its own research benefits. I began to see new possibilities of urban models being embedded in ‘learning machines’.

My career trajectory has taken me from mathematics and elementary particle physics into geography and the social sciences via economics, to complexity science, and on into data science and AI. Add to this elements of operational research and ‘management’ both in a university and as a civil servant. The most crucial moves were not planned and I think a ‘serendipity’ label is appropriate.

6.3. Learning from history

My core research career was built on my move from the Rutherford Lab at Harwell to the Institute of Economics and Statistics in Oxford. I was recruited in the Autumn of 1964 by Christopher Foster (now Sir Christopher) to work on the cost-benefit analysis of major transport projects. My job was to do the computing and mathematics and at the same time to learn some economics. Of course, the project needed good transport models and at the time, all the experience was in the United States. Christopher had worked with Michael Beesley (LSE) on the pioneering cost-benefit analysis of the Victoria Line. To move forward on modelling, in 1965, Christopher, Michael and I embarked on a tour of the US. As I remember, in about ten days, we visited Santa Monica and Berkeley, Philadelphia, Boston and Washington DC. We met a good proportion of the founding fathers – they were all men – of urban modelling. A number of them influenced my thinking in ways that have been part of my intellectual make-up ever since – threads that can easily be traced in my work over the years. An interesting question then: for those recruited in subsequent decades, what are the equivalents? It would be an interesting way of writing a history of the field.

Jack (I. S.) Lowry was working for the RAND Corporation in Santa Monica where he developed the model of Pittsburgh that now bears his name. I recall an excellent dinner in his house overlooking the Bay. His model has become iconic because it revealed the bare bones of a comprehensive model in the simplest way possible. Those of us involved in building comprehensive models have been elaborating it ever since. The conversation for me reinforced something that was already becoming clear: the transport model needed to be embedded in a more comprehensive model so that the transport impact on land use – and vice-versa – could be incorporated.

The second key proponent of the comprehensive model was Britton Harris, a Professor of City Planning at the University of Pennsylvania but, particularly important in the context of that visit, he was the Director of the Penn-Jersey Land-Use Transportation Study. The title indicated its ambitions. This again reinforced the ‘comprehensive’ argument and became the basis of a life-long friendship and collaboration. I spent many happy hours in Wissahickon Avenue. The Penn-Jersey study used a variety of modelling techniques, not least mathematical programming which was a new element of my intellectual tool kit. More of Brit later. At Penn – was it on that trip or later? – I met Walter Isard, a giant figure in the creation of regional science who contributed to my roots in regional input-output modelling. Walter was probably the first person to recognise that von Thunen’s theory of rent could be applied to cities – see his 1956 book ‘Location and Space-Economy’[74]. Bill Alonso was one of his graduate students and he fully developed the theory of bid-rent. We visited Bill in Berkeley and I recall a letter from him three years later, in the heady days of 1968, starting with ‘As I write, military helicopters hover overhead ….’! Then back to Penn. For me it was Ben Stevens who operationalised the Alonso model in his 1960 paper with John Herbert – as a mathematical programming model.[75] This fed directly into work I did in the 1970s with Martyn Senior to produce an entropy-maximising version of that – making me realise that one of the unheralded advantages of that method was that it made optimising economic models – like the Alonso-Herbert-Stevens model – ‘optimally blurred’, to recognise sub-optimal real life[76].

At Harvard, we met John Kain, very much the economist, very concerned with housing models – territory I have failed to follow up on since. A new objective! He was at the Harvard-MIT Joint Centre for Urban Studies whose existence was a sign that these kinds of interdisciplinary centres were fashionable at the time – and have been in and out of fashion ever since – now fortunately fashionable again! An alumnus was Martin Meyerson who by this time was Chancellor of the University of California at Berkeley (and we dined with him in his rather austere but grand official dining room – why does one remember these things rather than the conversation?!). There also was Daniel (Pat) Moynihan who had just left the Centre to work for the President in Washington – another sign of the importance of the urban agenda. I was urged to meet him and that led to my only ever visit to the White House – to a small office in the basement. He later became very grand as a long-serving Senator for New York State.

The Washington part of our visit established some other important contacts and building bricks. We engaged directly with the transport modelling industry through Alan Voorhees – already running quite a large company that still bears his name.[77] It was valuable to see the ideas of transport modelling put to work and I think that reinforced my commitment that modelling was a contribution to achieving things – the use of the science. I met Walter Hansen, then working for Voorhees, who was probably the inventor of the concept of ‘accessibility’ in modelling through his paper ‘How accessibility shapes land use’, and T. R. (‘Laksh’) Lakshmanan of the ‘Lakshmanan and Hansen’ retail modelling paper – other critical and ever-present parts of the tool kit[78]. From a different part of the agenda, there was Clopper Almon who was working for the Government (as I remember) on regional input-output models.

Much of what I learned on that trip has remained as part of my intellectual tool kit. Much of it led to long-standing exchanges – particularly through regional science conferences. Some led to close working collaboration. Brit and Walter between them, ten years later, recruited me to a position of Adjunct Professor in Regional Science at Penn where I spent a few weeks every summer in the late 70s. I worked closely with Brit and those visits must have been the basis for my work with him on urban dynamics that was published in 1978 – and is still a feature of my ongoing work plan. I could chart a whole set of contacts and collaborations for subsequent decades. Maybe the starting points are always influential for any of us but I was very lucky in one particular respect: it was the start of the modern period of urban modelling and there was everything to play for. I was also very fortunate in being able to meet the key research figures of the time. I suspect that now, as fields have expanded, it would be much more difficult for a young researcher to be able to do this.

6.4. Spinning out

I estimate that once every two years for the last 20 or 30 years, there has been a report of an inquiry into the transfer of university research into the economy – for commercial or public benefits – a version of ‘research for’ rather than just ‘research on’. The fact that the sequence continues demonstrates that this remains a challenge. One mechanism is the spinning out of companies from universities and this section is in two parts – the first describing my own experience and the second seeking to draw some broader conclusions. Either part might offer some clues for budding entrepreneurs.

This is the story of GMAP Ltd. The idea was born, as noted in the previous section, half-formed, at Wetherby Racecourse on Boxing Day 1984 and it was six years before the company was spun out. Of course, there is a back story. Through the 1970s, I worked on a variety of aspects of urban modelling supported by large research grants. All of this work was basic science but there was a clear route into application, particularly into town planning. By the early 80s, the research grants dried up – a combination of becoming unfashionable and perhaps it was ‘someone else’s turn’. I worked with a friend and colleague, Martin Clarke, and he and I always went to Wetherby races on Boxing Day. We discussed the falling away of research grants. As we watched – on TV in a bar – Borough Hill Lad win the King George at Kempton, we somehow decided that the commercial world would provide an alternative source of funding. We had models at our disposal – notably the retail model. Surely there was a substantial market for this expertise!

Our first thought was that the companies that had the resources to implement this idea were the big management consultants. In 1985, we began a tour. The idea was that they would work with our models on some kind of licence basis, Martin and I could be consultants, and they would find the clients. We were well received, usually given a good lunch; and then nothing happened. It became clear that DIY was the only way to make progress. We approached some companies we knew and thought were possible targets, but mostly our marketing was cold calling based on a weekly read of the Sunday Times job advertisements to identify companies seeking to fill marketing posts. Over two years, we had a number of small contracts, run through ULIS (University of Leeds Industrial Services). We learned our first lesson in this period: that to get contracts, we had to do what the companies actually wanted rather than what we thought they should have. We thought we could offer the Post Office a means of optimising their network; what they actually wanted was to know the average length of a garden path! That was our first contract, and that’s what we did. Another company (slightly later) said that all these models were very interesting, but their data was in 14 different information systems – could we sort that out? We did. The modelling came later.

Our turnover in Year 1 was around £20k and it slowly grew to around £100k. The big breakthroughs came in 1986 and 1987 when we won contracts with W H Smith and with Toyota. By then we had our first employee and GMAP became a formal division of ULIS. It wasn’t yet spun out, but we could run it like a small company with shadow accounts that looked like real accounts. There was then steady growth and in late 1989, we won a major contract with the Ford Motor Company. By 1990, our turnover reached £1M, we had a staff of around 20, we crossed a threshold and we were allowed to spin out as GMAP Ltd. By this time, I was heavily involved in University management and so could only function as a non-executive director. The company’s development turned critically on Martin becoming the full-time Managing Director.

The 90s were years of rapid growth. We retained clients such as W H Smith, Toyota and Ford but added BP, SmithKline Beecham, the Halifax Building Society and many more. We were optimising retail, bank and dealership networks across the UK and, in the case of Ford, all over Europe. By 1997, our turnover was almost £6M and we were employing 110 staff. And then came a kind of ending. In 1997, the automotive part of GMAP was sold to R L Polk, an American company, and in 2001, the rest to the Skipton Building Society, to merge into a group of marketing companies that they were building.

What can we learn from this? It was very hard work, especially in the early days. DIY meant just that: Martin and I wrote the computer programmes, wrote and copied the reports, collected the data. We once stood outside Marks and Spencer in Leeds with clipboards asking people where they had travelled from – so we could get the data to calibrate a model. We were moved on by M and S staff for being a nuisance! We also had to be very professional. A project could not be treated like a conventional research project. If there was a three-month deadline, it had to be met. We had to learn how to function in the commercial world very quickly. But it was exciting as well. We grew continuously. We didn’t need any initial capital – we funded ourselves out of contracts. We were always profitable. It was real research: we had incredible access to data from companies that would have been unavailable if we hadn’t been working for them. And many of them rather liked being referred to in papers published in academic journals.

Could it be done again? In this field, possibly, though this kind of analysis has become more routine and has been internalised by many – notably the big supermarket companies. However, there are many companies that could use this technology with (literally) profit, but don’t. And there are huge opportunities in the public sector – notably education and health. The companies we worked with, especially those with whom we had long-term relationships, recognised the value of what they were getting: it impacted on their bottom line. We did relatively little work in the public sector – not for want of trying – but it was difficult to convince senior management to see the value. However, it could certainly be done again on the back of new opportunities. Much is said about the potential value of ‘big data’ or of the ‘internet of things’ for example and many small companies are now in the business of seeking out new opportunities. But is anyone linking serious modelling with these fields? Now, there’s an opportunity!![79]

6.5. Venturing into other disciplines

Urban and regional science – a discipline or a subdiscipline, or is it still interdisciplinary? – has been good at welcoming people from other disciplines, notably in recent times, physicists. Can we venture outside our box? It would be rather good if we could make some good contributions to physics!! However, given that the problems we handle are in some sense generic – such as spatial interaction – we can look for similar problems in other disciplines and see if we can offer a contribution. I can report a number of my own experiences which give some clues on how these excursions can come about and may be food for thought for something new for others. I ventured into demography with Phil Rees many years ago[80], and I tried to improve the Leontief-Strout inter-regional input-output model around the same time – the latter only implemented once by Geoff Hewings and colleagues[81]. Some of this has re-emerged in the Global Dynamics project, so it is still alive [refs]. Both of these are broadly within regional science. More interesting examples are in ecology, archaeology, history and security. All relate to spatial interaction and competition-for-resources modelling by some combination of (i) adding space, (ii) applying the models to new elements or (iii) thinking of new kinds of flow for new problems. In some cases, our own core models have to be combined with those from the other field. But let’s be specific.

The oldest exercise dates back to the mid 1980s, but also, after a gap in time, has proved one of the most fruitful. Around 1985, Tracey Rihll, then a research student in ancient history in Leeds, came to see me in the Geography Department and said that someone had told her that I had a model that would help her with her data. The data were points representing the locations of known settlements in Greece around 800 BC. What we did was make some colossal assumptions about interactions – say trade and migration – between settlements and, using Euclidean distance, run the data through a dynamic retail model to estimate – at equilibrium – settlement sizes. Out popped Athens, Thebes, Corinth etc – somehow teased out from the topology of the points. One site was predicted as large that hadn’t been thought to be, and if we had had the courage of our convictions, we would have urged archaeologists to go there! We published three papers on this work[82]. Nothing then happened for quite a long time until it was picked up by some American archaeologists and then by Andy Bevan in UCL Archaeology. Somehow, the penny dropped with Andy that the ‘Wilson’ of ‘Rihll and Wilson’ was now in UCL and we began to work together – first reproducing the old results and then extending them to Crete[83]. These methods were then separately picked up in UCL by Mark Altaweel in Archaeology and Karen Radner in History and we started working on data from the Kurdistan part of Iraq. In this case, the archaeologists were really prepared to dig at what the model predicted were the largest sites. Sadly, this has now been overtaken by events in that part of the world. This work has led to further published papers[84]. There is then one other archaeology project but this links with ‘security’ below.
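The calculation itself is compact. The following is a minimal sketch in Python – with made-up coordinates rather than the real Greek site locations, and illustrative parameter values – of the kind of dynamic spatial interaction model used: flows are distributed between points, and each point’s ‘size’ grows or shrinks until an equilibrium emerges.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up settlement locations (the real exercise used known site
# locations) and equal 'origin' activity at each point.
xy = rng.uniform(0, 100, size=(12, 2))
cost = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)  # Euclidean distances
O = np.ones(12)

# Illustrative parameters: returns to scale, distance decay,
# cost-of-size and adjustment speed.
alpha, beta, kappa, eps = 1.1, 0.05, 1.0, 0.05
W = np.ones(12)                         # initial settlement 'sizes'

# Dynamics in the Harris-Wilson spirit: sizes grow where inflows exceed
# costs and shrink where they do not, iterated towards equilibrium.
for _ in range(5000):
    weights = (W ** alpha) * np.exp(-beta * cost)           # i-to-j attraction
    S = O[:, None] * weights / weights.sum(axis=1, keepdims=True)
    D = S.sum(axis=0)                                        # inflow to each point
    W = np.maximum(W + eps * (D - kappa * W), 1e-6)

print(np.round(W, 2))
```

With the returns-to-scale parameter above one, activity tends to concentrate in a few large centres – the ‘Athens, Thebes, Corinth’ effect described above.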

The excursion into ecology came about in a different way. The dynamic retail model, as we have noted, is based on equations that are very similar to the Lotka-Volterra equations in ecology and so I decided to investigate whether there was the possibility of knowledge transfer between the two fields. What was most striking was that virtually all the applications in ecology were aspatial, notwithstanding the movement of animals and seeds. So I was able to articulate what a spatial L-V system might look like in ecology. I didn’t have the courage to try to publish it in an ecology journal, but Environment and Planning A published the paper and I fear it has fallen rather flat[85]. But I still believe that it is important! Indeed, it is highly relevant in exploring the spatial dynamics of the COVID pandemic[86].

There was a different take on history with the work I did with Joel Dearden[87] [add footnote][88] on Chicago. This came about because I had been invited to give a paper at a seminar in Leeds to mark Phil Rees’s retirement and I had been reading William Cronon’s book, Nature’s Metropolis, about the growth of Chicago[89]. Cronon’s book in particular charted in detail the growth of the railway system in North America in the 19th Century. Phil had done his PhD in Chicago and so this seemed like a good topic for the seminar. Joel and I designed a model – not unlike the Greek one but in this case with an emphasis on the changing accessibility provided by the growth of railways over a century. We had US Census data from 1790 against which we could do some kind of checking and we generated a plausible dynamics. This work is being taken forward in the UK context as part of the Living with Machines project in The Alan Turing Institute.

The fourth area was ‘security’, stimulated by this being one of the four elements of our then current EPSRC Global dynamics project. This turns on the interesting idea of interpreting spatial interaction as ‘threat’ – which can attenuate with distance. From a theoretical point of view, the argument was analogous to the ecological one. Lewis Fry Richardson, essentially a meteorologist, developed an interest in war in the 1930s and built models of arms races using L-V models[90]. But again, without any spatial structure. We have been able to add space and this makes the model much more versatile and we have applied it in a variety of situations[91]. We even reconnected with archaeology and history again by seeking to model, in terms of threat, the summer ‘tour’ of the Emperor of Assyria in the Middle Bronze Age with his army, taking over smaller states and reinforcing existing components of the Empire. In this case, the itineraries of the tour were recorded on stone tablets – an unusual source of data for modellers[92]. The models are currently being applied in a contemporary context through a project in The Alan Turing Institute[93].
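
Again in outline, and in notation chosen for illustration rather than that of the papers cited: Richardson’s aspatial arms-race model has the form

$$\frac{dx}{dt} = a y - m x + g, \qquad \frac{dy}{dt} = b x - n y + h,$$

and a spatial ‘threat’ version replaces the single rival by a sum over all the others, attenuated by distance:

$$\frac{dx_i}{dt} = \sum_{j \neq i} a_{ij}\, x_j\, e^{-\beta d_{ij}} - m_i x_i + g_i.$$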

All of these ventures have been modest, two of them funded by small UCL grants and as small parts of larger projects in The Alan Turing Institute. Papers have been accepted and published in relation to all of them.  However, attempts to obtain funding from research councils have mostly failed (the Turing project being an exception – but that is part of something larger). Are we ahead of our time or is it more likely that the community of available referees can’t handle this kind of interdisciplinarity, particularly if algebra and calculus are involved!?!

6.6. Following fashion

Choosing a research topic – even a research field – is difficult and tricky. Much research follows the current fashion. This leads me to develop an argument around two questions. How does ‘fashion’ come about? How should we respond to it? I begin by reflecting on my personal experience.

My research career began in transport modelling in the 1960s and was driven forward from an academic perspective by my work on entropy maximising and from a policy perspective by working in a group of economists on cost-benefit analysis. Both modelling and cost-benefit analysis were the height of fashion at the time. I didn’t choose transport modelling: it was an available job after many failed attempts to find social science employment as a mathematician. The fashionability of both fields was almost certainly rooted in the highway building programme in the United States in the 1950s: there was a need for good investment appraisal of large transport projects. As noted in section 6.3 above, planners such as T. R. (‘Laksh’) Lakshmanan and Walter Hansen developed concepts like accessibility and retail models. This leads me to a first conclusion: fashion can be led from the academic side or the ‘real’ policy side – in my case, perhaps unusually, both. It was realised pretty quickly – probably from both sides – that transport modelling and land-use were intertwined and so, led by people such as Britton Harris and Jack Lowry, the comprehensive urban modelling field was launched. I joined this enthusiastically.

These narrower elements of fashion were matched by a broader social science drive to quantitative research though probably the bulk of this was statistical rather than mathematical. It is interesting to review the contributions of different disciplines – and this would make a good research topic in itself. The quantitative urban geographers were important: Peter Haggett, Dick Chorley, Brian Berry, Mike Dacey and more – a distinguished and important community. They introduced the beginnings of modelling, but were not modellers.[94] The models themselves grew out of engineering. The economists were surprisingly unquantitative. Walter Isard initiated and led the interdisciplinary movement of ‘regional science’ which thrives today[95]. From a personal point of view, I moved into Geography as a good ‘broad church’ base. I was well supported by research council grants and built a substantial modelling research team.

By the late 70s and early 80s, I had become unfashionable – which may be an indicator of the half-life of fashions! There were two drivers: the academic on the one hand and planning and policy on the other. There was Douglas Lee’s ‘Requiem for large-scale models’[96] (which seemed to me to be simply anti-science but was influential) and a broader Marxist attack on ‘positivist’ modellers – notwithstanding the existence of distinguished Marxist modellers such as Sraffa. And model-based quantitative methods in planning – indeed to an extent planning itself – became unfashionable around the time of the Callaghan government in the late 70s. Perhaps, and probably, as modellers we had failed to deliver convincingly.

By the mid 80s, research council funding having dried up, my colleague Martin Clarke and I decided to explore the prospect of ‘going commercial’ as a way of replacing the lost funding. That is a story told in section 6.4; it was successful – after a long ‘start-up’ struggle. As in the early days of modelling, in racing parlance, we had ‘first mover’ advantage and we were valued by our clients. It would be difficult to reproduce this precisely because so much of the expertise has been internalised by the big retailers. But that was one response to becoming unfashionable.

By the 2000s, complexity science had become the new fashion. As an enthusiastic follower of Warren Weaver, I knew I was a complexity scientist, and I happily rebadged myself; this led to new and substantial research council funding. In effect, modelling became fashionable again, but under a new label (also supported by the needs of environmental impact assessment in the United States, which needed modelling). By the 2010s, the complexity fashion was already fading and new responses were needed. So we should now examine the new fashions and see what they mean for research priorities.

Examples of current fashions are: agent-based modelling (ABM); network analysis; study of social media; big data; smart cities. The first three are academic led, the fourth is shared, and the fifth is policy and technology led (unusually by large companies rather than academia or government). The first two have some substantial interesting ideas but on the whole are carried out by researchers who have no connection to other styles of modelling. They have not made much impact outside academia. In the ABM case, it is possible to show that with appropriate assumptions about ‘rules of behaviour’, the models are equivalent to more traditional (but under-developed) dynamic models. It may also be the case that, as a modelling technique, ABM is more suited to the finer scale – for example pedestrian modelling in shopping precincts. ABM is sometimes confused with microsimulation – a field that deserves to be a new fashion: it is developing, and there is scope for major investment.

A curiosity of network analysis is a focus on topology in such a way that valuable and available information is not used. For example, in many instances, flows between nodes are known (or can be modelled) and can be loaded onto networks to generate link loads, but this rich information is not usually used by network analysts. This is probably a failure of the network community to connect to – even to be aware of – earlier and relevant work. In this case, as in others, there are easy research pickings to be had simply by joining up!
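
As a simple illustration of the ‘joining up’ being advocated – a sketch with a hypothetical network and flows, not anyone’s production code – the following loads known (or modelled) origin-destination flows onto a network along shortest paths (using the networkx library) to produce the link loads that purely topological analyses leave unused.

```python
import networkx as nx

def link_loads(G, od_flows, weight="length"):
    """Sketch: load each O-D flow onto its shortest path and accumulate
    the flow on every link used, giving link loads that topology alone
    cannot provide."""
    loads = {edge: 0.0 for edge in G.edges()}
    for (origin, destination), flow in od_flows.items():
        path = nx.shortest_path(G, origin, destination, weight=weight)
        for u, v in zip(path[:-1], path[1:]):
            key = (u, v) if (u, v) in loads else (v, u)
            loads[key] += flow
    return loads

# toy usage on a hypothetical four-node network
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1),
                           ("A", "D", 2), ("D", "C", 1)], weight="length")
print(link_loads(G, {("A", "C"): 100, ("D", "B"): 40}))
```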

The large-scale study of social media is an interesting phenomenon. I suspect it is done because there are large sources of data that can then be plugged into the set of network analysis techniques mentioned earlier. If this could be seen as modelling telecommunications as an important addition to the comprehensive urban model, then it would be valuable both as a piece of analysis and for telecoms companies and regulators but these connections are not typically made. Interestingly, flows are not usually modelled and there are research opportunities here (and in the related field of ‘internet of things’).

The ‘big data’ field is clearly important – but the importance should be measured against the utility of the data in analysis and policy – not as a field in itself. This applies to the growing ‘discipline’ of ‘data science’: if this develops as a silo, the full benefits of new data sources will not be collected. However, there is a real research issue to be discussed here: the design and structure of information systems that connect big data to modelling, planning and policy analysis.

The ‘smart cities’ field is important in that all efficiency gains are to be welcomed. But it is a fragmented field, mostly focused on very specific applications – even down to the level of ‘smart lamp-posts’ – and there is much thinking to be done in terms of integration with other forms of analysis and planning, and being smart for the long run.

There is one general conclusion to be drawn that I will emphasise very strongly: fashion is important because usually (though not always) it is a recognition of something important and new; but the degrees of swing to fashion are too great. There are many earlier threads which form the elements of core social science which become neglected. Fortunately, there is usually a small but enthusiastic group who keep the important things moving forward and the foundation is there for when those threads become important and fashionable again (albeit sometimes under another name). So in choosing research topics, it is important to be aware of the whole background and not just what is new; sometimes integration is possible; sometimes the old has to be a continuing and developing thread. It is also worth asking when a fashion ceases to be ‘new’! The moral may be that if you choose to follow a fashion, get in early and be reasonably confident that it will be fruitful. If you are late, the practising community will be large and very competitive!

6.7. Against oblivion: mining the past

I was at school in the 1950s – Queen Elizabeth Grammar School, Darlington – with Ian Hamilton. He went on to Oxford and became a significant and distinguished poet, critic, writer and editor – notable, perhaps, for shunning academia and running his editorial affairs from the Pillar of Hercules public house in Greek Street in Soho. I can probably claim to be the first publisher of his poetry as Editor of the School Magazine – poems that, to my knowledge, have never been ‘properly’ published. We lost touch after school. He went on to national service and Oxford; I deferred national service and went to Cambridge. I think we only met once in later years – by coincidence on an underground station platform in the 1960s or 70s. However, I did follow his work over the years and I was looking at one of his books recently that gave me food for thought – Against oblivion, published posthumously in 2002.[97] (He died at the end of 2001.) This book contained brief lives of 50 poets of the Twentieth Century – emulating a work by Dr Johnson of poets of the Seventeenth and Eighteenth Centuries. He also refers to two Twentieth Century anthologies for which the editors had made selections. The title of Ian’s book reflects the fact that a large proportion of the poets in these earlier selections had disappeared from view – and he checked this with friends and colleagues – into oblivion. He took this as a warning about what would happen to the reputations of those included in his selection a hundred years into the future – and by implication, the difficulty of making a selection at all. It is interesting to speculate about what survives – whether the oblivion is in some sense just or unjust. Were those that have disappeared from view simply ‘fashionable’ at the time – note Following fashion in the previous section – or is there a real loss?

This has made me think about ‘selection’ in my own field of urban modelling. I edited a five-volume ‘history’ of urban modelling – by selecting what I judged to be – albeit subjectively – significant papers and book extracts which were then published in more or less chronological order. [ref] The first two volumes cover around the first 70 years and include 70 or so authors. Looking at the selection again, particularly for these early volumes, I’m reasonably happy with it though I have no doubt that others would do it differently. Two interesting questions then arise: which of these authors would still be selected in fifty or a hundred years’ time? Who have we missed and who should be rescued from oblivion? The first question can’t be answered, only speculated about. It is possible to explore the second, however, by scanning the notes and references at the end of each of the published papers. Such a scan reveals quite a large army of researchers and early contributors. Some of them were doing the donkey work of calculation in the pre-computer age but many, as now, were doing the ‘normal science’ of their age. It is this normal science that ultimately gives fields their credibility – the constant testing and retesting of ideas – old and new. However, I’m pretty sure there are also nuggets, some of them gold, to be found by trawling these notes and references and this is a kind of work which is not, on the whole, done. This might be called ‘trawling the past for new ideas’, or some such. This would be closely related to delving into, and writing about, the history of fields, and in urban modelling this has only been done on a partial and selective basis, mainly through review papers. (Though the thought occurs to me that a very rich source would be the obligatory literature reviews and associated references in PhD theses. I am not an enthusiast for these reviews as Chapter 1 of theses because they usually don’t make for an interesting read – but this argument suggests that they have tremendous potential value as appendices.) There is one masterly exception and that is the recently published book by Dave Boyce and Huw Williams – Forecasting urban travel – which, while very interesting in fulfilling its prime aim as a history of transport modelling[98], would also act as a resource for trawling the past to see what we have missed! This kind of history also involves selection, but when thoroughly accomplished as in this case, is much more wide ranging.

Most of us spend most of our time doing normal science. We recognise the breakthroughs and time will tell whether they survive or are overtaken – whether they were substantive game-changers. Ian Hamilton’s introduction to Against oblivion provides some clues about how this process works – and that, at least, it is a process worth studying. For me, it suggests a new kind of research: trawling the past for half-worked out ideas that may have been too difficult at the time and could be resurrected and developed.

Chapter 7. Tricks of the trade

7.1. Introduction

Implicitly, over the years, it is possible to build an intellectual toolkit, or in the research context, a research toolkit, which provides the concepts that can be brought to bear on a range of challenges as they emerge. Important elements of that toolkit are the superconcepts which cross disciplines and can be applied to a range of what turn out to be generic problems. We have argued that most of the big research problems are interdisciplinary and the toolkit can become very important – continually expanding of course. The importance here is because it draws on the breadth of knowledge as well as the depth – and this is one of the core challenges of working in an interdisciplinary way. The core elements of my own toolkit have been sketched in earlier chapters, particularly 1 and 3 – STM and PDA for example. What follows are ideas which have been added as subsidiary elements of my own toolkit – hence the notion of ‘tricks of the trade’. These offer further illustrations of the toolkit principle but readers are urged to develop their own!!

7.2. The brain as a model

I will start with Stafford Beer’s book Brain of the firm, which has ideas I have used since it was first published in 1972[99]. Stafford Beer was a larger-than-life character who was a major figure in operational research, cybernetics, general systems theory and management science. I have a soft spot for him because of his book and work more widely and because, though I never met him, he wrote to me in 1970 after the publication of my Entropy book saying that it contained the best description of ‘entropy’ he had ever read. I see from googling that his ‘brain’ book is still in print as a second edition and I think it is also possible to download a pdf. Googling will also fill in more detail on Stafford Beer, but beware the entry ‘Stafford Beer Festival 2015’, which made me think he still had a contemporary supporters’ club but turns out to be the Stafford Beer and Cider Festival!

The core argument of the book is a simple one: that the brain is the most successful system ever to evolve in nature and, therefore, if we explore it, we might learn something. In the Brain, he expounds the neurophysiology to such a degree that, at the time of first reading, it seemed so good that I checked the accuracy against some neurophysiology texts – and it seemed to pass. What follows is a considerable oversimplification both of the physiology and of Beer’s use of it – so tolerance is needed! The brain has five levels of organisation. The top – level 5 – is the strategic level. Levels 1-3 represent the autonomic nervous system, which governs actions like breathing without us having to think, and also carries instructions to carry out actions at level 1. Occasionally, the autonomic system passes messages upwards if, for example, there is some danger. Level 4 is particularly interesting. It can be seen as an information processor. The brain receives an enormous amount of data and would not be able to make sense of this without the filter. Beer’s argument is that there is no equivalent function in organisations – and this is to their fundamental detriment. He cites as an example – an exception to this – the Cabinet Office War Room in World War II (now open as part of the Imperial War Museum) which was set up to handle the real time flow of information and to deal with information overload.

Beer translated this into a model of an organisation which he called the VSM – the viable system model[100]. The workings of the organisation were at levels 1, 2 and 3. The top level – the company board or equivalent – was level 5. He usually attributes level 4 to the Development Directorate and I can see the case for that, but it doesn’t entirely deal with the filtering operation that any organisation needs. (But this is probably because of an over-rapid rereading on my part.) However, what he did recommend, even in 1972, was ‘a large dynamic electrical display of the organisation’ together with a requirement that all meetings of senior staff took place in that room. The technical feasibility of this is now much higher and it fits with the display of ‘big data’ such as that which has been built in Glasgow as an Innovate UK demonstrator.

This still leaves open the question of how to make sense of the mass of data – post filtering – and this is where we need an appropriate model. This connects to a little-explored research question: how to design the architecture of a multi-dimensional information system that can be aggregated and interrogated in a variety of ways. This statement constitutes an invitation to work on this!

I think we can gain tremendous insights from the Brain of the firm model when we think about organisations we either work in or work for, or are simply interested in. Additionally, can we learn anything about how to approach research? We are certainly aware of information overload and, functioning as individual researchers, the scale of it makes it impossible to cope with everything. Can we build an equivalent of a War Room? Forward-looking librarians are probably trying to help us by doing this electronically – but we run into the classification problem[101]. Can we organise any other kind of cooperative effort – crowd sourcing to find the game changers? We might call this the ‘market in research’. The buyers are the researchers who cite other research, and a cumulatively large number of citations usually points to something important. The only problem then is that the information comes too late. We learn about the new fashion; we don’t get in on the ground floor. So an unresolved challenge here!

7.3. DNA

The idea of ‘DNA’ has become a commonplace metaphor. The real DNA is the genetic code that underpins the development of organisms. I find the idea useful in thinking about the development of – evolution of – cities. This development depends very obviously on ‘what is there already’ – in technical terms, we can think of that as the ‘initial conditions’ for the next stage in a dynamic model. We can then make an important distinction between what can change quickly – the pattern of a journey to work for instance – and what changes only slowly – the pattern of buildings or a road network. It is the underpinnings of the slowly changing stuff that represents urban DNA. Again, in technical terms, it is the difference between the fast dynamics and the slow dynamics. The distinction is between the underlying structure and the activities that can be carried out on that structure.

It also connects to the complexity science picture of urban evolution and particularly the idea of path dependence. How a system evolves depends on the initial conditions, and path dependence can be thought of as a sequence of initial conditions. We can then add that if there are nonlinear relations involved – scale economies for example – then the theory shows us the likelihood of phase changes – abrupt changes in structure. The evolution of supermarkets is one example of this; gentrification is another.

This offers another insight: future development is constrained by the initial conditions. We can therefore ask the question: what futures are possible – given plans and investment – from a given starting point? This is particularly important if we want to steer the system of interest towards a desirable outcome, or away from an undesirable one – also, a tricky challenge here, taking account of possible phase changes. This then raises the possibility that we can change the DNA: we can invest in such a way as to generate new development paths. This would be the planning equivalent of genetic medicine – ‘genetic planning’. There is a related and important discovery from examining retail dynamics from this perspective. Suppose there is a planned investment in a new retail centre at a particular location. This constitutes an addition to the DNA. The dynamics then shows that this investment has to exceed a certain critical size for it to succeed. If this calculation could be done for real-life examples (as distinct from proof-of-concept research explorations) then this would be incredibly valuable in planning contexts[102]. Intuition suggests that a real-life example might be the initial investment in Canary Wharf in London: that proved big enough in the end to pull with it a tremendous amount of further investment. The same thing may be happening with the Crossrail investment in London – around stations such as Farringdon.
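
In outline – using a standard textbook form of the retail dynamics rather than the specifics of any application – the mechanism behind the ‘critical size’ result can be written as

$$\frac{dW_j}{dt} = \epsilon\,(D_j - k W_j), \qquad D_j = \sum_i \frac{e_i P_i\, W_j^{\alpha}\, e^{-\beta c_{ij}}}{\sum_k W_k^{\alpha}\, e^{-\beta c_{ik}}},$$

where $W_j$ is the size of centre $j$ (the addition to the DNA), $e_i P_i$ the spending power at residential zone $i$ and $k$ the unit cost of provision. With $\alpha > 1$, the revenue curve $D_j(W_j)$ crosses the cost line $k W_j$ twice: the lower crossing is the critical size below which a new centre decays away, the upper a stable equilibrium to which it can grow.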

The ‘structure vs activities’ distinction may be important in other contexts as well. It has always seemed to me in a management context that it is worth distinguishing between ‘maintenance’ and ‘development’, and keeping these separate – that is, between keeping the organisation running as it is, and planning the investment that will shape its future. (cf. The brain as a model – Section 7.2 above.)

The DNA idea can be part of our intuitive intellectual toolkit, and can then function more formally and technically in dynamic modelling. The core insight is worth having!!

7.4. Territories and flows

Territories are defined by boundaries at scales ranging from countries (and indeed alliances of countries) to neighbourhoods via regions and cities. These may be government or administrative boundaries, some formal, some less so; or they may be socially defined as in gang territories in cities. Much data relates to territories; some policies are defined by them – catchment areas of schools or health facilities for example. It is at this point that we start to see difficulties. Local government boundaries usually will not coincide with the functional city region; and in the case of catchment boundaries, some will be crossed unless there is some administrative ‘forcing’. So as well as defining territories, we need to consider flows both within but especially between them. Formally, we can call territories ‘zones’, and flows are then between origin zones and destination zones. If the zones are countries, then the flows are trade and migration; if zones within a city region, then the flows may be journeys to work, to retail or other facilities.

It is then convenient to make a distinction between the social and political roles of territories and how we make best use of them in analysis and research. In the former case, much administration is rooted in the government areas and they have significant roles in social identity – ‘I am a Yorkshire-man or -woman’, ‘I am Italian’, and so on; in the latter case, these territories don’t typically suit our purposes though we are often prisoners of administrative data and associated classifications.

So how do we make the best of it for our analysis? A part of the answer is always to make use of the flow data. In the case of functional city regions, the whole region can be divided into smaller zones and origin-destination flows (O-D matrices technically) can be analysed, first to identify ‘centres’ and then perhaps a hierarchy of centres (see [103] for one way to do this systematically). It is then possible, for example, to define a city region as a ‘travel to work area’ – a TTWA – as in the UK Census. Note, however, that there will always be an element of arbitrariness in this: what is the cut-off – the percentage of flows from an origin zone into a centre – that determines whether that origin is in a particular TTWA or not?
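
As a toy illustration of that cut-off problem – a sketch with invented data and an invented function name, not an official TTWA algorithm – the following assigns each origin zone to the candidate centre taking the largest share of its out-flows, but only if that share exceeds a chosen threshold; change the threshold and the map of territories changes.

```python
import numpy as np

def assign_ttwa(T, centre_ids, cutoff=0.3):
    """Sketch: T[i, j] is the flow from origin zone i to zone j. Each
    origin is assigned to the candidate centre receiving the largest
    share of its out-flows, provided that share exceeds the cut-off;
    otherwise it is left unassigned (-1)."""
    shares = T[:, centre_ids] / T.sum(axis=1, keepdims=True)
    best = shares.argmax(axis=1)
    return np.where(shares.max(axis=1) >= cutoff,
                    np.asarray(centre_ids)[best], -1)

# toy usage: five zones, with zones 0 and 3 treated as candidate centres
T = np.array([[50., 10.,  5., 20.,  5.],
              [30.,  5.,  5., 40.,  5.],
              [ 5.,  5., 10., 60.,  5.],
              [10.,  5.,  5., 70.,  5.],
              [20.,  5.,  5., 30.,  5.]])
print(assign_ttwa(T, centre_ids=[0, 3], cutoff=0.3))   # try cutoff=0.5 too
```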

In analysis terms, I would argue that the use of flow data is always critical. Very few territories – zones at any scale – are self-contained. And the flows across territorial boundaries, as well as the richer sets of O-D flows, are often very interesting. An obvious example is imports and exports across a national boundary from which the ‘balance of payments’ can be calculated – saying something about the health of an economy. In this case, the data exists (for many countries) but in the case of cities, it doesn’t and yet the balance of payments for a city (however defined) is a critical measure of economic health. There is a big research challenge there.

It is helpful to point to some contrasts in both administration and analysis when flows are not taken into account, and then to consider what can be done about this. There are many instances when catchments are defined as areas inside fixed boundaries – even when they are not defined by government. Companies, for example, might have CMAs – customer market areas; primary schools might draw a catchment boundary on a map giving priority to ‘nearness’ but trying to ensure that they get the correct number of pupils. In some traditional urban and regional analysis – in the still influential Christaller central place theory for example – market areas are defined around centres – in Christaller’s case nested in a hierarchy. This makes intuitive sense, but has no analytical precision because the market areas are not self-contained. As it happens, there is a solution[104]!

Think of a map of facilities – shopping centres, hospitals, schools or whatever – and for each, add to the map a ‘star’ of the origins of users, with each link being given a width to represent that number of users. For each facility, that star is the catchment population. And it all adds up properly: the sum of all the catchment populations equals the population of the region. This, of course, represents the situation as it actually is and is fine for retail analysis for example. It is also fine for the analysis of the location of health facilities. It may be less good for primary schools that are seeking to define an admissions policy.

A particular application of the ‘catchment population’ concept is in the calculation of performance indicators. If cost of delivery per capita is an important indicator, then this can be calculated as the cost of running the facility divided by the catchment population. It is clearly vital that there is a good measure of catchment population. In this case, the ‘star’ is better than the ‘territory’. But the concept can be applied the other way round. Focus on the population of a small zone within a city and then build a reverse star: link to the facilities serving that zone, each link weighted by what is delivered. What you then have is a measure of effective delivery and by dividing by the zonal population, you have a per capita measure. (An alternative, and related, measure is ‘accessibility’.) This may sound unimportant, but consider, say, supermarkets and dentists. On a catchment population basis, any one of these facilities may be performing well. On a delivery basis to a population, analysis will turn up areas that are ‘supermarket deserts’ (usually where poorer people live – those who would like access to the cheaper food) or have poor access to dental treatment – even though the facilities themselves are perfectly efficient.[105]
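
A minimal sketch of both calculations – the ‘star’ and the ‘reverse star’ – might look like the following, with an invented function name and illustrative inputs; the point is simply that it is the flow data, not the territorial boundaries, that drive both indicators.

```python
import numpy as np

def facility_indicators(S, population, running_cost, provision):
    """Sketch: S[i, j] is the flow of users from residence zone i to
    facility j, with row sums equal to the zone populations so that the
    catchments add up properly. running_cost[j] and provision[j] are the
    cost of running facility j and the quantity of service it delivers."""
    catchment = S.sum(axis=0)                 # 'star': catchment population of each facility
    cost_per_head = running_cost / catchment  # facility performance indicator
    # 'reverse star': share each facility's provision over the zones it serves
    delivered = (S / S.sum(axis=0, keepdims=True)) * provision
    delivery_per_head = delivered.sum(axis=1) / population   # zonal indicator
    return catchment, cost_per_head, delivery_per_head

# toy usage: three residence zones, two facilities (all numbers invented)
S = np.array([[80., 20.],
              [10., 90.],
              [ 5., 45.]])
print(facility_indicators(S, population=S.sum(axis=1),
                          running_cost=np.array([500., 700.]),
                          provision=np.array([100., 140.])))
```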

So what do we learn from this: that we have to work with territories, because they are administratively important and may provide the most data; but we should always, where at all possible, make use of all the associated flows, many of which cross territorial boundaries, and then calculate useable concepts like catchment populations and delivery indicators ‘properly’.

7.5. Adding depth

A valuable principle for starting a piece of research is to ‘start simple’ and then, progressively, to ‘add depth’. We illustrate with an urban example in the following.

An appropriate ambition of the model-building component of urban science is the construction of the best possible comprehensive model which represents the interdependencies that make cities complex (and interesting) systems. To articulate this is to spell out a kind of research programme: how do we combine the best of what we know into such a general model? Most of the available ‘depth’ is in the application of particular submodels – notably transport and retail. If we seek to identify the ‘best’ – many subjective decisions here in a contested area – we define a large scale computing operation underpinned by a substantial information system that houses relevant data. Though a large task, this is feasible! How would we set about it?

The initial thinking through would be an iterative process. The first step would be to review all the submodels and in particular, their categorisation of their main variables – their system definitions. Almost certainly, these would not be consistent: each would have detail appropriate to that system. It would be necessary then – or would it? – to find a common set of definitions. It may be possible to work with different classifications for different submodels and then to integrate them in some way in connecting the submodels as part of a general model that, among other things, captures the main interdependencies. This is a research question in itself! It is at this point that it would be necessary to confront the question of exogenous and endogenous variables. We want to maximise the number of endogenous variables but to retain as exogenous those that will be determined externally, for example by a planning process.

There is then the question of scales and possible relations between scales. Suppose we can define our ‘city’, say as a city region, divided into zones, with an appropriate external zone system (including a ‘rest of the world’ zone to close the system). Then for the city in aggregate, we would normally have a demographic model and an economic model. These would provide controlling totals for the zonal models: the zonal populations would add up to those of the aggregate demographic model for example. There is also the complicated question of whether we would have two or more zone systems – say one with larger zones, one with finer-scale. But for simplicity at this stage, assume one zone system. We can then begin to review the submodels[106].

The classic transport model has four submodels: trip generation, distribution, modal split and assignment. As this implies, the model includes a multi-modal network representation. Trips from origin to destination by mode (and purpose) are loaded onto the network. This enables congestion to be accounted for and properly represented in generalised costs (with travel time as an element) – a level of detail which is not usually captured in the usual running of spatial interaction models.

A fine-grain retail model functions with a detailed categorisation of consumers and of store attractiveness and can predict flows into stores with reasonable accuracy. This model can be applied in principle to any consumer-driven service – for example, flows into medical facilities, and especially general practice surgeries. The task is different if the flows are assigned by a central authority, as with schools for instance.

The location of economic activity, and particularly employment, is more difficult. Totals by sector might be derived from an input-output model but the numbers of firms are too small to use statistical averaging techniques. What ought to be possible with models is to estimate the relative desirability of different locations for different sectors and then use this information to interpret the marginal location decisions of firms. This fits with the argument to follow below about the full application of urban science being historical!

In all cases of location of activities, it will be necessary in a model to incorporate constraints at the zonal scale, particularly in relation to land use. As these are applied, measures of ‘pressure’ – e.g. on housing at particular locations – can be calculated (and related to house prices). It is these measures of pressure that lie at the heart of dynamic modelling, and it is to this that we will turn shortly.

As this sketch indicates, it would be possible to construct a Lowry-like model which incorporated the best-practice level of detail from any of the submodels. Indeed, it is likely that within the hardy band of comprehensive modellers – Marcial Echenique, Michael Wegener, Roger Mackett, Mike Batty and David Simmonds for example – this will largely have been done, though my memory is that this is usually without a full transport model as a component. What has not been done, typically, is to make these models fully dynamic. Rather, in forecasting mode, they are run as a series of equilibrium positions, usually on the basis of changes that are exogenous to the model[107].

The next step – to build a Lowry-like model that is fully dynamic – has been attempted by Joel Dearden and myself and is reported in Chapter 4 of Explorations in urban and regional dynamics[108]. However, it should be emphasised that this is a proof-of-concept exploration and does not contain the detail – e.g. on transport – that is being advocated above. It does tackle the difficult issues of moves: non-movers, job movers, house movers, house and job movers, which is an important level of detail in a dynamic model but very difficult to handle in practice. It also attempts to handle health and education explicitly in addition to conventional retail. As a nonlinear model, it does embrace the possibility of path dependence and phase changes and these are illustrated (a) by the changes in initial conditions that would be necessary to revive a High Street and (b) in terms of gentrification in housing.

What can we learn from this sketch? First, it is possible to add much more detail than is customary but this is difficult in practice. I would conjecture this is because to do this effectively demands a substantial team and corresponding resources and, unlike particle physics, these kinds of resources are not available to urban science! Secondly, and rather startlingly, it can be argued that the major advance of this kind of science will lie in urban history! This is because in principle, all the data is available – even that which we have to declare exogenous from a modelling perspective.  The exogenous variables can be fed into the model and the historians, geographers and economic historians can interpret their evolution. This would demand serious team work but would be the equivalent for urban science of the unravelling of DNA in biology or demonstrating the existence of the Higgs boson! Where are the resources – and the ambition – for this!!

7.6. Beware of ‘optimisation’!

The idea of ‘optimisation’ is basic to lots of things we do and to how we think. When driving from A to B, what is the optimum route? When we learn calculus for the first time, we quickly come to grips with the maximisation and minimisation of functions. This is professionalised within operational research. If you own a transport business, you have to plan a daily schedule of collections and deliveries. How do you allocate your fleet to minimise costs and hence to maximise profits for the day? In this case, the mathematics and the associated computer programmes exist and are well known – they will solve the problem for you. You have the information and you can control what happens. But suppose now that you are an economist and you want to describe, theorise about, or model human behaviour. Suppose you want to investigate the economics of the journey to work. This is another kind of scheduling problem except that in this case it involves a large number of individual decision makers. If we turn to the microeconomics textbooks, we find the answer: define a utility function for individuals, and each can then maximise. In this case we run into problems: does each individual have all the relevant information? Does the economist have this? For the individual, all options need to be available on possible journeys – perfect information. The impossibility of this led Herbert Simon to the powerful concept of ‘satisficing’ rather than ‘maximising’. This was brilliant: it shifts the modelling task to a probabilistic one (as well as being a realistic description of human behaviour – isn’t it what we all do?). Of course, economists responded to this too and associated probability distributions with the utility functions. This is more difficult to build into the basic economics textbooks, however. And for this problem, there is an associated issue: space. The economists’ toolkit is usually seen as having micro or macro dimensions but when space needs to be added, we might think of the resulting scale as being ‘meso’. More adaptation needed.

So the lesson to heed at this stage is that while ‘optimisation’ is a powerful concept and tool, it should be used with care and should be ‘blurred’ via the introduction of probability distributions – which makes everything more messy – when appropriate. Hence, we should ‘beware’. A related field to explore in this respect is that of the very fashionable agent-based models (ABM). If individuals in an ABM are behaving according to ‘rules’, does the specification of these rules incorporate the necessary blurring? I know this can be done, and have done it, but is it always done? I suspect not[109].

There is a temptation to stick to simpler forms of optimisation because the associated mathematics and computer software is so attractive. This is particularly true of linear programming problems like the transport business owner’s scheduling problem. A good example is the algorithm for the shortest path through a network which Dijkstra discovered in the 1950s. A great thing to have. In the early days of transport modelling this provided the basis for assigning origin-destination flows to networks – essentially assuming that all travellers took the best route. Again, this proved too simple and eventually it became possible, from the early 1970s, though a shade more difficult, to calculate second best and third best routes, even the kth best route – and then the trips could be allocated probabilistically. And then we have to recognise that much of the world is nonlinear. The mathematics of nonlinear optimisation is more tricky but huge progress has been made – and indeed one of the core methods dates back to Lagrange in the Eighteenth Century.
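
As a sketch of that probabilistic allocation – assuming the networkx library and a logit-style split over the k best routes, which is one standard way of doing it rather than the only one – trips can be spread over routes instead of all being forced onto the single shortest path.

```python
import numpy as np
import networkx as nx
from itertools import islice

def logit_route_split(G, origin, destination, trips, k=3, theta=0.5,
                      weight="cost"):
    """Sketch: enumerate the k best routes and share the trips among them
    with a logit model, rather than forcing everything onto the single
    shortest path."""
    routes = list(islice(nx.shortest_simple_paths(G, origin, destination,
                                                  weight=weight), k))
    costs = np.array([nx.path_weight(G, r, weight=weight) for r in routes])
    p = np.exp(-theta * (costs - costs.min()))     # 'blurred' choice probabilities
    p /= p.sum()
    return [(route, trips * share) for route, share in zip(routes, p)]

# toy usage on a hypothetical network: three routes from A to C
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 2), ("B", "C", 2), ("A", "D", 3),
                           ("D", "C", 2), ("A", "C", 6)], weight="cost")
for route, flow in logit_route_split(G, "A", "C", trips=1000):
    print(route, round(flow))
```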

I have been lucky with my own work in all these respects because entropy-maximising is a branch of nonlinear optimisation which actually offers optimum blurring. It is possible to take a traditional economic model and to turn it into an optimally blurred, and hence more realistic, model by these means. Indeed, there is one remarkable mathematical result: that the nonlinear version of an equivalent of the scheduling problem transforms into the linear problem when one of the parameters becomes infinite[110].
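
In outline, and in my notation rather than that of the reference, the doubly constrained entropy-maximising problem and its solution are

$$\max_{\{T_{ij}\}} \; -\sum_{ij} T_{ij}(\log T_{ij} - 1) \quad \text{subject to} \quad \sum_j T_{ij} = O_i, \;\; \sum_i T_{ij} = D_j, \;\; \sum_{ij} T_{ij} c_{ij} = C,$$

$$T_{ij} = A_i B_j O_i D_j e^{-\beta c_{ij}},$$

and the remarkable result referred to is that as $\beta \rightarrow \infty$ this solution tends to that of the linear programme $\min \sum_{ij} T_{ij} c_{ij}$ subject to the same row and column constraints – the classical transportation problem.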

Conclusion: different kinds of optimisation methods should be in the toolkit, but we should be wary about matching uses with reality!

7.7. Missing data

All the talk of ‘big data’ sometimes carries the implication that we must now surely have all the data that we need. However, frequently, crucial data is ‘missing’. This can then be seen as inhibiting research: ‘can’t work on that because there’s no data’! For important research, I want to make the case that missing data can often be estimated with reasonable results. This then links to the ‘statistics vs mathematical models’ issue: purist statisticians really do need the data – or at least a good sample; if there is a good model, then there is a better chance of getting good estimates of missing data.[111]

As noted earlier, I started my life in elementary particle physics at the Rutherford Lab. I was working in part at CERN on a bubble chamber experiment in the synchrotron. A high-energy proton collided with another proton and the results of the collision left tracks in the chamber which were curved by a magnetic field thereby offering a measurement of momentum, having hypothesised the mass. My job was to identify the particles generated in the collision. The ‘missing data’ was that of the neutral particles which left no tracks. The solution came from a mix of the model and statistics. The model offered a range of hypotheses of possible events and chi-squared testing identified the most probable – actually with remarkable ease though I confess that there was an element of mystery in this for me. But it made me realise, with some hindsight, that missing data could be discovered.

Science needs data – cf. Nullius in verba[112] – but it also needs hypotheses and theories to test – cf. Evolvere theoriae et intellectum[113]. In the case of my current field – the analysis of the states and dynamics of cities and regions – there is an enormous need for data. I once estimated that I needed 10¹³ variables as the basis for a half-decent comprehensive urban model and in many ways this takes us beyond big data – though real-time sensor data will generate these kinds of numbers very quickly. The question is: is it the data we need? We can set out a comprehensive theory – and an associated model – of cities and regions. We have core data from decennial censuses together with a large volume of administrative data and much survey data. The real-time data – e.g. positional data from mobile phones – can be used to estimate person flows, taking over from expensive (and infrequent) surveys. In practice, of course, much of the data available to us is sample data and we can use statistics – either directly or to calibrate models – to complete the set.

My own early work in urban modelling was a kind of inversion of the missing data problem: entropy maximising generated a model which provided the best fit to what is known – in effect a model for the missing data. It turns out, not surprisingly, to have a close relationship to Bayesian methods of adding knowledge to refine beliefs. In theory, this only works with large ‘populations’ but there have been hints that it can work quite well with small numbers. This only gets us so far. The collection (or identification) of data to help us build dynamic models is more difficult. Even more difficult is connecting these models which rely on averaging in large populations with micro ‘data’ – maybe qualitative – on individual behaviour. There are research challenges to be met here.
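
A minimal sketch of that ‘model for the missing data’ idea – a doubly constrained gravity model balanced by iterative proportional fitting, with invented margins and costs – shows how a full matrix can be estimated from what is known.

```python
import numpy as np

def fit_od_matrix(O, D, cost, beta=0.1, n_iter=200, tol=1e-9):
    """Sketch: given only the margins (total flows leaving each origin, O,
    and arriving at each destination, D) and a cost matrix, balance a
    doubly constrained gravity/entropy model to estimate the unobserved
    cell values - a 'model for the missing data'."""
    T = np.exp(-beta * cost)                   # prior: flows decay with cost
    for _ in range(n_iter):
        T *= (O / T.sum(axis=1))[:, None]      # match the origin totals
        T *= (D / T.sum(axis=0))[None, :]      # match the destination totals
        if np.allclose(T.sum(axis=1), O, rtol=tol):
            break
    return T

# toy usage with invented margins and costs (margins must balance)
O = np.array([100., 200.])
D = np.array([120., 180.])
cost = np.array([[1., 3.],
                 [2., 1.]])
print(fit_od_matrix(O, D, cost).round(1))
```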

There are other kinds of challenges: what to do when critical elements of a simpler nature are missing. An example is the need for data on ‘import and export’ flows across urban boundaries to be used in building input-output models at the city (or region) scale. We need these models so that we can work out the urban equivalent of the well-understood ‘balance of payments’ in the national accounts. How can we estimate something which is not measured at all, even on a sample basis? I recently started to ponder whether we could look at the sectors of an urban economy and make the bold assumption that the import and export propensities were identical to the national ones. This immediately throws up another problem: that we have to distinguish between intra-national – that is, between cities – and international flows. It became apparent pretty quickly that we needed the model framework of interacting input-output models for the UK urban system before we could progress to making these admittedly very bold estimates of the missing data. We have done this for 200+ countries in a global dynamics research project and the task is now to translate it to the urban scale for the country as a whole. A ‘missing data’ problem turns out, then, to be quite a tricky theoretical one.

Perhaps the best way to summarise the ‘missing data’ challenges is to refer back to the ‘Requisite knowledge’ argument of Chapter 1: what is the ‘requisite data set’ needed for an effective piece of research – e.g. to calibrate a model? Then if the model is good, the model outputs look like ‘data’ for other purposes. More generally: do not be put off from doing something important by ‘missing data’. There are ways and means, albeit sometimes difficult ones!

Chapter 8. Managing research, managing ourselves

8.1. Introduction

‘Management’ is usually thought to be about ‘managing organisations’ and of course it is. But it is also, at a micro scale, about managing ourselves as individuals and the same principles apply. In the universities’ context, it is in another sense about ‘managing ourselves’ as the structures are typically dominated by academics.[114] In the following sections, I explore various dimensions of management, starting by noting that most of us ‘learn management from experience’ and I use myself as an example (8.2). How to encourage ‘collaboration’ is something not usually in the standard textbook but is particularly important in research (8.3). Managers face challenges in allocating resources and I use the competition between ‘pure’ and ‘applied’ communities in research to illustrate this (8.4). Given the power of the big players – the big teams – at the university or institute scale, or in relation to very well-funded research groups, what do you do to succeed if you are a manager in a small institution or group? I take the Leicester City example (8.5)! This always demands thinking on how to be ahead of the game. Warren Weaver provides a wonderful example: what would Warren Weaver say now (8.6)? How do we know we have at least achieved ‘best practice’ (8.7)? And finally, at the micro-scale, how do we manage our time most effectively (8.8)?

8.2. Do you really need an MBA?

‘How to manage’ has itself become big business – evidenced by the success of University Business Schools and the MBA badge that they offer; note also the size of the management section of a bookshop. I don’t dispute the value of management education or of much of the literature, but my own experience has been different: learning on the job. It is interesting, for myself at least, to trace the evolution of my own knowledge of ‘management’. I don’t claim this as a recipe for success. There were failures along the way, and I have no doubt that there are many alternative routes. But I did discover some principles that have served me well – most of them filleted from the literature, tempered with experience. This can be thought of as ‘research on research’, at least at an elementary level.

In my first ten years of work, my jobs were well focused, with more or less single objectives, and ‘management’ consisted of getting the job done. This began, at the then newly founded Rutherford Laboratory at Harwell, writing computer programmes to identify bubble chamber events at the CERN synchrotron; on to implementing transport models for planning purposes at the then Ministry of Transport. I set up the Mathematical Advisory Unit in the Ministry, which became large, and running it was clearly a management job. I moved to the Centre for Environmental Studies as Assistant Director – another management job. These roles were on the back of a spell of research into urban economics and modelling in Oxford which also had me working on a broader horizon even when I was a civil servant – and of course, CES was a research institute. From 1964-67 I was an Oxford City Councillor (Labour) – then a wholly ‘part-time’ occupation on top of my other jobs.

What did I learn in those ten years? At the Rutherford Lab, the value of teamwork and being lightly managed by the two layers above me and being given huge responsibilities in the team at a very young age. In MoT, I learned something about working in the Civil Service though my job was well-defined; again, I had sympathetic managers above me. On Oxford Council, I learned the workings of local government, and how a political party worked, at first hand. This was teamwork of a different kind, whether in Council business as a political group, or organising to win elections. There may be transferable skills here! At CES, I built my own team. At both MoT and CES I recruited some very good people who went on to have distinguished careers. In all cases, the working atmosphere was pretty good. It was in CES, however, I realise with hindsight, that I made a big mistake. I assumed that urban modelling was the key to the future development of planning, probably convinced Henry Chilver, the Director, of this, and I neglected the wider context and people like Peter Wilmott from whom I should have learned much more. That led to the Director being fired as the Trustees sought to widen the brief – or to bring it back to its original purpose – and it nearly cost me my job (as I learned from one of the Trustees) but fortunately, it didn’t. It was also my first experience of a governing body – the Board of Trustees – and another mistake was to leave the relationship with them entirely with the Director – so I had no direct sense of what they were thinking. I left a few months later for the University of Leeds and within a couple of years, I began to have the experience of broader-based management jobs.

I went to Leeds as Professor of Urban and Regional Geography into a School of Geography that was being rebuilt through what would be described as strong leadership but which amounted to bullying at times. I was left to get on with my job: I enjoyed teaching, I was successful in bringing in research grants and I built a good team. The atmosphere, however, was such that after two or three years, I was thinking of leaving. Out of the blue, the Head of Department was appointed as a Polytechnic Head and I found myself as Head of Department. The first management task was to change the atmosphere and an important element of that was to make the monthly staff meeting really count – a key lesson – ‘leadership’ should be through the consent of the staff and this was something I could more or less maintain in later jobs. This isn’t as simple as it sounds of course; there are often difficult disagreements to be resolved. The allocation of work round the staff of the Department was very uneven and I managed to sort that out with a ‘points’ system. I learned the value of PR as the first Research Assessment Exercise approached (in 1987) – by making sure we publicised our research achievements – and I believe (without hard evidence of course!) that this helped us to a top rating (which wasn’t common in the University at the time).

The Geography staff meeting was my first experience of chairing something and I probably learned from my predecessor – how not to do it, how to ensure that you secure the confidence, as far as possible, of the people in the room. I was elected as Head of Department for three three-year spells, alternating with my fellow Professor. I began to take on University roles, and in particular to chair one of the main committees – the Research Degrees Committee – and this led to me being Chair of the Joint Board of the Faculties of Arts, Social and Economic Studies and Law – equivalent of a Dean in modern parlance – which represented a large chunk of the University and was responsible for a wide range of policy and administration. There were many subcommittees. So lots of practice. The Board itself had something of the order of 100 members.

In the late 80s, I was ‘elected’ – quotation marks as there was only one candidate! – as Pro-Vice-Chancellor for 1989-91. There was only one such post at the time and the then VC became Chair of CVCP for those two years and delegated a large chunk of his job to me. The University was not in awful shape, but not good either. Every year, there was a big argument about cuts to departmental budgets. I began thinking about how to turn the ‘ship’ around – a new management challenge on a big scale. It helped that at the end of my first year as PVC, I was appointed as VC-elect from October 91, so in my second year, I could not only plan, but begin to implement some key strategies. It’s a long story so I will simply summarise some key elements – challenges and the beginnings of solutions.

The University had 96 departments and was run through a system of over 100 committees (as listed in the University Calendar) – seriously sclerotic. For example, there were seven Biology Departments, each mostly doing molecular biology. We had to shift the ‘climate’ from one of cost cutting to one of income generation and this was done through delegation of budgets to a reduced number of departments (from 96 to 55), based on cost management but, critically, with delegated income-generating rules – and this became the engine for both growth and transformation. There were winners and losers of course and this led to some serious handling challenges at the margins. (I tried to resolve these by going to department staff meetings to take concerns head on. That sometimes worked, sometimes didn’t!) There was a challenge of how to marry department plans with the University’s plan. The number of committees was substantially reduced – with a focus on three key committees.

In my first two years as VC, there was a lot of opposition to the point where I even started thinking about the minimum number of years I would have to do to leave in a respectable way. By the third year, many of those who had been objecting had taken early retirement and the management responsibilities around the University – Heads of Department, members of key committees – were being filled by a new generation. I ended up staying in post for thirteen years.

Can I summarise at least some of the key principles I learned in that time?

  • Recognising that the University was not a business, but had to be business-like.
  • Having our own strategy within a volatile financial and policy environment; then operating tactically to bring in the resources that we needed to implement our strategy.
  • What underpins my thinking about strategy is the idea that I learned from my friend Britton Harris of the University of Pennsylvania in the 1970s – outlined in a different context earlier. He was a city planner (and urban model builder) and he argued that planning involved three kinds of thinking: policy, design and analysis with the added observation that ‘you very rarely find all three in the same room at the same time’. Apply this to universities: ‘analysis’ means understanding the business model and having all relevant information to hand; ‘policy’ means specifying objectives; ‘design’ means inventing possible plans and working towards identifying the best – which becomes the core strategy. This may well be the most valuable part of my toolkit.
  • Recognising that a large institution – by the end of my period, 33,000 students and 7,000 staff – could not be ‘run’ from the centre, so an effective system of real delegation was critical.
  • The importance of informal meetings and discussions – outside the formal committee system; I had at least termly meetings with all Heads of Department, with members of Council, with the main staff unions, with the Students Union Executive.
  • Openness: particularly of the accounts.
  • Accountability: in reorganising the committee system, I retained a very large Senate of about 200 members, around half of whom would regularly come to meetings.
  • And something not in the job description: realising that how I behaved somehow had an impact on the ethos of the University[115].

By many measures, I was successful as a manager and I learned most of the craft as a Vice-Chancellor. But I was constantly conscious of what I wasn’t succeeding at, so I’m sure my ‘blueprint’ is a partial one. I learned a lot from the management literature. Mintzberg showed me that if an organisation’s front-line workers are high-class professionals and they do not feel involved in the management, you will have problems. (In this respect, I think the university system in the UK has done pretty well, the health service, less so.) Ashby taught me the necessity of devolving responsibility; Christensen taught me about the challenges of disruption and how to work around them. I learned a lot about developing strategy and the challenges of implementation – “strategy is 5% of the problem, implementation is 95%”[116]. I learned a lot about marketing. I tried to encapsulate much of this in running seminars for my academic colleagues and for the University administration. Much later, I wrote a lot of it up in my book, Knowledge power.

So do you really need an MBA? I admire the best of them and their encapsulated knowledge. In my case, I guess I had the apprenticeship version. Over time, it is possible to build an intellectual management toolkit in which you have confidence that it more or less works. I have tried to stick to these principles in subsequent jobs – UCL, AHRC, The Alan Turing Institute. Circumstances are always different, and the toolkit evolves!

8.3. Collaboration

I left CASA in UCL in July 2016 and moved to the new Alan Turing Institute. I’d planned the move to give me a new research environment – as a Fellow with some responsibility for developing an ‘urban’ programme. There were few employees at that stage – most of the researchers (part-time academics as Fellows, some Research Fellows and PhD students) were due in October. I ran a workshop on ‘urban priorities’ and wondered what to do myself with no supporting resources. I was aware that my own research was on the fringes of Turing priorities – ‘data science’. I could claim to be a data scientist – and indeed Anthony Finkelstein[117], then a Trustee and a UCL colleague, in encouraging me to move to Turing, said: “You can’t have ‘big data’ without big models”. However, in Turing, data science meant machine learning and AI rather than modelling as I practised it. So I started to think about a new niche: Darwin in his later years decided to work on ‘smaller problems’, perhaps more manageable. I’m not comparing myself to Darwin, but there may be good advice there! As for machine learning, I put myself on a steep learning curve to learn something new and to fit in, though I couldn’t see how I could manage the ‘10,000 hours’ challenge that would turn me into a credible researcher in that field[118].

At the end of September, everything changed. In odd circumstances – to be described elsewhere – I found myself as the Institute CEO. There was suddenly a huge workload. I reported to a Board of Trustees, there were committees to work with, there were five partner universities to be visited. Above all, a new strategy had to be put in place – hewn out of a mass of ideas and forcefully stated disagreements. I can now begin to record what I learned about a new field of research (for me) and the challenges of setting up a new Institute. I had to learn enough about data science and AI to be able to give presentations about the Institute and its priorities to a wide variety of audiences. I was able to attend seminars and workshops and talk to a great variety of people and, by a process of osmosis, I began to make progress. I will start by recording some of my own experiences of collaboration in the Institute.

The ideal of collaboration is crucial for a national institute. Researchers from different universities, from industry and from the public sector meet in workshops and seminars, and perhaps above all over coffee and lunch in our kitchen area, and new projects, new collaborations emerge. I can offer three examples from those early days, drawn from my own experience, which have enabled me to keep my own research alive in unexpected ways. All represent new interdisciplinary collaborations.

I met Weisi Guo at the August 2016 ‘urban priorities’ workshop. He presented his work on the analysis of global networks connecting cities, which he used to estimate the probability of conflicts. Needless to say, this turned out to be of interest to the Ministry of Defence and, through the Institute’s partnership in this area, a project to develop this work was funded by DSTL. It seemed to me that Weisi’s work could be enhanced by adding flows (spatial interaction) and structural dynamics and we have worked together on this since our first meeting. New collaborators have been brought in and we have published a number of papers. From each of our viewpoints, adding research from a previously unknown field has proved highly fruitful [refs].

The second example took me into the field of health. Mihaela van der Schaar arrived at Turing in October from UCLA, to a Chair in Oxford and as a Turing Fellow. One of her fields of research is the application of machine learning to rapid and precise medical diagnosis and prognosis. This is complex territory, involving taking account of co-morbidities as contributing to the diagnosis and prognosis of any particular disease and as having an impact on treatment plans. I recognised this as important for the Institute and was happy to support it. We had a lucky break early on. I was giving a breakfast briefing to a group of Chairs and CEOs of major companies and at the end of the meeting, I was approached by Caroline Cartellieri who thanked me for the presentation but said she wanted to talk to me about something else: she was a Trustee of the Cystic Fibrosis Trust. This led to Mihaela and her teams – mainly of PhD students – carrying out a project for the Trust which became an important demonstration of what could be achieved more widely – as well as being valuable for the Trust’s own clinicians. For me, it opened up the idea of incorporating the diagnosis and prognosis methods into a ‘learning machine’ which could ultimately be the basis of personalised medicine. And then a further thought: the health learning machine is generic: it can be applied to any flow of people for which there is a possible intervention to achieve an objective. For example, it can be applied to the flow of offenders into and out of prisons, and this idea is now being developed in a project with the Ministry of Justice.

Mihaela’s methods have also sown the seed of a new approach to urban modelling. The data for the co-morbidities analysis is the record over time of the occurrence of earlier diseases. If these events are re-interpreted in the urban modelling context as ‘life events’ – from demographics, birth, migration and death, extended to include entry to education, a new job, a new house and so on – then a new set of tools can be brought to bear.

The third example, still from very early on, probably Autumn 2016, came from me attending for my own education a seminar by Mark Girolami on (I think) the propagation of uncertainty – something I have never been any good at building into urban models. However, I recognised intuitively that his methods seemed to include a piece of mathematics that would possibly solve a problem that has always defeated me: how to predict the distribution of (say) retail centre sizes in a dynamic model. I discussed this with Mark who enthusiastically agreed to offer the problem to a (then) new research student, Louis Ellam. He also brought in an Imperial College colleague, Greg Pavliotis, an expert in statistical mechanics and therefore connected to my style of modelling. Over the next couple of years, the problem was solved and led to a four-author paper in Proceedings A of the Royal Society, with Louis as the first author [ref].

Collaboration in Turing now takes place on a large scale. It has taken me into fruitful new areas, my collaborators making it both manageable for me and adding new skills – thereby solving the ‘10,000 hours’ challenge by proxy!

8.4. Pure vs applied

In my early days as CEO in Turing, I was confronted with an old challenge: pure vs applied, though often in a new language – foundational vs consultancy, for example. In my own experience, from my schooldays onwards, I was always aware of the higher esteem associated with the ‘pure’ and indeed I myself leaned towards the pure end of the spectrum. Even when I started working in physics, I worked in ‘theoretical physics’. It was when I converted to the social sciences that I realised that in my new fields, I could have it both ways: I worked on the basic science of cities, through mathematical and computer modelling, but with outputs that were almost immediately applicable in town and regional planning and indeed commercially. So where did that kind of thinking leave me in trying to think through a strategy for the Institute?

Oversimplifying: there were two camps – the ‘foundational’ and the ‘domain-based’. Some of the former could characterise the latter as ‘mere consultancy’. There were strong feelings. However, there was a core that straddled the camps: brilliant theorists, applying their knowledge in a variety of domains. It was still possible to have it both ways. How to turn this into a strategy – especially given that the root of a strategic plan will be the allocation of resources to different kinds of research? In relatively early days, it must have been June 2017, we had the first meeting of our Science Advisory Board and for the second day, we organised a conference, inviting the members of our Board to give papers. Mike Lynch gave a brilliant lecture on the history of AI through its winters and summers with the implicit question: will the present summer be a lasting one? At the end of his talk, he said something which has stuck in my mind ever since: “The biggest challenge for machine learning is the incorporation of prior knowledge”. I would take this further and expand ‘knowledge’ to ‘domain knowledge’. My intuition was that the most important AI and data science research challenges lay within domains – indeed that the applied problems generated the most challenging foundational problems.

Producing the Institute’s Strategic Plan in the context of a sometimes heated debate was a long drawn out business – taking over a year as I recall. In the end, we had a research strategy based on eight challenges, six of which were located in domains: health, defence and security, finance and the economy, data-centric engineering, public policy and what became ‘AI for science’. We had two cross-cutting themes: algorithms and computer science, and ethics. The choice of challenge areas was strongly influenced by our early funders: the Lloyd’s Register Foundation, GCHQ and MoD, Intel and HSBC. Even without a sponsor at that stage, we couldn’t leave out ‘health’! All of these were underpinned by the data science and machine learning methods toolkit. Essentially, this was a matrix structure: columns as domains, rows as methods – an effective way of relaxing the tensions, of having it both ways. This structure has more or less survived, though with new challenges added – ‘cities’, for example, and the ‘environment’.

When it comes to allocating resources, other forces come into play. Do we need some quick wins? What is the balance between the short term and the longer term – the latter inevitably more speculative? Should industry fund most of the applied work? All of this has to be worked out in the context of a rapidly developing Government research strategy (with the advent of UKRI [add footnote]) and the development of partnerships with both industry and the public sector. There is a golden rule, however, for a research institute (and for many other organisations such as universities): think through your own strategy rather than simply ‘following the money’, which is almost always focused on the short term. Then, given the strategy, operate tactically to find the resources to support it.

In making funding decisions, there is an underlying and impossible question to answer: how much has to be invested in an area to produce results that are truly transformative? This is very much a national question but there is a version of it at the local level. Here is a conjecture: that transformative outcomes in translational areas demand a much larger number of researchers to be funded than to produce such transformations in foundational areas. This is very much for the ‘research’ end of the R and D spectrum – I can see that the ‘D’ – development – can be even more expensive. So what did we end up with? The matrix works and at the same time acknowledges the variety of viewpoints. And we are continually making judgements about priorities and the corresponding financial allocations. Pragmatism kicks in here!

8.5. Leicester City: a good defence and breakaway goals

Followers of English football will be aware that the top tier is the Premier League and that the clubs that finish in the top four at the end of the season play in the European Champions League the following year. These top four places are normally filled by four from a top half-dozen or so – let’s say Manchester United, Manchester City, Arsenal, Chelsea, Tottenham Hotspur and Liverpool. There are one or two others on the fringe. This group does not include Leicester City. At Christmas 2014, Leicester were bottom of the Premier League with relegation looking inevitable. They won seven of their last nine games in that season and survived. At the beginning of the following (2015-16) season, the bookmakers’ odds on them winning the Premier League were 5000-1 against. Towards the end of that season, they topped the league by eight points with four matches to play. They duly became the Premier League champions. The small number of people who might have bet £10 or more on them at the start of the season, and there were a few, made a lot of money!

How was this achieved? They had a very strong defence and so conceded little; they could score ‘on the break’, notably through Jamie Vardy, a centre forward who not long ago was playing for Fleetwood Town in the nether reaches of English football; they had an interesting and experienced manager, Claudio Ranieri; and they worked as a team. It was certainly a phenomenon and the bulk of the football-following population were delighted to see them win the League.

What are the academic equivalents? There are university league tables and it is not difficult to identify a top half dozen. There are tables for departments and subjects. There is a ranking of journals. I don’t think there is an official league table of research groups, but there are certainly some informal ones. As in football, it is very difficult to break into the top group from a long way below. Money follows success – as in the REF (the Research Excellence Framework) – and facilitates the transfer of the top players to the top group. So what is the ‘Leicester City’ strategy for an aspiring university, an aspiring department or research group, or a journal editor? The strong defence must be about having the basics in place – good REF ratings and so on. The goal-scoring breakaway attacks are about ambition and risk-taking. The ‘manager’ can inspire and aspire. And the teamwork: we are almost certainly not as good as we should be in academia, so food for thought there.

Then maybe all of the above requires at the core – and I’m sure Leicester City had these qualities – hard work, confidence, good plans while still being creative; and a preparedness to be different – not to follow the fashion. So when The Times Higher has its ever-expanding annual awards, maybe they should add a ‘Leicester City Award’ for the university that matches their achievement in our own leagues.

8.6. What would Warren Weaver say now?

Warren Weaver was a remarkable man. He was a distinguished mathematician and statistician. He made important early contributions on the possibility of the machine translation of languages. He was a fine writer who recognised the importance of Shannon’s work on communications and the measurement of information and he worked with Shannon to co-author ‘The mathematical theory of communication’. But perhaps above all, he was a highly significant science administrator. For almost 30 years, from 1932, he worked in senior positions for the Rockefeller Foundation, latterly as Vice-president. I guess he had quite a lot of money to spend. From his earliest days with the Foundation, he evolved a strategy which was potentially a game-changer, or at the very least, seriously prescient: he switched his funding priorities from the physical sciences to the biological. In 1948, he published a famous paper in The American Scientist that justified this – maybe with an element of post hoc rationalisation – on the basis of three types of problem (or three types of system – according to taste): simple, of disorganised complexity and of organised complexity. Simple systems have a relatively small number of entities; complex systems have a very large number. The entities in the systems of disorganised complexity interact only weakly; those of organised complexity have entities that interact strongly. In the broadest terms – my language not his – Newton had solved the problems of simple systems and Boltzmann those of disorganised complexity. The biggest research challenges, he argued, were those of systems of organised complexity and more of these were to be found in the biological sciences than the physical. How right he was and it has only been after some decades that ‘complexity science’ has come of age – and become fashionable. (I was happy to re-badge myself as a complexity scientist which may have helped me to secure a rather large research grant.)

There was a famous management scientist, no longer alive, called Peter Drucker. Such was his fame that a book was published confronting various business challenges with the title: ‘What would Peter Drucker say now?’. Since to my knowledge no one has updated Warren Weaver’s analysis, I am tempted to pose the question ‘What would Warren Weaver say now?’. I have used his analysis for some years to argue for more research on urban dynamics – recognising cities as systems of organised complexity. But let’s explore the harder question: given that we recognise urban organised complexity – though we haven’t progressed a huge distance with the research challenge – if Warren Weaver were alive now and could invest in research on cities, what might he say to us? What could the next game changer be? I will argue it for ‘cities’ but I suspect, mutatis mutandis, the argument could be developed for other fields. Let’s start by exploring where we stand against the original Weaver argument.

We can probably say a lot about the ‘simple’ dimension. Computer visualisation for example can generate detailed maps on demand which can provide excellent overviews of urban challenges. We have done pretty well on the systems of disorganised complexity in areas like transport, retail and beyond. This has been done in an explicit Boltzmann-like way with entropy maximising models but also with various alternatives – from random utility models via microsimulation to agent-based modelling (ABM). We have made a start on understanding the slow dynamics with a variety of differential and difference equations, some with roots in the Lotka-Volterra models, some connected to Turing’s model of morphogenesis. What kinds of marks would Weaver give us? Pretty good on the first two: making good use of dramatically increased computing power and associated software development. I think on the disorganised complexity systems, when he saw that we have competing models for representing the same system, he would tell us to get that sorted out: either decide which is best and/or work out the extent to which they are equivalent or not at some underlying level. He might add one big caveat: we have not applied this science systematically and we have missed opportunities to use it to help tackle major urban challenges. On urban dynamics and organised complexity, we would probably get marks for making a goodish start but with a recommendation to invest a lot more[119].
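
To make the reference concrete for readers who have not met these models, the sketch below gives one standard textbook form of a doubly constrained entropy-maximising spatial interaction model; it is offered purely as an illustration of the ‘Boltzmann-like’ family, not as a prescription for any particular application.

```latex
T_{ij} = A_i O_i \, B_j D_j \, e^{-\beta c_{ij}}, \qquad
A_i = \Big[ \sum_j B_j D_j \, e^{-\beta c_{ij}} \Big]^{-1}, \qquad
B_j = \Big[ \sum_i A_i O_i \, e^{-\beta c_{ij}} \Big]^{-1}
```

Here T_ij is the flow from zone i to zone j, O_i and D_j are the known totals leaving i and arriving at j, c_ij is the travel cost, beta is a parameter to be calibrated, and A_i and B_j are balancing factors that enforce the origin and destination constraints.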

So we still have a lot to do – but where do we look for the game changers? Serious application of the science – equilibrium and dynamics – to the major urban challenges could be a game changer. A full development of the dynamics would open up the possibility of ‘genetic planning’ by analogy with ‘genetic medicine’. But for the really new, I think we have to look to rapidly evolving technology. I would single out two examples, and there may be many more. The first is in one sense already old hat: big data. However, I want to argue that if it can be combined with high-speed analytics, this could be a game changer. The second is something which is entirely new to me and may not be well known in the urban sciences: block chains. A block is, in essence, a set of records – accounts, say – and a block chain links such blocks together, replicated across a network of nodes. There is much more to it and it is being presented as a disruptive technology that will transform the financial world (with many job losses?). A block chain transfers money. Could it transfer data and solve security challenges at the same time? If you google it, you will find that it is almost wholly illustrated by the bitcoin system. A challenge is to work out how it could transform urban analytics and planning.
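
Since the idea may be unfamiliar in the urban sciences, here is a minimal sketch, in Python, of the core data structure: blocks of records linked by cryptographic hashes. The field names and records are invented for illustration; this is a toy, not the bitcoin protocol.

```python
import hashlib
import json
import time

def make_block(records, previous_hash):
    """Bundle some records with a hash pointer to the previous block."""
    block = {
        "timestamp": time.time(),
        "records": records,              # could be payments, or data records
        "previous_hash": previous_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Every block must point at the hash of its predecessor."""
    return all(b["previous_hash"] == a["hash"] for a, b in zip(chain, chain[1:]))

# Build a tiny chain: tampering with an early block breaks every later link.
genesis = make_block(["genesis"], previous_hash="0" * 64)
block_1 = make_block(["payment A->B", "sensor reading 42"], genesis["hash"])
block_2 = make_block(["payment B->C"], block_1["hash"])
print(chain_is_valid([genesis, block_1, block_2]))   # True
```

The point of interest here is the second half of the question above: the same structure that lets money move between parties who do not fully trust each other could, in principle, carry data with a verifiable history.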

However, the big question is worth further exploration: what would Warren Weaver say now?

8.7. Best practice

Everything we do, or are responsible for, should aim at adopting ‘best practice’. A basic management principle. This is easier said than done! We need knowledge, capability and capacity. Then maybe there are three categories through which we can seek best practice: (1) from ‘already in practice’ elsewhere; (2) could be in practice somewhere but isn’t: the research has been done but hasn’t been transferred; (3) problem identified, but research needed.

How do we acquire the knowledge? Through reading, networking, CPE courses, visits. Capability is about training, experience, acquiring skills. Capacity is about the availability of capability – access to it – for the services (let us say) that need it. Medicine provides an obvious example; local government another. How does each of the 164 local authorities in England acquire best practice? Dissemination strategies are obviously important. We should also note that there may be central government responsibilities. We can expect markets to deliver skills, capabilities and capacities – through colleges, universities and, in a broad sense, industry itself (in its most refined way through ‘corporate universities’). But in many cases, there will be a market failure and government intervention becomes essential. In a field such as medicine, which is heavily regulated, the Government takes much of the responsibility for ensuring the supply of capability and capacity. There are other fields where, in early stage development, consultants provide the capacity until it becomes mainstream – GMAP in relation to retailing being an example from my own experience. (See Section 6.4 above.)

How does all this work for cities, and in particular, for urban analytics? Good analytics provide a better base for decision making, planning and problem solving in city government. This needs a comprehensive information system which can be effectively interrogated. This can be topped with a high-level ‘dashboard’ with a hierarchy of rich underpinning levels: warning lights might flash at the top to highlight problems lower down the hierarchy for further investigation. It also needs a simulation (modelling) capacity for exploring the consequences of alternative plans. Neither of these needs is typically met. In some specific areas, it is potentially, and sometimes actually, OK: in transport planning in government, or in network optimisation for retailers, for example. A small number of consultants can and do provide skills and capability. But in general, these needs are not met, often not even recognised. This seems to be a good example of a market failure. There is central government funding and action – through research councils and particularly perhaps, Innovate UK. The ‘best practice’ material exists – so we are somewhere in between categories 1 and 2 of the introductory paragraph above. This tempts me to offer as a conjecture the obvious ‘solution’: what is needed are top-class demonstrators. If the benefits were evident, then dissemination mechanisms would follow!
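
To make the ‘dashboard’ idea above a little more concrete, here is a toy sketch in Python of a two-level hierarchy in which problems at the lower level are surfaced as warnings at the top. The indicator names, values and thresholds are entirely invented for illustration.

```python
# A toy two-level 'dashboard': top-level areas, each with lower-level indicators.
# Every value and threshold here is invented purely for illustration.
dashboard = {
    "Transport": {
        "Average bus delay (minutes)": (7.5, 5.0),      # (current value, warning threshold)
        "Road casualties per 100,000": (3.1, 4.0),
    },
    "Housing": {
        "Median rent / median income": (0.42, 0.35),
        "Homelessness acceptances": (120, 150),
    },
}

def warning_lights(board):
    """Surface lower-level problems at the top: any indicator above its threshold."""
    flags = {}
    for area, indicators in board.items():
        breached = [name for name, (value, limit) in indicators.items() if value > limit]
        if breached:
            flags[area] = breached
    return flags

print(warning_lights(dashboard))
# {'Transport': ['Average bus delay (minutes)'], 'Housing': ['Median rent / median income']}
```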

This argument has been presented here in terms of research and its translation into government – the development of best practice in public services for example. However, it can be applied to research management itself: we can use the three categories to think through the implications for managing research in a university, or even within a research group. An exercise for the reader!

8.8. Time management

When I was Chair of AHRC, I occasionally attended small meetings of academics whom we were consulting about various issues – our version of focus groups. On one occasion, we were expecting comments – even complaints – about various AHRC procedures. What we actually heard were strong complaints about the participants’ universities, who ‘didn’t allow them enough time to do research’. This was a function, of course, of the range of demands in contemporary academic life, with at least four areas of work: teaching, research, administration and outreach – all figuring in promotion criteria. There is a classic time management problem lurking here and the question is: can we take some personal responsibility for finding the time to do research amidst this sea of demands? More broadly, can the ‘management’ at a university or research group level find ways of helping the research community with this challenge?

There is a huge literature on time management and I have engaged with it over the years for my own sake as I have tried to juggle the variety of tasks in hand at any one time. The best book I ever found was titled ‘A-time’ by an author whose name I have forgotten – jog my memory please – and which now seems to be out of print. My own copy is long lost. It was linked to a paper system which helped deliver its routines. The fact that it is now out of print is probably linked to the fact that I am talking about a pre-PC age. I used that system. I used Filofax. And it all helped. There was much sensible advice in the book. ‘Do not procrastinate’ was particularly good – I still have a problem but at least I see myself doing it! In the pre-email days, correspondence came in the post and piled up in an in-tray and it didn’t take long for it to form an impossible pile. ‘Do not procrastinate’ meant: deal with it more or less as it comes in. This is true now, of course, of e-mails. I think ‘A-time’ in the title of the book referred to two things: first, sort out your best and most effective time – morning, night, whatever – your A-time; and then divide tasks into A, B and C categories. Then focus your A-time on the A tasks.

So what does this mean for contemporary academic life? Teaching and administration are relatively straightforward and efficiency is the key. Although sometimes derided, Powerpoint – or an equivalent – is a key aid for teaching: once done – no pain, no gain – it can easily be updated (and can easily be an outline of a book[120]!).  Achieving clarity of expression for different audiences can be very satisfying and creative in its own right. Good writing, as a part of good exposition, is a good training for research writing. So teaching may be straightforward, but it is very important. 

Research and outreach are harder. First, research. The choices are harder: what to research, what problem to work on, how to make a difference? How not to simply engage with the pressure to publish for your CV’s sake? Note the argument in Alvesson’s book The triumph of emptiness[121]. So what do we actually do in making research decisions? Here is a mini check list. Define your ‘problems’. Something ‘interesting and important’ – interesting at least to you and important to someone else. Be ambitious. Be aware of what others are doing and work out how you are going to be different, not simply fashionable. All easier said than done of course. And the ‘keeping up’ is potentially incredibly time consuming with the number of journals now current. Form a ‘journals reading club’? All of this is different if you are part of an existing team but you can still think as an individual if only for the sake of your own future.

And finally, outreach. ‘Interesting and important’ kicks in in a different way. Material from both teaching and research can be used. Consultancy becomes possible – though yet another time demand – see Section 6.4!

Thinking things through on all four fronts should produce first a list of pretty routine tasks – administration, ‘keeping up’ and so on. The rest can be bundled into a number of projects. The two together start to form a work plan with short run, middle run and long run elements. If you want to be very textbook about it, you can define your critical success factors – CSFs – but that may be going too far! So, we have a work plan, almost certainly too long and extensive. How do we find the time?

First, be aware of what consumes time: e-mails, meetings, preparing teaching, teaching, supervisions, administration – all of which demand diary management, because we have not yet added ‘research’ to the list. It is important that research is not simply a residual, so time has to be allocated to it. Within the research box, avoid too much repetition – giving more or less the same paper many times at many conferences, for instance. And on outreach, be selective. On all fronts, be prepared to use cracks in time to do something useful. In particular in relation to research, don’t wait for the ‘free’ day or the free week to do the writing. If you have a well-planned outline for a paper, a draft can be written in a sequence of bits of time.

What do I do myself? Am I a paragon of virtue? Of course not, but I do keep a ‘running agenda’ – a list of tasks and projects with a heading at the top that says ‘Immediate’ and a following one that says ‘Priorities’. It ends with a list headed ‘On the backburner’. Quite often the whole thing is too long and needs to be pruned. When I was in Leeds, I used to circulate my running agenda to colleagues because a lot of it concerned joint work of one kind or another. At one point, there were 22 pages of it and, needless to say, I was seriously mocked about it. So, do it, manage it – and control it!!

8.9. On writing

Research has to be ‘written up’. To some, writing comes easily – though I suspect this is on the basis of learning through experience. To many, especially research students at the time of thesis writing, it seems like a mountain to be climbed. There are difficulties of getting started, there are difficulties of keeping going! An overheard conversation in the Centre where I work was reported to me by a third party: “Why don’t you try Alan’s 500 words a day routine?” The advice I had been giving to one student – not a party to this conversation – was obviously being passed around. So let’s try that as a starting point. 500 words doesn’t feel mountainous. If you write 500 words a day, 5 days a week, 4 weeks a month, 10 months a year, you will write 100,000 words: a thesis, or a long book, or a shorter book and four papers. It is the routine of writing that achieves this so the next question is: how to achieve this routine? This arithmetic, of course, refers to the finished product and this needs preparation. In particular, it needs a good and detailed outline. If this can be achieved, it also avoids the argument that ‘I can only write if I have a whole day or a whole week’: the 500 words can be written in an hour or two first thing in the morning, it can be sketched on a train journey. In other words, in bits of time rather than the large chunks that are never available in practice.

The next questions beyond establishing a routine are: what to write and how to write? On the first, content is key: you must have something interesting to say; on the second, what is most important is clarity of expression, which is actually clarity of thought. How you do it is for your own voice and that, combined with clarity, will produce your own style. I can offer one tip on how to achieve clarity of expression: become a journal editor. I was very lucky that early in my career I became first Assistant Editor of Transportation Research (later TR B) and then Editor of Environment and Planning (later EP A). As an editor you often find yourself in a position of thinking ‘There is a really good idea here but the writing is awful – it doesn’t come through’. This can send you back to the author with suggestions for rewriting, though in extreme cases, if the paper is important, you do the rewriting yourself. This process made me realise that my own writing was far from the first rank and I began to edit it as though I were a journal editor. I improved. So the moral can perhaps be stated more broadly: read your own writing through an editor’s eyes – the editor asking ‘What is this person trying to say?’.

The content, in my experience, accumulates over time and there are aids to this. First, always carry a notebook! Second, always have a scratch pad next to you as you write to jot down additional ideas that have to be squeezed in. The ‘how to’ is then a matter of having a good structure. What are the important headings? There may be a need for a cultural shift here. Writing at school is about writing essays and it is often the case that a basic principle is laid down which states: ‘no headings’. I guess this is meant to support good writing so that the structure of the essay and the meaning can be conveyed without headings. I think this is nonsense – though if, say a magazine demands this, then you can delete the headings before submission! This is a battle I am always prepared to fight. In the days when I had tutorial groups, I always encouraged the use of headings. One group refused point blank to do this on the basis of their school principle: ‘no headings in essays’. I did some homework and for the following week, I brought in a book of George Orwell’s essays, many of which had headings. I argued that if George Orwell could do it, so could everybody, and I more or less won.

The headings are the basis of the outline of what is to be written. I would now go further and argue that clarity, especially in academic writing, demands subheadings and sub-subheadings – a hierarchy in fact. This is now reinforced by the common use of Powerpoint for presentations. This is a form of structured writing and Powerpoint bullets, with sequences of indents, are hierarchical – so we are now all more likely to  be brought up with this way of thinking. Indeed, I once had a sequence of around 200 Powerpoint slides for a lecture course. I produced a short book by using this as my outline. I converted the slides to Word, and then I converted the now bullet-less text to prose.

I am a big fan of numbered and hierarchical outlines: 1, 1.1, 1.1.1, 1.1.2, 1.2, … 2, 2.1, 2.1.1, 2.1.2, etc. This is an incredibly powerful tool. At the top level are, say, six main headings, then maybe six subheadings and so on. The structure will change as the writing evolves – a main heading disappears and another one appears. This is so powerful that I became curious about who invented it and resorted to Google. There is no clear answer, and indeed it says something about the contemporary age that most of the references offer advice on how to use this system in Microsoft Word! However, I suspect the origins are probably in Dewey’s library classification system – still in use – in effect a classification of knowledge. Google ‘Dewey decimal classification’ to find its nineteenth-century history.
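
For what it is worth, the numbering itself is easy to automate. The sketch below is a toy Python illustration, with invented section titles, of how a nested outline can be numbered recursively in exactly this 1, 1.1, 1.1.1 fashion; it is not the Word feature referred to above.

```python
def number_outline(items, prefix=""):
    """Recursively number a nested outline: 1, 1.1, 1.1.1 and so on."""
    lines = []
    for i, (title, children) in enumerate(items, start=1):
        label = f"{i}" if not prefix else f"{prefix}.{i}"
        lines.append(f"{label} {title}")
        lines.extend(number_outline(children, prefix=label))
    return lines

# A hypothetical outline for a paper, purely for illustration.
outline = [
    ("Introduction", [("Motivation", []), ("Structure of the paper", [])]),
    ("The model", [("Assumptions", []), ("Dynamics", [("Equilibrium", [])])]),
    ("Conclusions", []),
]

print("\n".join(number_outline(outline)))
# 1 Introduction
# 1.1 Motivation
# 1.2 Structure of the paper
# 2 The model
# ...
```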

There are refinements to be offered on the ‘What to ….’ and ‘How to ….’ questions. What genre: an academic paper, a book – a text book? – a paper intended to influence policy, written for politicians or civil servants? In part, this can be formulated as ‘be clear about your audience’. One academic audience can be assumed to be familiar with your technical language; another may be one that you are trying to draw into an interdisciplinary project and might need more explanation. A policy audience probably has no interest in the technicalities but would like to be assured that they are receiving real ‘evidence’.

What next? Start writing, experiment; above all, always have something on the go – a chapter, a paper or a blog piece. Jot down new outlines in that notebook. As Mr Selfridge said, ‘There’s no fun like work!’ Think of writing as fun. It can be very rewarding – when it’s finished!!

Chapter 9. Organising research

9.1. Introduction

The intention in this chapter is to explore the broader question of the research landscape at bigger scales: how the available funding should be allocated, for example. But first we take the example of a field that is a professional discipline but is, in character, essentially interdisciplinary: operational research (OR). This is a field that involves the application of a range of techniques – essentially modelling – to a wide variety of domains, and therefore combines technical knowledge with domain knowledge. In effect, we pose the ‘Weaver question’ (section 8.6 above) for the future of OR (9.2 below). We then turn to the wider question of the allocation of research funding – which in effect means identifying future research priorities: the Weaver question again, taking in this case the UK as an example (9.3).

9.2. OR in the Age of AI

In 2017, I was awarded Honorary Membership of the Operational Research Society. I felt duly honoured, not least because I had considered myself, in part, an operational researcher since the 1970s, had published in the Society’s Journal and had been a Fellow at a time when that had a different status. However, there was a price! For the following year, I was invited to give the annual Blackett Lecture, delivered to a large audience at the Royal Society in November 2018. The choice of topic was mine. Developments in data science and AI are impacting most disciplines, not least OR. I thought that was something to explore and that gave me a snappy title: OR in the Age of AI. In the context of this book, this raises the question of setting research priorities in a specific field which is essentially interdisciplinary.

OR shares the same enabling disciplines as data science and AI and (in outline) is concerned with system modelling, optimisation, decision support, and planning and delivery. The systems focus forces interdisciplinarity and indeed this list shows that insofar as it is a discipline, it shares its field with many others. If we take decision support and planning and delivery as at least in part distinguishing OR, then we can see it is applied and it supports a wide range of customers and clients. These have been through three industrial revolutions and AI promises a fourth. We can think of these customers, public or private, as being organisations driven by business processes. What AI can do is read and write, hear and see, and translate, and these wonders will transform many of these business processes. There will be more complicated shifts – robotics, including soft robotics, understanding markets better, using rules-based algorithms to automate processes – some of them large-scale and complicated. In many ways all this is classic OR with new technologies. It is ground-breaking, it is cost saving, and does deplete jobs. But in some ways it’s not dramatic.

The bigger opportunities come from the scale of available data, computing power and then two things: the ability of systems to learn; and the application to big systems, mainly in the public sector, not driven in the way that profit-maximising industries are. For OR, this means that the traditional roles of its practitioners will continue, albeit employing new technologies; and there is a danger that because these territories overlap across many fields – whether in universities or the big consultancies – there will be many competitors that could shrink the role of OR. The question then is:  can OR take on leadership roles in the areas of the bigger challenges?

Almost every department of Government has these challenges – and indeed many of them, say those associated with the criminal justice system, embrace a range of government departments, each operating in its own silo, not combining to secure the advantages that could be achieved if they linked their data. They all have system modelling and/or ‘learning machine’ challenges. Can OR break into these?

The way to break in is through ambitious proof-of-concept research projects – the ‘R’ part of R and D – which then become the basis for large scale development projects, the ‘D’. There is almost certainly a systemic problem here. There have been large scale ambitious projects, usually concerned with building data systems (arguably a prerequisite), and many of these fail. But most of the funded research projects are relatively small and the big ‘linking’ projects are not tackled. So the challenge for OR, for me, is to open up to the large-scale challenges, particularly in government – to ‘think big’.

The OR community can’t do this alone, of course. However, there is a very substantial OR service in government – one of the recognised analytics professions – and there is the possibility of asserting more influence from within. But the Government itself has a responsibility to ensure that its investment in research is geared to meet these challenges. This has to be a UKRI responsibility – ensuring that research council money is not too thinly spread, and ensuring that research councils work effectively together as most of the big challenges are interdisciplinary and cross-council. Government Departments themselves should both articulate their own research challenges and be prepared to fund them.

9.3. Research and innovation in an ecosystem

UK Research and Innovation (UKRI) is a key node in a complex UK – indeed international – research ecosystem. It can offer strategic direction and for many it will be a key funder.[122]

How are the strategic priorities of a research ecosystem categorised? I am a researcher who has worked in a national institute (what is today the Rutherford Lab) and as a university professor building research teams on Research Council grants. I was a founder and director of a spin-out company, a university vice-chancellor, Chair of a Research Council and, recently, CEO of The Alan Turing Institute.

In all these activities, there are common questions and challenges. There is a need to acquire knowledge of the current landscape, to decide where to invest resources, and to work out how to build capacity and skills. There is also the question of how to connect a top-down strategy with bottom-up creativity. All of these are challenges, on a much bigger scale, for UKRI.

Where are the potential game changers in research?  Some will be rooted in pure science, while others will be related to wider societal challenges, like curing cancer.  Another key consideration is where knowledge can be applied. That can be used as a working definition of ‘innovation’.

So, how to set about answering these questions? A systems perspective is nearly always valuable: what is the system of interest and how is it embedded in other systems? As ever, the systems view forces an interdisciplinary perspective. At what scale is the research to be focussed? Any system of interest will in fact be embedded in a hierarchy of supra-systems and sub-systems. Most innovation comes from the lower reaches of the hierarchy[123]; and what is more, these discoveries can often be transferred to other domains. Take computers, for example: invented as calculating machines, they are now ubiquitous in a wide range of systems. Contributions to strategy can come top-down from institutions (reading the landscape and horizon scanning) or bottom-up from individual researchers. Impact also plays an important part. Does anyone want to do research that has no impact? I doubt it, but ‘impact’ should include transformative change in and across disciplines just as much as in industry and the public sector. Perhaps we have been too narrow in our definition of impact. These challenges, questions and approaches have to be addressed at each node in the ecosystem. Then the nodes must be effectively connected. For example, money has to flow in the direction of the potential game-changers and high impact innovations. Each node, from the individual piece of research up to UKRI, has to have a strategy, grounded in experience but employing horizon scanning and imagination.

The ecosystem has not been functioning effectively for some time – notably in the transfer of research findings into industry and the public sector.  Herein lies a particular challenge for UKRI.  Its strategy has to be open to the ‘bottom up’; and to incentivise Research Councils, Innovate UK, the universities, the Research Institutes and, not least, industry.  It needs to do all these things if it is to have a chance of delivering game-changers and ground-breaking innovations.

To build an effective strategy, UKRI will have to:

  • identify and build on strengths and opportunities – both the people with track records and the early career researchers with skills, imagination and ambition – there is a top-down vs bottom-up aspect here;
  • find ways of avoiding the conservatism of peer review, which is reinforced by the Research Excellence Framework. I believe that universities do not always provide the right incentives, insisting on volume of publication and focusing promotion on ‘top journals’. This has skewed the motivation of researchers, particularly by neglecting applied research whose outputs do not qualify for the selected journals.

Industry has a role to play.  Where are the modern equivalents of Bell Labs? How much R&D is now being done in start-ups with the big players relying on purchasing success?  While there are many excellent examples of industry-university joint working, there could perhaps be many more.

Another strategic question which demands sensitive judgement relates to the size of research groups. What should be located at the ‘big science’ end of the spectrum? There are established successes, from CERN to Sanger; there are new Institutes like Turing and Diamond, with others in development. Yet is the average size of a research group in a university too small? Are there potential ‘big science’ areas that are not funded as such? ‘Cities’, for example, falls into this category. Indeed, how do we value different fields of research for public funding? Health, education, justice – all are obviously important. Basic research is needed to support future industrial development. Should there be more applied research as well, in both industry and the public sector?

We can then recap on the Weaver question. In the 1950s, Warren Weaver was the Science Vice-President of the Rockefeller Foundation.  He argued that systems of interest fell into three categories, those that were simple; of disorganised complexity; of organised complexity. Roughly speaking, the first two represented (among other things) the physical sciences of the time, while the third comprised biology.  He switched his funding from physics to biology.  That was a prescient decision. Is there an equivalent diagnosis to be made now? UKRI’s strategy needs to be connected to the social questions of our time: climate change and sustainability; the future of work and incomes; growing social inequalities.  Does this agenda demand a Weaver-like shift?

While I have focussed on questions specifically relating to UKRI strategy, in reality every element of the research ecosystem needs strategic thinking: from universities and institutes, through industry and Government Departments, to individual researchers. All of it needs to be strongly connected to translational and development ecosystems. These challenges are articulated in a report published by the British Academy and the Royal Society in 2019[124]. The underpinning ecosystem concept is vital and three main categories of ‘interest’ are used: the researchers themselves; the practitioners – more broadly, the users of research; and the policy makers (who could include both those working in a specific area like education and those concerned with research policy). The ecosystem players create a variety of demands for research. The researchers and some associated policy makers might focus on the ‘blue skies’ dimension. The users will have more or less well-articulated demands. The policy makers in a domain will want to be ‘science-led’ (that is, ‘research-led’). The research policy makers will have the challenge of allocating funds in a situation where resources are scarce and cannot possibly meet all the demand[125].

In conclusion, we can return to the core argument which is implicit throughout this book. Many – most? – of the research challenges are interdisciplinary. The research ecosystem still has many of its most powerful nodes rooted in disciplines. There is a major restructuring challenge: to re-balance from within-discipline to interdisciplinary foundations.

Bibliography.

Ackoff, R. L. (1999) Ackoff’s best: his classic writings on management, John Wiley, New York.

Aleksander, I. and Morton, H. (1990) An introduction to neural computing, Chapman and Hall, London.

Alexander, C. (1964) Notes on the synthesis of form, Harvard University Press, Cambridge, Mass.

Alvesson, M. (2013) The triumph of emptiness, Oxford University Press, Oxford

Andersson, C. (2005) Urban evolution, Department of Physical Resource Theory, Chalmers University of Technology, Goteborg.

Anderson, P. W., Arrow, K. J., and Pines, D. (editors) (1988) The economy as an evolving complex system, Addison Wesley, Menlo Park, California.

Angier, N. (2007) The canon: the beautiful basics of science, Faber and Faber, London

Arthur, W. B. (1988) Urban systems and historical path dependence, in Ausubel, J. H. and Herman, R. (editors) Cities and their vital systems: infrastructure, past, present and future, National Academy Press, Washington, DC

Arthur, W. B. (1994-A) Increasing returns and path dependence in the economy, University of Michigan Press, Ann Arbor, Michigan.

Arthur, W. B. (1994-B) Inductive reasoning and bounded rationality, American Economic Association Papers and Proceedings, 84, pp. 406-411.

Arthur, W. B. (2009) The nature of technology, The Free Press, New York

Ashby, W. R. (1956) An introduction to cybernetics, Chapman and Hall, London

Bailey, F. G. (1977) Morality and expediency: the folklore of academic politics, Blackwell, Oxford.

Barber M. (2016) How to run a government, Penguin, Harmondsworth

Baudains, P; Zamazalová, S; Altaweel, M; Wilson, A. G., 2015. Modeling Strategic Decisions in the Formation of the Early Neo-Assyrian Empire. Cliodynamics: The Journal of Quantitative History and Cultural Evolution, 6(1), 1-23.

Baudains, P. and Wilson, A. (2016) Conflict modelling: spatial interaction as threat, in Wilson, A. (ed.) Global dynamics, Wiley, Chichester, pp. 145 – 158

Becher, T. (1989) Academic tribes and territories, Open University Press, Milton Keynes.

Beck, C. and Schlogl, F. (1993) Thermodynamics of chaotic systems, Cambridge University Press, Cambridge.

Becker, H. (1998) Tricks of the trade: how to think about your research while you’re actually doing it, The University of Chicago Press, Chicago.

Beer, S.  (1972, Second Edition, 1981) Brain of the firm, John Wiley, Chichester

Beer, S. (1994) Designing freedom, John Wiley, Chichester.

Bertalanffy, L. von (1968) General system theory, Braziller, New York.

Bevan, A. and Wilson, A. G., 2013. Models of settlement hierarchy based on partial evidence. Journal of Archaeological Science, 40 (5), pp. 2415-2427.

Birkin, M., Clarke, G. P. , Clarke, M. and Wilson, A. G. (1996) Intelligent GIS: location decisions and strategic planning, Geoinformation International, Cambridge.

Blum, B. I. (1996) Beyond programming: to a new era of design, Oxford University Press, New York.

Bochel, H. and Duncan, S. (eds.) (2007) Making policy in theory and practice, The Policy Press, Bristol

Bok, D. (1986) Higher learning, Harvard University Press, Cambridge, Mass.

Boltzmann, L. (1896, translated by S. G. Brush, 1964) Lectures on gas theory, University of California Press, Berkeley and Los Angeles

Brouwer, L. E. J. (1910) Über ein-eindeutige, stetige Transformationen von Flächen in sich, Mathematische Annalen, 67, 176-80.

Chapman, G. T. (1977) Human and environmental systems, Academic Press, London.

Checkland, P. (1981) Systems thinking, systems practice, John Wiley, Chichester.

Checkland, P. and Scholes, J. (1991) Soft systems methodology in action, John Wiley, Chichester.

Checkland, P. and Holwell, S. (1998) Information, systems and information systems, John Wiley, Chichester.

Christaller, W. (1933) Die zentralen Orte in Süddeutschland, Gustav Fischer, Jena; English translation by Baskin, C. W. Central places in Southern Germany, Prentice Hall, Englewood Cliffs, N. J.

Christensen, C. M. (1997) The innovator’s dilemma, Harper Business, New York.

Cisco (2007) Equipping every learner for the 21st Century, Cisco Systems Inc, San Jose, Ca.

Clark, B. R. (1998) Creating entrepreneurial universities: organizational pathways of transformation, Pergamon, Oxford.

Clarke, G. P. (ed.) (1996) Microsimulation for urban and regional policy analysis, Pion, London

Clarke, G. P. and Wilson, A. G. (1987-A)  Performance indicators and model-based planning I:  the indicator movement and the possibilities for urban planning, Sistemi Urbani, 2, 79-123

Clarke, G. P. and Wilson, A. G. (1987-B) Performance indicators and model-based planning II:  model-based  approaches, Sistemi Urbani, 9,  138-165

Clarke, M. (2020) How Geography changed the world, and my small part in it, Sweet Design Ltd, Bristol.

Cromer, A. (1997) Connected knowledge: science, philosophy and education, Oxford University Press, Oxford

Cronon, W. (1991) Nature’s metropolis: Chicago and the Great West, W. W. Norton, New York.

Dantzig, G. B. (1963) Linear programming and extensions, Princeton University Press, Princeton, N. J.

Davies, T., Fry, H., Wilson, A. G., and Bishop, S.R., 2013 A mathematical model of the London riots and their policing. Nature Scientific Reports 3, 1303. doi:10.1038/srep01303

Davies, T., Fry, H., Wilson, A., Palmisano, A., Altaweel, M. and Radner, K. 2014. Application of an Entropy Maximizing and Dynamics Model for Understanding Settlement Structure: The Khabur Triangle in the Middle Bronze and Iron Ages.  Journal of Archaeological Science doi:10.1016/j.jas.2013.12.014.

Dearden, J., Gong, Y., Jones, M. and Wilson, A., Using state space of a BLV retail model to analyse the dynamics and categorise phase transitions of urban development, Urban Science, 3, 31-47, doi.10.3390/urbansci.3010031

Dearden, J. and Wilson, A.G., 2011-A. A framework for exploring urban retail discontinuities. Geographical Analysis, 43 (2), pp. 172-187

Dearden, J. and Wilson, A. G., 2011-B. The Relationship of Dynamic Entropy Maximising and Agent-Based Approaches. In: Heppenstall, A.J., Crooks, A.T., See, L.M., Batty, M. (eds.) Urban Modelling in Agent-Based Models of Geographical Systems. 2011. Berlin: Springer, Chp. 35-, pp. 705-720.

Dearden, J. and Wilson, A. G., 2015, Explorations in Urban and Regional Dynamics. Abingdon: Routledge.

Dennett, A. and Wilson, A. G., 2013. A multi-level spatial interaction modelling framework for estimating interregional migration in Europe. Environment and Planning A, 45: 1491-1507

Dijkstra, E. W. (1959) A note on two problems in connection with graphs, Numerische Mathematik, 1, pp. 269-271.

Drucker, P. F. (1989) The new realities, Heinemann, London.

Ellam, L., Girolami, M., Pavliotis, G. and Wilson, A. (2018) Stochastic modelling of urban structure, Proceedings of the Royal Society A, 474: 20170700, doi:10.1098/rspa.2017.0700.

Epstein, J. M. and Axtell, R. (1996) Growing artificial societies: social science from the bottom up, MIT Press, Cambridge, Mass.

Epstein, J. M. (1997) Nonlinear dynamics, mathematical biology and social science, Addison-Wesley, Reading Ma.

Evans, S. P. (1973) A relationship between the gravity model for trip distribution and the transportation model of linear programming, Transportation Research, 7, 39-61.

Feynman, R. P., Leighton, R. B. and Sands, M. (1963) The Feynman lectures on physics, Addison-Wesley, Reading, Ma.

Foster, C. D. (2005) British government in crisis: the third English revolution, Hart Publishing, Oxford and Portland, Oregon

Foster, C. D. and Beesley, M. E. (1963) Estimating the social benefit of constructing an underground railway in London, Journal of the Royal Statistical Society, A, 126, 46-92

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. and Trow, M. (1994) The new production of knowledge: the dynamics of science and research in contemporary societies, Sage, London.

Gladwell, M. (2008) Outliers: the story of success, Little Brown, New York

Glass, N. M. (1996) Management masterclass: a practical guide to the new realities of business, Nicholas Brealey, London.

Government Office for Science (2013) Future of cities project reports:

Overview: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/520963/GS-16-6-future-of-cities-an-overview-of-the-evidence.pdf

Science of Cities: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516407/gs-16-6-future-cities-science-of-cities.pdf

Foresight for Cities: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516443/gs-16-5-future-cities-foresight-for-cities.pdf

Graduate Mobility: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/510421/gs-16-4-future-of-cities-graduate-mobility.pdf

Guo, W., Gleditsch, K. and Wilson, A. (2019) Retool AI to forecast and limit wars, Nature, 562, 331-333

Habermas, J. (1974) Theory and practice, Heinemann, London.

Haggett, P. (1965) Locational analysis in human geography, Edward Arnold, London

Hall, P. (1988) Cities of tomorrow, Blackwell, Oxford

Hamilton, I. (2002) Against oblivion, Viking Penguin, London

Hansen, W. G. (1959) How accessibility shapes land use, Journal of the American Institute of Planners, 25, pp. 73-76.

Harris, B. (1965) Urban development models: new tools for planners, Journal of the American Institute of Planners, 31, pp. 90-95.

Harris, B. and Wilson, A. G. (1978) Equilibrium values and dynamics of attractiveness terms in production-constrained spatial-interaction models, Environment and Planning, A, 10,  371-88.

Hawken, P.  (1993) The ecology of commerce, Harper-Collins, New York

Herbert, J. D. and Stevens, B. H. (1960) A model for the distribution of residential activity in urban areas, Journal of Regional Science, 2, 21-36.

Hidalgo, C. (2015) Why information grows: the evolution of order, from atoms to economies, Basic Books, New York.

Holland, J. H. (1992) Adaptation in natural and artificial systems: an introductory analysis with applications in biology, control and artificial intelligence, MIT Press, Cambridge, Mass.

Holland, J. H. (1995) Hidden order: how adaptation builds complexity, Addison-Wesley, Reading, Mass.

Holland, J. H. (1998) Emergence, Addison-Wesley, Reading, Mass.

Horgan, J. (1997) The end of science, Little Brown and Co. (UK), London.

Hotelling, H. (1929) Stability in competition, Economic Journal, 39, pp. 41-57.

Hudson, L. (1972) The cult of the fact, Jonathan Cape, London

Isard, W. (1956) Location and the space-economy, MIT Press, Cambridge, Ma.

Isard, W. (1960) Methods of regional analysis, MIT Press, Cambridge, Ma.

Jacobs, J. (1970) The economy of cities, Jonathan Cape, London

Jaynes, E. T. (1957) Information theory and statistical mechanics, Physical Review, 106, pp. 620-630.

Jaynes, E. T. (2003) Probability theory, Cambridge University Press, Cambridge.

Johnson, G. (1994) University politics: F. M. Cornford’s Cambridge and his advice to the young academic politician, Cambridge University Press, Cambridge.

Johnson, S. (2010) Where good ideas come from: the natural history of innovation, Penguin, London

Jordan, M. I. (2019) Artificial intelligence – the revolution hasn’t happened yet, Harvard Data Science Review, 7(1), https://doi.org/10.1162/99608f92.f06c6e6

Kaplan, R. S. and Norton, D. P. (2005) The Office of Strategy Management, Harvard Business Review.

Kim, T. J., Boyce, D. E. and Hewings, G. J. D. (1983) Combined input-output and commodity flow models for interregional development planning, Geographical Analysis, 15, 330-342

Klir, J. and Valach, M. (1967) Cybernetic modelling, Iliffe, London.

Kostitzin, V. A. (1939) Mathematical biology, Harrap, London

Kronman, A. T. (2007) Education’s end: why our colleges and universities have given up on the meaning of life, Yale University Press, New Haven, Conn.

Lakoff, G. and Johnson, M. (1980) Metaphors we live by, The University of Chicago Press, Chicago.

Lakshmanan, T. R. and Hansen, W. G. (1965) A retail market potential model, Journal of the American Institute of Planners, 31, 134-143.

Lancaster, K. (1966) A new approach to consumer theory, The Journal of Political Economy, 74, 132-157.

Lee, D. B. (1973) Requiem for large-scale models, Journal of the American Institute of Planners, 39, 163-78.

Leontief, W. (1967) Input-output analysis, Oxford University Press, Oxford.

Lotka, A. J. (1925) The elements of physical biology, Williams and Wilkins, Baltimore.

Lowry, I. S. (1964) A model of metropolis, RM-4035-RC, The Rand Corporation, Santa Monica

McCann, P. (2001) Urban and regional economics, Oxford University Press, Oxford

Madge, J., Colavizza, G., Hetherington, J., Guo, W. and Wilson, A. (2019) Assessing Simulations of Imperial Dynamics and Conflict in the Ancient World, Cliodynamics, 10(2), 25-39

May, R. M. (1971) Stability in multi-species community models, Mathematical Biosciences, 12, 59-79.

May, R. M. (1973) Stability and complexity in model ecosystems, Princeton University Press, Princeton, New Jersey.

Medda, F. R., Caravelli, F., Caschili, S. and Wilson, A., 2017, Collaborative approach to trade: enhancing connectivity in sea- and land-locked countries, Springer, Heidelberg

Mintzberg, H. (1989) Structure in fives: designing effective organisations, Prentice-Hall, Englewood Cliffs, NJ

Mullan, J. (2006) How novels work, Oxford University Press, Oxford

Nadler, D. A., Gerstein, M. S., Shaw, R. B. and associates (1992) Organisational architecture: designs for changing organisations, Jossey-Bass, San Francisco

Nash, J. (1950) Equilibrium points in n-person games, Proceedings of the National Academy of Sciences, 36, 48-49.

National Academy of Sciences, National Academy of Engineering and Institute of Medicine (2007) Rising above the gathering storm: energizing and employing America for a brighter economic future, The National Academies Press, Washington, D. C.

National Infrastructure Commission (2018) Data for the public good

Neumann, J. von (1966) Theory of self-reproducing automata, University of Illinois Press, Urbana.

Newman, M., Barabasi, A-L. and Watts, D. J. (2006) The structure and dynamics of networks, Princeton University Press, Princeton, N.J.

Nicolis, G. and Prigogine, I. (1977) Self-organisation in non-equilibrium systems: from dissipative structures to order through fluctuations, John Wiley, Chichester.

Nowak, M. A. (2006) Evolutionary dynamics: exploring the equations of life, Belknap Press of Harvard University Press, Cambridge, Ma.

Nowak, M. A. and May, R. M. (2000) Virus dynamics: mathematical principles of immunology and virology, Oxford University Press, Oxford.

Nystuen, J. D. and Dacey, M. F. (1961) A graph theory interpretation of nodal regions, Papers, Regional Science Association, 7, 29-42.

Orcutt, G. H. (1957) A new type of socio-economic system, Review of Economic Statistics, 58, pp. 773-797.

Pagliara, F., de Bok, M., Simmonds, D. and Wilson, A. G., eds, 2012, Employment location in cities and regions: models and applications, Heidelberg: Springer.

Papadimitriou, C. H. and Steiglitz, K. (1982) Combinatorial optimization: algorithms and complexity, Prentice Hall, Englewood Cliffs, N. J.; Dover edition, 1998.

Polya, G. (1945) How to solve it, Princeton University Press, Princeton, N. J.

Quandt, R. E. and Baumol, W. J. (1966) The demand for abstract transport modes: theory and measurement, Journal of Regional Science, 6, 13-26.

Rees, P. H. and Wilson, A. G. (1976) Spatial population analysis, Edward Arnold, London.

Rihll, T. E. and Wilson, A. G. (1987-A) Spatial interaction and structural models in historical analysis: some possibilities and an example, Histoire et Mesure II-1,  5-32.

Rihll, T. E. and Wilson, A. G. (1987-B)  Model-based approaches to the analysis of regional settlement structures: the case of ancient Greece, in Denley, P. and Hopkin, D.,  10-20.

Rihll, T. E. and Wilson, A. G. (1991)  Settlement structures in Ancient Greece: new approaches to the polis, in Rich, J.  and Wallace-Hadrill, A. , 58-95.

Richardson, L. F. (1960) Arms and insecurity, The Boxwood Press, Pittsburgh

Rittel, H. W. J. and Webber, M. M. (1973) Dilemmas in a general theory of planning, Policy Sciences, 4, 155-169

Robinson, K. (2009) The element: how finding your passion changes everything, Allen Lane, Penguin, London

Royal Society (2019) Harnessing education research, British Academy and the Royal Society, London

Rosser, J. B.  Jr. (1991) From catastrophe to chaos: a general theory of economic discontinuities, Kluwer Academic Publishers, Boston.

Roumpani and Wilson (forthcoming) A two-tier model for urban planning.

Ruelle, D. (1991) Chance and chaos, Penguin, Harmondsworth.

Ruelle, D. (2002) The thermodynamic formalism: the mathematical structure of equilibrium statistical mechanics, Cambridge University Press, Cambridge.

Sadler, P. (1991) Designing organisations, Mercury Books, London

Scarf, H.  (1973-B) Fixed-point theorems and economic analysis, American Scientist, 71, pp. 289-296.

Scarf, H. (1973-A) The computation of economic equilibria, Yale University Press, New Haven.

Schlecty, P. C. (2005) Creating great schools: six critical systems at the heart of educational innovation, Jossey-Bass, San Francisco, Ca.

Sennett, R.  (1998) The corrosion of character, W. W. Norton, New York

Sennett, R. (2006) The culture of the new capitalism, Yale University Press, New Haven, Conn.

Sennett, R. (2008) The craftsman, Allen Lane, Penguin, London

Shannon, C. and Weaver, W. (1949) The mathematical theory of communication, University of Illinois Press, Urbana.

Senior, M. L. and Wilson, A. G. (1974) Explorations and syntheses of linear programming and spatial interaction models of residential location, Geographical Analysis, 6, 209-238

Simmonds, D. (1999) The design of the DELTA land-use modelling package, Environment and Planning, B, 26, 665-684

Simon H. A. (1996, 3rd Edition) The sciences of the artificial, MIT Press, Cambridge, Mass.

Singleton, A. D., Wilson, A. G. and O’Brien, O., 2012. Geodemographics and spatial interaction: an integrated model for higher education. Journal of Geographical Systems, 14: 223-241, DOI: 10.1007/s10109-010-0141-5.

Stern, N. (2007) The economics of climate change, Cambridge University Press, Cambridge.

Stewart, I. (2012) Seventeen equations that changed the world, Profile Books, London.

Stone, R. (1967) Mathematics in the social sciences, Chapman and Hall, London.

Stone, R. (1970) Mathematical models of the economy, Chapman and Hall, London.

Thom, R. (1975) Structural stability and morphogenesis, W. A. Benjamin, Reading, Mass.

Thomsen, E. (1997) OLAP solutions: building multidimensional information systems, John Wiley, New York.

Thünen, J. H. von (1826) Der isolierte Staat in Beziehung auf Landwirtschaft und Nationalökonomie, Gustav Fischer, Stuttgart; English translation by C. M. Wartenburg (1966) The isolated state, Oxford University Press, Oxford.

Turing, A. M. (1952) The chemical basis of morphogenesis, Philosophical Transactions of the Royal Society of London, series B, 237, pp. 37-72.

Volterra, V. (1938) Population growth, equilibria and extinction under specified breeding conditions: a development and extension of the theory of the logistic curve, Human Biology, 10.

Weaver, W. (1948) Science and complexity, American Scientist, 36,  536 – 544.

Weaver, W. (1958) A quarter century in the natural sciences, Annual Report, The Rockefeller Foundation, New York, pp. 7-122.

Wiener, N. (1994 edition) Invention, MIT Press, Cambridge, Mass.

Williams, H. C. W. L. (1977) On the formation of travel demand models and economic evaluation measures of user benefit, Environment and Planning, A, 9, 285-344

Wilson, A. G. (1967) A statistical theory of spatial distribution models, Transportation Research, 1,  253-69

Wilson, A. G. (1970) Entropy in urban and regional modelling, Pion, London.

Wilson, A. G. (1971) Generalising the Lowry model, London Papers in Regional Science, 2,  121-134

Wilson, A. G. (1974) Urban and regional models in geography and planning,  John Wiley, Chichester and New York

Wilson, A. G. (1978) Spatial interaction and settlement structure: towards an explicit central place theory, in Karlquist, A., Lundquist, L., Snickars, F. and Weibull, J. W. (eds.) Spatial interaction theory and planning models, North Holland, Amsterdam, 137-56.

Wilson, A. G. (1981) Catastrophe theory and bifurcation: applications to urban and regional systems, Croom Helm, London; University of California Press, Berkeley.

Wilson, A. G. (1983-A) Varieties of structuralism, Working Paper 350, School of Geography, University of Leeds, 1983.

Wilson, A. G. (1983-B) From the specific to the general, Times Higher Education Supplement, 14 October.

Wilson, A. G. (1983-C) Billiard cue, Times Higher Education Supplement, 18 November.

Wilson, A. G. (1983-D) The best of both worlds?  Times Higher Education Supplement, 23 December.

Wilson, A. G. (1984-A) New foundations laid for a general approach, Times Higher Education  Supplement, 10 February.

Wilson, A. G. (1984-B) Adding a degree of subtlety, Times Higher Education Supplement, 23 March.

Wilson, A. G. (1984-C) Understanding each other, Times Higher Education Supplement, 20 April.

Wilson, A. G. (1984-D) Catastrophe theory, Times Higher Education Supplement, 25 May.

Wilson, A. G. (1984-E) When decoding is encoding, Times Higher Education Supplement, 29 June.

Wilson, A. G. (1984-F) Reticular research, Times Higher Education Supplement, 20 July.

Wilson, A. G. (1984-G) Jam on the bread and butter, Times Higher Education Supplement, 28 September.

Wilson, A. G. (1984-H) Solving the problems, Times Higher Education Supplement, 30 November.

Wilson, A. G. (1985-A) Useful philosophy, Working Paper 437, School of Geography, University of Leeds.

Wilson, A. G. (1985-B) Humble role on the world’s stage, Times Higher Education Supplement, 18 March.

Wilson, A. G. (1992) New maps of old terrain, Times Higher Education Supplement, 1 May.

Wilson, A. G. (1996) Employability and graduateness, presentation to a conference of Science Deans, University of Leeds, November.

Wilson, A. G. (2000-A) Complex spatial systems: the modelling foundations of urban and regional analysis, Prentice Hall, Harlow.

Wilson, A. G. (2000-B) The widening access debate: student flows to universities and associated performance indicators, Environment and Planning, A, 32, pp. 2019 – 2031.

Wilson, A. G. (2006) Ecological and urban systems models: some explorations of similarities in the context of complexity theory, Environment and Planning, A, pp. 633-646.

Wilson, A. G. (2007) A general representation for urban and regional models, Computers, Environment and urban systems, 31, pp. 148-161.

Wilson, A. G. (2008) Boltzmann, Lotka and Volterra and spatial structural evolution: an integrated methodology for some dynamical systems, Journal of the Royal Society, Interface, 5, pp. 865-871, doi:10.1098/rsif.2007.1288.

Wilson, A.G., 2010. Entropy in urban and regional modelling: retrospect and prospect. Geographical Analysis, 42, pp. 364-394.

Wilson, A.G., 2010. Knowledge power: interdisciplinary education for a complex world. Abingdon: Routledge.

Wilson, A.G., 2010. Knowledge power: ambition and reach in a re-invented university. In: R. Munck and Mohrman. K., eds., 2010. Re-inventing the university. Dublin: Glasnevin Publishing, Chapter 3, pp. 29-36.

Wilson, A.G., 2012. The Science of Cities and Regions: Lectures on Mathematical Model Design. London: Springer.

Wilson, A. G. (ed.) (2013) Urban modelling: critical concepts in urban studies, 5 volumes, Abingdon: Routledge.

Wilson, A. G. (ed.) (2016-A) Global dynamics: approaches from complexity science, John Wiley, Chichester.

Wilson, A. G. (ed.) (2016-B) Approaches to geo-mathematical modelling: new tools for complexity science, John Wiley, Chichester.

Wilson, A.  (2016-C) New roles for urban models: planning for the long term, Regional Studies, Regional Science, 3:1, 48-57, DOI:10.1080/21681376.2015.1109474.

Wilson, A. (2017) A data-driven future, Prospect

Wilson, A. (2017) Data science: the new kid on the block, EPSRC.

Wilson, A. (2018) ‘Data science will change the world’, in D. Stephens (ed.) Knowledge Quarter, London.

Wilson, A. (2018) Research and innovation in an ecosystem, Journal, Foundation for Science and Technology.

Wilson, A. (2020) Epidemic models with geography: a new perspective on r-numbers, ArXiv:2005.07673 [physics.soc-ph]

Wilson, A. G. and Dearden, J., 2011. Phase transitions and path dependence in urban evolution. Journal of Geographical Systems, 13(1), pp. 1-16.

Wilson, A. G. and Dearden, J., 2011. Tracking the evolution of regional DNA: the case of Chicago. In: M. Clarke and J.C.H. Stillwell, eds. Understanding population trends and processes. Berlin: Springer, pp. 209-222.

Wilson, A.G. and Oulton, M.J., 1983. The corner-shop to supermarket transition in retailing: the beginnings of empirical evidence. Environment and Planning, A, 15, pp. 265-74.

Wilson, A. G. and Pownall, C. M. (1976) A new representation of the urban system for modelling and for the study of micro-level interdependence, Area, 8, 256-264.

Wilson, A. G. and Senior, M. L., 1974. Some relationships between entropy maximising models, mathematical programming models and their duals. Journal of Regional Science, 14, pp. 207-15.

Index


[1] For example, see von Bertalanffy (1968)

[2] Scale questions define disciplines and subdisciplines: quantum to cosmology; ethnography and psychology to social policy.

[3] ‘Mathematics or statistics?’ is a major question here

[4] Boyce and Williams (2015) set out the history of this in some detail.

[5] Swinney and Thomas (2015)

[6] Rudlin and Falk (2014)

[7] See for example McAra-McWilliam (2016)

[8] Becher (1989)

[9] Wilson (2010-A)

[10] There have been attempts to change – Sussex in the UK, for example – but these largely failed; somewhere like Arizona State in the US, by contrast, ‘abolished’ departments. See the book referenced in Wilson (2010-B).

[11] Stone (1966) makes these points very forcibly in his Foreword; and this is especially important coming from an economist: on the serviceableness of mathematics in the social sciences, he notes “the techniques developed for some specific purpose in one science can quite often be fruitfully applied in another”. And later: “My work on [the … British economy] has brought home to me … the difficulty of disengaging the economic aspects of life from their demographic, social and psychological setting”.

[12] There are sometimes issues of ‘esteem’. I recall an American economist observing to me that “putting the word ‘urban’ before ‘economist’ is rather like putting ‘horse’ before ‘doctor’!”

[13] See Wilson (1970)

[14] Lotka (1925), Volterra (1938) and Richardson (1960), which summarises his earlier work. See Chapter 4, section 4.5.

[15] Ashby (1956)

[16] Wiener (1994 edition)

[17] Arthur (2010)

[18] See Chapter 4, section 4.2.

[19] For example, see Sen

[20] See Foster and Beesley (1963)

[21] See Wilson (2012) for an overview; or Wilson (2016-C)

[22] Wilson and Senior (1974)

[23] Recall Britton Harris’ comment on PDA: there are these three kinds of thinking and you rarely find all three in the room at the same time!

[24] The ‘Research assessment framework’ which has been in place in the UK in some form or other since 1987.

[25] ‘Catapults’ are technology transfer organisations established by Innovate UK.

[26] For a historical overview see Wilson (2013) Urban modelling, 5 volumes

[27] I S Lowry (1964)

[28] This was a very crude form of economic base model which can be elaborated as an input-output model.

[29] There are many examples of Lowry-based models – see for example Simmonds (1999), edited and updated in Wilson (2013), Volume 5, Chapter 99 – and the references therein.

[30] The science of cities and regions – see Wilson (2012).

[31] Wilson (2008)

[32] Dearden and Wilson (2011)

[33] Lotka (1925), Volterra (1938)

[34] Wilson (2008) op cit

[35] Wilson and Oulton (1983)

[36] Introduced in Section 1.2.

[37] This had one nearer contemporary consequence. I was at the first meeting of the Vice-Chancellors of universities that became the Russell Group. There was a big argument about the name. We were meeting in the Russell Hotel and after much time had passed, I said something like ‘Why not call it the Russell Group?’ – citing not just the hotel but also Bertrand Russell as a mark of intellectual respectability. Such is the way that brands are born.

[38] Wilson (2007)

[39] See Dearden and Wilson (2011) to link spatial interaction and agent-based modelling

[40] Government Office for Science, Foresight, Future of cities project reports. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/520963/GS-16-6-future-of-cities-an-overview-of-the-evidence.pdf

Science of Cities: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516407/gs-16-6-future-cities-science-of-cities.pdf

Foresight for cities: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516443/gs-16-5-future-cities-foresight-for-cities.pdf

Graduate Mobility: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/510421/gs-16-4-future-of-cities-graduate-mobility.pdf

[41] Quandt and Baumol (1966)

[42] Lancaster (1966)

[43] Boyce and Williams (2013)

[44] Lowry (1964), noted earlier

[45] Stewart (2012)

[46] E. T. Jaynes (2003)

[47] Hidalgo (2015)

[48] There is an interesting example cited by Weaver that illustrates the power of Boltzmann: consider a billiards table with a large number of balls. Launch the white ball and there can be a large number of collisions. The task of solving the equations of motion for each ball is intractable. However, if we ask a different question – Boltzmann’s question – ‘What is the average number of times a ball will strike a cushion?’, the problem becomes (in principle!) tractable.
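As a purely illustrative aside (not part of Weaver’s example), the ‘average question’ can also be answered by simulation rather than analysis: sample many independently launched balls and average their cushion hits. The Python sketch below makes several simplifying assumptions of my own – illustrative table dimensions, constant speed, a fixed time step and no ball-ball collisions – so it should be read as a minimal Monte Carlo caricature of the idea, not a physical model.

import math
import random

# Toy Monte Carlo estimate of the average number of cushion hits per ball.
# All numerical values are illustrative; ball-ball collisions are ignored.
TABLE_W, TABLE_H = 3.6, 1.8   # table dimensions in metres (assumed)
SPEED = 1.5                   # constant ball speed in m/s (assumed: no friction)
T_TOTAL, DT = 10.0, 0.01      # simulated time and time step in seconds
N_BALLS = 2_000               # number of sampled trajectories

def cushion_hits(rng: random.Random) -> int:
    """Count cushion hits for one ball launched from a random position and angle."""
    x, y = rng.uniform(0.0, TABLE_W), rng.uniform(0.0, TABLE_H)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    vx, vy = SPEED * math.cos(theta), SPEED * math.sin(theta)
    hits, t = 0, 0.0
    while t < T_TOTAL:
        x, y, t = x + vx * DT, y + vy * DT, t + DT
        if x < 0.0 or x > TABLE_W:   # reflect off the left/right cushions
            vx, hits = -vx, hits + 1
        if y < 0.0 or y > TABLE_H:   # reflect off the top/bottom cushions
            vy, hits = -vy, hits + 1
    return hits

rng = random.Random(0)
average = sum(cushion_hits(rng) for _ in range(N_BALLS)) / N_BALLS
print(f"Estimated average cushion hits per ball over {T_TOTAL} seconds: {average:.2f}")

The exact trajectories remain intractable, but the average stabilises quickly as the number of sampled balls grows – which is precisely the shift of question that Boltzmann exploited.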

[49] May (1971, 1973)

[50] Harris and Wilson (1978)

[51] Richardson (1960)

[52] Kostitzin (1939)

[53] Hall, W. and Pesenti, J. (2017) Growing the artificial intelligence industry in the UK, independent review for the UK Government.

[54] Jordan, M. I. (2019)

[55] Mike Lynch gave a ‘history of AI’ talk to a Turing conference and ended by arguing that the ‘incorporation of prior learning’ was the biggest future challenge for AI.

[56] Arthur (2010)

[57] ASG url:

[58] Shannon (1948); see also Shannon and Weaver (1949)

[59] US Parole example

[60] This analysis is based on the work of Professor Mihaela van der Schaar of The Alan Turing Institute and the University of Oxford.

[61] National Infrastructure Commission (2018) ‘Data for the public good’

[62] For a detailed account of the challenges, see the Government Office for Science, Foresight Future of Cities website.

[63] See The Ada Lovelace Institute website

[64] Rittel and Webber (1973)

[65] Clarke, C. (ed) (2014) The too difficult box, Biteback Publishing, London

[66] Remploy was a government-funded company that provided employment for disabled people. It would not be difficult to develop such a concept in relation to offenders, the costs of which would almost certainly be more than met by savings.

[67] Barber (2016)

[68] See, for example, Sennett (1998, 2006, 2008)

[69] Government Office for Science (2013)

[70] Many years later, when I was Vice-Chancellor in Leeds, I was at an event at which Sir John Walker, who had recently been awarded a Nobel Prize, was a guest. I introduced myself to him: he looked me up and down and said “I know you – I canvassed for you in Oxford in 1964!”.

[71] Foster and Beesley (1963)

[72] I was summoned to see John Moore, the Assistant Secretary responsible for what we would now call HR. He had obviously been instructed to solve the problem. “If you are not an economist, what are you?”, he asked. I replied that I was a mathematician. “That’s fine”, he said, “we’ll call you the Mathematical Adviser.”

[73] Clarke (2020)

[74] Isard (1956)

[75] Herbert and Stevens (1960)

[76] Senior and Wilson (1974)

[77] I visited Alan Voorhees – the person and the company – at their offices in Bethesda. For some reason, this Washington visit was supported by the British Embassy and they arranged for a car and driver to take me out to Bethesda. It was a large Daimler, almost certainly the Ambassador’s car. When I left, Alan Voorhees and colleagues came to the entrance to see me off and there was great hilarity at the sight of the car!

[78] Lakshmanan and Hansen (1965)

[79] See Clarke (2020), op. cit., for a detailed history.

[80] Rees and Wilson (1976) op. cit.

[81] Kim, Boyce and Hewings (1983)

[82] Rihll and Wilson (1987-A, 1987-B, 1991)

[83] Bevan and Wilson (2013)

[84] See, for example, Davies et al. (2014)

[85] Wilson (2006)

[86] Wilson (2020)

[87] Sadly, Joel died in 2020

[88] Wilson and Dearden (2011)

[89] Cronon (1991), Nature’s metropolis.

[90] Richardson (1960), op. cit.

[91] Baudains and Wilson (2016)

[92] Baudains, P; Zamazalová, S; Altaweel, M; Wilson, A. G. (2015)

[93] Guo, Gleditsch and Wilson (2019)

[94] Haggett (1965) was perhaps the iconic quantitative geography book of the period.

[95] Isard (1960)

[96] Lee (1973)

[97] Hamilton (2002)

[98] Boyce and Williams (2013), op.cit.

[99] Beer (1972)

[100] Beer (1994)

[101] As a student, I was at a Cambridge Moral Sciences (aka ‘philosophy’!) seminar at which Margaret Masterman commented that the library classification problem was of the same order as that of the machine translation of languages. Decades later, I repeated this comment in a British Academy seminar and Karen Spärck Jones said ‘I was there’ – and sent me some papers on the subject!

[102] Dearden and Wilson (2015)

[103] Nystuen and Dacey (1961)

[104] Wilson (1978)

[105] Clarke and Wilson (1987-A, 1987-B)

[106] Roumpani and Wilson (forthcoming)

[107] See Section 6.3 in the previous chapter.

[108] Dearden and Wilson (2015) op. cit.

[109] Dearden and Wilson (2011-B), Dearden and Wilson (2015) op. cit.

[110] Evans (1973)

[111] See Dennett and Wilson (2013) for examples of the use of biproportional fitting in this context.

[112] ‘Nullius in verba’ is the Royal Society’s motto which roughly translates as ‘Take nobody’s word for it’ – supporting data and experiment.

[113] I wanted to add something on ‘theory’, hence Evolvere theoriae et intellectum, which Google translates as ‘To develop the theory and understanding’ – more of a mouthful!

[114] See Mintzberg (1989). He argued that in an organisation whose front-line staff are professionals – academics or hospital consultants – there will be problems if those professionals are not part of the management. In the UK, universities have done well by this maxim; hospitals, less well.

[115] On my first day as Vice-Chancellor in Leeds, I had to sort out a problem with David Birchall, the Assistant Registrar. I walked down the corridor to his office. This was done, and as I left, he smiled and said: “That’s a first – the Vice-Chancellor has never been in my office before!”. It taught me at first hand something about ‘management by walking about’.

[116] I think this is much quoted – see Kaplan and Norton (2005).

[117] Anthony Finkelstein, at the time of writing, is the Government Chief Scientific Adviser for National Security and Vice-Chancellor-elect of City University.

[118] This was the very popular and much-quoted argument in Gladwell (2008)

[119] Investment in urban research is small beer compared, for example, to Google’s investment in Deep Mind.

[120] Which was how Wilson (2012) was produced.

[121] Alvesson (2013)

[122] UKRI – UK Research and Innovation – is the UK body that allocates government research funding

[123] Arthur (2010), op. cit., and section 2.4 in Chapter 2.

[124] Royal Society (2019)

[125] There is an argument here for a multiplicity of funding sources (which of course exists), since there cannot be a single omniscient body.