50. Management: do you really need an MBA?

‘How to manage’ has itself become big business – note the success of University Business Schools and the MBA badge that they offer; note the size of the management section of a bookshop. I don’t dispute the value of management education or of much of the literature, but my own experience has been different: learning on the job. It is interesting, for myself at least, to trace the evolution of my own knowledge of ‘management’. I don’t claim this as a recipe for success. There were failures along the way, and I have no doubt that there are many alternative routes. But I did discover some principles that have served me well – most of them filleted from the literature, tempered with experience.

In my first ten years of work, my jobs were well focused, with more or less single objectives, and ‘management’ consisted of getting the job done. This began at the newly founded Rutherford Laboratory at Harwell, writing computer programs to identify bubble chamber events at the CERN synchrotron; then on to implementing transport models for planning purposes at the then Ministry of Transport. I set up the Mathematical Advisory Unit in the Ministry, which grew large, and running it was clearly a management job. I moved to the Centre for Environmental Studies (CES) as Assistant Director – another management job. These roles were on the back of a spell of research into urban economics and modelling in Oxford, which had me working on a broader horizon even when I was a civil servant – and of course, CES was a research institute. From 1964 to 1967 I was an Oxford City Councillor (Labour) – then a wholly ‘part-time’ occupation on top of my other jobs.

What did I learn in those ten years? At the Rutherford Lab, the value of teamwork, of being lightly managed by the two layers above me and of being given huge responsibilities in the team at a very young age. In MoT, I learned something about working in the Civil Service, though my job was well-defined; again, I had sympathetic managers above me. On Oxford Council, I learned the workings of local government, and how a political party worked, at first hand. This was teamwork of a different kind. At CES, I built my own team. At both MoT and CES I recruited some very good people who went on to have distinguished careers. In all cases, the working atmosphere was pretty good. It was at CES, however, that, I realise with hindsight, I made a big mistake. I assumed that urban modelling was the key to the future development of planning, probably convinced Henry Chilver, the Director, of this, and I neglected the wider context and people like Peter Wilmott, from whom I should have learned much more. That led to the Director being fired as the Trustees sought to widen the brief – or to bring it back to its original purpose – and it nearly cost me my job (as I learned from one of the Trustees), but fortunately it didn’t. It was also my first experience of a governing body – the Board of Trustees – and another mistake was to leave the relationship with them entirely to the Director – so I had no direct sense of what they were thinking. I left a few months later for the University of Leeds and within a couple of years, I began to have the experience of broader-based management jobs.

I went to Leeds as Professor of Urban and Regional Geography, into a School of Geography that was being rebuilt through what would be described as strong leadership but which amounted to bullying at times. I was left to get on with my job; I enjoyed teaching, I was successful in bringing in research grants and I built a good team. The atmosphere, however, was such that after two or three years, I was thinking of leaving. Out of the blue, the Head of Department was appointed as a Polytechnic Head and I found myself as Head of Department. The first management task was to change the atmosphere, and an important element of that was to make the monthly staff meeting really count – a key lesson: ‘leadership’ should be through the consent of the staff, and this was something I could more or less maintain in later jobs. This isn’t as simple as it sounds, of course; there are often difficult disagreements to be resolved. The allocation of work around the staff of the Department was very uneven, and I managed to sort that out with a ‘points’ system. I learned the value of PR as the first Research Assessment Exercise approached – by making sure we publicised our research achievements – and this helped us to a top rating (which wasn’t common in the University at the time).

The Geography staff meeting was my first experience of chairing something, and I probably learned from my predecessor how not to do it – how to ensure that you secure the confidence, as far as possible, of the people in the room. I was elected as Head of Department for three three-year spells, alternating with my fellow Professor. I began to take on University roles, and in particular to chair one of the main committees – the Research Degrees Committee – and this led to me becoming Chair of the Joint Board of the Faculties of Arts, Social and Economic Studies and Law – the equivalent of a Dean in modern parlance – which represented a large chunk of the University and was responsible for a wide range of policy and administration. There were many subcommittees. So lots of practice. The Board itself had something of the order of 100 members.

In the late 80s, I was ‘elected’ – quotation marks as there was only one candidate! – as Pro-Vice-Chancellor for 1989-91. There was only one such post at the time, and the then VC became Chair of CVCP for those two years and delegated a large chunk of his job to me. The University was not in awful shape, but not in good shape either. Every year, there was a big argument about cuts to departmental budgets. I began thinking about how to turn the ‘ship’ around – a new management challenge on a big scale. It helped that at the end of my first year as PVC, I was appointed as VC-elect from October 91, so in my second year, I could not only plan, but begin to implement some key strategies. It’s a long story so I will simply summarise some key elements – challenges and the beginnings of solutions.

The University had 96 departments and was run through a system of over 100 committees (as listed in the University Calendar) – seriously sclerotic. For example, there were seven Biology Departments, each mostly doing molecular biology. We had to shift the ‘climate’ from one of cost cutting to one of income generation, and this was done through delegation of budgets to a reduced number of departments (from 96 to 55), based on cost management but, critically, with delegated income-generating rules – and this became the engine for both growth and transformation. There were winners and losers of course, and this led to some serious handling challenges at the margins. (I tried to resolve these by going to department staff meetings to take concerns head on. That sometimes worked, sometimes didn’t!) There was a challenge of how to marry department plans with the University’s plan. The number of committees was substantially reduced – indeed with a focus on three key committees.

In my first two years as VC, there was a lot of opposition to the point where I even started thinking about the minimum number of years I would have to do to leave in a respectable way. By the third year, many of those who had been objecting had taken early retirement and the management responsibilities around the University – Heads of Department, members of key committees – were being filled by a new generation. I ended up staying in post for thirteen years.

Can I summarise at least some of the key principles I learned in that time?

  • Recognising that the University was not a business, but had to be business-like.
  • Having our own strategy within a volatile financial and policy environment; then operating tactically to bring in the resources that we needed to implement our strategy.
  • What underpins my thinking about strategy is an idea that I learned from my friend Britton Harris of the University of Pennsylvania in the 1970s. He was a city planner (and urban model builder) and he argued that planning involved three kinds of thinking: policy, design and analysis with the added observation that ‘you very rarely find all three in the same room at the same time’. Apply this to universities: ‘analysis’ means understanding the business model and having all relevant information to hand; ‘policy’ means specifying objectives; ‘design’ means inventing possible plans and working towards identifying the best – which becomes the core strategy. This may well be the most valuable part of my toolkit.
  • Recognising that a large institution – by the end of my period, 33,000 students and 7,000 staff – could not be ‘run’ from the centre, so an effective system of real delegation was critical.
  • The importance of informal meetings and discussions – outside the formal committee system; I had at least termly meetings with all Heads of Department, with members of Council, with the main staff unions, with the Students Union Executive.
  • Openness: particularly of the accounts.
  • Accountability: in reorganising the committee system, I retained a very large Senate – about 200 members, around half of whom would regularly come to meetings.
  • And something not in the job description: realising that how I behaved somehow had an impact on the ethos of the University.

By many measures, I was successful as a manager, and I learned most of the craft as a Vice-Chancellor. But I was constantly conscious of what I wasn’t succeeding at, so I’m sure my ‘blueprint’ is a partial one. I learned a lot from the management literature. Mintzberg showed me that in an organisation whose front-line workers are high-class professionals, you will have problems if those professionals don’t feel involved in the management. (In this respect, I think the university system in the UK has done pretty well; the health service, less so.) Ashby taught me the necessity to devolve responsibility; Christensen taught me about the challenges of disruption and how to work around them. I learned a lot about developing strategy and the challenges of implementation – “strategy is 5% of the problem, implementation is 95%”. I learned a lot about marketing. I tried to encapsulate much of this in running seminars for my academic colleagues and for the University administration. Much later, I wrote a lot of it up in my book, Knowledge Power.

So do you really need an MBA? I admire the best of them and their encapsulated knowledge. In my case, I guess I had the apprenticeship version. Over time, it is possible to build an intellectual management toolkit in which you have confidence that it more or less works. I have tried to stick to these principles in subsequent jobs – UCL, AHRC, The Alan Turing Institute. Circumstances are always different, and the toolkit evolves!

Alan Wilson

49. OR in the Age of AI

In 2017, I was awarded Honorary Membership of the Operational Research Society. I felt duly honoured, not least because I had considered myself, in part, an operational researcher since the 1970s and had indeed published in the Society’s Journal and was a Fellow at a time when that had a different status. However, there was a price! For the following year, I was invited to give the annual Blackett Lecture, delivered to a large audience at the Royal Society in November last year. The choice of topic was mine. Developments in data science and AI are impacting most disciplines, not least OR. I thought that was something to explore and that gave me a snappy title: OR in the Age of AI.

OR shares the same enabling disciplines as data science and AI and (in outline) is concerned with system modelling, optimisation, decision support, and planning and delivery. The systems focus forces interdisciplinarity, and indeed this list shows that insofar as it is a discipline, it shares its field with many others. If we take decision support and planning and delivery as at least in part distinguishing OR, then we can see it is applied and it supports a wide range of customers and clients. These have been through three industrial revolutions and AI promises a fourth. We can think of these customers, public or private, as organisations driven by business processes. What AI can do is read and write, hear and see, and translate, and these wonders will transform many of these business processes. There will be more complicated shifts – driving robotics, including soft robotics, understanding markets better, using rules-based algorithms to automate processes – some of them large-scale and complicated. In many ways all this is classic OR with new technologies. It is ground-breaking, it is cost-saving, and it does displace jobs. But in some ways it’s not dramatic.

The bigger opportunities come from the scale of available data, computing power and then two things: the ability of systems to learn; and the application to big systems, mainly in the public sector, not driven in the way that profit-maximising industries are. For OR, this means that the traditional roles of its practitioners will continue, albeit employing new technologies; and there is a danger that because these territories overlap across many fields – whether in universities or the big consultancies – there will be many competitors that could shrink the role of OR. The question then is: can OR take on leadership roles in the areas of the bigger challenges?

Almost every department of Government has these challenges – and indeed many of them – say, those associated with the criminal justice system – embrace a range of government departments, each operating in its own silo, not combining to capture the advantages that could be achieved if they linked their data. They all have system modelling and/or ‘learning machine’ challenges. Can OR break into these?

The way to break in is through ambitious proof-of-concept research projects – the ‘R’ part of R and D – which then become the basis for large-scale development projects, the ‘D’. There is almost certainly a systemic problem here. There have been large-scale, ambitious projects – usually concerned with building data systems, arguably a prerequisite – and many of these fail. But most of the funded research projects are relatively small, and the big ‘linking’ projects are not tackled. So the challenge for OR, for me, is to open up to the large-scale challenges, particularly in government – to ‘think big’.

The OR community can’t do this alone, of course. However, there is a very substantial OR service in government – one of the recognised analytics professions – and there is the possibility of asserting more influence from within. But the Government itself has a responsibility to ensure that its investment in research is geared to meet these challenges. This has to be a UKRI responsibility – ensuring that research council money is not too thinly spread, and ensuring that research councils work effectively together, as most of the big challenges are interdisciplinary and cross-council. Government Departments themselves should both articulate their own research challenges and be prepared to fund them.

Alan Wilson

48. Mix and match: The five pillars of data science and AI

There are five pillars of data science and AI. Three make up, in combination, the foundational disciplines – mathematics, statistics and computer science; the fourth is the data – ‘big data’ as it now is; and the fifth is a many-stranded pillar – domain knowledge. The mathematicians use data to calibrate and test models and theories; the statisticians also calibrate models and seek to infer findings from data; the computer scientists develop the intelligent infrastructure (cf. Blog 47). Above all, the three combine in the development of machine learning – the heart of contemporary AI and its applications. Is this already a new discipline? Not yet, I suspect – not marked by undergraduate degrees in AI (unlike, say, biochemistry). These three disciplines can be thought of as enabling disciplines, and this helps us to unpick the strands of the fifth pillar: both scientists and engineers are users, as are the applied domains such as medicine, economics and finance, law, transport and so on. As the field develops, the AI and data science knowledge will be internalised in many of these areas – in part meeting the Mike Lynch challenge (see Blog 46) of incorporating prior knowledge into machine learning.

Even this brief introduction demonstrates that we are in a relatively new interdisciplinary field. It is interesting to continue the exploration by connecting to previous drivers of interdisciplinarity – to see how these persist and ‘add’ to our agenda; and then to examine examples of new interdisciplinary challenges.

It has been argued in earlier posts that the concept of a system of interest drives interdisciplinarity and this is very much the case here in the domains for which the AI toolkit is now valuable. More recently, complexity science was an important driver with challenges articulated through Weaver’s notion of ‘systems of organised complexity’. This emphasises both the high dimensionality of systems of interest and the nonlinear dynamics which drives their evolution. There are challenges here for the applications of AI in various domains. Handling ‘big data’ also drives us towards high dimensionality. I once estimated the number of variables I would like to have to describe a city of a million people at a relatively coarse grain, and the answer came out as 10¹³! This raises new challenges for the topologists within mathematics: how to identify structures within the corresponding data sets – a very sophisticated form of clustering! These kinds of system can be described through conditional probability distributions again with large numbers of variables – high dimensional challenges for Bayesian statisticians. One way to proceed with mathematical models that are high dimensional and hence intractable is to run them as simulations. The outputs of these models can then be treated as ‘data’ and, to my knowledge, there is an as-yet untouched research challenge: to apply unsupervised machine learning algorithms to these outputs to identify structures in a high-dimensional nonlinear space.
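The flavour of this idea – treating simulation outputs as ‘data’ for unsupervised learning – can be illustrated with a toy sketch. Here a logistic map stands in for a nonlinear urban simulation, and a minimal hand-rolled two-cluster k-means (illustrative only, not a production algorithm) separates runs that settle to a steady state from runs in a chaotic regime:

```python
import math

def simulate(r, n=60, burn=200):
    # logistic map standing in for a nonlinear simulation model
    x = 0.5
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):             # record the post-transient trajectory
        x = r * x * (1 - x)
        out.append(x)
    return out

def features(traj):
    # summarise a model run by (mean, standard deviation)
    m = sum(traj) / len(traj)
    var = sum((x - m) ** 2 for x in traj) / len(traj)
    return [m, math.sqrt(var)]

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def two_means(points, iters=20):
    # minimal deterministic 2-means: the second centroid starts
    # at the point farthest from the first
    centroids = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist2(p, centroids[0]) <= dist2(p, centroids[1]) else 1
                  for p in points]
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels

# three runs in a steady regime (r < 3), three in a chaotic regime (r near 4)
runs = [simulate(r) for r in (2.7, 2.8, 2.9, 3.8, 3.9, 3.97)]
labels = two_means([features(t) for t in runs])
```

A real version of the challenge would of course cluster far higher-dimensional outputs, but the principle – run the intractable model, then mine its outputs for structure – is the same.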

We begin to reveal many research challenges across both foundational and, especially, applied domains. (Indeed, a conjecture: the most interesting foundational challenges emerge from these domains.) We can then make another connection – to Brian Arthur’s argument in his book The Nature of Technology. A discovery in one domain can, sometimes following a long period, be transferred into other domains: opportunities we should look out for.

Can we optimise how we do research in data science and AI? We have starting points in the ideas of systems analysis and complexity science: define a system of interest and recognise the challenges of complexity. Seek the data to contribute to scientific and applied challenges – not the other way round – and that will lead to new opportunities. But perhaps above all, seek to build teams which combine the skills of mathematics, statistics and computer science, integrated through both systems and methods foci. This is non-trivial, not least due to the shortage of these skills. In the projects in the Turing Institute funded by the UKRI Special Priorities Fund – AI for Science and Government (ASG) and Living with Machines (LWM) – we are trying to do just this. Early days and yet to be tested. Watch this space!

Alan Wilson

47. What is ‘data science’? What is ‘AI’?

When I first took on the role of CEO at The Alan Turing Institute, the strap line beneath the title was ‘The National Institute for Data Science’. A year or so later, this became ‘The National Institute for Data Science and AI’ – at a time when there was a mini debate about whether there should be a separate ‘national institute for AI’. It has always seemed to me that ‘AI’ was included in ‘data science’ – or maybe vice versa. In the early ‘data science’ days, there were plenty of researchers in Turing focused on machine learning for example. However, we acquired the new title – ‘for avoidance of doubt’ one might say – and it now seems worthwhile to unpick the meanings of these terms. However we define them, there will be overlaps but by making the attempt, we can gain some new insights.

AI has a long history, with well-known ‘summers’ and ‘winters’. Data science is newer, created from the increases in data that have become available (partly generated by the Internet of Things), closely linked with continuing increases in computing power. For example, in my own field of urban modelling, where we need location data and flow data for model calibration, the advent of mobile phones means that there is now a data source that locates most of us at any time – even when phones are switched off. In principle, this means that we could have data that would facilitate real-time model calibration. New data – ‘big data’ – is certainly transforming virtually all disciplines, industry and public services.

Not surprisingly, most universities now have data science (or data analytics) centres or institutes – real or virtual. It has certainly been the fashion, but may now be overtaken by ‘AI’ in that respect. In Turing, our ‘Data science for science’ theme has now transmogrified into ‘AI for science’ as more all-embracing. So there may now be some more renaming!

Let’s start the unpicking. ‘Big data’ has certainly invigorated statistics. And indeed, machine learning is a crucial dimension of data science – particularly as a clustering tool, with obvious implications for targeted marketing (and electioneering!). Machine learning is sometimes called ‘statistics reinvented’! The best guide to AI and its relationship to data science that I have found is Michael Jordan’s blog piece ‘Artificial intelligence – the revolution hasn’t happened yet’ – googling the title takes you straight there. He notes that historically, AI stems from what he calls ‘human-imitative’ AI; whereas now, it mostly refers to the applications of machine learning – ‘engineering’ rather than mimicking human thinking. As this has had huge successes in the business world and beyond, ‘it has come to be called data science’ – closer to my own interpretation of data science, but which, as noted, fashion now relabels as AI.

We are a long way from machines that think and reason like humans. But what we have is very powerful. Much of this augments human intelligence, and thus, following Jordan, we can reverse the acronym: ‘IA’ is ‘intelligence augmentation’ – which is exactly what the Turing Institute’s work on rapid and precise machine-learning-led medical diagnosis provides, the researchers working hand in hand with clinicians. Jordan also adds another acronym: ‘II’ – ‘intelligent infrastructure’. ‘Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies.’ This is a bigger-scale concept than my notion that an under-developed field of research is the design of (real-time) information systems.

This framework, for me, provides a good articulation of what AI means now – IA and II. However, fashion and common usage will demand that we stick to AI! And it will be a matter of personal choice whether we continue to distinguish data science within this!!

Alan Wilson

46. Moving on 2: pure vs applied

In my early days as CEO in Turing, I was confronted with an old challenge: pure vs applied, though often in a new language – foundational vs consultancy, for example. In my own experiences from my schooldays onwards, I was always aware of the higher esteem associated with the ‘pure’, and indeed leaned towards that end of the spectrum. Even when I started working in physics, I worked in ‘theoretical physics’. It was when I converted to the social sciences that I realised that in my new fields I could have it both ways: I worked on the basic science of cities through mathematical and computer modelling, but with outputs that were almost immediately applicable in town and regional planning. So where did that kind of thinking leave me in trying to think through a strategy for the Institute?

Oversimplifying: there were two camps – the ‘foundational’ and the ‘domain-based’. Some of the former could characterise the latter as ‘mere consultancy’. There were strong feelings. However, there was a core that straddled the camps: brilliant theorists, applying their knowledge in a variety of domains. It was still possible to have it both ways. How to turn this into a strategy – especially given that the root of a strategic plan will be the allocation of resources to different kinds of research? In relatively early days, it must have been June 2017, we had the first meeting of our Science Advisory Board and for the second day, we organised a conference, inviting the members of our Board to give papers. Mike Lynch gave a brilliant lecture on the history of AI through its winters and summers with the implicit question: will the present summer be a lasting one? At the end of his talk, he said something which has stuck in my mind ever since: “The biggest challenge for machine learning is the incorporation of prior knowledge”. I would take this further and expand ‘knowledge’ to ‘domain knowledge’. My intuition was that the most important AI and data science research challenges lay within domains – indeed that the applied problems generated the most challenging foundational problems.

Producing the Institute’s Strategic Plan in the context of a sometimes heated debate was a long drawn-out business – taking over a year as I recall. In the end, we had a research strategy based on eight challenges, six of which were located in domains: health, defence and security, finance and the economy, data-centric engineering, public policy and what became ‘AI for science’. We had two cross-cutting themes: algorithms and computer science, and ethics. The choice of challenge areas was strongly influenced by our early funders: the Lloyds Register Foundation, GCHQ and MoD, Intel and HSBC. Even without a sponsor at that stage, we couldn’t leave out ‘health’! All of these were underpinned by the data science and machine learning methods toolkit. Essentially, this was a matrix structure: columns as domains, rows as methods – an effective way of relaxing the tensions, of having it both ways. This structure has more or less survived, though with new challenges added – ‘cities’ for example, and the ‘environment’.

When it comes to allocating resources, other forces come into play. Do we need some quick wins? What is the balance between the short term and the longer – the latter inevitably more speculative? Should industry fund most of the applied work? This all has to be worked out in the context of a rapidly developing Government research strategy (with the advent of UKRI) and the development of partnerships with both industry and the public sector. There is a golden rule, however, for a research institute (and for many other organisations such as universities): think through your own strategy rather than simply ‘following the money’, which is almost always focused on the short term. Then, given the strategy, operate tactically to find the resources to support it.

In making funding decisions, there is an underlying and impossible question to answer: how much has to be invested in an area to produce results that are truly transformative? This is very much a national question but there is a version of it at the local level. Here is a conjecture: that transformative outcomes in translational areas demand a much larger number of researchers to be funded than to produce such transformations in foundational areas. This is very much for the ‘research’ end of the R and D spectrum – I can see that the ‘D’ – development – can be even more expensive. So what did we end up with? The matrix works and at the same time acknowledges the variety of viewpoints. And we are continually making judgements about priorities and the corresponding financial allocations. Pragmatism kicks in here!

Alan Wilson

45. Moving on 1: collaboration

I left CASA in UCL in July 2016 and moved to the new Alan Turing Institute. I’d planned the move to give me a new research environment – as a Fellow with some responsibility for developing an ‘urban’ programme. There were few employees at that stage – most of the researchers (part-time academics as Fellows, some Research Fellows and PhD students) were due in October. I ran a workshop on ‘urban priorities’ and wondered what to do myself with no supporting resources. I was aware that my own research was on the fringes of Turing priorities – ‘data science’. I could claim to be a data scientist – and indeed Anthony Finkelstein, then a Trustee and a UCL colleague, in encouraging me to move to Turing, said: “You can’t have ‘big data’ without big models”. However, in Turing, data science meant machine learning and AI rather than modelling as I practised it. So I started to think about a new niche: Darwin in his later years decided to work on ‘smaller problems’, perhaps more manageable. I’m not comparing myself to Darwin, but there may be good advice there! As for machine learning, I put myself on a steep learning curve to learn something new and to fit in, though I couldn’t see how I could manage the ‘10,000 hours’ challenge that would turn me into a credible researcher in that field.

At the end of September, everything changed. In odd circumstances – to be described elsewhere – I found myself as the Institute CEO. There was suddenly a huge workload. I reported to a Board of Trustees, there were committees to work with, and there were five partner universities to be visited. Above all, a new strategy had to be put in place – hewn out of a mass of ideas and forcefully-stated disagreements. Unsurprisingly, this blog was put on hold. I can now begin to record what I learned about a new field of research (for me) and the challenges of setting up a new Institute. I had to learn enough about data science and AI to be able to give presentations about the Institute and its priorities to a wide variety of audiences. I was able to attend seminars and workshops and talk to a great variety of people, and by a process of osmosis, I began to make progress. I will start by recording some of my own experiences of collaboration in the Institute.

The ideal of collaboration is crucial for a national institute. Researchers from different universities, from industry, from the public sector, meet in workshops and seminars, and perhaps above all over coffee and lunch in our kitchen area, and new projects, new collaborations emerge. I can offer three examples from early days from my own experience which have enabled me to keep my own research alive in unexpected ways. (There are later ones which I will return to in due course.)

I met Weisi Guo at the August 2016 ‘urban priorities’ workshop. He presented his work on the analysis of global networks connecting cities, which could be used to estimate the probabilities of conflicts. Needless to say, this turned out to be of interest to the Ministry of Defence, and through the Institute’s partnership in this area, a project to develop this work was funded by DSTL. It seemed to me that Weisi’s work could be enhanced by adding flows (spatial interaction) and structural dynamics, and we have worked together on this since our first meeting. New collaborators have been brought in and we have published a number of papers. From each of our viewpoints, adding research from a different, previously unknown field has proved highly fruitful.

The second example took me into the field of health. Mihaela van der Schaar arrived at Turing in October from UCLA, to a Chair in Oxford and as a Turing Fellow. One of her fields of research is the application of machine learning to rapid and precise medical diagnosis. This is complex territory, involving accounting for co-morbidities as contributing to the diagnosis and prognosis of any particular disease, and having an impact on treatment plans. I recognised this as important for the Institute and was happy to support it. We had a lucky break early on. I was giving a breakfast briefing to a group of Chairs and CEOs of major companies, and at the end of the meeting, I was approached by Caroline Cartellieri, who thanked me for the presentation but said she wanted to talk to me about something else: she was a Trustee of the Cystic Fibrosis Trust. This led to Mihaela and her teams – mainly of PhD students – carrying out a project for the Trust which became an important demonstration of what could be achieved more widely – as well as being valuable for the Trust’s own clinicians. For me, it opened up the idea of incorporating the diagnosis and prognosis methods into a ‘learning machine’ which could ultimately be the basis of personalised medicine. And then a further thought: the health learning machine is generic – it can be applied to any flow of people for which there is a possible intervention to achieve an objective. For example, it can be applied to the flow of offenders into and out of prisons, and this idea is now being developed in a project with the Ministry of Justice.

Mihaela’s methods have also sown the seed of a new approach to urban modelling. The data for the co-morbidities analysis are the records over time of the occurrence of earlier diseases. If these events are re-interpreted in the urban modelling context as ‘life events’ – from demographics (birth, migration and death) through to entry into education, a new job, a new house and so on – then a new set of tools can be brought to bear.

The third example, still from very early on – probably Autumn 2016 – came from my attending, for my own education, a seminar by Mark Girolami on (I think) the propagation of uncertainty – something I have never been any good at building into urban models. However, I recognised intuitively that his methods seemed to include a piece of mathematics that might solve a problem that had always defeated me: how to predict the distribution of (say) retail centre sizes in a dynamic model. I discussed this with Mark, who enthusiastically agreed to offer the problem to a (then) new research student, Louis Ellam. He also brought in an Imperial College colleague, Greg Pavliotis, an expert in statistical mechanics and therefore connected to my style of modelling. Over the next couple of years, the problem was solved and led to a four-author paper in Proceedings A of the Royal Society, with Louis as the first author.
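The deterministic core of that problem – the (Harris-Wilson) retail structural dynamics I work with – can be set out in a short sketch. Louis’s actual solution used stochastic methods well beyond this, and the parameter values here are purely illustrative:

```python
import numpy as np

def harris_wilson(cost, spending, alpha=1.0, beta=0.5, eps=1e-4, k=1.0, steps=5000):
    """Minimal sketch of retail structural dynamics: revenues D_j come
    from a production-constrained spatial interaction model, and centre
    sizes W_j evolve as dW_j/dt = eps * (D_j - k * W_j) * W_j."""
    W = np.ones(cost.shape[1])                     # initial centre sizes
    for _ in range(steps):
        A = W ** alpha * np.exp(-beta * cost)      # attractiveness terms W_j^a e^(-b c_ij)
        S = spending[:, None] * A / A.sum(axis=1, keepdims=True)  # flows S_ij
        D = S.sum(axis=0)                          # revenue attracted to each centre
        W = np.maximum(W + eps * (D - k * W) * W, 0.0)  # one dynamics step
    return W
```

At equilibrium, D_j = kW_j for surviving centres; with alpha > 1, some W_j collapse to zero, and the distribution of the surviving sizes is exactly the quantity whose prediction had defeated me.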

Collaboration in Turing now takes place on a large scale. It has taken me into fruitful new areas, my collaborators making them both manageable for me and adding new skills – thereby solving the ’10,000 hours’ challenge!

Alan Wilson

June 2019

44. Competing models? Deconstruct into building bricks?

Models are representations of theories. I write this as a modeller – someone who works on mathematical and computer models of cities and regions but who is also seriously interested in the underlying theories I am trying to represent. My field, relative say to physics, is underdeveloped. This means that we have a number of competing models and it is interesting to explore the basis of this and how to respond. There may be implications for other fields – even physics!

A starting conjecture is that there are two classes of competing models: (i) those that represent different underlying theories (or hypotheses); and (ii) those that stem from the modellers choosing different ways of making approximations in seeking to represent very complex systems. The two categories overlap, of course. I will conjecture at the outset that most of the differences lie in the second (perhaps with one notable exception). So let’s get the first out of the way. Economists want individuals to maximise utility and firms to maximise profits – simplifying somewhat, of course. They can probably find something that public services can maximise – health outcomes, exam results – indeed a whole range of performance indicators. There is now a recognition that, for all sorts of reasons, the agents do not behave perfectly, and ways have been found to handle this. There is a whole host of (usually) micro-scale economic and social theory that is inadequately incorporated into models, in some cases because of the complexity issue – the niceties are approximated away; but in principle, that can and should be handled. There is a broader principle lurking here: for most modelling purposes, the underlying theory can be seen as maximising or minimising something. So if we are uncomfortable with utility functions, or economics more broadly, we can still try to represent behaviour in these terms – if only to have a baseline from which behaviour deviates.

So what is the exception – another kind of dividing line which should perhaps have been a third category? At the pure end of a spectrum, it is ‘letting the data speak for themselves’. It is mathematics vs statistics; or econometrics vs mathematical economics. Statistical models look very different – at least at first sight – from mathematical models, and usually demand quite stringent conditions to be in place for their legitimate application. Perhaps, in the quantification of a field of study, statistical modelling comes first, followed by the mathematical? Of course there is a limit in which both ‘pictures’ can merge: many mathematical models, including the ones I work with, can be presented as maximum likelihood models. This is a thread that is not to be pursued further here, and I will focus on my own field of mathematical modelling.
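The limit in which the two ‘pictures’ merge can be sketched for the doubly constrained spatial interaction model – a standard result rather than anything new:

```latex
% Entropy maximisation: choose the most probable flow matrix {T_ij}
\max_{\{T_{ij}\}} \; S = -\sum_{ij} T_{ij} \ln T_{ij}
\quad \text{subject to} \quad
\sum_j T_{ij} = O_i, \qquad \sum_i T_{ij} = D_j, \qquad \sum_{ij} T_{ij} c_{ij} = C
% which yields the familiar doubly constrained model
T_{ij} = A_i O_i B_j D_j \, e^{-\beta c_{ij}}
% The same T_ij maximise the multinomial log-likelihood of the observed
% flows under the same constraints -- so the 'mathematical' model can be
% presented as a maximum likelihood 'statistical' one.
```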

There is perhaps a second high-level issue. It is sometimes argued that there are two kinds of mathematician: those who think in terms of algebra and those who think in terms of geometry. (I am in the algebra category, which I am sure biases my approach.) As with many of these dichotomies, it should be dissolved and both perspectives fully integrated. But this is easier said than done!

How do the ‘approximations’ come about? I once tried to estimate the number of variables I would like to have for a comprehensive model of a city of 1M people at a relatively coarse grain, and the answer was around 10¹³! This demonstrates the need for approximation. The first steps can be categorised in terms of scale: first, spatial – referenced by zones of location rather than continuous space – and how large should the zones be? Second, temporal: continuous time or discrete? Third, sectoral: how many characteristics of individuals or organisations should be identified, and at how fine a grain? Experience suggests that the use of discrete zones – and indeed other discrete definitions – makes the mathematics much easier to handle. Economists often use continuous space in their models, for example, and this forces them into another kind of approximation: monocentricity, which is hopelessly unrealistic. Many different models are simply based on different decisions about, and representations of, scale.
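The order of magnitude is easy to reproduce; the category sizes below are my illustrative assumptions, not the original calculation:

```python
# Flows indexed as T[origin][destination][person type][activity][mode]
# [time period][destination attribute] for a comprehensive model at a
# coarse grain; every category size below is an illustrative assumption.
zones = 1000          # origin zones, with the same number of destinations
person_types = 500    # e.g. 10 age bands x 10 income bands x 5 household types
activities = 20       # work, school, shopping, leisure, ...
modes = 5             # car, bus, rail, cycle, walk
time_periods = 20     # slices of the day and week
dest_attributes = 10  # e.g. housing or job types at the destination

variables = (zones * zones * person_types * activities
             * modes * time_periods * dest_attributes)
print(f"{variables:.0e}")  # prints 1e+13
```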

The second set of differences turns on focus of interest. One way of approximating is to consider a subsystem such as transport and the journey to work, or retail and the flow of revenues into a store or a shopping centre. The danger here is that critical interdependencies are lost, and this always has to be borne in mind. Consider the evaluation of new transport infrastructure, for example. If this is based purely on a transport model, there is a danger that the cost-benefit analysis will be concentrated on time savings rather than the wider benefits. There is also a potentially higher-level view of focus. Lowry very perceptively once pointed out that models often focus on activities – and the distribution of activities across zones; or on the zones, in which case the focus would be on the land use mix in a particular area. The trick, of course, is to capture both perspectives simultaneously – which Lowry achieved himself very elegantly but which has been achieved only rarely since.

A major bifurcation in model design turns on the time dimension and the related assumptions about dynamics. Models are much easier to handle if it is possible to make an assumption that the system being modelled is either in equilibrium or will return to a state of equilibrium quickly after a disturbance. There are many situations where the equilibrium assumption is pretty reasonable – for representing a cross-section in time or for short-run forecasting, for example, representing the way in which a transport system returns to equilibrium after a new network link or mode is introduced. But the big challenge is in the ‘slow dynamics’: modelling how cities evolve.

It is beyond the scope of this piece to review a wide range of examples. If there is a general lesson here, it is that we should be tolerant of each other’s models, and we should be prepared to deconstruct them to facilitate comparison and perhaps to remove what appears to be competition but needn’t be. The deconstructed elements can then be seen as building bricks that can be assembled in a variety of ways. For example, ‘generalised cost’ in an entropy-maximising spatial interaction model can easily be interpreted as a utility function and is therefore not in competition with economic models. Cellular automata models and agent-based models are similarly based on different ‘pictures’ – different ways of making approximations. There are usually different strengths and weaknesses in the alternatives, and in many cases, with some effort, they can be integrated. From a mathematical point of view, deconstruction can offer new insights. We have, in effect, argued that model design involves making a series of decisions about scale, focus, theory, method and so on. What will emerge from this kind of thinking is that different kinds of representations – ‘pictures’ – have different sets of mathematical tools available for the model building. Some of these are easier to use than others and so, when this is made explicit, might guide the decision process.
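The ‘generalised cost’ building brick can be made explicit – a hedged sketch of a standard correspondence:

```latex
% Generalised cost as a weighted sum of components
c_{ij} = \sum_k w_k \, x^{(k)}_{ij}
\quad \text{(money cost, in-vehicle time, waiting time, \dots)}
% In the entropy-maximising model, choice probabilities take the form
p_{ij} \propto e^{-\beta c_{ij}} = e^{\beta U_{ij}}, \qquad U_{ij} = -c_{ij}
% which is the multinomial logit of random utility theory -- the same
% brick viewed from the economic side of the fence.
```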

Alan Wilson

August 2016

43. Lowering the bar

A few weeks ago, I attended a British Academy workshop on ‘Urban Futures’ – partly focused on research priorities and partly on research that would be useful for policy makers. The group consisted mainly of academics who were keen to discuss the most difficult research challenges. I found myself sitting next to Richard Sennett – a pleasure and a privilege in itself, someone I’d read and knew by repute but whom I had never met. When the discussion turned to research contributions to policy, Richard made a remark which resonated strongly with me and made the day very much worthwhile. He said: “If you want to have an impact on policy, you have to lower the bar!” We discussed this briefly at the end of the meeting, and I hope he won’t mind if I try to unpick it a little. It doesn’t tell the whole story of the challenge of engaging the academic community in policy, but it does offer some insights.

The most advanced research is likely to be incomplete and to have many associated uncertainties when translated into practice. This can offer insights, but the uncertainties are often uncomfortable for policy makers. If we lower the bar to something like ‘best practice’ – see the preceding blog 42 – this may involve writing and presentations which do not attract the highest levels of esteem in the academic community. What is on offer to policy makers has to be intelligible, convincing and useful. Being convincing means that what we are describing should be evidence-based. And, of course, when these criteria are met, there should be another kind of esteem associated with the ‘research for policy’ agenda. I guess this is what ‘impact’ is supposed to be about (though I think that is only half of the story, since impact that transforms a discipline may be more important in the long run).

‘Research for policy’ is, of course, ‘applied research’, which also brings up the esteem argument: if ‘applied’, then less ‘esteemful’, if I can make up a word. In my own experience, engagement with real challenges – whether commercial or public – adds seriously to basic research in two ways: first, it throws up new problems; and second, it provides access to data – for testing and further model development – that simply wouldn’t be available otherwise. Some of the new problems may be more challenging, and in a scientific sense more important, than the old ones.

So, back to the old problem: what can we do to enhance academic participation in policy development? First, a warning: recall the policy-design-analysis argument much used in these blogs. Policy is about what we are trying to achieve; design is about inventing solutions; and analysis is about exploring the consequences of, and evaluating, alternative policies, solutions and plans – the point being that analysis alone, the stuff of academic life, will not of itself solve problems. Engagement, therefore, ideally means engagement across all three areas, not just analysis.

How can we then make ourselves more effective by lowering the bar? First, ensure that our ‘best practice’ (see blog 42) is intelligible, convincing, useful and evidence-based. This means being confident about what we know and can offer. But then we also ought to be open about what we don’t know. In some cases we may be able to say that we can tackle, perhaps reasonably quickly, some of the important ‘not known’ questions through research; and that may need resource. Let me illustrate this with retail modelling. We can be pretty confident about estimating the revenues (or people) attracted to facilities when something changes – a new store, a new hospital or whatever. And then there is a category, in this case, of what we ‘half know’. We have an understanding of retail structural dynamics to the point where we can estimate the minimum size that a new development has to be for it to succeed, but we can’t yet do this with confidence. So a talk on retail dynamics to commercial directors may be ‘above the bar’.
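The ‘confident’ part of the claim – estimating revenues attracted to facilities when something changes – can be illustrated with a production-constrained spatial interaction model; the numbers here are invented:

```python
import numpy as np

def revenues(W, cost, spending, alpha=1.0, beta=0.5):
    """Production-constrained spatial interaction model: revenue D_j
    attracted to centre j of attractiveness (size) W_j, given each
    residential zone's spending and a zone-to-centre cost matrix."""
    A = W ** alpha * np.exp(-beta * cost)               # competing attractiveness terms
    S = spending[:, None] * A / A.sum(axis=1, keepdims=True)  # flows S_ij
    return S.sum(axis=0)                                # revenue per centre

cost = np.array([[0.3, 0.7],
                 [0.6, 0.2],
                 [0.5, 0.5],
                 [0.8, 0.4]])                 # 4 zones x 2 existing centres
spending = np.array([50.0, 50.0, 50.0, 50.0])
W = np.array([10.0, 20.0])
before = revenues(W, cost, spending)

# A new centre of size 15 opens: append its size and its access costs,
# then re-run. Total spending is conserved, so incumbents' revenues fall.
cost_new = np.hstack([cost, [[0.4], [0.5], [0.3], [0.6]]])
after = revenues(np.append(W, 15.0), cost_new, spending)
```

The before/after comparison is the kind of estimate that sits safely below the bar.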

I suppose another way of putting this argument is that for policy engagement purposes, we should know where we should set the height of the bar: confidence below, uncertainty (possibly with some insights), above. There is a whole set of essays to be written on this for different possible application areas.

Alan Wilson

June 2016.

42. Best practice

Everything we do, or are responsible for, should aim at adopting ‘best practice’. This is easier said than done! We need knowledge, capability and capacity. Then maybe there are three categories through which we can seek best practice: (1) from ‘already in practice’ elsewhere; (2) could be in practice somewhere but isn’t: the research has been done but hasn’t been transferred; (3) problem identified, but research needed.

How do we acquire the knowledge? Through reading, networking, CPE courses and visits. Capability is about training, experience and acquiring skills. Capacity is about the availability of capability – access to it – for the services (let us say) that need it. Medicine provides an obvious example; local government another. How does each of the 164 local authorities in England acquire best practice? Dissemination strategies are obviously important. We should also note that there may be central government responsibilities. We can expect markets to deliver skills, capabilities and capacities – through colleges, universities and, in a broad sense, industry itself (in its most refined way through ‘corporate universities’). But in many cases there will be a market failure, and government intervention becomes essential. In a field such as medicine, which is heavily regulated, the Government takes much of the responsibility for ensuring the supply of capability and capacity. There are other fields where, in early-stage development, consultants provide the capacity until it becomes mainstream – GMAP in relation to retailing being an example from my own experience. (See the two ‘spin-out’ blogs.)

How does all this work for cities, and in particular for urban analytics? Good analytics provide a better base for decision making, planning and problem solving in city government. This needs a comprehensive information system which can be effectively interrogated. It can be topped with a high-level ‘dashboard’, with a hierarchy of rich underpinning levels: warning lights might flash at the top to highlight problems lower down the hierarchy for further investigation. It also needs a simulation (modelling) capacity for exploring the consequences of alternative plans. Neither of these needs is typically met. In some specific areas it is potentially, and sometimes actually, OK: in transport planning in government, or in network optimisation for retailers, for example. A small number of consultants can and do provide skills and capability. But in general these needs are not met, and often not even recognised. This seems to be a good example of a market failure. There is central government funding and action – through the research councils and particularly, perhaps, Innovate UK. The ‘best practice’ material exists – so we are somewhere between categories 1 and 2 of the introductory paragraph above. This tempts me to offer as a conjecture the obvious ‘solution’: what is needed are top-class demonstrators. If the benefits were evident, then dissemination mechanisms would follow!
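The ‘dashboard with warning lights’ idea can be sketched in a few lines – an illustrative structure only, with invented indicator names and thresholds:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One node in a hierarchical city dashboard: a value with an
    acceptable range, plus finer-grained child indicators beneath it."""
    name: str
    value: float = 0.0
    low: float = float("-inf")
    high: float = float("inf")
    children: list = field(default_factory=list)

    def warnings(self):
        """Collect the names of all indicators, at any level of the
        hierarchy, whose values are outside their acceptable range."""
        flagged = [] if self.low <= self.value <= self.high else [self.name]
        for child in self.children:
            flagged += child.warnings()
        return flagged

# The top-level light flashes whenever anything below is out of range.
transport = Indicator("peak congestion index", value=1.4, high=1.2)
air = Indicator("NO2 level", value=30.0, high=40.0)
city = Indicator("city", children=[transport, air])
```

Here `city.warnings()` surfaces the congestion problem for further investigation while the in-range air quality indicator stays quiet.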

Alan Wilson
June 2016

41. Foresight on The Future of Cities

For the last three years (almost), I have been chairing the Lead Expert Group of the Government Office for Science Foresight Project on The Future of Cities. It has finally ‘reported’, not as conventionally with one large report and many recommendations, but with four reports and a mass of supporting papers. We knew at the outset that we could not look forward without learning the lessons of the past, and so we commissioned a set of working papers – which are on the web site – as a resource, historical in the main, looking forwards imaginatively where possible. The ‘Foresight Future of Cities’ web site is at https://www.gov.uk/government/collections/future-of-cities.

During the project, we have worked with fourteen Government Departments – ‘cities’ as a topic crosses government – and we have visited over 20 cities in the UK and have continued to work with a number of them. The project had several (sometimes implicit) objectives: to articulate the challenges facing cities from a long-run – 50-year – perspective; to consider what could be done in the short run in evidence-based policy development to generate possibly better outcomes in meeting these challenges; to review what we know and what we don’t know – the latter implying that we can say something about research priorities; and to review the tools that are available to support foresight thinking.

We developed six themes that seemed to work for us throughout the project:

  • people – living in cities
  • city economies
  • urban metabolism – energy and materials flows and the sustainability agenda
  • urban form – including the issues associated with density and connectivity
  • infrastructure – including transport
  • governance – devolution and mayors?

What have we achieved? I believe we have a good conceptual framework and a corresponding effective understanding of the scale of the challenges. It is clear that to meet these challenges in the long term, radical thinking is needed to support future policy and planning development. The project has a science provenance and this provides the analytical base for exploring alternative future scenarios. Forecasting for the long term is impossible, inventing knowledge-based future scenarios is not. In our work with cities – Newcastle and Milton Keynes provide striking examples – we have been met with enthusiasm and local initiatives have produced high-class explorations, complete with effective public engagement. There is a link to the Newcastle report on the GO-Science website; the Milton Keynes work is ongoing.

Direct links to the four project reports follow. The first is an overview; the second a brief review of what we know about the science of cities combined with an articulation of research priorities; the third is, in effect, a foresighting manual for cities that wish to embark on this journey; and the fourth is an experiment – work on a particular topic, graduate mobility – since ‘skills’ figures prominently in our future challenges list.

An overview of the evidence


Science of Cities:


Foresight for Cities:


Graduate Mobility:


Alan Wilson

May 2016