43. Lowering the bar

A few weeks ago, I attended a British Academy workshop on ‘Urban Futures’ – partly focused on research priorities and partly on research that would be useful for policy makers. The group consisted mainly of academics who were keen to discuss the most difficult research challenges. I found myself sitting next to Richard Sennett – a pleasure and a privilege in itself, someone I’d read and knew by repute but whom I had never met. When the discussion turned to research contributions to policy, Richard made a remark which resonated strongly with me and made the day very much worthwhile. He said: “If you want to have an impact on policy, you have to lower the bar!” We discussed this briefly at the end of the meeting, and I hope he won’t mind if I try to unpick it a little. It doesn’t tell the whole story of the challenge of engaging the academic community in policy, but it does offer some insights.

The most advanced research is likely to be incomplete and to have many associated uncertainties when translated into practice. This can offer insights, but the uncertainties are often uncomfortable for policy makers. If we lower the bar to something like ‘best practice’ – see the preceding blog 42 – this may involve writing and presentations which do not attract the highest levels of esteem in the academic community. What is on offer to policy makers has to be intelligible, convincing and useful. Being convincing means that what we are describing should be evidence-based. And, of course, when these criteria are met, there should be another kind of esteem associated with the ‘research for policy’ agenda. I guess this is what ‘impact’ is supposed to be about (though I think that is only half of the story, since impact that transforms a discipline may be more important in the long run).

‘Research for policy’ is, of course, ‘applied research’, which also brings up the esteem argument: if ‘applied’, then less ‘esteemful’, if I can make up a word. In my own experience, engagement with real challenges – whether commercial or public – adds seriously to basic research in two ways: first, it throws up new problems; and secondly, it provides access to data – for testing and further model development – that simply wouldn’t be available otherwise. Some of the new problems may be more challenging, and in a scientific sense more important, than the old ones.

So, back to the old problem: what can we do to enhance academic participation in policy development? First, a warning: recall the policy-design-analysis argument much used in these blogs. Policy is about what we are trying to achieve; design is about inventing solutions; and analysis is about exploring the consequences of, and evaluating, alternative policies, solutions and plans – the point being that analysis alone, the stuff of academic life, will not of itself solve problems. Engagement, therefore, ideally means engagement across all three areas, not just analysis.

How can we then make ourselves more effective by lowering the bar? First, ensure that our ‘best practice’ (see blog 42) is intelligible, convincing, useful and evidence-based. This means being confident about what we know and can offer. But then we also ought to be open about what we don’t know. In some cases we may be able to say that we can tackle, perhaps reasonably quickly, some of the important ‘not known’ questions through research; and that may need resources. Let me illustrate this with retail modelling. We can be pretty confident about estimating revenues (or people) attracted to facilities when something changes – a new store, a new hospital or whatever. And then there is a category, in this case, of what we ‘half know’. We understand retail structural dynamics to the point where we can estimate the minimum size a new development has to be for it to succeed – but we can’t yet do this with confidence. So a talk on retail dynamics to commercial directors may be ‘above the bar’.
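To put some machinery behind the retail example, here is a sketch, in standard notation, of what ‘confident’ and ‘half known’ mean. The first two equations are the production-constrained retail model underpinning the revenue estimates; the third is the Harris-Wilson structural-dynamics equation behind the ‘minimum size’ claim:

```latex
% Spending flows from residential zone i to shops in zone j:
% e_i P_i is spending per head times population, W_j is centre size
% (attractiveness), c_ij is travel cost
S_{ij} = A_i e_i P_i W_j^{\alpha} e^{-\beta c_{ij}},
\qquad
A_i = \frac{1}{\sum_k W_k^{\alpha} e^{-\beta c_{ik}}}

% Revenue attracted to centre j -- the calculation we can be
% confident about
D_j = \sum_i S_{ij}

% Structural dynamics -- the 'half known' part: centres grow when
% revenue exceeds costs kW_j, which implies a minimum viable size
% for a new development
\frac{\mathrm{d}W_j}{\mathrm{d}t} = \epsilon \, (D_j - k W_j) \, W_j
```

The revenue calculation is robust and calibratable; the dynamics are sensitive to the parameters α and β, which is one way of seeing why the ‘minimum size’ estimate remains ‘half known’.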

I suppose another way of putting this argument is that for policy engagement purposes, we should know where to set the height of the bar: confidence below; uncertainty (possibly with some insights) above. There is a whole set of essays to be written on this for different possible application areas.

Alan Wilson

June 2016.

42. Best practice

Everything we do, or are responsible for, should aim at adopting ‘best practice’. This is easier said than done! We need knowledge, capability and capacity. Then maybe there are three categories through which we can seek best practice: (1) from ‘already in practice’ elsewhere; (2) could be in practice somewhere but isn’t: the research has been done but hasn’t been transferred; (3) problem identified, but research needed.

How do we acquire the knowledge? Through reading, networking, CPE courses and visits. Capability is about training, experience and acquiring skills. Capacity is about the availability of capability – access to it – for the services (let us say) that need it. Medicine provides an obvious example; local government another. How does each of the 164 local authorities in England acquire best practice? Dissemination strategies are obviously important. We should also note that there may be central government responsibilities. We can expect markets to deliver skills, capabilities and capacities – through colleges, universities and, in a broad sense, industry itself (in its most refined way through ‘corporate universities’). But in many cases, there will be a market failure and government intervention becomes essential. In a field such as medicine, which is heavily regulated, the Government takes much of the responsibility for ensuring the supply of capability and capacity. There are other fields where, in early-stage development, consultants provide the capacity until it becomes mainstream – GMAP in relation to retailing being an example from my own experience. (See the two ‘spin-out’ blogs.)

How does all this work for cities and, in particular, for urban analytics? Good analytics provide a better base for decision making, planning and problem solving in city government. This needs a comprehensive information system which can be effectively interrogated. It can be topped with a high-level ‘dashboard’ with a hierarchy of rich underpinning levels: warning lights might flash at the top to highlight problems lower down the hierarchy for further investigation. It also needs a simulation (modelling) capacity for exploring the consequences of alternative plans. Neither of these needs is typically met. In some specific areas it is potentially, and sometimes actually, OK: in transport planning in government, or in network optimisation for retailers, for example. A small number of consultants can and do provide skills and capability. But in general, these needs are not met – often not even recognised. This seems to be a good example of a market failure. There is central government funding and action – through research councils and, particularly perhaps, Innovate UK. The ‘best practice’ material exists – so we are somewhere in between categories (1) and (2) of the introductory paragraph above. This tempts me to offer as a conjecture the obvious ‘solution’: what is needed is a set of top-class demonstrators. If the benefits were evident, then dissemination mechanisms would follow!
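As a sketch of the dashboard idea only – the indicator names, values and thresholds below are invented for illustration, not drawn from any real system – the hierarchy-with-warning-lights structure is simple to express:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    value: float = 0.0
    threshold: float = float("inf")
    children: list = field(default_factory=list)

    def warning(self) -> bool:
        # A node flashes if it breaches its own threshold, or if any
        # indicator deeper in the hierarchy does.
        return self.value > self.threshold or any(
            c.warning() for c in self.children
        )

city = Indicator("city", children=[
    Indicator("transport", children=[
        Indicator("peak congestion index", value=1.4, threshold=1.2),
    ]),
    Indicator("housing", children=[
        Indicator("affordability ratio", value=7.1, threshold=8.0),
    ]),
])

print(city.warning())  # True: the congestion breach surfaces at the top
```

The point is only structural: a problem detected at a low level propagates to the top-level dashboard, which then invites interrogation of the richer levels beneath.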

Alan Wilson
June 2016

41. Foresight on The Future of Cities

For the last three years (almost), I have been chairing the Lead Expert Group of the Government Office for Science Foresight Project on The Future of Cities. It has finally ‘reported’ – not, as is conventional, with one large report and many recommendations, but with four reports and a mass of supporting papers. We knew at the outset that we could not look forward without learning the lessons of the past, and so we commissioned a set of working papers – which are on the website – as a resource: historical in the main, looking forward imaginatively where possible. The ‘Foresight Future of Cities’ website is at https://www.gov.uk/government/collections/future-of-cities.

During the project, we have worked with fourteen Government Departments – ‘cities’ as a topic crosses government – and we have visited over 20 cities in the UK, continuing to work with a number of them. The project had several (sometimes implicit) objectives: to articulate the challenges facing cities from a long-run – 50-year – perspective; to consider what could be done in the short run in evidence-based policy development to generate possibly better outcomes in meeting these challenges; to review what we know and what we don’t know – the latter implying that we can say something about research priorities; and to review the tools that are available to support foresight thinking.

We developed six themes that seemed to work for us throughout the project:

  • people – living in cities
  • city economies
  • urban metabolism – energy and materials flows and the sustainability agenda
  • urban form – including the issues associated with density and connectivity
  • infrastructure – including transport
  • governance – devolution and mayors?

What have we achieved? I believe we have a good conceptual framework and a corresponding effective understanding of the scale of the challenges. It is clear that to meet these challenges in the long term, radical thinking is needed to support future policy and planning development. The project has a science provenance and this provides the analytical base for exploring alternative future scenarios. Forecasting for the long term is impossible; inventing knowledge-based future scenarios is not. In our work with cities – Newcastle and Milton Keynes provide striking examples – we have been met with enthusiasm, and local initiatives have produced high-class explorations, complete with effective public engagement. There is a link to the Newcastle report on the GO-Science website; the Milton Keynes work is ongoing.

Direct links to the four project reports follow. The first is an overview; the second a brief review of what we know about the science of cities, combined with an articulation of research priorities; the third is, in effect, a foresighting manual for cities that wish to embark on this journey; and the fourth is an experiment – work on a particular topic, graduate mobility – since ‘skills’ figures prominently in our list of future challenges.

An overview of the evidence:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/520963/GS-16-6-future-of-cities-an-overview-of-the-evidence.pdf

Science of Cities:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516407/gs-16-6-future-cities-science-of-cities.pdf

Foresight for Cities:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/516443/gs-16-5-future-cities-foresight-for-cities.pdf

Graduate Mobility:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/510421/gs-16-4-future-of-cities-graduate-mobility.pdf

Alan Wilson

May 2016

40: Competing models

My immediately preceding blog post, ‘Truth is what we agree about’, provides a framework for thinking about competing models in the social sciences. There are competing models in physics, but not in relation to most of the ‘core’, which is ‘agreed’. Most, probably all, of the social sciences are not as mature, so it is not surprising if we have competition. However, it seems to me that we can make some progress by recognising that our systems of interest are typically highly complex, and that it is very difficult to isolate ideal and simple systems of interest (as physicists do) to develop the theory – even the building bricks. Much of the interest rests in the complexity. So that means we have to make approximations in our model building. We can then distinguish two categories of competing models: those developed through the ‘approximations’ being done differently; and those that are paradigmatically different. Bear in mind also that models are representations of theories, and so the first class – different ways of approximating – may well have the same underlying theory, whereas the second will have different theoretical underpinnings in at least some respects.

I can illustrate these ideas from my own experience. Much of my work has been concerned with spatial interaction: flows across space – for example, journeys to work, to shop, to school, to health services, and telecoms flows of all kinds. Flows decrease with ‘distance’ – measured as some kind of generalised cost – and increase with the attractiveness of the destination. There was even an early study which showed that marriage partners were much more likely to find each other if they lived or worked ‘nearer’ to each other – something that might be different now in times of greater mobility. Not surprisingly, these flows were first modelled on a Newtonian gravity analogy. The models didn’t quite work, and my own contribution was to shift from a Newtonian analogy to a Boltzmann one – a statistical averaging procedure. There is a methodological shift here but, as in physics, whatever there is in underlying theory is the same: the physics of particles is broadly the same in Newton and Boltzmann. The difference arises because Newtonian methods deal with small numbers of particles and Boltzmann’s with very large numbers – answering different questions. The same applies in spatial interaction: it is the large-number methodology that works.
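For readers who want the formal statement, the Boltzmann-style move is to maximise an entropy measure over flow matrices subject to what we know. A sketch in the standard notation:

```latex
% Maximise the entropy of the flow matrix {T_ij}
S = -\sum_{ij} T_{ij} \ln T_{ij}

% subject to what we know: origin totals, destination totals and
% total expenditure on travel
\sum_j T_{ij} = O_i, \qquad \sum_i T_{ij} = D_j, \qquad
\sum_{ij} T_{ij} c_{ij} = C

% The most probable flow pattern is the doubly-constrained model
T_{ij} = A_i B_j O_i D_j e^{-\beta c_{ij}}, \qquad
A_i = \Big[ \sum_j B_j D_j e^{-\beta c_{ij}} \Big]^{-1}, \qquad
B_j = \Big[ \sum_i A_i O_i e^{-\beta c_{ij}} \Big]^{-1}
```

The parameter β carries the ‘perceived distance’ information and is estimated by calibration against observed trip data.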

These models are consistent with an interpretation that people behave according to how they perceive ‘distance’ and ‘attractiveness’. Economists then argue that people behave so as to maximise utility functions. The two can be linked by making the economists’ utility functions those that appear in the spatial interaction model. This is easily done – provided it is recognised that the average behaviour does not arise from the maximisation of a single utility function. So the economists have to assume imperfect information and/or a variety of utility functions. They do this in most instances by assuming a distribution of such functions which, perhaps not surprisingly, is closely related to an entropy function. The point of this story is that apparently competing models can be wholly reconciled, even though in some cases the practitioners on one side or the other firmly locate themselves in silos that proclaim the rightness of their methods.
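The reconciliation can be made precise, as Huw Williams and others showed in the 1970s: a random-utility model with a suitably distributed ‘taste’ term yields exactly the entropy-maximising probabilities. A sketch:

```latex
% Each traveller in zone i draws a utility for destination j with a
% random 'taste' term -- the distribution of utility functions
U_{ij} = -\beta c_{ij} + \varepsilon_{ij}

% If the eps_ij are independent Gumbel variates, the probability of
% choosing destination j is the familiar logit form
P_{j|i} = \frac{e^{-\beta c_{ij}}}{\sum_k e^{-\beta c_{ik}}}
```

This is the spatial interaction model again – two silos describing one object.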

The same kind of system can be represented in an agent-based model – an ABM. In this case, the model works with individuals who behave according to rules. At first sight this may seem fundamentally different, but in practice these rules are probabilities that can be derived from the coarser-grain models. Indeed, this points us in a direction that shows how quite a range of models can be integrated. At the root of all the models I am using as illustrations are conditional probabilities – the probability that an individual will make a particular trip from an origin to a destination. These probabilities can then be manipulated in different ways at different scales.
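A minimal sketch of that point, with an invented three-zone example – the probabilities and agent counts are placeholders. The only claim is structural: agent ‘rules’ can be draws from conditional probabilities supplied by the aggregate model, and re-aggregating the agents recovers the coarse-grain flows:

```python
import numpy as np

rng = np.random.default_rng(42)

# Aggregate model output: conditional probabilities p(j | i) that a
# traveller starting in zone i chooses destination j (toy numbers).
p_dest_given_origin = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])

# Individual agents with home zones (the coarse-grain marginals).
home_zones = rng.integers(0, 3, size=1000)

# Each agent's 'rule' is simply a draw from the conditional
# probabilities derived from the aggregate model.
destinations = np.array([
    rng.choice(3, p=p_dest_given_origin[i]) for i in home_zones
])

# Aggregating the agents back up recovers (approximately) the
# coarse-grain flow matrix T_ij.
T = np.zeros((3, 3))
for i, j in zip(home_zones, destinations):
    T[i, j] += 1
print(T)
```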

An argument is beginning to emerge that most of the differences involve judgements about such things as scale – of spatial units, sectors or temporal units – or methodology. The obvious example of the latter is the divide between statisticians and mathematicians, particularly as demonstrated by econometrics and mathematical economics. But, recall, we all work with probabilities, implicitly or explicitly.

There is perhaps one more dimension we need in order to characterise differences in the social sciences when we are trying to categorise possibly competing approaches: when the task in hand is to ‘solve’ a real-world problem, or to meet a challenge. This determines some key variables at the outset: work on housing would need housing in some way as a variable, and the corresponding data. This in turn illustrates a key aspect of the social scientist’s approach: the choice of variables to include in a model. We know that our systems are complex and that the elements – the variables in the model – are highly interdependent. Typically, we can only handle a fraction of them, and when these choices are made in different ways for different purposes, it appears that we have competing models. Back to approximations again.

Much food for thought. The concluding conjecture is that most of the differences between apparently competing models come either from different ways of making approximations, or from different methodological (rather than theoretical) approaches. Below the surface, there are degrees of commonality that we should train ourselves to look for; and we should be purposeful!

Alan Wilson

May 2016

39: Abstract modes, generalised costs and constraints: exploring future scenarios

I have spent much of the last three years working on the Government Office for Science Foresight project on The future of cities. The focus was on a time horizon fifty years ahead. It is clearly impossible to use urban models to forecast such long-term futures, but it is possible in principle to explore systematically a variety of future scenarios. A key element of such scenarios is transport, and we have to assume that what is on offer – in terms of modes of travel – will be very different to today, not least to meet sustainability criteria. The present dominance of car travel in many cities is likely to disappear. How, then, can we characterise possible future transport modes?

This takes me back to ideas that emerged in papers published 50 years ago (or, in one case, almost that). In 1966 Dick Quandt and William Baumol, distinguished Princeton economists, published a paper in the Journal of Regional Science on ‘abstract transport modes’. Their argument was precisely that in the future, technological change would produce new modes: how could they be modelled? Their answer was to say that models should be calibrated not with modal parameters, but with parameters that related to the characteristics of modes. The calibrated results could then be used to model the take-up of new modes that had new characteristics. By coincidence, Kelvin Lancaster, the Columbia University economist, published a paper, also in 1966, in The Journal of Political Economy on ‘A new approach to consumer theory’, in which utility functions were defined in terms of the characteristics of goods rather than the goods themselves. He elaborated this in 1971 in his book ‘Consumer demand: a new approach’. In 1967, my ‘entropy’ paper was published in the journal Transportation Research, and a concept used in this was that of ‘generalised cost’. This assumed that the cost of travelling by a mode was not just a money cost, but the weighted sum of different elements of (dis)utility: different kinds of time, comfort and so on, as well as money costs. The weights could be estimated as part of model calibration. David Boyce and Huw Williams, in their magisterial history of transport modelling, ‘Forecasting urban travel’, wrote, quoting my 1967 paper: “impedance … may be measured as actual distance, as travel time, as cost, or more effectively as some weighted combination of such factors sometimes referred to as generalised cost … In later publications, ‘impedance’ fell out of use in favour of ‘generalised cost’”. (They kindly attributed the introduction of ‘generalised cost’ to me.)
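In symbols, with the component names chosen here purely for illustration, the generalised cost idea is simply:

```latex
% Generalised cost of travel from i to j by mode m: a weighted sum
% of the component (dis)utilities, not the money cost alone
c_{ij}^{m} = a_1 t_{ij}^{m} + a_2 w_{ij}^{m} + a_3 p_{ij}^{m} + \dots
% t: in-vehicle time, w: walking and waiting time, p: money cost;
% the weights a_k are estimated in model calibration
```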

This all starts to come together. The Quandt and Baumol ‘abstract mode’ idea has always been in my mind and I was attracted to the Kelvin Lancaster argument for the same reasons – though that doesn’t seem to have taken off in a big way in economics. (I still have his 1971 book, purchased from Austicks in Leeds for £4-25.) I never quite connected ‘generalised cost’ to ‘abstract modes’. However, I certainly do now. When we have to look ahead to long-term future scenarios, it is potentially valuable to envisage new transport modes in generalised cost terms. By comparing one new mode with another, we can make an attempt – approximately because we are transporting current calibrated weights fifty years forward – to estimate the take up of modes by comparing generalised costs. I have not yet seen any systematic attempt to explore scenarios in this way and I think there is some straightforward work to be done – do-able in an undergraduate or master’s thesis!
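As a sketch of what such an exercise might look like – every mode, characteristic value, weight and dispersion parameter below is an invented placeholder standing in for a proper present-day calibration:

```python
import numpy as np

# Hypothetical future modes described only by their characteristics
# (Quandt-Baumol style): in-vehicle time, waiting time, money cost.
modes = {
    "autonomous_shuttle": {"time": 25, "wait": 4, "fare": 2.0},
    "demand_bus":         {"time": 35, "wait": 8, "fare": 1.2},
    "e_bike":             {"time": 40, "wait": 0, "fare": 0.3},
}

# Weights transported from a present-day calibration -- the crude
# (and acknowledged) approximation discussed in the text.
a_time, a_wait, a_fare = 0.05, 0.10, 1.0
beta = 0.8  # dispersion parameter, also assumed calibrated

def generalised_cost(m):
    # Weighted sum of component (dis)utilities.
    return a_time * m["time"] + a_wait * m["wait"] + a_fare * m["fare"]

# Logit shares over generalised costs: the estimated take-up of each
# abstract mode relative to the others.
costs = {name: generalised_cost(m) for name, m in modes.items()}
expn = {name: np.exp(-beta * c) for name, c in costs.items()}
total = sum(expn.values())
shares = {name: v / total for name, v in expn.items()}
print(shares)
```

The interesting work is in the inputs: specifying plausible characteristics for modes that do not yet exist, and deciding how far today’s calibrated weights can be carried fifty years forward.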

We can also look at the broader questions of scenario development. Suppose, for example, we want to explore the consequences of high-density development around public transport hubs. These kinds of policies can be represented in our comprehensive models by constraints – and I argue that the idea of representing policies – or, more broadly, ‘knowledge’ – as constraints within models is another powerful tool. This also has its origins in a fifty-year-old paper – Ira Lowry’s ‘A model of metropolis’. In broad terms, this represents the fixing, through plans, of a model’s exogenous variables – but the idea of ‘constraints’ implies that there are circumstances where we might want to fix what we usually take as endogenous variables.
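In the spirit of Lowry’s treatment, rather than a quotation of it, a planning constraint might look like this:

```latex
% A plan enters the model as a constraint. For example, housing H_j
% in zone j capped by a zoning capacity, set generously around
% public transport hubs (notation illustrative):
H_j \le H_j^{\max}
% Zones that hit the bound are 'constrained' and the model
% reallocates the excess elsewhere, so running the model traces the
% system-wide consequences of the plan
```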

So we have the machinery for testing and evaluating long-term scenarios – albeit building on fifty-year-old ideas. It needs a combination of imagination – thinking about what the future might look like – and analytical capabilities – ‘big modelling’. It’s all still to play for, and there are some interesting papers waiting to be written!

Alan Wilson

April 2016

38. Truth is what we agree about?

I have always been interested in philosophy – the big problems, the ‘What is life about?’ kind of thing, with, as a special subject, ‘What is truth?’. How can we know whether something – a sentence, a theory, a mathematical formula – is true? And I guess because I was a mathematician and a physicist early in my career, I was particularly interested in the subset of this which is the philosophy of mathematics and the philosophy of science. I read a lot of Bertrand Russell – which perhaps seems rather quaint now. This had one nearer-contemporary consequence. I was at the first meeting of the Vice-Chancellors of the universities that became the Russell Group. There was a big argument about the name. We were meeting in the Russell Hotel and, after much time had passed, I said something like ‘Why not call it the Russell Group?’ – citing not just the hotel but also Bertrand Russell as a mark of intellectual respectability. Such is the way that brands are born.

The maths and the science took me into Popper and the broader reaches of logical positivism. Time passed and I found myself a young university professor, working on mathematical models of cities, then the height of fashion. But fashions change and, by the late 70s, on the back of distinguished works like David Harvey’s ‘Social justice and the city’, I found myself under sustained attack from a broadly Marxist front. ‘Positivism’ became a term of abuse, and Marxism in philosophical terms – or at least my then understanding of it – merged into the wider realms of structuralism. I was happy to come to understand that there were hidden (often power) structures to be revealed in social research which the models I was working on missed – thereby undermining the results.

This was serious stuff. I could reject some of the attacks in a straightforward way. There was a time when it was argued that anything mathematical was positivist and therefore bad and/or wrong. This could be rejected on the grounds that mathematics is a tool – and that, indeed, there were distinguished Marxist mathematical economists such as Sraffa. But I had to dig deeper in order to understand. I read Marx, and I read a lot of structuralists, some of whom, at the time, were taking over English departments. I even gave a seminar in the Leeds English Department on structuralism!

In my reading, I stumbled on Jürgen Habermas and this was a revelation for me. It took me back to questions about truth and provided a new way of answering them. In what follows, I am sure I oversimplify. His work is very rich in ideas, but I took a simple one from it: truth is what we agree about. I say this to students now, who are usually pretty shocked. But let’s unpick it. We can agree that 2 + 2 = 4. We can agree about the laws of physics – up to a point, anyway: there are discoveries to be made that will refine these laws, as has happened in the past. That also connects to another idea that I found useful in my toolkit: C. S. Peirce and the pragmatists. I will settle for the colloquial use of ‘pragmatism’: we can agree in a pragmatic sense that physics is true – and handle the refinements later. I would argue from my own experience that some social science is ‘true’ in the same way: much demography is true up to quite small errors – think of what actuaries do. But when we get to politics, we disagree. We are in a different ball park. We can still explore and seek to analyse, and having the Habermas distinction in place helps us to understand arguments.

How does the ‘agreement’ come about? The technical term used by Habermas is ‘intersubjective communication’ and there has to be enough of it. In other words, the ‘agreement’ comes on the back of much discussion, debate and experiment. This fits very well with how science works. A sign of disagreement is when we hear that someone has an ‘opinion’ about an issue. This should be the signal for further exploration, discussion and debate rather than simply a ‘tennis match’ kind of argument.

Where does this leave us as social scientists? We are unlikely to have laws in the way that physicists have laws, but we have truths, even if they are temporary and approximate. We should recognise that research is a constant exploration in a context of mutual tolerance – our version of intersubjective communication. We should be suspicious of the newspaper article which begins ‘research shows that …’ when the ‘research’ quoted is a single sample survey. We have to tread a line between offering knowledge and truth on the one hand and recognising the uncertainty of our offerings on the other. This is not easy in an environment where policy makers want to know what the evidence is, or what the ‘solution’ is, for pressing problems, and would like us to be more assertive than we might feel comfortable with. The nuances of language deployed in our reporting of research become critical.

Alan Wilson

April 2016

37: The ‘Leicester City’ phenomenon: aspirations in academia.

Followers of English football will be aware that the top tier is the Premier League and that the clubs that finish in the top four at the end of the season play in the European Champions League the following year. These top four places are normally filled by four of a top half-dozen or so clubs – let’s say Manchester United, Manchester City, Arsenal, Chelsea, Tottenham Hotspur and Liverpool. There are one or two others on the fringe. This group does not include Leicester City. At Christmas 2014, Leicester were bottom of the Premier League, with relegation looking inevitable. They won seven of their last nine games that season and survived. At the beginning of the current (2015-16) season, the bookmakers’ odds on them winning the Premier League were 5000-1 against. At the time of writing, they top the league by eight points with four matches to play. The small number of people who might have bet £10 or more on them last August are now sitting on a potential fortune.

How has this been achieved? They have a very strong defence and so concede little; they can score ‘on the break’, notably through Jamie Vardy, a centre forward who not long ago was playing for Fleetwood Town in the nether reaches of English football; they have a thoughtful, experienced and cultured manager in Claudio Ranieri; and they work as a team. It is certainly a phenomenon and the bulk of the football-following population would now like to see them win the League.

What are the academic equivalents? There are university league tables, and it is not difficult to identify a top half-dozen. There are tables for departments and subjects. There is a ranking of journals. I don’t think there is an official league table of research groups, but there are certainly some informal ones. As in football, it is very difficult to break into the top group from a long way below. Money follows success – as in the REF (the Research Excellence Framework) – and facilitates the transfer of the top players to the top group. So what is the ‘Leicester City’ strategy for an aspiring university, an ambitious department or research group, or a journal editor? The strong defence must be about having the basics in place – good REF ratings and so on. The goal-scoring break-out attacks are about ambition and risk-taking. The ‘manager’ can inspire and aspire. And the team work: we are almost certainly not as good at that as we should be in academia, so food for thought there.

Then maybe all of the above requires, at the core – and I’m sure Leicester City have these qualities – hard work, confidence and good plans, while still being creative; and a preparedness to be different – not to follow the fashion. So when The Times Higher has its ever-expanding annual awards, maybe they should add a ‘Leicester City Award’ for the university that matches their achievement in our own leagues. Meanwhile, will Leicester win the League? Almost all football followers in the country are now on their side. We will see in a month’s time!

Alan Wilson, April 2016