In my early days as CEO at the Turing, I was confronted with an old challenge in a new language: pure vs applied, recast as, for example, foundational vs consultancy. From my schooldays onwards, I was always aware of the higher esteem associated with the ‘pure’ and indeed leaned towards that end of the spectrum. Even when I started working in physics, it was in ‘theoretical physics’. It was when I converted to the social sciences that I realised that, in my new fields, I could have it both ways: I worked on the basic science of cities through mathematical and computer modelling, but with outputs that were almost immediately applicable in town and regional planning. So where did that kind of thinking leave me in trying to think through a strategy for the Institute?
Oversimplifying: there were two camps – the ‘foundational’ and the ‘domain-based’. Some of the former would characterise the latter as ‘mere consultancy’. There were strong feelings. However, there was a core that straddled the camps: brilliant theorists, applying their knowledge in a variety of domains. It was still possible to have it both ways. How to turn this into a strategy – especially given that the root of a strategic plan will be the allocation of resources to different kinds of research? In the relatively early days – it must have been June 2017 – we had the first meeting of our Science Advisory Board, and for the second day we organised a conference, inviting the members of our Board to give papers. Mike Lynch gave a brilliant lecture on the history of AI through its winters and summers, with the implicit question: will the present summer be a lasting one? At the end of his talk, he said something which has stuck in my mind ever since: “The biggest challenge for machine learning is the incorporation of prior knowledge”. I would take this further and expand ‘knowledge’ to ‘domain knowledge’. My intuition was that the most important AI and data science research challenges lay within domains – indeed that the applied problems generated the most challenging foundational problems.
Producing the Institute’s Strategic Plan in the context of a sometimes heated debate was a long-drawn-out business – taking over a year as I recall. In the end, we had a research strategy based on eight challenges, six of which were located in domains: health, defence and security, finance and the economy, data-centric engineering, public policy, and what became ‘AI for science’. We had two cross-cutting themes: algorithms and computer science, and ethics. The choice of challenge areas was strongly influenced by our early funders: the Lloyd’s Register Foundation, GCHQ and MoD, Intel and HSBC. Even without a sponsor at that stage, we couldn’t leave out ‘health’! All of these were underpinned by the data science and machine learning methods toolkit. Essentially, this was a matrix structure: columns as domains, rows as methods – an effective way of relaxing the tensions, of having it both ways. This structure has more or less survived, though with new challenges added – ‘cities’ for example, and the ‘environment’.
When it comes to allocating resources, other forces come into play. Do we need some quick wins? What is the right balance between the short term and the longer term – the latter inevitably more speculative? Should industry fund most of the applied work? All of this has to be worked out in the context of a rapidly developing Government research strategy (with the advent of UKRI) and the development of partnerships with both industry and the public sector. There is a golden rule, however, for a research institute (and for many other organisations, such as universities): think through your own strategy rather than simply ‘following the money’, which is almost always focused on the short term. Then, given the strategy, operate tactically to find the resources to support it.
In making funding decisions, there is an underlying and impossible question to answer: how much has to be invested in an area to produce results that are truly transformative? This is very much a national question, but there is a version of it at the local level. Here is a conjecture: transformative outcomes in translational areas demand the funding of a much larger number of researchers than do such transformations in foundational areas. This applies very much to the ‘research’ end of the R and D spectrum – I can see that the ‘D’ – development – can be even more expensive. So what did we end up with? The matrix works and at the same time acknowledges the variety of viewpoints. And we are continually making judgements about priorities and the corresponding financial allocations. Pragmatism kicks in here!