UCL Uncovering Politics

Military Technology and Intelligent Warfare

Episode Summary

This week we explore the role of military technology in modern warfare.

Episode Notes

Despite Putin's expectation of a swift victory, over one year on from his full-scale invasion of Ukraine, that country's defenders are still fighting – and, indeed, fighting back. 

One important area in which Ukraine has managed to stay ahead of Russia is in military technology.

A new report from the Tony Blair Institute for Global Change examines the role of military technology in the Russia–Ukraine war, and considers the lessons that can be learnt from it. 

One of the authors, Dr Melanie Garson, Associate Professor in International Conflict Resolution & International Security here in the UCL Department of Political Science, and also Acting Director of Geopolitics and Cyber Policy Lead at the Tony Blair Institute, joins us today to discuss intelligent warfare, military technology and AI.

 

Mentioned in this episode:

Software and Hard War: Building Intelligent Power for Artificially Intelligent Warfare, by Melanie Garson, Pete Furlong and Jeegar Kakkad (Tony Blair Institute for Global Change)

Episode Transcription

SUMMARY KEYWORDS

thinking, technology, Ukraine, people, military, countries, ecosystems, AI, tech, talked, UCL, autonomous weapons systems, questions, sorts, defence, part, Melanie, work, innovation, communications infrastructure

SPEAKERS

Melanie Garson, Alan Renwick

 

Alan Renwick  00:06

Hello. This is UCL Uncovering Politics. And this week we explore the role of military technology in modern warfare.

 

Hello. My name is Alan Renwick. And welcome to UCL Uncovering Politics – the podcast of the School of Public Policy and Department of Political Science at University College London. 

 

Despite Putin's expectation of a swift victory, over one year on from his full-scale invasion of Ukraine, that country's defenders are still fighting – and, indeed, fighting back. 

 

One important area in which Ukraine has managed to stay ahead of Russia is in military technology. 

 

And a new report from the Tony Blair Institute for Global Change examines the role of military technology in the Russia–Ukraine war and considers the lessons that can be learned from it. 

 

Well, one of the authors is our very own Dr Melanie Garson, Associate Professor in International Conflict Resolution & International Security here in the UCL Department of Political Science, and also Acting Director of Geopolitics and Cyber Policy Lead at the Tony Blair Institute. 

 

Regular podcast listeners may remember our episode with Melanie last November when we focused on the role of global tech companies in Ukraine. 

 

And I'm delighted to say that Melanie joins me now to continue the conversation. 

 

Melanie, welcome back to UCL Uncovering Politics. 

 

And I thought it would be good to start with just a bit of background on what the Tony Blair Institute actually is. So what is it? What are its aims? What sorts of questions is it asking?

 

Melanie Garson  01:50

So the Tony Blair Institute for Global Change works across over 30 countries. We are just under 800 people working quite closely to help equip leaders with radical but practical plans for governing as a whole. Underpinning that is thinking about where tech sits within it: most politicians in many countries treat technology as a side issue, and that presents real dangers. So how do we really make sure that it's part of the transformational change of what will be the 21st-century strategic state?

 

Alan Renwick  02:36

And do you want to give us a flavour of the sorts of issues that the Institute is working on at the moment beyond your own area of focus?

 

Melanie Garson  02:43

Absolutely. I mean, we work on everything. In politics: what's an optimistic political vision for the strategic state, and what are the outcomes it wants? In economics and public financing: how do we fund and support that kind of state? In climate and energy policy: how can the strategic state achieve Net Zero? In health: how do we shift to preventative and personalised models of medicine that improve quality and quantity of life? Definitely on the science and tech side, where we help people generate the ideas to build the companies that help countries flourish. And in geopolitics: thinking about how we can really build that open, interconnected and secure world where each human can prosper.

 

Alan Renwick  03:30

And we'll go on in just a moment to talk about a recent report that you've co-authored. 

 

But do you want to give us a sense of your own overall research programme and the sorts of issues that you're looking at?

 

Melanie Garson  03:42

Yep. So my work really sits at the heart of what I call 'tech geopolitics', and particularly cyber. So thinking about that secure, interconnected world, and whether the internet ecosystem is fit for purpose for everything that we need to overlay on top of it. 

 

And from a geopolitical perspective, that now involves everything from the subsea cables that run across all parts of the world – which we hardly think about at all – all the way up to space, where satellites are now part of that communications infrastructure, and everything in between. So it's about making sure that those services are operational, but also that the system is, if you like, clean like running water: that the information ecosphere is fit for the kind of democratic politics that we need, and that our economies are protected from the kind of cyber threats people usually have in mind when they say the word 'cyber'. So it's a rather large and ever-expanding surface area that we think about.

 

Alan Renwick  04:54

Sounds like you have your work cut out – lots of big and important issues there. 

 

So let's focus in then on military technology. And this may seem like an obvious question, but I think it's useful to begin with: what actually is military technology? What are we talking about here? What kinds of things are included within that concept?

 

Melanie Garson  05:13

It's really interesting because in some ways that nexus has got a lot smaller. 

 

So military technology, generally, we think about as the application of technology for use in warfare. And we usually think of that as tech that doesn't have a commercial application, or that lacks any sort of usefulness in civilian life, or that you need specifically military training to use. 

 

But as I said, that line is now super blurred, because we're thinking a lot about dual-use technologies. And dual-use technologies have always existed. Think about a knife: it's easy to see how it could be used to kill someone or to cut your cucumbers. On the other hand, when the sword and its various uses came along, its effectiveness was really determined by the extent of your training and expertise in using it.

 

Alan Renwick  06:11

And in terms of concrete examples, you focus quite a lot in the paper on drones and these sorts of things. So unmanned – what's the phrase – unmanned aerial vehicles? Is that the right phrase? We know drones very well from civilian life, as you suggest, as well as from the world of warfare.

 

Melanie Garson  06:33

Yeah, exactly. 

 

And it's the extent to which these technologies take on a different function on the battlefield, with different rules of engagement than they would have in civilian life, and obviously the extent to which they can be weaponised – particularly drones themselves. And that's been in the news over the last 24 hours, with the question of whether Ukraine or the US sent drones to kill Putin.

 

Alan Renwick  07:03

Yeah, absolutely. 

 

We should explain to listeners that we're recording this a few days ahead of actually issuing the podcast. So yes, the – at least alleged – attack on the Kremlin took place just before we recorded.

 

Yeah, one of the things I found very interesting in the report was that you talk quite a lot about the role of hobbyists, and how they've been quite involved in the war in Ukraine – the Ukrainian state has been able to draw on the expertise of people whose hobby it is to fly drones and do other such things. Do you want to talk a little bit about that? Because I guess it also speaks to the crossover aspect of technology.

 

Melanie Garson  07:43

Yeah, it's been a really interesting aspect. If we're thinking in the cyber domain, a lot of people originally thought about the hacktivists who joined the army. But it's about having the agility to draw on this platform of people engaged in these ecosystems. And this is the distinction we draw with Russia, where everything has been very formalised and centralised, and the nexus between civilian and military technology is very closed. When the same small circle of people is always involved with the state, it makes it difficult to grow that exponentially. 

 

Whereas in Ukraine, they've really been able to integrate these different flows of both civilian and military tech – and the people themselves – bringing the private companies and the people who have trained on some of these systems into the army. Often those people are actually the more expert users: the hobbyists and the commercial developers of the technology have been using these systems more than the military has.

 

Alan Renwick  09:00

Okay. We know what we're talking about then.

 

Do you want to sum up the questions you're asking in this paper about military technology? What are the core issues that you're trying to drive at?

 

Melanie Garson  09:10

One of the things that we're trying to drive at in this paper – thinking about what I call 'intelligent power' in what we call an 'artificially intelligent age' – is how modern militaries have to rethink where the development of their technology is happening, the agility of their supply and how they're going to access these technologies, and what the technologies actually rely on, which includes the communication networks. So defence and strategic stability are no longer just about how much you can spend your way out of it; they're also about fostering an environment where the military can act coherently in all the different types of warfare – physical, cognitive and virtual. And that needs a greater joint conceptualisation with commercial entities or, if you like, the hobbyist individuals working on this. So it's thinking forward: if you're looking at what the army of 2040 is going to look like, who do you need to be sitting with right now in order to develop that?

 

Alan Renwick  10:30

And what's the answer to that?

 

Melanie Garson  10:33

Well, we've seen this. 

 

So the US Department of Defence has been working on something called Project Convergence, which really brings together leaders from data governance, from AI, from across the technologies, to be a lot more agile. 

 

NATO has opened the first of what they call their DIANA labs – the Defence Innovation Accelerator for the North Atlantic – which is about building a defence and innovation ecosystem that brings the two together. Sadly, that has not been opened at UCL; the first lab has actually been opened over at Imperial. 

 

It's really about beginning to host those sorts of ecosystems – those campuses – that bring the military, the commercial and the academic together into one space to do that joint thinking, so that people are on the same page about where things go from here.

 

Alan Renwick  11:23

And are there lessons from the war in Ukraine that can helpfully be learned about just how to do this effectively and what needs to be done?

 

Melanie Garson  11:31

Ukraine has been particularly successful in being able to integrate all the different strands quite quickly. On one level, they've had strands of different weaponry coming to them, as the allies have provided it from all sorts of different streams, and they've had to synthesise those together quite quickly and train people to use them. 

 

Part of that has happened because they were quite a tech-forward nation before the conflict. And access to that nexus has been vastly different from what we've seen in Russia, where there have been instances of them actually having to order some of the commercial parts for their weaponry from the likes of AliExpress. 

 

So having those supply chains and those clear flows is going to be really important in thinking about where you can integrate them. And again, what has been critical – and what we talked a little bit more about on the last podcast – is the communications system, because all of this runs via either internet systems or the communications network, and you have to be able to rely on that being clean and operational for all these technologies to function.

 

Alan Renwick  12:45

Interesting. So, okay. So you're suggesting that military technology is very important in modern warfare. And you're suggesting that in order to develop that effectively it's important to have a forward-looking attitude in the country towards technology and to have cooperation between government, between researchers, between the commercial private sector, all driving innovation forward in this area. 

 

What are the risks that governments and others need to be aware of in thinking about how to do this effectively?

 

Melanie Garson  13:20

Well, there's always that risk. I think with any of these technologies, the key thing is actually the tension between the policy and the innovation, where there's sometimes a disjunct – particularly with emerging technologies, and particularly where the private sector may have the advantage on some of these technologies before the military, and certainly before the policy catches up. The danger is policy made in fear that clamps down on the innovation too quickly. 

 

But that's why it becomes even more important to have these close-knit tech ecosystems. A close-knit ecosystem that allows that mutually beneficial circulation between the sectors, with that kind of transparency, will allow for greater agility on a hyper-connected battlefield. 

 

So we see countries like that – Israel, for instance – where the close nexus between the military and commercial tech companies, and that increased collaboration, has given an advantage on the future battlefield. Estonia has also leveraged that kind of cooperation. For smaller countries in particular, having that gives a distinct advantage.

 

Alan Renwick  14:46

It's interesting. I'm a total non-expert in this area – I know absolutely nothing about it at all. But I hear you talking about what facilitates innovation. And we tend to think that innovation is advanced through having a very open culture: lots of people involved, people coming and going, all of this kind of thing. And then when I think about military technology, I think: well, you've got to be very careful with that; lots of it is secret; you've got to be very sure that you know exactly who knows what, and that the wrong people don't find things out. 

 

So it feels to me like there's a tension there between, on the one hand, the needs of innovation, and on the other hand, the needs of security. Is that correct? And if so, how can you deal with it effectively?

 

Melanie Garson  15:37

At the heart of this is that these are all – or most of them are – dual-use technologies. I mean, even nuclear is a dual-use technology. So these are all going to be used in multiple ways, and we have to think about how they're enabled in different spheres when we bring them together. Obviously, there are the legal ways of handling that – the contracts and the NDAs and how it's all brought together. And yet, as always in any system – and we saw this with the Discord leak – sometimes someone goes rogue and leaks all sorts of information and papers. Sometimes that does happen. 

 

But in the end, the military aspect is how the technology is then enabled and how it's strategically used. And that bit can still be kept, to a point, secret. You're not necessarily sharing your strategy, but you are having a closer conversation to make sure that you're absolutely at the edge of the technology being developed – and that it's being developed responsibly. Because sometimes it's a two-way street: sometimes the military might see the dangers in some of these technologies before the commercial entities do. 

 

Alan Renwick  16:59

And so you've talked about the kind of ecosystem that you need to get going. And you've talked a little bit about some places that do this well. So Ukraine does it at least better than Russia. You've talked about Israel doing this relatively well as well. 

 

Is it possible to kind of define the conditions in which things are going to work well? Is it possible to kind of take a step further back and understand, you know, what do the politicians need to do, what do the commercial actors need to do? Are there particular background factors that are important in determining whether a country is successful in this area?

 

Melanie Garson  17:34

Yeah. I think this starts at the political level – and really goes through the whole of society – in how we think about security and defence. Even stepping back from military technology, it's really a small part of what we should think about as security and defence for a nation going forward. 

 

But at the highest political level, we're going to have to think about how a country uses tech to transform and modernise. And a big part of that is going to be investing in tech and AI-era infrastructures: moving away from very siloed and duplicative structures towards much more shared and interoperable infrastructure, and thinking about the platforms that can share that sort of learning and agility. Really thinking about being more agile, responsive and targeted – things that governments and countries aren't usually great at. 

 

But behind it, I think these partnerships really need to have a clear purpose – so that you have that greater appetite for risk and innovation, but also bring in the greater expertise and links between government, military and the private sector that can really inform the direction going forward.

 

Alan Renwick  19:03

I should, of course, ask: we're talking here about military technologies and the use of AI in war. We've heard a lot in recent weeks about concerns about artificial intelligence and how it might have potentially highly detrimental effects on the future of humanity. And many of the most lurid future scenarios that get talked about relate, of course, to the role of AI in warfare. 

 

How worried should we be and what particularly should we be focusing our worries on?

 

Melanie Garson  19:39

There is a lot out there at the moment, and generative AI has certainly been a game changer – though not one that was unanticipated. I think it was Craig Martell, in the US, who was talking this week about generative AI, particularly around disinformation. It wasn't just a question of what it can create, but also the way it sounds authoritative in what it's doing. So it opens a lot of questions about how we use it and where we look, but also about whether it's overhyped, and where we can spot it. 

 

And it comes back to the development of any of these technologies – we've said they're dual use in a sense. I always say: with every capability comes great vulnerability. We always have to think about where the vulnerabilities are within everything that we build, and then try to pre-think them. And often I'd say they're also dual use from the other side, in the sense that they can be both good and bad, right? People can use them for good or they can use them for ill. 

 

And the irony in a lot of this tech development, from the internet itself onwards, is that the people who create the tech – amazing, great thinkers, massive optimists – are usually trying to solve a great problem, and they never anticipate when they create it that somebody is going to do something awful with it. That's a great strength in the people thinking this through. But there's a tension: we don't want everybody always thinking about every awful application of something, because that will constrain innovation. 

 

So it's about asking where in the system we need to promote responsible development and use. Where is it being used? Do we know what's in it? Thinking about things like a software bill of materials: do we know what's in the black box? Do we teach people to use things responsibly? And this is a lot of what we do at TBI in thinking about the strategic, tech-enabled state: taking that step back and saying, okay, what do we have to build into the system so that this actually works for us and with us?

 

So there are lots of interesting questions about human-machine teaming. And I heard a useful quote recently about generative AI in particular: 'it's not like having the CEO answer your question, it's like having 1,000 interns do it for you'. I think that's what people forget about it: it is a machine, a set of processes – it really is like the 1,000 interns doing the work. But at some point, where do you take over? The danger is that people feel it's authoritative. To me, it's a bit more like a Magic 8 Ball.

 

Alan Renwick  22:59

Yeah – that metaphor of the 1,000 interns. One of the worries that many people have is that we end up with 1,000 robots, or 100,000 robots, that are armed and at some point turn against us. Or they get into the hands of a malign actor who isn't terribly concerned about the long-term consequences down the road and isn't interested in building a system that ensures these technologies are used for good rather than for ill. And suddenly we lose control – and it's very difficult to get control back. Are these realistic concerns?

 

Melanie Garson  23:37

It depends what books you read.

 

There is always that tension. And I think one of the great myths of AI is when people anthropomorphise it as ultimately 'the great AI', rather than really thinking about what it's made up of at this point. We have machine-learning processes, and we have processes that add in a set element of autonomy – which is what people think about when they begin to think about killer robots, and which raises the real questions about autonomous weapons systems. 

 

It's something the House of Lords is considering at the moment in the UK. And the US has just released a review of what we mean by autonomous weapons systems and where the responsibility has to sit. The question is how we make sure we maintain the human – the human interaction – at the point of decision-making in the use of these weapons. 

 

We already have autonomous weapons systems, and a lot of them are used for defensive purposes. Many of the defensive shields and anti-missile technologies are functionally autonomous – or, technically, semi-autonomous, with a very small window in which a human can intervene and stop the process. But they're reactive. 

 

With the more offensive capabilities, we have to think about where we draw the line, and how we make sure that the norms of responsible development and responsible use require humans to be part of the actual decision-making process.

 

There are also questions about how much different levels of autonomy will drive down the cost of war, making war more accessible. There's a lot that we still have to unpack. 

 

We do worry about the malign actors – the ones that aren't part of the responsible normative framework – but most countries are still taking a measured approach. Obviously, this is military technology, so you come back to 'is it a black box?', and you don't know what people will do with it. But it points more to the dangers of geopolitical factors: where does the intersection with geopolitics really affect what we worry about in the tech? It's the geopolitical fragmentation that we have right now that worries us more, because that's going to affect our trust in how the tech is being used. So it's a question of where we turn our attention and what we fix first.

 

Alan Renwick  26:35

I'm not sure whether I'm reassured or not. 

 

We're coming close to the end, so let me just ask you one final wrap-up question. What are your top policy recommendations in light of all the work that you've done and of this new report? What are the key things that you would want to get across to policymakers today?

 

Melanie Garson  26:59

I think there's still a big question – similar to what we talked about previously – about where the private companies that control parts of the whole communications infrastructure sit in this wider picture, because that will materially affect our use of this technology. We've got companies that are part of the development of it, but also companies that are part of the use of it. 

 

A good example we touch upon in the paper is Starlink, which was hailed as backing up the communications infrastructure in Ukraine, materially tipping the balance of power in the war. Then Starlink stepped back and said: well, hang on a minute, we didn't realise that same communications infrastructure was being used as part of the drone-warfare aspect of the conflict; we're a commercial company, and we're not set up for this. 

 

So there are a lot of questions here: private companies still need to understand the geopolitical implications of where their tech is being provided. That's not to say they shouldn't provide it, but they're going to need greater transparency in thinking about where they're intervening and why. 

 

Look at what's happening in Sudan today – if anyone's following internet connectivity and access there, where it's blocked and where it's not. Do we see the same kind of response in a much more complex conflict, where any commercial entity might think: well, do we get involved now that it's really messy? The principled question is: they went into one conflict, so should they step into another, and how do we rationalise that? 

 

So I think having greater transparency, given the amount of power that is concentrated privately in that infrastructure, is really, really important. And building these close-knit tech ecosystems, not just for military tech but for the whole of society – and that goes for education, that goes for the environment – because that is all part of what national security and economic security mean in today's world. And making sure that's really even-handed. 

 

You've talked about AI. We're doing a project – we could talk about it on another podcast – on access to the future of compute, which is about that massive amount of computing power. We've touched on generative AI, which relies on large language models that require a huge amount of compute, and not every country is going to be able to build that. So do we have the potential for a new digital divide? Or do we have to think really cleverly about how we provide countries with access to the compute power they need for data-driven decision-making and for all the aspects of technology that can really be used for good? 

 

And then there's thinking about how the international community comes together around the supply chains behind all this technology. One of the things we raise in this paper is that Ukraine was successful in having an agile supply chain from multiple sources that it was able to integrate really quickly. That was because of the relationships it had and the way that NATO worked together. That might not be the case for every country in every system – and not just in a geopolitical crisis that's a conflict, but maybe in one driven by a volcanic eruption or another environmental event. So how do we think about the technology supply chain for critical needs? Are we, as an international community, set up to underwrite that beyond Ukraine? What are the lessons we take away and carry forward?

 

And it's going to be really interesting to see how things like DIANA – the innovation accelerator – turn out, and what their impact is in the future. But certainly on the cyber side, Ukraine has benefited from multiple nations – through the likes of the Cyber Centre of Excellence – all working together, providing advice and strengthening Ukraine through that collaboration. 

 

So increased collaboration, cooperation and information sharing is actually the key to addressing some of the fears behind our dystopian visions of where AI, or any emerging tech, can go.

 

Alan Renwick  31:43

So if I can sum all of this up in one phrase, it feels like fostering close-knit tech ecosystems – one of the things you said there – is absolutely fundamental to all of this: making sure that the relevant people in government, the private sector and research, across the globe so far as possible, are speaking with each other, engaging with each other, thinking through these issues collaboratively and collectively. That's the key thing.

 

Melanie Garson  32:11

Yep. It's moved fast. And, you know, sometimes the philosophy is 'move fast and break things'. So we might have to move a little slower and keep it all together, but make sure that we still get to the outcome. It's about stepping back and having that picture: what's the outcome we want here, and are we building towards it?

 

Alan Renwick  32:34

Well, fantastic. Melanie, thank you so much. It's great to have you back. 

 

And, as you say, I think we'll need to have you back several times – there are huge potential transformations coming down the line as a result of rapidly developing AI in so many different aspects of politics. We've focused here on military technology, but there are so many other aspects of politics and society that are going to change over the coming years. So we will definitely have you back very soon. 

 

Thank you for coming on.

 

Melanie Garson  33:05

Thank you for having me.

 

Alan Renwick  33:06

We've been looking at Dr Melanie Garson's paper for the Tony Blair Institute co-authored with Pete Furlong and Jeegar Kakkad, entitled Software and Hard War: Building Intelligent Power for Artificially Intelligent Warfare. As ever, the details of the paper, including a link, are in the show notes for this episode. 

 

Next week, coinciding with the International Day Against Homophobia, Biphobia and Transphobia on the 17th of May, we will be discussing LGBTQ rights activism in Tunisia and escalating repression of LGBTQ communities in parts of the Middle East and North Africa. 

 

Remember, to make sure you don't miss out on that or other future episodes of UCL Uncovering Politics, all you need to do is subscribe. You can do so on Apple, Google Podcasts or whatever podcast provider you use. And while you're there, we'd love it if you could take a moment to rate or review us too.

 

I'm Alan Renwick. This episode was researched by Alice Hart and produced by Eleanor Kingwell-Banham. Our theme music is written and performed by John Mann. 

 

This has been UCL Uncovering Politics. Thank you for listening.