UCL Uncovering Politics

Twitter, the Online Safety Bill, and Free Speech

Episode Summary

This week we're looking at Twitter, the Online Safety Bill, and the limits of free speech. Is it a good thing that Twitter is promoting free speech – or would more regulation be better? How much of a problem is disinformation for society and democracy? Might there even be a moral duty for social media platforms – or the state – to tackle disinformation and otherwise harmful speech?

Episode Notes

Two current news stories raise important questions about online speech, and how it should be regulated. 

First, Twitter has been taken over by Elon Musk, who has slashed staff numbers, allowed previously barred users – not least, Donald Trump – to return, and pledged a new era of free speech and less regulation. Some claim that as a result, Twitter has seen a deluge of disinformation and hate speech.

In the UK, meanwhile, the Online Safety Bill is making its way through parliament. This was originally intended in part to protect democracy against disinformation. But these provisions have now largely been stripped out, weakening the protections it will provide.

This week we are joined by Dr Jeff Howard, Associate Professor in Political Theory here in the UCL Department of Political Science, an expert in free speech and on the ethics of online speech.

Episode Transcription

SUMMARY KEYWORDS

speech, misinformation, platforms, free speech, twitter, disinformation, falsehoods, bill, people, content, cases, duty, kinds, musk, space, harms, context, responsibility, views, false

SPEAKERS

Emily McTernan, Jeffrey Howard

 

Emily McTernan  00:05

Hello. This is UCL Uncovering Politics. And this week we're looking at Twitter, the Online Safety Bill, and the limits of free speech.

 

Hello. My name is Emily McTernan. And welcome to UCL Uncovering Politics – the podcast of the School of Public Policy and Department of Political Science at University College London. 

 

Two current news stories raise important questions about online speech and how it should be regulated. 

 

First, Twitter has been taken over by Elon Musk, who has slashed staff numbers, allowed previously barred users – not least, Donald Trump – to return, and pledged a new era of free speech and less regulation. Some claim that as a result, Twitter has seen a deluge of disinformation and hate speech.

 

In the UK, meanwhile, the Online Safety Bill is making its way through parliament. This was originally intended in part to protect democracy against disinformation. But these provisions have now largely been stripped out, weakening the protections it will provide. 

 

What should we think about these developments? Is it a good thing that Twitter is promoting free speech, or would more content regulation be better? How much of a problem is disinformation for society and democracy? Might there be a moral duty for social media platforms – or for the state – to tackle disinformation and otherwise harmful speech? 

 

There is no better person to explore such questions with than Dr Jeffrey Howard, Associate Professor in Political Theory here in the UCL Department of Political Science. Jeff is an expert in free speech and on the ethics of online speech. He currently holds a UKRI Future Leaders Fellowship, leading a cross-disciplinary research project on the ethics of content moderation on social media and the future of free speech online. And I'm delighted that Jeff joins me now. 

 

Jeff, welcome back to UCL Uncovering Politics. 

 

Jeffrey Howard  02:20

Thanks so much, Emily. Great to be with you. 

 

Emily McTernan  02:23

Let's start by digging a little bit further into two real-world developments that have prompted our discussion today, focusing on Twitter first. Listeners will be familiar, of course, with the general story of Elon Musk's takeover. But what are the features of this saga that you want us to think about?

 

Jeffrey Howard  02:40

Well, we know that Twitter was taken over by Musk in late October of this past year. [He] got it for a heavy price of $44 billion. And the first thing he did was fire the top three executives at Twitter. And then he started doing a lot of layoffs, laying off about half of the company's nearly 8,000 employees. 

 

And a lot of the layoffs arose in the trust and safety space. So these were the people who were working on the problems of content moderation. And by content moderation, we mean the systems that platforms have in place that decide what kind of speech is allowed on the platform, what kind of speech isn't allowed on the platform, and then deploy a combination of both human teams and artificial intelligence systems to take down the speech that isn't allowed on the network. 

 

And Elon Musk has been saying for a long time that he thinks Twitter restricts too much speech. He very much sees Twitter as the public square. And so it's not especially surprising then that when Elon Musk took over Twitter, one of the first things he did was really try to open up the network. And as part of that he brought back a lot of accounts that had been previously banned: he brought back Donald Trump onto Twitter; he brought back accounts that had been suspended for violating policies on hate speech and misinformation. And so it's no surprise that we've seen various reporting that hate speech [and] misinformation are on the rise on that network.

 

Emily McTernan  04:01

Great. And how has the user experience been affected? Are there complaints from the users that they're seeing a lot more of this material? Or are people actually happy to see free speech returned to the public square as it is online?

 

Jeffrey Howard  04:16

Well, I think it depends on who you ask. There certainly are a lot of allegations, a lot of anecdotes, about increased rates of abuse [and] harassment on the platform. On the other hand, mostly in the American context, those on the political right are seeing this as a kind of triumph against what they see as a wave of efforts by big tech to suppress conservative speech that company executives dislike. I don't agree with that characterisation of the situation. 

 

I think given that Musk himself holds different views about what the limits of free speech are than the previous team at Twitter, it would be very surprising if we didn't see an upsurge in the kinds of speech that had previously been restricted. Certainly, one of the things to note in this context is that advertisers themselves have found Elon's developments in this space to be deeply unpopular. And so a lot of advertisers have fled the platform, putting Twitter in really difficult financial shape. And so that reminds us that, as interesting as these lofty academic discussions are about the limits of free speech, Twitter is fundamentally a business. And so there is always a business case for engaging in this kind of content moderation as well, which is that users don't want these networks to be a kind of swamp saturated with foreign troll bots and misinformation and hate. And it certainly hasn't become that horrible under Musk. But there has been a flight of advertisers. And by the way, an interesting exodus of people to alternative platforms like Mastodon, which are a very different kind of product. And so I think it's a really, really interesting development. 

 

And I think what people are watching now is: will Twitter actually survive financially? And you'll need to ask our friends in the UCL Management School for the answer to that question. But it's not looking great financially, that's for sure. And Elon's views about free speech seem to be at the core of why that is.

 

Emily McTernan  06:06

Fascinating. And we're going to drill down into those questions about free speech and disinformation, and how to think about the space there in moral terms, soon. 

 

But first of all, let's just get the Online Safety Bill onto our table as well. Could you tell us a bit about the ways in which this Bill might change the online environment and whether you think it will do enough to handle disinformation that we find on these platforms?

 

Jeffrey Howard  06:29

Absolutely. So the Online Safety Bill, previously known as the Online Harms Bill, has been under discussion in this country for the past several years. Under the new government, it's been picked up again after it was put on ice for a while, and there have been a series of recent changes to modify the Bill. But now it really looks like it's going to come to fruition: it's being worked through the House of Lords at present, and I wouldn't be surprised if we saw the Bill enacted into law later this spring. 

 

And the Bill is a quite radical piece of legislation in that it does subject the social media companies to all sorts of accountability. So the Online Safety Bill is going to undertake a number of different measures. One of the things it's going to do is make platforms accountable for removing varieties of speech that are currently illegal. One of the ways it's going to do this is by expanding the scope of what kind of speech is illegal. So part of the revised version of the Online Safety Bill is going to include provisions for criminalising speech that directly encourages self-harm. Certain forms of false communications will also be criminalised under the Bill. The Bill is also going to hold platforms accountable for keeping children safe, and so there'll be a series of stringent measures – potentially stringent measures – put in place to ensure that there are age-gating systems, so that children aren't accessing more mature content that they're not allowed to see, that they're not supposed to see under the platform rules. And further, the Bill is going to ensure that platforms are actually held accountable for enforcing their own terms of service – the policies that they claim to have. 

 

And so I think these measures are pretty radical in that it's going to be the first time, in this country anyway, that these platforms have been subjected to serious accountability in terms of their content moderation systems.

 

Emily McTernan  08:11

So in the Bill, what kinds of false communications are going to be banned?

 

Jeffrey Howard  08:16

Right, so there's a new offence called the 'false communications offence', and the basic thought of the offence is that a person commits the offence if they send a message, and the message conveys information that the person knows to be false, and that at the time of sending it, the person intended the message – or the information in it – to cause non-trivial psychological or physical harm to a likely audience. Okay. And finally, the person has no reasonable excuse for sending the message. So that's the new provision. 

 

And what's interesting here is that in the previous version of the Bill, there were requirements that the big platforms take action against content that was deemed so-called 'legal but harmful', sometimes referred to as 'lawful but awful' content. And the basic idea there is that some speech on social media isn't worthy of being criminalised in individual cases, but that when the platforms aggregate and amplify the speech, and flood it into particular echo chambers, then it becomes truly dangerous. And so we don't want to criminalise individual instances of the speech, but we do want to hold the platforms accountable for their role in amplifying and aggregating that speech. 

 

So under the previous version of the Bill, platforms – the big platforms, anyway – had a special set of duties to implement risk assessments for the kind of speech that, while legal, became harmful when spreading across the platform. And one of the new developments in response to free speech concerns by the current Conservative government, under Rishi Sunak, has been to eliminate that provision of the Bill. So there are no longer these risk assessments that are required for speech that is legal but harmful. One way to kind of mitigate- Yeah-

 

Emily McTernan  10:01

Right. Could you give us an example of this 'lawful but awful' speech? What kind of thing do people have in mind? Was it COVID denial, was it...?

 

Jeffrey Howard  10:09

So COVID denial absolutely falls into that bucket. Outside the misinformation context, you might think about speech that glorifies or promotes self-harm, so you might think of speech that casts anorexia or bulimia in a positive light. And so without these provisions, the platforms just aren't going to be held accountable for this kind of speech, except that if the speech is already banned by the platform, it's going to be held to its promises. But of course, the platform could just change its mind overnight and decide, as we've seen at Twitter, to roll back how stringent some of these categories are. 

 

Now one of the interesting things about the move to take away these so-called 'adult safety duties' to do risk assessments on legal but harmful content is that they've introduced these new offences, and they basically rejected the original thought, which was, 'well, some stuff shouldn't be criminalised in individual cases because it's only genuinely harmful when aggregated and amplified across the platform, and so you really just want to hold the platform accountable', and now they're saying, 'no, no, let's criminalise it in the individual cases, at least in some of these instances'. And so here's one of these instances where we've been talking about speech that conveys false information with some intent to cause non-trivial psychological or physical harm. There are new offences regarding explicitly encouraging self-harm that also fall into this general category. 

 

And one of the things to say about this is, you know, I don't think it's a manifestly crazy offence; I don't think it's fundamentally unjust to have a new offence like this on the statute books. But it is instructive that it doesn't go after the kind of systemic harms that the Bill tried to deal with at the outset.

 

Emily McTernan  11:50

Interesting. That's an interesting set of developments. Thank you, Jeff, for that discussion. 

 

I wonder if we can get a bit deeper into these developments by thinking about some of the political philosophy and ethics behind them – which, of course, is what it's wonderful to have you here as our expert for. 

 

I wonder if you could start by helping us clarify these definitions a little bit. So we've talked a little bit about disinformation and misinformation. Could you tell us about the difference between those two?

 

Jeffrey Howard  12:14

Absolutely. So generally, in the literature and among policy experts in the space, people distinguish between disinformation – which is defined as the deliberate circulation of falsehoods, where the author knows that the content is false, and so there seems to be some intention to deceive, to dupe audiences into believing that the content is true – from misinformation, where the author isn't necessarily aware that the content is false. 

 

So someone who innocently or honestly circulates a falsehood about COVID vaccines not working, believing that it is true – that's an example of misinformation. But a nefarious actor who peddles the exact same content with either the aim of deceiving the audience or at least confusing the audience, perhaps even with the aim of bringing harm upon the audience – that would be considered a case of disinformation. 

 

And what's interesting is that there's a lot of philosophical literature on lying and the wrongness of lying. And depending how you exactly cash out the definition of disinformation, it just looks like a form of lying, and a lot of people think that lying is at least prima facie unprotected under a free speech principle. You might think that what we value about free communication is people honestly sharing what they think with one another, and that lying is somehow incompatible with that purpose – Seana Shiffrin has convincingly, to my mind, defended that claim in her work. Where it gets really tricky, though, in the free speech space is misinformation, because we're talking about empirical falsehoods that are circulated by people who think that they're genuinely true.

 

Emily McTernan  13:49

Interesting. So you're saying that this distinction between misinformation and disinformation is incredibly important because only disinformation falls outside the realm of potentially protected free speech. So if it's deliberately wrong, then free speech shouldn't protect it at all. 

 

So that seems quite [inaudible] to me. I mean, you might say, shouldn't we be protecting the person who deliberately plays devil's advocate – who wants to be in the debate and just open it up to other spaces; that they don't really believe in these views, but they think it's really important these views get on the table? Do we really want to say that's not protected under free speech?

 

Jeffrey Howard  14:23

So I think you have a really good point, which is that it would be completely implausible to say that communicating falsehoods, as such, lacks free speech value. And the real insight here, I think – and this cuts across disinformation and misinformation – is that the history of scientific inquiry, public conversation of all kinds, is a history of people constantly making mistakes about what is true and what is false. And so we all have a fundamental interest in being part of an ongoing cooperative inquiry, social conversation – there are lots of ways to describe it – in which we are working together to decide what is true and what is false. So to say that at the outset, we're going to presuppose that we've figured it all out, and we're going to make a list of all the true stuff and a list of all the false stuff, and say we're going to ban the list of all the false stuff – that's not going to be compatible with this vision of free speech as an ongoing cooperative inquiry. So if there is a duty to refrain from communicating these kinds of falsehoods, I think it's going to be a very specific duty that refers to only a subset of these falsehoods. 

 

So in my work, I think about what I call 'dangerous misinformation'. And here I'm thinking about clear empirical falsehoods whose communication poses a serious and foreseeable risk of causing harm. Okay? And by talking about clear empirical falsehoods, I'm trying to narrow our understanding to the subset of falsehoods where we have a really, really high confidence that the content is false. And further, we have a really, really high confidence that the content in virtue of its falsehood creates a real danger to others, either inspiring people to harm themselves or inspiring people to harm others. 

 

And so I'm happy to give some examples in that space. So if you look at horrific episodes of ethnic cleansing and civil conflict in recent years, there's been a lot of talk about incitement and hate speech in that space. But often the most pernicious language comes in the form of misinformation – falsehoods about the groups in question. So if you think about the conflict in Myanmar, claims that the Rohingya themselves perpetrated heinous crimes that, of course, they didn't perpetrate, were absolutely central to the propaganda mobilising violence against them. 

 

And I view this as a kind of misinformation or disinformation that we might call 'threat fabrication', where a threat that doesn't actually exist is concocted up either malevolently – and we have reason to believe in that case, members of the Myanmar military were genuinely engaging in disinformation – or by those who negligently pass it on. So in that case, it's misinformation – people who are gullibly duped into thinking it's true, but it's really not. 

 

But of course, it doesn't need to be a matter of threat fabrication. There are also cases in which this misinformation takes the form of what I call 'threat denial'. So if you think of cases where they say, 'well, climate change isn't really happening', or 'if it is happening, there's nothing we can do about it', or 'it's happening, but it's not so bad' – these are all versions of the thought that climate change doesn't actually constitute a serious threat that is capable of being averted. And I think threat fabrication and threat denial are two of the most important forms of misinformation. 

 

Now I think clearly we have a duty to refrain from actively lying about those kinds of threats. I think that's pretty easy to justify. But I think even in the case of misinformation where we think that it's genuinely true, we have a moral responsibility not to take action, not to communicate. Now, that's not going to persuade us, right, because we think it's true. But it is relevant to what the platforms are thinking about in terms of what kind of speech it's acceptable for them to stop.

 

Emily McTernan  17:56

Great. And well, what about the anti-vaccination people? So people who say, 'don't take your COVID jab', you know, '[the vaccination] has all of these serious health risks' and 'COVID is not so serious anyway'. Does that count as the kind of threat denial that you're interested in? Or is that not quite fitting? And if not, why not?

 

Jeffrey Howard  18:12

Well, I guess it depends on the... I think different kinds of vaccine denial might fall into either bucket. So the view that says that these vaccines are enormously dangerous, that they pose a fundamental risk to you and your children, that would be a kind of threat fabrication. 

 

And it's tricky, right? Because you could imagine it some day coming about that there was a vaccine that was genuinely dangerous, right? Now, you'd hope that these dangers would have been caught in testing, but there will potentially be hard cases. Now I think COVID wasn't an especially hard case at all – there was a huge amount of confidence among the relevant experts that these vaccines didn't pose a danger. And so therefore, I think the platforms certainly are obligated to act against that kind of misinformation. But of course, judgments are fallible, and you can imagine them making mistakes in the future based on different kinds of cases. But I don't think that would be one of those cases.

 

Emily McTernan  19:08

Great. That's very interesting, Jeff, thank you very much for that. I mean, I do wonder slightly whether you're underestimating the strength of that sceptical challenge you offered us earlier, where you noted, of course, the scientific facts may not turn out to be correct. So even things we think we have very clear evidence of, sometimes it turns out that scientists were wrong about that. 

 

Jeffrey Howard  19:25

Yeah, that's right. And I think this helps explain-

 

Emily McTernan  19:27

You can imagine the BSE crisis kind of playing out now, right? So back then there were all these MPs and scientists confidently saying, 'no, it's absolutely fine to eat British beef'. And you might imagine a social media platform removing any suggestions to the contrary, and then the facts on the ground change. 

 

So I wonder if we have to just hope that this sceptical challenge doesn't go that deep, that we can rely on what seems like clear evidence to distinguish between misinformation and appropriate counter-views? Are you optimistic that we can do?

 

Jeffrey Howard  19:59

So one thing to keep in mind is that it's, of course, always possible to make mistakes in this context. But I think we're used to thinking about free speech concerns in the context of the criminal law, and of when it is permissible for the state to criminalise speech. And I do think that there is something lower stakes about thinking about free speech concerns in the content moderation space – it's not zero stakes, but it's lower stakes, because we're not talking about incarcerating people for holding the wrong views about vaccines; we're talking about trying to protect other people from the dangers that, based on our current evidence, we are justified in believing that speech poses to others. 

 

Now, of course, we might make mistakes, and we might have to apologise later if it turns out we've made those mistakes, and of course it's important to try not to make mistakes – and that's why we have to have a really, really high level of confidence before we act against the speech in question – but I think having your post removed is a far cry from being incarcerated. Having fewer people see your speech than otherwise would, or having to communicate it to them through other offline venues, is a cost that is imposed on you to be sure, but it's not nearly as great a cost as in these other contexts. So I think the kind of cost-benefit analysis we need to do to justify speech restrictions just takes on a different form in this kind of context.

 

Emily McTernan  21:17

That's really helpful. 

 

Let's turn to think about the moral duty to tackle this disinformation or misinformation. So on who do you think that falls? Is it falling on the social media platform? Is it the responsibility of the state to ensure there's the right kind of regulation? What's your view? Or is it a moral duty of the individual not to spread disinformation or misinformation?

 

Jeffrey Howard  21:37

So I think that the individual does have a moral duty not to spread this kind of – and let's just say misinformation now as the kind of master category that includes both knowing circulation (so disinformation), but also other cases in which the speaker believes that the falsehood she's communicating is in fact true. 

 

So I think that across these cases of dangerous disinformation, dangerous misinformation, speakers do have a moral duty to refrain from communicating that speech. But I also think that these kinds of digital intermediaries, like the big platforms, also have a responsibility to limit that speech. And I think that responsibility is grounded in a few underlying duties. 

 

And the first, most obvious duty is that these platforms are just in the right place at the right time with the right capacity to protect people from the danger that this kind of speech poses. So if you go back to the case of lies about the Rohingya that spread during the campaign of ethnic cleansing against them in Myanmar, Facebook allegedly did very little to stop the spread of that misinformation. And you might ask, well, why was that wrong? 

 

And I think one obvious reason why it was wrong is that they could have done it, and they had the systems that they could have put in place to do that, and it would have been easy for them to try to help rescue the Rohingya from the dangers to which that speech led. And they chose not to. 

 

And so there's just a fundamental rescue or defence duty that all of us have as agents, whether we're individual agents or corporate agents like companies. And I think that is a duty that speaks in favour of at least some minimal level of content moderation, to try to get the worst of the worst speech – the speech that poses a danger to others – off the platform. 

 

But I think above and beyond that duty there are some other responsibilities too. So this kind of rescue duty gives you an obligation to look out for obviously wrongful speech on your platform and to do something about it when you're made aware of it. It doesn't necessarily ground an obligation to actively police your platform for that speech. And so I think you're going to need something more stringent to explain that more demanding responsibility. 

 

And here, the way I think of it is as a duty not to be complicit in the wrongful speech of the actual people doing the misinforming or the disinforming. So my thought is that platforms have a responsibility to reduce the likelihood that their product – that their space – is going to be abused for wrongful purposes. And that by providing people with a platform on which they can commit various kinds of serious wrongs, and then doing nothing to reduce the likelihood that people do use the platform in that abusive way, you can become actually complicit in the wrongdoing that people then use your platform to cause. And that's why I think people who accused Facebook of some kind of complicity with the crimes committed against the Rohingya have a deep philosophical point here: it isn't just that they failed to rescue them, it was that they were connected to the wrongdoing in a certain way by providing the space in which it occurs.

 

Emily McTernan  24:39

And what do we do if these social media companies don't willingly take up their responsibilities, both of rescue and of avoiding complicity in these sorts of harms?

 

Jeffrey Howard  24:48

Yeah, so I think these are pretty fundamental responsibilities. I mean, in the case of platforming the speech, sometimes the platforms don't just provide a space for it: sometimes their algorithms actively amplify the speech, and that seems to be what's happening in a lot of conflict situations, where really toxic speech gets amplified for the psychological reason that it's more engaging. And therefore the algorithms – the purpose of which is to increase the amount of time people spend on the platform so that there are more eyeballs, so that there is more advertising revenue – mean that more people see this speech. Now, platforms are of course starting to recognise that and are doing work to limit that kind of speech: even if borderline speech doesn't actually constitute a genuine violation, they're still trying to ramp it down. 

 

I think that it's fully appropriate to hold platforms accountable in some way for this kind of speech. I think that the dangers of giving the state powers it might abuse militate against advocating for these kinds of policies in contexts where states are manifestly untrustworthy. So I wouldn't argue that, say, the Russian government should have more power to crack down on social media platforms. 

 

But I do think that within stable democracies it's completely plausible to argue that states have some responsibility for overseeing the content moderation process. And that's why I think the kinds of legislation like the Online Safety Bill, even though I quibble about plenty of the details, are broadly defensible in trying to subject these platforms to some kind of regulatory oversight. Not a tight kind of specification of 'you must remove exactly this kind of speech and we're going to oversee every decision you make', but a broad system that ensures that they're thinking through the harms that their content moderation systems can lead to when they're not operating effectively.

 

Emily McTernan  26:43

So you're thinking in terms of the government fining social media companies if they let these things happen, as opposed to the government coming up with a list of all the things you mustn't permit to be published on your platform? Is that the kind of-

 

Jeffrey Howard  26:54

That's right. And one of my worries about the social media platforms. Sorry, one of my worries about the Online Safety Bill is that it was envisaged at the start as this kind of systemic approach that required the companies to undertake these different risk assessments about the harms that their systems could lead to; agree with the regulator – in this case Ofcom – on a list of things that they were going to do in order to reduce those harms; and then they could be fined if they failed to live up to their promises about what they said they were going to do. 

 

And I think that kind of system is more future-proof, in the sense that the kinds of harms that arise will vary over time, than a system that simply says 'right, here are some existing speech categories that are already criminalised, or that we are criminalising, and your job is to take those off. Oh, and by the way, keep kids away from various kinds of speech that isn't appropriate for them.' I just don't think that that is exactly the right approach here.

 

Now, interestingly, over the past few weeks, there has been discussion about amending the Online Safety Bill to call for prison time for executives at these companies if they fail to keep children safe. And so here, we've heard horror stories about cases in which kids engaged in serious forms of self-harm after being exposed to content promoting that kind of harm on social media. And you might ask, should we stop at fines or should we embrace these more demanding requirements? 

 

And my general thought on that is that there's no in-principle reason why corporate executives should be immune from that level of responsibility. I think it creates huge incentives to make sure that this kind of illicit speech is off the platform. And I think that's broadly a good thing.

 

Emily McTernan  28:29

Great. I wonder if we could turn briefly back to Twitter, with which we began. So you've given us a kind of account of how to think about misinformation, and you're pro the Online Safety Bill. Where is it that you think Twitter is falling short, if it is, at the moment? What is it about the Elon Musk reforms that makes you uncomfortable?

 

Jeffrey Howard  28:49

So Elon Musk has broadly endorsed the idea that First Amendment principles should be the governing principles on Twitter. And he hasn't exactly done this, because the First Amendment – the free speech provision of the US Constitution – as it's been interpreted by the courts over hundreds of years, has various exceptions to free speech. So if you make what's called a 'true threat', where you threaten to harm someone in a way that's credible, or that they have reason to believe is credible, that's unprotected. If you engage in inciting violence, where the incitement is likely to cause imminent harm, that's unprotected. And there are other categories too. And Twitter's content rules, even under Musk, go much further than the First Amendment in allowing for various restrictions on speech. 

 

But I think the basic insight that Musk has is that even when speech is seriously harmful, even when it's seriously dangerous, we might still have reasons to allow it in the space, because that's what freedom of speech means. It means people being allowed to say stuff even when it's genuinely hurtful, even when it's genuinely harmful, because the standard we apply when we're restricting speech is just a way higher standard than the standard we apply when we're just restricting ordinary conduct. And that idea permeates a lot of liberal philosophy of free speech, especially in the American legal tradition. 

 

And it's that broad framework that Elon buys into that I just don't buy into. So it's just unthinkable to suggest in the context of traditional liberal American free speech arguments that dangerous misinformation, as I've defined it, as a whole category falls outside the protection of free speech. Maybe there are specific instances that do, but as a category it certainly doesn't. 

 

And so in advocating this view, I'm starkly departing from the received orthodox liberal wisdom about the limits of free speech. And so my objections to Twitter's content moderation policies are that they don't go far enough in tackling various forms of misinformation, in tackling various forms of hate, and that the platform does need to go back closer to the level it was at before Elon Musk took over, I'm afraid.

 

Emily McTernan  31:06

Thank you, Jeff, for that strident defence of better online content moderation on places like Twitter. We look forward to having you back on the podcast.

 

Next week, we'll hopefully have space for a bit more optimism. We'll be looking at the role of praise in politics. 

 

Remember, to make sure that you don't miss out on that or other future episodes of UCL Uncovering Politics, all you need to do is subscribe. You can do so on Apple, Google Podcasts, or whatever podcast provider you use. And while you're there, we'd love it if you could take a moment of time to rate or review us. 

 

I'm Emily McTernan. This episode was produced by Conor Kelly and Eleanor Kingwell-Banham. Our theme music is written and performed by John Mann. This has been UCL Uncovering Politics. Thank you for listening.