UCL Uncovering Politics

Lies, politicians, and social media: Should we fact check politicians?

Episode Summary

This week we are looking at politicians' speech on social media. Should social media platforms act when the things politicians say are wrong?

Episode Notes

Social media plays a significant role in shaping political debates and, some argue, even influencing election outcomes. Politicians increasingly use platforms like X (formerly Twitter) to communicate directly with the public and run their campaigns. However, this unfiltered communication can sometimes spread misinformation or undermine democratic values.

A prime example is incoming US President Donald Trump, who was famously banned from Twitter for glorifying violence but has since returned to X with Elon Musk at the helm. This raises critical questions.

To explore these issues, we’re joined by Jeff Howard, a professor in this department and the Director of the Digital Speech Lab. Jeff co-authored a recent paper that dives deep into the responsibilities of social media companies when it comes to regulating political speech.
 

Mentioned in this episode:

Should Politicians Be Exempt From Fact Checking? (Journal of Online Trust and Safety)

Episode Transcription

Emily McTernan: [00:00:00] Hello, this is UCL Uncovering Politics. This week we're looking at politicians' speech on social media. What should we do when the things they say are false?

Hello, my name is Emily McTernan and welcome to UCL Uncovering Politics, the podcast of the School of Public Policy and Department of Political Science at University College London.

Social media has had a substantial impact on political debate and even, some say, on the outcomes of elections. Politicians themselves make use of social media platforms to talk to the public and to campaign. Yet in some cases what they post isn't always true, and sometimes it looks damaging to democracy.

President Trump famously was once banned from Twitter for glorifying violence, but now he's back on X with Elon Musk in his corner. So how ought social media companies be responding to politicians' more inflammatory, and often false, claims? Unlike offline media, social media companies often claim to be just platforms, rather than publishers responsible for what they put out into the world.

But given the effects of social media, can they abdicate their [00:01:00] responsibility in this way? And yet, if they start challenging politicians' speech, aren't they overstepping their role in a democracy? A recent paper tackles exactly this issue, and I'm delighted to be joined by one of its authors, Jeff Howard, a professor in this department and director of the Digital Speech Lab here at UCL.

Welcome to the podcast again. 

Jeff Howard: Thanks for having me, Emily. It's really good to be here. 

Emily McTernan: Let's start with some examples. What's the most outrageous thing a politician has said on social media? And what are the kinds of cases that worry you? 

Jeff Howard: Well, I think there is a pretty long list of outrageous things that politicians have said on social media.

And this has led the platforms to generate a series of policies around misinformation. So take Meta as our starting point. Meta is the parent company of Facebook, Instagram and Threads; people listening probably have accounts on some of these platforms. And Meta has put in place a series of policies around misinformation, understood as demonstrably false or misleading content.

And I think we can break these policies into two big [00:02:00] groups. So one batch of misinformation is really dangerous stuff. So Meta understands this as the kind of misinformation that might risk physical harm or violence, um, various kinds of harmful health misinformation, like claiming that taking a particular vaccine will, um, seriously harm you.

Um, they also include voter or census interference and the undermining of elections in this batch of misinformation. And so for this really serious stuff, the remedy is to remove it. So if you post this kind of content to Facebook or Instagram, um, claiming that a vaccine is going to kill you or, um, you know, perhaps spreading a lie that's linked to real world violence about some group.

So we know during the Rohingya genocide in Myanmar, there were falsehoods peddled about Rohingya that were instrumental in mobilizing violence against them, falsehoods in their case about crimes that they had allegedly committed that they hadn't committed. If you post this kind of stuff, you're going to get it taken down.

That's not really what this paper is about. It's about [00:03:00] a second batch of misinformation. So this is what I call problematic but permitted misinformation. This is stuff that is demonstrably false or misleading, um, but it's not gonna risk any kind of imminent harm to individuals or elections.

And for this stuff, what the platform does is that it allows the person to post it, but it will often post a fact checking label that indicates that an independent third party fact checker has, um, decided that the content in question is false or misleading. And so the platform is labelling the post so that people who see it get that added context.

And that's really what this paper is about. 

Emily McTernan: So problematic but permitted content. It's not going to harm, but a third party's deeming it. Is there any worry that you have that this is limiting the appropriate scope of inquiry? So lots of people have said that scientific claims, political claims, they're the kinds of claims that should be open to challenge.

So the [00:04:00] mainstream might be wrong. The scientists could be wrong. This vaccine may do harm, some of them have done harm. This medicine may be dangerous. Even if mostly those claims are true. Are you worried at all that this is a way of closing off inquiry in a way that can't really be justified, and that certainly can't justify social media companies being in charge of choosing its limits?

Jeff Howard: I think we have to be highly alert to this concern. I think it militates in favor of only labeling misinformation as false or misleading when the relevant community of experts is demonstrating a pretty high degree of confidence that it is in fact false. So what Meta does is it appeals to nonpartisan third party fact checking organizations who've been certified under the International Fact-Checking Network.

And so, if there's ongoing reasonable controversy or debate within a community about the, uh, veracity of a particular claim, about the [00:05:00] accuracy of a particular claim, that kind of speech, um, is generally allowed, and when it's not allowed, that's a problem, and it should be allowed. I think we want to make sure that there's a wide range of legitimate opinion on empirical claims that can be aired.

But when, for example, there is, um, an overwhelming consensus among relevant experts, for example, that climate change is happening, I think it is defensible to label the posts that deny climate change as false or misleading. And by doing so we are enabling people to make up their own minds, so that they can think about it and so that they aren't just duped by people spreading harmful falsehoods.

Emily McTernan: But do you not think that you're taking over people's capacity to rationally assess it? So I'm thinking, what if there is some climate change semi-skeptic, who in fact is a scientist who works at a university, who just thinks perhaps that it's less human-created than we're assuming, or that the consequences will be less devastating than the vast majority of people think that it's going to be?[00:06:00]

Are you saying it's acceptable for those people, every time they try and post and have a debate with people in their fields, to have a little label that says, warning, this is false?

Jeff Howard: I think it is acceptable. Uh, I think that one of the arguments that Meta raises, uh, about why it exempts politicians' speech from this fact checking system, and I know we'll go on to talk about that, is that it's really important for people's free expression interests to be satisfied on social media. To put the point another way, it's really important that people be able to express themselves, express their dearly held beliefs and opinions. Um, I think it's crucial, on our view, that fact checking isn't incompatible with that. In fact, it's the platform participating in the discussion by offering its own, um, point of view that the content in question, um, has been deemed false. And so in a way, the fact check label itself is Meta's own counter speech, or, if you like, drawing attention to the counter speech of other relevant experts, so that the people who see the posts can then see it in [00:07:00] context and make up their own minds.

If the policy was to remove all of this speech, simply on the grounds that it was false, then it would raise this worry, which Zuckerberg, Mark Zuckerberg, the head of Meta, raised years ago, which is that Meta would be becoming a kind of arbiter of truth, where it decided what kind of speech is true, what kind of speech is false, and it deleted all the false speech from the platform.

That would go way too far in the other direction. Um, so I think a policy like this strikes the balance. 

Emily McTernan: Let's get back to politicians. So politicians are sometimes the ones posting this false information. Did we see some of that in the last election? 

Jeff Howard: In the most recent, uh, American election, um, between Donald Trump and Kamala Harris, I think, uh, we may well have seen examples of, uh, false information posted by, um, incoming President Trump.

One of the striking features of Meta's system is that it doesn't label politicians' falsehoods as false. [00:08:00] Uh, it only applies that system to ordinary users who are not themselves candidates or current holders of elected office. Um, so if Trump said things that were false that were posted to his platforms or his pages on Facebook and Instagram during the election, users on Meta would not have had that drawn to their attention, because he's a politician.

And that's the rule that we really take issue with.

Emily McTernan: I'm being very coy about whether Trump did or didn't post false information there. 

Jeff Howard: Well, I think there's a, there's a strong record. There's a clear record of Trump having spread misinformation in the past about elections, about public health concerns.

Um, it seems very clear that if, uh, President Trump had lost this election, he would have started spreading all manner of misinformation about the outcome of that election. Um, that's certainly what he did last time he lost the election. And it is striking that social media platforms have been gearing up, um, over the past [00:09:00] year, really, to prepare for a fire hose of misinformation put out by Trump and his surrogates in the event that he lost. Now it turned out he didn't lose, he won. Uh, and so that particular problem has not arisen. There are other problems we now have to deal with, but it is a striking fact that misinformation that was peddled by Trump during the campaign would not have been labeled as false, um, through the fact checking protocols, because of this rule that Meta has that says, if you're a politician, we're not going to fact check it.

Emily McTernan: So what motivates that rule? Why might some people think politicians are different to you and me? We shouldn't have social media companies in the business of deleting or even putting these fact checking notes under their speech.

Jeff Howard: There's a question of what might justify the position, and then there's a question of what might actually motivate platforms to have it. I think the motivational issue is one of realpolitik. Platforms just don't want to be on the bad side of politicians.

That's especially true in countries where politicians have some kind of authoritarian inclination. They're [00:10:00] worried about being subjected to regulations that might have nothing to do with this issue. But there are other areas in which Big Tech is under scrutiny for antitrust reasons, um, there are questions about genuinely illegal speech and whether platforms should be at all liable for that speech.

Currently platforms in many countries enjoy some kind of immunity from being directly held responsible as if they were speakers. There's all these implicit threats that, um, Republicans are gonna attack big tech on these points. And so I think tech companies are worried about Republicans coming after them.

Trump's incoming chair of the FCC has called, um, the big tech companies a 'censorship cartel', and so I think they're nervous, and I think their grounds for nervousness have only intensified thanks to Trump's re-election. I think that's what's motivating it, but of course they don't say that.

Emily McTernan: They offer more principled reasons. 

Jeff Howard: They offer more principled reasons, and that's what allows us to get our hooks in, as it were, as political theorists trying to grapple with the moral reasons that they offer in defense of their view. And they give four [00:11:00] reasons why they think fact checking systems shouldn't apply to politicians' speech, and what we do in the paper is we just quickly tick through each of the four and find each of them actually pretty seriously wanting, um, as a not particularly persuasive basis for exempting, uh, politicians.

And the four reasons are freedom of expression, the importance of the democratic process, the level of scrutiny that politicians' speech ought to enjoy, and the final one is newsworthiness. Happy to talk about them if you want, Emily.

Emily McTernan: Let's talk a bit about them. So which do you think would be the strongest?

Do you think they have any case to answer? Because presumably freedom of expression, well, that just applies to everyone's speech. So if you're going to do it for anyone, then there's no reason politicians should be special. Or is it one of the others that you think is more weighty?

Jeff Howard: I think that's right.

And we've already briefly addressed the free expression concern by observing that fact checks don't seem to limit freedom of expression. It looks like fact checks are just themselves an exercise of free expression by the fact checkers and the platforms. In fact, you could imagine a system that didn't even apply a fact check [00:12:00] label.

It just algorithmically sorted or curated content such that after, uh, an instance of problematic misinformation, you just saw another user's contrary perspective. So it's actually not even essential to the practice that you have these labels affixed to the post. There's a way to, um, create the relevant counter speech, um, through other means.

So free expression doesn't seem like a particularly promising basis. Democracy, that does seem like a potentially promising basis here. Citizens, of course, have a powerful interest in finding out what politicians have to say and hearing what their points of view are. I think that's a really, really fundamental principle, um, in a democracy.

Emily McTernan: But I don't think we saw that motivation when Trump was kicked off Twitter. Now, that might have been for good reasons. But, in fact, there's something very strange about kicking off a president or former president who the people have decided should have that level of authority in a democracy, and rendering them unable to speak to the public on the platform.

There is something very anti-democratic [00:13:00] feeling about that.

Jeff Howard: That's absolutely right. I wrote a Washington Post op-ed in which I argued that it was a mistake to kick Trump off Twitter for that reason. And I got such enormous blowback from progressive and liberal friends of mine, who I think rightly, um, excoriate Trump as a threat to basic values, but I think they wanted to silence Trump because they didn't want his supporters to hear what he had to say. And I think in a democracy, you're not allowed to make that judgment. You're allowed to take action against particular pieces of content that violate the rules. I think that's really important. But I think certainly permanent bans of politicians are very, very difficult to justify.

However, I don't think it follows from that that you shouldn't fact check their speech. These politicians have positions of enormous authority. One of the central tenets of the philosophical literature on harmful speech is that the magnitude of harm that speech can cause might depend on the authority of the speaker.

We know that when hate speech is communicated by an authoritative figure, it can do more damage. When an authoritative figure incites violence, it's more likely to actually mobilize [00:14:00] someone into action. Similarly, if the concern with misinformation is that it's going to lead people to do things that harm themselves or that harm others, or that otherwise erode constructive debate on a particular topic, it stands to reason that politicians' misinformation will be especially capable of causing harm.

And so I think it's really important that, if platforms have duties to prevent harm, and I've argued elsewhere that they do, they take a stand against politicians' misinformation. And actually, if the value here is that of informing and empowering democratic citizens so they can make up their own minds, then providing more information serves that very value.

Emily McTernan: And then that just leaves two more things that Meta have said in their defense. So one is something like: politicians are already under huge scrutiny, so there's no particular need for the social media platform to add to the many, many voices saying that what person X, let's call them Trump, has said is false.

Um, and then the newsworthiness: we want to know their false stuff anyway, so just pop it up on the platform. Does that have [00:15:00] any weight, or do those just seem far too weak?

Jeff Howard: Well, there's something particularly galling, I think, about social media platforms, which have disrupted the traditional business model of media, saying, hey, we don't need to point out the lies of politicians.

Other journalists will do that. Well, people aren't reading or watching the other journalists, thanks to the advent of social media. Um, the business model of traditional media has come under real pressure because people just aren't taking out subscriptions in the same way as before, they're not buying newspapers and traditional magazines in the same way they were before.

They're seeing the content online, they may not even be clicking through to it. And so I think it really is a dodge for social media platforms to say, no problem, everyone's going to be reading the New York Times, they can find out about politicians' lies there. No, I think the fact that social media is now playing such a profound and pervasive role in our media ecosystem means that they do have, not the full gamut of journalistic responsibilities that bona fide journalists have, but a responsibility to pay attention to where [00:16:00] speech can cause harm and take action to mitigate that harm.

And so, I don't find this a persuasive one at all. As for the other one, which has to do with newsworthiness, I completely buy the idea that people have an interest in finding out what their politicians have to say. And therefore, we want to allow speech by politicians out there, even if the speech has some risk of inspiring harm.

In fact, Meta has a separate policy basically saying that when politicians engage in speech that is newsworthy but violates their policies, they sometimes don't enforce their policies, just because they want people to learn what politicians have said. And so we can see this as something of an extension of that policy.

I think, again, fact checking politicians' speech is compatible with citizens seeing it and deciding whether they want, um, to go along with it, whether they agree with it or whether they disagree with it. There's a wrinkle here that I think [00:17:00] is worth calling attention to. So under Meta's current policy, if it labels a post as false or misleading, and puts a link to the third party fact checker,

it also then algorithmically demotes it in the feed so that people are less likely to encounter it. Um, so take climate misinformation. It's not just that those posts are labeled, it's that they then receive reduced dissemination in the feed so that fewer people see them. I think that can be justified. But if you really believe in this newsworthiness idea, that it's maximally important for people to see what their politicians say,

well, then you can just separate these two policies. You can say, we're going to fact check, but we're not going to demote. Um, I think they should demote, for the harm prevention reasons I mentioned before. Politicians' speech has a particular propensity to go viral and be seen by lots of people. So I don't think it is a serious objection, um, to reduce its dissemination when it causes harm, but someone who wanted to press that argument could press it without, um, [00:18:00] disagreeing with the fact checking policy, which I think we should stand behind.

Emily McTernan: So that's a convincing case against Meta's actual stated justifications, if not the real underlying motivation. I wonder if fact checking is enough, given the kinds of harms you've been describing and the kinds of cases we're thinking about.

So you might think that even if everything a politician like Trump says that is not true has a label underneath it saying this is a false claim, that isn't really going to undermine people's perception that it is a true claim. His supporters are likely to read it, dismiss the fact checking notes, and just believe it.

Jeff Howard: So there is a worry in the empirical social science literature about the efficacy of fact-checking labels, um, whether they actually succeed in accomplishing their objective, which just raises the question of what's their objective, what's the actual purpose of these labels? So on one view, the purpose of these labels would be to give the truth a fighting chance.

So even if it's not guaranteed that people will be swayed, um, some people might be swayed. There's a, um, [00:19:00] wonderful passage in On Liberty where John Stuart Mill is talking about the value of seeing two interlocutors really going at it in fierce debate. And he mentions that it's not for the benefit of the convinced partisan that we want to allow this free exchange of ideas to carry forth.

It's for the benefit of the undecided person, the person who hasn't really been paying attention, hasn't made up their own mind, maybe has heard some thoughts about climate change being false, but also heard people in their family or community questioning those thoughts, and they want to get to the bottom of it themselves.

I think for that audience, fact checking stands some important chance of making a positive difference. I also think that if the duty of platforms is a duty not to be complicit in spreading this harmful misinformation, it looks like sticking their neck out and saying, hey, what this guy's saying is false, might be enough to discharge that particular duty, even if people don't believe it in the long run, or some people don't believe it in the long run.

Emily McTernan: I guess there's a worry from the other side, which is that fact checking [00:20:00] is failing to be politically neutral in some crucial way. So, listeners may have noticed that when we're thinking about cases, we are predominantly thinking of right wing politicians. So is there a worry that someone on the right could reasonably have that, even though these fact checking organizations claim to be independent, actually there's a kind of left wing bias built into this, and built into the ways that we're thinking about it?

So we think that right wing politicians are posting things that aren't true, and fact checking is neutral. But would someone on the other side of the political spectrum have quite a different view of it?

Jeff Howard: I think it's really important that we, um, condemn misinformation, whether it's peddled by people on the right or on the left.

Now in this particular political moment in the United States, um, I'm confident that more misinformation is coming from the right than is coming from the left. But for that reason, it's important to be vigilant when there is misinformation coming from the left, so that it is condemned. Because otherwise it will lead to the appearance of bias.

The appearance of bias, however, is not the same [00:21:00] as bias. And if it just so happens that people from a particular political group tend to breach justified rules with greater frequency than people from another political group, the remedy cannot be to decline to enforce those rules with the requisite vigor against that group in order to maintain the appearance of neutrality.

Because then the effect would be that, in fact, you're enforcing them with greater vigor against the other group. And so I think you just have to come up with a justified set of rules and then enforce them even handedly. And if it turns out that people in a particular political movement are breaking those rules with greater frequency, so be it.

We just have to hold the line and defend it. Um, I am alert to this concern. The worry that social media has been biased against the right is a familiar talking point. It's no accident that the richest person in the world, who now has the ear of the most powerful person in the world, and, by the way, owns the most important social media platform in the world, um, keeps [00:22:00] hammering on this point.

So I understand why people are alert to it. I think that's all the more reason that we should defend efforts like the Digital Services Act in the EU to insist that platforms be even more transparent with their internal processes, with their systems, with how their rules are enforced, both the rules on removing content and the rules about how content that's allowed is algorithmically curated and presented to users, to shore up that public confidence in even handedness.

Emily McTernan: I suspect that the person on the right is still going to worry. So I suspect they're going to worry that these independent fact checkers that you're referring to are filled with people who have been encouraged to think a certain way by universities.

So I take it that's another part of a kind of right left divide at the moment, the attitude towards the university. The university is treated as a kind of left wing place. Professors are assumed to be all kind of inculcated in this kind of groupthink. And I think, while there's lots to say against that, there is some truth to the claim that professors tend to think along similar lines, that there isn't a lot of political diversity in some kinds of departments.[00:23:00]

Does that have any bite for you? 

Jeff Howard: I think it's a huge problem. I think the lack of viewpoint diversity in universities is seriously failing to prepare students for a pluralist democracy where they're going to encounter people with political views that they believe, perhaps rightly, are deeply objectionable.

Um, and so I think it's all the more important that universities be places where, um, people are prepared for the realities of living in a difficult pluralist democracy where people have real disagreements with one another. Um, I think the crisis of faith in mainstream institutions that we've seen over the last few years is real.

People are losing faith in traditional media, losing faith in the universities, trust indicators are down across all of these categories, people are losing faith in politicians, and social media companies are just caught in the middle of this crisis. Um, and so what we're seeing in our reaction to them is as much a symptom of our wider [00:24:00] pathology as it is anything, and social media platforms are just hostage to it.

They're not going to fix it, and they can't opt out of it. 

Emily McTernan: The depth of the issues that we started to touch on, I think, reveals the answer to the question I'm going to press on you now, which is to turn from the content of the paper, which I think we've covered, um, extensively, to think a bit about the method that you're using here, because sometimes on the podcast we like to talk a bit about method.

So, political philosophy usually goes straight for the big questions. And this paper very much starts from a very particular policy of a particular social media platform: that Meta doesn't fact check politicians' speech in the way that it does everyone else's. How are you thinking about this? Why is political philosophy so useful for getting to grips with these policy questions?

What do these policy questions tell us about political philosophy? 

Jeff Howard: One could certainly write a big paper about whether politicians' speech merits a different level of treatment as part of one's general theory of freedom of expression. Um, that would be a paper I'd be very keen to write at some [00:25:00] point. Um, and this wouldn't be the only applied area where that comes up.

So if you look at defamation law in the United States, the standard for allowing a defamation claim to succeed when the target is a politician is different on the grounds that the Supreme Court has held that people need to be able to have really robust, vigorous debate about what politicians are doing, what their behavior is.

And in the course of that, um, robust debate, people will make factual errors about what politicians have done. And if politicians are able to sue for defamation the moment something false and potentially reputationally damaging about them is said, that will chill the overall debate. So that's one totally different context in which we think a different rule should apply to politicians' speech.

So the rule in the US context, for those who care, uh, is that, um, a defamation claim against a politician can be successful only if there's what's called actual malice. So essentially the speaker knew that the content was false, um, [00:26:00] or at least they were reckless, they were aware that it might be false and they pressed ahead anyway.

Um, there are other cases that have to do with, um, privacy torts and infliction of emotional distress torts, where we think it's especially important to be able to vigorously criticize politicians, and speak about them and to them in ways that we might think would be unacceptable about ordinary persons.

So I do think, stepping back, there's loads of really interesting questions to think about when it comes to whether politicians' speech should be treated differently than ordinary citizens' speech. Um, in this particular context, though, we just don't think that the argument for differential treatment, uh, can be justified.

Now, you raised this question about the granularity and specificity of the paper's focus, because it really does lean in to this very specific question of platforms' policies on fact checking politicians. And platforms like TikTok and X, um, don't have an exemption for politicians. Politicians' speech [00:27:00] is subjected to the same fact checking rules as others', at least in theory.

Whether X actually does this in practice is a totally separate question, and we think that approach is more defensible than Meta's approach. Will Meta listen to us on this particular point? I will say that when it comes to a lot of content moderation questions, like how you exactly define what counts as a threat, what counts as a veiled or implicit threat versus an explicit threat, my sense is that how you toggle the dial on answering a question like that doesn't make a massive difference to these massive companies' bottom lines.

And so the trust and safety teams within the companies actually really do have the latitude to try to get to the bottom of what they think the right policy is, usually engaging a wide array of external stakeholders and experts in so doing, and then implementing that policy. Um, the reason that I raise a question about fact checking is because it is so politically fraught.

Uh, and so, if there were ever an issue where I'd be less hopeful that social media platforms will listen to what we're saying, [00:28:00] um, it would be this one. Um, but that's not going to stop us from making the case. I think in a project like mine, where we really are focused tightly on the responsibilities of social media companies, our aim is to do this kind of philosophy that helps us think through what those duties are, and then communicate it to receptive audiences within the companies.

And then, if they agree with us, it's a battle for them within the company's politics to try to get it through. Our job is to just give them the most persuasive arguments we can.

Emily McTernan: Thank you, Jeff, for coming on the podcast and for that wide ranging discussion about fact checking, social media, the role of politicians, free speech, and what it is that political theorists can offer to these social media platforms.

We've been discussing the paper, Should Politicians Be Exempt From Fact Checking? It's recently published in the Journal of Online Trust and Safety, on which Jeff is one of four co-authors. Full details, as ever, are available in the show notes. Next week we will be discussing memories, political violence, and Ukraine.

Remember, to make sure you don't miss out on that or other future [00:29:00] episodes of UCL Uncovering Politics, all you need to do is subscribe. You can do so on Apple, Google Podcasts, or whatever podcast provider you use. And while you're there, we'd love it if you could take a moment of your time to rate or review us, too.

I'm Emily McTernan. This episode was produced by Eleanor Kingwell Banham. Our theme music is written and performed by John Mann. This has been UCL Uncovering Politics. Thank you for listening.