How is misinformation about the war in Ukraine spreading?

Wednesday 20th April 2022, 12.30pm

Misinformation about the war in Ukraine – and countless other topics, such as the pandemic and climate change – spreads like wildfire online. It aims to confuse people, make them question their own knowledge, and ultimately raise suspicion and doubt. But how exactly does this misinformation spread, why is it so prevalent on social media, and what are the various platforms doing to help prevent it? We chat to Dr Aliaksandr Herasimenka from the Oxford Internet Institute to find out.


(Music)

Emily Elias: As the war in Ukraine continues, a campaign of misinformation has been unfolding online, on the very social media platforms people turn to to find out what’s going on. On this episode of the Oxford Sparks Big Questions podcast, we’re asking: how is misinformation about the war in Ukraine spreading?

Hello. I’m Emily Elias and this is the show where we seek out the brightest minds at the University of Oxford and we ask them the big questions. For this one, we found a researcher who has been keenly watching how misinformation travels.

Aliaksandr Herasimenka: My name is Aliaksandr Herasimenka and I’m a researcher at the Oxford Internet Institute, where I study how misinformation and disinformation spread online and across social media, why this happens, and what the consequences of this process are.

Emily: So, looking at the war in Ukraine, I guess you’re uniquely positioned to look at this because you understand the language that all these messages are coming out in.

Aliaksandr: Yes. One of the key focuses of my work is Eastern Europe: Ukraine, Russia, Belarus, countries I’ve been studying for more than 10 years now. Before this, I worked as a journalist there, and I also worked for NGOs and [unclear speech 0:01:29] organisations.

Emily: What tactics have you seen being used in the area of misinformation?

Aliaksandr: Across this conflict, there are so many different ways people try to spread misinformation online, and it has only been growing over the last months. There are, of course, cases where state organisations engage in spreading misinformation. This is what we hear about very often: an organised, coordinated campaign happening online where, for instance, the Russian state tries to spread a pro-war message, pro-war propaganda. It’s very common these days.

There are also quite a lot of non-state organisations involved in these processes, and sometimes individuals too. They rely on familiar human-supported ways of spreading misinformation or, very often, on automated, machine-supported ways, where entities like bots spread misinformation to reach as big an audience as possible.

Disinformation also takes different forms when it comes to the type of message: it can be textual, it can be video, and there are other types as well. What is common across all the types of misinformation we observe is that it almost always contains real facts, mixed in some proportion with conspiracies and lies. Those lies are very often not the biggest part of a message.

Very often a message contains a lot of information that is true, and then the people who intend to spread lies just add a few misleading or confusing “facts”. This happens with one key goal, one primary aim: to confuse people, to make them question every bit of information they receive and, in that way, to somehow distort and disrupt, or to raise suspicion in society. That has been one of the key goals of Russian state-backed propaganda spread across the world for many, many years now.

Emily: When it comes to social media, it’s all about the algorithm. How are people using what they know about these algorithms to spread misinformation?

Aliaksandr: Yes, indeed, platform algorithms are very important for one key reason: social media platforms tweak the algorithms they use in order to increase the virality of messages. They are interested in larger, more intensive engagement with the information being spread on social media.

They are interested in increasing virality because they turn that virality into money, into profit. This is what misinformation actors abuse. Those people and organisations know how the algorithms function, they know the platforms’ greed, in some ways, and they essentially exploit this interest, this quest for virality.

Emily: That’s what they’re doing on their side. How, then, are the platforms compensating for this in their algorithms to counter misinformation?

Aliaksandr: Yes, the platforms have been facing the question of how to address this abuse of virality for some time as well. They’ve been trying to find policies and to implement many different approaches: attempts to moderate content, to tweak their own algorithms, to de-platform or remove users who abuse these virality features, and so on.

There are so many different approaches they’ve been trying, and the most recent trend we observe when we analyse how platforms react to the war in Ukraine is, essentially, censorship. We’ve seen Russian state-backed media removed or blocked in certain regions on the main platforms.

Some platforms now restrict individual users on almost a weekly basis if they find that they spread misinformation, especially pro-war misinformation. Accounts of prominent Russian media personalities previously found to spread propaganda have been blocked on the most popular platforms, like YouTube.

YouTube is the most popular platform in many countries, including Russia. TikTok, which for the last couple of years has been one of the fastest-growing platforms, is not just blocking pro-war content; it has blocked all users in Russia from uploading content to the platform.

This is quite an unusual step, in fact, but it shows the scale of the attempts to somehow prevent pro-war propaganda, or sometimes just to totally disengage and remove it. Although TikTok’s move was unusual, the more common approach is simply moderation, involving people or specific entities; these are the most common ways of addressing this problem.

Emily: How does that moderation work? Is it a human going through all of these posts and determining what is and isn’t misinformation, or is it a machine?

Aliaksandr: It’s a combination of things. Sometimes it’s a machine, an algorithm programmed in some way to detect the most common cases of misinformation, and sometimes we’re talking about humans, but it also depends on what type of content we’re discussing. It’s relatively easier to moderate textual data, and sometimes images, than, say, videos, and it’s especially difficult to moderate content that comes in languages other than English, particularly languages the systems have not been programmed for as extensively.

For instance, Ukrainian is, of course, less studied in that regard than English or Russian. Language is one challenge; another challenge, of course, is how to distinguish the content. Very often the users help, and platforms often rely on user feedback. Most platforms have buttons in their menus that allow people to flag that a piece of content might be misinformation or might be abusive, and that’s when platforms act.

Emily: Has it been working?

Aliaksandr: It’s always the case that, partly, this venture cannot be successful by definition, because it is difficult to moderate all the content and find every bit of misinformation. For instance, when we studied COVID-19-related misinformation just as the pandemic started, we found that YouTube, as I mentioned one of the most popular platforms across the world, was quite slow in removing misinformation. It took several weeks, and sometimes months, to remove conspiracy videos, for instance.

It takes time for them to moderate, but I think the most viral pieces of misinformation can now be identified quite quickly and removed, especially if they attract a lot of attention. If a piece attracts less attention, it’s more likely to stay unnoticed for some time.

I think it’s never possible to totally censor and remove all unwanted content. I think we must, in fact, learn how to live with this phenomenon online, just as we learned how to live with the conspiracies that circulate offline. That’s why we should also discuss ways to improve the resilience of users when they face misinformation: how to improve their analytical skills, how to improve their ability to detect misinformation quickly, and how to help them navigate the online environment safely.

It’s also the job of the platforms and of the people who design the algorithms, and, of course, we should recognise that the current model of how platforms operate unfortunately produces the consequences we’re discussing. Essentially, the platforms’ greed, their focus on profit and profit alone, is a direct cause of the current eruption of digital misinformation.

Emily: What about a platform like Telegram, which doesn’t have an algorithm built into it in the same way that Twitter or Facebook do, delivering information based on likes and virality?

Aliaksandr: Telegram, which is a messaging platform but also a social media platform, is perhaps the key example here. Telegram has become one of the key battlegrounds for misinformation around the current war in Ukraine, especially when it comes to content in Russian and in Ukrainian.

Telegram has been used to spread information about possible attacks on Ukrainian cities, to collect evidence of war crimes and atrocities, and to help people evacuate and escape, but it is also used by the military and by politically minded people who want to know what’s going on right now. It’s well designed to spread messages fast to a large network of people.

Telegram is popular, and that’s why it’s been targeted. It’s not surprising: what we see is that even if a platform does not have virality as its key element, it still becomes a target for misinformation campaigns, and the key reason for this is very simple. It’s popular, and it’s especially popular in those countries.

Emily: If you get a message on Telegram, how do you know that it is a trusted message? What signals either, “Yes, you can trust this”, or, “No, this is propaganda or misinformation”?

Aliaksandr: On Telegram, essentially, everything is invisible. Everything happens somewhere that isn’t public until you know where to search. People learn how to use Telegram and where to find trusted sources, and once they have learned how to do this, they stick to those sources. That’s essentially how the media system functioned before the internet.

Journalists played this very important gatekeeping role of selecting what kinds of information spread across a society, and then the internet crashed in and essentially destroyed this gatekeeping function. Now some platforms, like Telegram, have in some way perhaps found a way to rebuild that gatekeeping: people select their sources and they stick to them. If people happen to be interested in conspiracy channels or conspiracy sources, well, they stick to those.

That’s what happens, of course, with the sensational press, for instance. Conspiracies can spread across the offline world as well, as we know. But if people learn how to select trusted sources, and which sources those are, they can stick to them, and Telegram helps those who know how to select sources to keep following trusted users and trusted information.

Emily: As you’ve been watching all of this unfold over the past few weeks, and you’ve seen different platforms tackling it in different ways, it does ultimately come down to the people themselves and how they interact with the message. What tools do you think people need as they look at information about the conflict coming to their phones?

Aliaksandr: I believe we must have a healthier information environment, where platforms do not try to simply profit from everything that comes their way, from every bit of information. In other words, people should have access to a healthier platform ecology.

While that’s not there yet, though hopefully it’s emerging, people should also develop their analytical skills and learn how exactly to distinguish a trusted source from a possibly misleading one. People should be equipped with basic critical and analytical skills, but also with opportunities to report the most outrageous and most misleading content, and platforms should allow this.

Emily: This podcast was brought to you by Oxford Sparks from the University of Oxford, with music by John Lyons, and a special thanks to Aliaksandr Herasimenka. We are on the internet @OxfordSparks, and go and check out our newly redesigned website at oxfordsparks.ox.ac.uk.

I’m Emily Elias. Bye for now.

(Music)

 

Transcribed by UK Transcription.
