Why are deepfakes happening?

Wednesday 1st Nov 2023, 12.30pm

Welcome to the new series of the Big Questions podcast, where we ask Oxford scientists to shed light on everyday questions that you really want to know the answer to.

Remember those photos or videos online that don’t look quite right? Perhaps you’ve heard a celebrity’s voice somewhere unexpected? In this episode, we chat to computational social scientist Dr Bernie Hogan from the Oxford Internet Institute about deepfakes: media synthetically generated by technology to capture someone’s likeness. As AI and machine learning technology develop rapidly, how can we regulate the creation of deepfakes and know what is real? Tune in to find out!


EMILY ELIAS: Maybe you’ve seen it online. Tom Cruise on TikTok. “I’m going to show you some magic. It’s the real thing”. But that’s not actually Tom Cruise. It’s a deepfake.

On this episode of the Oxford Sparks Big Questions podcast, we’re asking: why are deepfakes happening?

Hello, I’m Emily Elias, and this is the show where we seek out the brightest minds at the University of Oxford and we ask them the big questions. And for this one, we have found a researcher who’s doing deep dives into deep fakes.

BERNIE HOGAN: Well, I’m Bernie Hogan. I’m an Associate Professor and a Senior Research Fellow at the Oxford Internet Institute. We’re a graduate department at the University of Oxford looking at life online. I, myself, am a social data scientist, so I’m part sociologist, part computer scientist. I mainly use quantitative data, and sometimes qualitative data, like text analysis and image analysis, mainly to look at issues with how people relate to each other online and how they represent themselves online.

EMILY: Well, that puts you in a very good position to answer a bunch of questions I have today about deepfakes. First off, let’s just start with sort of the basics. Why are people even going down this route and making deepfakes?

BERNIE: Well, I guess the first answer, maybe it’s not the most satisfying, but it’s because they can, because it’s a fascination with a new technology. I mean, if we look through aesthetic forms throughout history, as media would evolve, so would the ways people would want to work with that media. Now, what we have lately is media that’s derived from machine learning or artificial intelligence. Not really just a mapping of images or things in the world, but kind of inferring them and putting them into something that looks like an image, a video, or even audio.

EMILY: And are these deepfakes, are they easy to make?

BERNIE: Deepfakes are not necessarily easy to make, but it really depends on the quality of the deepfake and the intent behind it. There’s a difference between making an image of a portrait of someone looking straight ahead, which is very square and stable, and other sorts of shots, maybe someone running or jumping, or people in dialogue and so forth. Those can get a lot more difficult because there’s so much more complexity in what you want to represent.

EMILY: I mean, when I look at them, sometimes they just kind of, like, make my brain melt of, like, how did they do that? How are they so good? Does being able to capture somebody’s likeness in a really clear way make it a good deepfake, I guess?

BERNIE: I guess, yeah. It really depends on what you want to accomplish with the image. Now, it’s worth considering that we’ve got two technologies working at the same time, or two ideas working at the same time. The first one is synthetic imagery. Synthetic imagery is based on training on a lot of photos, and then you make a photo that looks like it could be a house, a dog, or a person. And the second one is trying to capture, as information, somebody’s representation or their likeness, like you mentioned. The difference is that if you have a photo, you can kind of, like, paste someone’s face into that photo, and it doesn’t necessarily look very good; in fact, it can look very silly and terrible. What’s happened is that you’ve just taken an image and put it in an image, but that’s not a likeness. To do that, you need to infer what the likeness is like in a number of different circumstances. So that’s where the machine learning comes in. You would, say, want to see someone’s face under different lighting conditions, or see the shape of their body or how they stand or hold themselves, much like how body doubles would be trained in the posture of an actor they’ve been working with. So in order to do that training, I guess to get back to the first question of ‘how hard are they?’, it can get pretty tricky to capture someone’s likeness and then represent it in another way that’s really credible, that makes it seem like, yeah, you’ve captured that person.
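A quick aside to make Bernie’s contrast concrete: the naive “paste a face into a photo” approach he describes really is just pixel copying. Here is a minimal sketch in Python, assuming the OpenCV library and two hypothetical image files; no machine learning is involved, which is exactly why the result looks wrong under mismatched lighting, pose, or perspective.

```python
# A minimal sketch (not from the episode) of naive face-pasting using only
# OpenCV. File names and the paste position are hypothetical. Nothing here
# adapts the face to the scene's lighting or pose, which is why the result
# tends to look "silly and terrible", as Bernie puts it.
import cv2

scene = cv2.imread("scene.jpg")  # the target photo (hypothetical file)
face = cv2.imread("face.jpg")    # a cropped, front-on face (hypothetical file)

# Resize the face to a fixed patch and overwrite the pixels at a chosen spot.
face = cv2.resize(face, (120, 160))  # (width, height)
y, x = 40, 200                       # arbitrary paste position
scene[y:y + 160, x:x + 120] = face

cv2.imwrite("pasted.jpg", scene)
```

Capturing an actual likeness, by contrast, means learning how the person looks across many conditions, which is the machine learning step Bernie describes.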

EMILY: I mean, but that’s the technical aspect of it. Capturing somebody else’s likeness and using it for whatever means you want, obviously must raise some massive ethical questions.

BERNIE: Absolutely, yes. So that is where my interest in the topic comes in. Not so much in, like, what’s the best new algorithm to do it, but what is the likeness to begin with? Is it just a description of us, or is it something that can’t really exist without us? It’s kind of come from us, we might say. And that likeness, we now have a sense, is really part of the self. It’s part of how we understand ourselves, both to the self and to other people. And so when you represent someone in an image, there’s no guarantee that you’re going to represent them in a way that they want to be represented, because you can combine their likeness with all kinds of other representations, maybe a soldier or someone riding a dragon and so forth. You could put them in those situations if you sufficiently capture the likeness.

EMILY: We see a lot of celebrities get deepfakes made of them, your Tom Cruises, your Harrison Fords of this world. But could we see a more democratized deepfake world, where all of a sudden we see a deepfake of our sister, or our aunt, or our parents?

BERNIE: Indeed, it’s certainly possible now to render them from existing models. You can do that on, like, a MacBook or a decent gaming machine. In order to train them, you need slightly better equipment, but you can also do this stuff online, and the guides for it are pretty clear. The ability to make your own models isn’t everywhere yet, but the actual technology is becoming pretty broadly diffused. We can already see an example of this through the app Lensa, which uses sort of common diffusion-based technologies, as far as I could tell. You upload 10, 20 photos of yourself and it renders these profile photos, and some are really cool, I don’t know, psychedelic or colourful, and some look professional. That’s become a bit common, and indeed, TikTok filters and Snapchat filters have been really amping up what can be done. But to train on someone else, like you said, that still takes some pretty decent computing power, but it’s not that complicated to do. You really just have to follow the instructions of the repositories of computer code that show how to do it step by step. The challenging part is ensuring that you have the right kind of images, the number of images, the patience, and, I think most importantly, the permission of the people that you’re rendering as a model.
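To give a hedged sense of what “rendering from existing models” looks like in practice: assuming a model has already been fine-tuned on 10–20 photos of a person, for instance with the DreamBooth example script in the open-source Hugging Face diffusers repository, generating new images of that likeness takes only a few lines. The library choice, the local model path, and the “sks” placeholder token below are illustrative assumptions, not anything named in the episode.

```python
# A hedged sketch of rendering a likeness from an already fine-tuned model,
# using the Hugging Face "diffusers" library (an illustrative choice).
# "./my_likeness_model" and the "sks" token are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

# Load the locally fine-tuned model rather than a stock one.
pipe = StableDiffusionPipeline.from_pretrained(
    "./my_likeness_model",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # "mps" also works on an Apple-silicon MacBook

# During fine-tuning, a rare token (here "sks") gets bound to the person's
# likeness, which is why a prompt can now place them in arbitrary scenes.
image = pipe("a professional profile photo of sks person").images[0]
image.save("profile.png")
```

Which underlines Bernie’s point: the hard part is no longer the code, but the images, the patience, and the permission.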

EMILY: So right now, are there any rules or guidelines out there that regulate who you can make a deepfake of, or how a deepfake can be spread around?

BERNIE: You’d think! But no, there aren’t really rules per se. There are laws against certain kinds of imagery being involved in this, but that’s kind of a precursor, because you shouldn’t have certain images to train on anyway, you know, sexual abuse images and so forth. But beyond that, we’re left with convention. So, one of the studies that I’ve done with a student of mine over the past couple of months is looking at online spaces and how they self-regulate on this. And right now, it’s pretty evident that there are some spaces that are pretty well moderated. When they’re well moderated, that means limiting content involving children, limiting content that is of a sexual nature, and limiting content that combines public figures with content of a violent or sexual nature. But those tend to be norms of the space. There’s no specific law on either what you can train on or how you can represent someone, as far as I understand, beyond what you’d see for standard copyright or trademark.

EMILY: As we’re hurtling towards the future, what do you think lawmakers need to consider when it comes to regulating this corner of the internet that’s currently in a wild, wild west sort of stage?

BERNIE: There are two main areas of consideration here: one is the regulation of the generation of synthetic imagery, and the other is the regulation of the distribution of synthetic imagery. There’s been a lot of focus on both, but I think most regulatory focus has been on the distribution of the images. In different jurisdictions, they talk about images being watermarked or clearly labelled as synthetic, but we don’t yet have, as I understand it, a sort of likeness right for an individual that says: if you share my likeness, you should get my permission for that, or if you’re misrepresenting me using a deepfake, that should be acknowledged. Of course, as soon as one starts down that road, it goes back into other sorts of civil rights legislation or free press legislation. What about parody, and what about promotional work? Do we already have a public right to make a parody of someone? Right now, people can do cartoon drawings in the newspaper of presidents or prime ministers and so forth. Should we still be allowed to make deepfakes of a public figure, much like Channel 4 did with The Queen several years ago?

THE QUEEN PARODY: “If there is a theme to my message today, it is trust. Trust in what is genuine and what is not.”

BERNIE: It was sort of farcical. You wouldn’t really mistake it for The Queen, but it was done in a very plausible way. No, we don’t have a lot of clear legislation on that yet. On the generation side, legislation is starting, and Europe has some rather considerable legislation coming in on the provenance of the images, or the provenance of the data that’s used to create the models in the first place. You should be able to tell people and show them exactly what went into that model in order to make it. And that gets around some issues, because if you don’t have the rights to the images you train on, then you shouldn’t really be rendering with that model to begin with.
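On the “clearly labelled as synthetic” idea: at its simplest, labelling means attaching machine-readable information to the image file itself. A toy sketch with the Pillow library follows; the real provenance schemes regulators discuss involve cryptographically signed, tamper-evident metadata rather than a plain tag like this, and the file names here are hypothetical.

```python
# A toy illustration of labelling an image as synthetic by embedding text
# tags in its PNG metadata, using the Pillow library. A plain tag like this
# can be stripped trivially; it only illustrates the labelling idea.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("profile.png")  # hypothetical synthetic image

meta = PngInfo()
meta.add_text("synthetic", "true")
meta.add_text("generator", "fine-tuned diffusion model")  # illustrative value

img.save("profile_labelled.png", pnginfo=meta)

# Reading the tag back from the saved file:
print(Image.open("profile_labelled.png").text.get("synthetic"))  # -> "true"
```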

EMILY: But then I guess there is that issue of consent. So say Carrie Fisher consents to her image being used, but she doesn’t necessarily know how it’s going to be used.

BERNIE: Yeah, indeed, that’s a really thorny issue, in the private sphere and among the public. When money is involved, then, you know, liability or lawyers can get involved. We could ask, why were you rendering Carrie Fisher in a Warner Brothers movie when she’d licensed her likeness exclusively for Star Wars, or something? Which is itself kind of a wild idea, and then there’s distinguishing whether we’re talking about representing Carrie Fisher or Princess Leia. Now, an early and striking example of a deepfake was indeed Princess Leia in the movie Rogue One. But, more recently, we might ask, is anyone else allowed to represent her? Netflix has a show, Black Mirror, and the first episode of the most recent series discusses this and explores some of these issues in a bit of extreme detail, about what we sign our rights away for. But that question gets even more complicated in the private sphere, because then it’s civil. It’s not like you’re trying to make money off someone, but you could exploit them in other ways. You could bully them, put them in compromising situations, and harass people with that. And it’s not unlikely that we will see examples of criminality involving deepfakes in the future.

EMILY: Well, that’s going to give you a lot to study.

BERNIE: Yeah, indeed, it’s a lot of work. It’s complicated, first to get an understanding of the technology and what it’s capable of, and then also to understand what it’s good for. Why are people doing this? Not just as a form of entertainment or a form of artistic practice, but now as a social practice. And whether it will be a practice that we can even legislate, or whether we merely have normative conventions of good or bad taste, is still really up for discussion. But there are some vulnerable people in vulnerable situations that have to be considered with this technology.

EMILY: This podcast was brought to you by Oxford Sparks from the University of Oxford with music by John Lyons and a special thanks to Dr Bernie Hogan.

Tell us what you think about this podcast. You can find us on social media, we are @oxfordsparks, and we’ve got a website: oxfordsparks.ox.ac.uk. And if you’ve got a big question, we will try to find you an answer, so get in touch. I’m Emily Elias. Bye for now.