“Humans have been lying since the day they were born” – The deepfake phenomenon is just a new form of lying

Can you imagine a world where you can never know whether what you read, see, or hear on your phone, laptop, or TV screen is true or ever happened? Where you can never be sure whether it was created by humans or by artificial intelligence? Or is this perhaps what you already feel? I talked to Dr. Petra Aczél, a communications researcher and one of the editors and authors of the book 'Deepfake: The Unreal Reality', published a few months ago.

Dr. Petra Aczél – Photo: László Emmer

For years, your articles in Képmás magazine have been about artificial intelligence and fake news, and this book, written by fourteen different authors, is about the intertwining of these two problems. How did this collection, which you edited together with linguist and communication researcher Dr. Ágnes Veszelszki, come about?  

When ChatGPT was made available for free in February 2023, the age of artificial intelligence arrived as suddenly as a war breaking out overnight. Even before that, there was much talk of a metaverse, but we thought robotization would change our lives by taking over physical work from humans. So the book was obviously not written after this news came out; it had already been in the works. A year and a half earlier, we had organised a professional conference which had already shown that there was a lot of interest in deepfake, because artificial intelligence generates millions of messages every day, and we cannot tell whether they are real or not.  

This book illustrates that it is useful to approach these new phenomena with a combined, problem-based approach from different scientific disciplines, just as it is high time to move beyond the subject-based framework in education.  

The book tries to approach the phenomenon in a variety of ways, for example with fourteen definitions emphasizing different factors. The authors are not all equally pessimistic or optimistic. 

We aimed to create a professional book in which we can make people aware that this is not a problem for others, but a task for all of us.

It was important that people could also read about this topic in Hungarian.

The essays in the book show, among other things, that while the brain can sometimes instinctively distinguish a false image from the real one, the human mind sometimes overrides intuition and seeks a kind of "comfortable truth" in order to function effectively.  

Lying is nothing new: humans have been lying since the day they were born, and so have many other creatures. To get along in society, we need this tool a little; for example, we can consider certain forms of politeness to be beneficial lies. 

We believe about half of the lies to be true, saving ourselves from dramatic daily realizations such as: "this person doesn't like me after all". 

What's new in deepfake is the programmer who enters into our lying and produces free programs to, for example, paste the face of the ex I'm angry at into porn content, or put my own compulsive fantasies into the mouth of a rival politician to discredit him. Moreover, the user or programmer can neither be held liable for this nor be exposed.  

Dr. Petra Aczél – Photo: László Emmer

So we shouldn't believe that this will only affect us if we are important enough in public life?  

No. In fact, a deepfake about ordinary people is less recognisable than one about public life. Think of personal revenge, or simply a student prank, for example when teenagers make a fake video of their teacher or a classmate. Deepfake is the use of artificial intelligence to plagiarise a person: I portray someone against their will, making them speak and act for my own benefit. There are, of course, precedents for fake reality: some of us may remember the first fake Hungarian documentary attempt, The Oil Gobblers. A whole country was abuzz because, by the rules of the genre, it was so believable. 

At the same time, deepfake is above all about the exploitation of the person, the personality. What makes this theme so important is the ease of access. Free software makes it possible not only for a select few with studio techniques to produce fake content, but also for bank fraudsters or scammers pretending to be someone's grandchild and asking grandparents for money. And content created with good intentions can later become harmful, since it is impossible to determine the future of anything put online.

There are relatively few prophecies or predictions in the essays. Do you think that if the online space is flooded with unverifiable content from the "holy amateurs" of content production, won't the credibility of professions and individuals be reassessed?  

There is no prediction in the book because, as I said, we could not have foreseen that this would so suddenly become such a significant issue, so widely. This is a warning to be cautious in our predictions about such processes. Will credibility be appreciated more? It does not need to be appreciated more, because credibility is still the most valuable thing today. 

Credibility is not just about reasonableness, and it does not operate on a purely informational basis. It requires the power of full experience, the weight of reality. 

As you said, we really don't teach cross-curricular problems and skills, and intuition has been taken out of our education for centuries. I think it is also very much missing from our culture, from the interactions that are important to us. And it is our intuition with which we perceive authenticity most of the time.  

Dr. Petra Aczél – Photo: László Emmer

Yet the warning not to form prejudices based on intuition stands in front of us like a stop sign...  

But prejudice remains in technology just the same; it's simply that someone programs their own values, their own truths, into it. We have greatly underestimated our soft skills, and we explain the human being in two dimensions, as rational or emotional. But there is a third dimension in each of us: the moral-transcendental. A device can pretend to be emotional; a voice-based artificial intelligence can talk to you in Hungarian in a kindly, polite way. You might even feel that it is your friend. We now know that people can fall in love with a voice assistant; there are even examples of people who have married theirs. But however human-faced AI is, it has no vision of God. It lacks something human, that certain third dimension necessary to know and understand the whole world. 

Do you feel anxiety about the world shaped by artificial intelligence?  

At first, I felt a resistance, a kind of detachment that told me: this is just for fun. Then I realized that I had to be involved in this because communication is my specialty. But I am not afraid. I think I am free to decide what to do with this opportunity. I can see that there is caution and suspicion in people in the workplace. Most of all, we fear that we ourselves will be replaced by artificial intelligence. Most parents are also worried about their children when they use new technologies. 

So we blame technology, yet we live both with its potential and in its bondage. And technology, including AI, has good aspects too, for example in education, when it brings history to life or simulates a chemical experiment.

How vulnerable are we?  

At a societal level, there is the potential for a repressive ideology to take hold through deepfake content, because it is worth investing money in. Today, states are spending a lot of money on how they can influence wars with artificial intelligence. I think vulnerability develops when you don't ask, don't seek, don't talk, don't make changes. There is no need for an eight-month-old child to spend one, two or three hours a day in front of a screen; many parents have actually managed to keep their preschoolers away from that. We need to talk about how we can be influenced by the new media as a medium, and we need to admit our own addictions. 

And it's worth asking, for example, your teenage child if they know of any free artificial intelligence software, and if so, why they use it. If we don't think we are omniscient, but ask our child for advice, we can learn from each other. We can ask them, "Hey, that text or picture you put together is exciting, funny, but have you thought about whether it's yours or the algorithm's, or the company's that programmed it?" Their responses can bring us closer to becoming aware, ourselves as well as them, of what technology can do to us, through us. 

At what points should moral and ethical considerations and legal regulation step in to prevent the creation of deepfake content?  

This should be a simultaneous process. The state cannot afford not to regulate, and Europe is trying to do so, just think of the important rules on data protection. 

The law can give us guidance, but we have to save ourselves. 

I can choose to entertain myself by watching porn with actresses' faces pasted in. If I do, the law protects no one. What the AI does, I do, or another person does. This needs to be talked about, not with a constant sense of threat, but with an intention to change and improve. Of course, technology saves us time only to take it away at the same time. In any case, it cannot bring about the end of the world without us. So it is up to us what we allow it to do.  
 
