Deepfakes on social media can affect you too

Fake videos

“Deepfakes on social media are a new kind of spam.”

Whether it’s Tom Hanks or MrBeast, deepfake videos are spreading across social media. But it’s not just celebrities who can become victims, as Patrick Remy of the University of Freiburg explains in an interview.


The AI software HeyGen translates the language spoken in a video into another language – the voice remains almost the same, while the lip movements are adjusted to match. The resulting deepfake is astonishing.

20 minutes / Tariq Al-Sayed

  • Deepfakes are increasingly being used for fraud attempts on social media.

  • AI-generated videos are difficult to distinguish from real ones.

  • They even appear as ads on platforms like TikTok.

  • Media expert Patrick Remy from the University of Freiburg explains the risks this poses.

Whether celebrities, politicians or influencers: anyone with videos of themselves on social media or elsewhere on the internet can become the victim of an AI deepfake. The technology for creating deepfakes has advanced to the point where fakes are often barely distinguishable from genuine footage. Such videos are now increasingly appearing on platforms like TikTok and Instagram, often with fraudulent intent behind them. Actor Tom Hanks recently warned that a fake video of him was circulating, promoting dental insurance. And in a fake ad on TikTok, popular influencer MrBeast appeared to hand out cheap iPhones to his fans – without knowing anything about it.

Patrick Remy, a communications scientist and deepfake expert at the University of Freiburg, explains in an interview the dangers these deepfake videos pose and why they are becoming more prevalent again.

Mr. Remy, have you seen the MrBeast video?

Yes, the example illustrates the risk to the person whose image and video material has been misused, as well as the risk that recipients of the message will fall into a trap. The fact that the video effectively ran as an ad makes the matter even more serious: it lends it more authenticity. Deepfakes are like a new type of spam – the successor to phishing emails. The difference is that deepfakes are much harder to see through, so people need to be even more alert to them.

Why are there more videos like this?

With ChatGPT, a hype has built up around artificial intelligence, which has also brought greater attention to deepfakes. AI has made tremendous progress in recent months, and so has the technology behind deepfakes. About five years ago, it took an entire university team to create a deepfake video of former US President Obama; today we can assume that producing such videos is far easier.

And it looks more realistic, too.

Yes, over the years social media has been fed with images and videos in ever better resolution. All of this material makes celebrities, influencers and politicians alike vulnerable, and puts them at particular risk of highly realistic deepfakes. For private individuals, the risk is less acute, but as the technology advances, the chance of being affected grows.

Deepfakes are realistic-looking media – images, audio or videos – that have been manipulated by artificial intelligence. This type of media manipulation shows people doing or saying things that never happened. The term is a combination of “deep learning”, a form of machine learning, and “fake”. You can find an example in the video at the top of the article.

Can you even protect yourself?

An old rule from the early days of social media says: “Think twice before posting a photo or video online.” Today this rule applies more than ever. People should be aware that their photos and videos are on the Internet and can easily be misused.

“An old rule says: Think twice before posting a photo or video online. This applies today more than ever.”

Patrick Remy, media expert at the University of Freiburg

How do you avoid falling into the deep fake trap yourself?

It’s difficult to identify a well-made deepfake as such. A certain degree of media literacy is required – and not just among young people. It is possible to develop a good instinct for spotting fakes. Most of the time, a few simple mental steps help: Who is the sender of the message? Where does it appear? How reliable is the source? Is anyone else reporting this news? With a trained eye, common sense and critical thinking, you will quickly notice when something is off.

Deepfakes also have technical limitations. Because they are created from existing material, and because the AI’s ability to invent new situations is limited, a few telltale features often stand out: look for unusual eye movements, unrealistic lighting and strange facial expressions. Also check the background for inconsistencies and pay attention to unnatural speech – a look at the lips can often reveal a lot. In addition, people in deepfakes usually move their bodies only slightly.

The danger also affects politicians. Are there already discussions at this level?

As mentioned, politicians are especially vulnerable to deepfakes because they are so often on camera and in the public eye. In the current election campaign, artificial intelligence has already been used on election posters. For this reason, at the end of September almost all parties – with the exception of the FDP and the Swiss People’s Party – committed to refraining from using artificial intelligence on posters or in advertisements.

At the end of 2021, a deepfake of Guy Parmelin became a topic of discussion. The video was created by an expert and was removed from the internet after the Federal Council intervened.


What is the current state of research on deepfakes?

We are conducting an interdisciplinary study on deepfakes, with researchers from the University of Zurich, the University of Freiburg and the Fraunhofer Institute in Germany, among others. In this study we want to assess the impact, opportunities and risks of digital manipulation of audio, image and video material. We are also analyzing the technical options that already exist for detecting and reporting such manipulated content. The results are scheduled to be published in June 2024.

Patrick Remy, lecturer in communication sciences and AI media literacy expert, is currently involved in a research project on deepfakes in journalism.

Unifr.ch / Patrick Remy

