World needs to be 'vigilant' as AI technology improves and deepfakes spread: UN adviser

A UN adviser says the world needs to be "vigilant" as artificial intelligence technology improves, allowing for more realistic-looking deepfakes.

Deepfakes refer to media, typically video or audio, manipulated with AI to falsely depict a person saying or doing something that never happened in real life.

"A digital twin is essentially a replica of something from the real world… Deepfakes are the mirror image of digital twins, meaning that someone had created a digital replica without the permission of that person, and usually for malicious purposes, usually to trick somebody," California-based AI expert Neil Sahota, who has served as an AI adviser to the United Nations, told CTVNews.ca over the phone on Friday.

Deepfakes have been used to produce a wide variety of fake news content, such as a video that appeared to show Ukrainian President Volodymyr Zelenskyy telling his country to surrender to Russia. Scammers have also used deepfakes to produce false celebrity endorsements. In one instance, an Ontario woman lost $750,000 after seeing a deepfake video of Elon Musk appearing to promote an investment scam.

On top of scams and fake news, Sahota notes that deepfakes have also been widely used to create non-consensual pornography. Last month in Quebec, a man was sentenced to prison for creating synthetically generated child sexual abuse imagery, using social media photos of real children.

"We hear the stories about the famous people, it can actually be done to anybody. And deepfake actually got started in revenge porn," he said. "You really have to be on guard."

Sahota says people need to keep a keen eye out for videos and audio that seem off, as that could be a sign of manipulated media.

"You got to have a vigilant eye. If it's a video, you got to look for weird things, like body language, weird shadowing, that kind of stuff. For audio, you got to ask… 'Are they saying things they would normally say? Do they seem out of character? Is there something off?'" he explained.

At the same time, Sahota says policymakers need to do more when it comes to educating the public on the dangers of deepfakes and how to spot them. He also suggests there should be a content verification system using digital tokens to authenticate media and snuff out deepfakes.

"Even celebrities are trying to figure out a way to create a trusted stamp, some sort of token or authentication system so that if you're having any kind of non-in-person engagement, you have a way to verify," he said. "That's kind of what's starting to happen at the UN-level. Like, how do we authenticate conversations, authenticate video?”
