TORONTO -- As fake videos generated by AI become ever more convincing, what began as a way to share laughs on the internet has grown into a worrying corner of digital media.

Whether it’s a viral video of “Tom Cruise” doing a magic trick or “Facebook’s Mark Zuckerberg” boasting about having “total control of billions of people’s stolen data,” deepfake videos have the capacity to cause real harm to people who fall for their deception.

A Pennsylvania woman was charged last weekend after allegedly making deepfake videos of girls on a cheerleading team her daughter used to belong to – videos that showed the girls nude, smoking or partying – in an attempt to get them kicked off the team.

Graphic artist Chris Ume, the mastermind behind the Tom Cruise TikTok deepfake, told CTV News that when he started making deepfake videos it was just to “have good fun.” 

But now as manipulated media continues to make headlines, his views have changed. 

“I’m concerned that it’s getting easier to do it,” Ume said. “Especially when people want to misuse the technology.”

WHAT ARE DEEPFAKES?

Deepfake media is created by manipulating photos and videos with an AI process known as “deep learning” – the source of the “deep” in “deepfake.”

Many deepfakes are used for pornography. A 2019 report from the AI firm Deeptrace Labs found more than 14,500 deepfake videos online in September of that year – 96 per cent of them pornographic in nature. Of the videos studied, 99 per cent involved swapping female celebrities’ faces onto the bodies of porn stars without their consent.

Deepfake technology can also be used to create convincing but entirely fake pictures from scratch, and audio can be faked as well through “voice skins” – a process in which someone’s voice is cloned and then manipulated to “say” whatever the user wants.

Public figures such as politicians or CEOs are especially vulnerable to this process because of their frequent public addresses; in 2019, scammers used a voice skin to trick the CEO of a U.K.-based energy firm into sending funds to a Hungarian supplier. The CEO believed he was speaking to his boss, a German executive.

A 2019 deepfake video of former Italian prime minister Matteo Renzi insulting other politicians caused outrage in Italy before it was revealed to be a manipulated video for an Italian satirical show. 

HOW ARE DEEPFAKE VIDEOS MADE?

There are a few ways to make a deepfake, each involving several steps.

First, a user can feed thousands of photos of two people into an AI algorithm called an encoder, which finds the similarities between the two faces and reduces them to a compressed set of shared features.

A second algorithm, known as a decoder, is then trained to recover each person’s face from those compressed features – one decoder per face. To make a face swap, the user simply feeds one person’s encoded face into the “wrong” decoder, which reconstructs the other person’s face with the original’s pose and expression. This has to be done for every frame of the video being created.
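
For readers curious about the mechanics, the shared-encoder, two-decoder idea can be sketched in a few dozen lines of code. The PyTorch example below is a minimal illustration with assumed layer sizes and 64-by-64 images; real face-swap software also needs face detection, alignment and blending, and is far more elaborate.

```python
# A minimal sketch of the shared-encoder / two-decoder face swap in PyTorch.
# Layer sizes and 64x64 images are illustrative assumptions only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses any face image into a shared feature representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the shared features; one decoder per person."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training: each decoder learns to reconstruct its own person's face
# from the shared feature space (simple reconstruction loss shown here).
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a photo of person A
loss = nn.functional.mse_loss(decoder_a(encoder(face_a)), face_a)

# The swap: route person A's encoded face through person B's decoder,
# yielding B's face with A's pose and expression, repeated per frame.
swapped = decoder_b(encoder(face_a))
```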

Generative Adversarial Networks, or GANs, are another method. GANs pit two neural networks against each other: a synthesizer (the “generator”), which creates content, and a detector (the “discriminator”), which compares the synthesized images to the real thing.

By cycling content through the two networks hundreds of thousands of times, the pair learns to produce believable manipulated media.
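
In code, that adversarial cycle might look something like the PyTorch sketch below; the tiny networks, the 100-number noise input and the random stand-in “photos” are assumptions for illustration, not a production system.

```python
# An illustration-only GAN training loop in PyTorch.
import torch
import torch.nn as nn

generator = nn.Sequential(          # the synthesizer: noise -> image
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Sigmoid(),
)
discriminator = nn.Sequential(      # the detector: image -> P(real)
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(8, 3 * 64 * 64)  # stand-in for real photos

for step in range(1000):  # in practice, hundreds of thousands of cycles
    # 1) Teach the detector to separate real photos from synthesized ones.
    fake = generator(torch.randn(8, 100)).detach()
    d_loss = (bce(discriminator(real_images), torch.ones(8, 1))
              + bce(discriminator(fake), torch.zeros(8, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Teach the synthesizer to fool the detector.
    fake = generator(torch.randn(8, 100))
    g_loss = bce(discriminator(fake), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```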

The requirements for creating convincing deepfakes – a powerful desktop with a high-end graphics card and a working knowledge of video editing – mean that the average internet user is not going to be churning out manipulated videos or photos anytime soon.

However, “shallow fakes,” which are videos that are manipulated using regular editing tools, are still capable of fooling people. 

Facebook banned deepfakes on its site prior to the 2020 U.S. election in a bid to stem misinformation, but its policy did not extend to “shallow fakes” – which is why a manipulated video of House Speaker Nancy Pelosi “slurring” her way through a speech was allowed to stay on the site.

HOW CAN YOU SPOT A DEEPFAKE?

As AI technology advances, spotting deepfake videos becomes much more difficult. 

Poor lip-syncing or flickering between video frames can give away a lower-quality deepfake, but higher-quality tools have largely rendered those tells moot.

In 2018, researchers reported in a paper posted to Cornell University’s arXiv repository that deepfake faces didn’t blink, but no sooner had the research been published than deepfake software was updated to address the issue.

Ironically, AI may be the best way to spot deepfakes, and corporations such as Microsoft have launched initiatives to detect and remove manipulated media.
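
Microsoft’s specific tools are not public, but the general AI-versus-AI approach can be illustrated by fine-tuning an ordinary image classifier to label video frames as real or fake, as in the hypothetical PyTorch sketch below; the random frames and labels are placeholders for the large labelled datasets real detectors are trained on.

```python
# A hedged sketch of deepfake detection: fine-tune an off-the-shelf
# image classifier (torchvision's ResNet-18) to label frames real or fake.
import torch
import torch.nn as nn
from torchvision import models

detector = models.resnet18()        # generic CNN backbone
detector.fc = nn.Linear(512, 2)     # two classes: 0 = real, 1 = fake

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

frames = torch.rand(4, 3, 224, 224)     # stand-in video frames
labels = torch.tensor([0, 1, 0, 1])     # stand-in ground truth

detector.train()
loss = criterion(detector(frames), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```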

WHAT IS THE DANGER WITH DEEPFAKES?

While tricking the public into believing a large-scale event has occurred is unlikely – most countries have surveillance systems and intelligence communities that can verify such claims – deepfake media can still erode trust in public institutions and individuals.

In 2019, professor Hany Farid of the University of California, Berkeley warned that deepfake technology has been “weaponized” against women, especially when it is used to create and distribute revenge porn.

Farid told CTV News that the risks from deepfakes are tantamount to “massive fraud.”

A deepfake bot discovered on the encrypted chat app Telegram in 2020 was used to “undress” more than 100,000 women, many of whom were under the age of 18.

The Telegram bot is thought to be powered by DeepNude software, first reported on by Vice in 2019, which uses deep learning to generate what it thinks a person’s body looks like. 

“We can really wreak havoc, and the real concern here is the virality with which this content spreads online before anybody figures out that it’s fake,” Farid said.

------------

With files from CTV News' Washington bureau correspondent Richard Madan