
Glut of social media posts, political divisiveness a challenge for content moderators

Leigh Adams, director of moderation services at Viafoura, a content moderation company, is photographed in Toronto, Friday, May 13, 2022. Chris Young / THE CANADIAN PRESS
TORONTO -

Leigh Adams has seen a steady rise in the amount of material flagged for review since she began moderating user comments on websites roughly 14 years ago, but she says the volume only exploded in the last few years, as the content became so divisive there's only one word for it: "Bonkers."

Misinformation, trolling and worse have always existed online, but Adams says she saw a shift after the U.S. elected Donald Trump president in 2016 that reached a new height when George Floyd, a Black man in Minneapolis, was killed in police custody in May 2020, fuelling racial tensions just as the world was locked down due to the COVID-19 pandemic.

"It was really the perfect storm ...The internet was already struggling at that time with, 'How do we reconcile anonymity and accountability? How do we make sure to amplify the voices of those who might not be heard?"' said Adams, director of moderation services at Viafoura, a Toronto business reviewing user content for publishers.

"We had not solved for that and we still haven't solved for that, but then you have these (events) on top of it, it just made it a bad situation worse."

Adams noticed that Trump's departure from office and the return of pre-pandemic activities slightly quelled the "inflamed rhetoric" seen by Viafoura's more than 800 clients, which include media brands CBC, Postmedia and Sportsnet.

But she expects future "swelling" and other content moderation companies say they've detected no significant signs of the onslaught receding. It's likely that keeping up with the volume will mean tackling an evolving suite of challenges.

Moderators foresee health misinformation continuing to spread rampantly, dubious posters becoming even more sophisticated in their attempts to disrupt platforms and a slew of new regulations targeting online harms in Canada and abroad.

"I don't see the demand declining any time soon, despite all of the talk of recession," said Siobhan Hanna, Telus International's managing director and global vice-president of artificial intelligence.

"For better or worse, this content moderation need will continue to just grow, but the need is going to be for more intelligent, efficient, thoughtful, representative, risk mitigative solutions to handle the increased demand."

Hanna says video is becoming one of the most challenging areas because moderators are no longer only reviewing clips depicting violence, indecency or other harms that may be difficult to watch.

Now there are also so-called deep fakes -- videos where someone's face or body has been digitally spliced into the frame so they appear to be doing or saying things they never did.

The technology has cropped up prominently on TikTok, where visual effects artist Chris Ume spread clips purporting to be of actor Tom Cruise performing card tricks, eating a gum-filled lollipop and performing Dave Matthews Band's song "Crash Into Me."

"I don't think anybody's going to be harmed by ... the videos he's creating, but it's also getting us all used to these deep fakes and maybe drawing our attention away from the more sinister applications, where it could affect the course of an election, or it could affect health care outcomes or decisions made around crimes," Hanna said.

In Northern Ireland, for example, videos supposedly depicting political candidates Diane Forsythe and Cara Hunter committing sexual acts were circulated while they ran for office earlier this year.

"I never cease to be surprised," said Adams. "You see the worst thing and then something else comes along, you think, 'what could possibly happen next?"'

Her team recently found a photo that appeared to be a sunset at first glance but, 17 layers back, showed a nude woman.

"If we had not had five people looking at that, it would have been live and up there," she said.

"It's getting more sophisticated and so you have to find new artificial intelligence (AI) tools that are just going to keep digging deeper."

Most companies rely on a blend of human moderators and AI-based systems to review content, but many like Google have conceded machine-based systems "are not always as accurate or granular in their analysis of content as human reviewers."

Adams sees the follies of AI when people invent and popularize new terms -- "seggs" instead of sex, "unalive" instead of dead and "not see" instead of "Nazi" -- to avoid being flagged by moderators, security filters and parental controls.

"In the amount of time it's going to take machines to learn that, that news cycle is over and we're onto something else because they found a new way to say it," Adams said.

But humans also aren't perfect and often can't keep up with the volumes of content alone.

Two Hat, a Kelowna, B.C., moderation company used by gaming brands Nintendo Switch and Rovio and owned by Microsoft, went from processing 30 billion comments and conversations a month before the health crisis to 90 billion by April 2020. Microsoft Canada did not provide more recent numbers, with spokesperson Lisa Gibson saying the company is not able to discuss trends at this time.

Facebook, Instagram, Twitter, YouTube and Google warned users in 2020 they were taking longer to remove harmful posts as the pandemic began and staff retreated home, where viewing sensitive content was tougher and in some cases, forbidden for security reasons.

When asked whether backlogs have been cleared, Twitter declined to comment and Facebook and Instagram did not respond. Google temporarily relied on more technology to remove content violating its guidelines as the pandemic began, which led to an increase in total video removals, spokesperson Zaitoon Murji said. The company expects to see a decline in video removals as it scales back that technology as more moderators return to the office, she added.

As the backlogs formed, countries toughened their stance on harmful content.

The EU recently reached a landmark deal requiring the prompt removal of harmful materials online, while Canada is promising to soon table a bill combating online hate, after a previous iteration was shelved amid a federal election.

Adams says the convergence of COVID-19, Trump's rise and the killing of Floyd made publishers more willing to take a stand against problematic content such as hate speech and health misinformation. Legislation, which can vary across countries and is often left up to interpretation, could result in companies having even less tolerance and taking down anything that runs the risk of being seen as problematic, she said.

The stakes are high because letting too much problematic content on a platform can make it unsafe, but removing too much can also interfere with free speech, said Anatoliy Gruzd, a Toronto Metropolitan University professor of information technology management.

"From the user side, that may feel like there's not enough effort to make platforms a welcoming and safe place for everyone, and in part that's because the platforms become so huge, with millions and billions of users at once," he said.

Gruzd doesn't see striking a balance between safety and freedom getting any easier as the policy patchwork evolves, but believes society will move toward considering boundaries and what is acceptable or not to be exposed to.

He said, "Some people will vote with their usage, whether they stop using Facebook or Twitter for certain things, they might decide to go to other platforms with or without too much moderation or they may decide to stop using social media completely."

This report by The Canadian Press was first published May 22, 2022.
