Glut of social media posts, political divisiveness a challenge for content moderators

Leigh Adams has seen a steady rise of material for review since she began moderating user comments on websites roughly 14 years ago, but she says the volume only exploded in the last few years as the content's nature became so divisive there's only one word for it: "Bonkers."
Misinformation, trolling and worse have always existed online, but Adams says she saw a shift after the U.S. elected Donald Trump president in 2016 that reached a new height when George Floyd, a Black man in Minneapolis, was killed in police custody in May 2020, fuelling racial tensions just as the world was locked down due to the COVID-19 pandemic.
"It was really the perfect storm ...The internet was already struggling at that time with, 'How do we reconcile anonymity and accountability? How do we make sure to amplify the voices of those who might not be heard?"' said Adams, director of moderation services at Viafoura, a Toronto business reviewing user content for publishers.
"We had not solved for that and we still haven't solved for that, but then you have these (events) on top of it, it just made it a bad situation worse."
Adams noticed that Trump's departure from office and the return of pre-pandemic activities slightly quelled the "inflamed rhetoric" seen by Viafoura's more than 800 clients, which include media brands CBC, Postmedia and Sportsnet.
But she expects future "swelling," and other content moderation companies say they've detected no significant signs of the onslaught receding. Keeping up with the volume will likely mean tackling an evolving suite of challenges.
Moderators foresee health misinformation continuing to spread rampantly, dubious posters becoming even more sophisticated in their attempts to disrupt platforms and a slew of new regulations targeting online harms in Canada and abroad.
"I don't see the demand declining any time soon, despite all of the talk of recession," said Siobhan Hanna, Telus International's managing director and global vice-president of artificial intelligence.
"For better or worse, this content moderation need will continue to just grow, but the need is going to be for more intelligent, efficient, thoughtful, representative, risk mitigative solutions to handle the increased demand."
Hanna says video is becoming one of the most challenging areas because moderators are no longer only reviewing clips depicting violence, indecency or other harms that may be difficult to watch.
Now there are also so-called deep fakes -- videos where someone's face or body has been digitally spliced into the frame so they appear to be doing or saying things they never did.
The technology has cropped up prominently on TikTok, where visual effects artist Chris Ume spread clips purporting to be of actor Tom Cruise playing card tricks, eating a gum-filled lollipop and performing Dave Matthews Band's song "Crash Into Me."
"I don't think anybody's going to be harmed by ... the videos he's creating, but it's also getting us all used to these deep fakes and maybe drawing our attention away from the more sinister applications, where it could affect the course of an election, or it could affect health care outcomes or decisions made around crimes," Hanna said.
In Northern Ireland, for example, videos supposedly depicting political candidates Diane Forsythe and Cara Hunter committing sexual acts were circulated while they ran for office earlier this year.
"I never cease to be surprised," said Adams. "You see the worst thing and then something else comes along, you think, 'what could possibly happen next?"'
Her team recently found a photo that appeared to be a sunset at first glance, but 17 layers back, showed a nude woman.
"If we had not had five people looking at that, it would have been live and up there," she said.
"It's getting more sophisticated and so you have to find new artificial intelligence (AI) tools that are just going to keep digging deeper."
Most companies rely on a blend of human moderators and AI-based systems to review content, but many like Google have conceded machine-based systems "are not always as accurate or granular in their analysis of content as human reviewers."
Adams sees the follies of AI when people invent and popularize new terms -- "seggs" instead of "sex," "unalive" instead of "dead" and "not see" instead of "Nazi" -- to avoid being flagged by moderators, security filters and parental controls.
"In the amount of time it's going to take machines to learn that, that news cycle is over and we're onto something else because they found a new way to say it," Adams said.
But humans also aren't perfect and often can't keep up with the volumes of content alone.
Two Hat, a Kelowna, B.C., moderation company owned by Microsoft and used by gaming brands including Nintendo and Rovio, went from processing 30 billion comments and conversations a month before the health crisis to 90 billion by April 2020. Microsoft Canada did not provide more recent numbers, with spokesperson Lisa Gibson saying the company is not able to discuss trends at this time.
Facebook, Instagram, Twitter, YouTube and Google warned users in 2020 they were taking longer to remove harmful posts as the pandemic began and staff retreated home, where viewing sensitive content was tougher and in some cases, forbidden for security reasons.
When asked whether backlogs have been cleared, Twitter declined to comment and Facebook and Instagram did not respond. Google temporarily relied on more technology to remove content violating its guidelines as the pandemic began, which led to an increase in total video removals, spokesperson Zaitoon Murji said. The company expects to see a decline in video removals as it scales back that technology as more moderators return to the office, she added.
As the backlogs formed, countries toughened their stance on harmful content.
The EU recently reached a landmark deal requiring the prompt removal of harmful materials online, while Canada is promising to soon table a bill combating online hate, after a previous iteration was shelved amid a federal election.
Adams says the convergence of COVID-19, Trump's rise and the killing of Floyd made publishers more willing to take a stand against problematic content such as hate speech and health misinformation. Legislation, which can vary across countries and is often left up to interpretation, could result in companies having even less tolerance and taking down anything that runs the risk of being seen as problematic, she said.
The stakes are high because letting too much problematic content on a platform can make it unsafe, but removing too much can also interfere with free speech, said Anatoliy Gruzd, a Toronto Metropolitan University professor of information technology management.
"From the user side, that may feel like there's not enough effort to make platforms a welcoming and safe place for everyone, and in part that's because the platforms become so huge, with millions and billions of users at once," he said.
Gruzd doesn't see striking a balance between safety and freedom getting any easier as the policy patchwork evolves, but believes society will move toward considering boundaries and what is acceptable or not to be exposed to.
He said, "Some people will vote with their usage, whether they stop using Facebook or Twitter for certain things, they might decide to go to other platforms with or without too much moderation or they may decide to stop using social media completely."
This report by The Canadian Press was first published May 22, 2022.