A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A U.S. Army soldier accused of creating images depicting children he knew being sexually abused. A software engineer charged with generating hyper-realistic sexually explicit images of children.
Law enforcement agencies across the U.S. are cracking down on a troubling spread of child sexual abuse imagery created through artificial intelligence technology — from manipulated photos of real children to graphic depictions of computer-generated kids. Justice Department officials say they're aggressively going after offenders who exploit AI tools, while states are racing to ensure people generating “deepfakes” and other harmful imagery of kids can be prosecuted under their laws.
“We’ve got to signal early and often that it is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who leads the Justice Department's Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you’re sitting there thinking otherwise, you fundamentally are wrong. And it’s only a matter of time before somebody holds you accountable.”
The Justice Department says existing federal laws clearly apply to such content, and recently brought what’s believed to be the first federal case involving purely AI-generated imagery — meaning the children depicted are not real but virtual. In another case, federal authorities in August arrested a U.S. soldier stationed in Alaska accused of running innocent pictures of real children he knew through an AI chatbot to make the images sexually explicit.
Trying to catch up to technology
The prosecutions come as child advocates are urgently working to curb the misuse of technology to prevent a flood of disturbing images officials fear could make it harder to rescue real victims. Law enforcement officials worry investigators will waste time and resources trying to identify and track down exploited children who don’t really exist.
Lawmakers, meanwhile, are passing a flurry of legislation to ensure local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of kids. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse imagery, according to a review by The National Center for Missing & Exploited Children.
“We’re playing catch-up as law enforcement to a technology that, frankly, is moving far faster than we are,” said Ventura County, California District Attorney Erik Nasarenko.
Nasarenko pushed legislation, signed last month by Gov. Gavin Newsom, that makes clear AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California's law had required prosecutors to prove the imagery depicted a real child.
AI-generated child sexual abuse images can be used to groom children, law enforcement officials say. And even if they aren’t physically abused, kids can be deeply impacted when their image is morphed to appear sexually explicit.
“I felt like a part of me had been taken away. Even though I was not physically violated,” said 17-year-old Kaylin Hayman, who starred on the Disney Channel show “Just Roll with It” and helped push the California bill after she became a victim of “deepfake” imagery.
Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.
Open-source AI models that users can download to their computers are known to be favoured by offenders, who can further train or modify the tools to churn out explicit depictions of children, experts say. Abusers trade tips in dark web communities about how to manipulate AI tools to create such content, officials say.
A report last year by the Stanford Internet Observatory found that a research dataset that was the source for leading AI image-makers such as Stable Diffusion contained links to sexually explicit images of kids, contributing to the ease with which some tools have been able to produce harmful imagery. The dataset was taken down, and researchers later said they deleted more than 2,000 weblinks to suspected child sexual abuse imagery from it.
Top technology companies, including Google, OpenAI and Stability AI, have agreed to work with anti-child sexual abuse organization Thorn to combat the spread of child sexual abuse images.
But experts say more should have been done at the outset to prevent misuse before the technology became widely available. And steps companies are taking now to make it harder to abuse future versions of AI tools “will do little to prevent” offenders from running older versions of models on their computer “without detection,” a Justice Department prosecutor noted in recent court papers.
“Time was not spent on making the products safe, as opposed to efficient, and it's very hard to do after the fact — as we’ve seen,” said David Thiel, the Stanford Internet Observatory's chief technologist.
AI images get more realistic
The National Center for Missing & Exploited Children's CyberTipline last year received about 4,700 reports of content involving AI technology — a small fraction of the more than 36 million total reports of suspected child sexual exploitation. By October of this year, the group was fielding about 450 reports per month of AI-involved content, said Yiota Souras, the group’s chief legal officer.
Those numbers may be an undercount, however, as the images are so realistic it's often difficult to tell whether they were AI-generated, experts say.
“Investigators are spending hours just trying to determine if an image actually depicts a real minor or if it’s AI-generated,” said Rikole Kelly, deputy Ventura County district attorney, who helped write the California bill. “It used to be that there were some really clear indicators ... with the advances in AI technology, that’s just not the case anymore.”
Justice Department officials say they already have the tools under federal law to go after offenders for such imagery.
The U.S. Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” That law, which the Justice Department says has been used in the past to charge cartoon imagery of child sexual abuse, specifically notes there's no requirement “that the minor depicted actually exist.”
The Justice Department brought that charge in May against a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct; he was caught after he sent some to a 15-year-old boy through a direct message on Instagram, authorities say. The man's lawyer, who is pushing to dismiss the charges on First Amendment grounds, declined further comment on the allegations in an email to the AP.
A spokesperson for Stability AI said that man is accused of using an earlier version of the tool that was released by another company, Runway ML. Stability AI says that it has “invested in proactive features to prevent the misuse of AI for the production of harmful content” since taking over the exclusive development of the models. A spokesperson for Runway ML didn't immediately respond to a request for comment from the AP.
In cases involving “deepfakes,” when a real child's photo has been digitally altered to make them sexually explicit, the Justice Department is bringing charges under the federal “child pornography" law. In one case, a North Carolina child psychiatrist who used an AI application to digitally “undress” girls posing on the first day of school in a decades-old photo shared on Facebook was convicted of federal charges last year.
“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “This is not going to be a low priority that we ignore because there’s not an actual child involved."