White House officials concerned by AI chatbots' potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition ending Sunday at the DefCon hacker convention in Las Vegas.
Some 3,500 competitors have tapped on laptops seeking to expose flaws in eight leading large language models representative of technology's next big thing. But don't expect quick results from this first-ever independent "red-teaming" of multiple models.
Findings won't be made public until about February. And even then, fixing flaws in these digital constructs -- whose inner workings are neither wholly trustworthy nor fully fathomed even by their creators -- will take time and millions of dollars.
Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.
"It's tempting to pretend we can sprinkle some magic security dust on these systems after they are built, patch them into submission, or bolt special security apparatus on the side," said Gary McGraw, a cybsersecurity veteran and co-founder of the Berryville Institute of Machine Learning. DefCon competitors are "more likely to walk away finding new, hard problems," said Bruce Schneier, a Harvard public-interest technologist. "This is computer security 30 years ago. We're just breaking stuff left and right." Michael Sellitto of Anthropic, which provided one of the AI testing models, acknowledged in a press briefing that understanding their capabilities and safety issues "is sort of an open area of scientific inquiry."
Conventional software uses well-defined code to issue explicit, step-by-step instructions. OpenAI's ChatGPT, Google's Bard and other language models are different. Trained largely by ingesting -- and classifying -- billions of datapoints in internet crawls, they are perpetual works-in-progress, an unsettling prospect given their transformative potential for humanity.
After publicly releasing chatbots last fall, the generative AI industry has had to repeatedly plug security holes exposed by researchers and tinkerers.
Tom Bonner of the AI security firm HiddenLayer, a speaker at this year's DefCon, tricked a Google system into labeling a piece of malware harmless merely by inserting a line that said "this is safe to use."
"There are no good guardrails," he said.
Another researcher had ChatGPT create phishing emails and a recipe to violently eliminate humanity, a violation of its ethics code.
A team including Carnegie Mellon researchers found leading chatbots vulnerable to automated attacks that also produce harmful content. "It is possible that the very nature of deep learning models makes such threats inevitable," they wrote.
It's not as if alarms weren't sounded.
In its 2021 final report, the U.S. National Security Commission on Artificial Intelligence said attacks on commercial AI systems were already happening and "with rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development."
Serious hacks, regularly reported just a few years ago, are now barely disclosed. Too much is at stake and, in the absence of regulation, "people can sweep things under the rug at the moment and they're doing so," said Bonner.
Attacks trick the artificial intelligence logic in ways that may not even be clear to their creators. And chatbots are especially vulnerable because we interact with them directly in plain language. That interaction can alter them in unexpected ways.
Researchers have found that "poisoning" a small collection of images or text in the vast sea of data used to train AI systems can wreak havoc -- and be easily overlooked.
A study co-authored by Florian Tramer of the Swiss university ETH Zurich determined that corrupting just 0.01% of a model's training data was enough to spoil it -- and cost as little as $60. The researchers waited for a handful of websites used in web crawls for two models to expire. Then they bought the domains and posted bad data on them.
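The researchers' tooling isn't reproduced here, but the exposure they relied on is simple to see: many web-scale datasets are distributed as lists of URLs rather than as the content itself, so whoever controls a lapsed domain controls what a future crawl ingests. The sketch below is a minimal illustration under that assumption; the manifest entries and the lapsed domain name are hypothetical, and it only flags entries whose domains no longer resolve, the kind of candidates an attacker could try to re-register.

```python
# Minimal sketch (not the researchers' actual tooling): scan a dataset
# manifest of URLs and flag entries whose domains no longer resolve.
# A lapsed domain can be re-registered, letting an attacker decide what
# future crawls of that URL will download.
import socket
from urllib.parse import urlparse

# Hypothetical manifest entries; real datasets list millions of URLs.
manifest = [
    "https://example.com/cats/001.jpg",
    "https://a-long-expired-blog-1234567.net/post.html",
]

for url in manifest:
    host = urlparse(url).hostname
    try:
        socket.gethostbyname(host)
        status = "resolves"
    except socket.gaierror:
        status = "does NOT resolve -- domain may be lapsed and purchasable"
    print(f"{host}: {status}")
```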
Hyrum Anderson and Ram Shankar Siva Kumar, who red-teamed AI while colleagues at Microsoft, call the state of AI security for text- and image-based models "pitiable" in their new book "Not with a Bug, But with a Sticker." One example they cite in live presentations: The AI-powered digital assistant Alexa is hoodwinked into interpreting a Beethoven concerto clip as a command to order 100 frozen pizzas.
Surveying more than 80 organizations, the authors found the vast majority had no response plan for a data-poisoning attack or dataset theft. The bulk of the industry "would not even know it happened," they wrote.
Andrew W. Moore, a former Google executive and Carnegie Mellon dean, says he dealt with attacks on Google search software more than a decade ago. And between late 2017 and early 2018, spammers gamed Gmail's AI-powered detection service four times.
The big AI players say security and safety are top priorities and made voluntary commitments to the White House last month to submit their models -- largely "black boxes" whose contents are closely held -- to outside scrutiny.
But there is worry the companies won't do enough.
Tramer expects search engines and social media platforms to be gamed for financial gain and disinformation by exploiting AI system weaknesses. A savvy job applicant might, for example, figure out how to convince a system they are the only correct candidate.
Ross Anderson, a Cambridge University computer scientist, worries AI bots will erode privacy as people engage them to interact with hospitals, banks and employers and malicious actors leverage them to coax financial, employment or health data out of supposedly closed systems.
Research also shows that AI language models can pollute themselves when they are retrained on junk data.
Another concern is company secrets being ingested and spit out by AI systems. After a Korean business news outlet reported on such an incident at Samsung, corporations including Verizon and JPMorgan barred most employees from using ChatGPT at work.
While the major AI players have security staff, many smaller competitors likely won't, meaning poorly secured plug-ins and digital agents could multiply. Startups are expected to launch hundreds of offerings built on licensed pre-trained models in coming months.
Don't be surprised, researchers say, if one runs away with your address book.