Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their “thinking.”

“Computers are going to become increasingly important parts of our lives, if they aren’t already, and the automation is just going to improve over time, so it’s increasingly important to know why these complicated systems are making the decisions that they are,” Sameer Singh, assistant professor of computer science at the University of California, Irvine, told CTV’s Your Morning on Tuesday.

Singh explained that, in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected.

“Sometimes it’s a good thing, it’s doing something much smarter than we realize,” he said. “But sometimes it’s picking up on things that it shouldn’t.”

Such was the case with the Microsoft AI chatbot, Tay, which became racist in less than a day. Another high-profile incident occurred in 2015, when Google’s photo app mistakenly labelled a black couple as gorillas.

Singh says incidents like that can happen because the data AI learns from comes from humans: either decisions humans made in the past or socioeconomic structures that appear in the data.

“When machine learning models use that data, they tend to inherit those biases,” said Singh.

“In fact, it can get much worse: if the AI agents are part of a loop where they’re making decisions that feed into the future data, the biases get reinforced,” he added.
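As a rough illustration of the feedback loop Singh describes, consider the toy simulation below. The loan scenario, group labels and threshold rule are illustrative assumptions, not details from the interview; the sketch only shows how a modest disparity in historical data can harden once a model’s own decisions become its future training data.

```python
import random

random.seed(0)

APPROVAL_THRESHOLD = 0.5

# Historically biased records: group "B" was approved less often,
# even though (in this toy world) both groups are equally creditworthy.
history = [("A", True)] * 60 + [("A", False)] * 40 + \
          [("B", True)] * 40 + [("B", False)] * 60

def approval_rate(data, group):
    outcomes = [approved for g, approved in data if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

for round_no in range(1, 4):
    rates = {g: approval_rate(history, g) for g in ("A", "B")}
    print(f"round {round_no}: learned approval rates {rates}")

    # A naive "model" that approves an applicant whenever their group's
    # historical approval rate clears the threshold.
    decisions = []
    for _ in range(200):
        group = random.choice(["A", "B"])
        approved = rates[group] >= APPROVAL_THRESHOLD
        decisions.append((group, approved))

    # The feedback loop: the model's own decisions become the next
    # round's "historical" data.
    history = decisions
```

After one pass through the loop, the 60/40 disparity in the seed data becomes a 100/0 split and stays there: the reinforcement Singh warns about, in miniature.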

Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn’t pick up any gender or racial biases that humans have.

However, Google’s research director Peter Norvig cast doubt on the concept of explainable AI.

“You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation,” he said at an event in June in Sydney, Australia.

“So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it’s your job to generate an explanation.”

Norvig suggests looking for patterns in the decisions themselves, rather than the inner workings behind them.
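One way to read Norvig’s suggestion is as black-box auditing: examine the inputs a model receives and the decisions it produces, rather than its internals. The sketch below is a hypothetical example of that idea; the applicant log and field names are invented for illustration, and the audit simply computes approval rates per group straight from a decision log.

```python
from collections import defaultdict

def audit_decisions(decisions):
    """Black-box audit: compare outcomes across groups using only
    the model's inputs and decisions, not its inner workings."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for applicant, approved in decisions:
        group = applicant["group"]
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical log of (applicant, decision) pairs from a deployed model.
log = [
    ({"group": "A", "income": 55_000}, True),
    ({"group": "A", "income": 40_000}, True),
    ({"group": "A", "income": 32_000}, False),
    ({"group": "B", "income": 58_000}, True),
    ({"group": "B", "income": 41_000}, False),
    ({"group": "B", "income": 35_000}, False),
]

rates = audit_decisions(log)
print("approval rate by group:", rates)
print("disparity:", max(rates.values()) - min(rates.values()))
```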

But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions such as approving loan applications.

“It’s important to know what details they’re using. Not just if they’re using your race column or your gender column, but are they using proxy signals like your location, which we know could be an indicator of race or other problematic attributes,” explained Singh.
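A simple proxy check along the lines Singh describes would ask whether a remaining feature, such as location, effectively reveals the attribute that was dropped. The records and field names below are hypothetical; the sketch measures how concentrated each postcode is in a single group.

```python
from collections import Counter, defaultdict

# Hypothetical applicant records: the "race" column is excluded from the
# model's inputs, but "postcode" is kept.
records = [
    {"postcode": "90001", "race": "group_1"},
    {"postcode": "90001", "race": "group_1"},
    {"postcode": "90001", "race": "group_2"},
    {"postcode": "90210", "race": "group_2"},
    {"postcode": "90210", "race": "group_2"},
    {"postcode": "90210", "race": "group_2"},
]

# For each postcode, how dominated is it by a single group?
by_postcode = defaultdict(Counter)
for record in records:
    by_postcode[record["postcode"]][record["race"]] += 1

for postcode, counts in by_postcode.items():
    top_share = counts.most_common(1)[0][1] / sum(counts.values())
    print(f"{postcode}: most common group covers {top_share:.0%} of applicants")

# If a postcode is dominated by one group, a model that uses postcode can
# effectively still use race, even with the race column removed.
```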

Over the last year, there have been multiple efforts to better explain the rationale behind AI decisions.

Currently, the Defense Advanced Research Projects Agency (DARPA) is funding 13 different research groups, which are pursuing a range of approaches to making AI more explainable.