TORONTO -- Can words posted to social media suggest where a genocide might happen?

Christopher Tuckwood thinks so.

"If we look historically at any case of genocide, mass atrocities, a lot of other collective human rights abuses, a lot of it begins with language – language that's used to identify certain groups, polarize people, dehumanize groups and really start moving that verbal or social violence toward physical violence," he said Wednesday on CTV's Your Morning.

Tuckwood is the executive director of The Sentinel Project, a Canadian non-profit organization that tries to harness new technologies to prevent atrocities such as genocides.

For one of its projects, the organization works with collaborators around the world to compile lists of keywords that can signal hate speech. It then runs software that scans popular online platforms for those terms.
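Tuckwood did not describe the software itself, but a stripped-down version of that kind of keyword monitoring might look like the sketch below. The keyword list, sample posts and flag_posts function are invented for illustration and are not The Sentinel Project's actual data or code.

```python
# Illustrative sketch only: a minimal keyword scan over a batch of posts.
# The keywords and posts below are placeholders, not The Sentinel
# Project's actual monitoring lists or software.

KEYWORDS = {"placeholder term a", "placeholder term b"}

def flag_posts(posts, keywords=KEYWORDS):
    """Return the posts whose text contains any monitored keyword (case-insensitive)."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if any(keyword in text for keyword in keywords):
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    sample = [
        {"id": 1, "text": "Community meeting tonight at the school."},
        {"id": 2, "text": "They keep repeating placeholder term b about our neighbours."},
    ]
    for post in flag_posts(sample):
        print(f"Flagged post {post['id']}: {post['text']}")
```

In practice, the article indicates the keyword lists come from collaborators in the regions being monitored, which suggests they would be maintained in local languages and refined over time rather than fixed as in this toy example.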

While hate speech is one concern, misinformation is another – and Tuckwood said the two often go hand-in-hand.

He saw this firsthand in Kenya, shortly after severe violence between ethnic groups in the Tana River District left more than 100 people dead. When he asked what had led to the clashes, Tuckwood said he heard that much of the conflict was rooted in rumours about child abductions, poisoned water supplies and rival communities planning attacks.

"Most people had no way of knowing if these things were true or not," he said.

"Very clearly, the information deficit in the area and the rumours that people were hearing were very significant."

In response, The Sentinel Project created Una Hakika – a text message-based service that essentially acts as a fact-checker.

"The idea is to counter that spread of false information [and] give people access to better information that enables them to make better decisions about how to relate to other communities," Tuckwood said.