In an interview earlier this month with The Brussels Times, Guy De Pauw, CEO of the company Textgain, described efforts to develop artificial intelligence capable of analyzing hate speech. He explained that current AI programs, such as ChatGPT and Google Translate, refuse to process toxic language online.
“In our commercial projects, we were early adopters of [large language model] technology. But we noticed that for hate-speech detection, we couldn’t really use them,” he said. “You’ve probably had that experience yourself—as soon as you send anything toxic to ChatGPT, it will refuse to handle it.”
He said a team of 12 is working to create a model that can recognize context in bigoted language.
De Pauw, who anticipated that his company’s final product could be worth millions of euros, said the program aims to “identify toxic messages and also find out what exactly they are about, who is being targeted and understand deeper patterns that are a lot more complex.”
Although the Belgian company has spent nearly a decade developing its AI, De Pauw said, “people are finally understanding what the technology can do for them. It was always something we had to explain, but now people know, and that makes a big difference.”
He said that, unlike the developers of other open-source AI models, the company would not release its program, since “if you release something like this open source, then bad actors will use it to start producing hate speech at scale, which is not the intention.”