Last year, Microsoft’s artificial intelligence chatbot ‘Tay’ flooded social media with racist remarks. Tay said, “Hitler is right and I do not like the Jews,” and insisted that a barrier should be built on the border between the United States and Mexico, with the costs paid by Mexico. As the trouble mounted, Microsoft had to shut the service down in less than a day.
The belief that artificial intelligence, unlike humans, is objective and unbiased is no longer valid.
This is because the machine learning algorithms behind artificial intelligence are trained on databases that contain the gender and racial prejudices that exist in our society. Language is a prime example.
The British daily The Guardian reported on a new study showing how artificial intelligence learns, and reproduces, human prejudices from English text. The study, published in the latest issue of the journal Science, warned that the implicit social biases embedded in language can affect the behavior of artificial intelligence.
The researchers used a machine learning technique called ‘word embedding’ to make computers understand human language. Word embedding builds a statistical map of language by tracking how frequently words are used together with other words. It is widely used in web search and machine translation, because a computer understands a word better when it is given the word’s cultural and social context rather than simply its dictionary meaning.
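The idea behind word embedding can be illustrated with a minimal sketch. The tiny corpus below is entirely hypothetical, and real systems use far larger corpora and more sophisticated models, but it shows the core mechanism the article describes: words that appear in similar contexts end up with similar vectors.

```python
import numpy as np
from itertools import combinations

# Hypothetical toy corpus, for illustration only.
corpus = [
    "flowers are pleasant and lovely",
    "music is pleasant and lovely",
    "insects are unpleasant and nasty",
    "weapons are unpleasant and nasty",
]

# Build a vocabulary and a word-word co-occurrence matrix:
# two words "co-occur" if they appear in the same sentence.
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
co = np.zeros((len(vocab), len(vocab)))
for line in corpus:
    for a, b in combinations(line.split(), 2):
        co[index[a], index[b]] += 1
        co[index[b], index[a]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Each word's "embedding" is its row of co-occurrence counts.
# Words used in similar contexts get similar vectors.
sim_flowers_music = cosine(co[index["flowers"]], co[index["music"]])
sim_flowers_insects = cosine(co[index["flowers"]], co[index["insects"]])
print(sim_flowers_music, sim_flowers_insects)
```

Because ‘flowers’ and ‘music’ share more context words (‘pleasant’, ‘lovely’) than ‘flowers’ and ‘insects’ do, the first similarity comes out higher, which is exactly how an embedding absorbs the associations present in its training text.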
Within the universe of the language it had learned, the artificial intelligence associated the word ‘pleasantness’ with flowers and music. Words such as ‘insects’ and ‘weapons’, on the other hand, were perceived as far from pleasantness.
The problem is that the artificial intelligence linked the words ‘woman’ and ‘girl’ strongly to the arts, the humanities, and the home, while ‘man’ and ‘male’ sat closer to mathematics and engineering. Likewise, European-American names were more readily associated with words like ‘gift’ or ‘happy’, while African-American names were linked to negative words.
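Associations like these are typically quantified by comparing a word’s average similarity to two sets of attribute words, in the spirit of the word-embedding association tests the study used. The sketch below uses made-up 2-D vectors purely for illustration; real analyses use pretrained embeddings with hundreds of dimensions.

```python
import numpy as np

# Hypothetical 2-D "embeddings" (invented for illustration; not real data).
vec = {
    "woman":   np.array([0.9, 0.1]),
    "man":     np.array([0.1, 0.9]),
    "art":     np.array([0.8, 0.2]),
    "poetry":  np.array([0.7, 0.3]),
    "math":    np.array([0.2, 0.8]),
    "science": np.array([0.3, 0.7]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attr_a, attr_b):
    # Mean similarity to attribute set A minus mean similarity to set B:
    # positive means the word leans toward A, negative toward B.
    a = np.mean([cosine(vec[word], vec[w]) for w in attr_a])
    b = np.mean([cosine(vec[word], vec[w]) for w in attr_b])
    return a - b

arts, stem = ["art", "poetry"], ["math", "science"]
print(association("woman", arts, stem))  # positive in this toy setup
print(association("man", arts, stem))    # negative in this toy setup
```

With these invented vectors, ‘woman’ scores positive (closer to the arts words) and ‘man’ scores negative (closer to the STEM words), mirroring in miniature the kind of asymmetry the study measured.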
“Artificial intelligence is learning our prejudices,” said study co-author Joanna Bryson, a computer scientist at the University of Bath in England. “Unlike humans, artificial intelligence has no moral concepts, so it may even further reinforce the prejudices it has already learned.”
The study not only scientifically analyzed the risk that the outcomes of artificial intelligence and machine learning may be unfair or ethically wrong; it also shows how particular social prejudices have been used, and have changed, in their historical contexts.
In fact, the researchers found a strong correlation between employment demographics from the US Department of Labor and the language use learned by artificial intelligence.
Principal researcher Arvind Narayanan, a computer scientist at Princeton University, said, “Analyzing language use alone, the association between gender words and occupation words was about 90% consistent with how strongly women are actually represented in those jobs in the real world.”
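The comparison Narayanan describes can be sketched as a simple correlation between two lists of numbers. All figures below are invented for illustration; they are not the study’s data or real labor statistics.

```python
import numpy as np

# Hypothetical numbers (illustrative only, NOT the study's data):
# an embedding-based "female association" score for each occupation word,
# and an assumed share of women in that occupation from labor statistics.
occupations     = ["nurse", "librarian", "engineer", "carpenter"]
embedding_score = np.array([0.80, 0.65, -0.40, -0.70])
pct_women       = np.array([0.88, 0.79, 0.15, 0.04])

# Pearson correlation between language-derived scores and demographics.
r = np.corrcoef(embedding_score, pct_women)[0, 1]
print(round(r, 2))
```

A correlation near 1 would mean, as the study reported, that gender associations latent in language track real-world occupational demographics very closely.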
Artificial intelligence can also be useful in preserving the native languages of ethnic minorities that are disappearing from the planet, and in understanding the sociocultural differences in emotion across languages. In other words, human cultural diversity can be expanded through artificial intelligence. An artificial intelligence that mirrors human beings could broaden the reflexive and philosophical horizons of mankind. That is why the more artificial intelligence comes to resemble who we are, the higher the expectations for the future grow.