Algorithmic bias: how to prevent artificial intelligence from acquiring people’s biases
From smartphones to search engines and virtual assistants, a great deal of software and many devices depend on Artificial Intelligence, a tool with great potential for businesses and society but one that, from an ethical point of view, has raised a debate: can Artificial Intelligence reproduce all aspects of the human mind, down to its unconscious biases? The answer is yes.
Virtually since AI entered everyday life, there have been cases of systems and devices whose behavior could fairly be described as biased and even discriminatory. One of the most recognizable episodes, featured in the Netflix documentary “Coded Bias,” occurred when African-American MIT researcher Joy Buolamwini had to don a white mask for a facial recognition system to detect her face. The designers of that solution had trained the tool mainly on white faces.
Algorithmic bias has thus become one of the main challenges for companies producing AI solutions: 65% of executives say they are aware of these discriminatory tendencies in Artificial Intelligence, according to a Capgemini study.
The main short-term challenge for these companies is to develop algorithms and systems that do not inherit the prejudiced and biased opinions of their developers. That is a difficult goal unless developers are first shown that they hold biases which quickly and directly influence their work. In other words, AI is not neutral: it learns from all the data and information provided to it, which today generates mistrust among the public given the lack of transparency that frequently surrounds it.
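The claim that AI simply learns whatever its training data contains can be made concrete: before training, one can inspect how a sensitive attribute is distributed in the dataset. The following is a minimal illustrative sketch (the column name, data, and threshold are assumptions for the example, not taken from the article):

```python
from collections import Counter

def representation_report(records, attribute, min_share=0.2):
    """Share of each group for a sensitive attribute, flagging groups
    that fall below a chosen minimum share.
    records: list of dicts; attribute: name of the sensitive field."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        # (share of dataset, under-represented relative to min_share?)
        report[group] = (share, share < min_share)
    return report

# Illustrative data: a hiring dataset skewed toward one group.
applicants = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
print(representation_report(applicants, "gender", min_share=0.3))
```

A report like this only surfaces the imbalance; deciding how to correct it (collecting more data, reweighting, resampling) is a separate design choice.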
To address this situation, Aiwin, a cloud service for corporate video games, analyzed the factors that lead AI to adopt discriminatory behaviors, identifying five possible solutions:
1. Have diverse teams. One of the main reasons AI fails to recognize the faces of non-white people, as in Joy Buolamwini’s case, is that the teams that build algorithms are mostly white men who unconsciously program from the settings they know, without realizing whom they are leaving out. To build algorithms that avoid these errors, a diverse and multidisciplinary team is needed: one that knows how to handle and program the technology while keeping the whole of society in view. The key is for developers to remain aware that the technology they build will not be used only by people who are physically and intellectually similar to them. What seems obvious is not always easy to keep in mind.

2. Use unbiased data to train the AI. Another high-profile example of AI discrimination occurred at Amazon, which a few years ago had to withdraw recruitment software because it discriminated against women applying for technical vacancies. The system had simply learned from the profiles of recent job applicants, who were predominantly male. It is therefore essential to maintain an up-to-date, representative and unbiased database for the Artificial Intelligence to draw on and learn from.

3. Review algorithms periodically. Just as an up-to-date, unbiased database is essential, it is also very important to monitor and analyze the AI’s activity regularly to verify that it is not falling into biases against specific individuals or groups. In other words, it is better for people to keep control over the algorithms than for the technology to regulate itself.

4. Improve understanding of what Artificial Intelligence entails. Leading AI experts consider that the general public does not trust this technology enough, not only because of the effect bias can have on it but also out of ignorance. Companies themselves and their employees are often not truly aware of its benefits and risks. It is essential to reverse this situation: with a better grasp of how Artificial Intelligence works, senior executives can better understand its benefits, its moral implications, and how to apply it effectively and productively in daily work.

5. Train staff on unconscious biases. Designers, developers and programmers encode the reality they perceive when they create Artificial Intelligence solutions, and if their perspective is heavily shaped by their own unconscious biases, the tool will be biased too. This makes it increasingly necessary to train these workers and raise their awareness of prejudice: only if they can identify and overcome their unconscious biases will they be able to develop trustworthy AI that discriminates against no one.
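The periodic-review idea can be illustrated with a simple audit over a model’s decisions: compare selection rates across groups and compute the ratio of the lowest to the highest. This sketch uses the “four-fifths rule” from US hiring guidelines as a common warning threshold; the function names and data are illustrative assumptions, not an audit method described in the article:

```python
def selection_rates(outcomes, groups):
    """Selection rate (share of positive decisions) per group.
    outcomes: list of 0/1 model decisions; groups: parallel list of labels."""
    rates = {}
    for g in set(groups):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    return rates

def disparate_impact_ratio(outcomes, groups):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 are commonly treated as a warning sign
    (the 'four-fifths rule')."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative audit: a model that selects 60% of group A but only 30% of group B.
decisions = [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7
group = ["A"] * 10 + ["B"] * 10
print(round(disparate_impact_ratio(decisions, group), 2))  # prints 0.5
```

Run on a regular schedule against fresh decisions, a check like this is one way to keep people, rather than the technology itself, in control of the algorithm’s behavior.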
In this regard, Sergio Jiménez, CEO of Aiwin, affirms that “developing digital products and services free of prejudice has become a priority for technology and digital companies, and training and awareness on this issue are urgent to avoid situations of discrimination. To help on this path, we at Aiwin have developed SHE, an unconscious-bias awareness and training solution in video game format, with which the employees who build these digital products and services can become aware of their own unconscious biases and how they affect their daily work.”