Five basic principles for ethical data management in HR departments
The use of artificial intelligence (AI) in human resource management is seen as a further step toward streamlining processes, maximizing performance and, ultimately, exceeding business expectations.
BY RRHHDigital, 3:30 p.m. – February 17, 2021
Over the past 20 years, the capacity of users and organizations to collect, store and process data has grown significantly. New tools help automate processes, uncover things that could not be seen before, recognize patterns, and predict what is likely to happen. The use of artificial intelligence (AI) in human resource management has therefore been seen as a further step to streamline processes, maximize performance and, ultimately, exceed business expectations. For this reason, ADP, a world leader in human capital management (HCM) technology, identifies five principles to take into account when defining a solid strategy for the ethical management of data and artificial intelligence in human resources processes.
“The ethical use of data and algorithms means working to do the right thing in the design, functionality and use of data in artificial intelligence. It assesses how data is used and what it is used for, considering who has and should have access to it, and anticipating how the data could be misused. It means thinking about what data should and shouldn’t be connected to other data, and how to store, move and use it safely. Ethical-use considerations include privacy, bias, accessibility, personally identifiable information, encryption, and legal requirements and restrictions,” says Bárbara Gómez, Director of BPI & ADP Service Excellence for Southern Europe.
Data-driven tools and AI are changing organizations and the way we work, but it is important not to draw the wrong conclusions from the data, amplify biases, or trust the opinions or predictions of AI without knowing in depth what they are based on. Ethics is about recognizing competing interests, considering what is right, and asking questions such as: What matters? What is needed? What is fair? What could go wrong? Should it be done at all? To answer these questions, ADP has defined a series of parameters to take into account when managing data.
5 common principles for the ethical use of data and AI
Transparency. This includes disclosing what data is collected, what decisions are made using AI, and whether the user is dealing with a robot or a human. It also means being able to explain how algorithms work and what their results are based on, so that the information they provide can be evaluated against the problems to be solved. Transparency also covers the methodology a company puts in place to inform its workers what data it holds about them and how it is used; in this process, users must be given the option to correct or delete that information.
Bias. AI doesn’t just offer information; sometimes it offers opinions. Businesses need to think about how they use these tools and the information they provide. Because the data comes from and relates to humans, it is essential to look for bias in the data collected, in the rules applied and in the questions asked. For example, a company that wants to increase diversity in hiring should not rely solely on tools that analyze the workers who have succeeded in the organization in the past: that information alone is likely to reproduce the same profiles rather than greater diversity. While there is no way to completely eliminate bias in tools created by and about people, it is necessary to understand how tools are biased so that the bias can be reduced, managed and corrected for in decision making.
Precision. The data used in AI must be current and accurate, and suitable methodologies are needed to correct it when it is not. Data must also be handled, cleaned, classified, connected and shared with care to maintain its accuracy; taking data out of its context can make it misleading or false. Accuracy depends, on the one hand, on the veracity of the data and, on the other, on its meaning and usefulness for the intended purposes.
Confidentiality.
Over the years, new laws have been established that recognize individuals’ right to privacy (name, image, financial and medical records, personal relationships, homes and property), although how to balance privacy with the need to use personal information remains unresolved. Policymakers have become more comfortable allowing anonymized data to be used more widely than data from which individuals can easily be identified, but as more data is collected and connected, new questions arise about how to maintain that anonymity. Another privacy issue is information security: what users need to know about who holds information about them and how it is used.
Responsibility. This is not just about complying with global laws and regulations. Accountability relates to the accuracy and integrity of data sources, and to understanding and assessing the risks and possible consequences of developing and using data and AI, to ensure that new tools and technologies are created in an ethical manner.
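To make the bias principle above concrete: one widely used way to quantify bias in hiring outcomes is the "adverse impact ratio," comparing each group's selection rate to that of the most-selected group, with the four-fifths (80%) threshold drawn from US EEOC guidance. The sketch below is purely illustrative and not part of ADP's methodology; the function names and applicant figures are hypothetical.

```python
# Illustrative sketch of an adverse-impact check on hiring data.
# The 0.8 threshold reflects the "four-fifths rule" from US EEOC
# guidance; all names and numbers here are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (hired, applicants)."""
    return {g: hired / applicants for g, (hired, applicants) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold`
    times the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        g: {
            "rate": round(r, 3),
            "ratio": round(r / best, 3),
            "flagged": r / best < threshold,
        }
        for g, r in rates.items()
    }

# Hypothetical applicant data: group -> (hired, total applicants)
data = {"group_a": (45, 100), "group_b": (30, 100)}
print(adverse_impact(data))
```

A check like this only surfaces disparities in outcomes; as the article notes, deciding why the disparity exists and how to correct for it still requires human judgment.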
As organizations continue to develop their own internal ethical practices and countries continue to establish new, more specific legal requirements, more precise legal standards and foundations for the ethical use of data and AI can be defined. Gómez adds: “The way and amount of information that flows through new tools like AI is a challenge that all organizations must face. Data ethics involves asking tough questions about the potential risks and consequences, both for the people the data represents and for the organizations that use it.”