The New Zealand Herald

What will Artificial Intelligence make of your resume?

Melika Soleimani, Ali Intezari, David J. Pauleen and Jim Arrowsmith are wary of bias

- Melika Soleimani is a senior data analyst, Massey University; Ali Intezari is a senior lecturer in Management, The University of Queensland; David J. Pauleen is a professor in Technology Management, Massey University; Jim Arrowsmith is a professor in the School of Management, Massey University.

The artificial intelligence (AI) revolution has begun, spreading to almost every facet of people’s professional and personal lives — including job recruitment. While artists fear copyright breaches or simply being replaced, businesses and managers are becoming increasingly aware of the possibilities of greater efficiencies in areas as diverse as supply chain management, customer service, product development and human resources (HR) management.

Soon all business areas and operations will be under pressure to adopt AI in some form or another. But the very nature of AI — and the data behind its processes and outputs — mean human biases are being embedded in the technology.

Our research looked at the use of AI in recruitment and hiring — a field that has already widely adopted AI to automate the screening of resumes and rate video interviews by job applicants.

AI in recruitment promises greater objectivity and efficiency during the hiring process by eliminating human biases and enhancing fairness and consistency in decision-making.

But our research shows AI can subtly — and at times overtly — heighten biases. And the involvement of HR professionals may worsen rather than alleviate these effects. This challenges our belief that human oversight can contain and moderate AI.

Magnifying human bias

Although one of the reasons for using AI in recruitment is that it is meant to be more objective and consistent, multiple studies have found the technology is very likely to be biased. This is because AI learns from the datasets used to train it. If the data is flawed, the AI will be too. Biases in data can be made worse by the human-created algorithms supporting AI, which often contain human biases in their design.

In interviews with 22 HR professionals, we identified two common biases in hiring: “stereotype bias” and “similar-to-me bias”.

Stereotype bias occurs when decisions are influenced by stereotypes about certain groups, such as preferring candidates of the same gender, leading to gender inequality.

“Similar-to-me” bias happens when recruiters favour candidates whose backgrounds or interests are similar to their own.

These biases, which can significantly affect the fairness of the hiring process, are embedded in the historical hiring data used to train the AI systems. This leads to biased AI.

So, if past hiring practices favoured certain demographics, the AI will continue to do so. Mitigating these biases is challenging because algorithms can infer hidden personal information from other, correlated data.

For example, in countries with different lengths of military service for men and women, an AI might deduce gender based on service duration.
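This proxy effect is easy to demonstrate. Below is a minimal, hypothetical sketch in Python (assuming numpy and scikit-learn are available). Gender is withheld from the model, yet a classifier trained on biased historical hiring decisions reproduces the gender gap through the correlated service-duration feature. All data, numbers and variable names here are invented for illustration; this is not the system our study examined.

```python
# Hypothetical sketch of "proxy" bias: gender is never given to the model,
# but a correlated feature (military service duration) leaks it.
# All data below is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic population: gender (0 = women, 1 = men) is hidden from the model.
gender = rng.integers(0, 2, n)

# Proxy feature: service duration differs systematically by gender
# (e.g., about 24 months vs 12 months on average), as in the article's example.
service_months = np.where(gender == 1,
                          rng.normal(24, 2, n),
                          rng.normal(12, 2, n))
experience_years = rng.normal(5, 2, n)  # a legitimate, gender-neutral feature

# Biased historical labels: past recruiters favoured men, so "hired"
# depends partly on gender itself, not only on experience.
hired = (0.5 * experience_years + 2.0 * gender + rng.normal(0, 1, n)) > 3.5

# Train on features only -- gender itself is excluded from the inputs.
X = np.column_stack([service_months, experience_years])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The model still reproduces the gender gap via the proxy feature.
pred = model.predict(X)
print("Predicted hire rate, men:  ", pred[gender == 1].mean())
print("Predicted hire rate, women:", pred[gender == 0].mean())
```

As the sketch suggests, simply dropping the protected attribute from the inputs is not enough: the correlated feature carries the signal anyway.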

This persistence of bias underscores the need for careful planning and monitoring to ensure fairness in both human and AI-driven recruitment processes.

Can humans help?

As well as HR professionals, we also interviewed 17 AI developers. We wanted to investigate how an AI recruitment system could be developed that would mitigate rather than exacerbate hiring bias.

Based on the interviews, we developed a model wherein HR professionals and AI programmers would go back and forth, exchanging information and questioning preconceptions as they examined data sets and developed algorithms.

However, our findings reveal that the difficulty in implementing such a model lies in the educational, professional and demographic differences that exist between HR professionals and AI developers.

These differences impede effective communication, co-operation and even the ability to understand each other. While HR professionals are traditionally trained in people management and organisational behaviour, AI developers are skilled in data science and technology.

These different backgrounds can lead to misunderstandings and misalignment when working together. This is particularly a problem in smaller countries such as New Zealand, where resources are limited and professional networks are less diverse.

Connecting HR and AI

If companies and the HR profession want to address the issue of bias in AI-based recruitment, changes need to be made.

Firstly, the implementation of a structured training programme for HR professionals focused on information system development and AI is crucial. This training should cover the fundamentals of AI, the identification of biases in AI systems, and strategies for mitigating these biases.

Fostering better collaboration between HR professionals and AI developers is also important. Companies should look to create teams that include both HR and AI specialists; such teams can help bridge the communication gap and better align their efforts.

Moreover, developing culturally relevant datasets is vital for reducing biases in AI systems. HR professionals and AI developers need to work together to ensure the data used in AI-driven recruitment processes are diverse and representative of different demographic groups. This will help create more equitable hiring practices.

Lastly, countries need guidelines and ethical standards for the use of AI in recruitment that can help build trust and ensure fairness. Organisations should implement policies that promote transparency and accountability in AI-driven decision-making processes.

By taking these steps, we can create a more inclusive and fair recruitment system that leverages the strengths of both HR professionals and AI developers.

