
Hiring is often cited as a prime example of algorithmic bias: a tendency to favour some groups over others that becomes accidentally fixed in an AI system designed to perform a specific task.

There are several well-known cases of this. Perhaps the best known is when Amazon tried to use AI in recruitment. In this case, CVs were used as the data to train, or improve, the AI.

Since most of the CVs were from men, the AI learned to filter out anything associated with women, such as being the president of the women’s chess club or a graduate from a women’s college. Needless to say, Amazon did not end up using the system more widely.
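A toy sketch can show how this happens. The data below is entirely hypothetical, and real systems are far more complex, but the mechanism is the same: if past hiring decisions were skewed, even a naive word-counting model absorbs the skew.

```python
# A toy sketch (hypothetical data) of how a naive model absorbs bias:
# if past hiring favoured one group, word statistics encode that pattern.
from collections import Counter

# Hypothetical training data: (keywords from a CV, was the candidate hired?)
training = [
    ({"python", "chess"}, True),
    ({"python", "leadership"}, True),
    ({"python", "womens", "chess"}, False),
    ({"java", "womens"}, False),
]

def keyword_scores(data):
    hired, rejected = Counter(), Counter()
    for words, label in data:
        (hired if label else rejected).update(words)
    # Score each word by how often it appears in hired minus rejected CVs.
    return {w: hired[w] - rejected[w] for w in hired | rejected}

scores = keyword_scores(training)
print(scores["womens"])  # negative: penalised purely because of skewed data
```

The word "womens" carries no information about job performance, yet it ends up with a negative score simply because it co-occurred with rejections in the historical data.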

Similarly, the practice of filming video interviews and then using an AI to analyse them for a candidate’s suitability is regularly criticised for its potential to produce biased outcomes. Yet proponents of AI in hiring suggest that it makes hiring processes fairer and more transparent by reducing human bias. This raises a question: is AI used in hiring inevitably reproducing bias, or could it actually make hiring fairer?

From a technical perspective, algorithmic bias refers to systematic patterns in an AI system’s outputs that lead to unequal outcomes for different groups. However, rather than seeing algorithmic bias as an error, it can also be seen as a function of society. AI is often based on data drawn from the real world, and these datasets reflect society.

For example, if women of colour are underrepresented in datasets, facial recognition software has a higher error rate when identifying women with darker skin tones. Similarly, for video interviews, there is concern that tone of voice, accent or gender- and race-specific language patterns may influence assessments.

Multiple biases

Another example is that an AI might learn, based on the data, that people called “Mark” do better than people named “Mary”, and thus rank them higher. Existing biases in society are reflected in and amplified through data.

Of course, data is not the only way in which AI-supported hiring might be biased. Designing AI draws on the expertise of a wide range of people: data scientists, experts in machine learning (where an AI system can be trained to improve at what it does), programmers, HR professionals, recruiters, industrial and organisational psychologists and hiring managers. Yet it is often claimed that only a small share of machine learning researchers are women. This raises concerns that the group of people designing these technologies is rather homogeneous.

Machine learning processes can be biased too. For instance, a company that uses data to help companies hire programmers found that a strong predictor for good coding skills was frequenting a particular website. Hypothetically, if you wanted to hire programmers and use such data in machine learning, an AI might then suggest targeting individuals who studied programming at university, have “programmer” in their current job title and frequent that website. While the first two criteria are job requirements, the final one is not required to perform the job and therefore should not be used. As such, the design of AI in hiring technologies requires careful consideration if we are aiming to create algorithms that support inclusion.

Impact assessments and audits that check systematically for discriminatory effects are crucial to ensure that AI in hiring is not perpetuating biases. The findings can then be used to tweak and improve the technology to ensure that such biases do not recur.
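One common check in such audits is to compare selection rates across groups. A minimal sketch, using hypothetical numbers and the widely used "four-fifths rule" (which flags any group whose selection rate falls below 80% of the highest group's rate), might look like this:

```python
# A minimal sketch of an adverse-impact audit on hypothetical data.
# The four-fifths rule flags groups selected at under 80% of the best rate.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Return each flagged group with its impact ratio (rate / best rate).
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit: group -> (candidates shortlisted, candidates screened)
audit = {"group_a": (45, 100), "group_b": (30, 100)}
print(adverse_impact(audit))  # group_b flagged: 0.30 / 0.45 ≈ 0.67
```

An audit like this only detects unequal outcomes; deciding why they occur, and how to fix the system, still requires human judgment.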

Careful consideration

Providers of hiring technologies have developed different tools, such as auditing to check outcomes against protected characteristics, or monitoring language for discrimination by identifying masculine- and feminine-coded words. As such, audits can be a useful tool to evaluate whether hiring technologies produce biased outcomes, and to rectify that.
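The word-monitoring idea can be sketched very simply. The word lists below are illustrative assumptions, not an established lexicon; real tools draw on validated research into gender-coded language.

```python
# A minimal sketch of screening a job advert for gender-coded language.
# The word lists are illustrative assumptions, not an established lexicon.

MASCULINE = {"competitive", "dominant", "rockstar", "aggressive"}
FEMININE = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_coded_words(advert_text):
    # Lowercase and strip common punctuation before matching.
    words = {w.strip(".,;:").lower() for w in advert_text.split()}
    return {
        "masculine": sorted(words & MASCULINE),
        "feminine": sorted(words & FEMININE),
    }

ad = "We want a competitive, collaborative rockstar developer."
print(flag_coded_words(ad))
```

Flagged words can then be reviewed by a human, who decides whether the language genuinely reflects the job or merely narrows the pool of people who apply.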

So is using AI in hiring leading inevitably to discrimination? In my recent research, I showed that if AI is used in a naive way, without implementing safeguards to avoid algorithmic bias, then the technology will repeat and amplify biases that exist in society, and potentially also create new biases that did not exist before.

However, if implemented with a consideration for inclusion in the underlying data, in the designs adopted and in how decisions are taken, AI-supported hiring might in fact be a tool to create more inclusion.

AI-supported hiring does not mean that the final hiring decisions are, or should be, left to algorithms. Such technologies can be used to filter candidates, but the final hiring decision rests with humans. Therefore, hiring can be improved if AI is implemented with attention to diversity and inclusion. But if the final hiring decision is made by a hiring manager who is not aware of how to create an inclusive environment, bias can creep back in.


This article is republished from The Conversation under a Creative Commons license. Read the original article.