AI-based applications are hitting the market rapidly, and another one is gaining ground. Research asking, "Can we partner with artificial intelligence and machine learning tools to build a more equitable workforce?", co-authored by Margrét Bjarnadóttir at the University of Maryland's Robert H. Smith School of Business, opens up that possibility.
Efforts to incorporate AI into the human resources process have, to date, largely failed. In 2018, for example, Amazon.com was forced to abandon an AI recruiting tool it had built after discovering that it discriminated against female job applicants.
In the research, Bjarnadóttir and her co-authors examine the roots of AI biases and offer solutions that could unlock the potential of analytics tools in the human resources space. One recommendation is to create a bias dashboard that breaks a model's performance down by demographic group. Another is a pair of checklists for assessing analytical work: one for internal analytical projects, and another for adopting a vendor's tool.
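The research does not prescribe a particular implementation of such a dashboard. As a rough sketch of the idea, the snippet below computes per-group accuracy and selection rate from labeled predictions; the groups, records, and numbers are invented for illustration:

```python
from collections import defaultdict

# Hypothetical prediction records: (group, model_predicted_hire, actually_hired).
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 0, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

def group_dashboard(rows):
    """Per-group accuracy and selection rate (share predicted 'hire')."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "selected": 0})
    for group, pred, actual in rows:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(pred == actual)
        s["selected"] += pred
    return {
        g: {"accuracy": s["correct"] / s["n"],
            "selection_rate": s["selected"] / s["n"]}
        for g, s in stats.items()
    }

dashboard = group_dashboard(records)
for group, metrics in sorted(dashboard.items()):
    print(group, metrics)
```

A single overall accuracy figure would hide the gap this breakdown exposes: in the toy data, group A is both scored more accurately and selected far more often than group B.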
HR data is typically unbalanced: not all demographic groups are equally represented, and this causes problems when algorithms interact with the data. A simple example of that interaction is that analytical models typically perform best for the majority group, because good performance for that group carries the most weight in overall accuracy, the measure most off-the-shelf algorithms optimize. As a result, even carefully built models won't lead to equal outcomes for different demographic groups.
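The arithmetic behind that weighting effect is simple to show. With invented numbers, assume one group makes up 90% of the data and a model serves it well while serving a 10% minority group poorly:

```python
# Hypothetical figures, for illustration only.
majority_share, minority_share = 0.9, 0.1   # group sizes in the data
majority_acc, minority_acc = 0.95, 0.60     # per-group accuracy

# Overall accuracy is a size-weighted average, so the majority group
# dominates the single number most off-the-shelf algorithms optimize.
overall_acc = majority_share * majority_acc + minority_share * minority_acc
print(f"overall accuracy: {overall_acc:.3f}")
```

Here the headline accuracy of 0.915 looks strong even though the model is only 60% accurate for the minority group, which is why per-group monitoring matters.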
The first step in doing better is to apply a bias-aware analytical process: one that asks the right questions of the data, of modeling decisions, and of vendors, and then monitors not only the statistical performance of the models but, perhaps more importantly, each tool's impact on the different employee groups.
Original Source: PR Newswire