by Connor Courtien, RDPFS Intern
Artificial intelligence (AI) brings convenience and efficiency to many organizations by automating well-defined processes, freeing people to spend their time on novel or complex problems that require a more nuanced approach. However, that lack of nuance in automation can come at a cost, introducing bias into the very processes it seeks to make more efficient. As outlined in an article by The Partnership on Employment and Accessible Technology (PEAT), this bias often comes at the expense of inclusivity for people with disabilities, particularly in hiring.

One example is a digital application form that screens out an applicant with a six-month gap in employment because they needed to address medical concerns related to their disability. This person has all the skills relevant to the job, yet this qualified candidate won't move on to the interview phase because of bias built into the automation of the hiring process.

The article also offers a counter-example of how automation – when engineered for inclusion – can help eliminate such biases. Instead of requiring applicants to fill out a form with their employment history and screening based on that information, AI could power a chatbot that simply asks candidates about their skills and experience relevant to the role, then moves all qualified candidates along in the hiring process.

More examples and additional information about bias and inclusion in AI for people with disabilities are provided in the full article, Reduce Bias and Increase Inclusion in Your AI Hiring Technology.