Transgender, nonbinary and disabled people more likely to view AI negatively, study shows
AI seems to be well on its way to becoming pervasive. You hear rumbles of AI being used, somewhere behind the scenes, at your doctor’s office. You suspect it may have played a role in hiring decisions during your last job search. Sometimes – maybe even often – you use it yourself.
And yet, while AI now influences high-stakes decisions such as what kinds of medical care people receive, who gets hired and what news people see, these decisions are not always made equitably. Research has shown that algorithmic bias often harms marginalized groups. Facial recognition systems often misclassify transgender and nonbinary people, AI used in law enforcement can lead to the unwarranted arrest of Black people at disproportionately high rates, and algorithmic diagnostic systems can prevent disabled people from accessing necessary health care.
These inequalities raise a question: Do gender and racial minorities and disabled people have more negative attitudes toward AI than the general U.S. population?
I’m a social computing scholar who studies how marginalized people and communities use social technologies. In a new study, my colleagues Samuel Reiji Mayworm,