Study suggests that AI model selection might introduce bias

The past several years have established that AI and machine learning are no panacea for fair outcomes. Applying algorithmic solutions to social problems can magnify biases against marginalized groups, and undersampling those populations can result in worse predictive accuracy for them. But bias in AI doesn’t arise from datasets alone. Problem formulation, or the way researchers fit tasks to AI techniques, can also contribute. So can other human-led steps throughout the AI deployment pipeline.
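
To make the undersampling point concrete, here is a minimal, hypothetical sketch (synthetic data; the group names, sizes, and decision rules are illustrative assumptions, not taken from the study). A single model trained on a pooled dataset that underrepresents one group can score far worse on that group:

```python
# Hypothetical illustration: undersampling one group in training data can
# degrade that group's predictive accuracy. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
D = 5

w_majority = rng.normal(size=D)   # decision rule for the well-sampled group
w_minority = -w_majority          # a deliberately different rule for the other group

def sample(n, w):
    """Draw n points whose labels follow the group-specific rule w (plus noise)."""
    X = rng.normal(size=(n, D))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training set: the minority group is heavily undersampled (50 vs. 1000 rows).
Xa, ya = sample(1000, w_majority)
Xb, yb = sample(50, w_minority)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate each group on equally sized held-out sets.
for name, w in [("majority", w_majority), ("minority", w_minority)]:
    Xt, yt = sample(1000, w)
    print(name, accuracy_score(yt, model.predict(Xt)))
# Typical output: high accuracy for the majority group and far lower for the
# minority, because the pooled model mostly fits the well-sampled group's rule.
```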
