Researchers reduce bias in AI models while preserving or improving accuracy

Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on. For instance, a model that predicts the best treatment option for someone with a chronic disease may be trained using a dataset that contains mostly male patients. That model might make incorrect predictions for female patients when deployed in a hospital.

To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing…
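As a rough illustration of the balancing step described above, the sketch below downsamples every subgroup to the size of the smallest one before training. It is a minimal example, not the researchers' method; the DataFrame, the "sex" column, and the file name are assumptions for illustration.

```python
import pandas as pd

def balance_by_subgroup(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Downsample each subgroup so all subgroups have equal size.

    Illustrative sketch only: the column names and data are hypothetical.
    """
    # Size of the smallest subgroup sets the target count for every group.
    min_size = df[group_col].value_counts().min()
    balanced = (
        df.groupby(group_col, group_keys=False)
          .apply(lambda g: g.sample(n=min_size, random_state=seed))
    )
    return balanced.reset_index(drop=True)

# Hypothetical usage: equalize male and female patients before model training.
# df = pd.read_csv("patients.csv")
# train_df = balance_by_subgroup(df, "sex")
```

Note that this naive approach discards data from the larger subgroups, which is exactly the cost the article points to when it says balancing "often requires removing" examples.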
