 Recognizing and Addressing Bias in Generative Models to Promote Fairness



In research published in 2021, the Center for Information Technology Policy at Princeton University showed that machine learning algorithms can pick up human-like biases from their training data. One eye-catching illustration of this effect is Amazon's AI hiring tool [1]. The tool was trained on resumes Amazon had received over the previous decade and was used to rank new applicants. Because the software industry had been heavily male-dominated over that period, the algorithm learned to associate terms linked to women, such as "women's" in "women's sports teams", with weaker candidates and to devalue resumes containing them. This example demonstrates the need for models that are not just accurate but also fair.
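To make the mechanism concrete, here is a minimal sketch of a WEAT-style association test, in the spirit of the Princeton research: it measures whether a word's embedding sits closer to male or female attribute terms. The 3-dimensional vectors below are made-up toy values for illustration only; a real test would use embeddings learned from a large corpus (e.g., word2vec or GloVe).

```python
import numpy as np

# Hypothetical toy embeddings (illustrative values, not real learned vectors).
embeddings = {
    "engineer":  np.array([0.9, 0.1, 0.3]),
    "developer": np.array([0.8, 0.2, 0.2]),
    "he":        np.array([0.7, 0.1, 0.4]),
    "him":       np.array([0.8, 0.0, 0.3]),
    "she":       np.array([0.1, 0.9, 0.2]),
    "her":       np.array([0.2, 0.8, 0.3]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(word, male_terms, female_terms):
    """Mean similarity to male terms minus mean similarity to female terms.
    A positive score means the word sits closer to the male terms."""
    w = embeddings[word]
    male = np.mean([cosine(w, embeddings[t]) for t in male_terms])
    female = np.mean([cosine(w, embeddings[t]) for t in female_terms])
    return male - female

for word in ["engineer", "developer"]:
    score = gender_association(word, ["he", "him"], ["she", "her"])
    print(f"{word}: gender association = {score:+.3f}")
```

If profession words score strongly positive on such a test, a downstream system that ranks resumes by similarity to "good past hires" will inherit that skew, which is exactly how the bias surfaced in the hiring tool described above.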
