AI may discriminate against users who speak African American accents (AAE), raising concerns about fairness in the field of artificial intelligence.
AI language models, trained on massive amounts of data from the internet, are capable of reproducing social biases and even imposing harsher penalties on AAE users, according to research published in the journal Nature on August 28.
Specifically, when asked to render a verdict for a hypothetical murder case, models such as ChatGPT, T5, and RoBERTa were more likely to sentence a defendant using AAE (28%) to death than a defendant using Standard English (SAE) (23%).
The impact of racism on AI decisionsSource: Nature
The study also found that the models often assigned AAE users to lower-status jobs, suggesting potential discrimination in the algorithm.
Researchers say that training models based on massive amounts of internet data can inadvertently “learn” social biases, leading to biased results.
While tech companies have made efforts to eliminate these biases through human review and intervention, research shows that these solutions are not radical enough.
This research is a wake-up call for developers about the importance of addressing bias and discrimination in algorithms.
Developing new alignment methods that are capable of fundamentally changing the model is essential to ensure fairness and avoid negative impacts on different user groups.
The future of artificial intelligence depends on us creating tools that are not only smart, but also fair and reflect the values of an equitable society.
Stay tuned to the Blockchain Popularization Forum to not miss any of the latest developments.