Study provides evidence of AI’s alarming dialect prejudice
University World News article by Nathan Greenfield 

While large language models (LLMs) like GPT-4 have been trained to avoid answers that overtly stereotype on the basis of race, a new study shows that they "covertly" stereotype African Americans who speak the dialect prevalent in cities such as New York, Detroit, Washington DC and Los Angeles.


In "AI generates covertly racist decisions about people based on their dialect", published in Nature (August 2024), a team of three researchers working with Dr Valentin Hofmann at the Allen Institute for AI in Seattle shows how AI's (learned) prejudice against African-American English (AAE) can have harmful and dangerous consequences.

Aligning AI with human values needs a democratic approach
University World News article by Nathan Greenfield

In the 35 years since Craig Kaplan received his doctorate from Carnegie Mellon University in Pittsburgh, Pennsylvania, computing has progressed from mainframes and early personal computers like Apple's Macintosh 128K to high-end home computers with memories a million times larger and central processing units (CPUs) that are several orders of magnitude faster.


This increase in memory and processing speed has allowed the creation of artificial intelligence (AI) systems such as ChatGPT and Microsoft's Bing Chat. These and other large language models (LLMs) were trained on three times more data (for example, internet postings, books, articles, legal documents, scientific papers) than is contained in the Library of Congress in Washington, DC.
