By Victoria Mortimer

Biases and Stereotypes in Generative AI

The tech race for generative AI officially began in 2022, when OpenAI launched ChatGPT, a conversational chatbot based on a large language model (LLM).


According to an article published in The Verge, ChatGPT might be the fastest-growing consumer internet app of all time, reaching an estimated 100 million monthly users in just two months. However, recent articles, particularly one published in Rest of World, suggest that "generative AI systems have tendencies towards bias, stereotypes, and reductionism" when it comes to portraying diverse identities.


Bias occurs in most algorithms and AI systems; these technologies are also prone to "hallucinations," meaning they generate false information.


A recent analysis by Bloomberg of more than 5,000 AI-generated images revealed that images associated with higher-paying jobs featured people with lighter skin tones, and that results for more professional roles were male-dominated.


Rest of World, a tech-focused media outlet covering technology’s impact in Latin American, African, and Asian societies, analyzed 3,000 images generated by Midjourney, an AI tool that generates images based on text prompts. Some of the results they obtained include:



[Images from the analysis. Credits: Rest of World]

"Essentially what this is doing is flattening descriptions into particular stereotypes, which could be viewed in a negative light," said Amba Kak, Executive Director of the AI Now Institute, in the article.


The Rest of World article also explains that even if stereotypes are not "inherently negative, they are still stereotypes: They reflect a particular value judgment and a winnowing of diversity."


Bias and stereotypes are not only related to the negative depiction of certain countries or ethnicities but also, as mentioned earlier, to gender:


"Across almost all countries, there was a clear gender bias in Midjourney’s results, with the majority of images returned for the 'person' prompt depicting men."

In conclusion, AI experts and researchers agree that bias in these kinds of large language models and image generators is "a tough problem to fix" because, after all, "the uniformity in their output is largely down to the fundamental way in which these tools work: the AI systems look for patterns in the data on which they’re trained, often discarding outliers in favor of producing a result that stays closer to dominant trends."
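The mechanism the experts describe, a model reproducing the dominant patterns in its training data while discarding outliers, can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the "training captions" are toy data whose 70/30 split loosely mirrors the gender imbalance Rest of World reported, and the two generators are stand-ins, not real image models.

```python
import random
from collections import Counter

# Toy "training data": captions attached to images of a "person".
# The skew (70% "man") is an illustrative assumption, not real data.
training_captions = ["man"] * 70 + ["woman"] * 30

def sample_generator(data, n, seed=0):
    """Generator that samples in proportion to the training data,
    so outputs roughly reproduce the skew of the inputs."""
    rng = random.Random(seed)
    return [rng.choice(data) for _ in range(n)]

def mode_generator(data, n):
    """Exaggerated generator that always emits the single most common
    pattern (the mode), discarding minority examples entirely --
    the 'stays closer to dominant trends' behavior taken to its limit."""
    most_common = Counter(data).most_common(1)[0][0]
    return [most_common] * n

print(Counter(sample_generator(training_captions, 1000)))
print(Counter(mode_generator(training_captions, 1000)))
```

The first generator inherits the imbalance of its data; the second erases the minority group altogether. Real image models sit somewhere between these extremes, which is why a skewed training set can yield outputs even more uniform than the data itself.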


These tools are designed and trained to mimic what has been done before, not to ensure and promote diversity: "Any technical solutions to solve bias would likely have to start with the training data, including how these images are initially captioned."


Read the whole article and research here.

