Biases and Stereotypes in Generative AI
- Victoria Mortimer
- Dec 20, 2023
- 2 min read
The tech race for generative AI officially began in 2022, when OpenAI launched ChatGPT, a conversational chatbot based on a large language model (LLM).
According to an article published in The Verge, ChatGPT might be the fastest-growing consumer internet app of all time, reaching an estimated 100 million monthly users in just two months. However, recent articles, particularly one published in Rest of World, suggest that "generative AI systems have tendencies towards bias, stereotypes, and reductionism" when it comes to portraying diverse identities.
Bias occurs in most algorithms and AI systems; these technologies are also prone to "hallucinations," meaning they generate false information.
A recent Bloomberg analysis of more than 5,000 AI-generated images revealed that images associated with higher-paying jobs featured people with lighter skin tones, and that results for more professional roles were male-dominated.
Rest of World, a tech-focused media outlet covering technology’s impact in Latin American, African, and Asian societies, analyzed 3,000 images generated by Midjourney, an AI tool that generates images based on text prompts. Some of the results they obtained include:


"Essentially what this is doing is flattening descriptions into particular stereotypes, which could be viewed in a negative light," said Amba Kak, Executive Director of the AI Now Institute, in the article.
The Rest of World article also explains that even if stereotypes are not "inherently negative, they are still stereotypes: They reflect a particular value judgment and a winnowing of diversity."
Bias and stereotypes are not only related to the negative depiction of certain cities or ethnicities but also, as mentioned earlier, to gender:
"Across almost all countries, there was a clear gender bias in Midjourney’s results, with the majority of images returned for the 'person' prompt depicting men."
In conclusion, AI experts and researchers agree that bias in these kinds of large language models and image generators is "a tough problem to fix" because, after all, "the uniformity in their output is largely down to the fundamental way in which these tools work: the AI systems look for patterns in the data on which they’re trained, often discarding outliers in favor of producing a result that stays closer to dominant trends."
These tools are designed and trained to mimic what has been done before, not to ensure and promote diversity: "Any technical solutions to solve bias would likely have to start with the training data, including how these images are initially captioned."
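The mechanism described above, a model reproducing the dominant patterns in its training data while discarding outliers, can be illustrated with a toy sketch. This is not code from any of the cited studies; the skewed label distribution and both "generator" functions are illustrative assumptions, meant only to show how sampling from skewed data reproduces the skew, and how collapsing to the most common pattern erases minority examples entirely.

```python
from collections import Counter
import random

# Toy "training data": gender labels for images a hypothetical model
# has seen for a 'person' prompt. The 80/20 skew is an assumption
# chosen purely to illustrate the effect.
training_labels = ["man"] * 80 + ["woman"] * 20

def sample_by_frequency(labels, n, seed=0):
    """Mimic a model that reproduces the empirical training distribution."""
    rng = random.Random(seed)
    return [rng.choice(labels) for _ in range(n)]

def sample_mode_only(labels, n):
    """Mimic a model that collapses to the dominant pattern,
    discarding outliers in favor of the single most common label."""
    mode, _ = Counter(labels).most_common(1)[0]
    return [mode] * n

# Reproducing the distribution preserves the bias; collapsing to the
# mode amplifies it to 100%.
print(Counter(sample_by_frequency(training_labels, 1000)))
print(Counter(sample_mode_only(training_labels, 10)))
```

Neither toy model is "wrong" by its own training objective, which is the point the article makes: fidelity to the data is fidelity to the data's imbalances.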
Read the whole article and research here.