Rest of World conducted a comprehensive analysis of 3,000 images generated by artificial intelligence (AI), examining how generative AI systems portray different countries and cultures. The study was inspired by a BuzzFeed article that presented 195 AI-created Barbie dolls, each representing a distinct country, generated with Midjourney, a popular AI image generator.
However, these images, instead of celebrating cultural diversity, displayed a series of stereotypes and oversimplifications. For instance, several Asian Barbies were portrayed with light skin, while those representing Thailand, Singapore, and the Philippines had blonde hair. More strikingly, Lebanon Barbie was shown in a war-ravaged setting, and South Sudan Barbie was depicted with a gun.
This portrayal of global cultures by AI has raised concerns about the inherent biases within these systems. A separate Bloomberg analysis of more than 5,000 AI images found that those linked to higher-paying job titles tended to feature people with lighter skin tones, and that most images of professional roles depicted men. This points to a deeper, systemic bias in AI, extending beyond cultural representation to broader patterns of gender and racial inequality.
To delve deeper into the nature of these biases, Rest of World utilized Midjourney to create images based on simple prompts adapted for specific countries including China, India, Indonesia, Mexico, Nigeria, and the U.S. The prompts were related to everyday subjects such as “a person,” “a woman,” “a house,” “a street,” and “a plate of food.”
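The article does not publish the exact prompt templates, but the methodology it describes is essentially a cross of subjects with countries. As a minimal, purely illustrative sketch (the subject list and country list come from the article; the template wording is an assumption, not the study's actual phrasing):

```python
# Hypothetical reconstruction of the study's prompt matrix.
# The real prompts likely used demonyms ("an Indian person");
# this sketch uses a simple "<subject> in <country>" template instead.
subjects = ["a person", "a woman", "a house", "a street", "a plate of food"]
countries = ["China", "India", "Indonesia", "Mexico", "Nigeria", "the U.S."]

def build_prompts(subjects, countries):
    # One adapted prompt per (country, subject) pair.
    return [f"{subject} in {country}"
            for country in countries
            for subject in subjects]

prompts = build_prompts(subjects, countries)
print(len(prompts))  # 6 countries x 5 subjects = 30 prompt variants
```

Generating many images per prompt variant (here, 100 each would yield the study's 3,000) is what makes the recurring visual patterns statistically visible rather than anecdotal.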
The resulting 3,000 images from this experiment revealed a concerning trend: the AI often resorted to stereotypical and reductive images when depicting national identities. For example, images for “an Indian person” frequently showed an elderly man with a beard, while “a Mexican person” was usually depicted as a man in a sombrero.
This research highlights the AI’s propensity to simplify and stereotype diverse cultures. In the case of Nigeria, a nation rich in ethnic diversity, the AI-generated images often failed to capture the cultural intricacies, instead defaulting to generalized and undifferentiated portrayals. This trend of simplification and stereotyping was not limited to Nigeria but was a common thread across the images of other countries as well.
The study also shed light on a significant gender bias inherent in these AI systems. The majority of images produced in response to the generic “person” prompt were of men, reflecting the gender distribution in the datasets used to train these AI systems. However, this trend was notably reversed for the “American person” prompt, where the majority of images depicted women, indicating a possible overrepresentation of women in U.S. media sources within the training datasets.
The biases evident in AI image generators pose a significant challenge. These systems, by design, search for patterns in their training data, often overlooking outliers in favor of prevailing trends. This method results in AI outputs that lack diversity, as the systems tend to replicate pre-existing biases rather than generating varied and representative images.
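The mechanism described above can be illustrated with a toy simulation. This is not how Midjourney actually works internally; it is a hedged sketch, assuming a hypothetical 80/20 skew in training data, showing how frequency-proportional sampling reproduces that skew in the output:

```python
import random
from collections import Counter

# Hypothetical training distribution: 80% of "person" images show men.
training_attributes = ["man"] * 80 + ["woman"] * 20

random.seed(0)  # fixed seed so the sketch is reproducible

# A generator that samples attributes in proportion to their training
# frequency will mirror the skew and rarely surface minority cases.
outputs = [random.choice(training_attributes) for _ in range(1000)]
print(Counter(outputs))  # roughly preserves the 80/20 training split
```

The point of the sketch is that nothing in pure frequency-matched sampling corrects the imbalance; without deliberate intervention, the output distribution simply echoes the input distribution.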
The implications of these findings are profound. As AI systems become increasingly integral in various sectors, including advertising and creative industries, their biases can have tangible effects on societal perceptions and representations.
Left unchecked, these AI systems risk reinforcing stereotypes and narrow viewpoints, potentially undoing progress made toward diversity and inclusivity in media and cultural representation. This underscores the urgent need for greater transparency and ethical responsibility in the development and deployment of AI technologies, so that they contribute positively to our understanding and appreciation of the rich tapestry of global cultures.