Generative AI Tools Like DALL-E Risk Reinforcing Racist, Gendered Stereotypes, Warns Educator


Generative AI tools such as DALL-E can reproduce racist and gendered representations of people, according to a paper by Donnesh Dustin Hosseini, a decolonial and anti-racist educator. The study uses critical visual analysis and critical race semiotics to examine 16 images produced by DALL-E, revealing the problematic nature of using generative AI to create representations of people. The paper urges educators to exercise caution when using generative AI, as misrepresentations can create harmful points of reference. It also outlines the affordances of generative AI for creating illustrations and images, while stressing the need for a critical and ethical approach.

What is the Problem with Generative AI in Representing Race and Gender?

Generative AI tools, such as DALL-E, have been found to reproduce racist, gendered, and racially gendered representations of people. This paper, written by Donnesh Dustin Hosseini, a decolonial and anti-racist educator affiliated with the University of Glasgow and University of Strathclyde, investigates these issues. Hosseini uses a combination of critical visual analysis and critical race semiotics to analyze a series of 16 images produced by DALL-E. The analysis reveals the problematic nature of using generative AI for creating representations of people, particularly how racially gendered avatars are portrayed. These creations are underpinned by contemporary values that are racist in nature.

The paper also provides educators with insights into the affordances of using generative AI for creating illustrations and images. However, it warns them to exercise caution and always take a critical and ethical stance when using generative AI. Misrepresentations can create harmful points of reference for learners and educators alike, and it is well documented that some groups are more vulnerable than others to misrepresentation and misclassification.

How Does Generative AI Work and What is its History?

Artificial intelligence (AI) emerged in the 1950s and 1960s alongside modern computing and has since become embedded in everything from games such as SimCity to word-processing software such as Microsoft Word. Generative AI, however, is a more recent development: AI that draws on pre-existing data to create new content such as text, images, and other media.

For learners and educators, GenAI allows anyone with a modern computer and a sufficiently fast Internet connection to experiment with creating content. This content can illustrate knowledge in broad terms, from geography, cultures, languages, and peoples to ideas, concepts, and theories in mathematics, physics, biology, and chemistry, among other fields. However, educators must consider practical and ethical caveats.

What are the Ethical Issues with Generative AI?

The ethical issues of creating images with GenAI tools are significant. Misrepresentations of race, races, and gender can create harmful points of reference for learners and educators alike, and some groups are more vulnerable than others to misrepresentation and misclassification. Educators must therefore exercise caution and always take a critical and ethical stance when using generative AI.

The ability to create images and representations of ideas, people, and concepts can support educators in their daily instructional and learning practices, but integrating generative AI tools into their work can be both exhilarating and bewildering. For example, tools like ChatGPT respond to textual prompts to craft written material. While these text-generating tools have dominated the discourse since late 2022, generative AI encompasses a broader range of instruments.

How Can Educators Use Generative AI Responsibly?

Educators who wish to use GenAI to create illustrations and representations of peoples and cultures must be aware of the potential for misrepresentation. They should conduct their own experiments with a critical eye, using tools like DALL-E and other text-to-image GenAI tools. They should also be aware of the underpinnings of GenAI and what it can create.

The paper encourages readers to connect with the issues presented while considering the decolonial thinking and analysis that underpin it. Its ideas and examples can help educators, whatever their level of experience, develop their teaching practice with students and colleagues while building a critical awareness of what GenAI is built on and what it can create.

What are the Implications of the Study?

The study concludes that the use of generative AI in creating representations of people is problematic. The images produced by tools like DALL-E are underpinned by contemporary values that are racist in nature. Therefore, educators must exercise caution when using these tools and always take a critical and ethical stance.

The study also offers insights into the affordances of generative AI for creating illustrations and images, with examples of how algorithmic oppression operates by constructing misrepresentations of race, races, and gender. These insights can help educators develop their teaching practice alongside a critical awareness of how GenAI works and what it can create.

Publication details: “Generative AI: a problematic illustration of the intersections of racialized gender, race, ethnicity”
Publication date: 2024-02-03
Author: Davood K Hosseini
DOI: https://doi.org/10.31219/osf.io/987ra