On this very first day of the SOTL 2024 advent calendar, we have a blog post exploring the use of generative AI in education and teaching. It is a reflective piece on the development of a teaching resource and what was learned in its creation.
Illuminating the Intersections of Generative AI, Gender, Race, and Ethnicity: A Journey of Critical Exploration
Dustin Hosseini and Nayiri Keshishi
Over the past year, we have explored the intricate connections between generative AI and race, gender, and ethnicity. Our exploration has culminated in developing a comprehensive teaching resource designed to foster critical thinking and analysis about the underlying dynamics of these intersections within AI. In this post, we share what our scholarship encompasses.
What We Did
Our project began in summer 2023 with experiments in generative AI, which led to a collaboration and, ultimately, a resource that challenges students and educators to think critically about how generative AI interacts with social constructs such as race, gender, and ethnicity. To achieve this, we designed a teaching package consisting of PowerPoint slides and accompanying worksheets. These materials are crafted to provoke meaningful discussions and reflections on the often-overlooked intersections of racialized gender, race, and ethnicity within the realm of AI. To this end, understanding intersectionality (Crenshaw, 1991; hooks, 2015; Hill Collins, 2019) forms a crucial part of this workshop. Our resources are designed to encourage critical thinking, prompting students to ask questions like: How do the datasets used to train AI systems reflect societal biases? In what ways might these systems reproduce or challenge gender and racial stereotypes? What are the ethical implications of these technologies for marginalised communities?
We wanted the resource to be accessible to a broad audience, so we made it available online for free. Whether you’re an educator in the UK or beyond, the resource is designed to be flexible enough to suit a variety of educational contexts. It can be used in lectures, seminars, or workshops and encourages deep engagement with the ethical, social, and cultural dimensions of AI technology. Our materials are now live and can be accessed via the following links:
Additionally, we have actively sought feedback from students and educators who have engaged with our workshops and resources. This feedback is invaluable, as it helps us refine our materials and informs a future journal article that will explore these themes further.
Why We Did It
The motivation behind this project stems from our concern about the growing influence of generative AI technologies in society and the underexplored ways in which they intersect with issues of race, gender, and ethnicity. Generative AI tools are not neutral: they are created by humans and trained on vast datasets that often carry biases and assumptions about race, gender, and ethnicity (Benjamin, 2019; Mohamed et al., 2020; Zembylas, 2023).
We aimed to create a resource that could empower educators to guide their students through critically examining what these technologies create. We wanted to encourage learners to reflect on how gen AI can reinforce or create new inequalities. By focusing on the intersections of racialized gender and ethnicity, we hoped to shed light on how these technologies can perpetuate harmful stereotypes or marginalise already vulnerable groups. In short, we aimed to illustrate how gen AI is not just a technical issue but a deeply social and political one.
What We Learned
Through our research and the development of this resource, we learned that many educators and students are eager to engage with these complex issues. We also learned that the conversation around gen AI and social issues is still developing. Many people are just beginning to realise the profound ways in which AI technologies are shaped by—and, in turn, shape—social inequalities. This makes it all the more important to have resources like ours available to foster informed, critical discussions in classrooms and beyond.
Impact
The impact of our project is already being felt. Educators across different institutions have begun to incorporate our materials into their teaching, helping to spark discussions about the social dimensions of AI. One colleague from The Glasgow School of Art remarked, “I just want to say how useful I’ve found your resource and how pleased I am that it’s now widely accessible.” Another, from XJTLU, wrote, “Your dedication and hard work have significantly enhanced the learning experience for our students, and we deeply appreciate your efforts in making this educational material engaging and informative.” This feedback underscores the importance of providing accessible, high-quality educational materials on the subject.
Moreover, our project has also evolved beyond its original scope. The blog post that introduced our resource has since been developed into a pre-print publication, further solidifying our commitment to advancing discourse in this field. For those interested in exploring our research in greater depth, you can access the pre-print here: Hosseini, D. D. (2024, February 3). Generative AI: a problematic illustration of the intersections of racialized gender, race, ethnicity. https://doi.org/10.31219/osf.io/987ra
As we refine our materials and gather feedback, we’re excited to see how educators and students use them to deepen their understanding of these crucial topics. By illuminating the intersections of technology and society, we hope to contribute to a more equitable and inclusive future where AI is developed and deployed with a critical awareness of its social impacts.
Thank you for reading, and we look forward to continuing the conversation about the intersections of generative AI, race, gender, and ethnicity. Together, we can foster a deeper understanding of the complexities of technology and its role in shaping our world.
N.B. Contact Dustin Hosseini ([email protected]) to find out more about the project, including staff feedback from the University of Glasgow.
References
- Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.
- Crenshaw, K. (1991). Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color. Stanford Law Review, 43(6), 1241–1299. https://doi.org/10.2307/1229039
- Hill Collins, P. (2019). Intersectionality as Critical Social Theory. Duke University Press.
- hooks, b. (2015). Ain’t I a Woman: Black Women and Feminism. Routledge.
- Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8
- Zembylas, M. (2023). A decolonial approach to AI in higher education teaching and learning: strategies for undoing the ethics of digital neocolonialism. Learning, Media and Technology, 48(1), 25-37. https://doi.org/10.1080/17439884.2021.2010094
Festive Song: Stille Nacht
On this first day of the Advent Calendar, we have a beautiful, festive rendition of Stille Nacht from The King’s Singers in a live performance from King’s College Chapel. 😇👼
Lead editor: Colin Mack
Editors: Louise Sheridan, Shaun Bremner-Hart & Edward Beggan