How can we mitigate bias in AI design?
Overview
This lesson introduces students to the concept of bias in AI training data and its societal implications. Using a real-world example of Google’s Gemini AI, students will explore how overrepresentation and underrepresentation in datasets affect various groups.
- AI & Society
- 60 minutes
Digital Materials
Objectives
After this experience, students will be able to:
- Analyze the implications of overrepresentation and underrepresentation in AI training data.
- Identify real-world examples of AI bias and discuss their societal impacts.
- Propose strategies for promoting diversity and inclusion in AI development.
Questions Explored
- How can the way groups are represented in training data affect them?
- How can these biases be corrected?
- What are the challenges associated with over-correction?
Key Terms
Algorithmic Bias
- When AI produces repeatable errors that create unfair outcomes, favoring some groups over others.
Overrepresentation
- When certain groups or categories are disproportionately included in a dataset compared to their presence in the real world.
Underrepresentation
- When certain groups or categories are inadequately included in a dataset, leading to a lack of diversity in the training data.
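For classes with coding experience, the two representation terms above can be made concrete with a short sketch. This is an illustrative example only; the function name, group labels, and population shares are all hypothetical toy values, not part of any real dataset.

```python
from collections import Counter

def representation_ratios(dataset_groups, population_shares):
    """Compare each group's share of a dataset to its share of the
    real-world population. A ratio above 1 suggests overrepresentation;
    a ratio below 1 suggests underrepresentation."""
    counts = Counter(dataset_groups)
    total = len(dataset_groups)
    return {
        group: (counts[group] / total) / share
        for group, share in population_shares.items()
    }

# Hypothetical toy data: groups A and B each make up 50% of the
# population, but B is only 20% of the training sample.
sample = ["A"] * 8 + ["B"] * 2
shares = {"A": 0.5, "B": 0.5}
print(representation_ratios(sample, shares))
```

Students can vary the sample makeup and observe how the ratios shift, which leads naturally into the discussion questions on correction and over-correction.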