
Understanding AI bias: Lesson plan

How does AI bias happen?
Common Sense Education

GRADES 6–12
20 minutes
Artificial intelligence is trained on real-world data that people have given it, and if that data contains biases (or is incomplete), the AI can end up being biased, too. In this lesson, students will think critically about the training data that informs what AI tools can do, and consider possible ways to reduce AI bias.
[Image: An apple labeled "apple" and a tomato also labeled "apple," with question marks floating around them.]

Objectives

  • Define AI bias.
  • Understand how AI bias happens.
  • Reflect on ways to reduce AI bias.

Vocabulary

  • AI bias – when an AI tool makes a decision that is wrong or problematic because it learned from training data that didn't treat all people, places, and things accurately
  • training data – the information given to an AI to help it learn how to do specific tasks
  • testing data – the information used to check whether the AI that was created is reliable and accurate
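To make these two terms concrete, here is a minimal sketch in Python of the training data / testing data split. It assumes the scikit-learn library and a toy fruit dataset; the feature values and labels are hypothetical, invented only for illustration.

```python
# A minimal sketch of training data vs. testing data, assuming scikit-learn
# and a toy fruit dataset. Each fruit is described by two hypothetical
# features, (redness, yellowness), each scored 0-10.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

features = [
    [9, 1], [8, 2], [9, 2],   # apples: mostly red
    [1, 9], [2, 8], [1, 8],   # bananas: mostly yellow
]
labels = ["apple", "apple", "apple", "banana", "banana", "banana"]

# Most examples teach the model (training data); the held-out rest
# check whether it learned correctly (testing data).
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.33, random_state=0
)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("Accuracy on testing data:", model.score(X_test, y_test))
```

On a cleanly separable toy set like this the accuracy should be perfect; the interesting cases, as this lesson shows, are the ones where the training data is missing something.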

What you'll need

Before the lesson

We encourage teaching the following lessons to help set a foundational understanding of how AI works:

Step by step

  1. Say: When computer scientists create AI, they use two different types of data: training data and testing data (Slide 4).
Training data is the information given to an AI to help it learn how to do specific tasks (Slide 5). Testing data is the information used to check whether the AI that was created is reliable and accurate (Slide 6).
  2. Say: Imagine we are computer scientists in the process of creating an AI tool. The purpose of the tool we're building is to identify different types of fruit. We have some training data to help us get started (Slide 7).
  3. Ask: Based on these examples of training data, what types of fruit might our AI be able to identify? (Slide 8)
  4. Show Slide 9 and explain that the images here show examples of the testing data used to check if the AI is working properly. The labels under each image are what the AI thinks each fruit is called.
Ask: Do you notice any mistakes? Why do you think the AI is making these mistakes? (Slide 10)
  5. Explain that the mistakes the AI made are an example of AI bias, which is when an AI tool makes a decision that is wrong or problematic because it learned from training data that didn't treat all people, places, and things accurately (Slide 11).
Show Slide 12 and say: In the training data, apples were the only example of a red fruit. The testing data shows that the AI learned to identify anything red as an apple. In other words, the AI we created has a bias toward thinking that every red fruit is an apple. (A runnable sketch of this failure, and of one fix, appears after this list.)
  6. Ask: What are some ways we could reduce the AI bias of this fruit detector? (Slide 13)
Invite students to share out, and then review the suggestions on Slide 14.
  7. Say: While it's almost impossible to completely eliminate AI bias from a tool, we can do our best to reduce it by coming up with as diverse and complete a set of training data as possible (Slide 15). The sketch after this list shows this fix applied to our fruit detector.
  8. If time permits, read Slide 16 and have students work independently to come up with a list of image descriptors. Then have them pair up to compare their lists and add any descriptors they missed.
Review the descriptors on Slide 17 and continue to add to the list based on any other ideas the students have.
  9. Say: Remember that behind every AI tool are humans making decisions on what training data the tool will use. Understanding how AI bias occurs can help us think critically about its potential impacts (Slide 18).
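The sketch below, referenced in steps 5 and 7, reproduces the fruit detector's bias in a few lines of Python. It assumes scikit-learn and reduces each fruit to two hypothetical features, (redness, glossiness); all of the numbers are invented only to make the failure, and the fix, easy to see.

```python
# A minimal sketch of the fruit detector's bias, assuming scikit-learn.
# Features are hypothetical (redness, glossiness) scores from 0 to 10.
from sklearn.tree import DecisionTreeClassifier

# Biased training data: apples are the only red fruit the model ever sees.
train_features = [
    [9, 2], [8, 3], [9, 3],   # apples: red, not very glossy
    [1, 1], [2, 2], [1, 2],   # bananas: yellow, not very glossy
]
train_labels = ["apple"] * 3 + ["banana"] * 3

model = DecisionTreeClassifier().fit(train_features, train_labels)

# Testing data includes a red, glossy tomato the model has never seen.
tomato = [[9, 8]]
print(model.predict(tomato))  # ['apple'] -- red only ever meant "apple"

# Reducing the bias: retrain on more diverse training data that also
# includes a red fruit that isn't an apple.
train_features += [[9, 8], [8, 9]]   # tomatoes: red and glossy
train_labels += ["tomato", "tomato"]
model = DecisionTreeClassifier().fit(train_features, train_labels)
print(model.predict(tomato))  # ['tomato'] -- red no longer means "apple"
```

As on Slide 12, the model never "knew" what an apple is; it learned the simplest pattern its training data allowed, and changing that data changed what it learned.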
