AI, Ethics, and Geoethics (CS 5970)


Module 6: Bias, Fairness, and Accountability

Summary

  • (60 min) Read the chapter
  • (15-30 min) Discussion on Slack
  • (5 min) Grading declaration

Reading

Ayodele Odubela, one of the Black women data scientists I follow on Twitter, is releasing a new book called “Getting Started in Data Science.” She recently released a draft of the book, which I have posted to Canvas.

While the majority of the book focuses on getting newcomers up to speed in data science, she dedicates an entire chapter to bias and fairness in AI/ML. This chapter takes a different view from the other chapters, videos, and readings we have covered so far (and will continue to cover) in this course and in this module.

Please read Chapter 8: Bias, Fairness, and Accountability in her book. OU students can access the book on Canvas.

Cover image from Getting Started in Data Science

Discussion

This discussion will happen in the #general channel. Remember to use threads so that we can keep track of the conversation more easily.

There is a LOT to learn from this chapter! We will dig into the different kinds of bias in the case study that follows, so today we will focus on other parts of the chapter. As with our other readings, I have put some quotes that stood out to me below. I would like to hear what stood out to you, and also to discuss some of these quotes, which may seem quite provocative. For example, do AI/ML researchers have a god-complex, as she states? What exactly IS an AI incident response plan? What does one look like? What would trigger it?

  • “If you take away only one thing from this chapter, and ultimately this book, treat data in the ways in which you’d treat real people.  You have to remember, in many cases, each row is a human life.” (page 145, Chapter 8, Getting Started in Data Science)
  • “As technologists, we should assume every model we create will encode racist, sexist, and biased norms until we address them specifically.” (page 147, Chapter 8, Getting Started in Data Science)
  • “It is a drastic reduction to excuse bias in our data and models because of the idea that bias exists everywhere.” (page 148, Chapter 8, Getting Started in Data Science)
  • “We must address the fallacy that the problems with biased algorithms are only biased because of the data.  There are many use cases that outright continue to demonize marginalized communities.” (page 149, Chapter 8, Getting Started in Data Science)
  • “One of the major hurdles we have to overcome in ML/AI is the god-complex researchers can have when it comes to their work.” (page 151, Chapter 8, Getting Started in Data Science)
  • “One of the biggest barriers to having reproducible systems is the lack of good documentation.” (page 160, Chapter 8, Getting Started in Data Science)
  • “Build the muscle memory to ask, ‘Who tracks how ML is developed and used at my company? Who is responsible for auditing our ML systems? Do we have AI incident response plans?’” (Chapter 8, Getting Started in Data Science)

Declarations

  • OU students: After you have done your reading and engaged actively in discussion, complete the grading declaration titled “Module 6: Bias, Fairness, and Accountability”