AI, Ethics, and Geoethics (CS 5970)


Module 6: Case study on racial bias in health-care algorithms

Summary

  • (30 min) Read the articles
  • (5 min) Read the case study
  • (15-30 min) Discussion on Slack
  • (5 min) Grading declaration

Readings

Read this recent Nature article (a popular-press news piece in Nature, not a research article) titled “Millions of black people affected by racial bias in health-care algorithms”.  It discusses an analysis of a recent ML algorithm that unintentionally steered additional care toward non-black patients.

To better understand how systemic anti-black (as well as anti-Muslim and anti-Jewish) discrimination is in health care, read this Harvard Health article “Racism and discrimination in health care: Providers and patients.”  If the topic interests you, the article refers to a number of additional sources.


Case study

Imagine that you work for a company that provides AI/ML solutions for a variety of end users.  Note that in this type of work situation, you and your colleagues may not be familiar with the biases inherent in each client's specific industry.  You recently won a big contract with a local hospital chain for one specific task: build a model that correctly decides when a patient should be admitted for a potential heart attack and when the patient is safe to send home.

The data you have comes from all patients in this large hospital chain, which covers a major metropolitan area (both suburban and urban patients).  When a patient arrives at the ER with chest pains, they get an EKG (which provides time-series data about the heart).  You also have access to their health history data as provided to the hospital.  Your task is to build a model that saves lives and saves money (admitting everyone does many patients no favors: it adds to their health-care costs, exposes them to hospital germs, and overloads the health-care system).

Discussion

This discussion will happen in the #case-studies channel. Remember to use threads so that we can keep track of the conversation more easily.

Consider the reading and the case study situation above.

  1. What potential biases could exist in this data?  Do you think there could be biases in the EKG data even without the health history data?
  2. How can you check whether unintentional biases show up in your model?  (One starting point is sketched after this list.)
  3. What can you do to improve trust in your model for all people? (Yes, this question is hard!)
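
One common starting point for question 2 is a disaggregated evaluation: rather than looking only at overall accuracy, compute the model's error rates separately for each demographic group and compare them.  Below is a minimal sketch in plain Python.  It assumes you already have a trained admit/send-home classifier and a held-out test set; the group labels and the inputs (groups, y_true, y_pred) are hypothetical placeholders for whatever your data actually provides.

    # Minimal sketch of a disaggregated evaluation, assuming a held-out test
    # set where y_true is the correct decision (1 = should be admitted,
    # 0 = safe to send home) and y_pred is the model's decision.
    from collections import defaultdict

    def rates_by_group(groups, y_true, y_pred):
        """Compute false-negative and false-positive rates per group.

        A false negative is the dangerous error here: a patient who should
        have been admitted (y_true == 1) but was sent home (y_pred == 0).
        """
        counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
        for g, t, p in zip(groups, y_true, y_pred):
            if t == 1:
                counts[g]["pos"] += 1
                if p == 0:
                    counts[g]["fn"] += 1
            else:
                counts[g]["neg"] += 1
                if p == 1:
                    counts[g]["fp"] += 1
        return {
            g: {
                "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
                "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
                "n": c["pos"] + c["neg"],
            }
            for g, c in counts.items()
        }

    if __name__ == "__main__":
        # Toy data: two hypothetical groups of equal size.
        groups = ["A", "A", "B", "B", "B", "A"]
        y_true = [1, 0, 1, 1, 0, 1]
        y_pred = [1, 0, 0, 0, 0, 1]
        for g, r in rates_by_group(groups, y_true, y_pred).items():
            print(g, r)

In this toy run the two groups are the same size, but their false-negative rates differ sharply; that is exactly the kind of gap that a single overall accuracy number hides.  A disaggregated check like this is only a first step: it cannot tell you why a gap exists, or whether the labels themselves encode bias (as in the Nature article, where cost was a biased proxy for health need).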

Declarations

  • OU students: After you have done your reading and engaged actively in discussion, complete the grading declaration titled “Module 6: Case study on racial bias in health-care”