AI, Ethics, and Geoethics (CS 5970)
Module 10: AI, liability, security, and policy
- This module will take two weeks to complete
- This module will involve readings as well as videos
We have read about a wide range of AI applications spanning private industry, academia, and government, at both local and global levels. As AI continues to be deployed at all of these levels, it is important to think about issues of liability (for example, who is responsible if an autonomous car kills someone?), security (what must we do to ensure that AI data is secure and that the AI itself is secure and not hackable?), and policy (how can governments set policies to ensure the ethical development and use of AI?).
All of these issues are intertwined. For example, liability and security issues may drive policy, and policy may create new issues even as it tries to solve others. In this module, we will work through readings and videos on all of these topics. We can’t tackle all of them in depth given that we only have two weeks, but we can learn a lot!
Autonomous Trolley Problem
For the first day, let’s look at some guiding principles behind how AI regulation could be designed, as well as an interesting paper on how to recognize when AI needs to be regulated in advance rather than only reactively. Finally, we will read about explaining AI to policymakers.
Reasoning behind how to regulate
- (10 min) Read Ethical algorithm design should guide technology regulation
- (30 min) Read Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI
- (30 min) Go to this page and read AI EXPLAINED: Non-technical Guide for Policymakers. At the bottom, there is a link to the guide itself. Please read the guide as well (it is a quick read; remember, it’s non-technical!)
- (30 min) For your case study today, I’d like for you to synthesize the above articles and provide a non-technical guide about how ethics can guide algorithmic design and what the “canaries” would be specific to the problem domain you are working on. If you are not doing AI for research or a project, pick a domain that is interesting to you (but is not covered already in the readings). Post your case study to the #case-studies channel. This is a thoughtful synthesis exercise, and I expect it will be longer than most of your case studies. Please also read your fellow students’ posts and reply!
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 10: The Need for Policy”
For our second day, we will jump into liability issues with AI. Many of these issues drive the need for regulation at the national level.
Liability and AI
- (45 min) Read the paper When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning. This paper looks much longer than what I usually assign, but it is a legal brief, which means the references appear as footnotes and most pages contain only 1/4-1/2 page of text.
- (5 min) Read the WSJ article The Ethics of AI: What Happens When Humans Can’t Agree on What is “Right?”
- (Optional) Read the paper AI and Autonomous Driving: Key ethical considerations
- (10 min) Since the paper was a bit longer than usual and yesterday’s case study was also longer, today’s will be a bit shorter. The paper focuses on ML becoming the standard of care and the resulting malpractice lawsuits. In the #case-studies channel, discuss another domain where you see a liability-driven need for ML to be regulated and explain why. Your answer should not be anything we have already read about or anything your fellow students have already proposed (meaning you need to read their replies too!).
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 10: AI and liability”
For the next two days, we will jump into actual policies and guidance being provided by governments around the world.
International policy on AI
- (30 min) Read about India’s AI policy. The beginning of the document (really the first 2/3!) gives all the reasons why they need an AI policy. The full document is not too long and should be a quick and enjoyable read.
- (30 min) Read about Singapore’s approach to AI regulation. The full framework is linked from that page (which just explains it a bit). As with the India policy, although the pdf is long, it is a quick read given your knowledge at this point in the class. You can focus especially on the case studies of how the principles and regulation are being applied. Also, the two appendices (called Annex A & B in this document) summarize the document.
- (15-30 min) For this case study, we are going to analyze an actual company that is doing autonomous drone work. Although they don’t discuss AI, there is likely AI involved, and there is clear potential for significant AI use in the future. Given the regulations and policies you have read about, discuss how Zipline is either following the AI policies & guidelines you read about or how they could do so as they add additional AI. Note: I have no connection to this company; I just think they do cool stuff, and they have really expanded in the last few years! I’ve been following them since they were a small startup. This discussion will work best if you build on each other’s work, so use the threading in #case-studies. Focus your discussion on their operations abroad (because we will examine the US and EU regulations next).
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 10: Policy Day 1.”
International and US policy on AI
- (30-45 min) Read about the EU approach to AI liability and policy. Note that the EU has a full policy on liability linked on this page, as well as a policy on AI & ethics. We will read the ethics one tomorrow.
- (15 min) Read the US guidance on AI regulation (this is short; the US does not yet have a full document like some of the other countries, though I’m sure that is changing quickly)
- Optional: Learn more about how the US is working on AI & policy through the Brookings Institution series on AI and governance
- (15-30 min) Given the EU and US regulations and guidance that you have read about today, go back to the discussion on Zipline and update your discussion, focusing on what Zipline needs to do to operate in the US (note that they are preparing to do this!). Assume they are developing AI to improve their autonomous capabilities. As before, use the #case-studies channel to discuss, build upon each other’s answers, and use threading! Since both the EU and the US have a strong liability focus, make sure to include liability in your discussion (feel free to draw on material from the AI liability law review you read earlier in this module).
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 10: Policy Day 2.”
Governmental ethics guidelines
- (30 min) I want you to download and look through The State of AI Ethics. This is a really long report, consisting mostly of very short research summaries. Rather than asking you to read the whole document, I want you to find the parts that interest you (the document covers a LOT of AI & ethics!) and dig deeply into those.
- (30 min) This is on the same page where we read about the EU approach to AI liability. Today, I want you to read about the EU approach to AI and specifically look at their ethics guidelines (linked on that page).
Case Studies and Ethical Principles
- (15 min) Read the Ethics Unwrapped page about the Freedom of Tweets case. The full case is in a pdf rather than on the webpage.
- (15 min) Pick one of the discussion questions from their page and discuss it in #general. Please relate your discussion to what you have learned here, and specifically to the ethical guidelines you read above. Assume the tweets will be monitored by AI (they already are!).
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 10: AI ethics guidelines.”
At first this might seem like a different topic, but security is closely related to all of the liability and policy issues we have read about above! In fact, several of the international policies and guidelines discuss security directly. Today we will jump into some readings focusing specifically on security.
AI and security
- (60+ min) To get started on learning how AI and security (and policy) intertwine, start with the Brookings Institution series on AI and national security. As with our last foray into this series, pick several of the videos or articles to watch (or read), and then summarize each in #general.
- (30-60 min) Read the chapter “Security Risks of Artificial Intelligence-Enabled Systems.” Summarize and discuss in #general.
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 10: AI & security.”