Ethics and Governance of Artificial Intelligence Reading Group | Samantha Bates | August 08, 2017

Elon Musk, Bill Gates, and Stephen Hawking say that AI poses an existential threat to humanity. Whether or not that's true, the mainstreaming of tightly coupled, complex autonomous systems raises pressing questions now. Who bears responsibility for what an autonomous system does? When and how should governments seek to regulate their uses? This reading group will explore questions like these in two areas: self-driving cars and risk scores in the justice system.

    1.1 “Minds, Brains, and Programs” by John Searle (Behavioral and Brain Sciences, 1980).
      Notes:
      Searle presents a thought experiment reflecting on fundamental semantic and philosophical differences between human minds and computational processes. Is it theoretically possible for a computer to “think” like a human being?
    1.2 “The Chinese Room Argument” by David Cole (Stanford Encyclopedia of Philosophy, Updated 2014).
      Read Sections 1, 3, Introduction to 4, 4.1-4.1.1, 5.1-5.3, Conclusion. The remainder is optional.
      Notes:
      An entry from the Stanford Encyclopedia of Philosophy outlining the Chinese Room argument and several significant critiques it has faced. What are the key issues at stake in the debate surrounding the Chinese Room argument?
    1.3 “Thinking Machines: The Search for Artificial Intelligence” by Jacob Roberts (Chemical Heritage Foundation, 2016).
      Notes:
      This piece outlines key milestones and ideas in the development of AI technology. How have public perceptions of AI development diverged from its realities? Have such perceptions changed over time?
    2.1 “Whose Life Should Your Car Save?” by Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan (New York Times, 2016).
      Notes:
      The authors introduce challenging ethical questions about how autonomous vehicles should handle unavoidable accidents with potentially lethal consequences. Should autonomous vehicles prioritize the safety of their passengers over that of pedestrians and passengers in other vehicles?
    2.3 “The Numbers Don’t Lie: Self-Driving Cars Are Getting Good” by Alex Davies (Wired, 2017).
      Notes:
      Davies surveys performance data showing how several autonomous vehicle programs are improving. How should we monitor the performance of autonomous vehicles during development and deployment?
    2.4 “Who’s Responsible When a Self-Driving Car Crashes?” by Corinne Iozzio (Scientific American, 2016).
      Notes:
      This article outlines liability questions that arise when an autonomous vehicle causes an accident. Is the manufacturer responsible for damages inflicted by its products throughout their lifespan?
    2.5 “How Drive.ai is Mastering Autonomous Driving With Deep Learning” by Evan Ackerman (IEEE Spectrum, 2017).
      Notes:
      Ackerman explores the challenges and promises involved in developing heavily AI-dependent autonomous vehicle platforms. What level of “black box” opacity should we accept in an autonomous vehicle’s decision-making process?
    2.6 “Tesla’s Self-Driving System Cleared in Deadly Crash” by Neal E. Boudette (New York Times, 2017).
      Notes:
      This article describes the outcome of an investigation into Tesla’s Autopilot mode following a deadly collision in May 2016. When a human driver has contributed to an autonomous vehicle system’s behavior, how should liability be distributed between the human and the system?
    2.7 “Securing the Future of Driverless Cars” by Darrell West (Brookings, 2016).
      Notes:
      This report details some prospective benefits and challenges of autonomous vehicle deployment with a strong focus on regulatory action and the creation of standards. How could the rise of autonomous vehicles be disruptive to established legal, social, and economic institutions?
    2.8 “Autonomous Vehicles | Self-Driving Vehicles Enacted Legislation” (National Conference of State Legislatures, 2017).
      Notes:
      This source provides an overview of enacted state legislation governing autonomous vehicle systems. How could discrepancies among states’ regulations complicate the deployment of autonomous vehicles? How should manufacturers handle differing legislation across jurisdictions?
    3.1 "Sent to Prison by a Software Program's Secret Algorithms" by Adam Liptak (New York Times, 2017).
      Notes:
      This article provides an overview of the major events and issues surrounding the use of a sentencing algorithm in the case of Wisconsin man Eric Loomis. Do you agree with the article’s claim that “There are good reasons to use data to ensure uniformity in sentencing”?
    3.2 “How We Analyzed the COMPAS Recidivism Algorithm” by Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin (ProPublica, 2016).
      Notes:
      A data-driven investigative analysis of the COMPAS algorithm, used in the Loomis case and elsewhere to predict whether a defendant is likely to commit another offense. How can technically engaged public activists shed light on the problems and failures of deployed AI technology? (A minimal sketch of this kind of error-rate analysis appears after this section's readings.)
    3.3 State of Wisconsin v. Eric Loomis
      Notes:
      State of Wisconsin v. Eric L. Loomis is a 2016 case heard by the Supreme Court of Wisconsin. The opinion reflects on the use and limitations of algorithms as applied to sentencing practices. Can algorithms trained on data from a broad swath of the population help inform suitably individualized sentencing?
    3.4 [OPTIONAL] “Fairness Through Awareness” by Cynthia Dwork et al. (ITCS, 2012).
      Notes:
      This paper articulates a framework for classifying individuals in as fair and unbiased a manner as possible. What, fundamentally, is fairness? Does the paper succeed in making fairness explicable in formal, quantitative terms? (The paper's central definition is sketched below.)
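For the optional Dwork et al. paper (3.4), the core formal device is compact enough to state here. What follows is a simplified restatement of the paper's Lipschitz fairness condition; the notation follows the paper, but this paraphrase and the total-variation example are ours, not a quotation.

```latex
% Simplified restatement of the Lipschitz ("fairness through awareness")
% condition from Dwork et al. (ITCS, 2012).
% M : V -> Delta(A) maps each individual to a distribution over outcomes A;
% d is a task-specific similarity metric on individuals;
% D is a distance on distributions (e.g., total variation).
\[
  D\bigl(M(x),\, M(y)\bigr) \;\le\; d(x, y)
  \qquad \text{for all individuals } x, y \in V.
\]
```

Informally: individuals who are similar with respect to the task must receive statistically similar outcomes. The hard normative questions are thereby pushed into the choice of the similarity metric d, which is one focus of the reading-group discussion.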
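As a concrete illustration of the kind of analysis ProPublica describes in reading 3.2, here is a minimal sketch of a group-wise error-rate comparison for a risk score. The column names (decile_score, two_year_recid, race), the file name, and the threshold of 5 are illustrative assumptions for this sketch, not ProPublica's actual code or data schema.

```python
# Illustrative sketch (not ProPublica's actual code) of a group-wise
# error-rate analysis of a recidivism risk score.
import pandas as pd

def group_error_rates(df: pd.DataFrame, group_col: str = "race") -> pd.DataFrame:
    """Compute false positive/negative rates of a risk score per group."""
    # Assumption: treat a decile score of 5 or higher as a "high risk" label.
    df = df.assign(predicted_high_risk=df["decile_score"] >= 5)
    rows = []
    for group, g in df.groupby(group_col):
        negatives = g[g["two_year_recid"] == 0]  # did not reoffend
        positives = g[g["two_year_recid"] == 1]  # did reoffend
        rows.append({
            group_col: group,
            # FPR: non-reoffenders wrongly labeled high risk
            "false_positive_rate": negatives["predicted_high_risk"].mean(),
            # FNR: reoffenders wrongly labeled low risk
            "false_negative_rate": (~positives["predicted_high_risk"]).mean(),
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    scores = pd.read_csv("compas-scores.csv")  # hypothetical file name
    print(group_error_rates(scores))
```

Comparing false positive and false negative rates across groups is one way such an investigation can surface disparate error rates even when overall accuracy looks similar across those groups.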
    4.1 "Our Fear of Artificial Intelligence" by Paul Ford (MIT Technology Review, February 11, 2015).
      Notes:
      This piece provides a good overview of the central arguments of Nick Bostrom’s book, Superintelligence, and explains why many others share his alarm about AI. The author suggests that we may be able to avoid harmful consequences if we design AI to respect human interests and values.
    4.2 "Can We Build AI Without Losing Control Over It?" by Sam Harris (TED Talk, June 2016).
      Notes:
      Sam Harris echoes Nick Bostrom’s concerns about AI and focuses on how little concern AI proponents and the general public show about its implications for the future of human civilization. According to Harris, the development of superintelligent machines is inevitable, so we should think now about how to build superintelligent AI that respects humans and shares our interests. Given what we’ve read and discussed, do you think superintelligent AI is inevitable? How should we mitigate the risks that Harris identifies?
    4.3 "Why Zuckerberg and Musk Are Fighting About the Robot Future" by Ian Bogost (The Atlantic, July 27, 2017).
      Notes:
      Bogost argues that the recent back-and-forth between Mark Zuckerberg and Elon Musk about whether we should be worried or excited about AI is fueled mainly by their respective business interests. Does this perspective change your opinion of ongoing debates about the future of AI?
    4.4 "A Blueprint for Coexistence with Artificial Intelligence" by Kai-Fu Lee (Wired, July 12, 2017).
      Notes:
      This article offers a contrasting perspective, claiming that our quality of life will improve if we can find a way to work with machines. The author acknowledges that machines will displace many human workers but maintains that there will always be a need for humans because of our capacity to love and connect emotionally with one another. How can machines work with humans rather than against them, and how can we best address concerns about AI’s impact on the job market?
    4.5 "How I Learned to Stop Worrying and Love A.I." by Robert Burton (The New York Times, September 21, 2015).
      Notes:
      Burton argues that machine intelligence stems primarily from computing power: machines may outwit humans when it comes to quantifiable data, but they will always lack emotional intelligence. Will machines eventually develop emotional intelligence, and how would that affect our society?

Samantha Bates, Research Associate, Harvard Law School, Berkman Center