Elon Musk, Bill Gates, and Stephen Hawking have all said that AI poses an existential threat to humanity. Whether or not that's true, the mainstreaming of tightly coupled, complex autonomous systems raises pressing questions now. Who bears responsibility for what an autonomous system does? When and how should governments seek to regulate its use? This reading group will explore questions like these in two areas: self-driving cars and risk scores in the justice system.
Notes: Searle presents a thought experiment reflecting on fundamental semantic and philosophical differences between human minds and computational processes. Is it theoretically possible for a computer to “think” like a human being?
Read Sections 1, 3, Introduction to 4, 4.1-4.1.1, 5.1-5.3, Conclusion. The remainder is optional.
Notes: An entry from the Stanford Encyclopedia of Philosophy outlining the Chinese Room argument and several significant critiques it has faced. What are the key issues at stake in the debate surrounding the Chinese Room argument?
Notes: This piece outlines key milestones and ideas in the development of AI technology. How have public perceptions of AI development diverged from its realities? Have such perceptions changed over time?
Notes: The authors introduce challenging ethical questions related to how autonomous vehicles should handle unavoidable accidents with potentially lethal consequences. Should autonomous vehicles prioritize the safety of their passengers relative to that of pedestrians and passengers in other vehicles?
Notes: This article outlines a number of liability-related questions that are relevant to autonomous vehicles that cause an accident. Is an autonomous vehicle manufacturer responsible for damages inflicted by its products throughout their lifespan?
Notes: Ackerman explores the challenges and promises involved in developing heavily AI-dependent autonomous vehicle platforms. What level of “black box” opacity should we accept in an autonomous vehicle’s decision-making process?
Notes: This article describes the outcome of an investigation into Tesla’s Autopilot mode following a deadly collision in May 2016. In instances where human drivers have contributed to the behavior of autonomous vehicle systems, how should liability be distributed between the human and the system?
Notes: This report details some prospective benefits and challenges of autonomous vehicle deployment with a strong focus on regulatory action and the creation of standards. How could the rise of autonomous vehicles be disruptive to established legal, social, and economic institutions?
Notes: This source provides an overview of enacted legislation governing autonomous vehicle systems. How could discrepancies between states’ regulations on autonomous vehicles pose a challenge to their deployment? How should autonomous vehicle manufacturers handle different legislation across jurisdictions?
Notes: This article provides an overview of the major events and issues surrounding the use of a sentencing algorithm in the case of Wisconsin man Eric Loomis. Do you agree with the article’s claim that “There are good reasons to use data to ensure uniformity in sentencing”?
Notes: A data-driven investigative analysis of the COMPAS algorithm used in the Loomis case and elsewhere to determine whether a criminal is likely to commit another offense. How can technically engaged public activists shed light on the problems and failures of deployed AI technology?
Notes: State of Wisconsin v. Eric L. Loomis is a 2016 case heard by the Supreme Court of Wisconsin. The opinion reflects on the use and limitations of algorithms as applied to sentencing practices. Can algorithms trained on data from a broad swath of the population help inform suitably individualized sentencing?
Notes: This paper articulates a framework for classifying individuals in as fair and unbiased a manner as possible. What, fundamentally, is fairness? Does the paper succeed in making fairness explicable in formal, quantitative terms?
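To make the paper's question concrete: one way fairness can be stated in formal, quantitative terms is as a parity condition on a classifier's outputs across groups. A minimal sketch of "demographic parity" on toy data (the function name and data below are illustrative, not taken from the assigned paper, which develops its own, more individualized framework):

```python
# Illustrative only: one common quantitative fairness criterion,
# demographic parity, checked on a toy classifier's predictions.

def rate(preds, mask):
    """Fraction of positive (1) predictions within the subgroup
    selected by the boolean mask."""
    sub = [p for p, m in zip(preds, mask) if m]
    return sum(sub) / len(sub)

# Toy data: predicted labels (1 = flagged high risk) and a binary
# group-membership attribute for eight individuals.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
group = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = group A, 0 = group B

# Demographic parity asks that positive-prediction rates match
# across groups; the gap measures how far we are from that ideal.
rate_a = rate(preds, [g == 1 for g in group])  # 3/4 = 0.75
rate_b = rate(preds, [g == 0 for g in group])  # 1/4 = 0.25
parity_gap = abs(rate_a - rate_b)

print(f"P(pred=1 | A) = {rate_a:.2f}")
print(f"P(pred=1 | B) = {rate_b:.2f}")
print(f"parity gap    = {parity_gap:.2f}")
```

Whether a single number like this gap actually captures "fairness," or whether individual-level criteria are needed, is exactly what the paper puts at issue.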
Notes: This piece provides a good overview of the central arguments of Nick Bostrom’s book, Superintelligence, and explains why many others share his alarm about AI. The author suggests that we may be able to avoid harmful consequences if we design AI to respect human interests and values.
Notes: Sam Harris echoes Nick Bostrom’s concerns about AI and focuses on the lack of concern expressed by AI supporters and the general public about AI and the future of human civilization. According to Harris, the development of superintelligent machines is inevitable. He proposes that we think about how to build superintelligent AI that respects humans and shares our interests. Given what we’ve read and discussed, do you think superintelligent AI is inevitable? How should we mitigate the risks that Harris identifies?
Notes: Bogost points out that the recent back and forth between Mark Zuckerberg and Elon Musk about whether we should be worried or excited about AI is mainly fueled by Zuckerberg’s and Musk’s business interests. Does this perspective change your opinion about ongoing debates about the future of AI?
Notes: This article offers the opposite perspective and claims that our quality of life will improve if we can find a way to work with machines. The author acknowledges that machines will displace many human workers, but maintains that there will always be a need for humans because of our capacity to love and connect emotionally with one another. How can machines work with humans rather than against them and how can we best address concerns about the impact of AI on the job market?
Notes: The author points out that machine intelligence stems primarily from computing power. Machines may be able to outwit humans when it comes to quantifiable data, but they will always lack emotional intelligence. Will machines eventually develop emotional intelligence, and if so, how would that affect our society?