Digital Edition (Free):
Desktop Edition (PDF)
Mobile Edition (PDF)
By Chapter (PDF)
Print Edition (MIT Press)
Order from MIT Press
Order from Amazon
Video Edition (Free)
By Chapters
How Humans Judge Machines
Executive Summary
A short summary of the main findings.
Chapter 0: Introduction
Why should we worry about the way in which people judge machines? Together with chapter 1, the introduction motivates the study and puts the experiments in the context of some of the relevant literature.
Chapter 1: The Ethics of Artificial Minds
This chapter introduces basic concepts used throughout the book: moral status and agency, intent, and the moral dimensions of moral foundations theory from social psychology. It also introduces the basic experimental technique used in subsequent chapters.
Chapter 2: Unpacking the Ethics of AI
This chapter explores scenarios involving emergency decisions with uncertain outcomes, algorithmic creativity, self-driving cars, and patriotism. It provides the first examples of people judging humans and machines differently, exhibiting biases both in favor of and against machines.
Chapter 3: Judged by Machines (Algorithmic Bias)
This chapter presents experiments related to algorithmic bias in human resources, university admissions, and police scenarios. It also discusses basic ideas about fairness and theoretical work on the mathematical foundations of fairer algorithms.
Chapter 4: In the Eye of the Machine (Privacy)
This chapter explores experiments related to privacy using scenarios involving camera systems and personal data. It also discusses basic concepts in privacy, such as anonymity and privacy-preserving data collection methods.
Chapter 5: Working Machines (Labor Displacement)
This chapter compares people’s reactions to labor displacement attributed to technology with their reactions to labor displacement attributed to humans. It also discusses recent literature on labor displacement, automation, and labor precarization.
Chapter 6: Moral Functions
This chapter uses statistics to model the behavior observed across experiments and to uncover general principles governing differences in the way people judge humans and machines. It also explores the average effects of demographics (e.g., education, gender) on people’s judgments of humans and machines.
Chapter 7: Liable Machines
This chapter concludes by connecting the work presented before with ideas from science fiction, mathematics, and law.
Appendix and Additional Scenarios
The statistical appendix contains additional details about the data collection methodology, plus dozens of additional scenarios that were not discussed in the main text of the book.
Advance Praise for How Humans Judge Machines
“Profound and farsighted”
—NICHOLAS CHRISTAKIS, Yale University
“Fascinating, provocative and growing more important each time another AI system goes live.”
—ERIK BRYNJOLFSSON, Stanford Digital Economy Lab
“A must-read for everybody who wishes to understand the future of AI in our society.”
—DARON ACEMOGLU, MIT
“Evidence—not mere conjecture or anecdote—on human moral judgments.”
—PAUL ROMER, NYU, 2018 Nobel Prize in Economics
“Fascinating, deeply provocative and highly relevant for the mid-21st century”
—EDWARD GLAESER, Harvard University
“A must read”
—SENDHIL MULLAINATHAN, University of Chicago
“A visual and intellectual tour de force!”
—ALBERT-LÁSZLÓ BARABÁSI, Northeastern University
“A framework to consider when and under what circumstances we are biased against or in favor of machines”
—MARYANN FELDMAN, University of North Carolina
“Indispensable to any scholar studying the psychological aspects of AI ethics”
—IYAD RAHWAN, Max Planck Institute
“An invaluable guide to making wise choices.”
—ANDREW MCAFEE, MIT
“Shines important light on this vital yet poorly understood issue.”
—STEVEN PINKER, Harvard University
A look inside
Experiments
Are there conditions in which we judge machines unfairly? Is our judgment of machines affected by the moral dimensions of a scenario? Do our attitudes towards machines correlate with demographic factors, such as education or gender? Through dozens of experiments, Hidalgo and his colleagues explore and unpack when and why we judge machines differently than humans.
80+ scenarios
How Humans Judge Machines exposes and investigates our biases through experiments that will make you think about the way we perceive machines and humans.
Moral dimensions
Scenarios are analyzed along the five dimensions outlined in the moral foundations theory of psychology (harm, fairness, authority, loyalty, and purity). Using this framework we find that people tend to see the actions of machines as more harmful and immoral in scenarios involving physical harm. Conversely, we find people tend to judge humans more harshly in scenarios involving unfairness.
Hidalgo and colleagues use hard science to take on these pressing technological questions. Using randomized experiments, they create counterfactuals and statistical models to explain how people judge A.I. Through original research, this book brings us one step closer to understanding the ethical consequences of artificial intelligence.
Authors
STATEMENT OF CONTRIBUTIONS:
Written by: César A. Hidalgo
Data Analysis by: César A. Hidalgo
Data Collection: Diana Orghian, Filipa de Almeida
Experiment Design: César A. Hidalgo, Diana Orghian, Filipa de Almeida
Scenarios: César A. Hidalgo, Diana Orghian, Filipa de Almeida, Jordi Albo-Canals, Natalia Martin
César A. Hidalgo
César A. Hidalgo leads the Collective Learning group at the Artificial and Natural Intelligence Toulouse Institute (ANITI) at the University of Toulouse. He also holds appointments at the University of Manchester and Harvard University. Hidalgo has authored dozens of peer-reviewed publications and three books: Why Information Grows, The Atlas of Economic Complexity, and How Humans Judge Machines. Between 2010 and 2019, Hidalgo led the Collective Learning group at MIT. He holds a Ph.D. in physics from the University of Notre Dame.
Diana Orghian
Diana Orghian is a social psychologist and researcher at the University of Lisbon. Her research focuses on understanding how people perceive the personality and appearance of others, and on how people evaluate artificial agents. Orghian holds a Ph.D. in psychology from the University of Lisbon (2017), was a visiting scholar at New York University (2015) and Harvard (2016), and was a postdoctoral fellow at MIT (2017–2019).
Filipa De Almeida
Filipa de Almeida is a psychologist with a Ph.D. in social cognition from the University of Lisbon. During her studies, she was a visiting student at University College London and at MIT. She recently completed a postdoctoral fellowship in consumer psychology at the University of Lisbon. Currently, she is an invited assistant professor at Católica Lisbon School of Business and Economics, where she teaches at the undergraduate, master’s, and executive master’s levels and supervises master’s students working on social power and decision making. Her research focuses on the effects of social power and on the perception of artificially intelligent agents.
Jordi Albo-Canals
Jordi Albo-Canals is a researcher and entrepreneur with expertise in human factors engineering, embodied social agents, AI-based education, health technologies, and cloud computing. He has worked on research and technology transfer at La Salle University, Tufts University, NTT DATA Corporation, the Barcelona Children’s Hospital Foundation, and Lighthouse-DIG, LLC, a company he co-founded.
Natalia Martin
Natalia Martin is a marketer and publicist with a master’s degree from the ESIC Business & Marketing School. Martin has experience working for international companies such as NTT DATA, where she was part of the BXH team at the MIT Media Lab. She works on understanding how new marketing tools can influence users’ perceptions of services and products.