 

SEPT 29, 2020, 12pm EST

How would you feel about losing your job to a machine? How about an automated tsunami alert system that fails?

How Humans Judge Machines compares people’s reactions to human and machine actions across dozens of experiments, revealing when and why humans are biased in favor of or against machines.


How Humans Judge Machines is a peer-reviewed book comparing people’s reactions to human and machine actions. Through dozens of experiments, it brings us closer to understanding when people judge humans and machines differently, and why.

Advance Praise for How Humans Judge Machines


Profound and farsighted

—NICHOLAS CHRISTAKIS, Yale University

Fascinating, provocative and growing more important each time another AI system goes live.

— ERIK BRYNJOLFSSON, Stanford Digital Economy Lab

A must-read for everybody who wishes to understand the future of AI in our society.

—DARON ACEMOGLU, MIT

Evidence—not mere conjecture or anecdote—on human moral judgments.

— PAUL ROMER, NYU, 2018 Nobel Prize in Economics

Fascinating, deeply provocative and highly relevant for the mid-21st century

—EDWARD GLAESER, Harvard University

A must read

—SENDHIL MULLAINATHAN, University of Chicago

A visual and intellectual tour de force!

—ALBERT-LASZLO BARABASI, Northeastern University

A framework to consider when and under what circumstances we are biased against or in favor of machines

—MARYANN FELDMAN, UNC

Indispensable to any scholar studying the psychological aspects of AI ethics

—IYAD RAHWAN, Max Planck Institute

An invaluable guide to making wise choices.

— ANDREW MCAFEE, MIT

Shines important light on this vital yet poorly understood issue.

—STEVEN PINKER, Harvard University

 

A look inside

Experiments

Are there conditions in which we judge machines unfairly? Is our judgment of machines affected by the moral dimensions of a scenario? Is our judgment of machines correlated with demographic factors, such as education or gender? Through dozens of experiments, Hidalgo and colleagues explore and unpack when and why we judge machines differently than humans.

80+ scenarios

How Humans Judge Machines unpacks our biases through experiments that make you think about your gut reactions. It is an invitation to revisit human righteousness in a world where we don't yet understand how we judge machines.

Moral dimensions

Scenarios are decomposed along the five dimensions of the moral foundations theory of psychology (harm, fairness, authority, loyalty, and purity). We find that people tend to see the actions of machines as more harmful and immoral in scenarios involving physical harm. In contrast, people tend to judge humans more harshly in scenarios involving unfairness.

Authors

STATEMENT OF CONTRIBUTIONS:
Written by: César A. Hidalgo
Data Analysis by: César A. Hidalgo
Data Collection: Diana Orghian, Filipa de Almeida
Experiment Design: César A. Hidalgo, Diana Orghian, Filipa de Almeida
Scenarios: César A. Hidalgo, Diana Orghian, Filipa de Almeida, Jordi Albo-Canals, Natalia Martin

 
 
 

César A. Hidalgo

César A. Hidalgo leads the Collective Learning group at the Artificial and Natural Intelligence Toulouse Institute (ANITI) at the University of Toulouse. He also holds appointments at the University of Manchester and Harvard University. Hidalgo has authored dozens of peer-reviewed publications and two books: Why Information Grows and The Atlas of Economic Complexity. Between 2010 and 2019, Hidalgo directed the Collective Learning group at MIT.

 
 
 

Diana Orghian

Diana Orghian is a social psychologist and researcher at the University of Lisbon. Her research focuses on understanding how people perceive others’ personalities and appearance, and on how people evaluate artificial agents. Orghian holds a PhD in Psychology from the University of Lisbon (2017), was a visiting scholar at New York University (2015) and Harvard University (2016), and was a Postdoctoral Fellow at MIT (2017-2019).

 
 
 
 

Filipa De Almeida

Filipa de Almeida is a psychologist with a PhD in Social Cognition from the University of Lisbon. During her studies, she was a visiting student at University College London and at MIT. She recently completed a postdoctoral fellowship in consumer psychology at the University of Lisbon. She is currently an invited assistant professor at Católica Lisbon School of Business and Economics, where she teaches at the undergraduate, master’s, and Executive Master’s levels and supervises master’s students working on social power and decision making. Her research focuses on the effects of social power and on the perception of artificially intelligent agents.

 
 

Jordi Albo-Canals

Jordi Albo-Canals is a researcher and entrepreneur with expertise in human factors engineering, embodied social agents, AI-based education, health technologies, and cloud computing. He has done research and technology transfer at La Salle University, Tufts University, and NTT DATA Corporation, as well as through a collaboration between the Barcelona Children's Hospital Foundation and Lighthouse-DIG, LLC, the company he co-founded.

 
 
 
 

Natalia Martin

Natalia Martin is a marketer and publicist with a master’s degree from the ESIC Business & Marketing School. Martin has experience working for international companies such as NTT DATA, where she was part of the BXH team at the MIT Media Lab. Martin works on understanding how new marketing tools can influence users’ perceptions of services and products.

 
 
 
 
 

Is artificial intelligence good or bad? Are there conditions in which we are too harsh, or too lenient, in the way we judge machines?

Available online starting September 29, 2020