AI Decision-Making & Human Overtrust

My Role: Research leader and project manager
Institute: Reichman University, Media Innovation Lab
Project Overview
Led independent research thesis investigating the ethical implications of AI-assisted decision-making. Designed and executed controlled experiments to measure human overtrust in AI recommendations, providing evidence-based insights for responsible AI product design.
My Role
As Research Lead & Thesis Author, I owned the complete research lifecycle from literature review through experimental design, data collection, and analysis. Developed original methodology to quantify AI overtrust behaviors in cognitive task scenarios.
Background and Motivation
The inspiration for this project came from an essay by Bill Joy, co-founder of Sun Microsystems, published in Wired magazine in 2000 under the title "Why the Future Doesn't Need Us." Joy argued that intelligent robots would soon replace humans, particularly in intellectual and social spheres: "[...] the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones."

Our experience with technology and computers has biased us in its favor. We tend to perceive technology as neutral, objective, and often superior to us in many domains. As a result, we are inclined to accept recommendations provided by technology without evaluating their accuracy or limitations. We call this "overtrust," or, as Parasuraman and Riley (1997) define it, misuse, which in their words refers to "over-reliance on automation, which can result in failures of monitoring or decision biases."
Research Method
To test my hypothesis, we set up an experiment in which an "intelligent" robot joined participants in solving a cognitive task. The task consisted of 10 Raven's Progressive Matrices items: puzzles in which you identify the missing piece of a 3x3 grid based on logic and patterns. We told participants that the robot was in training and learning from their responses. During the test, however, the robot often gave its answer first, so participants could see its choice before responding themselves. In reality, the robot's answers were pre-programmed, some correct and others intentionally incorrect. We assumed that seeing the robot's answers would lead participants to align their own answers with the robot's.

We divided participants into two groups and asked them to solve the task alongside the robot. Participants were seated in front of a computer screen where the task items were presented, with the robot placed on the table beside them. In the experimental condition, the robot's movements were deliberately designed to give participants the impression that it was answering the items presented on the screen, and its answers were indicated by the button it pressed for each question. In the baseline condition, the robot's movements were designed to show interest in the task, but it did not press the buttons, so its answer choices were not indicated.

We measured compliance with the robot, that is, how closely participants aligned their answers with the robot's, and reaction time, which gave us insight into the effort and motivation participants invested in solving the more difficult items accurately.
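The sketch below shows one way these measures could be computed from trial-level data. The data layout, column names, and values are illustrative assumptions only, not the study's actual analysis pipeline.

```python
import pandas as pd

# Hypothetical trial-level data: one row per participant per item.
# Column names and values are illustrative only.
trials = pd.DataFrame({
    "participant":        [1, 1, 2, 2],
    "condition":          ["experimental", "experimental", "baseline", "baseline"],
    "item_difficulty":    ["easy", "hard", "easy", "hard"],
    "participant_answer": [3, 5, 3, 7],
    "robot_answer":       [3, 5, None, None],  # visible only in the experimental condition
    "correct_answer":     [3, 8, 3, 8],
    "reaction_time_ms":   [4200, 9100, 4500, 14800],
})

# Accuracy in both conditions; compliance is meaningful only where
# the robot's answer was visible (experimental condition).
trials["correct"] = trials["participant_answer"] == trials["correct_answer"]
trials["complied"] = trials["participant_answer"] == trials["robot_answer"]

# Summarize accuracy, compliance rate, and mean reaction time
# per condition and item difficulty.
summary = (
    trials.groupby(["condition", "item_difficulty"])
          .agg(accuracy=("correct", "mean"),
               compliance_rate=("complied", "mean"),
               mean_rt_ms=("reaction_time_ms", "mean"))
          .reset_index()
)
print(summary)
```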
Results
Analyzing the findings, we found evidence of overtrust in the robot's capabilities, along with an impact on participants' self-efficacy.

Compliance with the robot was evident. When participants could see the robot's answers, a notable trend emerged: they tended to align their responses with the robot's, specifically on the more challenging items. Since we had programmed the robot to be wrong on those items, relying on it without properly evaluating its capabilities led participants to answer incorrectly more often than in the baseline condition.

In addition, participants' reaction times in the experimental condition decreased significantly, again only on the more difficult items. This suggests that when the task became more challenging, participants invested less effort in solving it independently when they could rely on the robot.
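As a point of reference for how a difference of this kind can be tested for significance, here is a minimal sketch of an independent-samples comparison. The numbers are invented placeholders and the choice of test is an assumption for illustration, not the analysis reported in the thesis.

```python
import numpy as np
from scipy import stats

# Invented placeholder values: per-participant mean reaction times (seconds)
# on the hard items, one value per participant in each condition.
rt_experimental = np.array([6.1, 5.4, 7.0, 5.8, 6.3])
rt_baseline = np.array([9.2, 8.7, 10.1, 9.5, 8.9])

# Independent-samples t-test: is the reaction-time drop in the
# experimental condition larger than chance would suggest?
t_stat, p_value = stats.ttest_ind(rt_experimental, rt_baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```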
Full Paper
To read the full paper, click the button below and download it.
Download Paper