The Influence of Robots' Fairness on Humans' Reward-Punishment Behaviors and Trust in Human-Robot Cooperative Teams.
- Authors
- Type
- Published Article
- Journal
- Human Factors
- Publication Date
- Apr 01, 2024
- Volume
- 66
- Issue
- 4
- Pages
- 1103–1117
- Identifiers
- DOI: 10.1177/00187208221133272
- PMID: 36218282
- Source
- Medline
- Keywords
- Language
- English
- License
- Unknown
Abstract
Based on social exchange theory, this study investigates the effects of robots' fairness and social status on humans' reward-punishment behaviors and trust in human-robot interactions. In human-robot teamwork, robots may show fair behaviors, dedication (altruistic unfair behaviors), or selfishness (self-interested unfair behaviors), yet few studies have examined how these behaviors affect teamwork. This study adopts a 3 (robot fairness, the independent variable: self-interested unfair, fair, and altruistic unfair behaviors) × 3 (robot social status, the moderator variable: superior, peer, and subordinate) experimental design. Each participant completed the experimental task together with a robot via a computer. Across the robots' different social statuses, the more altruistic the robot's behavior, the more reward behaviors and the fewer punishment behaviors humans showed, and the higher their human-robot trust. A robot's higher social status weakens the influence of its fairness on humans' punishment behaviors. Human-robot trust increases humans' reward behaviors and decreases their punishment behaviors, and humans' reward-punishment behaviors in turn increase repaired human-robot trust. In sum, robots' fairness has a significant impact on humans' reward-punishment behaviors and trust; robots' social status moderates the effect of their fairness on humans' punishment behaviors; and humans' reward-punishment behaviors and trust interact with each other. These findings help clarify the interaction mechanisms of human-robot teams and can support the management of and cooperation within such teams through appropriate adjustment of robots' fairness and social status.