1 + 1 > 2? Information, Humans, and Machines

Published Online: https://doi.org/10.1287/isre.2023.0305

With the explosive growth of data and the rapid rise of artificial intelligence and automated work processes, humans, whether as employees or as consumers, inevitably find themselves in increasingly close collaboration with machines. Problems in human–machine interaction arise as a consequence, not to mention the dilemmas posed by the need to manage information on ever-expanding scales. Given machines' general superiority over humans in this latter respect, it is essential to explore whether human–machine collaboration is valuable and, if so, why. Recent studies propose diverse explanation methods to open machine learning algorithms' "black boxes," aiming to reduce human resistance and enhance efficiency. However, the findings of this literature stream have been inconclusive. Little is known about the influential factors involved or the rationale behind their impacts on human decision processes. We aimed to tackle these issues in the present study by examining the joint impact of information complexity and machine explanations. Specifically, we cooperated with a large Asian microloan company to conduct a two-stage field experiment. Drawing upon dual-process theories of reasoning, which propose the conditions necessary to arouse humans' active information processing and systematic thinking, we tailored the treatments to vary the level of information complexity, the presence of collaboration, and the availability of machine explanations. We observed that, with large volumes of information alone or with machine explanations alone, human evaluators could not add value to the final collaborative outcomes. However, when extensive information was coupled with machine explanations, human involvement significantly reduced the default rate compared with machine-only decisions. We disentangled the underlying mechanisms with three-step empirical analyses.
We reveal that the coexistence of large-scale information and machine explanations can invoke humans' active rethinking, which, in turn, shrinks gender gaps and increases prediction accuracy. In particular, we demonstrate that humans can spontaneously associate newly emerging features with others that had been overlooked but had the potential to correct the machine's mistakes. This capacity not only underscores the necessity of human–machine collaboration but also offers insights for system design. Our experimental and empirical findings provide nontrivial theoretical and practical implications.

History: Param Vir Singh, Senior Editor; Tianshu Sun, Associate Editor.

Funding: This work was supported by the National Natural Science Foundation of China [Grant 72272003].

Supplemental Material: The online appendix is available at https://doi.org/10.1287/isre.2023.0305.
